DETAILED ACTION
This Office action is in response to applicant’s remarks filed on August 21, 2025 in application 18/476,006.
Claims 1-25 are presented for examination. Claims 1 and 2 are amended.
The rejection under 35 U.S.C. 112 is withdrawn in view of the amendments.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed August 21, 2025 have been fully considered but they are not persuasive.
Applicant argues that Beedu et al. do not explicitly teach the amended limitation of “establishing communication between the production cluster and the one or more remote clusters to exchange state information, wherein the state information is associated with one or more of cluster backup plans, new backups or consistent sets.”
Examiner respectfully disagrees. The state information covers backup plans, new backups, or consistent sets, and Beedu et al. teach a set of backup snapshots (para. 42) that reads on the new backups as amended. Exchanging state information as claimed is equated to exchanging the backup data.
In regard to the rejection under 35 U.S.C. 101, the newly amended limitation of “the state information is associated with one or more of cluster backup plans, new backups or consistent sets” does not change the analysis, because the state information, to one of ordinary skill in the art, could be interpreted as the backup data.
For these reasons, the rejections are maintained.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1, 3-11, 13-17, and 19-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 6-8, 10-11, 14-16, and 18 of U.S. Patent No. 11,880,282. Although the claims at issue are not identical, they are not patentably distinct from each other because the instant claims are broader in scope but otherwise recite similar limitations.
The following chart compares the instant claims of application 18/476,006 with the claims of U.S. Patent No. 11,880,282 (application 17/476,393):
Instant claim 1:
a production cluster with one or more applications running on the production cluster;
one or more remote clusters configured to continuously restore the one or more applications running on the production cluster from a backup of each of the one or more applications; and
an event target configured to establish communication between the production cluster by exchanging state information between the production cluster and the remote clusters;
wherein: the state information is associated with one or more of cluster backup plans, new backups or consistent sets, each of the remote clusters and the production cluster comprises a syncher service and a watcher service executing thereon, and the backup is generated by a backup service on the production cluster based on a backup plan associated with each of the one or more applications.
Patent claim 1:
a first cluster with containerized applications
continuous restore process for a containerized application executing on a first cluster;
persistent volume … creating a data mover pod executing on the first cluster for moving at least some of the application data from the new persistent volume to a backup target based on the backup plan schedule;
data mover pod for moving at least some of the application data from the new persistent volume to a backup target based on the backup plan schedule and data synch pod for creating a persistent volume on the second cluster
Instant claim 3:
the backup plan includes one or more of scheduling policies, retention policies, or continuous restore policies,
a schedule policy defines a backup schedule including a frequency at which backups are generated and a type of the backup, the type including a full backup or an incremental backup,
a retention policy defines a number of backups to retain, and
a continuous restore policy defines a set of remote clusters on which an application is restored and a number of consistent sets to maintain on each remote cluster of the set of remote clusters.
Patent claim 1:
generating a backup plan comprising a backup plan schedule
creating a data mover pod based on the backup plan schedule
Patent claims 10-11:
increasing or decreasing the number of persistent volumes using the data synch pod based on a change to the backup plan
Patent claim 1:
continuous restore process for a containerized application executing on a first cluster; recovering the containerized application at the second cluster based on at least some of the application data moved to the persistent volume
Instant claim 4:
wherein the watcher service on a particular remote cluster specified in the backup plan is executed to identify that the particular remote cluster participates in continuous restore operations of the application
Patent claim 1:
data synch pod for creating a persistent volume on the second cluster
Instant claim 5:
the event target is a backup storage, the backup storage is at least one of a simple storage service (S3) compatible storage or a network file system (NFS) storage, and
the event target is configured to be accessible to the production cluster and the one or more remote clusters such that the production cluster can communicate with the one or more remote clusters through the event target without a direct network connection between the production cluster and the one or more remote clusters
Patent claim 15:
backup target comprises a simple storage service (S3) backup target
Patent claim 1:
wherein at least one of the persistent volume on the first cluster or the persistent volume on the second cluster is located remotely from the backup target
Instant claim 6:
wherein the syncher service on the production cluster is executed to copy the backup plan to the event target
Patent claim 1:
creating a data mover pod executing on the first cluster for moving at least some of the application data from the new persistent volume to a backup target based on the backup plan schedule
Instant claim 7:
identify, by the backup service on the production cluster, one or more persistent volumes containing application data and application metadata;
copy, by the backup service on the production cluster, the application data and application metadata from the one or more persistent volumes to a backup target based on a backup plan schedule included in the backup plan associated with the application;
identify, by the watcher service on a remote cluster of the one or more remote clusters, a new backup on the backup target;
create, by a continuous restore controller of the remote cluster, a consistent set including a set of persistent volumes;
copy, by the continuous restore controller of the remote cluster, data of the new backup from the backup target to the consistent set;
record, by the syncher service on the remote cluster, the created consistent set on the event target;
identify, by the watcher service on the production cluster, a record of the consistent set corresponding to the new backup; and
update, by the watcher service on the production cluster, a production cluster backup record based on the identified record.
Patent claim 1:
identifying a persistent volume containing application data of the containerized application in the first cluster
creating a new persistent volume from a snapshot of the identified persistent volume
creating a data mover pod executing on the first cluster for moving at least some of the application data from the new persistent volume to a backup target based on the backup plan schedule
creating a persistent volume on the second cluster
Instant claim 8:
monitor changes to application configuration and a number of the persistent volumes; and
enable a next backup to be updated to reflect the changes.
Patent claim 6:
Wherein the backup plan comprises a recovery point objective that is a last incremental backup in the backup plan
Instant claim 9:
wherein the remote cluster is further configured to perform a continuous restore service to create snapshots of the persistent volumes of the consistent set for each backup to save storage footprint.
Patent claim 1:
snapshot of the identified persistent volume
Instant claim 10:
wherein the remote cluster is further configured to perform a continuous restore service to delete old consistent sets to maintain a required number of consistent sets, wherein the required number is specified in the backup plan.
Patent claim 1:
Responsive to receiving the backup plan and deleting the data mover pod, the new persistent volume, and the snapshot
Patent claims 10-11:
increasing or decreasing the number of persistent volumes using the data synch pod based on a change to the backup plan
Instant claim 11:
wherein: the production cluster is further configured to generate and present one or more graphical user interfaces for a user to modify the backup plan associated with the application on the production cluster to reflect changes in user requirements, and modifying the backup plan comprises modifying at least one of the backup plan schedule, a number of consistent sets to maintain on the remote cluster, a number of backups to maintain on the production cluster, or a number of remote clusters to maintain consistent sets.
Patent claim 8:
Generating the change to the backup plan is initiated using a graphical user interface
Patent claim 7:
Generating a change to the backup plan
Patent claim 6:
The backup plan comprises a recovery point objective that is the last incremental backup in the backup plan
Instant claim 13:
wherein the production cluster is further configured to generate and present a graphical user interface for a user to choose a consistent set on a remote cluster of the one or more remote clusters to restore an application of the one or more applications, wherein restoring the application comprises:
recreating one or more application pods from the backup of the application based on one or more application templates, an application template including at least one of secrets, application configurations, or pod specifications; and
customizing the one or more application templates to fit a remote cluster environment, wherein the customizing comprises changing one or more of load balancer settings, public internet protocol (IP) addresses, domain names, or storage classes.
Patent claim 8:
Generating the change to the backup plan is initiated using a graphical user interface
Patent claim 6:
The backup plan comprises a recovery point objective that is the last incremental backup in the backup plan
Patent claim 1:
creating a data mover pod executing on the first cluster for moving at least some of the application data from the new persistent volume to a backup target based on the backup plan schedule
Patent claim 2:
Application template comprises at least one of a release, an operator, a label or a namespace.
Patent claim 14:
Application template comprises at least a common resource descriptor (CRD), an IP address, a network configuration, a virtual machine, a number of virtual machines, an operating system, and a software version number.
Instant claim 14:
wherein the production cluster is further configured to generate and present a graphical user interface for a user to choose a consistent set on a remote cluster of the one or more remote clusters to test the restore of an application of the one or more applications, wherein testing the restore of the application comprises:
recreating one or more application pods from the backup of the application based on one or more application templates, an application template including at least one of secrets, application configurations, or pod specifications;
customizing the one or more application templates to fit a remote cluster environment, wherein the customizing comprises changing one or more of load balancer settings, public IP addresses, domain names, or storage classes;
shutting down the one or more application pods; and
deleting the one or more application pods while leaving data on a consistent set intact.
Patent claim 8:
Generating the change to the backup plan is initiated using a graphical user interface
Patent claim 1:
Responsive to receiving the backup plan and deleting the data mover pod, the new persistent volume, and the snapshot
Patent claim 2:
Application template comprises at least one of a release, an operator, a label or a namespace.
Patent claim 14:
Application template comprises at least a common resource descriptor (CRD), an IP address, a network configuration, a virtual machine, a number of virtual machines, an operating system, and a software version number.
Patent claim 5:
Executing the application template to generate an application skeleton, subsequently shutting down the application skeleton, and rebooting the application skeleton to restore the containerized application.
Instant claim 15:
wherein the one or more applications are container-based, virtual machine based, or a combination thereof.
Patent claim 1:
Continuous restore process for a containerized application
Instant claim 16:
wherein the event target comprises a plurality of event targets participating in continuously restoring the one or more applications.
Patent claim 1:
Continuous restore process for a containerized application
Instant claim 17:
wherein the production and the one or more remote clusters reside in on-premise data centers, public clouds, or a combination of the foregoing.
Patent claim 1:
Executing on a first and second cluster … persistent volume containing the application data
Patent claim 16:
Wherein the first cluster comprises resources of a cloud service, and the second cluster comprises resources of a different cloud service
Instant claim 19:
wherein the production cluster communicates with the one or more remote clusters to generate one or more graphical user interfaces to display data related to continuous restore operations.
Patent claim 18:
Presenting in the graphical user interface a status of the data synch pod
Patent claim 19:
Wherein the status comprises a latest data synchronization point of the at least one selected remote site cluster
Instant claim 20:
wherein the one or more graphical user interfaces are generated by providing one or more options for a user to create, modify, or delete the backup plan.
Patent claim 8:
Generating the change to the backup plan is initiated using a graphical user interface
"A later patent claim is not patentably distinct from an earlier patent claim if the later claim is obvious over, or anticipated by, the earlier claim. In re Longi, 759 F.2d at 896, 225 USPQ at 651 (affirming a holding of obviousness-type double patenting because the claims at issue were obvious over claims in four prior art patents); In re Berg, 140 F.3d at 1437, 46 USPQ2d at 1233 (Fed. Cir. 1998) (affirming a holding of obviousness type double patenting where a patent application claim to a genus is anticipated by a patent claim to a species within that genus). "ELI LILLY AND COMPANY v BARR LABORATORIES, INC., United States Court of Appeals for the Federal Circuit, ON PETITION FOR REHEARING EN BANC (DECIDED: May 30, 2001).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-9, 11-13, 15-23, and 25 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Beedu et al. (US 2022/0350492).
In regard to claim 1, Beedu et al. teach a system for continuously restoring applications comprising:
a production cluster with one or more applications running on the production cluster (first HCI cluster, para. 74);
one or more remote clusters configured to continuously restore the one or more applications running on the production cluster from a backup of each of the one or more applications (second HCI cluster, para. 74); and
an event target configured to establish communication (an application in a container cluster 110.sub.1 is configured to be interfaced with hyperconverged computing infrastructure (HCI) storage volumes 134, para. 41) between the production cluster and the one or more remote clusters by exchanging state information between the production cluster and the remote cluster (backup snapshots to a backup storage system corresponding to a backup schedule, fig. 1a, para. 40-48),
wherein: the state information is associated with one or more of cluster backup plans, new backups or consistent sets (the application running in the container cluster within a source system can transmit backup snapshots to a backup storage system 111 of a target system, para. 41), each of the remote clusters and the production cluster comprises a syncher service and a watcher service executing thereon, and the backup is generated by a backup service on the production cluster based on a backup plan associated with each of the one or more applications (POD running on a container cluster 110.sub.2, para. 52-56).
In regard to claim 2, Beedu et al. teach the system of claim 1, wherein: the production cluster and the one or more remote clusters are configured relative to applications running on each of the production cluster and the one or more remote clusters, and the production cluster and the remote clusters are configured by designating a particular cluster as a production cluster for an application and as a remote cluster for another application (HCI clusters 112.sub.1 and 112.sub.2 are configured as an active/active cluster, para. 59-60).
In regard to claim 3, Beedu et al. teach the system of claim 1, wherein:
the backup plan includes one or more of scheduling policies, retention policies, or continuous restore policies (changed data can be sent on an ongoing basis to storage cluster 154.sub.TARGET, para. 66-68),
a schedule policy defines a backup schedule including a frequency at which backups are generated (backup schedule, para. 47-48) and a type of the backup, the type including a full backup or an incremental backup (base state or incremental state, para. 98),
a retention policy defines a number of backups to retain (creating, transmitting and storing backups of the HCI storage volumes is based on policies such as may be defined in a service level agreement (SLA), para. 43), and
a continuous restore policy defines a set of remote clusters on which an application is restored and a number of consistent sets to maintain on each remote cluster of the set of remote clusters (synchronized in an active/active configuration, para. 59).
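By way of illustration only, the backup plan recited in claim 3 may be pictured as a configuration object grouping a schedule policy, a retention policy, and a continuous restore policy. The following sketch is a hypothetical rendering for explanatory purposes; none of the names or values are drawn from the claims or from Beedu et al.

```python
# Illustrative only: a hypothetical representation of a backup plan with a
# schedule policy, a retention policy, and a continuous restore policy.
from dataclasses import dataclass, field

@dataclass
class SchedulePolicy:
    frequency_hours: int                 # how often backups are generated
    backup_type: str = "incremental"     # "full" or "incremental"

@dataclass
class RetentionPolicy:
    backups_to_retain: int               # number of backups to keep

@dataclass
class ContinuousRestorePolicy:
    remote_clusters: list = field(default_factory=list)  # clusters to restore on
    consistent_sets_per_cluster: int = 1                 # sets kept per cluster

@dataclass
class BackupPlan:
    application: str
    schedule: SchedulePolicy
    retention: RetentionPolicy
    continuous_restore: ContinuousRestorePolicy

plan = BackupPlan(
    application="orders-service",
    schedule=SchedulePolicy(frequency_hours=6, backup_type="incremental"),
    retention=RetentionPolicy(backups_to_retain=14),
    continuous_restore=ContinuousRestorePolicy(
        remote_clusters=["remote-east", "remote-west"],
        consistent_sets_per_cluster=3,
    ),
)
print(plan.continuous_restore.remote_clusters)
```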
In regard to claim 4, Beedu et al. teach the system of claim 3, wherein the watcher service on a particular remote cluster specified in the backup plan is executed to identify that the particular remote cluster participates in continuous restore operations of the application (orchestrator that operates on the application layer serves to move pod 1 and/or pod2 from node1 and/or node2 respectively to node3 and/or node4, para. 58).
In regard to claim 5, Beedu et al. teach the system of claim 1, wherein:
the event target is a backup storage, the backup storage is at least one of a simple storage service (S3) compatible storage or a network file system (NFS) storage (multiple tiers of storage may include storage that is accessible over communication link such as cloud or network storage, or local storage that can include any combinations of SSDs and HDDs and/or RAPMs and/or hybrid disk drives, para. 143), and
the event target is configured to be accessible to the production cluster and the one or more remote clusters such that the production cluster can communicate with the one or more remote clusters through the event target without a direct network connection between the production cluster and the one or more remote clusters (multiple executable containers that share access to a virtual disk can be assembled into a pod where a pod provide sharing mechanism as well as isolation mechanism, para. 140).
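By way of illustration only, communication through an event target without a direct network connection may be pictured as two clusters reading and writing records in a shared storage location. The sketch below uses a local directory as a hypothetical stand-in for an S3 bucket or NFS export; all paths and record fields are invented for explanation.

```python
# Illustrative only: clusters exchanging state records through a shared
# storage location, with no direct network connection between clusters.
import json
from pathlib import Path

EVENT_TARGET = Path("/tmp/event-target")  # assumed shared, mutually accessible

def publish_state(cluster, record):
    """Write a state record to the event target for other clusters to read."""
    EVENT_TARGET.mkdir(parents=True, exist_ok=True)
    path = EVENT_TARGET / f"{cluster}-{record['seq']}.json"
    path.write_text(json.dumps(record))

def read_states(own_cluster):
    """Read every state record published by clusters other than our own."""
    return [
        json.loads(p.read_text())
        for p in sorted(EVENT_TARGET.glob("*.json"))
        if not p.name.startswith(own_cluster)
    ]

# The production cluster announces a new backup; a remote cluster reads the
# record without ever opening a connection to the production cluster.
publish_state("production", {"seq": 1, "event": "new-backup", "backup_id": "b-001"})
print(read_states(own_cluster="remote-east"))
```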
In regard to claim 6, Beedu et al. teach the system of claim 1, wherein the syncher service on the production cluster is executed to copy the backup plan to the event target (application controllers can monitor various operational states of any number of applications … monitor ongoing changes to the configuration and/or data state of the application, para. 65-66).
In regard to claim 7, Beedu et al. teach the system of claim 1, wherein the backup is a full backup of an application of the one or more applications generated by the backup service (recovery from a base state, para. 98), and wherein the system is configured to:
identify, by the backup service on the production cluster, one or more persistent volumes containing application data and application metadata (organize data items to facilitate a bring-up of a container-based application or cluster to a particular state, para. 100);
copy, by the backup service on the production cluster, the application data and application metadata from the one or more persistent volumes to a backup target based on a backup plan schedule included in the backup plan associated with the application (ongoing readiness by periodically taking snapshots, para. 44, containerized component is subject to a corresponding backup schedule, para. 47);
identify, by the watcher service on a remote cluster of the one or more remote clusters, a new backup on the backup target (orchestrator that operates on the application layer serves to move pod 1 and/or pod2 from node1 and/or node2 respectively to node3 and/or node4, para. 58);
create, by a continuous restore controller of the remote cluster, a consistent set including a set of persistent volumes; copy, by the continuous restore controller of the remote cluster, data of the new backup from the backup target to the consistent set; record, by the syncher service on the remote cluster, the created consistent set on the event target (backup snapshots to a backup storage system corresponding to a backup schedule, fig. 1a, para. 40-48);
identify, by the watcher service on the production cluster, a record of the consistent set corresponding to the new backup; and update, by the watcher service on the production cluster, a production cluster backup record based on the identified record (bundler can be deployed to relate a particular first set of snapshots to a second set of snapshots and can relate the various snapshots in an application-consistent manner and so as to facilitate a recovery of an application and its data at a restore location, para. 50). It is noted that the watcher service is equated to a program at the remote site that accomplishes the claimed tasks.
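By way of illustration only, the claimed full-backup flow may be pictured as the following compressed, single-process sketch, in which in-memory dictionaries stand in for the backup target and event target. Every function and field name is a hypothetical stand-in for the claimed services, not an assertion about the implementation of Beedu et al.

```python
# Illustrative only: a simulation of the claimed backup service -> watcher ->
# continuous restore controller -> syncher -> production watcher flow.
backup_target = {}              # backup_id -> backed-up bytes
event_target = []               # shared records visible to all clusters
production_backup_records = []  # the production cluster's view

def backup_service_copy(app, volumes):
    """Production cluster: copy application data/metadata to the backup target."""
    backup_id = f"{app}-backup-{len(backup_target)}"
    backup_target[backup_id] = b"".join(volumes.values())
    return backup_id

def remote_watcher_and_controller(known):
    """Remote cluster: detect a new backup, build a consistent set, record it."""
    for backup_id, data in backup_target.items():
        if backup_id not in known:
            consistent_set = {"backup_id": backup_id, "volumes": [data]}
            event_target.append({"event": "consistent-set", "backup_id": backup_id})
            known.add(backup_id)
            return consistent_set
    return None

def production_watcher():
    """Production cluster: update its backup records from event-target records."""
    for record in event_target:
        if record["event"] == "consistent-set" and record not in production_backup_records:
            production_backup_records.append(record)

backup_service_copy("orders-service", {"pv-1": b"data", "pv-2": b"meta"})
remote_watcher_and_controller(known=set())
production_watcher()
print(production_backup_records)
```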
In regard to claim 8, Beedu et al. teach the system of claim 7, wherein the watcher service on each remote cluster is further configured to: monitor changes to application configuration and a number of the persistent volumes; and enable a next backup to be updated to reflect the changes (monitor ongoing changes to the configuration of the application … the monitored and captured data states are stored in a data structure that is subsequently used to relate the container-based application’s operational states to its corresponding HCI data, para. 66-68).
In regard to claim 9, Beedu et al. teach the system of claim 7, wherein the remote cluster is further configured to perform a continuous restore service to create snapshots of the persistent volumes of the consistent set for each backup to save storage footprint (application running in the container cluster within a source system can transmit backup snapshots to a backup storage system, para. 41-57).
In regard to claim 11, Beedu et al. teach the system of claim 7, wherein: the production cluster is further configured to generate and present one or more graphical user interfaces for a user to modify the backup plan associated with the application on the production cluster to reflect changes in user requirements (pod’s configuration may be defined and modified periodically, para. 46), and modifying the backup plan comprises modifying at least one of the backup plan schedule, a number of consistent sets to maintain on the remote cluster, a number of backups to maintain on the production cluster, or a number of remote clusters to maintain consistent sets (corresponding backup schedule, para. 47).
In regard to claim 12, Beedu et al. teach the system of claim 1, wherein the backup is an incremental backup of an application of the one or more applications (incremental states, para. 98).
In regard to claim 13, Beedu et al. teach the system of claim 1, wherein the production cluster is further configured to generate and present a graphical user interface for a user to choose a consistent set on a remote cluster of the one or more remote clusters to restore an application of the one or more applications (pods can be managed manually through application programming interfaces (APIs), para. 46), wherein restoring the application comprises:
recreating one or more application pods from the backup of the application based on one or more application templates, an application template including at least one of secrets, application configurations, or pod specifications (recovery point data and bring a particular combination of snapshots into the target infrastructure environment, para. 53); and
customizing the one or more application templates to fit a remote cluster environment, wherein the customizing comprises changing one or more of load balancer settings, public internet protocol (IP) addresses, domain names, or storage classes (deployment scheme can be codified as a chart (Helm Chart) that can refer to a templates directory that holds template files, para. 71).
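By way of illustration only, customizing an application template to fit a remote cluster environment may be pictured as overriding environment-specific values, in the general spirit of the Helm-style templates cited above. The sketch below is generic and hypothetical; it is not Helm and is not drawn from the claims or from Beedu et al.

```python
# Illustrative only: overriding load-balancer, IP, domain, and storage-class
# settings so a template fits a remote cluster environment.
def customize_template(template, overrides):
    """Return a copy of the template with remote-cluster-specific overrides."""
    customized = dict(template)
    customized.update(overrides)
    return customized

base_template = {
    "load_balancer": "prod-lb",
    "public_ip": "203.0.113.10",          # documentation-range example address
    "domain": "orders.prod.example.com",
    "storage_class": "fast-ssd",
}
remote_template = customize_template(base_template, {
    "load_balancer": "remote-lb",
    "public_ip": "203.0.113.50",
    "domain": "orders.dr.example.com",
    "storage_class": "standard",
})
print(remote_template)
```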
In regard to claim 15, Beedu et al. teach the system of claim 1, wherein the one or more applications are container-based, virtual machine based, or a combination thereof (containerized components or any set or groupings of containerized components can be subsumed within the bounds of a virtual machine, para. 47).
In regard to claim 16, Beedu et al. teach the system of claim 1, wherein the event target comprises a plurality of event targets participating in continuously restoring the one or more applications (any one or more application data snapshots of the backup storage system, para. 53).
In regard to claim 17, Beedu et al. teach the system of claim 1, wherein the production and the one or more remote clusters reside in on-premise data centers, public clouds, or a combination of the foregoing (cloud infrastructure, para. 109).
In regard to claim 18, Beedu et al. teach the system of claim 1, wherein each of the remote clusters and the production cluster is further configured to: obtain a current state of backups and consistent sets after at least one of the watcher services or syncher services is offline for a period of time; and perform an update of each based on the current state (specific timing of when application metadata is captured is based on the timing of when an underlying application is in a quiescent state, para. 91).
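By way of illustration only, the catch-up behavior recited in claim 18 may be pictured as a reconciliation step in which a cluster whose watcher or syncher service has been offline re-reads the event target and merges any records it missed. All names in the sketch are hypothetical.

```python
# Illustrative only: reconciling a stale local view of backups and consistent
# sets against the current state recorded on the event target.
def reconcile(local_records, event_target_records):
    """Return the local view updated with records created while offline."""
    seen = {r["backup_id"] for r in local_records}
    missed = [r for r in event_target_records if r["backup_id"] not in seen]
    return local_records + missed

current = reconcile(
    local_records=[{"backup_id": "b-001"}],
    event_target_records=[{"backup_id": "b-001"}, {"backup_id": "b-002"}],
)
print(current)  # now includes b-002, created while the service was offline
```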
In regard to claim 19, Beedu et al. teach the system of claim 1, wherein the production cluster communicates with the one or more remote clusters to generate one or more graphical user interfaces to display data related to continuous restore operations (a point-in-time status such that the application metadata can be used to reconstruct a backwards-in-time recovery point, para. 70).
In regard to claim 20, Beedu et al. teach the system of claim 19, wherein the one or more graphical user interfaces are generated by providing one or more options for a user to create, modify, or delete the backup plan (at a later moment in time, the states of the computing system can be reconstituted into a running system on a particular target infrastructure, selected as an option that is determined at the later moment in time, para. 104).
In regard to claim 21, Beedu et al. teach the system of claim 19, wherein the one or more graphical user interfaces are generated by displaying a topology diagram, the topology diagram including all the production and remote clusters that participate in continuous restoration of the one or more applications, applications in each cluster, and connectivity to remote clusters on which one or more backups are restored (an application controller can be configured to monitor ongoing changes to the configuration of the application and/or to monitor ongoing changes to the data state of the application. An application controller can be aware of all changes made to the application’s configuration and/or deployment topology, para. 66).
In regard to claim 22, Beedu et al. teach the system of claim 19, wherein the one or more graphical user interfaces are generated by displaying a health status of one or more services running on the production and remote clusters, wherein the one or more services includes at least the syncher service and the watcher service (such operational states and any other data can be sent repeatedly as time progresses and/or as the operational states of the constituents of the container cluster change, para. 67).
In regard to claim 23, Beedu et al. teach the system of claim 19, wherein the one or more graphical user interfaces are generated by displaying (i) a backup policy associated with each of the one or more applications (policy-driven storage model used to implement a disaster recovery capability for container-based applications, para. 89) and (ii) a lag between a number of actual consistent sets and a desired number of consistent sets (the specific timing of when application metadata is captured can be based, at least in part on a policy, para. 91).
In regard to claim 25, Beedu et al. teach the system of claim 19, wherein the production cluster and the one or more remote clusters are configured to: generate and display performance metrics associated with creating backups and consistent sets; and provide predictive analytics based on the performance metrics (as an option, one or more variations of application metadata payload structure or any aspect thereof may be implemented, para. 95, an organization of data that can be used to bring a containerized application to a desired state based on a captured state description and knowledge of the application topology, para. 96).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 10, 14, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Beedu et al. (US 2022/0350492) in view of Bai et al. (US 2015/0378837).
In regard to claim 10, Beedu et al. do not explicitly teach, but Bai et al. teach, the system of claim 7, wherein the remote cluster is further configured to perform a continuous restore service to delete old consistent sets to maintain a required number of consistent sets, wherein the required number is specified in the backup plan (the container information including the primary copy and/or the backup copies is preferably deleted, marked for deletion or otherwise rendered obsolete after the successful generation of the container file, para. 106).
It would have been obvious to modify the system of Beedu et al. by adding the file repair of Bai et al. A person of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make the modification because it would aid in freeing up storage space (para. 89).
In regard to claim 14, Beedu et al. teach the system of claim 1, wherein the production cluster is further configured to generate and present a graphical user interface for a user to choose a consistent set on a remote cluster of the one or more remote clusters to test the restore of an application of the one or more applications, wherein testing the restore of the application comprises:
recreating one or more application pods from the backup of the application based on one or more application templates, an application template including at least one of secrets, application configurations, or pod specifications (POD is a grouping of containerized components, a pod may be assigned a unique IP address, within a particular pod, all containers can reference each other on localhost, para. 46);
customizing the one or more application templates to fit a remote cluster environment, wherein the customizing comprises changing one or more of load balancer settings, public IP addresses, domain names, or storage classes (the bundler is able to configure a container cluster to host a running POD with its application data storage that has been restored from any one or more application data snapshots of the backup storage system, para. 53).
Beedu et al. do not explicitly teach, but Bai et al. teach, shutting down the one or more application pods; and deleting the one or more application pods while leaving data on a consistent set intact (the container information including the primary copy and/or the backup copies is preferably deleted, marked for deletion or otherwise rendered obsolete after the successful generation of the container file, para. 106).
It would have been obvious to modify the system of Beedu et al. by adding the file repair of Bai et al. A person of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make the modification because it would aid in freeing up storage space (para. 89).
In regard to claim 24, Beedu et al. do not explicitly teach, but Bai et al. teach, the system of claim 19, wherein the one or more graphical user interfaces are generated by generating and displaying a cost associated with the backup plan, the cost including at least an operation cost in an interval and an amount of compute resources used for the continuous restore (the storage media containing the container information may be searched according to a predetermined or random order based on the reliability, update status, location, size, cost and other factors, para. 104).
It would have been obvious to modify the system of Beedu et al. by adding the file repair of Bai et al. A person of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make the modification because it would aid in selecting a preferred copy to be used (para. 104-105).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See form PTO-892.
Rane et al. (US 12,367,402) intelligent backup of containerized environment
Mariappan et al. (US 11,991,077) data interfaces for containers deployed to compute nodes
Kulkarni et al. (US 11,822,442) active-standby pods in a container environment
Balcha (US 11,586,507) cloud-based backup
************
Kanso et al. (US 12,124,924) restoring backup of application states
Bharadwaj et al. (US 11,609,825) backup copy on backup node, production cluster
Kumar et al. (US 2022/0308849) production cluster and application recovery
Lew et al. (US 11,200,207) primary production cluster and backup cluster
Dalal et al. (US 10,884,876) backup cluster to restore records to the production cluster
Mallik et al. (US 10,664,3587) backup platform, container format
Kathpal et al. (US 10,613,944) shards to restore production cluster
Wang et al. (US 2020/0042343) pod, container backup, checkpoint
Datta (US 2019/0236051) source and target cluster, snapshots
Iwasaki et al. (US 2016/0154706) backup copy on disaster recovery cluster
Garai et al. (US 2013/0326265) restore multi-tier application at a production cluster
Lim (US 6,526,521) cluster framework and failover operation for application
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LOAN TRUONG whose telephone number is 408-918-7552. The examiner can normally be reached 10 AM-6 PM PST, Monday-Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thomas Ashish, can be reached at 571-272-0631. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Loan L.T. Truong/Primary Examiner, Art Unit 2114 Loan.truong@uspto.gov