DETAILED ACTION
1. Claims 1-20 are presented for examination.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
6. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shetty et al. (US 2024/0256391), hereinafter Shetty, in view of CHATTERJEE et al. (US 2022/0012134), hereinafter CHATTERJEE.
In claim 1, Shetty discloses “A method of automated orchestration of cyber protection for a set of storage volumes using intermittent consistency, comprising:
defining a cascaded remote data forwarding facility, the cascaded remote data forwarding facility including a first leg on which asynchronous remote data forwarding is used to mirror data of a set of storage volumes from a first data center to a second data center, the cascaded remote data forwarding facility also including a second leg on which adaptive copy data forwarding is used to mirror data of the set of storage volumes from the second data center to a cyber recovery vault ([0056] the data center 140 is a mirrored copy of the data center 130 to provide non-disruptive operations at all times even in the presence of failures including, but not limited to, network disconnection between the data centers 130 and 140 and the mediator 120, which can also be located at a data center. The cluster 155 of data center 150 can have an asynchronous relationship or be a vault retention of the cluster 135 of the data center 130 [0063] the data center 240 is a mirrored copy of the data center 230 to provide non-disruptive operations at all times even in the presence of failures including, but not limited to, network disconnection between the data centers 230 and 240 and the mediator 220, which can also be a data center [0067] A data replication relationship between the primary and secondary storage sites guarantees non-disruptiveness due to allowing I/O operations to be handled with the secondary mirror copy of data. However, there are timing windows between the primary storage site being non-operational and the secondary mirror copy being ready to serve I/O operations where a second failure can lead to disruption. For example, a controller failure in a cluster hosting the secondary mirror copy of the data. The automatic unplanned failover feature of the present design guarantees non-disruptive operations (e.g., operations of business enterprise applications, operations of software application) even in the presence of these multiple failures);
transmitting data on the first leg of the cascaded remote data forwarding facility from the first data center to the second data center ([0070] the cluster 320 has a data copy 331 that is a mirrored copy of the data copy 330 to provide non-disruptive operations at all times even in the presence of multiple failures including, but not limited to, network disconnection between the data centers 302 and 304 and the mediator 360);
transmitting the data on the second leg of the cascaded remote data forwarding facility from the second data center to the cyber recovery vault ([0070] The cluster 355 may have an asynchronous replication relationship with cluster 310 or a mirror vault policy. The cluster 355 includes a configuration database 358, multiple storage nodes 356a-b each having a respective mediator agent 359a-b, and an Application Programming Interface (API) 357);
monitoring, on the second data center, a number of invalid tracks of the data owed by the second data center to the cyber recovery vault on the second leg of the remote data forwarding facility ([0064] The system 202 can utilize communications 290 and 291 to synchronize a mirrored copy of data of the data center 240 with a primary copy of the data of the data center 230. Either of the communications 290 and 291 between the data centers 230 and 240 may have a failure 295. In a similar manner, a communication 292 between data center 230 and mediator 220 may have a failure 296 while a communication 293 between the data center 240 and the mediator 220 may have a failure 297);
determining a consistent state of data on a set of storage volumes at the cyber recovery vault ([0068] each cluster can have up to 5 consistency groups with each consistency group having up to 12 volumes. The system 202 provides an automatic unplanned failover feature at a consistency group granularity. The unplanned failover feature allows switching storage access from a primary copy of the data center 230 to a mirror copy of the data center 240 or vice versa [0074] The mediator agents (e.g., 313, 314, 323, 324, 359a, 359b) are configured on each node within a cluster. The system 300 can perform appropriate actions based on event processing of the mediator agents. The mediator agent(s) processes events that are generated at a lower level (e.g., volume level, node level) and generates an output for a consistency group level. In one example, the nodes 311, 312, 321, and 322 form a consistency group. The mediator agent provides services for various events (e.g., simultaneous events, conflicting events) generated in a business data replication relationship between each cluster);
in response to the determined consistent state of the data, creating a snapset of the storage volumes in the cyber recovery vault ([0102] At operation 608, the computer-implemented method transfers the snapshot copy including a change in data to the third storage node based on an asynchronous mirror policy. At operation 610, the computer-implemented method intercepts the snapshot create operation on the primary storage site and synchronously replicates the snapshot create operation to transfer the snapshot copy to the second storage node to provide a common snapshot between the second storage node and the third storage node to avoid a baseline data transfer from the second storage node to the third storage node if a failover (e.g., an automatic unplanned failover (AUFO), a planned failover) occurs from the primary storage site to the secondary storage site. The AUFO may occur due to a failure event of the primary storage site); and
after creation of the snapset of the storage volumes in the cyber recovery vault, resuming transmission of data on the first leg of the cascaded remote data forwarding facility ([0103] At operation 612, the computer-implemented method determines whether the snapshot copy is successfully transferred to the second storage node. If so, then a previous snapshot copy of the one or more replicated storage objects of the second storage node is removed from the second storage node at operation 614. If not, then a previous snapshot copy of the one or more replicated storage objects of the second storage node is maintained in the second storage node at operation 616 when the snapshot copy is not successfully transferred to the second storage node [0104] At operation 620 of FIG. 6B, the computer-implemented method initiates, with the second update schedule, a new snapshot create operation with a new snapshot copy for the one or more storage objects of the first storage node when a sync engine is not active due to the synchronous replication relationship being Out-of-Sync)”.
Shetty does not appear to explicitly disclose the following limitation; however, CHATTERJEE discloses “in response to a determination that the number of invalid tracks owed by the second data center to the cyber recovery vault on the second leg of the remote data forwarding facility is less than or equal to a maximum threshold value, suspending transmission of data on the first leg of the remote data forwarding facility while continuing to transmit data on the second leg of the remote data forwarding facility ([0202] system 100 may also determine whether a metric or other indication satisfies particular storage criteria sufficient to perform an action. For example, a storage policy or other definition might indicate that a storage manager 140 should initiate a particular action if a storage metric or other indication drops below or otherwise fails to satisfy specified criteria such as a threshold of data protection. In some embodiments, risk factors may be quantified into certain measurable service or risk levels. For example, certain applications and associated data may be considered to be more important relative to other data and services. Financial compliance data, for example, may be of greater importance than marketing materials, etc. Network administrators may assign priority values or “weights” to certain data and/or applications corresponding to the relative importance. The level of compliance of secondary copy operations specified for these applications may also be assigned a certain value. Thus, the health, impact, and overall importance of a service may be determined, such as by measuring the compliance value and calculating the product of the priority value and the compliance value to determine the “service level” and comparing it to certain operational thresholds to determine whether it is acceptable)”.
Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shetty and CHATTERJEE; the suggestion/motivation for doing so would have been to provide a technological solution for protecting data using an illustrative cloud-based air-gapped data storage management system that is specially equipped to obtain access to other (source) systems' backup copies, make replicas of those external copies within the illustrative (destination) system, parse key proprietary metadata found in the replica copies, and integrate the replica copies into the destination system as though they were natively created there (CHATTERJEE [0004]).
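For the applicant's convenience, the orchestration sequence recited in claim 1 (and the drain-to-zero refinement of claim 2) may be illustrated by the following sketch. This is not taken from either reference; all function and action names are hypothetical, and the sketch assumes the invalid-track count is sampled as a sequence of readings.

```python
# Illustrative sketch (hypothetical names) of the claimed orchestration:
# monitor the invalid tracks owed on the second leg, suspend the first leg
# once the backlog is at or below a maximum threshold (the second leg keeps
# transmitting), drain to zero invalid tracks (the consistent state), create
# a snapset in the cyber recovery vault, then resume the first leg.

def orchestrate_snapset(invalid_track_readings, max_threshold):
    """Return the sequence of orchestration actions taken, given successive
    readings of the invalid-track count owed on the second leg."""
    actions = []
    readings = iter(invalid_track_readings)
    # Monitor until the backlog owed to the vault is at or below threshold.
    for count in readings:
        if count <= max_threshold:
            actions.append("suspend_first_leg")  # second leg continues
            break
    # Keep monitoring until zero invalid tracks: consistent state achieved.
    for count in readings:
        if count == 0:
            actions.append("create_snapset_in_vault")
            actions.append("resume_first_leg")
            break
    return actions
```

For example, readings of [12, 7, 3, 1, 0] with a threshold of 4 would suspend the first leg at the reading of 3, then snapset and resume once the backlog drains to 0.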
In claim 2, Shetty teaches
The method of claim 1, wherein determining the consistent state of data on the set of storage volumes at the cyber recovery vault comprises: monitoring, on the second data center, the number of invalid tracks of the data owed by the second data center to the cyber recovery vault on the second leg of the remote data forwarding facility after suspending transmission of data on the first leg of the remote data forwarding facility; and in response to a determination that the number of invalid tracks owed by the second data center to the cyber recovery vault on the second leg of the remote data forwarding facility has reached zero invalid tracks, determining that the consistent state of the data has been achieved at the cyber recovery vault ([0036] A synchronous data replication from a primary copy of data of a consistency group (CG) at a primary storage system at a first site (primary storage site) to a secondary copy of data at a secondary storage system of a second site (secondary storage site) can fail due to many reasons including inter cluster connectivity issues. These issues can occur if the secondary storage site can not differentiate between the primary storage site being down, in isolation, or just a network partition. A trigger for the automated failover is generated from a data path and if the data path is lost, can lead to disruption. For example, if the primary storage site is not operational or is isolated (e.g., network partition leading to both inter cluster connectivity and connectivity to an external Mediator are lost), then a data replication relationship (or relationship) between the primary and secondary storage sites guarantees non-disruptiveness due to allowing I/O operations to be handled with the secondary mirror copy of data of the second site).
In claim 3, CHATTERJEE teaches
The method of claim 1, wherein the maximum threshold value of the number of invalid tracks owed by the second data center to the cyber recovery vault on the second leg of the remote data forwarding facility is determined based on a timeout threshold of the first leg of the remote data forwarding facility ([0180] an HSM copy may include primary data 112 or a secondary copy 116 that exceeds a given size threshold or a given age threshold. Often, and unlike some types of archive copies, HSM data that is removed or aged from the source is replaced by a logical reference pointer or stub. The reference pointer or stub can be stored in the primary storage device 104 or other source storage device, such as a secondary storage device 108 to replace the deleted source data and to point to or otherwise indicate the new location in (another) secondary storage device 108).
In claim 4, CHATTERJEE teaches
The method of claim 3, wherein the timeout threshold of the first leg of the remote data forwarding facility is an amount of time that the first leg of the remote data forwarding facility may remain in a suspended state before being dropped ([0163] data satisfying criteria for removal (e.g., data of a threshold age or size) may be removed from source storage. The source data may be primary data 112 or a secondary copy 116, depending on the situation. As with backup copies, archive copies can be stored in a format in which the data is compressed, encrypted, deduplicated, and/or otherwise modified from the format of the original application or source copy. In addition, archive copies may be retained for relatively long periods of time (e.g., years) and, in some cases are never deleted. In certain embodiments, archive copies may be made and kept for extended periods in order to meet compliance regulations).
In claim 5, CHATTERJEE teaches
The method of claim 3, wherein the maximum threshold value of the number of invalid tracks owed by the second data center to the cyber recovery vault on the second leg of the remote data forwarding facility is further determined based on an amount of time it takes to transmit each invalid track from the second data center to the cyber recovery vault ([0202] system 100 may also determine whether a metric or other indication satisfies particular storage criteria sufficient to perform an action. For example, a storage policy or other definition might indicate that a storage manager 140 should initiate a particular action if a storage metric or other indication drops below or otherwise fails to satisfy specified criteria such as a threshold of data protection. In some embodiments, risk factors may be quantified into certain measurable service or risk levels. For example, certain applications and associated data may be considered to be more important relative to other data and services. Financial compliance data, for example, may be of greater importance than marketing materials, etc. Network administrators may assign priority values or “weights” to certain data and/or applications corresponding to the relative importance. The level of compliance of secondary copy operations specified for these applications may also be assigned a certain value. Thus, the health, impact, and overall importance of a service may be determined, such as by measuring the compliance value and calculating the product of the priority value and the compliance value to determine the “service level” and comparing it to certain operational thresholds to determine whether it is acceptable).
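For the applicant's convenience, the threshold derivation recited in claims 3-5 may be illustrated as follows. This sketch is hypothetical and not drawn from either reference: it merely expresses that, if the first leg may remain suspended only for a timeout before being dropped, the allowable invalid-track backlog is bounded by how many tracks can be drained to the vault within that window.

```python
# Hypothetical illustration of claims 3 and 5: derive the maximum
# invalid-track threshold from the first leg's suspension timeout and the
# per-track transmission time on the second leg.

def max_invalid_track_threshold(timeout_seconds, seconds_per_track):
    """Largest invalid-track backlog drainable before the suspended first
    leg would be dropped (illustrative formula only)."""
    if seconds_per_track <= 0:
        raise ValueError("seconds_per_track must be positive")
    return int(timeout_seconds // seconds_per_track)
```

For example, a 30-second suspension timeout with 0.5 seconds per track would permit a maximum backlog of 60 invalid tracks.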
In claim 6, Shetty teaches
The method of claim 1, further comprising linking the snapset of the storage volumes to a target set of devices in the cyber recovery vault ([0107] In order for the Vault Destination to continue to function with the secondary site B (e.g., Sync destination) as a new data source, a common snapshot needs to be available between the two copies and therefore, in steady state of a 3-site fan-out topology the tertiary storage site C (e.g., Async destination) and secondary site B (e.g., Sync destination) do need to maintain a common snapshot).
In claim 7, CHATTERJEE teaches
The method of claim 1, further comprising iterating the steps of monitoring the number of invalid tracks, determining that the number of invalid tracks is less than or equal to the maximum threshold value, suspending transmission of data on the first leg, determining the consistent state, creating a snapset, and resuming transmission ([0202] system 100 may also determine whether a metric or other indication satisfies particular storage criteria sufficient to perform an action. For example, a storage policy or other definition might indicate that a storage manager 140 should initiate a particular action if a storage metric or other indication drops below or otherwise fails to satisfy specified criteria such as a threshold of data protection. In some embodiments, risk factors may be quantified into certain measurable service or risk levels. For example, certain applications and associated data may be considered to be more important relative to other data and services. Financial compliance data, for example, may be of greater importance than marketing materials, etc. Network administrators may assign priority values or “weights” to certain data and/or applications corresponding to the relative importance. The level of compliance of secondary copy operations specified for these applications may also be assigned a certain value. 
Thus, the health, impact, and overall importance of a service may be determined, such as by measuring the compliance value and calculating the product of the priority value and the compliance value to determine the “service level” and comparing it to certain operational thresholds to determine whether it is acceptable [0368] At block 1104, system 360 restores data in replica copy(ies) 366 to cloud-based storage in cloud computing environment 320, e.g., to data storage resource 704 [0166] a snapshot may generally capture the directory structure of an object in primary data 112 such as a file or volume or other data set at a particular moment in time and may also preserve file attributes and contents).
In claim 8, CHATTERJEE teaches
The method of claim 7, wherein the step of iterating is initiated at a regular cadence ([0360] At block 1004, system 360 uses token 361 to perform an authorized and authenticated read of cloud-based copies 316-2 generated by source system 310. The read operation is said to be “on demand,” because it is initiated by destination system 360 according to its own preferences, which are unknown to source system 310. Illustratively, this operation occurs on a schedule, e.g., daily, but any cadence may be configured in system 360, e.g., more or less frequently, based on detecting an event, based on detecting new backup copies 316-2, user-invoked, etc.).
In claim 9, CHATTERJEE teaches
The method of claim 7, wherein the step of iterating is initiated upon closure of an airgap between the second data center and the cyber recovery vault ([0314] Data storage management (source) system 310 is analogous to system 100 and further comprises additional features such as adding supplemental metadata to backup copies to be stored in cloud computing environments such as 320. System 310 represents a source of backup copies and supplemental metadata that are later consumed by destination system 360. Like production data and computing environment 301, source system 310 may be implemented in a non-cloud data center, cloud computing environment, and/or any combination thereof, without limitation. In some embodiments, system 310 is co-resident with destination system 360 in cloud computing environment 320, but notably they are governed by distinct customer subscription accounts in order to maintain the air gap [0318] Authentication token 361 is a data structure that is maintained by destination system 360 for accessing backup copies 316 generated by another system, e.g., by system 310. Authentication token 361 enables destination system 360 to perform authorized and authenticated reads from a data repository that is maintained by source system 310, and which is otherwise inaccessible to others. Authentication token 361 is supplied to destination system 360 by an authorized user, e.g., system administrator, but is not kept in source system 310 in order to enforce the air gap between system 310 and system 360. As a result, system 310 has no knowledge of or communicative coupling with system 360. However, system 360 is authorized (using token 361) to read backup copies generated by system 310 and stored in cloud computing environment 320. Authentication token 361 is an embodiment of an authentication technology employed by illustrative system 360 to make on demand authenticated “pulls” to read and replicate backup copies 316-2. 
Other authentication technologies that do or do not comprise authentication token 361 may be used here to perform the authenticated reads so long as they are suitable to maintain the “air gap” between source system 310 and destination system 360).
In claim 10, Shetty teaches
The method of claim 1, wherein asynchronous remote data forwarding is a data mirroring mode in which each respective track of data is mirrored from the first data center to the second data center over the first leg of the cascaded remote data forwarding facility when the respective track is received at the first data center; and wherein adaptive copy data forwarding is a data replication mode configured to enable bulk copy operations to be implemented between the second data center and the cyber recovery vault over the second leg of the cascaded remote data forwarding facility ([0123] An asynchronous replication relationship (or alternatively mirror vault policy) may exist between the one or more storage objects hosted by the first storage node of the first storage cluster and one or more replicated storage objects hosted by a third storage node of a third storage cluster of the third storage site [0124] At operation 902, the computer-implemented method provides a synchronous replication relationship from one or more storage objects of the first storage node to one or more replicated storage objects of the second storage node. At operation 904, the computer-implemented method provides an asynchronous replication relationship or mirror vault policy with an asynchronous update schedule from the one or more storage objects of the first storage node to one or more replicated storage objects of the third storage node for an initial protection configuration. At operation 906, the computer-implemented method tracks, with the third storage node of the tertiary site, a state of the secondary storage site).
Claims 11-20 are essentially the same as claims 1-10 except that they recite the claimed invention as a system, and are rejected for the same reasons as applied hereinabove.
Conclusion
7. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed on the attached PTO-892 form.
Examiner’s Note: The examiner has cited particular figures and paragraphs in the references as applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. Applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or discussed by the examiner.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUAWEN A PENG, whose telephone number is (571) 270-5215. The examiner can normally be reached Monday through Friday, 9 am to 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sherief Badawi can be reached at 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HUAWEN A PENG/Primary Examiner, Art Unit 2169