Response to Amendment
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is responsive to the amendment and RCE filed on 2/20/2026. Claims 1, 4-11, and 14-20 are pending.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-11, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nipunage et al. (U.S. Patent No. 11,467,736) in view of Jean et al. (U.S. Patent Application Publication No. 2021/0342366).
Regarding claims 1 and 11, Nipunage teaches a system and method comprising (figs. 2 and 3, source server 208/303 and storage appliances 260, 270, and 370A-C. Note that, for simplicity, fig. 3 is used for the mapping; fig. 2 comprises source and storage volumes, and its capability applies to fig. 3, see col. 9, lines 1-17, "module 311 can implement the functions of dropped write detection module 211"):
a first storage appliance for receiving data writes and incrementing an associated last sequence number (LSN) X upon each data write thereto (figs. 2 and 3, Client 301, server 303 and Storage 370B, col. 9, line 58 to col. 10, line 11, Storage 370B receives data "write request" and incremented LSN "updated monotonic counter value", Nipunage);
a second storage appliance for receiving data writes and incrementing an associated LSN Y upon each data write thereto (figs. 2 and 3, Client 301, server 303 and Storage 370A, col. 9, line 58 to col. 10, line 11, Storage 370A receives data "write request" and incremented LSN "updated monotonic counter value", Nipunage); and
an external device coupled to the first storage appliance and the second storage appliance (figs. 2 and 3, source server 208/303 is coupled to storage appliances 260, 270, 370A, and 370B);
wherein the first storage appliance and the second storage appliance comprise a cluster management database with data writes being replicated on both the first storage appliance and the second storage appliance (figs. 2, 3 and 7, source server 208/303, col. 9, lines 8-36 and 47-57, data writes and updates are distributed at least across storages 370A and 370B, and col. 18, lines 44-49, "distributed database"). To the degree that Nipunage does not explicitly state "cluster management database" and "appliance", the use of such terminology and concepts is well known in the field. In one such system, Jean teaches that peer appliances comprise a cluster management database (fig. 2B, central DB 214.2 managing a plurality of storage appliances 210.2, 210.3…, par. 47, Jean). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to integrate the cluster management DB 214.2 of Jean into the Nipunage server 208/303. One would have been motivated to provide server 208/303 with direct access to shared distributed resources in storage volumes across the network.
The Nipunage/Jean integrated system comprises a plurality of primary and secondary appliances (see figs. 2B and 2C, Primary Appliance and Peer Appliances 210.1, 210.2, 210.4, pars. 49-50, Jean),
wherein one of the first storage appliance and the second storage appliance acts as a primary instance of the cluster management database and the other of the first storage appliance and the second storage appliance acts as a secondary instance of the cluster management database (figs. 2B and 2C, Management DB 214.1-214.4, pars. 46 and 49, sync replicated DBs, Jean);
upon the first storage appliance beginning to shut down, during a soft shutdown of the first storage appliance the LSN X is written to the external device (fig. 3, Server 303 module 311, col. 9, lines 26-32, storage volume 306 can be used as a data store for metadata and is used by dropped write module 311 to determine whether a dropped write has occurred; note that the metadata information analyzed is reflective of the appliances/storages; col. 11, lines 1-9, dropped write correction 313 can correct the dropped write with regard to storage 306, which confirms a dropped write occurred; see also fault tolerance, col. 9, lines 47-57, Nipunage; and fig. 2B and par. 49, software or hardware failure support indicates a soft shutdown occurred and is reflected in the system, Jean); and
wherein the first storage appliance is the primary instance and, when the first storage appliance shuts down, the second storage appliance is promoted to become the primary instance, the LSN X stored in the external device is cleared, and the second storage appliance continues to receive data writes and increment LSN Y (figs. 2-4, col. 10, lines 37-44 and col. 12, lines 12-27, the system compares counters and determines that a dropped write has not occurred, and counter values/metadata can be reset, Nipunage; and figs. 2A-2C, pars. 49-50, on hardware failure the Peer Storage becomes the primary Storage, Jean).
Regarding claims 4 and 14, Nipunage/Jean combined teach wherein, upon the second storage appliance beginning to shut down, the LSN Y is written to the external device (figs. 2 and 3, col. 10, lines 37-44, the system compares counters for the second storage and determines that a dropped write has not occurred, or no error, Nipunage).
Regarding claims 5 and 15, Nipunage/Jean combined teach wherein, when the first storage appliance is restarted, it compares its LSN X to the LSN Y stored in the external device (col. 9, lines 53-57, upon failure, data can be regenerated from other storages, Nipunage).
Regarding claims 6 and 16, Nipunage/Jean combined teach wherein, when LSN X equals LSN Y, the first storage appliance is promoted to become the primary instance (col. 10, lines 35-44, the counter values of the storages match, no error, hence the primary storage designation remains, Nipunage).
Regarding claims 7 and 17, Nipunage/Jean combined teach wherein, when the LSN X is less than the LSN Y, the first storage appliance is not promoted to become the primary instance (col. 10, lines 45-57, the metadata do not match, implying the first storage has an error; hence it is not promoted, Nipunage).
Regarding claims 8 and 18, Nipunage/Jean combined teach wherein, when the first storage appliance is restarted, it compares its LSN X to the LSN Y of the second storage appliance (col. 9, lines 54-57, if corrupt, the storage data can be regenerated, col. 10, lines 29-60, metadata counters are compared to other storages, Nipunage, and see fig. 2C, Jean).
Regarding claim 9, Nipunage/Jean combined teach the system of claim 8 wherein, when the LSN X is less than the LSN Y, the second storage appliance remains the primary instance (figs. 2 and 3, col. 10, lines 47-57, the system compares the counters of the first and second storages and determines that the second storage's counter is higher; note that in that case the same second storage would continue as primary, Nipunage).
Regarding claim 10, Nipunage/Jean combined teach the system of claim 9 wherein writes to the second storage appliance before the first storage appliance is restarted are written to the first storage appliance (figs. 2 and 3, col. 10, lines 47-60, the system compares the counters of the second to the first storage and determines to write to all storages that are running, in the form of Data and Parity Data, Nipunage).
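For clarity of the record only, the following is a minimal, illustrative sketch (in Python, using hypothetical names such as Appliance and Witness that do not appear in the references) of the examiner's understanding of the claimed LSN bookkeeping, soft-shutdown write, and promotion behavior recited in claims 1 and 4-10; it is not, and does not purport to be, an implementation disclosed by Nipunage or Jean.

```python
# Illustrative sketch only; hypothetical names, not code from Nipunage or Jean.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Appliance:
    name: str
    lsn: int = 0           # last sequence number, incremented on each data write
    is_primary: bool = False


@dataclass
class Witness:
    """External device that holds an appliance's LSN written at soft shutdown."""
    stored_lsn: Optional[int] = None


def data_write(appliance: Appliance) -> None:
    # Each data write received by an appliance increments its associated LSN.
    appliance.lsn += 1


def soft_shutdown(appliance: Appliance, witness: Witness) -> None:
    # During a soft shutdown, the shutting-down appliance's LSN is written
    # to the external (witness) device and the appliance relinquishes primary.
    witness.stored_lsn = appliance.lsn
    appliance.is_primary = False


def restart_and_compare(restarted: Appliance, peer: Appliance, witness: Witness) -> None:
    # On restart, the returning appliance compares its LSN to the peer's LSN:
    # equal LSNs -> the returning appliance is promoted to primary;
    # a lower LSN -> the peer remains primary and the returning appliance
    # receives the writes it missed before being brought back in sync.
    if restarted.lsn == peer.lsn:
        restarted.is_primary = True
        peer.is_primary = False
    elif restarted.lsn < peer.lsn:
        peer.is_primary = True
        restarted.lsn = peer.lsn   # missed writes are replayed to the restarted appliance
    witness.stored_lsn = None      # the stored LSN is cleared once roles are settled
```

Under this reading, the external device serves as a record of the last acknowledged write at shutdown, and the LSN comparison drives the promotion decision.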
Regarding claim 20, Nipunage teaches a computing system comprising:
a memory and a processor configured to (figs. 2 and 3, source server 208/303 and storage appliances 260, 270, and 370A-C. Note that, for simplicity, fig. 3 is used for the mapping; fig. 2 comprises source and storage volumes, and its capability applies to fig. 3, see col. 9, lines 1-17, "module 311 can implement the functions of dropped write detection module 211"):
receive data writes at a first storage appliance and increment an associated last sequence number (LSN) X upon each data write thereto (figs. 2 and 3, Client 301, server 303 and Storage 370B, col. 9, line 58 to col. 10, line 11, Storage 370B receives data "write request" and incremented LSN "updated monotonic counter value", Nipunage);
receive data writes at a second storage appliance and increment an associated LSN Y upon each data write thereto (figs. 2 and 3, Client 301, server 303 and Storage 370A, col. 9, line 58 to col. 10, line 11, Storage 370A receives data "write request" and incremented LSN "updated monotonic counter value", Nipunage); and
write the LSN X to an external witness device upon the first storage appliance beginning to shut down during a soft shutdown of the first storage appliance (fig. 3, Server 303 module 311, col. 9, lines 26-32, storage volume 306 can be used as a data store for metadata and is used by dropped write module 311 to determine whether a dropped write has occurred; note that the metadata information analyzed is reflective of the appliances/storages; col. 11, lines 1-9, dropped write correction 313 can correct the dropped write with regard to storage 306, which confirms a dropped write occurred; see also fault tolerance, col. 9, lines 47-57, Nipunage);
wherein the first storage appliance and the second storage appliance comprise a cluster management database with data writes being replicated on both the first storage appliance and the second storage appliance (figs. 2, 3 and 7, source server 208/303, col. 9, lines 8-36 and 47-57, data writes and updates are distributed at least across storages 370A and 370B, and col. 18, lines 44-49, "distributed database"). To the degree that Nipunage does not explicitly state "cluster management database" and "appliance", the use of such terminology and concepts is well known in the field. In one such system, Jean teaches that peer appliances comprise a cluster management database (fig. 2B, central DB 214.2 managing a plurality of storage appliances 210.2, 210.3…, par. 47, Jean). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to integrate the cluster management DB 214.2 of Jean into the Nipunage server 208/303. One would have been motivated to provide server 208/303 with direct access to shared distributed resources in storage volumes across the network.
The Nipunage/Jean integrated system comprises a plurality of primary and secondary appliances (see figs. 2B and 2C, Primary Appliance and Peer Appliances 210.1, 210.2, 210.4, pars. 49-50, Jean),
wherein one of the first storage appliance and the second storage appliance acts as a primary instance of the cluster management database and the other of the first storage appliance and the second storage appliance acts as a secondary instance of the cluster management database (figs. 2B and 2C, Management DB 214.1-214.4, pars. 46 and 49, sync replicated DBs, Jean);
wherein the first storage appliance is the primary instance and, when the first storage appliance shuts down, the second storage appliance is promoted to become the primary instance, the LSN X stored in the external device is cleared, and the second storage appliance continues to receive data writes and increment LSN Y (figs. 2-4, col. 10, lines 37-44 and col. 12, lines 12-27, the system compares counters and determines that a dropped write has not occurred, and counter values/metadata can be reset, Nipunage; and figs. 2A-2C, pars. 49-50, on hardware failure the Peer Storage becomes the primary Storage, Jean). In addition, Jean teaches in fig. 2B and par. 49 software and hardware failures, which further support a soft shutdown as claimed.
Response to Arguments
Applicant's arguments filed 2/20/26 have been fully considered but they are not persuasive. See remarks below.
Applicant alleges that during appliance shutdown, the prior art is silent regarding writing an LSN X to an external device as claimed.
Examiner disagrees. The Jean system clearly teaches in figures 2A-2C a primary system having a hardware failure, in which case a new, peer storage becomes the primary. The combination of Nipunage in view of Jean clearly teaches the alleged limitation, wherein in modified Nipunage, upon the first storage appliance beginning a soft shutdown (par. 49, Jean), the first storage appliance's LSN X is written to the external device (fig. 3, Server 303 module 311, col. 9, lines 26-32, storage volume 306 can be used as a data store for metadata and is used by dropped write module 311 to determine whether a dropped write has occurred; note that the metadata information analyzed is reflective of the appliances/storages; col. 11, lines 1-9, dropped write correction 313 can correct the dropped write with regard to storage 306, which confirms a dropped write occurred; see also fault tolerance, col. 9, lines 47-57, Nipunage; and fig. 2B and par. 49, software or hardware failure support indicates a soft shutdown occurred and is reflected in the system, Jean), as rejected in the updated Office action. In synchronization systems, when an error is detected, the backup or secondary storage system takes over, as done in Jean (fig. 2C). Errors and issues mostly arise while data is being written, and storage systems may become out of sync. When the soft shutdown is detected in Jean (par. 49, Jean), the modified Nipunage, comprising the other storage system, stores the relevant metadata and data to retrieve and correct the dropped write by updating metadata 372A and extended metadata 374A, which comprise and describe the updates made to the actual write data LSN X, comprising an audit trail/metadata describing the changes/updates (col. 11, lines 1-9, Nipunage). As such, the argument is believed moot.
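As an informal aid only, the argued sequence, as the examiner reads it onto the Nipunage/Jean combination, can be summarized by the following sketch (Python, hypothetical names; it is not the prior art's code and is offered solely to frame the discussion):

```python
# Informal sketch of the argued soft-shutdown/failover sequence as claimed;
# hypothetical names only, not code disclosed by Nipunage or Jean.

def failover_on_soft_shutdown(first: dict, second: dict, witness: dict) -> None:
    # 1. The first (primary) appliance begins a soft shutdown: its LSN X is
    #    written to the external witness device.
    witness["stored_lsn"] = first["lsn"]
    first["is_primary"] = False

    # 2. The second appliance is promoted to the primary instance of the
    #    cluster management database, and the stored LSN X is then cleared.
    second["is_primary"] = True
    witness["stored_lsn"] = None

    # 3. The second appliance continues to receive data writes and to
    #    increment its own LSN Y.
    second["lsn"] += 1


# Example: the first appliance at LSN 7 soft-shuts down; the second takes
# over and keeps counting from its own LSN.
first = {"lsn": 7, "is_primary": True}
second = {"lsn": 7, "is_primary": False}
witness = {"stored_lsn": None}
failover_on_soft_shutdown(first, second, witness)
```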
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure in the field of data replication:
U.S. Patent Application Publication No. 2020/0097384: pars. 34 and 36, incrementing change.
U.S. Patent Application Publication No. 2014/0040206.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARCIN R FILIPCZYK whose telephone number is (571)272-4019. The examiner can normally be reached M-F 7-4 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kavita Stanley, can be reached at 571-272-8352. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
March 17, 2026
/MARCIN R FILIPCZYK/Primary Examiner, Art Unit 2153