Prosecution Insights
Last updated: April 19, 2026
Application No. 18/368,755

STORAGE SYSTEM AND DATA PROTECTION METHOD

Final Rejection: §103, §112

Filed: Sep 15, 2023
Examiner: WESTBROOK, MICHAEL L
Art Unit: 2139
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Hitachi, Ltd.
OA Round: 4 (Final)

Grant Probability: 74% (Favorable)
OA Rounds: 5-6
To Grant: 2y 11m
With Interview: 80%

Examiner Intelligence

Career Allow Rate: 74% (grants above average; 160 granted / 216 resolved; +19.1% vs TC avg)
Interview Lift: +6.0% (moderate)
Avg Prosecution: 2y 11m (typical timeline)
Currently Pending: 17
Total Applications: 233 (career history, across all art units)

Statute-Specific Performance

§101: 3.8% (-36.2% vs TC avg)
§102: 20.3% (-19.7% vs TC avg)
§103: 47.0% (+7.0% vs TC avg)
§112: 23.8% (-16.2% vs TC avg)

Based on career data from 216 resolved cases; deltas are measured against a Tech Center average estimate.
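The headline figures above are mutually consistent and can be reproduced from the career counts. A minimal sketch, assuming (as the displayed 74% and 80% figures suggest) that the +6.0% interview lift is simply additive in percentage points:

```python
# Career allowance rate: 160 granted out of 216 resolved cases
granted, resolved = 160, 216
allow_rate = granted / resolved          # ~0.741, displayed as 74%

# Assumption: the +6.0% interview lift is additive in percentage
# points, which matches the 80% "With Interview" figure shown above.
with_interview = allow_rate + 0.06       # ~0.801, displayed as 80%

print(f"Career allow rate: {allow_rate:.1%}")
print(f"With interview:    {with_interview:.1%}")
```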

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to the communication from applicant received on September 25, 2025.

Response to Amendment

Applicant's submission filed on September 25, 2025 has been entered. Claims 8-9 have been added. Claims 1-9 are pending in the current application.

The negative limitations presented in the independent claims appear to be supported in [0013] and [0058] of applicant’s specification. For example, see [0013] “Each controller and the drive are connected via, for example, a switch (BE Switch) 109.” and [0058] “A storage system 100 of the present example includes a plurality of controllers 103 and a drive 110 that is a storage device, and each controller and the drive are connected via, for example, a switch (BE Switch) 109.”

Claim Objections

Claims 1-9 are objected to because of the following informalities:

In claim 1, both instances of “all the one or more” should be changed to “all of the one or more”.
In claim 7, both instances of “all the one or more” should be changed to “all of the one or more”.
In claim 7, line 8, “the couple to” should be deleted.
In claim 8, line 3, “recover” should be “recovers”.
In claim 9, line 3, “recover” should be “recovers”.

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

In claim 1, lines 26-27 and 30, “the storage device” now lacks antecedent basis, as a plurality of storage devices is set forth in the claim.
In claim 6, line 5, “the storage device” now lacks antecedent basis.
In claim 7, lines 27-28 and 33-34, “the storage device” now lacks antecedent basis, as a plurality of storage devices is set forth in the claim.
In claim 8, line 3, “other” should be “another” for clear antecedent basis with claim 1.
In claim 9, line 3, “other” should be “another” for clear antecedent basis with claim 1.

All dependent claims are rejected for having the same deficiency as the claim(s) from which they depend.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Sicola et al. (hereinafter Sicola, U.S. Patent No. 7,111,189) in view of Okuno et al. (hereinafter Okuno, U.S. Patent No. 7,849,260).

Regarding claim 1, Sicola teaches:

A storage system comprising: one or more non-volatile storage devices (See Col.
7 lines 14-17 “Storage arrays 203 and 213 typically comprise a plurality of magnetic disk storage devices, but could also include or consist of other types of mass storage devices such as semiconductor memory.”);

and a plurality of storage controllers coupled to a host and coupled to the one or more non-volatile storage devices that control reading and writing for the one or more non-volatile storage devices (See Col. 8 lines 29-34 “Each pair of array controllers 201/202 and 211/212 (and associated storage array) is also called a storage node (e.g., 301 and 302), and has a unique Fibre Channel Node Identifier. As shown in FIG. 3, array controller pair A1/A2 comprise storage node 301, and array controller pair B1/B2 comprise storage node 302.” See Figures 2-4, which depict controllers 201/202 (also referenced as A1/A2) and 211/212 (also referenced as B1/B2). See Figure 3, in which the storage controllers 201/202 (also referenced as A1/A2) and 211/212 (also referenced as B1/B2) are all coupled to host 1 and host 2 and are coupled to the storage arrays 203 and 213 via a network of connections/interconnections, switches, paths, etc. The storage controllers being coupled to a host and storage device via a network of connections/interconnections, switches, paths, etc. are also depicted in Figure 2.),

wherein the one or more non-volatile storage devices store user data (See Col. 7 lines 14-17 “Storage arrays 203 and 213 typically comprise a plurality of magnetic disk storage devices, but could also include or consist of other types of mass storage devices such as semiconductor memory.” See Col. 6 line 59 – Col. 7 line 2 “Various methods for synchronizing the data between the local and remote array are possible in the context of the present system. These synchronization methods range from full synchronous to fully asynchronous data transmission, as explained below. The system user's ability to choose these methods provides the user with the capability to vary system reliability with respect to potential disasters and the recovery after such a disaster. The present system allows choices to be made by the user based on factors which include likelihood of disasters and the critical nature of the user's data.” See Col. 1, lines 18-21 “It is desirable to provide the ability for rapid recovery of user data from a disaster or significant error event at a data processing facility. This type of capability is often termed `disaster tolerance`.” See Col. 1, lines 27-29 “It is also desirable for user applications to continue to run while data replication proceeds in the background.” See Col. 5, lines 54-56 “The system of the present invention comprises a data backup and remote copy system which provides disaster tolerance.” See Col. 8, lines 54-56 “The other port of each controller is the `remote copy` port, used for disaster tolerant backup.”),

wherein each of the plurality of storage controllers includes a processor and a memory (See Col. 12 line 16-17 “In synchronous operation mode, data is written simultaneously to local controller cache memory” See Col. 12 line 46-47 “remote target controller B1 writes data to its write-back cache” See Col. 13 line 16 “the controller's non-volatile write-back cache `micro-log`” See Col. 13 line 64-67 “writes were logged to the initiator controller A1's write-back cache which is also mirrored in partner controller A2's non-volatile write-back cache (as a backup copy)” See Col. 15 line 55-56 “controller write-back cache” See Claim 1 of Sicola “storing, on a log unit in primary cache memory in the first array controller” See Claim 1 of Sicola “mirroring the primary cache in backup cache memory in the second array controller at the first site” See Claim 9 of Sicola “storing the data for each write transaction from the host computer in mirrored cache memory in both the first array controller and the second array controller” The controllers are processors themselves, and contain cache memory.),

wherein each of the plurality of the storage controllers are configured to individually implement a first memory protection method that is a memory copying method (See Col. 13, lines 34-38 “The top section of FIG. 10 depicts normal operation of the present system 100, where arrow 1005 shows write data from host computer 101 being stored on local (initiator) array 203. Arrow 1010 indicates that the write data is normally backed up on remote (target) array 213.”), and a second memory protection method that is a log saving method (See Col. 15, lines 38-47 “The lower section of FIG. 10 shows system 100 operation when the links between the local and remote sites are down, or when the remote pair of array controllers 211/212 are inoperative, and thus array 213 is inaccessible to local site 218, as indicated by the broken arrow 1015. In this situation, as indicated by arrows 1020, write operations from the local host (ref. no. 101, shown in FIGS. 2 and 3), are directed by the initiator array controller (either 201 or 202 in FIGS. 2 and 3) to both array 203 and log unit 1000.”),

and upon a first storage controller, which is any one of the plurality of storage controllers, executing the first memory protection method, data on a first memory of the first storage controller is copied to a second memory of a corresponding second storage controller, among the plurality of storage controllers (See top section of Figure 10. See Col. 15 lines 35-39 “The top section of FIG. 10 depicts normal operation of the present system 100, where arrow 1005 shows write data from host computer 101 being stored on local (initiator) array 203. Arrow 1010 indicates that the write data is normally backed up on remote (target) array 213.” See Abstract “In the situation wherein an array controller fails during an asynchronous copy operation, the partner array controller uses a `micro log` stored in mirrored cache memory to recover transactions, in order, which were `missed` by the backup storage array when the array controller failure occurred.” The first protection method corresponds to the prior art’s backup method from a local/initiator array 201/202 to a remote/target array 211/212. Both arrays have a pair of controllers 201/202 (also referenced as A1/A2) and 211/212 (also referenced as B1/B2), as indicated in Figures 2-4 and the abstract. See Abstract and Figures 2-4 and Figure 10, in which multiple controllers have the ability to implement the protection method as a way to provide redundancy in failover operations.),

and upon the first controller executing the second memory protection method, a log related to update of the data on the first memory is generated and the log is written into the one or more non-volatile storage devices (See lower section of Figure 10. See Col. 15 lines 32-34 “FIG. 10 is a high-level flow diagram showing a write history log operation performed by the present system 100 when both links are down, or when the remote site is down.” See Col. 15 lines 39-48 “The lower section of FIG. 10 shows system 100 operation when the links between the local and remote sites are down, or when the remote pair of array controllers 211/212 are inoperative, and thus array 213 is inaccessible to local site 218, as indicated by the broken arrow 1015. In this situation, as indicated by arrows 1020, write operations from the local host (ref. no. 101, shown in FIGS. 2 and 3), are directed by the initiator array controller (either 201 or 202 in FIGS. 2 and 3) to both array 203 and log unit 1000.” See Col. 16 lines 8-13 “As shown in FIG. 11, at step 1105, access from site 218 to target array 213 is broken, as indicated by arrow 1015 in FIG. 10. At step 1110, the write history logging operation of the present system is initiated by array controller 201 in response to a link failover situation” See Col. 16 lines 15-19 “At step 1115, write operations requested by host computer 101/102 are redirected by associated initiator array controller 201 (optionally, controller 202) from target controller 211 to log unit 1000.” See Col. 15 lines 26-30 “The log unit is preferably located on the same storage array as the local remote copy set member, but in an alternative embodiment, the log unit could be located on a separate storage device coupled to the array controller associated with the local remote copy set member.” See Col. 13 line 64-67 “these `outstanding` writes were logged to the initiator controller A1's write-back cache which is also mirrored in partner controller A2's non-volatile write-back cache (as a backup copy),” See Abstract “In the situation wherein an array controller fails during an asynchronous copy operation, the partner array controller uses a `micro log` stored in mirrored cache memory to recover transactions, in order, which were `missed` by the backup storage array when the array controller failure occurred.” See Abstract and Figures 2-4, Figure 10 and Figure 12, in which multiple controllers have the ability to implement the second protection method as a way to provide redundancy in failover operations.),

wherein the first storage controller stores a write request from a host for the storage device as cache data in the first memory (See Col. 12 lines 33-35 “As shown in FIG. 7, at step 701, host computer 101 issues a write command to local controller A1 (201),” See Col. 12 lines 36-39 “At step 710, the controller passes the write command down to the VA level software 530 (FIG. 5) as a normal write. At step 715, VA 530 writes the data into its write-back cache” See Figure 7.),

returns a write completion response to the host after protecting the cache data in the first memory protection method or the second memory protection method (See Col. 12 lines 41-56 “On write completion, VA 530 retains the cache lock and calls the PPRC manager 515. At step 720, PPRC manager 515 sends the write data to remote target controller B1 (211) via host port initiator module 510. The data is sent through the remote copy dedicated host port 109 via path 221D, and across fabric 103A. Next, at step 725, remote target controller B1 writes data to its write-back cache (or directly to media if a write through operation). Then, at step 730, controller B1 sends the completion status back to initiator controller A1. Once PPRC manager 515 in controller A1 has received a completion status from target controller, it notifies VA 530 of the completion, at step 735. At step 740, VA 530 completes the write in the normal path (media write if write through), releases the cache lock, and completes the operation at step 745 by sending a completion status to the host 101.” See Figure 7. A write completion response is returned after protecting the cache data by copying it to a remote target.),

and destages the cache data into the storage device after the write completion response (See Col. 15 lines 53-54 “Enabling write-back would require a DMA copy of the data so that it could be written to media at a later time” See Figure 7 and Col. 12 lines 33-56, in which data is stored in the write-back cache and then a write completion response is received. Data stored in the write-back cache is destaged at a later time after receiving the data (i.e. write completion), as such operation is how write-back caching operations are implemented.),

and wherein the first storage controller switches between the first memory protection method and the second memory protection method to be used according to an operation state of another of the plurality of storage controllers (See Col. 15 lines 35-48 “The top section of FIG. 10 depicts normal operation of the present system 100, where arrow 1005 shows write data from host computer 101 being stored on local (initiator) array 203. Arrow 1010 indicates that the write data is normally backed up on remote (target) array 213. The lower section of FIG. 10 shows system 100 operation when the links between the local and remote sites are down, or when the remote pair of array controllers 211/212 are inoperative, and thus array 213 is inaccessible to local site 218, as indicated by the broken arrow 1015. In this situation, as indicated by arrows 1020, write operations from the local host (ref. no. 101, shown in FIGS. 2 and 3), are directed by the initiator array controller (either 201 or 202 in FIGS. 2 and 3) to both array 203 and log unit 1000” See Col. 16 lines 8-13 “As shown in FIG. 11, at step 1105, access from site 218 to target array 213 is broken, as indicated by arrow 1015 in FIG. 10. At step 1110, the write history logging operation of the present system is initiated by array controller 201 in response to a link failover situation” See Col.
16 lines 15-19 “At step 1115, write operations requested by host computer 101/102 are redirected by associated initiator array controller 201 (optionally, controller 202) from target controller 211 to log unit 1000.” See Figure 10, in which the controller(s) 201/202 in the initiator array switches between a normal operation (first memory protection method) of backing up data to a remote site and a write history logging operation (second memory protection method) of directing write operations to array 203 and log unit 1000 according to the operation state of remote pair array controllers 211/212.).

Sicola does not explicitly disclose the totality of what Okuno teaches:

and each of the plurality of storage controllers are respectively physically connected to all the one or more non-volatile storage devices in the storage system without any other storage controllers connected between the each of the plurality of storage controllers and the one or more non-volatile storage devices and each of the plurality of storage controllers is configured to control all the one or more non-volatile storage devices without using another intervening storage controller (See Figure 1 of Okuno, in which controllers 6A and 6B are connected to storage apparatuses 4A-4D without any other storage controllers between them and can respectively control the storage apparatuses without using another intervening controller.).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to combine the transaction log failover method of Sicola with the storage controller system of Okuno (as depicted in Figure 1 of Okuno) to optimize I/O, increase throughput and increase the amount of data that can be cached by the system. Okuno specifically teaches at column 3, lines 12-17, that this results in “speeding up the processing time and its control method in response to a command in a simple manner while reducing the load of the controller that received a command”.

Regarding claim 2, Sicola teaches:

The storage system according to claim 1, wherein the first storage controller is associated with the second storage controller to form a redundant configuration (See Col. 3 lines 41-43 “wherein each site comprises a host computer and associated data storage array, with redundant array controllers and adapters.” See Col. 8 lines 29-34 “Each pair of array controllers 201/202 and 211/212 (and associated storage array) is also called a storage node (e.g., 301 and 302), and has a unique Fibre Channel Node Identifier. As shown in FIG. 3, array controller pair A1/A2 comprise storage node 301, and array controller pair B1/B2 comprise storage node 302.” The storage controller and associated another storage controller corresponds to controllers 201/202 (also referenced as A1/A2) depicted in Figures 2-4.), and is used in the first memory protection method (See top section of Figure 10. See Col. 15 lines 35-39 “The top section of FIG. 10 depicts normal operation of the present system 100, where arrow 1005 shows write data from host computer 101 being stored on local (initiator) array 203. Arrow 1010 indicates that the write data is normally backed up on remote (target) array 213.” The first protection method corresponds to the prior art’s backup method from a local/initiator array 201/202 to a remote/target array 211/212.), and is used in the second memory protection method in a case where the second storage controller is failed (See Col. 15 lines 35-48 “The lower section of FIG. 10 shows system 100 operation when the links between the local and remote sites are down, or when the remote pair of array controllers 211/212 are inoperative, and thus array 213 is inaccessible to local site 218, as indicated by the broken arrow 1015. In this situation, as indicated by arrows 1020, write operations from the local host (ref. no. 101, shown in FIGS. 2 and 3), are directed by the initiator array controller (either 201 or 202 in FIGS. 2 and 3) to both array 203 and log unit 1000” See Col. 16 lines 8-13 “As shown in FIG. 11, at step 1105, access from site 218 to target array 213 is broken, as indicated by arrow 1015 in FIG. 10. At step 1110, the write history logging operation of the present system is initiated by array controller 201 in response to a link failover situation” See Col. 16 lines 15-19 “At step 1115, write operations requested by host computer 101/102 are redirected by associated initiator array controller 201 (optionally, controller 202) from target controller 211 to log unit 1000.” See Figure 10, in which the controller(s) 201/202 in the initiator array are used in a write history logging operation (second memory protection method) to direct write operations to array 203 and log unit 1000 when remote pair array controllers 211/212 are inoperative (failed).).

Regarding claim 3, Sicola teaches:

The storage system according to claim 2, wherein a memory in which the log is written is provided inside the first storage controller (See Col. 12 line 16-17 “In synchronous operation mode, data is written simultaneously to local controller cache memory” See Col. 12 line 46-47 “remote target controller B1 writes data to its write-back cache” See Col. 13 line 16 “the controller's non-volatile write-back cache `micro-log`” See Col. 13 line 64-67 “writes were logged to the initiator controller A1's write-back cache which is also mirrored in partner controller A2's non-volatile write-back cache (as a backup copy)” See Col. 15 line 55-56 “controller write-back cache” See Claim 1 of Sicola “storing, on a log unit in primary cache memory in the first array controller” See Claim 1 of Sicola “mirroring the primary cache in backup cache memory in the second array controller at the first site” See Claim 9 of Sicola “storing the data for each write transaction from the host computer in mirrored cache memory in both the first array controller and the second array controller” Log data may be written to non-volatile cache memory that is provided inside a controller.).

Regarding claim 4, Sicola teaches:

The storage system according to claim 2, wherein a part of the one or more non-volatile storage devices stores the log (See Col. 15 lines 26-30 “The log unit is preferably located on the same storage array as the local remote copy set member, but in an alternative embodiment, the log unit could be located on a separate storage device coupled to the array controller associated with the local remote copy set member.”).

Regarding claim 5, Sicola teaches:

The storage system according to claim 1, wherein in the destage, the data is stored in a final storage area of the storage device by using a storage function provided by the storage system (See Col. 15 lines 53-54 “Enabling write-back would require a DMA copy of the data so that it could be written to media at a later time” See Figure 7 and Col. 12 lines 33-56, in which data is stored in the write-back cache. Data stored in the write-back cache is destaged at a later time after receiving the data, as such operation is how write-back caching operations are implemented.).

Claim 7 is rejected for the same reasons as claim 1.

Claims 6 and 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Sicola in view of Okuno, and further in view of Bhagi et al. (hereinafter Bhagi, U.S. Publication No. 2023/0305934), with the obvious-to-try rationale set forth below.
Regarding claim 6, Sicola teaches:

The storage system according to claim 2, wherein upon determining that the second storage controller is failed, the first storage controller performs operation switching from the memory copying method to the log saving method (See Col. 15 lines 35-48 “The top section of FIG. 10 depicts normal operation of the present system 100, where arrow 1005 shows write data from host computer 101 being stored on local (initiator) array 203. Arrow 1010 indicates that the write data is normally backed up on remote (target) array 213. The lower section of FIG. 10 shows system 100 operation when the links between the local and remote sites are down, or when the remote pair of array controllers 211/212 are inoperative, and thus array 213 is inaccessible to local site 218, as indicated by the broken arrow 1015. In this situation, as indicated by arrows 1020, write operations from the local host (ref. no. 101, shown in FIGS. 2 and 3), are directed by the initiator array controller (either 201 or 202 in FIGS. 2 and 3) to both array 203 and log unit 1000” See Col. 16 lines 8-13 “As shown in FIG. 11, at step 1105, access from site 218 to target array 213 is broken, as indicated by arrow 1015 in FIG. 10. At step 1110, the write history logging operation of the present system is initiated by array controller 201 in response to a link failover situation” See Col. 16 lines 15-19 “At step 1115, write operations requested by host computer 101/102 are redirected by associated initiator array controller 201 (optionally, controller 202) from target controller 211 to log unit 1000.” See Figure 10, in which the controller(s) 201/202 in the initiator array switches between a normal operation (first memory protection method) of backing up data to a remote site and a write history logging operation (second memory protection method) of directing write operations to array 203 and log unit 1000 when remote pair array controllers 211/212 are inoperative (failed).), … wherein upon determining recovery from the failing of the second storage controller is detected (See Figure 11 Step 1120 “Access to Target Re-Established”), the first storage controller copies the data on the first memory to the second memory of the second storage controller (See Figure 11 step 1155 “Host Write Data Directed to Target” See Col. 15 line 49-51 “at step 1155, normal backup operation of system 100 is resumed” See Col. 15 line 58-64 “A log unit is `replayed` to the remote site `partner` controller when the link is restored, the remote site has been restored, or when the local site has been restored (during a site failback, described below with respect to FIG. 12). Replaying the log means sending all commands and data over to the remote partner in order to all remote copy sets associated with the log unit.”), …, and performs operation switching from the log saving method to the memory copying method (See Col. 15 line 49-51 “at step 1155, normal backup operation of system 100 is resumed” See Col. 15 lines 35-48 “The top section of FIG. 10 depicts normal operation of the present system 100, where arrow 1005 shows write data from host computer 101 being stored on local (initiator) array 203. Arrow 1010 indicates that the write data is normally backed up on remote (target) array 213. The lower section of FIG. 10 shows system 100 operation when the links between the local and remote sites are down, or when the remote pair of array controllers 211/212 are inoperative, and thus array 213 is inaccessible to local site 218, as indicated by the broken arrow 1015.” See Figure 10 and Figure 11 Step 1120 and Step 1155, in which the controller(s) 201/202 in the initiator array switches back to normal operation (first memory protection method) of backing up data to a remote site from a write history logging operation (second memory protection method) of directing write operations to array 203 and log unit 1000 after the remote pair array controllers 211/212 has re-established a connection and has recovered from a failure.).

Sicola does not explicitly disclose: preferentially destages pre-switching cache data that is cache data before the operation switching into the storage device, and destages post-switching cache data that is cache data after the operation switching, after destaging all the pre-switching cache data, ….

Although Sicola does disclose destaging cache data (See Col. 15 lines 53-54 “Enabling write-back would require a DMA copy of the data so that it could be written to media at a later time” See Figure 7 and Col. 12 lines 33-56, in which data is stored in the write-back cache. Data stored in the write-back cache is destaged at a later time after receiving the data, as such operation is how write-back caching operations are implemented.), Sicola does not explicitly disclose exactly when the data will be destaged in relation to the operation switching aspect. In view of the prior art and the state of the art regarding the functionality of write-back caching/destaging operations, it would have been obvious to try destaging the data at different times, choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success that would result in the claimed limitation.
Sicola does not explicitly disclose what Bhagi teaches: deletes the log (See [0009] “The tracking database also indicates whether and when a transaction log has been transformed into corresponding one or more proprietary backup data chunks. The tracking database further indicates whether and when each backup data chunk has been restored to a particular failover management database, thus reflecting when a given transaction log has been successfully applied to a particular destination. After a transaction log has been successfully applied to every destination management database, the transaction log may be removed from the data storage resources.”) It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to combine the transaction log failover method of Sicola and the storage controller system of Okuno with the transaction log management method of Bhagi to save space by deleting the transaction log after using the transaction log to serve its intended purpose, thus improving storage space management. Regarding claim 8, Sicola teaches: The storage system according to claim 1, wherein in the case of the second memory protection method, a storage controller, among the plurality of storage controllers, recover the state of the other storage controller (See Abstract “In the situation wherein an array controller fails during an asynchronous copy operation, the partner array controller uses a `micro log` stored in mirrored cache memory to recover transactions, in order, which were `missed` by the backup storage array when the array controller failure occurred.” See Abstract and Figures 2-4, Figure 10 and Figure 12, in which multiple controllers have the ability to implement the second protection method as a way to provide redundancy in failover operations.) 
Sicola does not explicitly disclose what Bhagi teaches: after deleting the log (See [0009], reproduced in the rejection above, which teaches that a transaction log may be removed from the data storage resources after it has been successfully applied to every destination management database.).

Although Sicola does disclose a storage controller, among the plurality of storage controllers, recovering the state of the other storage controller in the case of the second memory protection method, Sicola does not explicitly disclose exactly when the state of the other storage controller is recovered in relation to deleting the log. Bhagi discloses deleting the log. Based on the teachings of Sicola and Bhagi and the ordinary skill of a person in the art, it would have been obvious to try recovering the state of the other storage controller after deleting the log, choosing from a finite number of identified, predictable timeframes, with a reasonable expectation of success that would result in the claimed limitation.

Claim 9 is rejected for the same reasons as claim 8.

Response to Arguments

On pages 9-11 of applicant’s arguments filed on September 25, 2025, applicant submitted that Sicola does not disclose the amendments filed on September 25, 2025. Such arguments are persuasive; however, the amendments are rejected herein in view of newly found prior art Okuno.

On page 11 of applicant’s arguments, applicant submitted that in Sicola, the log is invalidated after the controller recovers, citing 880, Figure 8B and col. 14, l. 26. Applicant then submitted that Sicola does not disclose “wherein upon determining recovery from the failing of the second storage controller is, the first storage controller copies the data on the first memory to the second memory of the second storage controller, deletes the log,….,” as set forth in claim 6. Examiner clarifies that the totality of the argued limitation is taught by Sicola in view of Bhagi, together with the obviousness rationale set forth above. Applicant’s representative has argued only the primary reference (i.e., Sicola) without addressing the combination of references used to reject the argued limitation of claim 6. Since claim 6 has been rejected over several references (with an obviousness rationale), and applicant’s representative has not addressed the totality of the § 103 rejection as a whole, including all references and their combination, such arguments are not persuasive.

On page 12 of applicant’s arguments, applicant submitted that newly added claims 8-9 are not taught by the prior art. Examiner respectfully disagrees, and has rejected claims 8-9 over Sicola in view of Okuno, further in view of Bhagi, with an obviousness rationale. See the rejection of claim 8. All pending claims are rejected herein for the reasons stated in this Response to Arguments section.

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL L WESTBROOK, whose telephone number is (571) 270-5028. The examiner can normally be reached Mon-Fri, 9am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald Bragdon, can be reached at (571) 272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL L WESTBROOK/
Examiner, Art Unit 2139

/REGINALD G BRAGDON/
Supervisory Patent Examiner, Art Unit 2139

Prosecution Timeline

Sep 15, 2023
Application Filed
Sep 19, 2024
Non-Final Rejection — §103, §112
Dec 11, 2024
Response Filed
Mar 22, 2025
Final Rejection — §103, §112
Jun 20, 2025
Request for Continued Examination
Jun 24, 2025
Response after Non-Final Action
Jun 27, 2025
Non-Final Rejection — §103, §112
Sep 25, 2025
Response Filed
Jan 05, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547313
DEVICE AND METHOD FOR IMPLEMENTING LIVE MIGRATION
2y 5m to grant Granted Feb 10, 2026
Patent 12535964
SYSTEMS AND METHODS FOR HYBRID STORAGE
2y 5m to grant Granted Jan 27, 2026
Patent 12530138
MEMORY CONTROL DEVICE AND REFRESH CONTROL METHOD THEREOF
2y 5m to grant Granted Jan 20, 2026
Patent 12517656
COOPERATIVE ADAPTIVE THROTTLING BETWEEN HOSTS AND DATA STORAGE SYSTEMS
2y 5m to grant Granted Jan 06, 2026
Patent 12504876
FLEXIBLE METADATA REGIONS FOR A MEMORY DEVICE
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
74%
Grant Probability
80%
With Interview (+6.0%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 216 resolved cases by this examiner. Grant probability derived from career allow rate.
