DETAILED ACTION
1. This Office Action is taken in response to Applicants’ Amendments and Remarks filed on 1/29/2026 regarding application 18/218,362 filed on 7/5/2023.
Claims 1-20 are pending for consideration.
2. Response to Amendments and Remarks
Applicants’ amendments and remarks have been fully and carefully considered, with the Examiner’s response set forth below.
(1) In response to the amendments and remarks, an updated claim analysis has been made. Refer to the corresponding sections of the following Office Action for details.
3. Examiner’s Note
(1) In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied upon for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention. This will assist in expediting compact prosecution. MPEP 714.02 recites: “Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive. See MPEP § 714.” Amendments that do not point to specific support in the disclosure may be deemed not to comply with the provisions of 37 C.F.R. 1.121(b), (c), (d), and (h) and therefore held not fully responsive. Generic statements such as “Applicants believe no new matter has been introduced” may be deemed insufficient.
(2) The Examiner has cited particular columns/paragraphs and line numbers in the references applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. In preparing responses, Applicant is respectfully requested to fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the cited passages as taught by the prior art or as relied upon by the Examiner.
Claim Objections
4. Claims 1-7 are objected to because of the following informalities:
Claim 1 recites “a data storage drive under control of the computing processor that stores instructions, which when executed by the processor, direct the computing processor to execute application programs …” It appears that “the processor” is intended to read “the computing processor,” as in the case of claim 8.
Appropriate correction is required.
Claims 2-7 are objected to by virtue of their dependency from claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claims 1 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Denysyev et al. (US Patent Application Publication 2025/0199692, hereinafter Denysyev) in view of Wilkinson et al. (US Patent Application Publication 2019/0179687, hereinafter Wilkinson).
As to claim 1, Denysyev teaches A mesh storage device implemented within a local area network [as shown in figure 1A-1D; FIG. 1A illustrates an example system for data storage, in accordance with some implementations. System 100 (also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system 100 may include the same, more, or fewer elements configured in the same or different manner in other implementations. System 100 includes a number of computing devices 164. Computing devices (also referred to as “client devices” herein) may be for example, a server in a data center, a workstation, a personal computer, a notebook, or the like. Computing devices 164 are coupled for data communications to one or more storage arrays 102 through a storage area network (SAN) 158 or a local area network (LAN) 160 … (¶ 0039-0045); In the example depicted in FIG. 3A, the storage system 306 is coupled to the cloud services provider 302 via a data communications link 304. The data communications link 304 may be embodied as a dedicated data communications link, as a data communications pathway that is provided through the use of one or data communications networks such as a wide area network (‘WAN’) or local area network (‘LAN’), or as some other mechanism capable of transporting digital information between the storage system 306 and the cloud services provider 302 … (¶ 0123)]; comprising:
a computing processor [as shown in figure 1A, where there are multiple computing devices (164A and 164B), and controllers (110A-110D); processing device, figure 1B, 104];
a communication transceiver under control of the computing processor [host bus adapters, figure 1B, 103A-103C; figure 3A, 304; Storage array controller 110 may be implemented in a variety of ways, including as a Field Programmable Gate Array (FPGA), a Programmable Logic Chip (PLC), an Application Specific Integrated Circuit (ASIC), System-on-Chip (SOC), or any computing device that includes discrete components such as a processing device, central processing unit, computer memory, or various adapters. Storage array controller 110 may include, for example, a data communications adapter configured to support communications via the SAN 158 or LAN 160 … (¶ 0044)]; and
a data storage drive under control of the computing processor that stores instructions, which when executed by the processor, direct the computing processor to execute application programs in coordination with a plurality of peer mesh storage devices simultaneously wirelessly connected together and with the mesh storage device to form the local area network [The LAN 160 may also be implemented with a variety of fabrics, devices, and protocols. For example, the fabrics for LAN 160 may include Ethernet (802.3), wireless (802.11), or the like. Data communication protocols for use in LAN 160 may include Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Internet Protocol (IP), HyperText Transfer Protocol (HTTP), Wireless Access Protocol (WAP), Handheld Device Transport Protocol (HDTP), Session Initiation Protocol (SIP), Real Time Protocol (RTP), or the like (¶ 0042); Storage array controller 101 may include one or more processing devices 104 and random access memory (RAM) 111. Processing device 104 (or controller 101) represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 104 (or controller 101) may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets … In some implementations, instructions 113 are stored in RAM 111. Instructions 113 may include computer program instructions for performing operations in in a direct-mapped flash storage system … (¶ 0054-0055); The embodiments depicted with reference to FIGS. 2A-G illustrate a storage cluster that stores user data, such as user data originating from one or more user or client systems or other sources external to the storage cluster … Control of storage locations and workloads are distributed across the storage locations in a clustered peer-to-peer system … (¶ 0083); The storage system 306 depicted in FIG. 3B also includes software resources 314 that, when executed by processing resources 312 within the storage system 306, may perform various tasks. The software resources 314 may include, for example, one or more modules of computer program instructions that when executed by processing resources 312 within the storage system 306 are useful in carrying out various data protection techniques to preserve the integrity of data that is stored within the storage systems … (¶ 0137)], the application programs including:
a file transfer manager which, when executed by the computing processor, causes the computing processor to: receive a first data file from a user device connected to the local area network, and store the first data file, on one or more of the plurality of peer mesh storage devices [System 100 includes a number of computing devices 164. Computing devices (also referred to as “client devices” herein) may be for example, a server in a data center, a workstation, a personal computer, a notebook, or the like. Computing devices 164 are coupled for data communications to one or more storage arrays 102 through a storage area network (SAN) 158 or a local area network (LAN) 160 (¶ 0040); In one embodiment, two storage controllers (e.g., 125a and 125b) provide storage services, such as a small computer system interface (SCSI) block storage array, a file server, an object server, a database or data analytics service, etc. … In one embodiment, under direction from a storage controller 125a, 125b, a storage device controller 119a, 119b may be operable to calculate and transfer data to other storage devices from data stored in RAM (e.g., RAM 121 of FIG. 1C) without involvement of the storage controllers 125a, 125b … (¶ 0076-0078); In some systems, for example in UNIX-style file systems, data is handled with an index node or inode, which specifies a data structure that represents an object in a file system. The object could be a file or a directory, for example. Metadata may accompany the object, as attributes such as permission data and a creation timestamp, among other attributes. A segment number could be assigned to all or a portion of such an object in a file system … A series of address-space transformations takes place across an entire storage system. At the top are the directory entries (file names) which link to an inode. Inodes point into medium address space, where data is logically stored. Medium addresses may be mapped through a series of indirect mediums to spread the load of large files, or implement data services like deduplication or snapshots. Medium addresses may be mapped through a series of indirect mediums to spread the load of large files, or implement data services like deduplication or snapshots … (¶ 0095-0098)]; and retrieve a second data file requested by the user device from storage on one or more of the plurality of peer mesh storage devices, and cause the retrieved second data file to be transmitted over the local area network to the user device [Authority owners have the exclusive right to modify entities, to migrate entities from one non-volatile solid state storage unit to another non-volatile solid state storage unit, and to add and remove copies of entities … As authorities are transferred between storage nodes and authority owners update entities in their authorities, the system transfers messages between the storage nodes and non-volatile solid state storage units … (¶ 0102-0103); … Authorities 168 fulfill client requests by issuing the necessary reads and writes to the blades 252 on whose storage units 152 the corresponding data or metadata resides. Endpoints 272 parse client connection requests received from switch fabric 146 supervisory software, relay the client connection requests to the authorities 168 responsible for fulfillment, and relay the authorities' 168 responses to clients … Because authorities 168 are stateless, they can migrate between blades 252 … (¶ 0113-0115)]; and
a storage integrity manager which, when executed by the computing processor, causes the computing processor to: evaluate data storage integrity for a suspect storage device selected from the plurality of peer mesh storage devices [In some embodiments, the virtualized addresses are stored with sufficient redundancy. A continuous monitoring system correlates hardware and software status and the hardware identifiers. This allows detection and prediction of failures due to faulty components and manufacturing details. The monitoring system also enables the proactive transfer of authorities and entities away from impacted devices before failure occurs by removing the component from the critical path in some embodiments (¶ 0105); The storage system 306 depicted in FIG. 3B also includes software resources 314 that, when executed by processing resources 312 within the storage system 306, may perform various tasks. The software resources 314 may include, for example, one or more modules of computer program instructions that when executed by processing resources 312 within the storage system 306 are useful in carrying out various data protection techniques to preserve the integrity of data that is stored within the storage systems … (¶ 0137)];
determining whether a data write of a given data to and a data read of the given data from the suspect storage device are accurate [… In reverse, when data is read, the authority 168 for the segment ID containing the data is located as described above. The host CPU 156 of the storage node 150 on which the non-volatile solid state storage 152 and corresponding authority 168 reside requests the data from the non-volatile solid state storage and corresponding storage nodes pointed to by the authority. In some embodiments the data is read from flash storage as a data stripe. The host CPU 156 of storage node 150 then reassembles the read data, correcting any errors (if present) according to the appropriate erasure coding scheme, and forwards the reassembled data to the network … (¶ 0094); To further provide reliability and performance for storage systems, stored devices, storage services, and so forth, embodiments may provide predictive device wear and failure detection to proactively identify and notify about potential or imminent device failures … In some embodiments, a failure prediction model may be trained by gathering statistically sampled field data over time that may be relevant for detecting early or eventual storage component failure (e.g., die, package, interconnect, capacitor, device controller, or other partial or total storage device failures) and providing the sampled data along with component failure data to an RNN, to train or develop a statistical failure prediction model for early or eventual component failures. For example, the initial failure prediction model may estimate, model, or predict early failures and then collect additional component failure data to further update and train the failure prediction model. As more failure data and corresponding sampled field data is collected and provided to update the model, the more accurate the failure prediction model may become (¶ 0157-0158);
Wilkinson more expressly teaches this limitation -- Since data stored on data storage systems may become errored, it is necessary to provide utility procedures and associated hardware, firmware and software, to check the integrity of the stored data. Procedures which perform test actions on a storage system before it is put in production use are sometimes called exercisers, in particular disk exercisers, since they put the volumes of a storage system through their paces to test whether they are operating reliably—typically by writing test data to the storage volumes and then reading the data back again to check that the read-back data is identical to the test data that was written … When testing storage systems and related devices such as storage controllers, it is useful to generate test patterns of data writes. When the data is read back later, the read-back data can be checked for conformity to the test pattern, thereby to check whether the data was correctly stored, or correctly manipulated, by the system under test … (¶ 0010-0012)]; and
upon determination that an integrity factor for the suspect storage device indicates a likelihood of a fault in performance of the suspect storage device, direct the suspect storage device to transfer data to others of the plurality of peer mesh storage devices before the suspect storage device fails [In some embodiments, the virtualized addresses are stored with sufficient redundancy. A continuous monitoring system correlates hardware and software status and the hardware identifiers. This allows detection and prediction of failures due to faulty components and manufacturing details. The monitoring system also enables the proactive transfer of authorities and entities away from impacted devices before failure occurs by removing the component from the critical path in some embodiments (¶ 0105); To further provide reliability and performance for storage systems, stored devices, storage services, and so forth, embodiments may provide predictive device wear and failure detection to proactively identify and notify about potential or imminent device failures … For example, the RNN may be trained to predict a rate of device degradation based on the operating parameters and to estimate when the disk has more than a threshold likelihood of reaching critical wear levels (e.g., with a moderate to high risk of failure). Based on the estimates of the RNN, the system may proactively provide notifications or alerts when a device may be nearing failure and may need to be replaced, and provides opportunities for migrating data or avoiding writes of new data to failing or degraded storage devices or parts of storage devices, thus reducing the potential for data loss or more expensive recoveries resulting from such failures … (¶ 0157-0158)].
Regarding claim 1, Denysyev teaches detecting and correcting read/write errors/failures [… In reverse, when data is read, the authority 168 for the segment ID containing the data is located as described above. The host CPU 156 of the storage node 150 on which the non-volatile solid state storage 152 and corresponding authority 168 reside requests the data from the non-volatile solid state storage and corresponding storage nodes pointed to by the authority. In some embodiments the data is read from flash storage as a data stripe. The host CPU 156 of storage node 150 then reassembles the read data, correcting any errors (if present) according to the appropriate erasure coding scheme, and forwards the reassembled data to the network … (¶ 0094); To further provide reliability and performance for storage systems, stored devices, storage services, and so forth, embodiments may provide predictive device wear and failure detection to proactively identify and notify about potential or imminent device failures … In some embodiments, a failure prediction model may be trained by gathering statistically sampled field data over time that may be relevant for detecting early or eventual storage component failure (e.g., die, package, interconnect, capacitor, device controller, or other partial or total storage device failures) and providing the sampled data along with component failure data to an RNN, to train or develop a statistical failure prediction model for early or eventual component failures. For example, the initial failure prediction model may estimate, model, or predict early failures and then collect additional component failure data to further update and train the failure prediction model. As more failure data and corresponding sampled field data is collected and provided to update the model, the more accurate the failure prediction model may become (¶ 0157-0158)], but does not expressly teach determining whether a data write of a given data to and a data read of the given data from the suspect storage device are accurate.
However, Wilkinson specifically teaches determining whether a data write of a given data to and a data read of the given data from the suspect storage device are accurate [Since data stored on data storage systems may become errored, it is necessary to provide utility procedures and associated hardware, firmware and software, to check the integrity of the stored data. Procedures which perform test actions on a storage system before it is put in production use are sometimes called exercisers, in particular disk exercisers, since they put the volumes of a storage system through their paces to test whether they are operating reliably—typically by writing test data to the storage volumes and then reading the data back again to check that the read-back data is identical to the test data that was written … When testing storage systems and related devices such as storage controllers, it is useful to generate test patterns of data writes. When the data is read back later, the read-back data can be checked for conformity to the test pattern, thereby to check whether the data was correctly stored, or correctly manipulated, by the system under test … (¶ 0010-0012)].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to determine whether a data write of a given data to and a data read of the given data from the suspect storage device are accurate, as specifically demonstrated by Wilkinson, and to incorporate it into the existing scheme disclosed by Denysyev, because Wilkinson teaches that doing so allows validation of successful write operations and confirmation that the data has been stored correctly [A method and apparatus for validating operation of a data volume on a storage medium. A data integrity component is provided which writes data blocks to the volume in a sequence, each data block storing a sequence number and also write status information specifying the sequence numbers of those preceding data blocks in the stream which are still being written to the volume at the time the data block is generated. Data validation is performed by reading back the stored data blocks from the volume and checking that the sequence numbers stored in them match those that should be present based on the sequence numbers stored in the write status information of the last-written data block found on the volume (abstract)].
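By way of illustration only, and not as code taken from Wilkinson or from the claimed invention, the write-then-read-back check discussed above can be sketched in Python roughly as follows (the function name, file path, and parameters are hypothetical):

```python
import os
import secrets

def write_read_back_check(path, block_size=4096, blocks=8):
    """Illustrative 'exerciser'-style check: write a random test pattern to a
    file on the device under test, read it back, and report whether the
    read-back data matches the data that was written."""
    pattern = [secrets.token_bytes(block_size) for _ in range(blocks)]
    with open(path, "wb") as f:
        for block in pattern:
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # push the writes out to the device
    with open(path, "rb") as f:
        for expected in pattern:
            if f.read(block_size) != expected:
                return False  # read-back data differs from what was written
    return True

# Hypothetical usage against a mount point for the suspect storage device:
# accurate = write_read_back_check("/mnt/suspect_device/exerciser.tmp")
```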
As to claim 7, Denysyev in view of Wilkinson teaches The mesh storage device of claim 1, wherein storage integrity manager further causes the computing processor to: notify the user device of the likelihood of the fault in the suspect storage device [Denysyev -- To further provide reliability and performance for storage systems, stored devices, storage services, and so forth, embodiments may provide predictive device wear and failure detection to proactively identify and notify about potential or imminent device failures … For example, the RNN may be trained to predict a rate of device degradation based on the operating parameters and to estimate when the disk has more than a threshold likelihood of reaching critical wear levels (e.g., with a moderate to high risk of failure). Based on the estimates of the RNN, the system may proactively provide notifications or alerts when a device may be nearing failure and may need to be replaced, and provides opportunities for migrating data or avoiding writes of new data to failing or degraded storage devices or parts of storage devices, thus reducing the potential for data loss or more expensive recoveries resulting from such failures … (¶ 0157-0158)].
6. Claims 8 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Denysyev in view of Wilkinson, and further in view of Tamao et al. (US Patent Application Publication 2024/0184453, hereinafter Tamao).
As to claim 8, it recites substantially the same limitations as in claim 1, and is rejected for the same reasons set forth in the analysis of claim 1. Refer to “As to claim 1” presented earlier in this Office Action for details.
Further, regarding claim 8, Denysyev in view of Wilkinson does not expressly teach determining whether a number of write cycles performed by the suspect storage device indicate that the suspect storage device is approaching a rated number of write cycles.
However, Tamao specifically teaches determining whether a number of write cycles performed by the suspect storage device indicate that the suspect storage device is approaching a rated number of write cycles [An information processing system includes processing circuitry, a first storage, and a second storage. The processing circuitry stores the same specific data in the first storage and the second storage. The processing circuitry compares the specific data stored in the first storage with the specific data stored in the second storage. The processing circuitry determines that one of the first storage and the second storage has reached end-of-life on condition that the specific data stored in the first storage and the specific data stored in the second storage do not agree with each other (abstract); Japanese Laid-Open Patent Publication No. 2016-170604 discloses an information processing system that includes an execution unit and a storage. The execution unit calculates the write count to the storage. The execution unit determines whether the calculated write count exceeds a predetermined upper limit value. When the calculated write count exceeds the upper limit value, the execution unit determines that the storage has reached end-of-life … (¶ 000-00032)].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to determine whether a number of write cycles performed by the suspect storage device indicate that the suspect storage device is approaching a rated number of write cycles, as specifically demonstrated by Tamao, and to incorporate it into the existing scheme disclosed by Denysyev in view of Wilkinson, so that storage devices approaching their end of life may be replaced before failures occur, thereby preventing loss of data.
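For illustration only (not code from Tamao or from the claimed invention), the write-cycle comparison described above amounts to a simple threshold test; the margin value below is an assumed parameter:

```python
def approaching_rated_cycles(write_cycles, rated_cycles, margin=0.9):
    """Return True when a device's write-cycle count nears its rated limit.

    `margin` is an assumed fraction of the rated endurance at which the
    device is flagged (0.9 flags the device at 90% of its rated cycles).
    """
    return write_cycles >= margin * rated_cycles

# Example: a drive rated for 3,000 program/erase cycles that has seen 2,850:
# approaching_rated_cycles(2850, 3000)  -> True
```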
As to claim 14, it recites substantially the same limitations as in claim 7, and is rejected for the same reasons set forth in the analysis of claim 7. Refer to “As to claim 7” presented earlier in this Office Action for details.
As to claim 15, it recites substantially the same limitations as in claim 8, and is rejected for the same reasons set forth in the analysis of claim 8. Refer to “As to claim 8” presented earlier in this Office Action for details.
7. Claims 2-3 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Denysyev in view of Wilkinson, and further in view of Pandian et al. (US Patent 10,481,800, hereinafter Pandian).
As to claim 2, Denysyev in view of Wilkinson teaches The mesh storage device of claim 1, wherein the application programs further include a redundant storage manager which, when executed by the computing processor, and in response to the request to retrieve the second data file [Denysyev -- … Storage tasks may include writing data received from the computing devices 164 to storage array 102, erasing data from storage array 102, retrieving data from storage array 102 and providing data to computing devices 164, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as Redundant Array of Independent Drives (RAID) or RAID-like data redundancy operations, compressing data, encrypting data, and so forth … It may be noted that control information may be so large that parts of the control information may be stored in multiple locations, that the control information may be stored in multiple locations for purposes of redundancy, for example, or that the control information may otherwise be distributed across multiple memory blocks in the storage drive 171 (¶ 0043-0047); Every piece of data, and every piece of metadata, has redundancy in the system in some embodiments … (¶ 0092)].
Regarding claim 2, Denysyev in view of Wilkinson does not teach determine read and write loads on the data storage drive and also on the peer mesh storage devices; identify one mesh storage device among the mesh storage drive and the peer mesh storage devices with a least data transfer demand and a copy of the second data file stored thereon as a selected mesh storage device; and instruct the selected mesh storage device to retrieve and transmit the second data file over the local area network to the user device.
However, Pandian specifically teaches the cited limitations. Specifically, Pandian teaches determine read and write loads on the data storage drive and also on the peer mesh storage devices [… As an example, the data server is capable of reading file data from the NAS file system and forwarding the file data to the mover … The mover can write the file data to the tape device during a backup operation. For restore operations, the mover can read file data from the tape device, transfer the file data to the data server, which in turn can write the file data to the NAS file system … In one aspect, the other node can be selected based on an analysis of the state of the cluster, for example, based on load data determined by a load balancing module 108 … (c4 L48 to c5 L22); … For example, NAS node 1 (102.sub.1) can select NAS node 2 (102.sub.2) to run the NDMP session if determined that NAS node 2 (102.sub.2) has more available resources and/or less load than that of NAS node 1 (102.sub.1) … (c5 L60 to c6 L16); As an example, the load balancing module 108 can determine load factors for all (or some) nodes within the cluster and generate a ranked list of the nodes based on their load factors … (c8 L56 to c9 L5)];
identify one mesh storage device among the mesh storage drive and the peer mesh storage devices with a least data transfer demand and a copy of the second data file stored thereon as a selected mesh storage device [as shown in figures 1-6; As an example, the load balancing module 108 can determine load factors for all (or some) nodes within the cluster and generate a ranked list of the nodes based on their load factors … (c8 L56 to c9 L5); At 908, a ranked list of NAS nodes of the cluster can be generated based on the respective total load factors of the NAS nodes. Further, at 910, a selection of a NAS node for running a redirected NDMP session can be determined based on the ranked list. As an example, a NAS node with the lowest load and/or having greatest amount of available resources can be selected (c13 L9-15)] and
instruct the selected mesh storage device to retrieve and transmit the second data file over the local area network to the user device [as shown in figures 1-6; Similar to the NDMP session described above with respect to system 200, the DMA 104 can initiate a NDMP session with a node in the cluster, for example, NAS node 1 (102.sub.1). In one aspect, on receiving the NDMP session request from the DMA 104, the NAS node 1 (102.sub.1) can analyze a state of (e.g., load) one or more nodes in the cluster, for example, based on data received from a load balancing module 108, and can select a node (e.g., NAS node 2 (102.sub.2)) in the cluster to which the NDMP session can be redirected … In one aspect, the data server 202 can read file data and transfer the file data to the mover 304 locally/internally (as shown at 306). Further, the mover 304 can write the file data to a tape device 308. It is noted that the subject specification is not limited to a tape device 308, but most any target device, such as, but not limited to, virtual tape libraries and/or cloud devices, can be utilized (c7 L5-34)].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to select a storage device/node for file transfer based on its load factor, as specifically demonstrated by Pandian, and to incorporate it into the existing scheme disclosed by Denysyev in view of Wilkinson, because Pandian teaches that doing so allows balancing the NDMP load across the NAS cluster and improving resource utilization across the cluster [A network attached storage (NAS) cluster can run with a set of heterogeneous hardware nodes, where not all nodes in the cluster have access to the same target connectivities. In one aspect, network data management protocol (NDMP) sessions can be redirected from a first node of the NAS cluster to a second node of the NAS cluster to balance NDMP load across the NAS cluster and improve resource utilization across cluster. Further, the NDMP load can be actively monitored to balance it continuously to increase resource utilization efficiency (abstract)].
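By way of illustration only, and not as code from Pandian or from the claimed invention, load-based selection of the node that will serve a file can be sketched as follows; the combined load factor here is a simplification of Pandian's f(load), and all names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cpu_load: float                      # fraction of CPU in use (0.0-1.0)
    mem_load: float                      # fraction of memory in use (0.0-1.0)
    files: set = field(default_factory=set)

def select_node(nodes, file_id):
    """Pick the node with the smallest combined load that holds a copy of file_id."""
    candidates = [n for n in nodes if file_id in n.files]
    if not candidates:
        return None
    # Simplified combined load; Pandian's f(load) also weighs NDMP-session count.
    return min(candidates, key=lambda n: n.cpu_load + n.mem_load)

# nodes = [Node("A", 0.7, 0.5, {"file2"}), Node("B", 0.2, 0.3, {"file2"})]
# select_node(nodes, "file2").name  -> "B"
```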
As to claim 3, Denysyev in view of Wilkinson & Pandian teaches The mesh storage device of claim 1, wherein the application programs further include a redundant storage manager which, when executed by the computing processor, and in response to receipt of the first data file [Denysyev -- … Storage tasks may include writing data received from the computing devices 164 to storage array 102, erasing data from storage array 102, retrieving data from storage array 102 and providing data to computing devices 164, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as Redundant Array of Independent Drives (RAID) or RAID-like data redundancy operations, compressing data, encrypting data, and so forth … It may be noted that control information may be so large that parts of the control information may be stored in multiple locations, that the control information may be stored in multiple locations for purposes of redundancy, for example, or that the control information may otherwise be distributed across multiple memory blocks in the storage drive 171 (¶ 0043-0047); Every piece of data, and every piece of metadata, has redundancy in the system in some embodiments … (¶ 0092)], causes the computing processor to: determine read and write loads on the data storage drive and also on the peer mesh storage devices [Pandian -- … As an example, the data server is capable of reading file data from the NAS file system and forwarding the file data to the mover … The mover can write the file data to the tape device during a backup operation. For restore operations, the mover can read file data from the tape device, transfer the file data to the data server, which in turn can write the file data to the NAS file system … In one aspect, the other node can be selected based on an analysis of the state of the cluster, for example, based on load data determined by a load balancing module 108 … (c4 L48 to c5 L22); … For example, NAS node 1 (102.sub.1) can select NAS node 2 (102.sub.2) to run the NDMP session if determined that NAS node 2 (102.sub.2) has more available resources and/or less load than that of NAS node 1 (102.sub.1) … (c5 L60 to c6 L16); As an example, the load balancing module 108 can determine load factors for all (or some) nodes within the cluster and generate a ranked list of the nodes based on their load factors … (c8 L56 to c9 L5)]; identify one mesh storage device among the mesh storage device and the peer mesh storage devices with a least data transfer demand and adequate storage space to store the first data file as a selected mesh storage device [Pandian -- as shown in figures 1-6; As an example, the load balancing module 108 can determine load factors for all (or some) nodes within the cluster and generate a ranked list of the nodes based on their load factors … (c8 L56 to c9 L5); Further, the load factor determination component 506 can calculate a memory load factor, f(mem) for the node as follows: … Where, the free_memory+inactive_memory+cache_memory represents the total available memory of the node and the max_memory represents the total memory of the node. It is noted that the load factors, f(cpu) and/or f(mem), can be determined at regular intervals for each node of the cluster (c9 L18-31); At 908, a ranked list of NAS nodes of the cluster can be generated based on the respective total load factors of the NAS nodes. Further, at 910, a selection of a NAS node for running a redirected NDMP session can be determined based on the ranked list. 
As an example, a NAS node with the lowest load and/or having greatest amount of available resources can be selected (c13 L9-15)]; route the first data file over the local area network to the selected mesh storage device; and instruct the selected mesh storage device to store the first data file thereon [Denysyev -- Authority owners have the exclusive right to modify entities, to migrate entities from one non-volatile solid state storage unit to another non-volatile solid state storage unit, and to add and remove copies of entities. This allows for maintaining the redundancy of the underlying data. When an authority owner fails, is going to be decommissioned, or is overloaded, the authority is transferred to a new storage node. Transient failures make it non-trivial to ensure that all non-faulty machines agree upon the new authority location. The ambiguity that arises due to transient failures can be achieved automatically by a consensus protocol such as Paxos, hot-warm failover schemes, via manual intervention by a remote system administrator, or by a local hardware administrator (such as by physically removing the failed machine from the cluster, or pressing a button on the failed machine). In some embodiments, a consensus protocol is used, and failover is automatic. If too many failures or replication events occur in too short a time period, the system goes into a self-preservation mode and halts replication and data movement activities until an administrator intervenes in accordance with some embodiments (¶ 0102)].
As to claim 6, Denysyev in view of Wilkinson & Pandian teaches The mesh storage device of claim 1, wherein the application programs further include a storage capacity manager which, when executed by the computing processor, and in response to the request to retrieve the second data file, causes the computing processor to: evaluate data storage levels for the data storage drive and the peer mesh storage devices [Pandian -- Further, the load factor determination component 506 can calculate a memory load factor, f(mem) for the node as follows: … Where, the free_memory+inactive_memory+cache_memory represents the total available memory of the node and the max_memory represents the total memory of the node. It is noted that the load factors, f(cpu) and/or f(mem), can be determined at regular intervals for each node of the cluster … Where, #ndmp_sessions represents the number of NDMP sessions running on the node and the max_ndmp represents the maximum number of NDMP sessions that can run on the node (e.g., determined based on comparing the amount of memory required by one NDMP session and the total amount of memory available on the node). Based on (1), (2), and (3), the load factor determination component 506 can determine the node load factor, f(load) (c9 L18-49); At 908, a ranked list of NAS nodes of the cluster can be generated based on the respective total load factors of the NAS nodes. Further, at 910, a selection of a NAS node for running a redirected NDMP session can be determined based on the ranked list. As an example, a NAS node with the lowest load and/or having greatest amount of available resources can be selected (c13 L9-15)]; and upon determination that a data storage level for the data storage drive or one of the peer mesh storage devices exceeds a storage threshold level, direct the data storage drive or the one of the peer mesh storage devices exceeding the storage threshold level to transfer data to others of the peer mesh storage devices, and notify the user device of the exceeding of the storage threshold level by the data storage drive or the one of the peer mesh storage devices [Pandian -- as shown in figures 1-6; … As an example, a maximum threshold of total memory to be utilized by NDMP sessions per node can be a percentage of the total memory per node. If the memory consumed exceeds the maximum threshold, the memory throttler component 606 can throttle the NDMP backup session. For example, the memory throttler component 606 can determine a percentage by which the memory threshold is being exceeded and forward this percentage to the NDMP backup session as part of a throttle request … (c10 L53-67)].
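For illustration only (not code from any cited reference or from the claimed invention), the storage-level threshold behavior recited in claim 6 can be sketched as follows; the threshold value and device records are hypothetical:

```python
def rebalance_if_full(devices, threshold=0.85):
    """For each device whose fill level exceeds the threshold, pick the least
    full peer as a migration target and record a (source, target) pair that
    would also be reported to the user device."""
    transfers = []
    for d in devices:
        if d["used"] / d["capacity"] > threshold:
            target = min(
                (x for x in devices if x is not d),
                key=lambda x: x["used"] / x["capacity"],
            )
            transfers.append((d["name"], target["name"]))
    return transfers

# devices = [{"name": "d1", "used": 900, "capacity": 1000},
#            {"name": "d2", "used": 200, "capacity": 1000}]
# rebalance_if_full(devices)  -> [("d1", "d2")]
```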
8. Claims 9-10, 13, 16-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Denysyev in view of Wilkinson & Tamao, and further in view of Pandian et al. (US Patent 10,481,800, hereinafter Pandian).
As to claim 9, it recites substantially the same limitations as in claim 2, and is rejected for the same reasons set forth in the analysis of claim 2. Refer to “As to claim 2” presented earlier in this Office Action for details.
As to claim 10, it recites substantially the same limitations as in claim 3, and is rejected for the same reasons set forth in the analysis of claim 3. Refer to “As to claim 3” presented earlier in this Office Action for details.
As to claim 13, it recites substantially the same limitations as in claim 6, and is rejected for the same reasons set forth in the analysis of claim 6. Refer to “As to claim 6” presented earlier in this Office Action for details.
As to claim 16, it recites substantially the same limitations as in claim 2, and is rejected for the same reasons set forth in the analysis of claim 2. Refer to “As to claim 2” presented earlier in this Office Action for details.
As to claim 17, it recites substantially the same limitations as in claim 3, and is rejected for the same reasons set forth in the analysis of claim 3. Refer to “As to claim 3” presented earlier in this Office Action for details.
As to claim 20, it recites substantially the same limitations as in claim 6, and is rejected for the same reasons set forth in the analysis of claim 6. Refer to “As to claim 6” presented earlier in this Office Action for details.
9. Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Denysyev in view of Wilkinson, and further in view of Chew (US Patent 9,626,376).
As to claim 4, Denysyev in view of Wilkinson teaches partition the first data file into subparts and distribute the subparts between the mesh storage devices registered on the local area network for storage of one or more respective subparts on respective ones of the data storage drive and the peer mesh storage devices [Denysyev – A storage system can consist of two storage array controllers that share a set of drives for failover purposes, or it could consist of a single storage array controller that provides a storage service that utilizes multiple drives, or it could consist of a distributed network of storage array controllers each with some number of drives or some amount of Flash storage where the storage array controllers in the network collaborate to provide a complete storage service and collaborate on various aspects of a storage service including storage allocation and garbage collection (¶ 0065); The embodiments depicted with reference to FIGS. 2A-G illustrate a storage cluster that stores user data, such as user data originating from one or more user or client systems or other sources external to the storage cluster. The storage cluster distributes user data across storage nodes housed within a chassis, or across multiple chassis, using erasure coding and redundant copies of metadata … (¶ 0083)].
Regarding claim 4, Denysyev in view of Wilkinson does not teach determining whether the peer mesh storage devices and the mesh storage device constitute mesh storage devices registered on the local area network.
However, Chew specifically teaches determining whether the peer mesh storage devices and the mesh storage device constitute mesh storage devices registered on the local area network [In an embodiment, the wireless adapter device 300 is configured to work with only authorized storage devices. That is, the wireless adapter device 300 may authenticate a storage device before providing direct and/or wireless access to the storage device. In an embodiment, the authentication may ensure that only a storage device allowed to have direct or wireless access is connected to the wireless adapter device 300. In an embodiment, the authentication may ensure that only compatible storage devices are connected to the wireless adapter device 300. In an embodiment, the wireless access module 310 may provide such authentication of storage devices (c7 L46-57)]; and if not, deny storage of the first data file on the data storage drive and on any of the peer mesh storage devices [In an embodiment, the wireless adapter device 300 is configured to work with only authorized storage devices. That is, the wireless adapter device 300 may authenticate a storage device before providing direct and/or wireless access to the storage device. In an embodiment, the authentication may ensure that only a storage device allowed to have direct or wireless access is connected to the wireless adapter device 300. In an embodiment, the authentication may ensure that only compatible storage devices are connected to the wireless adapter device 300. In an embodiment, the wireless access module 310 may provide such authentication of storage devices (c7 L46-57)].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to determine whether the peer mesh storage devices and the mesh storage device constitute mesh storage devices registered on the local area network, as specifically demonstrated by Chew, and to incorporate it into the existing scheme disclosed by Denysyev in view of Wilkinson, in order to ensure the security and integrity of the data stored in the storage devices.
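By way of illustration only, and not as code from Chew or from the claimed invention, the registration check before storage amounts to a simple membership test against a registry of authorized devices (the registry contents and names below are hypothetical):

```python
REGISTERED_DEVICES = {"mesh-node-01", "mesh-node-02"}  # hypothetical registry

def store_if_registered(device_id, data, registry=REGISTERED_DEVICES):
    """Store data only on a device registered on the local area network;
    otherwise deny the storage operation."""
    if device_id not in registry:
        return False       # unregistered device: storage denied
    # ... the write to the registered device would occur here ...
    return True

# store_if_registered("mesh-node-01", b"payload")  -> True
# store_if_registered("rogue-node", b"payload")    -> False
```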
As to claim 5, Denysyev in view of Wilkinson & Chew teaches The mesh storage device of claim 1, wherein the application programs further include an encryption manager which, when executed by the computing processor, and in response to the request to retrieve the second data file, causes the computing processor to: determine whether the peer mesh storage devices and the mesh storage device constitute mesh storage devices registered on the local area network [Chew -- In an embodiment, the wireless adapter device 300 is configured to work with only authorized storage devices. That is, the wireless adapter device 300 may authenticate a storage device before providing direct and/or wireless access to the storage device. In an embodiment, the authentication may ensure that only a storage device allowed to have direct or wireless access is connected to the wireless adapter device 300. In an embodiment, the authentication may ensure that only compatible storage devices are connected to the wireless adapter device 300. In an embodiment, the wireless access module 310 may provide such authentication of storage devices (c7 L46-57)]; and if so, coordinate with the mesh storage devices registered on the local area network to locate a plurality of subparts of the second data file, wherein one or more respective subparts are stored on respective ones of the data storage drive and the peer mesh storage devices [Denysyev – as shown in figures 1, and 2A-2G; A storage system can consist of two storage array controllers that share a set of drives for failover purposes, or it could consist of a single storage array controller that provides a storage service that utilizes multiple drives, or it could consist of a distributed network of storage array controllers each with some number of drives or some amount of Flash storage where the storage array controllers in the network collaborate to provide a complete storage service and collaborate on various aspects of a storage service including storage allocation and garbage collection (¶ 0065); The embodiments depicted with reference to FIGS. 2A-G illustrate a storage cluster that stores user data, such as user data originating from one or more user or client systems or other sources external to the storage cluster. The storage cluster distributes user data across storage nodes housed within a chassis, or across multiple chassis, using erasure coding and redundant copies of metadata … (¶ 0083)]; retrieve the plurality of subparts of the second data file from the data storage drive and the peer mesh storage devices; reconstruct the second data file from the retrieved subparts to form a reconstructed second data file, and transmit the reconstructed second data file to the user device [Denysyev – as shown in figures 1, and 2A-2G; A storage system can consist of two storage array controllers that share a set of drives for failover purposes, or it could consist of a single storage array controller that provides a storage service that utilizes multiple drives, or it could consist of a distributed network of storage array controllers each with some number of drives or some amount of Flash storage where the storage array controllers in the network collaborate to provide a complete storage service and collaborate on various aspects of a storage service including storage allocation and garbage collection (¶ 0065); The embodiments depicted with reference to FIGS.
2A-G illustrate a storage cluster that stores user data, such as user data originating from one or more user or client systems or other sources external to the storage cluster. The storage cluster distributes user data across storage nodes housed within a chassis, or across multiple chassis, using erasure coding and redundant copies of metadata … (¶ 0083)]; or if not, deny retrieval of the subparts of the second data file from the data storage drive and the peer mesh storage devices [Chew -- In an embodiment, the wireless adapter device 300 is configured to work with only authorized storage devices. That is, the wireless adapter device 300 may authenticate a storage device before providing direct and/or wireless access to the storage device. In an embodiment, the authentication may ensure that only a storage device allowed to have direct or wireless access is connected to the wireless adapter device 300. In an embodiment, the authentication may ensure that only compatible storage devices are connected to the wireless adapter device 300. In an embodiment, the wireless access module 310 may provide such authentication of storage devices (c7 L46-57)].
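For illustration only (not code from Denysyev or from the claimed invention), partitioning a file into subparts and reconstructing it from those subparts can be sketched as follows; erasure coding and distribution of the parts to peer devices are omitted:

```python
def partition(data, n):
    """Split a file's bytes into n roughly equal, ordered subparts."""
    size = -(-len(data) // n)                     # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def reconstruct(subparts):
    """Reassemble the original file from its ordered subparts."""
    return b"".join(subparts)

# parts = partition(b"example file contents", 4)  # each part could go to a peer
# reconstruct(parts) == b"example file contents"  -> True
```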
10. Claims 11-12 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Denysyev in view of Wilkinson & Tamao, and further in view of Chew (US Patent 9,626,376).
As to claim 11, it recites substantially the same limitations as in claim 4, and is rejected for the same reasons set forth in the analysis of claim 4. Refer to “As to claim 4” presented earlier in this Office Action for details.
As to claim 12, it recites substantially the same limitations as in claim 5, and is rejected for the same reasons set forth in the analysis of claim 5. Refer to “As to claim 5” presented earlier in this Office Action for details.
As to claim 18, it recites substantially the same limitations as in claim 4, and is rejected for the same reasons set forth in the analysis of claim 4. Refer to “As to claim 4” presented earlier in this Office Action for details.
As to claim 19, it recites substantially the same limitations as in claim 5, and is rejected for the same reasons set forth in the analysis of claim 5. Refer to “As to claim 5” presented earlier in this Office Action for details.
Conclusion
11. Claims 1-20 are rejected as explained above.
12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHENG JEN TSAI whose telephone number is 571-272-4244. The examiner can normally be reached on Monday-Friday, 9-6.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald Bragdon can be reached on 571-272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/SHENG JEN TSAI/Primary Examiner, Art Unit 2139