Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/04/2025 has been entered.
The instant application, having Application No. 17/512,180 and filed on 10/27/2021, has claims 1-4, 6, 8-11, 13, 15-18, and 20 pending; there are 3 independent claims and 12 dependent claims, all of which are ready for examination by the examiner.
Response to Arguments
This Office Action is in response to applicant’s communication filed on December 4, 2025 in response to PTO Office Action dated September 4, 2025. The Applicant’s remarks and amendments to the claims and/or specification were considered with the results that follow.
Claim Rejections - 35 USC § 103
35 USC § 103 Rejection of claims 1-4, 6, 8-11, 13, 15-18, and 20
Applicant's arguments filed on 12/04/2025 with respect to the claims 1-4, 6, 8-11, 13, 15-18, and 20 have been fully considered but are moot because the arguments do not apply to any of the references being used in the current rejection.
CLAIM INTERPRETATION
The following is a quotation of 35 U.S.C. 112(f):
(f) ELEMENT IN CLAIM FOR A COMBINATION. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: component in claims 15-18 and 20.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6, 8-11, 13, 15-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ives et al. (US Patent 10740005) in view of Mathew et al. (US Patent 9015123), and further in view of Hartman et al. (US PGPUB 20150088882) and Lacapra et al. (US PGPUB 20180011874).
As per claim 1:
Ives teaches:
“A method for configuring client application nodes in a distributed system, the method comprising” (Col 1 Lines 49-58 (the invention is a data storage system comprising: one or more physical storage devices; a plurality of data nodes exposing a plurality of portions of a plurality of data entities and performs a method comprising))
“detecting, by a client application node, a file system, wherein the file system is not mounted on the client application node” (Col 7 Lines 63-67, Col 8 Lines 12-15 and Fig. 3 (The system includes client computer systems (client nodes), data storage system, network, a metadata server (metadata node), and data servers where the data storage system may include one or more data storage systems and the file systems and associated services provided by data servers may comprise a distributed file system))
“in response to the detecting, determining a metadata node on which the file system is mounted” (Col 8 Lines 23-24 (the metadata server (node) may be used in connection with providing metadata))
“and wherein the management node deploys the metadata node and the plurality of client application nodes” (Col 8 Lines 33-48 and Col 16 Lines 1-15 (the data storage system may include a data resource manager that handles distribution and allocation of data storage system resources for use by the virtual machines and virtualized environment, the distributed file system (DFS) stores file system metadata may be stored on a dedicated metadata (MD) server and a client may communicate with different ones of the servers depending on what particular portion of the file content the client wants)).
Ives does not EXPLICITLY disclose: sending a request to the metadata node to obtain a scale out volume record associated with the file system; wherein the scale out volume record comprises: a scale out volume universally unique identifier (UUID), a protection type to be implemented by members of the scale out volume, a number of the members of the scale out volume, a scale out volume type, a scale out volume size; and a mapped Redundant Array of Independent Disks (RAID) group (MRG) map for the members comprising a set of MRGs; wherein the MRG map includes a MRG UUID of each MRG in the members; wherein each MRG in the set of MRGs corresponds to a subset of storage devices from a plurality of storage devices; and wherein data and metadata associated with the file system is stored across the subset of storage devices; generating a mapping between the plurality of storage devices and the scale out volume using the scale out volume record received from the metadata node and a topology file; wherein the topology file comprises information about the plurality of storage devices to enable a plurality of client application nodes to issue input/output (IO) requests directly to the plurality of storage devices; wherein the plurality of client application nodes comprises the client application node; wherein the topology file is obtained from a management node; and completing, after the mapping, mounting of the file system, wherein after the mounting is completed an application in an application container executing on the client application node may interact with the file system; wherein the interaction of the client application node with the file system comprises; issuing, by an operating system of the client application node, data requests using a portable operating system interface (POSIX) to a file system container of the client application node; wherein the file system container provides a mechanism 
transparent to the client application node usable to translate the data requests to the IO requests; wherein the client application node is unaware of the POSIX and the translation of the data requests; and wherein the file system container utilizes the mapping between the plurality of storage devices and the scale out volume to process the POSIX data requests received from the operating system and translate the POSIX data requests into the IO requests for the file system; thereby enabling the client application node to interact with the file system as if the file system were mounted to the client application node; and providing, by the file system container, responses to the data requests using the POSIX.
However, in an analogous art, Mathew teaches:
“sending a request to the metadata node to obtain a scale out volume record associated with the file system” (Col 14 Lines 30-45 (an inode is a data structure, which is used to store information, such as metadata, about a data container and path names of data objects in the server system are stored in association with a namespace))
“wherein the scale out volume record comprises: a scale out volume universally unique identifier (UUID), a protection type to be implemented by members of the scale out volume” (Col 14 Lines 30-45 and Col 15 Lines 39-41 (the metadata contained in an inode may include data information consisting of the server system stores the global object ID of the data object directly within the directory entry of the data object))
“a protection type to be implemented by members of the scale out volume, a number of the members of the scale out volume, a scale out volume type, a scale out volume size” (Col 20 Lines 56-66 (the main purpose of an inode is to store metadata about a particular data file, including a pointer to the tree structure of the data file, the size of the data file, the number of data blocks in the data file, the link count (number of references to that data file in the dataset), permissions that are associated with the data file and may also include other metadata))
“and a mapped Redundant Array of Independent Disks (RAID) group (MRG) map for the members comprising a set of MRGs” (Col 8 Lines 10-16 (the storage of information on storage devices can be implemented as one or more storage volumes that include a collection of physical storage disks cooperating to define an overall logical arrangement of volume block number (VBN) space on the volume(s) which can be organized as a RAID group))
“wherein the MRG map includes a MRG UUID of each MRG in the members” (Col 14 Lines 55-64 (the pointer like an inode number directly maps the path name to an inode associated with the data object and the object locator of the data object could be the global object ID of the data object))
“wherein each MRG in the set of MRGs corresponds to a subset of storage devices from a plurality of storage devices” (Col 5 Lines 15-25 (each storage device can be implemented as an individual disk, multiple disks like a RAID group or any other suitable mass storage device(s)))
“and wherein data and metadata associated with the file system is stored across the subset of storage devices” (Col 9 Lines 56-63 (the storage manager indexes into a metadata file to access an appropriate entry and retrieve a logical virtual block number (VBN), then passes a message structure including the logical VBN to the RAID system and the logical VBN is mapped to a disk identifier and disk block number (DBN)))
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Mathew and apply them to the teachings of Ives for the method “sending a request to the metadata node to obtain a scale out volume record associated with the file system; wherein the scale out volume record comprises: a scale out volume universally unique identifier (UUID), a protection type to be implemented by members of the scale out volume, a number of the members of the scale out volume, a scale out volume type, a scale out volume size; and a mapped Redundant Array of Independent Disks (RAID) group (MRG) map for the members comprising a set of MRGs; wherein the MRG map includes a MRG UUID of each MRG in the members; wherein each MRG in the set of MRGs corresponds to a subset of storage devices from a plurality of storage devices; and wherein data and metadata associated with the file system is stored across the subset of storage devices”. One would be motivated as the data storage nodes are used to store the data associated with each of the files and directories present in the file system of the clustered storage system; this separation (i.e., the separation of the file system of the clustered storage system from the data associated with the files and directories of the file system) results in any content modification to the data of the files and directories stored in the data storage nodes happening independent of the file system of the clustered storage system (Mathew, Col 2 Lines 23-31).
Ives and Mathew do not EXPLICITLY disclose: generating a mapping between the plurality of storage devices and the scale out volume using the scale out volume record received from the metadata node and a topology file; wherein the topology file comprises information about the plurality of storage devices to enable a plurality of client application nodes to issue input/output (IO) requests directly to the plurality of storage devices; wherein the plurality of client application nodes comprises the client application node; wherein the topology file is obtained from a management node; and completing, after the mapping, mounting of the file system, wherein after the mounting is completed an application in an application container executing on the client application node may interact with the file system; wherein the interaction of the client application node with the file system comprises; issuing, by an operating system of the client application node, data requests using a portable operating system interface (POSIX) to a file system container of the client application node; wherein the file system container provides a mechanism transparent to the client application node usable to translate the data requests to the IO requests; wherein the client application node is unaware of the POSIX and the translation of the data requests; and wherein the file system container utilizes the mapping between the plurality of storage devices and the scale out volume to process the POSIX data requests received from the operating system and translate the POSIX data requests into the IO requests for the file system; thereby enabling the client application node to interact with the file system as if the file system were mounted to the client application node; and providing, by the file system container, responses to the data requests using the POSIX.
However, in an analogous art, Hartman teaches:
“generating a mapping between a plurality of storage devices and the scale out volume using the scale out volume record received from the metadata node and a topology file” (Paragraph [0101] and Fig. 21 (a node receiving an access to a mapping file and a specified offset from a client would generate a response containing the location data which specifies the locations in the cluster where the corresponding chunk of the file is stored))
“wherein the plurality of client application nodes comprises the client application node” (Paragraph [0008] and Paragraph [0042] and Fig. 1 (a cluster is configured so that tasks may be issued from any node in the cluster to any other or all other nodes in the cluster and the cluster includes clients (client application nodes), storage nodes where the clients (client application nodes) and storage nodes are communicatively coupled using a communications network))
“wherein the topology file is obtained from a management node” (Paragraph [0067] (the management apparatus (management node) includes an interface for controlling and monitoring the storage device and the management device functions (topology file) can be provided to the clients))
“and completing, after the mapping, mounting of the file system, wherein after the mounting is completed an application in an application container executing on the client application node may interact with the file system” (Paragraph [0094] and Paragraph [0102] (in a file system provided across the cluster, a mount point provides access to a mounted file system and a client communicating with the cluster receives the response and is able to send tasks associated with the chunk stored thereon directly to the nodes specified in the response)).
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Hartman and apply them to the teachings of Ives and Mathew for the method “generating a mapping between a plurality of storage devices and the scale out volume using the scale out volume record received from the metadata node and a topology file; wherein the plurality of client application nodes comprises the client application node; wherein the topology file is obtained from a management node; and completing, after the mapping, mounting of the file system, wherein after the mounting is completed an application in an application container executing on the client application node may interact with the file system”. One would be motivated as a distributed storage system includes nodes which store metadata, object identifiers, and location information associated with a plurality of files in a file system and a plurality of nodes that store the plurality of files where a node in the cluster is capable of receiving a request for a file in the file system from a client and determining the location of the file within the cluster (Hartman, Paragraph [0041]).
Ives, Mathew and Hartman do not EXPLICITLY disclose: wherein the topology file comprises information about the plurality of storage devices to enable a plurality of client application nodes to issue input/output (IO) requests directly to the plurality of storage devices; wherein the interaction of the client application node with the file system comprises; issuing, by an operating system of the client application node, data requests using a portable operating system interface (POSIX) to a file system container of the client application node; wherein the file system container provides a mechanism transparent to the client application node usable to translate the data requests to the IO requests; wherein the client application node is unaware of the POSIX and the translation of the data requests; and wherein the file system container utilizes the mapping between the plurality of storage devices and the scale out volume to process the POSIX data requests received from the operating system and translate the POSIX data requests into the IO requests for the file system; thereby enabling the client application node to interact with the file system as if the file system were mounted to the client application node; and providing, by the file system container, responses to the data requests using the POSIX.
However, in an analogous art, Lacapra teaches:
“wherein the topology file comprises information about the plurality of storage devices to enable a plurality of client application nodes to issue input/output (IO) requests directly to the plurality of storage devices” (Paragraph [0023] (storing an instantiation of the file in each of a plurality of storage providers; storing metadata for the file in a target storage provider selected based at least in part on the pathname using a predetermined mapping scheme where the metadata includes at least a list of the storage providers; sending a request by the client to the target storage provider; providing the list of the storage providers by the target storage provider to the client in response to the request and selecting one of the listed storage providers by the client using a predetermined selection scheme))
“wherein the interaction of the client application node with the file system comprises” (Paragraph [0086] (a client application interacts with the storage system to manipulate files))
“issuing, by an operating system of the client application node, data requests using a portable operating system interface (POSIX) to a file system container of the client application node” (Paragraph [0086] (the client filesystem is the point of contact between the client application and the rest of the storage system, the client filesystem is to receive filesystem requests from client application where the filesystem interface is a set of POSIX application programming interfaces (APIs)))
“wherein the file system container provides a mechanism transparent to the client application node usable to translate the data requests to the IO requests” (Paragraph [0015] and Paragraph [0079] (the client application running on storage client generates file operation requests, the filesystem manages file storage and interacts with both the application and the servers via a network (translate the data requests to the IO requests), mounting a storage area network (SAN) and that the remote devices are mounted using a proprietary filesystem, such that the core operating system is unaware that the file data are stored remotely (provides a mechanism transparent to the client application node)))
“wherein the client application node is unaware of the POSIX and the translation of the data requests” (Paragraph [0086] (the client filesystem is the point of contact between the client application and the rest of the storage system, the client filesystem is to receive filesystem requests from the client application and respond with file data or operation results, where the inner workings of the client filesystem are generally opaque to the client application and the filesystem interface is a set of POSIX application programming interfaces (APIs) (the client application node is unaware of the POSIX and the translation of the data requests)))
“and wherein the file system container utilizes the mapping between the plurality of storage devices and the scale out volume to process the POSIX data requests received from the operating system and translate the POSIX data requests into the IO requests for the file system” (Paragraph [0023] and Paragraph [0091] (providing access by a client to a file in a storage system, storing metadata for the file in a target storage provider selected based at least in part on the pathname using a predetermined mapping scheme, providing the list of the storage providers by the target storage provider to the client in response to the request and communicating with the selected storage provider by the client in order to access the file instantiation stored in the selected storage provider where a node may use a POSIX API to request that the local operating system perform a filesystem transaction in response to a client request))
“thereby enabling the client application node to interact with the file system as if the file system were mounted to the client application node” (Paragraph [0015] and Paragraph [0079] (the client application running on storage client generates file operation requests, the filesystem manages file storage and interacts with both the application and the servers via a network, mounting a storage area network (SAN) and that the remote devices are mounted, such that the core operating system is unaware that the file data are stored remotely (the file system were mounted to the client application node)))
“and providing, by the file system container, responses to the data requests using the POSIX” (Paragraph [0086] (the client filesystem is to respond with file data or operation results, the client application may communicate with filesystem using a specified interface that the latter implements and where the filesystem interface is a set of POSIX application programming interfaces (APIs))).
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Lacapra and apply them to the teachings of Ives, Mathew and Hartman for the method “wherein the topology file comprises information about the plurality of storage devices to enable a plurality of client application nodes to issue input/output (IO) requests directly to the plurality of storage devices; wherein the interaction of the client application node with the file system comprises; issuing, by an operating system of the client application node, data requests using a portable operating system interface (POSIX) to a file system container of the client application node; wherein the file system container provides a mechanism transparent to the client application node usable to translate the data requests to the IO requests; wherein the client application node is unaware of the POSIX and the translation of the data requests; and wherein the file system container utilizes the mapping between the plurality of storage devices and the scale out volume to process the POSIX data requests received from the operating system and translate the POSIX data requests into the IO requests for the file system; thereby enabling the client application node to interact with the file system as if the file system were mounted to the client application node; and providing, by the file system container, responses to the data requests using the POSIX”. One would be motivated as according to the principles of portability and isolation, it is advantageous that the filesystem on a storage client be unaware of the number of physical storage servers and yet in order to provide service efficiency, a storage client may contact all of the physical storage servers controlled by a storage provider with a single network message (Lacapra, Paragraph [0139]).
As per claim 2:
Ives, Mathew, Hartman and Lacapra teach the method as specified in the parent claim 1 above.
Hartman further teaches:
“prior to detecting the file system” (Paragraph [0004] (in distributed file systems, methods for locating data within the file system))
“locally mounting a second file system, wherein the second file system is remotely mounted on the management node” (Paragraph [0040] (users may use industry standard protocols without modification to mount file systems (second file system), access files within the cluster from nodes, and perform other tasks on and/or in the cluster))
“detecting, after locally mounting the second file system, namespace information” (Paragraph [0040] (the cluster provides a global namespace allowing users to see the entire file system regardless of the node used for access to the file system))
“and mounting, using the namespace information, a namespace on the client application node, wherein the namespace comprises file system information for the file system, wherein the file system information specifies the metadata node” (Paragraph [0038] and Paragraph [0045] (one or more nodes, which may be implemented as servers, are responsible for handling the namespace, metadata, and location information of files where the nodes are responsible for access to files in the file system and the namespace includes a hierarchical tree-based file path and naming scheme common in most file systems)).
As per claim 3:
Ives, Mathew, Hartman and Lacapra teach the method as specified in the parent claim 1 above.
Hartman further teaches:
“wherein the mapping between the plurality of storage devices and the scale out volume is generated by a file system client executing in a file system container on the client application node, wherein the file system container is separate from the application container” (Paragraph [0101] and Paragraph [0104] (a client requests a node in the cluster to access a mapping file corresponding to a particular file and the client specifies an offset using a remote file protocol used to access the particular file and a node receiving an access to a mapping file and a specified offset from a client would generate a response containing the location data which specifies the locations in the cluster where the corresponding chunk of the file is stored)).
As per claim 4:
Ives, Mathew, Hartman and Lacapra teach the method as specified in the parent claim 1 above.
Ives further teaches:
“receiving a notification, via a second file system, in response to file system information being stored in the second file system, wherein the second file system is located on the management node accessible to the client application node” (Col 8 Lines 12-22 (each of the data servers may be used in connection with processing received client requests, the file systems and associated services provided by data servers may comprise a distributed file system or distributed object system and the data servers may collectively serve as a front end to the entire distributed object system comprising multiple objects or distributed file system comprising multiple file systems))
“wherein the file system information specifies the metadata node on which the file system is mounted” (Col 8 Lines 23-24 (the metadata server (node) may be used in connection with providing metadata)).
As per claim 6:
Ives, Mathew, Hartman and Lacapra teach the method as specified in the parent claim 1 above.
Hartman further teaches:
“wherein the mapping is further generated using MRG records associated with the set of MRGs” (Paragraph [0047] and Paragraph [0059] (the node may reference a special mapping file to determine the object identifier and the location of the file in the storage area, where the storage area provided to the server system includes a RAID group)).
As per claim 8:
Ives teaches:
“A non-transitory computer readable medium comprising instructions which, when executed by a processor, enables the processor to perform a method, the method comprising” (Col 2 Lines 57-59 (a computer readable medium comprising code stored thereon, that when executed, performs a method for processing requests comprising))
“detecting, by a client application node, a file system, wherein the file system is not mounted on the client application node” (Col 7 Lines 63-67, Col 8 Lines 12-15 and Fig. 3 (The system includes client computer systems (client nodes), data storage system, network, a metadata server (metadata node), and data servers where the data storage system may include one or more data storage systems and the file systems and associated services provided by data servers may comprise a distributed file system))
“in response to the detecting, determining a metadata node on which the file system is mounted” (Col 8 Lines 23-24 (the metadata server (node) may be used in connection with providing metadata))
“and wherein the management node deploys the metadata node and the plurality of client application nodes” (Col 8 Lines 33-48 and Col 16 Lines 1-15 (the data storage system may include a data resource manager that handles distribution and allocation of data storage system resources for use by the virtual machines and virtualized environment, the distributed file system (DFS) stores file system metadata may be stored on a dedicated metadata (MD) server and a client may communicate with different ones of the servers depending on what particular portion of the file content the client wants)).
Ives does not EXPLICITLY disclose: sending a request to the metadata node to obtain a scale out volume record associated with the file system; wherein the scale out volume record comprises: a scale out volume universally unique identifier (UUID), a protection type to be implemented by members of the scale out volume; a protection type to be implemented by members of the scale out volume, a number of the members of the scale out volume, a scale out volume type, a scale out volume size; and a mapped Redundant Array of Independent Disks (RAID) group (MRG) map for the members comprising a set of MRGs; wherein the MRG map includes a MRG UUID of each MRG in the members; wherein each MRG in the set of MRGs corresponds to a subset of storage devices from a plurality of storage devices; and wherein data and metadata associated with the file system is stored across the subset of storage devices; generating a mapping between the plurality of storage devices and the scale out volume using the scale out volume record received from the metadata node and a topology file; wherein the topology file comprises information about the plurality of storage devices to enable a plurality of client application nodes to issue input/output (IO) requests directly to the plurality of storage devices; wherein the plurality of client application nodes comprises the client application node; wherein the topology file is obtained from a management node; and completing, after the mapping, mounting of the file system, wherein after the mounting is completed an application in an application container executing on the client application node may interact with the file system; wherein the interaction of the client application node with the file system comprises; issuing, by an operating system of the client application node, data requests using a portable operating system interface (POSIX) to a file system container of the client application node; wherein the file system container provides a mechanism transparent to the client application node usable to translate the data requests to the IO requests; wherein the client application node is unaware of the POSIX and the translation of the data requests; and wherein the file system container utilizes the mapping between the plurality of storage devices and the scale out volume to process the POSIX data requests received from the operating system and translate the POSIX data requests into the IO requests for the file system; thereby enabling the client application node to interact with the file system as if the file system were mounted to the client application node; and providing, by the file system container, responses to the data requests using the POSIX.
However, in an analogous art, Mathew teaches:
“sending a request to the metadata node to obtain a scale out volume record associated with the file system” (Col 14 Lines 30-45 (an inode is a data structure, which is used to store information, such as metadata, about a data container and path names of data objects in the server system are stored in association with a namespace))
“wherein the scale out volume record comprises: a scale out volume universally unique identifier (UUID), a protection type to be implemented by members of the scale out volume” (Col 14 Lines 30-45 and Col 15 Lines 39-41 (the metadata contained in an inode may include data information, and the server system stores the global object ID of the data object directly within the directory entry of the data object))
“a protection type to be implemented by members of the scale out volume, a number of the members of the scale out volume, a scale out volume type, a scale out volume size” (Col 20 Lines 56-66 (the main purpose of an inode is to store metadata about a particular data file, including a pointer to the tree structure of the data file, the size of the data file, the number of data blocks in the data file, the link count (number of references to that data file in the dataset), permissions that are associated with the data file and may also include other metadata))
“and a mapped Redundant Array of Independent Disks (RAID) group (MRG) map for the members comprising a set of MRGs” (Col 8 Lines 10-16 (the storage of information on storage devices can be implemented as one or more storage volumes that include a collection of physical storage disks cooperating to define an overall logical arrangement of volume block number (VBN) space on the volume(s) which can be organized as a RAID group))
“wherein the MRG map includes a MRG UUID of each MRG in the members” (Col 14 Lines 55-64 (the pointer like an inode number directly maps the path name to an inode associated with the data object and the object locator of the data object could be the global object ID of the data object))
“wherein each MRG in the set of MRGs corresponds to a subset of storage devices from a plurality of storage devices” (Col 5 Lines 15-25 (each storage device can be implemented as an individual disk, multiple disks like a RAID group or any other suitable mass storage device(s)))
“and wherein data and metadata associated with the file system is stored across the subset of storage devices” (Col 9 Lines 56-63 (the storage manager indexes into a metadata file to access an appropriate entry and retrieve a logical virtual block number (VBN), then passes a message structure including the logical VBN to the RAID system and the logical VBN is mapped to a disk identifier and disk block number (DBN)))
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Mathew and apply them to the teachings of Ives for a non-transitory computer readable medium “sending a request to the metadata node to obtain a scale out volume record associated with the file system; wherein the scale out volume record comprises: a scale out volume universally unique identifier (UUID), a protection type to be implemented by members of the scale out volume; a protection type to be implemented by members of the scale out volume, a number of the members of the scale out volume, a scale out volume type, a scale out volume size; and a mapped Redundant Array of Independent Disks (RAID) group (MRG) map for the members comprising a set of MRGs; wherein the MRG map includes a MRG UUID of each MRG in the members; wherein each MRG in the set of MRGs corresponds to a subset of storage devices from a plurality of storage devices; and wherein data and metadata associated with the file system is stored across the subset of storage devices”. One would be motivated because the data storage nodes are used to store the data associated with each of the files and directories present in the file system of the clustered storage system; this separation (i.e., the separation of the file system of the clustered storage system from the data associated with the files and directories of the file system) allows any content modification to the data of the files and directories stored in the data storage nodes to happen independently of the file system of the clustered storage system (Mathew, Col 2 Lines 23-31).
Ives and Mathew do not EXPLICITLY disclose: generating a mapping between the plurality of storage devices and the scale out volume using the scale out volume record received from the metadata node and a topology file; wherein the topology file comprises information about the plurality of storage devices to enable a plurality of client application nodes to issue input/output (IO) requests directly to the plurality of storage devices; wherein the plurality of client application nodes comprises the client application node; wherein the topology file is obtained from a management node; and completing, after the mapping, mounting of the file system, wherein after the mounting is completed an application in an application container executing on the client application node may interact with the file system; wherein the interaction of the client application node with the file system comprises; issuing, by an operating system of the client application node, data requests using a portable operating system interface (POSIX) to a file system container of the client application node; wherein the file system container provides a mechanism transparent to the client application node usable to translate the data requests to the IO requests; wherein the client application node is unaware of the POSIX and the translation of the data requests; and wherein the file system container utilizes the mapping between the plurality of storage devices and the scale out volume to process the POSIX data requests received from the operating system and translate the POSIX data requests into the IO requests for the file system; thereby enabling the client application node to interact with the file system as if the file system were mounted to the client application node; and providing, by the file system container, responses to the data requests using the POSIX.
However, in an analogous art, Hartman teaches:
“generating a mapping between a plurality of storage devices and the scale out volume using the scale out volume record received from the metadata node and a topology file” (Paragraph [0101] and Fig. 21 (a node receiving an access to a mapping file and a specified offset from a client would generate a response containing the location data which specifies the locations in the cluster where the corresponding chunk of the file is stored))
“wherein the plurality of client application nodes comprises the client application node” (Paragraph [0008] and Paragraph [0042] and Fig. 1 (a cluster is configured so that tasks may be issued from any node in the cluster to any other or all other nodes in the cluster and the cluster includes clients (client application nodes), storage nodes where the clients (client application nodes) and storage nodes are communicatively coupled using a communications network))
“wherein the topology file is obtained from a management node” (Paragraph [0067] (the management apparatus (management node) includes an interface for controlling and monitoring the storage device and the management device functions (topology file) can be provided to the clients))
“and completing, after the mapping, mounting of the file system, wherein after the mounting is completed an application in an application container executing on the client application node may interact with the file system” (Paragraph [0094] and Paragraph [0102] (in a file system provided across the cluster, a mount point provides access to a mounted file system and a client communicating with the cluster receives the response and is able to send tasks associated with the chunk stored thereon directly to the nodes specified in the response)).
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Hartman and apply them to the teachings of Ives and Mathew for a non-transitory computer readable medium “generating a mapping between a plurality of storage devices and the scale out volume using the scale out volume record received from the metadata node and a topology file; wherein the plurality of client application nodes comprises the client application node; wherein the topology file is obtained from a management node; and completing, after the mapping, mounting of the file system, wherein after the mounting is completed an application in an application container executing on the client application node may interact with the file system”. One would be motivated because a distributed storage system includes nodes which store metadata, object identifiers, and location information associated with a plurality of files in a file system and a plurality of nodes that store the plurality of files, where a node in the cluster is capable of receiving a request for a file in the file system from a client and determining the location of the file within the cluster (Hartman, Paragraph [0041]).
Ives, Mathew and Hartman do not EXPLICITLY disclose: wherein the topology file comprises information about the plurality of storage devices to enable a plurality of client application nodes to issue input/output (IO) requests directly to the plurality of storage devices; wherein the interaction of the client application node with the file system comprises; issuing, by an operating system of the client application node, data requests using a portable operating system interface (POSIX) to a file system container of the client application node; wherein the file system container provides a mechanism transparent to the client application node usable to translate the data requests to the IO requests; wherein the client application node is unaware of the POSIX and the translation of the data requests; and wherein the file system container utilizes the mapping between the plurality of storage devices and the scale out volume to process the POSIX data requests received from the operating system and translate the POSIX data requests into the IO requests for the file system; thereby enabling the client application node to interact with the file system as if the file system were mounted to the client application node; and providing, by the file system container, responses to the data requests using the POSIX.
However, in an analogous art, Lacapra teaches:
“wherein the topology file comprises information about the plurality of storage devices to enable a plurality of client application nodes to issue input/output (IO) requests directly to the plurality of storage devices” (Paragraph [0023] (storing an instantiation of the file in each of a plurality of storage providers; storing metadata for the file in a target storage provider selected based at least in part on the pathname using a predetermined mapping scheme, where the metadata includes at least a list of the storage providers; sending a request by the client to the target storage provider; providing the list of the storage providers by the target storage provider to the client in response to the request and selecting one of the listed storage providers by the client using a predetermined selection scheme))
“wherein the interaction of the client application node with the file system comprises” (Paragraph [0086] (a client application interacts with the storage system to manipulate files))
“issuing, by an operating system of the client application node, data requests using a portable operating system interface (POSIX) to a file system container of the client application node” (Paragraph [0086] (the client filesystem is the point of contact between the client application and the rest of the storage system, the client filesystem is to receive filesystem requests from client application where the filesystem interface is a set of POSIX application programming interfaces (APIs)))
“wherein the file system container provides a mechanism transparent to the client application node usable to translate the data requests to the IO requests” (Paragraph [0015] and Paragraph [0079] (the client application running on storage client generates file operation requests, the filesystem manages file storage and interacts with both the application and the servers via a network (translate the data requests to the IO requests), mounting a storage area network (SAN) and that the remote devices are mounted using a proprietary filesystem, such that the core operating system is unaware that the file data are stored remotely (provides a mechanism transparent to the client application node)))
“wherein the client application node is unaware of the POSIX and the translation of the data requests” (Paragraph [0086] (the client filesystem is the point of contact between the client application and the rest of the storage system, the client filesystem is to receive filesystem requests from client application and respond with file data or operation results with the inner workings of client filesystem are generally opaque to client application and the filesystem interface is a set of POSIX application programming interfaces (APIs) (the client application node is unaware of the POSIX and the translation of the data requests)))
“and wherein the file system container utilizes the mapping between the plurality of storage devices and the scale out volume to process the POSIX data requests received from the operating system and translate the POSIX data requests into the IO requests for the file system” (Paragraph [0023] and Paragraph [0091] (providing access by a client to a file in a storage system, storing metadata for the file in a target storage provider selected based at least in part on the pathname using a predetermined mapping scheme, providing the list of the storage providers by the target storage provider to the client in response to the request and communicating with the selected storage provider by the client in order to access the file instantiation stored in the selected storage provider where a node may use a POSIX API to request that the local operating system perform a filesystem transaction in response to a client request))
“thereby enabling the client application node to interact with the file system as if the file system were mounted to the client application node” (Paragraph [0015] and Paragraph [0079] (the client application running on storage client generates file operation requests, the filesystem manages file storage and interacts with both the application and the servers via a network, mounting a storage area network (SAN) and that the remote devices are mounted, such that the core operating system is unaware that the file data are stored remotely (the file system were mounted to the client application node)))
“and providing, by the file system container, responses to the data requests using the POSIX” (Paragraph [0086] (the client filesystem is to respond with file data or operation results, the client application may communicate with filesystem using a specified interface that the latter implements and where the filesystem interface is a set of POSIX application programming interfaces (APIs))).
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Lacapra and apply them to the teachings of Ives, Mathew and Hartman for a non-transitory computer readable medium “wherein the topology file comprises information about the plurality of storage devices to enable a plurality of client application nodes to issue input/output (IO) requests directly to the plurality of storage devices; wherein the interaction of the client application node with the file system comprises; issuing, by an operating system of the client application node, data requests using a portable operating system interface (POSIX) to a file system container of the client application node; wherein the file system container provides a mechanism transparent to the client application node usable to translate the data requests to the IO requests; wherein the client application node is unaware of the POSIX and the translation of the data requests; and wherein the file system container utilizes the mapping between the plurality of storage devices and the scale out volume to process the POSIX data requests received from the operating system and translate the POSIX data requests into the IO requests for the file system; thereby enabling the client application node to interact with the file system as if the file system were mounted to the client application node; and providing, by the file system container, responses to the data requests using the POSIX”. One would be motivated because, according to the principles of portability and isolation, it is advantageous that the filesystem on a storage client be unaware of the number of physical storage servers, and yet, in order to provide service efficiency, a storage client may contact all of the physical storage servers controlled by a storage provider with a single network message (Lacapra, Paragraph [0139]).
As per claim 9, the claim is rejected based upon the same rationale given for the parent claim 8 and claim 2 above.
As per claim 10, the claim is rejected based upon the same rationale given for the parent claim 8 and claim 3 above.
As per claim 11, the claim is rejected based upon the same rationale given for the parent claim 8 and claim 4 above.
As per claim 13, the claim is rejected based upon the same rationale given for the parent claim 8 and claim 6 above.
As per claim 15:
Ives teaches:
“A node comprising” (Col 1 Lines 49-50 (a plurality of data nodes comprising))
“memory” (Col 1 Line 56 (memory comprising))
“a processor configured to execute instructions, wherein when the instructions are executed the node performs a method, the method comprising” (Col 1 Lines 56-62 (the code stored in memory, when executed, performs a method comprising: receiving a request from a client at a first of the set of at least two data nodes to perform an operation with respect to the first data portion; and processing the request comprising))
“detecting, by a client application node executing on the processor, a file system, wherein the file system is not mounted on the client application node” (Col 7 Lines 63-67, Col 8 Lines 12-15 and Fig. 3 (the system includes client computer systems (client nodes), data storage system, network, a metadata server (metadata node), and data servers where the data storage system may include one or more data storage systems and the file systems and associated services provided by data servers may comprise a distributed file system))
“in response to the detecting, determining a metadata node on which the file system is mounted” (Col 8 Lines 23-24 (the metadata server (node) may be used in connection with providing metadata))
“and wherein the management node deploys the metadata node and the plurality of client application nodes” (Col 8 Lines 33-48 and Col 16 Lines 1-15 (the data storage system may include a data resource manager that handles distribution and allocation of data storage system resources for use by the virtual machines and virtualized environment, the distributed file system (DFS) stores file system metadata may be stored on a dedicated metadata (MD) server and a client may communicate with different ones of the servers depending on what particular portion of the file content the client wants)).
Ives does not EXPLICITLY disclose: sending a request to the metadata node to obtain a scale out volume record associated with the file system; wherein the scale out volume record comprises: a scale out volume universally unique identifier (UUID), a protection type to be implemented by members of the scale out volume; a protection type to be implemented by members of the scale out volume, a number of the members of the scale out volume, a scale out volume type, a scale out volume size; and a mapped Redundant Array of Independent Disks (RAID) group (MRG) map for the members comprising a set of MRGs; wherein the MRG map includes a MRG UUID of each MRG in the members; wherein each MRG in the set of MRGs corresponds to a subset of storage devices from a plurality of storage devices; and wherein data and metadata associated with the file system is stored across the subset of storage devices; generating a mapping between the plurality of storage devices and the scale out volume using the scale out volume record received from the metadata node and a topology file; wherein the topology file comprises information about the plurality of storage devices to enable a plurality of client application nodes to issue input/output (IO) requests directly to the plurality of storage devices; wherein the plurality of client application nodes comprises the client application node; wherein the topology file is obtained from a management node; and completing, after the mapping, mounting of the file system, wherein after the mounting is completed an application in an application container executing on the client application node may interact with the file system; wherein the interaction of the client application node with the file system comprises; issuing, by an operating system of the client application node, data requests using a portable operating system interface (POSIX) to a file system container of the client application node; wherein the file system container provides a mechanism transparent to the client application node usable to translate the data requests to the IO requests; wherein the client application node is unaware of the POSIX and the translation of the data requests; and wherein the file system container utilizes the mapping between the plurality of storage devices and the scale out volume to process the POSIX data requests received from the operating system and translate the POSIX data requests into the IO requests for the file system; thereby enabling the client application node to interact with the file system as if the file system were mounted to the client application node; and providing, by the file system container, responses to the data requests using the POSIX.
However, in an analogous art, Mathew teaches:
“sending a request to the metadata node to obtain a scale out volume record associated with the file system” (Col 14 Lines 30-45 (an inode is a data structure, which is used to store information, such as metadata, about a data container and path names of data objects in the server system are stored in association with a namespace))
“wherein the scale out volume record comprises: a scale out volume universally unique identifier (UUID), a protection type to be implemented by members of the scale out volume” (Col 14 Lines 30-45 and Col 15 Lines 39-41 (the metadata contained in an inode may include data information, and the server system stores the global object ID of the data object directly within the directory entry of the data object))
“a protection type to be implemented by members of the scale out volume, a number of the members of the scale out volume, a scale out volume type, a scale out volume size” (Col 20 Lines 56-66 (the main purpose of an inode is to store metadata about a particular data file, including a pointer to the tree structure of the data file, the size of the data file, the number of data blocks in the data file, the link count (number of references to that data file in the dataset), permissions that are associated with the data file and may also include other metadata))
“and a mapped Redundant Array of Independent Disks (RAID) group (MRG) map for the members comprising a set of MRGs” (Col 8 Lines 10-16 (the storage of information on storage devices can be implemented as one or more storage volumes that include a collection of physical storage disks cooperating to define an overall logical arrangement of volume block number (VBN) space on the volume(s) which can be organized as a RAID group))
“wherein the MRG map includes a MRG UUID of each MRG in the members” (Col 14 Lines 55-64 (the pointer like an inode number directly maps the path name to an inode associated with the data object and the object locator of the data object could be the global object ID of the data object))
“wherein each MRG in the set of MRGs corresponds to a subset of storage devices from a plurality of storage devices” (Col 5 Lines 15-25 (each storage device can be implemented as an individual disk, multiple disks like a RAID group or any other suitable mass storage device(s)))
“and wherein data and metadata associated with the file system is stored across the subset of storage devices” (Col 9 Lines 56-63 (the storage manager indexes into a metadata file to access an appropriate entry and retrieve a logical virtual block number (VBN), then passes a message structure including the logical VBN to the RAID system and the logical VBN is mapped to a disk identifier and disk block number (DBN)))
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Mathew and apply them to the teachings of Ives for the node “sending a request to the metadata node to obtain a scale out volume record associated with the file system; wherein the scale out volume record comprises: a scale out volume universally unique identifier (UUID), a protection type to be implemented by members of the scale out volume; a protection type to be implemented by members of the scale out volume, a number of the members of the scale out volume, a scale out volume type, a scale out volume size; and a mapped Redundant Array of Independent Disks (RAID) group (MRG) map for the members comprising a set of MRGs; wherein the MRG map includes a MRG UUID of each MRG in the members; wherein each MRG in the set of MRGs corresponds to a subset of storage devices from a plurality of storage devices; and wherein data and metadata associated with the file system is stored across the subset of storage devices”. One would be motivated because the data storage nodes are used to store the data associated with each of the files and directories present in the file system of the clustered storage system; this separation (i.e., the separation of the file system of the clustered storage system from the data associated with the files and directories of the file system) allows any content modification to the data of the files and directories stored in the data storage nodes to happen independently of the file system of the clustered storage system (Mathew, Col 2 Lines 23-31).
Ives and Mathew do not EXPLICITLY disclose: generating a mapping between the plurality of storage devices and the scale out volume using the scale out volume record received from the metadata node and a topology file; wherein the topology file comprises information about the plurality of storage devices to enable a plurality of client application nodes to issue input/output (IO) requests directly to the plurality of storage devices; wherein the plurality of client application nodes comprises the client application node; wherein the topology file is obtained from a management node; and completing, after the mapping, mounting of the file system, wherein after the mounting is completed an application in an application container executing on the client application node may interact with the file system; wherein the interaction of the client application node with the file system comprises; issuing, by an operating system of the client application node, data requests using a portable operating system interface (POSIX) to a file system container of the client application node; wherein the file system container provides a mechanism transparent to the client application node usable to translate the data requests to the IO requests; wherein the client application node is unaware of the POSIX and the translation of the data requests; and wherein the file system container utilizes the mapping between the plurality of storage devices and the scale out volume to process the POSIX data requests received from the operating system and translate the POSIX data requests into the IO requests for the file system; thereby enabling the client application node to interact with the file system as if the file system were mounted to the client application node; and providing, by the file system container, responses to the data requests using the POSIX.
However, in an analogous art, Hartman teaches:
“generating a mapping between a plurality of storage devices and the scale out volume using the scale out volume record received from the metadata node and a topology file” (Paragraph [0101] and Fig. 21 (a node receiving an access to a mapping file and a specified offset from a client would generate a response containing the location data which specifies the locations in the cluster where the corresponding chunk of the file is stored))
“wherein the plurality of client application nodes comprises the client application node” (Paragraph [0008] and Paragraph [0042] and Fig. 1 (a cluster is configured so that tasks may be issued from any node in the cluster to any other or all other nodes in the cluster and the cluster includes clients (client application nodes), storage nodes where the clients (client application nodes) and storage nodes are communicatively coupled using a communications network))
"wherein the topology file is obtained from a management node" (Paragraph [0067] (the management apparatus (management node) includes an interface for controlling and monitoring the storage device, and the management device functions (topology file) can be provided to the clients))
“and completing, after the mapping, mounting of the file system, wherein after the mounting is completed an application in an application container executing on the client application node may interact with the file system” (Paragraph [0094] and Paragraph [0102] (in a file system provided across the cluster, a mount point provides access to a mounted file system and a client communicating with the cluster receives the response and is able to send tasks associated with the chunk stored thereon directly to the nodes specified in the response)).
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Hartman and apply them to the teachings of Ives and Mathew for the limitation "generating a mapping between a plurality of storage devices and the scale out volume using the scale out volume record received from the metadata node and a topology file; wherein the plurality of client application nodes comprises the client application node; wherein the topology file is obtained from a management node; and completing, after the mapping, mounting of the file system, wherein after the mounting is completed an application in an application container executing on the client application node may interact with the file system". One would be motivated to do so because a distributed storage system includes nodes which store metadata, object identifiers, and location information associated with a plurality of files in a file system and a plurality of nodes that store the plurality of files, where a node in the cluster is capable of receiving a request for a file in the file system from a client and determining the location of the file within the cluster (Hartman, Paragraph [0041]).
Ives, Mathew and Hartman do not EXPLICITLY disclose: wherein the topology file comprises information about the plurality of storage devices to enable a plurality of client application nodes to issue input/output (IO) requests directly to the plurality of storage devices; wherein the interaction of the client application node with the file system comprises: issuing, by an operating system of the client application node, data requests using a portable operating system interface (POSIX) to a file system container of the client application node; wherein the file system container provides a mechanism transparent to the client application node usable to translate the data requests to the IO requests; wherein the client application node is unaware of the POSIX and the translation of the data requests; and wherein the file system container utilizes the mapping between the plurality of storage devices and the scale out volume to process the POSIX data requests received from the operating system and translate the POSIX data requests into the IO requests for the file system; thereby enabling the client application node to interact with the file system as if the file system were mounted to the client application node; and providing, by the file system container, responses to the data requests using the POSIX.
However, in an analogous art, Lacapra teaches:
"wherein the topology file comprises information about the plurality of storage devices to enable a plurality of client application nodes to issue input/output (IO) requests directly to the plurality of storage devices" (Paragraph [0023] (storing an instantiation of the file in each of a plurality of storage providers; storing metadata for the file in a target storage provider selected based at least in part on the pathname using a predetermined mapping scheme, where the metadata includes at least a list of the storage providers; sending a request by the client to the target storage provider; providing the list of the storage providers by the target storage provider to the client in response to the request; and selecting one of the listed storage providers by the client using a predetermined selection scheme))
"wherein the interaction of the client application node with the file system comprises" (Paragraph [0086] (a client application interacts with the storage system to manipulate files))
“issuing, by an operating system of the client application node, data requests using a portable operating system interface (POSIX) to a file system container of the client application node” (Paragraph [0086] (the client filesystem is the point of contact between the client application and the rest of the storage system, the client filesystem is to receive filesystem requests from client application where the filesystem interface is a set of POSIX application programming interfaces (APIs)))
"wherein the file system container provides a mechanism transparent to the client application node usable to translate the data requests to the IO requests" (Paragraph [0015] and Paragraph [0079] (the client application running on the storage client generates file operation requests; the filesystem manages file storage and interacts with both the application and the servers via a network (translating the data requests to the IO requests); a storage area network (SAN) is mounted and the remote devices are mounted using a proprietary filesystem, such that the core operating system is unaware that the file data are stored remotely (providing a mechanism transparent to the client application node)))
"wherein the client application node is unaware of the POSIX and the translation of the data requests" (Paragraph [0086] (the client filesystem is the point of contact between the client application and the rest of the storage system; the client filesystem is to receive filesystem requests from the client application and respond with file data or operation results, where the inner workings of the client filesystem are generally opaque to the client application and the filesystem interface is a set of POSIX application programming interfaces (APIs) (the client application node is unaware of the POSIX and the translation of the data requests)))
“and wherein the file system container utilizes the mapping between the plurality of storage devices and the scale out volume to process the POSIX data requests received from the operating system and translate the POSIX data requests into the IO requests for the file system” (Paragraph [0023] and Paragraph [0091] (providing access by a client to a file in a storage system, storing metadata for the file in a target storage provider selected based at least in part on the pathname using a predetermined mapping scheme, providing the list of the storage providers by the target storage provider to the client in response to the request and communicating with the selected storage provider by the client in order to access the file instantiation stored in the selected storage provider where a node may use a POSIX API to request that the local operating system perform a filesystem transaction in response to a client request))
"thereby enabling the client application node to interact with the file system as if the file system were mounted to the client application node" (Paragraph [0015] and Paragraph [0079] (the client application running on the storage client generates file operation requests; the filesystem manages file storage and interacts with both the application and the servers via a network; a storage area network (SAN) is mounted and the remote devices are mounted, such that the core operating system is unaware that the file data are stored remotely (as if the file system were mounted to the client application node)))
“and providing, by the file system container, responses to the data requests using the POSIX” (Paragraph [0086] (the client filesystem is to respond with file data or operation results, the client application may communicate with filesystem using a specified interface that the latter implements and where the filesystem interface is a set of POSIX application programming interfaces (APIs))).
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Lacapra and apply them to the teachings of Ives, Mathew and Hartman for the limitation "wherein the topology file comprises information about the plurality of storage devices to enable a plurality of client application nodes to issue input/output (IO) requests directly to the plurality of storage devices; wherein the interaction of the client application node with the file system comprises: issuing, by an operating system of the client application node, data requests using a portable operating system interface (POSIX) to a file system container of the client application node; wherein the file system container provides a mechanism transparent to the client application node usable to translate the data requests to the IO requests; wherein the client application node is unaware of the POSIX and the translation of the data requests; and wherein the file system container utilizes the mapping between the plurality of storage devices and the scale out volume to process the POSIX data requests received from the operating system and translate the POSIX data requests into the IO requests for the file system; thereby enabling the client application node to interact with the file system as if the file system were mounted to the client application node; and providing, by the file system container, responses to the data requests using the POSIX". One would be motivated to do so because, according to the principles of portability and isolation, it is advantageous that the filesystem on a storage client be unaware of the number of physical storage servers, and yet, in order to provide service efficiency, a storage client may contact all of the physical storage servers controlled by a storage provider with a single network message (Lacapra, Paragraph [0139]).
As per claim 16, the claim is rejected based upon the same rationale given for parent claim 15 and claim 2 above.
As per claim 17, the claim is rejected based upon the same rationale given for parent claim 15 and claim 3 above.
As per claim 18, the claim is rejected based upon the same rationale given for parent claim 15 and claim 4 above.
As per claim 20, the claim is rejected based upon the same rationale given for parent claim 15 and claim 6 above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Maybee et al. (US PGPUB 20180196818) relates to systems and methods of data storage, and more particularly to providing layering of file system functionality on an object interface. In certain embodiments, file system functionality may be layered on cloud object interfaces to provide cloud-based storage while allowing for functionality expected from a legacy application. For instance, POSIX interfaces and semantics may be layered on cloud-based storage while providing access to data in a manner consistent with file-based access with data organization in name hierarchies. Various embodiments also may provide for memory mapping of data so that memory map changes are reflected in persistent storage while ensuring consistency between memory map changes and writes.
Cheung (US PGPUB 20160080492) provides a system and method for implementing a cloud service with private storage. The system includes a storage device, a cloud server, and a client device. The system is configured so that the private storage device designated/owned by a user of the cloud service initiates a communication with the cloud server to register as the user data storage location for a particular account of the cloud service, rather than using a "central public storage" location as in a traditional public cloud service.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAMAL K DEWAN whose telephone number is (571)272-2196. The examiner can normally be reached on Mon-Fri 8:00 AM – 5:00 PM (EST). If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, TONY MAHMOUDI can be reached on 571-272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center to authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Kamal K Dewan/
Examiner, Art Unit 2163
/TONY MAHMOUDI/Supervisory Patent Examiner, Art Unit 2163