DETAILED ACTION
This Office action is in response to the above-identified application filed on April 21, 2025. The application contains claims 1-20.
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
The present application is a continuation-in-part of Application No. 18/595,785, filed 03/05/2024, and claims foreign priority to 202441062849, filed 08/20/2024.
Information Disclosure Statement
The information disclosure statement (IDS) was submitted on July 16, 2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Examiner’s Note
The “one or more processing resources” included in the distributed storage system in claims 14-20 is interpreted to be hardware structure based on the following excerpt in paragraph [0163] of the specification:
“The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause one or more processing resources (e.g., one or more general-purpose or special-purpose processors) programmed with the instructions to perform the steps”
Accordingly, claims 14-20 are interpreted to fall in the category of patent eligible subject matter of a machine.
Specification
The abstract of the disclosure is objected to because it contains the legal term “embodiment”. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
Claim Objections
Claims 5, 12, and 18 are objected to because of the following informalities:
Claim 5, line 3: “a time” should read “at a time”
Claim 12, line 2: “a time” should read “at a time”
Claim 18, line 2: “a time” should read “at a time”
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3, 12, 13, and 16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.
Claim 3 recites the limitation "the first set of context data" in lines 1-2. There is insufficient antecedent basis for this limitation in the claim. Therefore, claim 3 is indefinite and rejected under 35 U.S.C. 112(b).
Claim 3 recites the limitation "the data block" in line 4. There is insufficient antecedent basis for this limitation in the claim. Therefore, claim 3 is indefinite and rejected under 35 U.S.C. 112(b).
Claim 12 recites the limitation "The method of claim 1" in line 1. There is insufficient antecedent basis for this limitation in the claim as claim 1 is not a method. Therefore, claim 12 is indefinite and rejected under 35 U.S.C. 112(b).
Claim 13 recites the limitation "The method of claim 1" in line 1. There is insufficient antecedent basis for this limitation in the claim as claim 1 is not a method. Therefore, claim 13 is indefinite and rejected under 35 U.S.C. 112(b).
Claim 16 recites the limitation "the first set of context data" in line 1. There is insufficient antecedent basis for this limitation in the claim. Therefore, claim 16 is indefinite and rejected under 35 U.S.C. 112(b).
Claim 16 recites the limitation "the data block" in line 5. There is insufficient antecedent basis for this limitation in the claim. Therefore, claim 16 is indefinite and rejected under 35 U.S.C. 112(b).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Dronamraju et al. (US 20220391359 A1), in view of Park et al. (US 8200630 B1).
With regard to claim 1,
Dronamraju teaches
a non-transitory machine readable medium storing instructions, which when executed by one or more processing resources of a distributed storage system (Abstract: a distributed storage management system. Fig. 1; [0160]: a processing system), cause the distributed storage system to:
receive, within a node of a plurality of nodes of a cluster representing the distributed storage system, a read request from a requestor to read data from a file (Fig. 9; [0110]: file system instance 906 may receive a read request from application 914 at data management subsystem 916 of file system instance 906, where the read request identifies a particular data block in a file. Fig. 4; [0076]: distributed file system 400 is implemented across cluster 402 of nodes 404, which include node 406, node 407, and node 408. Fig. 4; [0084]: receive a read request via application layer 422 mapped to file system volume 424 within node 406);
determine an expected set of context data associated with the read request including at least a first volume identifier (ID) associated with the file that is unique across the cluster … of a dynamically extensible file system (DEFS) in which the file is stored (Fig. 4; [0084]: the received read request may reference both metadata and data, wherein metadata reads on “context data associated with the read request”. Fig. 9; [0110]: this read request may include a volume identifier that identifies a volume, such as file system volume 918, from which data is to be read, wherein the fact that the volume identifier identifies a particular file system volume indicates its being “unique across the cluster”. “a DEFS” is taught by the combination of Fig. 1; [0042]; [0044]: a distributed file system over a cluster of nodes, [0039]: the distributed file system is capable of mapping multiple file system volumes to the underlying distributed block layer, and [0039]: the distributed file system enables scaling and load balancing and the distributed block layer is capable of automatically and independently growing to accommodate the needs of the file system volume, satisfying the definition for “DEFS” in [0050] of the specification); and
prior to returning a block of data of the requested data to the requestor, verify:
a second volume ID contained within a second set of context data associated with the block of data matches the first volume ID in the expected set of context data (Fig. 17; [0156]-[0157]: the read request including a volume identifier (operation 1708). The volume identifier is mapped to a file system volume managed by the data management subsystem (operation 1710). A data block within the file system volume is associated, by the data management subsystem, with a block identifier that corresponds to a data block of a logical block device in a distributed block layer of the distributed file system (operation 1712));
Dronamraju does not teach
determine a current epoch value of a dynamically extensible file system (DEFS) in which the file is stored; and
prior to returning a block of data of the requested data to the requestor, verify:
an epoch value contained within the second set of context data and the current epoch value of the expected set of context data satisfy a cluster-wide timeline check.
Park teaches
determine a current epoch value of a dynamically extensible file system (DEFS) in which the file is stored; and prior to returning a block of data of the requested data to the requestor, verify: an epoch value contained within the second set of context data and the current epoch value of the expected set of context data satisfy a cluster-wide timeline check (Fig. 9; Col. 9, lines 49-62: this process of determination may involve a request for metadata from the clustered computing network and a comparison of the latest modification times for the file in the requested volume and the file in the other volume, wherein a DEFS is taught by the primary reference as discussed above, and wherein the latest modification time corresponds to an “epoch value” per its definition in [0040] of the specification).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Dronamraju to incorporate the teachings of Park to determine a current epoch value of a dynamically extensible file system (DEFS) in which the file is stored and, prior to returning a block of data of the requested data to the requestor, verify that an epoch value contained within the second set of context data and the current epoch value of the expected set of context data satisfy a cluster-wide timeline check. Doing so would determine that the files are the same if the latest modification times are the same for both files, as taught by Park (Col. 9, lines 58-60).
With regard to claim 2,
As discussed in claim 1, Dronamraju and Park teach all the limitations therein.
Park further teaches
the non-transitory machine readable medium of claim 1, wherein the cluster-wide timeline check is satisfied when the epoch value is less than or equal to the current epoch value (Fig. 9; Col. 9, lines 49-62: if the latest modification times are the same for both files, the method can assume that the files are the same, i.e., “equal to”).
With regard to claim 3,
As discussed in claim 1, Dronamraju and Park teach all the limitations therein.
Dronamraju further teaches
the non-transitory machine readable medium of claim 1, wherein the first set of context data further includes a file block number (FBN) based on a starting offset specified by the read request, and wherein the instructions further cause the distributed storage system to verify the FBN matches a virtual volume block number (VVBN) of a volume used to reference the data block or an FBN contained within the second set of context data (Fig. 9; [0111]: process the read request and use information in a tree of indirect blocks corresponding to the file to locate a block number for the data block holding the requested data, wherein a block number corresponds to an “FBN” and matching the FBNs is inherently taught, and an FBN is assigned sequentially from 0, hence “based on a starting offset” is inherent).
With regard to claim 4,
As discussed in claim 1, Dronamraju and Park teach all the limitations therein.
Dronamraju further teaches
the non-transitory machine readable medium of claim 1, wherein the requestor is a client of the distributed storage system (Fig. 9; [0110]: file system instance 906 may receive a read request from application 914 at data management subsystem 916 of file system instance 906, wherein application 914 is “a client of the distributed storage system”).
With regard to claim 5,
As discussed in claim 4, Dronamraju and Park teach all the limitations therein.
Dronamraju and Park further teach
the non-transitory machine readable medium of claim 4, wherein a volume containing the file was hosted by a different DEFS of the node or another node of the plurality of nodes a time at which the file was created (Dronamraju, Fig. 9; [0112]: the block identifier determines that the location of the one or more data blocks is on a node in distributed file system 900 other than node 907. The data that is read may then be sent to application 914 via data management subsystem 916) and a volume ID associated with the file was generated based on a combination of a cluster-wide, unique ID of the different DEFS and a volume counter value associated with the different DEFS (Park, Col. 1, lines 47-51: associate the data blocks of the file with a first volume identifier corresponding to the first volume, where the first volume identifier uniquely identifies a file via a combination of an inode number, generation number, and/or file system identifier (FSID), wherein FSID reads on a “unique ID” of the DEFS and generation number reads on “a volume counter value”).
With regard to claim 6,
As discussed in claim 4, Dronamraju and Park teach all the limitations therein.
Dronamraju and Park further teach
the non-transitory machine readable medium of claim 4, wherein a volume containing the file was hosted by the DEFS at a time at which the file was created (Dronamraju, Fig. 9; [0112]: block service 924 and storage manager 926 can retrieve the data to be read using the block identifier, i.e., “the file was hosted by the DEFS”) and a volume ID associated with the file was generated based on a combination of a cluster-wide, unique ID of the DEFS and a volume counter value associated with the DEFS (Park, Col. 1, lines 47-51: associate the data blocks of the file with a first volume identifier corresponding to the first volume, where the first volume identifier uniquely identifies a file via a combination of an inode number, generation number, and/or file system identifier (FSID), wherein FSID reads on a “unique ID” of the DEFS and generation number reads on “a volume counter value”).
With regard to claim 7,
As discussed in claim 1, Dronamraju and Park teach all the limitations therein.
Dronamraju further teaches
the non-transitory machine readable medium of claim 1, wherein the requestor is a subsystem or workflow of the distributed storage system and wherein the file comprises a metafile containing metadata used by the DEFS (Fig. 17; [0156]: this read request may be received from a client (e.g., a client node) or application, i.e., “a subsystem”. Fig. 1; [0050]: inodes are used to identify files and file attributes such as creation time, access permissions, size, and block location, etc., wherein inode reads on “a metafile containing metadata” about the file to be used by the DEFS).
With regard to claim 8,
As discussed in claim 7, Dronamraju and Park teach all the limitations therein.
Park further teaches
the non-transitory machine readable medium of claim 7, wherein during creation of the metafile, a volume ID was associated with metafile that was previously generated based on a combination of a reserved value and a fixed counter value selected based on a type of the metafile (Col. 1, lines 47-51: the first volume identifier uniquely identifies a file via a combination of an inode number, generation number, and/or file system identifier (FSID), wherein FSID reads on “a reserved value” and an inode number reads on “a fixed counter value selected based on a type of the metafile”).
With regard to claim 9,
Dronamraju teaches
a method (Abstract: a distributed storage management system) comprising:
receiving, by a dynamically extensible file system (DEFS) of a node of a plurality of nodes of a cluster representing a distributed storage system, a read request from a client to read data from a file contained within a volume hosted by the DEFS (Fig. 9; [0110]: file system instance 906 may receive a read request from application 914 at data management subsystem 916 of file system instance 906, where the read request identifies a particular data block in a file. Fig. 4; [0076]: distributed file system 400 is implemented across cluster 402 of nodes 404, which include node 406, node 407, and node 408. Fig. 4; [0084]: receive a read request via application layer 422 mapped to file system volume 424 within node 406. “DEFS” is taught by the combination of Fig. 1; [0042]; [0044]: a distributed file system over a cluster of nodes, [0039]: the distributed file system is capable of mapping multiple file system volumes to the underlying distributed block layer, and [0039]: the distributed file system enables scaling and load balancing and the distributed block layer is capable of automatically and independently growing to accommodate the needs of the file system volume, satisfying the definition for “DEFS” in [0050] of the specification);
determining a first set of context data associated with the read request including at least a first buffer tree identifier (bufftree ID) associated with the file that is unique across the cluster … (Fig. 17; [0159]; Fig. 6; [0099]: access the data in the data block using the block identifier identified in operation 1712 (operation 1714). The block identifier is stored within the filesystem volume's buffer tree in the data management subsystem and is directly mapped to the file system volume data, wherein the block identifier corresponds to the bufftree ID and the fact that the buffer tree identifier identifies the particular file system volume data indicates its being “unique across the cluster”); and
prior to returning a block of data of the requested data to the client, verifying:
a second bufftree ID contained within a second set of context data associated with the block of data matches the first bufftree ID (Fig. 17; [0156]-[0157]: the read request including a volume identifier (operation 1708). The volume identifier is mapped to a file system volume managed by the data management subsystem (operation 1710). A data block within the file system volume is associated, by the data management subsystem, with a block identifier that corresponds to a data block of a logical block device in a distributed block layer of the distributed file system (operation 1712));
Dronamraju does not teach
determining a current epoch value of the DEFS; and
prior to returning a block of data of the requested data to the client, verifying:
an epoch value contained within the second set of context data and the current epoch value satisfy a cluster-wide timeline check.
Park teaches
determining a current epoch value of the DEFS; and prior to returning a block of data of the requested data to the client, verifying: an epoch value contained within the second set of context data and the current epoch value satisfy a cluster-wide timeline check (Fig. 9; Col. 9, lines 49-62: this process of determination may involve a request for metadata from the clustered computing network and a comparison of the latest modification times for the file in the requested volume and the file in the other volume, wherein a DEFS is taught by the primary reference as discussed above, and wherein the latest modification time corresponds to an “epoch value” per its definition in [0040] of the specification).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Dronamraju to incorporate the teachings of Park to determine a current epoch value of the DEFS and, prior to returning a block of data of the requested data to the client, verify that an epoch value contained within the second set of context data and the current epoch value satisfy a cluster-wide timeline check. Doing so would determine that the files are the same if the latest modification times are the same for both files, as taught by Park (Col. 9, lines 58-60).
With regard to claim 10,
As discussed in claim 9, Dronamraju and Park teach all the limitations therein.
Park further teaches
the method of claim 9, wherein the cluster-wide timeline check is satisfied when the epoch value is less than the current epoch value (Fig. 9; Col. 9, lines 49-62: if the latest modification times are the same for both files, the method can assume that the files are the same. In view of this “equal to” teaching regarding the epoch value, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the comparison to “less than” to suit individual needs).
With regard to claim 11,
As discussed in claim 9, Dronamraju and Park teach all the limitations therein.
Dronamraju further teaches
the method of claim 9, wherein the first set of context data further includes a file block number (FBN) based on a starting offset specified by the read request, and wherein the method further comprises verifying the FBN matches an FBN contained within the second set of context data (Fig. 9; [0111]: process the read request and use information in a tree of indirect blocks corresponding to the file to locate a block number for the data block holding the requested data, wherein a block number corresponds to an “FBN” and matching the FBNs is inherently taught, and an FBN is assigned sequentially from 0, hence “based on a starting offset” is inherent).
With regard to claim 12,
As discussed in claim 1, Dronamraju and Park teach all the limitations therein.
Dronamraju and Park further teach
the method of claim 1, wherein a volume containing the file was hosted by a different DEFS of the node or another node of the plurality of nodes a time at which the file was created (Dronamraju, Fig. 9; [0112]: the block identifier determines that the location of the one or more data blocks is on a node in distributed file system 900 other than node 907. The data that is read may then be sent to application 914 via data management subsystem 916) and a bufftree ID associated with the file was generated based on a combination of an ID of the different DEFS and a volume counter value associated with the different DEFS (Park, Col. 1, lines 47-51: associate the data blocks of the file with a first volume identifier corresponding to the first volume, where the first volume identifier uniquely identifies a file via a combination of an inode number, generation number, and/or file system identifier (FSID), wherein FSID reads on a “unique ID” of the DEFS and generation number reads on “a volume counter value”).
With regard to claim 13,
As discussed in claim 1, Dronamraju and Park teach all the limitations therein.
Dronamraju and Park further teach
the method of claim 1, wherein a volume containing the file was hosted by the DEFS at a time at which the file was created (Dronamraju, Fig. 9; [0112]: block service 924 and storage manager 926 can retrieve the data to be read using the block identifier, i.e., “the file was hosted by the DEFS”) and a bufftree ID associated with the file was generated based on a combination of an ID of the DEFS and a volume counter value associated with the DEFS (Park, Col. 1, lines 47-51: associate the data blocks of the file with a first volume identifier corresponding to the first volume, where the first volume identifier uniquely identifies a file via a combination of an inode number, generation number, and/or file system identifier (FSID), wherein FSID reads on a “unique ID” of the DEFS and generation number reads on “a volume counter value”).
With regard to claim 14,
Dronamraju teaches
a distributed storage system (Abstract: a distributed storage management system) comprising:
one or more processing resources (Fig. 1; [0160]: a processing system); and instructions that when executed by the one or more processing resources cause the distributed storage system to:
receive, within a node of a plurality of nodes of a cluster representing the distributed storage system, a read request from a requestor to read data from a file (Fig. 9; [0110]: file system instance 906 may receive a read request from application 914 at data management subsystem 916 of file system instance 906, where the read request identifies a particular data block in a file. Fig. 4; [0076]: distributed file system 400 is implemented across cluster 402 of nodes 404, which include node 406, node 407, and node 408. Fig. 4; [0084]: receive a read request via application layer 422 mapped to file system volume 424 within node 406);
determine an expected set of context data associated with the read request including at least a first buffer tree identifier (bufftree ID) associated with the file that is unique across the cluster … of a dynamically extensible file system (DEFS) in which the file is stored (Fig. 17; [0159]; Fig. 6; [0099]: access the data in the data block using the block identifier identified in operation 1712 (operation 1714). The block identifier is stored within the file system volume's buffer tree in the data management subsystem and is directly mapped to the file system volume data, wherein the block identifier corresponds to the bufftree ID and the fact that the buffer tree identifier identifies the particular file system volume data indicates its being “unique across the cluster”. “a DEFS” is taught by the combination of Fig. 1; [0042]; [0044]: a distributed file system over a cluster of nodes, [0039]: the distributed file system is capable of mapping multiple file system volumes to the underlying distributed block layer, and [0039]: the distributed file system enables scaling and load balancing and the distributed block layer is capable of automatically and independently growing to accommodate the needs of the file system volume, satisfying the definition for “DEFS” in [0050] of the specification); and
prior to returning a block of data of the requested data to the requestor, verify:
a second bufftree ID contained within a second set of context data associated with the block of data matches the first bufftree ID in the expected set of context data (Fig. 17; [0156]-[0157]: the read request including a volume identifier (operation 1708). The volume identifier is mapped to a file system volume managed by the data management subsystem (operation 1710). A data block within the file system volume is associated, by the data management subsystem, with a block identifier that corresponds to a data block of a logical block device in a distributed block layer of the distributed file system (operation 1712));
Dronamraju does not teach
determine a current epoch value of a dynamically extensible file system (DEFS) in which the file is stored; and
prior to returning a block of data of the requested data to the requestor, verify:
an epoch value contained within the second set of context data and the current epoch value of the expected set of context data satisfy a cluster-wide timeline check.
Park teaches
determine a current epoch value of a dynamically extensible file system (DEFS) in which the file is stored; and prior to returning a block of data of the requested data to the requestor, verify: an epoch value contained within the second set of context data and the current epoch value of the expected set of context data satisfy a cluster-wide timeline check (Fig. 9; Col. 9, lines 49-62: this process of determination may involve a request for metadata from the clustered computing network and a comparison of the latest modification times for the file in the requested volume and the file in the other volume, wherein a DEFS is taught by the primary reference as discussed above, and wherein the latest modification time corresponds to an “epoch value” per its definition in [0040] of the specification).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Dronamraju to incorporate the teachings of Park to determine a current epoch value of a dynamically extensible file system (DEFS) in which the file is stored and, prior to returning a block of data of the requested data to the requestor, verify that an epoch value contained within the second set of context data and the current epoch value of the expected set of context data satisfy a cluster-wide timeline check. Doing so would determine that the files are the same if the latest modification times are the same for both files, as taught by Park (Col. 9, lines 58-60).
With regard to claim 15,
As discussed in claim 14, Dronamraju and Park teach all the limitations therein.
Park further teaches
the distributed storage system of claim 14, wherein the cluster-wide timeline check is satisfied when the epoch value is less than or equal to the current epoch value (Fig. 9; Col. 9, lines 49-62: if the latest modification times are the same for both files, the method can assume that the files are the same, i.e., “equal to”).
With regard to claim 16,
As discussed in claim 14, Dronamraju and Park teach all the limitations therein.
Dronamraju further teaches
the distributed storage system of claim 14, wherein the first set of context data further includes a file block number (FBN) based on a starting offset specified by the read request, and wherein the instructions further cause the distributed storage system to verify the FBN matches a virtual volume block number (VVBN) of a volume used to reference the data block or an FBN contained within the second set of context data (Fig. 9; [0111]: process the read request and use information in a tree of indirect blocks corresponding to the file to locate a block number for the data block holding the requested data, wherein a block number corresponds to an “FBN” and matching the FBNs is inherently taught, and an FBN is assigned sequentially from 0, hence “based on a starting offset” is inherent).
With regard to claim 17,
As discussed in claim 14, Dronamraju and Park teach all the limitations therein.
Dronamraju further teaches
the distributed storage system of claim 14, wherein the requestor is a client of the distributed storage system (Fig. 9; [0110]: file system instance 906 may receive a read request from application 914 at data management subsystem 916 of file system instance 906, wherein application 914 is “a client of the distributed storage system”).
With regard to claim 18,
As discussed in claim 17, Dronamraju and Park teach all the limitations therein.
Dronamraju and Park further teach
the distributed storage system of claim 17, wherein a volume containing the file was hosted by a different DEFS of the node or another node of the plurality of nodes a time at which the file was created (Dronamraju, Fig. 9; [0112]: the block identifier determines that the location of the one or more data blocks is on a node in distributed file system 900 other than node 907. The data that is read may then be sent to application 914 via data management subsystem 916) and a bufftree ID associated with the file was generated based on a combination of a cluster-wide, unique ID of the different DEFS and a volume counter value associated with the different DEFS (Park, Col. 1, lines 47-51: associate the data blocks of the file with a first volume identifier corresponding to the first volume, where the first volume identifier uniquely identifies a file via a combination of an inode number, generation number, and/or file system identifier (FSID), wherein FSID reads on a “unique ID” of the DEFS and generation number reads on “a volume counter value”).
With regard to claim 19,
As discussed in claim 17, Dronamraju and Park teach all the limitations therein.
Dronamraju and Park further teach
the distributed storage system of claim 17, wherein a volume containing the file was hosted by the DEFS at a time at which the file was created (Dronamraju, Fig. 9; [0112]: block service 924 and storage manager 926 can retrieve the data to be read using the block identifier, i.e., “the file was hosted by the DEFS”) and a bufftree ID associated with the file was generated based on a combination of a cluster-wide, unique ID of the DEFS and a volume counter value associated with the DEFS (Park, Col. 1, lines 47-51: associate the data blocks of the file with a first volume identifier corresponding to the first volume, where the first volume identifier uniquely identifies a file via a combination of an inode number, generation number, and/or file system identifier (FSID), wherein FSID reads on a “unique ID” of the DEFS and generation number reads on “a volume counter value”).
With regard to claim 20,
As discussed in claim 14, Dronamraju and Park teach all the limitations therein.
Dronamraju and Park further teach
the distributed storage system of claim 14, wherein the requestor is a subsystem or workflow of the distributed storage system, wherein the file comprises a metafile containing metadata used by the DEFS (Dronamraju, Fig. 17; [0156]: this read request may be received from a client (e.g., a client node) or application, i.e., “a subsystem”. Fig. 1; [0050]: inodes are used to identify files and file attributes such as creation time, access permissions, size, and block location, etc., wherein inode reads on “a metafile containing metadata” about the file to be used by the DEFS), and wherein during creation of the metafile, a volume ID was associated with metafile that was previously generated based on a combination of a reserved value and a fixed counter value selected based on a type of the metafile (Park, Col. 1, lines 47-51: the first volume identifier uniquely identifies a file via a combination of an inode number, generation number, and/or file system identifier (FSID), wherein FSID reads on “a reserved value” and an inode number reads on “a fixed counter value selected based on a type of the metafile”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOQIN HU whose telephone number is (571)272-1792. The examiner can normally be reached on Monday-Friday 7:00am-3:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones can be reached on (571) 272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIAOQIN HU/Examiner, Art Unit 2168
/CHARLES RONES/Supervisory Patent Examiner, Art Unit 2168