Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Spezia et al. (FR 3103299, published 05/21/2021).
Regarding Claim 1, Spezia discloses a computing device comprising:
a controller (“a distributed computer system 100, with a data distribution entity 3, a database 30 and a data retrieval and delivery entity 50 as well as the client applications 1, 1' and a daemon-fuse 51, with immutable data blocks 20 being replicated and stored by the data distribution entity on the database 30 and queried via a quorum read 65 by the data retrieval and delivery entity 50, is shown in Fig. 1,” Spezia); and
a machine-readable storage storing instructions (“a distributed computer system 100, with a data distribution entity 3, a database 30 and a data retrieval and delivery entity 50 as well as the client applications 1, 1' and a daemon-fuse 51, with immutable data blocks 20 being replicated and stored by the data distribution entity on the database 30 and queried via a quorum read 65 by the data retrieval and delivery entity 50, is shown in Fig. 1,” Spezia), the instructions executable by the controller to:
identify, by a metadata scanner, a plurality of files included in a filesystem, wherein each file in the filesystem comprises one or more logical blocks (“store immutable blocks of data, a data distribution entity configured to divide source data into immutable data blocks and metadata, wherein the data distribution entity is configured to replicate and store the immutable data blocks on at least two different storage node(s) of the database, wherein the metadata includes values referring to the immutable data blocks in said at least two storage nodes for a key-value database call,” Spezia), and wherein the filesystem is included in a backup (“copies of immutable data blocks are stored on a plurality of different storage nodes in a redundant manner”; since the filesystem relates to making copies of data blocks, the filesystem is included in a backup as claimed; Spezia);
issue, by the metadata scanner, a read call for a logical block of a file included in the filesystem (“read request comprises a plurality of individual parallel requests to different storage nodes storing the same immutable data block,” Spezia);
translate, by a filesystem layer, the read call into a set of translated read calls (“data retrieval and delivery entity includes a fuse daemon for converting a request for a range of data from a file initiated by at least one client-side application into a quorum read call for at least one block of data immutable to the database. The fuse daemon is, for example, a subroutine of an operating system or could be a subroutine of a FUSE filesystem. The fuse daemon converts the request for a range of data from a file to requests for immutable blocks of data,” Spezia);
for each translated read call of the set of translated read calls, determine, by a metadata extractor, whether the translated read call is to read a metadata block of the filesystem (“fuse daemon is configured to retrieve blocks of data delivered by the database in the fastest response and is configured to discard results delivered subsequent to the fastest response, wherein the daemon fuse is configured to generate a virtual file comprising the corresponding data range from the retrieved data blocks,” Spezia);
in response to a determination that the translated read call is to read the metadata block, obtain, by the metadata extractor, the metadata block from a persistent storage device (“the new data block can be fetched and pushed to the cache page by the fuse daemon,” Spezia); and
store, by the metadata extractor, the obtained metadata block in a metadata cache of the computing device (“the new data block can be fetched and pushed to the cache page by the fuse daemon,” Spezia).
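Purely for illustration, and not as part of the record, the flow recited in claim 1 can be sketched as follows; every name in the sketch (translate, scan, persistent_read, block_id) is hypothetical and is drawn from neither the claims nor Spezia:

```python
BLOCK_SIZE = 4096  # hypothetical logical-block size

def translate(read_call):
    """Stand-in for the filesystem layer: one logical read call becomes
    a set of translated calls, here a data read plus a metadata read."""
    return [
        {"kind": "data", "block_id": ("data", read_call["path"], read_call["offset"])},
        {"kind": "metadata", "block_id": ("meta", read_call["path"])},
    ]

def scan(paths, persistent_read, metadata_cache):
    """Identify files, issue a read call per logical block, and cache
    only the metadata blocks touched by the translated calls."""
    for path in paths:  # plurality of files identified by the scanner
        read_call = {"path": path, "offset": 0, "length": BLOCK_SIZE}
        for call in translate(read_call):           # filesystem layer
            if call["kind"] == "metadata":          # metadata-extractor test
                metadata_cache[call["block_id"]] = persistent_read(call)
    return metadata_cache
```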
Regarding Claim 2, Spezia discloses the computing device of claim 1, wherein the set of translated read calls comprises a data read and a metadata read (“In some examples, the values comprised in the metadata refer to values referring to the immutable blocks of data for a key-value database call, these reference values being generated through hash values of each corresponding block of data. To provide an example, each block of immutable data is hashed and the resulting hash value is used as a key for that block of immutable data and a reference is made in metadata which is for example file metadata. As such, there may be actual records in the metadata file regarding immutable blocks of data comprised in the file for which the range is requested. These records may include information indicating which hash value corresponding to said key-value in the key-value database call corresponds to which block of immutable data in the file. As mentioned above, each referenced block of immutable data is replicated to at least three nodes. For example, those immutable data blocks containing data within the requested range of data are fetched; those that do not contain data inside this range are not fetched,” Spezia).
Regarding Claim 3, Spezia discloses the computing device of claim 2, including instructions executable by the controller to:
generate, by the metadata scanner, a data read buffer to receive a result of the data read (“data retrieval and delivery entity includes a fuse daemon for converting a request for a range of data from a file initiated by at least one client-side application into a quorum read call for at least one block of data immutable to the database. The fuse daemon is, for example, a subroutine of an operating system or could be a subroutine of a FUSE filesystem. The fuse daemon converts the request for a range of data from a file to requests for immutable blocks of data,” Spezia);
populate, by the metadata scanner, a data read signature into the data read buffer (“data retrieval and delivery entity includes a fuse daemon for converting a request for a range of data from a file initiated by at least one client-side application into a quorum read call for at least one block of data immutable to the database. The fuse daemon is, for example, a subroutine of an operating system or could be a subroutine of a FUSE filesystem. The fuse daemon converts the request for a range of data from a file to requests for immutable blocks of data,” Spezia); and
generate, by the metadata scanner, a metadata read buffer to receive a result of the metadata read (“data retrieval and delivery entity includes a fuse daemon for converting a request for a range of data from a file initiated by at least one client-side application into a quorum read call for at least one block of data immutable to the database. The fuse daemon is, for example, a subroutine of an operating system or could be a subroutine of a FUSE filesystem. The fuse daemon converts the request for a range of data from a file to requests for immutable blocks of data,” Spezia).
Regarding Claim 4, Spezia discloses the computing device of claim 3, including instructions executable by the controller to:
receive, by the metadata extractor, the data read from the filesystem layer (“The metadata includes values referring to immutable blocks of data in said at least two storage nodes for a key-value database call. Values referring to immutable data blocks can be a set of unique identifiers for certain data blocks. These identifiers could be created from a number of immutable blocks of data, a value derived from the contents of the data block, or similar. In this sense, the key-value database call is, for example, a database call for an immutable block of data corresponding to the identifier used in the database call request. Metadata could also include information about the file type of data blocks to distinguish between deduplication friendly files (e.g. SQLite) and others,” Spezia);
in response to a receipt of the data read, determine, by the metadata extractor, whether the data read buffer includes the data read signature (“The metadata includes values referring to immutable blocks of data in said at least two storage nodes for a key-value database call. Values referring to immutable data blocks can be a set of unique identifiers for certain data blocks. These identifiers could be created from a number of immutable blocks of data, a value derived from the contents of the data block, or similar. In this sense, the key-value database call is, for example, a database call for an immutable block of data corresponding to the identifier used in the database call request. Metadata could also include information about the file type of data blocks to distinguish between deduplication friendly files (e.g. SQLite) and others,” Spezia); and
in response to a determination that the data read buffer includes the data read signature, determine that the data read is not to read the metadata block (“The metadata includes values referring to immutable blocks of data in said at least two storage nodes for a key-value database call. Values referring to immutable data blocks can be a set of unique identifiers for certain data blocks. These identifiers could be created from a number of immutable blocks of data, a value derived from the contents of the data block, or similar. In this sense, the key-value database call is, for example, a database call for an immutable block of data corresponding to the identifier used in the database call request. Metadata could also include information about the file type of data blocks to distinguish between deduplication friendly files (e.g. SQLite) and others,” Spezia).
Regarding Claim 5, Spezia discloses the computing device of claim 4, including instructions executable by the controller to:
in response to a determination that the data read is not to read the metadata block, set, by the metadata extractor, the data read as completed, wherein the data read is not executed (“In some examples, the fuse daemon is configured to perform the polling for the quorum read operation three times in parallel in a case in which the same blocks of data are stored five times on at least five different database storage nodes. In this example, immutable data blocks are replicated to more storage nodes than are currently needed to service the three queries of the quorum read. However, this increases data availability because even if two storage nodes fail, there are still enough working storage nodes to complete the quorum read operation successfully. Additionally, the number of polls in a quorum read operation could be flexibly increased to five polls when storing five copies of immutable data blocks on five storage nodes,” Spezia).
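For illustration only, the read-signature mechanism recited in claims 3-5 (pre-populate the data read buffer with a signature; if the signature is still present when the read returns, the call was the data read and can be marked completed without execution) can be sketched as follows. The signature value and function names are hypothetical:

```python
SIGNATURE = b"\xde\xad\xbe\xef-scanner-data-read"  # hypothetical marker

def make_data_read_buffer(size):
    """Generate the data read buffer and populate the signature into it."""
    buf = bytearray(size)
    buf[: len(SIGNATURE)] = SIGNATURE
    return buf

def classify_read(buf):
    """If the signature survived, the call is the data read and is set
    as completed without being executed; otherwise it is a metadata
    read that must be serviced from persistent storage."""
    if bytes(buf[: len(SIGNATURE)]) == SIGNATURE:
        return "data-read-completed-unexecuted"
    return "metadata-read"
```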
Regarding Claim 6, Spezia discloses the computing device of claim 3, including instructions executable by the controller to:
receive, by the metadata extractor, the metadata read from the filesystem layer (“In some examples, the fuse daemon is configured to perform the polling for the quorum read operation three times in parallel in a case in which the same blocks of data are stored five times on at least five different database storage nodes. In this example, immutable data blocks are replicated to more storage nodes than are currently needed to service the three queries of the quorum read. However, this increases data availability because even if two storage nodes fail, there are still enough working storage nodes to complete the quorum read operation successfully. Additionally, the number of polls in a quorum read operation could be flexibly increased to five polls when storing five copies of immutable data blocks on five storage nodes,” Spezia);
in response to a receipt of the metadata read, determine, by the metadata extractor, whether the metadata read buffer includes the data read signature (“In some examples, the fuse daemon is configured to perform the polling for the quorum read operation three times in parallel in a case in which the same blocks of data are stored five times on at least five different database storage nodes. In this example, immutable data blocks are replicated to more storage nodes than are currently needed to service the three queries of the quorum read. However, this increases data availability because even if two storage nodes fail, there are still enough working storage nodes to complete the quorum read operation successfully. Additionally, the number of polls in a quorum read operation could be flexibly increased to five polls when storing five copies of immutable data blocks on five storage nodes,” Spezia); and
in response to a determination that the metadata read buffer does not include the data read signature, determine that the read call is to read the metadata block (“In some examples, the fuse daemon is configured to perform the polling for the quorum read operation three times in parallel in a case in which the same blocks of data are stored five times on at least five different database storage nodes. In this example, immutable data blocks are replicated to more storage nodes than are currently needed to service the three queries of the quorum read. However, this increases data availability because even if two storage nodes fail, there are still enough working storage nodes to complete the quorum read operation successfully. Additionally, the number of polls in a quorum read operation could be flexibly increased to five polls when storing five copies of immutable data blocks on five storage nodes,” Spezia).
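The quorum-read behaviour quoted from Spezia (parallel polls of several replicas; the fastest response is kept and later responses are discarded) can be sketched, for illustration only, as follows. The function names and the `fetch` callback are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def quorum_read(block_id, nodes, fetch, polls=3):
    """Poll several storage nodes holding the same immutable block in
    parallel; return the fastest reply and discard the slower ones."""
    with ThreadPoolExecutor(max_workers=polls) as pool:
        futures = [pool.submit(fetch, node, block_id) for node in nodes[:polls]]
        for fut in as_completed(futures):      # first completed reply wins
            result = fut.result()
            for other in futures:              # discard not-yet-started polls
                other.cancel()
            return result
```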
Regarding Claim 7, Spezia discloses the computing device of claim 6, including instructions executable by the controller to:
in response to the determination that the read call is to read the metadata block, determine whether the metadata block has to be loaded into the metadata cache (“In some examples, the data retrieval and delivery entity includes an operating system having an operating system cache page wherein the operating system cache page is configured to store at least portions of the retrieved data blocks that match the data range of a file requested by the client-side application. The operating system is, for example, an operating system that supports POSIX, such as a UNIX operating system. The cache page of said operating system is, for example, a cache of a UNIX kernel, e.g. LINUX. This cache page can be implemented on an Openshift node that is in communication with the client-side application(s) as a point of delivery (POD). The daemon-fuse can be configured to keep fetched data blocks in the cache page for as long as possible, that is, until the file, and with it the data blocks, have changed. When more up-to-date data blocks are available, the new data block can be fetched and pushed to the cache page by the fuse daemon. Either all fetched immutable data blocks containing the requested range of a file are stored in the cache page, or only data content extracted from the data block that exactly matches said requested range,” Spezia); and
in response to a determination that the metadata block has to be loaded into the metadata cache, obtain the metadata block from the persistent storage device (“In some examples, the data retrieval and delivery entity includes an operating system having an operating system cache page wherein the operating system cache page is configured to store at least portions of the retrieved data blocks that match the data range of a file requested by the client-side application. The operating system is, for example, an operating system that supports POSIX, such as a UNIX operating system. The cache page of said operating system is, for example, a cache of a UNIX kernel, e.g. LINUX. This cache page can be implemented on an Openshift node that is in communication with the client-side application(s) as a point of delivery (POD). The daemon-fuse can be configured to keep fetched data blocks in the cache page for as long as possible, that is, until the file, and with it the data blocks, have changed. When more up-to-date data blocks are available, the new data block can be fetched and pushed to the cache page by the fuse daemon. Either all fetched immutable data blocks containing the requested range of a file are stored in the cache page, or only data content extracted from the data block that exactly matches said requested range,” Spezia).
Regarding Claim 8, Spezia discloses the computing device of claim 7, including instructions executable by the controller to:
in response to the determination that the read call is to read the metadata block, perform a look-up of the metadata block in a set of load flags, wherein the set of load flags indicate which blocks remain to be loaded in the metadata cache (“In some examples, the data retrieval and delivery entity includes an operating system having an operating system cache page wherein the operating system cache page is configured to store at least portions of the retrieved data blocks that match the data range of a file requested by the client-side application. The operating system is, for example, an operating system that supports POSIX, such as a UNIX operating system. The cache page of said operating system is, for example, a cache of a UNIX kernel, e.g. LINUX. This cache page can be implemented on an Openshift node that is in communication with the client-side application(s) as a point of delivery (POD). The daemon-fuse can be configured to keep fetched data blocks in the cache page for as long as possible, that is, until the file, and with it the data blocks, have changed. When more up-to-date data blocks are available, the new data block can be fetched and pushed to the cache page by the fuse daemon. Either all fetched immutable data blocks containing the requested range of a file are stored in the cache page, or only data content extracted from the data block that exactly matches said requested range,” Spezia); and
determine, based on the look-up of the metadata block in the set of load flags, that the metadata block has to be loaded into the metadata cache (“In some examples, the data retrieval and delivery entity includes an operating system having an operating system cache page wherein the operating system cache page is configured to store at least portions of the retrieved data blocks that match the data range of a file requested by the client-side application. The operating system is, for example, an operating system that supports POSIX, such as a UNIX operating system. The cache page of said operating system is, for example, a cache of a UNIX kernel, e.g. LINUX. This cache page can be implemented on an Openshift node that is in communication with the client-side application(s) as a point of delivery (POD). The daemon-fuse can be configured to keep fetched data blocks in the cache page for as long as possible, that is, until the file, and with it the data blocks, have changed. When more up-to-date data blocks are available, the new data block can be fetched and pushed to the cache page by the fuse daemon. Either all fetched immutable data blocks containing the requested range of a file are stored in the cache page, or only data content extracted from the data block that exactly matches said requested range,” Spezia).
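For illustration only, the load-flag mechanism recited in claims 7-8 (a set of flags recording which metadata blocks remain to be loaded; a hit on look-up triggers a fetch from persistent storage) can be sketched as follows. All names are hypothetical:

```python
def metadata_block_needed(block_id, load_flags):
    """Look up the block in the set of load flags; membership means the
    block still remains to be loaded into the metadata cache."""
    return block_id in load_flags

def load_if_needed(block_id, load_flags, metadata_cache, persistent_read):
    """Fetch the block from persistent storage only when its load flag
    is still set, then clear the flag so the block is not re-fetched."""
    if metadata_block_needed(block_id, load_flags):
        metadata_cache[block_id] = persistent_read(block_id)
        load_flags.discard(block_id)  # block is now cached
```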
Regarding Claim 9, Spezia discloses the computing device of claim 1, including instructions executable by the controller to:
prior to issuing the read call, issue, by the metadata scanner, an open system call for the file using a command flag to invoke a direct input/output (I/O) mode (“In some examples two or more client-side applications use a common middleware client library to access the virtual file. Middleware could be defined, for example, as a layer of software that sits between the operating system and the applications on either side of a distributed computer system in a network. Providing a common middleware client library to access the virtual file saves resources in the communication between the virtual file and client-side applications. This communication could be done on dedicated interfaces. In some examples, the data retrieval and delivery entity includes a file interface compatible with at least one portable operating system, the file interface being an interface to at least a back end of a client application and the file interface supporting sequential and random read access. This file interface could be used to implement the communication on the middleware client library. In some instances, the operating system of the data retrieval and delivery entity is a UNIX-based operating system,” Spezia).
Regarding Claim 10, Spezia discloses the computing device of claim 1, wherein the metadata scanner is executed in a user space of a system memory of the computing device (“each time a client opens a file they will first need to establish a location for the file and will need to clean up the location or re-establish the location when necessary. Then when the garbage collector daemon for example scans the file to purge it, it will check if there is a location on the file or not. This is for example implemented by placing an element (filename: e.g. TTL) with the expiration time. If many clients open the file, they will write to the same item. If this item expires, it means no one needs the file anymore and the item can be deleted,” Spezia), and wherein the metadata filter is executed in a kernel space of the system memory (“The operating system is, for example, an operating system that supports POSIX, such as a UNIX operating system. The cache page of said operating system is, for example, a cache of a UNIX kernel, e.g. LINUX. This cache page can be implemented on an Openshift node that is in communication with the client-side application(s) as a point of delivery (POD). The daemon-fuse can be configured to keep fetched data blocks in the cache page for as long as possible, that is, until the file, and with it the data blocks, have changed. When more up-to-date data blocks are available, the new data block can be fetched and pushed to the cache page by the fuse daemon. Either all fetched immutable data blocks containing the requested range of a file are stored in the cache page, or only data content extracted from the data block that exactly matches said requested range,” Spezia).
Regarding Claim 11, Spezia discloses a method comprising:
identifying, by a metadata scanner executed by a controller, a plurality of files included in a filesystem, wherein each file in the filesystem comprises one or more logical blocks (“store immutable blocks of data, a data distribution entity configured to divide source data into immutable data blocks and metadata, wherein the data distribution entity is configured to replicate and store the immutable data blocks on at least two different storage node(s) of the database, wherein the metadata includes values referring to the immutable data blocks in said at least two storage nodes for a key-value database call,” Spezia), and wherein the filesystem is included in a backup (“copies of immutable data blocks are stored on a plurality of different storage nodes in a redundant manner”; since the filesystem relates to making copies of data blocks, the filesystem is included in a backup as claimed; Spezia);
issuing, by the metadata scanner, a read call for a logical block of a file included in the filesystem (“read request comprises a plurality of individual parallel requests to different storage nodes storing the same immutable data block,” Spezia);
generating, by the metadata scanner, a read buffer associated with the read call (“data retrieval and delivery entity includes a fuse daemon for converting a request for a range of data from a file initiated by at least one client-side application into a quorum read call for at least one block of data immutable to the database. The fuse daemon is, for example, a subroutine of an operating system or could be a subroutine of a FUSE filesystem. The fuse daemon converts the request for a range of data from a file to requests for immutable blocks of data,” Spezia);
determining, by a metadata extractor executed by the controller, whether the read buffer includes a data read signature indicating a data block read (“fuse daemon is configured to retrieve blocks of data delivered by the database in the fastest response and is configured to discard results delivered subsequent to the fastest response, wherein the daemon fuse is configured to generate a virtual file comprising the corresponding data range from the retrieved data blocks,” Spezia);
in response to a determination that the read buffer lacks the data read signature, obtaining, by the metadata extractor, a metadata block from a persistent storage (“the new data block can be fetched and pushed to the cache page by the fuse daemon,” Spezia); and
storing, by the metadata extractor, the obtained metadata block in a metadata cache of the computing device (“the new data block can be fetched and pushed to the cache page by the fuse daemon,” Spezia).
Regarding Claim 12, Spezia discloses the method of claim 11, comprising:
in response to a determination that the read buffer includes the data read signature, marking, by the metadata extractor, the data read as completed, wherein the data read is not executed (“In some examples, the fuse daemon is configured to perform the polling for the quorum read operation three times in parallel in a case in which the same blocks of data are stored five times on at least five different database storage nodes. In this example, immutable data blocks are replicated to more storage nodes than are currently needed to service the three queries of the quorum read. However, this increases data availability because even if two storage nodes fail, there are still enough working storage nodes to complete the quorum read operation successfully. Additionally, the number of polls in a quorum read operation could be flexibly increased to five polls when storing five copies of immutable data blocks on five storage nodes,” Spezia).
Regarding Claim 13, Spezia discloses the method of claim 11, comprising:
translating, by a filesystem layer, the read call into a data read and a metadata read (“data retrieval and delivery entity includes a fuse daemon for converting a request for a range of data from a file initiated by at least one client-side application into a quorum read call for at least one block of data immutable to the database. The fuse daemon is, for example, a subroutine of an operating system or could be a subroutine of a FUSE filesystem. The fuse daemon converts the request for a range of data from a file to requests for immutable blocks of data,” Spezia);
generating, by the metadata scanner, a data read buffer to receive a result of the data read (“data retrieval and delivery entity includes a fuse daemon for converting a request for a range of data from a file initiated by at least one client-side application into a quorum read call for at least one block of data immutable to the database. The fuse daemon is, for example, a subroutine of an operating system or could be a subroutine of a FUSE filesystem. The fuse daemon converts the request for a range of data from a file to requests for immutable blocks of data,” Spezia);
populating, by the metadata scanner, a data read signature into the data read buffer (“data retrieval and delivery entity includes a fuse daemon for converting a request for a range of data from a file initiated by at least one client-side application into a quorum read call for at least one block of data immutable to the database. The fuse daemon is, for example, a subroutine of an operating system or could be a subroutine of a FUSE filesystem. The fuse daemon converts the request for a range of data from a file to requests for immutable blocks of data,” Spezia); and
generating, by the metadata scanner, a metadata read buffer to receive a result of the metadata read, wherein the read buffer is one of the data read buffer and the metadata read buffer (“data retrieval and delivery entity includes a fuse daemon for converting a request for a range of data from a file initiated by at least one client-side application into a quorum read call for at least one block of data immutable to the database. The fuse daemon is, for example, a subroutine of an operating system or could be a subroutine of a FUSE filesystem. The fuse daemon converts the request for a range of data from a file to requests for immutable blocks of data,” Spezia).
Regarding Claim 14, Spezia discloses the method of claim 11, comprising:
in response to the determination that the read buffer lacks the data read signature, determining whether the metadata block has to be loaded into the metadata cache (“In some examples, the data retrieval and delivery entity includes an operating system having an operating system cache page wherein the operating system cache page is configured to store at least portions of the retrieved data blocks that match the data range of a file requested by the client-side application. The operating system is, for example, an operating system that supports POSIX, such as a UNIX operating system. The cache page of said operating system is, for example, a cache of a UNIX kernel, e.g. LINUX. This cache page can be implemented on an Openshift node that is in communication with the client-side application(s) as a point of delivery (POD). The daemon-fuse can be configured to keep fetched data blocks in the cache page for as long as possible, that is, until the file, and with it the data blocks, have changed. When more up-to-date data blocks are available, the new data block can be fetched and pushed to the cache page by the fuse daemon. Either all fetched immutable data blocks containing the requested range of a file are stored in the cache page, or only data content extracted from the data block that exactly matches said requested range,” Spezia); and
in response to a determination that the metadata block has to be loaded into the metadata cache, obtaining the metadata block from the persistent storage device (“In some examples, the data retrieval and delivery entity includes an operating system having an operating system cache page wherein the operating system cache page is configured to store at least portions of the retrieved data blocks that match the data range of a file requested by the client-side application. The operating system is, for example, an operating system that supports POSIX, such as a UNIX operating system. The cache page of said operating system is, for example, a cache of a UNIX kernel, e.g. LINUX. This cache page can be implemented on an Openshift node that is in communication with the client-side application(s) as a point of delivery (POD). The daemon-fuse can be configured to keep fetched data blocks in the cache page for as long as possible, that is, until the file, and with it the data blocks, have changed. When more up-to-date data blocks are available, the new data block can be fetched and pushed to the cache page by the fuse daemon. Either all fetched immutable data blocks containing the requested range of a file are stored in the cache page, or only data content extracted from the data block that exactly matches said requested range,” Spezia).
Regarding Claim 15, Spezia discloses the method of claim 11, comprising:
prior to issuing the read call, issuing, by the metadata scanner, an open system call for the file using a command flag to invoke a direct input/output (I/O) mode (“In some examples two or more client-side applications use a common middleware client library to access the virtual file. Middleware could be defined, for example, as a layer of software that sits between the operating system and the applications on either side of a distributed computer system in a network. Providing a common middleware client library to access the virtual file saves resources in the communication between the virtual file and client-side applications. This communication could be done on dedicated interfaces. In some examples, the data retrieval and delivery entity includes a file interface compatible with at least one portable operating system, the file interface being an interface to at least a back end of a client application and the file interface supporting sequential and random read access. This file interface could be used to implement the communication on the middleware client library. In some instances, the operating system of the data retrieval and delivery entity is a UNIX-based operating system,” Spezia).
Regarding Claim 16, Spezia discloses a non-transitory machine-readable medium storing instructions that upon execution cause a controller to:
identify, by a metadata scanner, a plurality of files included in a filesystem, wherein each file in the filesystem comprises one or more logical blocks (“store immutable blocks of data, a data distribution entity configured to divide source data into immutable data blocks and metadata, wherein the data distribution entity is configured to replicate and store the immutable data blocks on at least two different storage node(s) of the database, wherein the metadata includes values referring to the immutable data blocks in said at least two storage nodes for a key-value database call,” Spezia), and wherein the filesystem is included in a backup (“copies of immutable data blocks are stored on a plurality of different storage nodes in a redundant manner”; since the filesystem relates to making copies of data blocks, the filesystem is included in a backup as claimed; Spezia);
issue, by the metadata scanner, a read call for a logical block of a file included in the filesystem (“read request comprises a plurality of individual parallel requests to different storage nodes storing the same immutable data block,” Spezia);
translate, by a filesystem layer, the read call into a set of translated read calls (“data retrieval and delivery entity includes a fuse daemon for converting a request for a range of data from a file initiated by at least one client-side application into a quorum read call to the database for at least one immutable block of data. The fuse daemon is, for example, a subroutine of an operating system or could be a subroutine of a FUSE filesystem. The fuse daemon converts the request for a range of data from a file into requests for immutable blocks of data,” Spezia);
for each translated read call of the set of translated read calls, determine, by a metadata extractor, whether the translated read call is to read a metadata block of the filesystem (“fuse daemon is configured to retrieve blocks of data delivered by the database in the fastest response and is configured to discard results delivered subsequent to the fastest response, wherein the fuse daemon is configured to generate a virtual file comprising the corresponding data range from the retrieved data blocks,” Spezia);
in response to a determination that the translated read call is to read the metadata block, obtain, by the metadata extractor, the metadata block from a persistent storage device (“the new data block can be fetched and pushed to the cache page by the fuse daemon,” Spezia); and
store, by the metadata extractor, the obtained metadata block in a metadata cache of the computing device (“the new data block can be fetched and pushed to the cache page by the fuse daemon,” Spezia).
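For orientation only, the sequence of steps mapped above for claim 16 can be sketched in code. Neither the claim nor Spezia provides an implementation; every name here (MetadataExtractor, filesystem_layer, METADATA_BLOCKS, and the block layout) is a hypothetical illustration of the claimed flow, not the applicant's or the reference's actual design.

```python
# Hypothetical sketch of the claim 16 flow: scanner issues reads, the
# filesystem layer translates them per block, the extractor sifts out and
# caches metadata blocks. All names and values are assumptions.
from dataclasses import dataclass, field

METADATA_BLOCKS = {7, 42}  # logical block numbers assumed to hold filesystem metadata


@dataclass
class MetadataExtractor:
    persistent_storage: dict                      # block number -> contents (stands in for the backup device)
    metadata_cache: dict = field(default_factory=dict)

    def handle(self, translated_read):
        block = translated_read["block"]
        if block in METADATA_BLOCKS:              # is this translated read for a metadata block?
            # obtain the block from persistent storage and store it in the cache
            self.metadata_cache[block] = self.persistent_storage[block]
            return True
        return False                              # plain data read: nothing to extract


def filesystem_layer(read_call):
    """Translate one logical read call into a set of per-block translated reads."""
    start, count = read_call
    return [{"block": b} for b in range(start, start + count)]


# Metadata scanner: walk the reads and let the extractor sift out metadata.
storage = {b: f"contents-of-{b}" for b in range(64)}
extractor = MetadataExtractor(storage)
for read_call in [(5, 4), (40, 4)]:               # reads covering blocks 5-8 and 40-43
    for tr in filesystem_layer(read_call):
        extractor.handle(tr)
```

Under these assumptions, only blocks 7 and 42 end up in the metadata cache; the remaining translated reads fall through as ordinary data reads.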
Regarding Claim 17, Spezia discloses the non-transitory machine-readable medium of claim 16, including instructions that upon execution cause the controller to:
in response to a determination that the translated read call is not to read the metadata block, mark the data read as completed, wherein the data read is not executed (“In some examples, the fuse daemon is configured to perform the polling for the quorum read operation three times in parallel, in which case the same blocks of data are stored five times on at least five different database storage nodes. In this example, immutable data blocks are replicated to more storage nodes than are currently needed to service the three queries of the quorum read. However, this increases data availability because even if two storage nodes fail, there are still enough working storage nodes to complete the quorum read operation successfully. Additionally, the number of polls in a quorum read operation could be flexibly increased to five polls when storing five copies of immutable data blocks on five storage nodes,” Spezia).
Regarding Claim 18, Spezia discloses the non-transitory machine-readable medium of claim 16, including instructions that upon execution cause the controller to:
translate, by a filesystem layer, the read call into a data read and a metadata read (“data retrieval and delivery entity includes a fuse daemon for converting a request for a range of data from a file initiated by at least one client-side application into a quorum read call to the database for at least one immutable block of data. The fuse daemon is, for example, a subroutine of an operating system or could be a subroutine of a FUSE filesystem. The fuse daemon converts the request for a range of data from a file into requests for immutable blocks of data,” Spezia);
generate, by the metadata scanner, a data read buffer to receive a result of the data read (Spezia, id.);
populate, by the metadata scanner, a data read signature into the data read buffer (Spezia, id.); and
generate, by the metadata scanner, a metadata read buffer to receive a result of the metadata read (Spezia, id.).
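For orientation only, the buffer-generation steps of claim 18 can be sketched as follows. The signature bytes, buffer size, and function names are illustrative assumptions; neither the claim nor Spezia specifies any of them.

```python
# Hypothetical sketch of claim 18: split one read call into a data read and a
# metadata read, and pre-stamp the data read buffer with a signature so the
# two result buffers can later be told apart. All values are assumptions.
DATA_READ_SIGNATURE = b"\xde\xad\xbe\xef"          # hypothetical magic bytes


def split_read(read_call):
    """Translate one read call into a (data read, metadata read) pair."""
    return ({"kind": "data", "range": read_call},
            {"kind": "metadata", "range": read_call})


def make_buffers(block_size=4096):
    """Generate the two result buffers; only the data read buffer is
    pre-populated with the data read signature."""
    data_buf = bytearray(block_size)
    data_buf[: len(DATA_READ_SIGNATURE)] = DATA_READ_SIGNATURE
    meta_buf = bytearray(block_size)               # left unstamped
    return data_buf, meta_buf


data_read, metadata_read = split_read((0, 8))      # read covering blocks 0-7
data_buf, meta_buf = make_buffers()
```

The stamp makes the later determination of claim 19 a simple prefix comparison rather than a bookkeeping lookup.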
Regarding Claim 19, Spezia discloses the non-transitory machine-readable medium of claim 18, including instructions that upon execution cause the controller to:
receive, by the metadata extractor, the data read from the filesystem layer (“The metadata includes values referring to immutable blocks of data in said at least two storage nodes for a key-value database call. Values referring to immutable data blocks can be a set of unique identifiers for certain data blocks. These identifiers could be created from a number of immutable blocks of data, a value derived from the contents of the data block, or similar. In this sense, the key-value database call is, for example, a database call for an immutable block of data corresponding to the identifier used in the database call request. Metadata could also include information about the file type of data blocks to distinguish between deduplication-friendly files (e.g. SQLite) and others,” Spezia);
in response to a receipt of the data read, determine, by the metadata extractor, whether the data read buffer includes the data read signature (Spezia, id.); and
in response to a determination that the data read buffer includes the data read signature, determine that the data read is not to read the metadata block (Spezia, id.).
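The determination recited in claim 19 can be sketched, for orientation only, as a prefix check on the received buffer. The signature bytes and function name are hypothetical; the claim and Spezia leave the mechanism unspecified.

```python
# Hypothetical sketch of claim 19: a buffer carrying the pre-populated
# signature belongs to the data read, so it is determined NOT to be a read
# of the metadata block. Signature value is an assumption.
DATA_READ_SIGNATURE = b"\xde\xad\xbe\xef"          # hypothetical magic bytes


def is_data_read(buffer: bytes) -> bool:
    """Return True when the buffer includes the data read signature,
    i.e. the read is not to read the metadata block."""
    return buffer[: len(DATA_READ_SIGNATURE)] == DATA_READ_SIGNATURE


stamped = DATA_READ_SIGNATURE + b"\x00" * 12       # buffer the scanner pre-populated
unstamped = b"\x00" * 16                           # metadata read buffer, never stamped
```

A signature scheme like this costs one small write per buffer and avoids maintaining a separate table mapping buffers to read types.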
Regarding Claim 20, Spezia discloses the non-transitory machine-readable medium of claim 16, including instructions that upon execution cause the controller to:
in response to the determination that the read call is to read the metadata block, perform a look-up of the metadata block in a set of load flags, wherein the set of load flags indicate which blocks remain to be loaded in the metadata cache (“In some examples, the data retrieval and delivery entity includes an operating system having an operating system cache page, wherein the operating system cache page is configured to store at least portions of the retrieved data blocks that match the data range of a file requested by the client-side application. The operating system is, for example, an operating system that supports POSIX, such as a UNIX operating system. The cache page of said operating system is, for example, a cache of a UNIX kernel, e.g. LINUX. This cache page can be implemented on an OpenShift node that is in communication with the client-side application(s) as a point of delivery (POD). The fuse daemon can be configured to keep fetched data blocks in the cache page for as long as possible, that is, until the file, and with it the data blocks, have changed. When more up-to-date data blocks are available, the new data block can be fetched and pushed to the cache page by the fuse daemon. Either all fetched immutable data blocks containing the requested range of a file are stored in the cache page, or only the data content extracted from the data block that exactly matches said requested range,” Spezia);
determine, based on the look-up of the metadata block in the set of load flags, whether the metadata block has to be loaded into the metadata cache (Spezia, id.); and
in response to a determination that the metadata block has to be loaded into the metadata cache, obtain the metadata block from the persistent storage device (Spezia, id.).
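For orientation only, the load-flag mechanism of claim 20 can be sketched as a per-block pending map. The flag representation, block numbers, and function name are hypothetical assumptions; neither the claim nor Spezia discloses code.

```python
# Hypothetical sketch of claim 20: look the metadata block up in the set of
# load flags; load it from persistent storage into the metadata cache only
# when the flag says it still remains to be loaded. All values are assumed.
load_flags = {7: True, 42: True, 13: False}   # block -> still needs loading?
metadata_cache = {}
persistent_storage = {b: f"contents-of-{b}" for b in (7, 13, 42)}


def maybe_load(block):
    """Perform the look-up in the load flags and load the block only if pending."""
    if load_flags.get(block, False):                        # has to be loaded?
        metadata_cache[block] = persistent_storage[block]   # obtain from persistent storage
        load_flags[block] = False                           # no longer pending
        return True
    return False


maybe_load(7)    # pending: loaded into the cache, flag cleared
maybe_load(13)   # not pending: look-up short-circuits, nothing loaded
```

Clearing the flag after a successful load makes repeated reads of the same metadata block idempotent with respect to the cache.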
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GIOVANNA B COLAN whose telephone number is (571)272-2752. The examiner can normally be reached Mon - Fri 8:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aleksandr Kerzhner can be reached at (571) 270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GIOVANNA B COLAN/Primary Examiner, Art Unit 2165 February 22, 2026