Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
The instant application, Application No. 17/401,076, filed on 08/12/2021, has claims 1-17 and 19-20 pending; there are 3 independent claims and 16 dependent claims, all of which are ready for examination by the examiner.
Response to Arguments
This Office Action is in response to applicant’s communication filed on October 24, 2025 in response to PTO Office Action dated April 24, 2025. The Applicant’s remarks and amendments to the claims and/or specification were considered with the results that follow.
Claim Rejections
Claim Rejections - 35 USC § 103
35 USC § 103 Rejection of claims 1-17 and 19-20
Independent Claims 1, 9 and 17
CLAIM 1
Applicant argues on pages 7 and 8 in regards to the independent claim 1, “Claim 1, as presented herein recites, inter alia, "mounting the filesystem on the CSD and the driver routine associated with the filesystem onto a cache within the CSD wherein the driver routine associated with the filesystem is restricted to read operations and wherein the mounting is performed by a filesystem awareness module configured to identify and load the appropriate driver routine based on the detected filesystem and wherein the driver routine does not require any host write operations," and "interpreting a data structure associated with the filesystem using the filesystem awareness module" (emphasis added). The Office has cited Davies as potentially disclosing such a feature. Specifically, the Office cites paragraph [0034] of Davies (cloud controller instance provides access to the distributed filesystem by exporting a filesystem mount point) as disclosing the above-referenced feature. However, while Davies discloses distributed filesystem mounting, it does not disclose autonomous mounting by a local module within a CSD a driver routine that does not require any host write operation in the manner recited in claim 1 as amended herewith.”
Examiner respectfully disagrees with arguments on pages 7 and 8 in regards to the independent claim 1. The combination of Mesnier et al (US PGPUB 20220188028), Davies et al (US PGPUB 20140006465), Snellman et al (US PGPUB 20210209077) and Terry et al (US PGPUB 20110113194) teaches all the limitations of the amended independent claim 1. The prior art Terry (Paragraph [0068], Paragraph [0069] and Paragraph [0070]) teaches “and wherein the mounting is performed by a filesystem awareness module configured to identify and load the appropriate driver routine based on the detected filesystem and wherein the driver routine does not require any host write operations” and Terry (Paragraph [0070]) teaches “interpreting a data structure associated with the filesystem using the filesystem awareness module”. Thus, the applicant’s arguments against Davies are moot.
Applicant argues on page 8 in regards to the independent claim 1, “Claim 1 as amended herein further recites "storing the result of executing the computation program using a PCIe interface," (emphasis added). This is advantageous as it allows the storage device to communicate with the NVM using the PCIe to store the results of executing the computation program executed on the storage device. Office has cited paragraph [0077] of Mesnier (the block storage device may be a SSD that may be coupled with the host over a local bus such as Peripheral Component Interconnect Express (PCIe)) as disclosing such feature. However, the cited sections - or Mesnier in general - does not disclose "storing the result of executing the computation program using a PCIe interface."”
Examiner respectfully disagrees with arguments on page 8 in regards to the independent claim 1. The combination of Mesnier et al (US PGPUB 20220188028), Davies et al (US PGPUB 20140006465), Snellman et al (US PGPUB 20210209077) and Terry et al (US PGPUB 20110113194) teaches all the limitations of the independent claim 1. Mesnier (Paragraph [0077] and Paragraph [0090]) teaches “and storing the result of executing the computation program using a PCIe interface (the technique may include storing a result of the operation performed at a virtual output object location and/or may be returned to a host where the block storage device may be a SSD that may be coupled with the host over a local bus such as PCIe)”. Thus, applicant’s argument that “However, the cited sections - or Mesnier in general - does not disclose ‘storing the result of executing the computation program using a PCIe interface’” is incorrect.
Applicant argues on page 8 in regards to the independent claim 1, “At least for the reasons discussed above, claim 1 and the claims dependent therefrom are allowable over the cited references.”
Examiner respectfully disagrees with arguments on page 8 in regards to the independent claim 1 and the claims dependent therefrom. For the reasons specified supra for claim 1, the combination of Mesnier et al (US PGPUB 20220188028), Davies et al (US PGPUB 20140006465), Snellman et al (US PGPUB 20210209077) and Terry et al (US PGPUB 20110113194) teaches all the limitations of the independent claim 1 and the claims dependent therefrom. Thus, claim 1 and the claims dependent therefrom are not allowable.
CLAIMS 9 and 17
Applicant argues on page 8 in regards to the independent claims 9 and 17, “Furthermore, each of the claims 9 and 17 are also allowable for the reasons discussed above with respect to claim 1.”
Examiner respectfully disagrees with arguments on page 8 in regards to the independent claims 9 and 17. As specified supra for independent claim 1, the combination of Mesnier et al (US PGPUB 20220188028), Davies et al (US PGPUB 20140006465), Snellman et al (US PGPUB 20210209077) and Terry et al (US PGPUB 20110113194) teaches all the limitations of the independent claims 9 and 17. Thus, claims 9 and 17 are not allowable.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-17 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mesnier et al (US PGPUB 20220188028) in view of Davies et al (US PGPUB 20140006465) and in further view of Snellman et al (US PGPUB 20210209077) and Terry et al (US PGPUB 20110113194).
As per claim 1:
Mesnier teaches:
“A method, comprising” (Paragraph [0068] (a computing process may run on the host))
“receiving, at a computational storage device (CSD), a request to process a file using a computation program stored on one or more non-transitory computer-readable storage media of the CSD” (Paragraph [0066], Paragraph [0081] and Paragraph [0102] (receive a request from a computing process, the request may include a higher-level object (e.g., a file), a requested operation or computation and relating to computational storage, or “compute-in-storage,” which refers to data storage solutions, a computer-readable storage medium having instructions configured to practice all or selected ones of the operations associated with the computing device, a requested operation or computation and relating to computational storage is implemented with the computational capabilities))
“reading physical data blocks associated with the file into a computational storage memory (CSM) of the CSD” (Paragraph [0074] and Paragraph [0088] (one or more standard operations like read operation of the NVM may continue to normally occur while the offloaded compute operation is performed where the operation may require reading multiple block extents for a particular virtual object))
“executing the computation program on the physical data blocks in the CSM” (Paragraph [0103] (the storage medium may include a number of programming instructions and execution of the programming instructions performs various operations described for the compute offload controller, the parsing logic, the compute logic, the compute offloader, the client offload logic, the initiator, the block storage device and/or the compute offloader))
“and storing the result of executing the computation program using a PCIe interface” (Paragraph [0077] and Paragraph [0090] (the technique may include storing a result of the operation performed at a virtual output object location and/or may be returned to a host where the block storage device may be a SSD that may be coupled with the host over a local bus such as PCIe)).
Mesnier does not EXPLICITLY disclose: detecting a filesystem associated with the file within a namespace of CSD; wherein the filesystem provides data structure that identifies how data is stored and retrieved from the namespace associated with the file; and a driver routine associated with the filesystem required to access the file within the namespace; mounting the filesystem on the CSD; and the driver routine associated with the filesystem onto a cache within the CSD wherein the driver routine associated with the filesystem is restricted to read operations; and wherein the mounting is performed by a filesystem awareness module configured to identify and load the appropriate driver routine based on the detected filesystem and wherein the driver routine does not require any host write operations; interpreting a data structure associated with the filesystem using the filesystem awareness module; using the data structure associated with the namespace and the driver routine required to access the file within the namespace onto a cache within the CSD.
However, in an analogous art, Davies teaches:
“detecting a filesystem associated with the file within a namespace of CSD” (Paragraph [0008] (the initial cloud controller uses the namespace mappings for the global namespace to determine a preferred cloud controller that will handle the request))
“mounting the filesystem on the CSD” (Paragraph [0304] (cloud controller instance provides access to the distributed filesystem by exporting a filesystem mount point)).
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Davies and apply them on teachings of Mesnier for the method “detecting a filesystem associated with the file within a namespace of CSD; mounting the filesystem on the CSD”. One would be motivated as the cloud controllers that manage the distributed filesystem track ongoing changes for the distributed filesystem and dynamically adjust the mapping of client systems to cloud controllers and the assignment of namespace mappings to cloud controllers to improve and balance file access performance for the distributed filesystem (Davies, Paragraph [0019]).
Mesnier and Davies do not EXPLICITLY disclose: wherein the filesystem provides data structure that identifies how data is stored and retrieved from the namespace associated with the file; and a driver routine associated with the filesystem required to access the file within the namespace; and the driver routine associated with the filesystem onto a cache within the CSD wherein the driver routine associated with the filesystem is restricted to read operations; and wherein the mounting is performed by a filesystem awareness module configured to identify and load the appropriate driver routine based on the detected filesystem and wherein the driver routine does not require any host write operations; interpreting a data structure associated with the filesystem using the filesystem awareness module; using the data structure associated with the namespace and the driver routine required to access the file within the namespace onto a cache within the CSD.
However, in an analogous art, Snellman teaches:
“wherein the filesystem provides data structure that identifies how data is stored and retrieved from the namespace associated with the file” (Paragraph [0150] and Paragraph [0157] (FIG. 5 shows an example of a resulting data structure in which a plurality of segments are stored in different directed acyclic graphs and FIG. 6 shows an example of a process by which data may be retrieved from the data structure stored in different directed acyclic graphs))
“and a driver routine associated with the filesystem required to access the file within the namespace” (Paragraph [0157] (the process includes detecting an application querying a database which may include executing the security driver and receiving a read request sent to the database driver by the application))
“and the driver routine associated with the filesystem onto a cache within the CSD wherein the driver routine associated with the filesystem is restricted to read operations” (Paragraph [0353] (a security driver may determine that a database request indicates access of data denied to an application, such as based on the application-level user information corresponding to the request, the data indicated for access by the request, the application policy information governing access to that data and may include applying one or more of the rules to deny access to some values or mask or determinatively mask some values, and optionally return some other values))
“using the data structure associated with the namespace and the driver routine required to access the file within the namespace onto a cache within the CSD” (Paragraph [0102] and Paragraph [0123] (the node attributes (data structure) include an identifier of the respective node where the identifier may be an identifier within a namespace of the directed acyclic graph, receiving the read command may cause the security driver to access the lower-trust database or other lower-trust data store and retrieve a pointer to a node or sequence of nodes in which the specified document is stored)).
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Snellman and apply them on teachings of Mesnier and Davies for the method “wherein the filesystem provides data structure that identifies how data is stored and retrieved from the namespace associated with the file; and a driver routine associated with the filesystem required to access the file within the namespace; and the driver routine associated with the filesystem onto a cache within the CSD wherein the driver routine associated with the filesystem is restricted to read operations; using the data structure associated with the namespace and the driver routine required to access the file within the namespace onto a cache within the CSD”. One would be motivated as registering a security driver to receive database requests generated by an application compatible with a database driver, the security driver obtaining a database request generated by the application, detecting, by the security driver, a user agent string appended to the database request, the user agent string including at least one identifier indicative of a user of the application or a client executing the application, and obtaining, by the security driver, a policy governing access by the application to a portion of data within a database arrangement (Snellman, Paragraph [0006]).
Mesnier, Davies and Snellman do not EXPLICITLY disclose: and wherein the mounting is performed by a filesystem awareness module configured to identify and load the appropriate driver routine based on the detected filesystem and wherein the driver routine does not require any host write operations; interpreting a data structure associated with the filesystem using the filesystem awareness module.
However, in an analogous art, Terry teaches:
“and wherein the mounting is performed by a filesystem awareness module configured to identify and load the appropriate driver routine based on the detected filesystem and wherein the driver routine does not require any host write operations” (Paragraph [0068], Paragraph [0069] and Paragraph [0070] (a filesystem-aware storage system analyzes host filesystem data structures in order to determine storage usage of the host filesystem, can also identify the data types of objects stored by the filesystem and store the objects using different storage schemes based on the data types and in order to determine the filesystem type, the filesystem-aware block storage device will generally support a set of filesystems for which it "understands" the inner workings sufficiently))
“interpreting a data structure associated with the filesystem using the filesystem awareness module” (Paragraph [0070] (the filesystem-aware block storage device will generally support a set of filesystems for which it utilizes the underlying data structures (e.g., free block bitmaps), once the filesystem type is known, the filesystem-aware block storage device can parse the superblock to find the free block bitmaps for the host filesystem)).
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Terry and apply them on teachings of Mesnier, Davies and Snellman for the method “and wherein the mounting is performed by a filesystem awareness module configured to identify and load the appropriate driver routine based on the detected filesystem and wherein the driver routine does not require any host write operations; interpreting a data structure associated with the filesystem using the filesystem awareness module”. One would be motivated as such a filesystem-aware block storage device can make intelligent decisions regarding the physical storage of data; for example, the filesystem-aware block storage device can identify blocks that have been released by the host filesystem and reuse the released blocks in order to effectively extend the data storage capacity of the system (Terry, Paragraph [0069]).
As per claim 2:
Mesnier, Davies, Snellman and Terry teach the method as specified in the parent claim 1 above.
Mesnier further teaches:
“storing the results of the computation program execution in a cache” (Paragraph [0123] (the results are either stored directly in the context in caches to ensure host-target consistency and then returned to the caller)).
As per claim 3:
Mesnier, Davies, Snellman and Terry teach the method as specified in the parent claim 2 above.
Mesnier further teaches:
“further comprising providing access to the result of the computation program execution to a host” (Paragraph [0090] (it may include storing a result of the operation performed, the result may be stored at a virtual output object location and/or may be returned to a host)).
As per claim 4:
Mesnier, Davies, Snellman and Terry teach the method as specified in the parent claim 3 above.
Mesnier further teaches:
“wherein providing access to the result of the computation program execution to a host further comprising providing access to the result of the computation program execution to a host via a PCI express interface” (Paragraph [0077] (the block storage device may be a SSD that may be coupled with the host over a local bus such as Peripheral Component Interconnect Express (PCIe))).
As per claim 5:
Mesnier, Davies, Snellman and Terry teach the method as specified in the parent claim 1 above.
Mesnier further teaches:
“wherein a filesystem aware module of the CSD receives identification of the filesystem associated with the file from the host” (Paragraph [0117] (an application uses a client object aware storage (OAS) library to create the virtual object and compute descriptor, and to send the command to the initiator for transport)).
As per claim 6:
Mesnier, Davies, Snellman and Terry teach the method as specified in the parent claim 5 above.
Mesnier further teaches:
“further comprising syncing with the host before mounting the filesystem on the CSD” (Paragraph [0416] (the file must be “synced” on the host prior to issuing the compute-in-storage operation)).
As per claim 7:
Mesnier, Davies, Snellman and Terry teach the method as specified in the parent claim 6 above.
Mesnier further teaches:
“further comprising keeping the file system mounted until receiving an unmount instruction from the host” (Paragraph [0187] (the storage server has access to the entire file or object (keeping the file system mounted until receiving an unmount instruction from the host) and can, therefore, perform the search close to the storage and simply return the matching result to the host)).
As per claim 8:
Mesnier, Davies, Snellman and Terry teach the method as specified in the parent claim 1 above.
Mesnier further teaches:
“wherein the mounted filesystem restricts the computation program operations to read only” (Paragraph [0074] and Paragraph [0585] (one or more standard operations like read operation of the NVM may continue to normally occur while the offloaded compute operation is performed and a particular access restriction to edge services may be applied)).
As per claim 9:
Mesnier teaches:
“One or more non-transitory computer-readable storage media encoding computer-executable instructions for executing on a computer system a computer process, the computer process comprising” (Paragraph [0102] (a computer-readable storage medium having instructions configured to practice all or selected ones of the operations associated with the computing device including))
“receiving, at a computational storage device (CSD), a request to process a file using a computation program stored on the CSD” (Paragraph [0066] and Paragraph [0081] (receive a request from a computing process, the request may include a higher-level object (e.g., a file), a requested operation or computation and relating to computational storage, or “compute-in-storage,” which refers to data storage solutions that are implemented with computational capabilities))
“reading physical data blocks associated with the file into a computational storage memory (CSM) of the CSD” (Paragraph [0074] and Paragraph [0088] (one or more standard operations like read operation of the NVM may continue to normally occur while the offloaded compute operation is performed where the operation may require reading multiple block extents for a particular virtual object))
“executing the computation program on the physical data blocks in the CSM” (Paragraph [0103] (the storage medium may include a number of programming instructions and execution of the programming instructions performs various operations described for the compute offload controller, the parsing logic, the compute logic, the compute offloader, the client offload logic, the initiator, the block storage device and/or the compute offloader))
“and storing the result of executing the computation program using a PCIe interface” (Paragraph [0077] and Paragraph [0090] (the technique may include storing a result of the operation performed at a virtual output object location and/or may be returned to a host where the block storage device may be a SSD that may be coupled with the host over a local bus such as PCIe)).
Mesnier does not EXPLICITLY disclose: detecting a filesystem associated with the file within a namespace of CSD; wherein the filesystem provides data structure that identifies how data is stored and retrieved from the namespace associated with the file; and a driver routine associated with the filesystem required to access the file within the namespace; mounting the filesystem on the CSD; and the driver routine associated with the filesystem onto a cache within the CSD wherein the driver routine associated with the filesystem is restricted to read operations; and wherein the mounting is performed by a filesystem awareness module configured to identify and load the appropriate driver routine based on the detected filesystem and wherein the driver routine does not require any host write operations; interpreting a data structure associated with the filesystem using the filesystem awareness module; using the data structure associated with the namespace and the driver routine required to access the file within the namespace onto a cache within the CSD.
However, in an analogous art, Davies teaches:
“detecting a filesystem associated with the file within a namespace of CSD” (Paragraph [0008] (the initial cloud controller uses the namespace mappings for the global namespace to determine a preferred cloud controller that will handle the request))
“mounting the filesystem on the CSD” (Paragraph [0304] (cloud controller instance provides access to the distributed filesystem by exporting a filesystem mount point)).
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Davies and apply them on teachings of Mesnier for one or more non-transitory computer-readable storage media “detecting a filesystem associated with the file within a namespace of CSD; mounting the filesystem on the CSD”. One would be motivated as the cloud controllers that manage the distributed filesystem track ongoing changes for the distributed filesystem and dynamically adjust the mapping of client systems to cloud controllers and the assignment of namespace mappings to cloud controllers to improve and balance file access performance for the distributed filesystem (Davies, Paragraph [0019]).
Mesnier and Davies do not EXPLICITLY disclose: wherein the filesystem provides data structure that identifies how data is stored and retrieved from the namespace associated with the file; and a driver routine associated with the filesystem required to access the file within the namespace; and the driver routine associated with the filesystem onto a cache within the CSD wherein the driver routine associated with the filesystem is restricted to read operations; and wherein the mounting is performed by a filesystem awareness module configured to identify and load the appropriate driver routine based on the detected filesystem and wherein the driver routine does not require any host write operations; interpreting a data structure associated with the filesystem using the filesystem awareness module; using the data structure associated with the namespace and the driver routine required to access the file within the namespace onto a cache within the CSD.
However, in an analogous art, Snellman teaches:
“wherein the filesystem provides data structure that identifies how data is stored and retrieved from the namespace associated with the file” (Paragraph [0150] and Paragraph [0157] (FIG. 5 shows an example of a resulting data structure in which a plurality of segments are stored in different directed acyclic graphs and FIG. 6 shows an example of a process by which data may be retrieved from the data structure stored in different directed acyclic graphs))
“and a driver routine associated with the filesystem required to access the file within the namespace” (Paragraph [0157] (the process includes detecting an application querying a database which may include executing the security driver and receiving a read request sent to the database driver by the application))
“and the driver routine associated with the filesystem onto a cache within the CSD wherein the driver routine associated with the filesystem is restricted to read operations” (Paragraph [0353] (a security driver may determine that a database request indicates access of data denied to an application, such as based on the application-level user information corresponding to the request, the data indicated for access by the request, the application policy information governing access to that data and may include applying one or more of the rules to deny access to some values or mask or determinatively mask some values, and optionally return some other values))
“using the data structure associated with the namespace and the driver routine required to access the file within the namespace onto a cache within the CSD” (Paragraph [0102] and Paragraph [0123] (the node attributes (data structure) include an identifier of the respective node where the identifier may be an identifier within a namespace of the directed acyclic graph, receiving the read command may cause the security driver to access the lower-trust database or other lower-trust data store and retrieve a pointer to a node or sequence of nodes in which the specified document is stored)).
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Snellman and apply them on teachings of Mesnier and Davies for one or more non-transitory computer-readable storage media “wherein the filesystem provides data structure that identifies how data is stored and retrieved from the namespace associated with the file; and a driver routine associated with the filesystem required to access the file within the namespace; and the driver routine associated with the filesystem onto a cache within the CSD wherein the driver routine associated with the filesystem is restricted to read operations; using the data structure associated with the namespace and the driver routine required to access the file within the namespace onto a cache within the CSD”. One would be motivated as registering a security driver to receive database requests generated by an application compatible with a database driver, the security driver obtaining a database request generated by the application, detecting, by the security driver, a user agent string appended to the database request, the user agent string including at least one identifier indicative of a user of the application or a client executing the application, and obtaining, by the security driver, a policy governing access by the application to a portion of data within a database arrangement (Snellman, Paragraph [0006]).
Mesnier, Davies and Snellman do not EXPLICITLY disclose: and wherein the mounting is performed by a filesystem awareness module configured to identify and load the appropriate driver routine based on the detected filesystem and wherein the driver routine does not require any host write operations; interpreting a data structure associated with the filesystem using the filesystem awareness module.
However, in an analogous art, Terry teaches:
“and wherein the mounting is performed by a filesystem awareness module configured to identify and load the appropriate driver routine based on the detected filesystem and wherein the driver routine does not require any host write operations” (Paragraph [0068], Paragraph [0069] and Paragraph [0070] (a filesystem-aware storage system analyzes host filesystem data structures in order to determine storage usage of the host filesystem, can also identify the data types of objects stored by the filesystem and store the objects using different storage schemes based on the data types and in order to determine the filesystem type, the filesystem-aware block storage device will generally support a set of filesystems for which it "understands" the inner workings sufficiently))
“interpreting a data structure associated with the filesystem using the filesystem awareness module” (Paragraph [0070] (the filesystem-aware block storage device will generally support a set of filesystems for which it utilizes the underlying data structures (e.g., free block bitmaps), once the filesystem type is known, the filesystem-aware block storage device can parse the superblock to find the free block bitmaps for the host filesystem)).
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Terry and apply them on teachings of Mesnier, Davies and Snellman for one or more non-transitory computer-readable storage media “and wherein the mounting is performed by a filesystem awareness module configured to identify and load the appropriate driver routine based on the detected filesystem and wherein the driver routine does not require any host write operations; interpreting a data structure associated with the filesystem using the filesystem awareness module”. One would be motivated as such a filesystem-aware block storage device can make intelligent decisions regarding the physical storage of data; for example, the filesystem-aware block storage device can identify blocks that have been released by the host filesystem and reuse the released blocks in order to effectively extend the data storage capacity of the system (Terry, Paragraph [0069]).
As per claim 10, the claim is rejected based upon the same rationale given for parent claim 9 and claim 2 above.
As per claim 11, the claim is rejected based upon the same rationale given for parent claim 10 and claim 3 above.
As per claim 12, the claim is rejected based upon the same rationale given for parent claim 11 and claim 4 above.
As per claim 13, the claim is rejected based upon the same rationale given for parent claim 9 and claim 5 above.
As per claim 14, the claim is rejected based upon the same rationale given for parent claim 13 and claim 6 above.
As per claim 15, the claim is rejected based upon the same rationale given for parent claim 14 and claim 7 above.
As per claim 16, the claim is rejected based upon the same rationale given for parent claim 9 and claim 8 above.
As per claim 17:
Mesnier teaches:
“A system, comprising” (Paragraph [0068] (a computer system comprising))
“a PCIe interface configured to communicate with computational storage memory (CSM) of a computational storage device (CSD) using an NVMe interface” (Paragraph [0073], Paragraph [0077] and Paragraph [0081] (the block storage device may include NVM and a compute offload controller, where the compute offload controller may be an NVM controller, an SSD controller, a storage server controller, or any other suitable block-based storage controller; the compute logic may perform the requested compute operation against the virtual input object, and the block storage device may be an SSD that may be coupled with the host over a local bus such as Peripheral Component Interconnect Express (PCIe)))
“a computational storage processor (CSP) configured to communicate with one or more hosts using the PCIe interface” (Paragraph [0645] and Paragraph [0665] (the components may communicate over the interconnect where the interconnect may include any number of technologies, including PCI express (PCIe) and the processors are connected using PCIe))
“a filesystem awareness module configured on a computational program memory (CPM) to access one or more of a plurality of filesystems” (Paragraph [0117] (an application uses a client object aware storage (OAS) library to create the virtual object and compute descriptor, and to send the command to the initiator for transport))
“wherein the CSP is configured to” (Paragraph [0073] (the compute offload controller may be configured to))
“receive a request to process a file using a computation program stored on the CSD” (Paragraph [0066] and Paragraph [0081] (receive a request from a computing process, the request may include a higher-level object (e.g., a file), a requested operation or computation and relating to computational storage, or “compute-in-storage,” which refers to data storage solutions that are implemented with computational capabilities))
“read physical data blocks associated with the file into a computational storage memory (CSM) of the CSD” (Paragraph [0074] and Paragraph [0088] (one or more standard operations, such as a read operation of the NVM, may continue to occur normally while the offloaded compute operation is performed, where the operation may require reading multiple block extents for a particular virtual object))
“execute the computation program on the physical data blocks in the CSM” (Paragraph [0103] (the storage medium may include a number of programming instructions and execution of the programming instructions performs various operations described for the compute offload controller, the parsing logic, the compute logic, the compute offloader, the client offload logic, the initiator, the block storage device and/or the compute offloader))
“and storing the result of executing the computation program using a PCIe interface” (Paragraph [0077] and Paragraph [0090] (the technique may include storing a result of the operation performed at a virtual output object location and/or the result may be returned to a host, where the block storage device may be an SSD that may be coupled with the host over a local bus such as PCIe)).
Mesnier does not EXPLICITLY disclose: detecting a filesystem associated with the file within a namespace of CSD; wherein the filesystem provides data structure that identifies how data is stored and retrieved from the namespace associated with the file; and a driver routine associated with the filesystem required to access the file within the namespace; mounting the filesystem on the CSD; and the driver routine associated with the filesystem onto a cache within the CSD wherein the driver routine associated with the filesystem is restricted to read operations; and wherein the mounting is performed by a filesystem awareness module configured to identify and load the appropriate driver routine based on the detected filesystem and wherein the driver routine does not require any host write operations; interpreting a data structure associated with the filesystem using the filesystem awareness module; using the data structure associated with the namespace and the driver routine required to access the file within the namespace onto a cache within the CSD.
However, in an analogous art, Davies teaches:
“detecting a filesystem associated with the file within a namespace of CSD” (Paragraph [0008] (the initial cloud controller uses the namespace mappings for the global namespace to determine a preferred cloud controller that will handle the request))
“mounting the filesystem on the CSD” (Paragraph [0304] (cloud controller instance provides access to the distributed filesystem by exporting a filesystem mount point)).
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Davies and apply them to the teachings of Mesnier for the system “detecting a filesystem associated with the file within a namespace of CSD; mounting the filesystem on the CSD”. One would be motivated to do so because the cloud controllers that manage the distributed filesystem track ongoing changes for the distributed filesystem and dynamically adjust the mapping of client systems to cloud controllers and the assignment of namespace mappings to cloud controllers to improve and balance file access performance for the distributed filesystem (Davies, Paragraph [0019]).
Mesnier and Davies do not EXPLICITLY disclose: wherein the filesystem provides data structure that identifies how data is stored and retrieved from the namespace associated with the file; and a driver routine associated with the filesystem required to access the file within the namespace; and the driver routine associated with the filesystem onto a cache within the CSD wherein the driver routine associated with the filesystem is restricted to read operations; and wherein the mounting is performed by a filesystem awareness module configured to identify and load the appropriate driver routine based on the detected filesystem and wherein the driver routine does not require any host write operations; interpreting a data structure associated with the filesystem using the filesystem awareness module; using the data structure associated with the namespace and the driver routine required to access the file within the namespace onto a cache within the CSD.
However, in an analogous art, Snellman teaches:
“wherein the filesystem provides data structure that identifies how data is stored and retrieved from the namespace associated with the file” (Paragraph [0150] and Paragraph [0157] (FIG. 5 shows an example of a resulting data structure in which a plurality of segments are stored in different directed acyclic graphs and FIG. 6 shows an example of a process by which data may be retrieved from the data structure stored in different directed acyclic graphs))
“and a driver routine associated with the filesystem required to access the file within the namespace” (Paragraph [0157] (the process includes detecting an application querying a database which may include executing the security driver and receiving a read request sent to the database driver by the application))
“and the driver routine associated with the filesystem onto a cache within the CSD wherein the driver routine associated with the filesystem is restricted to read operations” (Paragraph [0353] (a security driver may determine that a database request indicates access of data denied to an application, such as based on the application-level user information corresponding to the request, the data indicated for access by the request, the application policy information governing access to that data and may include applying one or more of the rules to deny access to some values or mask or determinatively mask some values, and optionally return some other values))
“using the data structure associated with the namespace and the driver routine required to access the file within the namespace onto a cache within the CSD” (Paragraph [0102] and Paragraph [0123] (the node attributes (data structure) include an identifier of the respective node where the identifier may be an identifier within a namespace of the directed acyclic graph, receiving the read command may cause the security driver to access the lower-trust database or other lower-trust data store and retrieve a pointer to a node or sequence of nodes in which the specified document is stored)).
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Snellman and apply them to the teachings of Mesnier and Davies for the system “wherein the filesystem provides data structure that identifies how data is stored and retrieved from the namespace associated with the file; and a driver routine associated with the filesystem required to access the file within the namespace; and the driver routine associated with the filesystem onto a cache within the CSD wherein the driver routine associated with the filesystem is restricted to read operations; using the data structure associated with the namespace and the driver routine required to access the file within the namespace onto a cache within the CSD”. One would be motivated to do so because Snellman teaches registering a security driver to receive database requests generated by an application compatible with a database driver; the security driver obtaining a database request generated by the application; detecting, by the security driver, a user agent string appended to the database request, the user agent string including at least one identifier indicative of a user of the application or a client executing the application; and obtaining, by the security driver, a policy by which access to a portion of data within a database arrangement by the application is controlled (Snellman, Paragraph [0006]).
Mesnier, Davies and Snellman do not EXPLICITLY disclose: and wherein the mounting is performed by a filesystem awareness module configured to identify and load the appropriate driver routine based on the detected filesystem and wherein the driver routine does not require any host write operations; interpreting a data structure associated with the filesystem using the filesystem awareness module.
However, in an analogous art, Terry teaches:
“and wherein the mounting is performed by a filesystem awareness module configured to identify and load the appropriate driver routine based on the detected filesystem and wherein the driver routine does not require any host write operations” (Paragraph [0068], Paragraph [0069] and Paragraph [0070] (a filesystem-aware storage system analyzes host filesystem data structures in order to determine storage usage of the host filesystem, and can also identify the data types of objects stored by the filesystem and store the objects using different storage schemes based on the data types; in order to determine the filesystem type, the filesystem-aware block storage device will generally support a set of filesystems for which it "understands" the inner workings sufficiently))
“interpreting a data structure associated with the filesystem using the filesystem awareness module” (Paragraph [0070] (the filesystem-aware block storage device will generally support a set of filesystems for which it utilizes the underlying data structures (e.g., free block bitmaps); once the filesystem type is known, the filesystem-aware block storage device can parse the superblock to find the free block bitmaps for the host filesystem)).
It would have been obvious to one of ordinary skill in the art before the effective filing date to take the teachings of Terry and apply them to the teachings of Mesnier, Davies and Snellman for the system “and wherein the mounting is performed by a filesystem awareness module configured to identify and load the appropriate driver routine based on the detected filesystem and wherein the driver routine does not require any host write operations; interpreting a data structure associated with the filesystem using the filesystem awareness module”. One would be motivated to do so because such a filesystem-aware block storage device can make intelligent decisions regarding the physical storage of data; for example, it can identify blocks that have been released by the host filesystem and reuse the released blocks in order to effectively extend the data storage capacity of the system (Terry, Paragraph [0069]).
As per claim 19, the claim is rejected based upon the same rationale given for parent claim 17 and claim 5 above.
As per claim 20, the claim is rejected based upon the same rationale given for parent claim 17 and claim 8 above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Thadikaran et al. (US PGPUB 20170235953): Systems and methods for providing awareness of a host file system on a storage device are described. In one embodiment, a storage device includes a host interface and a file awareness block. The host interface provides an interface between a host and the storage device. The file awareness block provides an awareness of the host file system to the storage device.
Borthakur et al. (US PGPUB 20140067778): A method of operation of a storage control system includes: configuring a state change policy on a data server, the state change policy including an online duration for a storage device; activating the storage device based on the state change policy; mounting the storage device based on the state change policy; and scheduling a filesystem maintenance task to be performed on the storage device based on the state change policy.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAMAL K DEWAN whose telephone number is (571)272-2196. The examiner can normally be reached on Mon-Fri 8:00 AM – 5:00 PM (EST). If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, TONY MAHMOUDI can be reached on 571-272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for published applications is available in Patent Center; status information for unpublished applications is available through Patent Center to authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Kamal K Dewan/
Examiner, Art Unit 2163
/TONY MAHMOUDI/Supervisory Patent Examiner, Art Unit 2163