Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This communication is responsive to amendment filed on 09/18/2025.
Status of Claims
Claims 21-23 are newly added.
Claims 2, 8, and 15 are canceled.
Claims 1, 3-4, 7, 9-14, 16 and 18-20 are amended.
Claims 1, 3-7, 9-14 and 16-23 are presented for examination.
Information Disclosure Statement
The information disclosure statement (IDS) filed on 01/07/2026 complies with the provisions of MPEP § 609. The information referred to therein has been considered as to the merits.
Response to Arguments
Applicant’s arguments filed on 09/18/2025 with respect to the amended limitations of the claims have been considered and are moot in view of the new ground(s) of rejection necessitated by amendment.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-7, 9-14 and 16-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claims 1, 14 and 20:
Step 1:
The claims are directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter.
Step 2A, Prong One:
The claims recite the limitation:
“determining…; generating…”, which is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “in a file system,” nothing in the claim element precludes the steps from practically being performed in the human mind.
If a claim limitation, under its broadest reasonable interpretation, covers mental processes but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas (concepts performed in the human mind including an observation, evaluation, judgment, and opinion). Accordingly, the claims recite an abstract idea.
Step 2A, Prong Two:
This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements:
“using the managed directory group to apply …”, which represents insignificant extra-solution activity because it is a mere nominal or tangential addition to the claim, i.e., mere generic transmission and presentation of collected and analyzed data (see MPEP 2106.05(g)).
“a file system, the first managed directory … and the second managed directory …; managed directory group …; provide directory-level storage management functionality …”, which amount to data-gathering steps considered to be insignificant extra-solution activity (see MPEP 2106.05(g)). Furthermore, the independent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The insignificant extra-solution activities identified above, which include the data-gathering and presenting steps, are recognized by the courts as well-understood, routine, and conventional activities when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. See MPEP 2106.05(d)(II): (i) receiving or transmitting data over a network, e.g., using the Internet to gather data, buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); (v) presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93.
“processor, memory and non-transitory computer readable storage medium” amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields, as demonstrated by relevant court decisions. The following is an example of a court decision demonstrating well-understood, routine, and conventional activities, see e.g., MPEP 2106.05(d)(II): computer readable storage media comprising instructions to implement a method, e.g., Versata Dev. Group, Inc. v. SAP Am., Inc.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements when considered both individually and as an ordered combination do not amount to significantly more than the abstract idea.
Accordingly, claims 1, 14 and 20 are directed to an abstract idea without significantly more.
Claim 3: recites the additional element at a high level of generality, and it would function in its ordinary capacity for having a dependency between the first and second managed directories. This additional element does not integrate the judicial exception into a practical application and does not amount to significantly more.
Claim 4: recites the limitation with no additional elements; therefore, the claim does not integrate the abstract idea into a practical application and does not amount to significantly more.
Claim 5: recites the additional element at a high level of generality, and it would function in its ordinary capacity for indicating the relationship between the first and second managed directories. This additional element does not integrate the judicial exception into a practical application and does not amount to significantly more.
Claim 6: recites the additional element at a high level of generality, and it would function in its ordinary capacity for indicating the classification of the first and second managed directories. This additional element does not integrate the judicial exception into a practical application and does not amount to significantly more.
Claim 7: recites the additional element at a high level of generality, and it would function in its ordinary capacity for associating the first and second managed directories with one or more virtual machines. This additional element does not integrate the judicial exception into a practical application and does not amount to significantly more.
Claim 9: recites the additional element at a high level of generality, and it would function in its ordinary capacity for associating the first and second directories with one or more containers. This additional element does not integrate the judicial exception into a practical application and does not amount to significantly more.
Claim 10: recites the additional element at a high level of generality, and it would function in its ordinary capacity for grouping the policies of the managed directories. This additional element does not integrate the judicial exception into a practical application and does not amount to significantly more.
Claim 11: recites the additional element at a high level of generality, and it would function in its ordinary capacity for associating the policies of the managed directory group with one or more snapshot policies associated with the first and second managed directories. This additional element does not integrate the judicial exception into a practical application and does not amount to significantly more.
Claim 12: recites the limitation with no additional elements; therefore, the claim does not integrate the abstract idea into a practical application and does not amount to significantly more.
Claim 13: recites the additional element at a high level of generality, and it would function in its ordinary capacity for performing the grouping of the first and second managed directories. This additional element does not integrate the judicial exception into a practical application and does not amount to significantly more.
Claims 16-19: are computer program product claims that recite the same limitations as method claims 3, 5, 7 and 12. Therefore, claims 16-19 are rejected under the same rationale as claims 3, 5, 7 and 12 above.
Claim 21: recites the additional element at a high level of generality. This additional element does not integrate the judicial exception into a practical application and does not amount to significantly more.
Claim 22: recites the additional element at a high level of generality. This additional element does not integrate the judicial exception into a practical application and does not amount to significantly more.
Claim 23: recites the additional element at a high level of generality. This additional element does not integrate the judicial exception into a practical application and does not amount to significantly more.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-7, 9-14 and 16-23 are rejected under 35 U.S.C. 103 as being unpatentable over Shah et al. (US 9,720,619 B1), hereinafter “Shah”, in view of Botes et al. (US 2018/0260125), hereinafter “Botes”.
As per claim 1, Shah discloses a computer-implemented method comprising:
- determining, in a file system, that a first managed directory and a second managed directory satisfy one or more criteria, the first managed directory configured to provide directory-level storage management functionality to contents of the first managed directory based on policies of the first managed directory and the second managed directory configured to provide directory-level storage management functionality to contents of the second managed directory based on policies of the second managed directory (col. 5 line 63 to col. 6 line 15, the StorFS system uses techniques to efficiently track changes in a distributed setting. The StorFS is capable of performing low overhead data services at different levels of granularity: from the file system level to the individual files level. The StorFS system stripes file and file system data across multiple storage nodes in the cluster; col. 8 line 43 to col. 9 line 12, there can be a many-to-one relationship between metadata vNodes and cache vNodes. The number of metadata and cache vNodes is specified as a configuration parameter during cluster creation, and this number remains fixed for the entire lifecycle of the cluster. Data vNodes are created on demand when new data is written to the StorFS system. The redundancy level is configurable based on user-defined policies. Cache vNodes can only be placed on storage devices that allow fast random access),
- generating, in the file system and based on the determination, a managed directory group configured to provide coordinated storage management functionality to the contents of the first managed directory and the contents of the second managed directory as a group based on policies of the managed directory group (col. 5 lines 4-56, the StorFS system can create snapshots efficiently. The StorFS system has the ability to snapshot an individual file, a set of files or directories, or an entire file system, and to capture the point-in-time state of both the write log and the persistent storage of the object(s) in a snapshot. There is a storage controller server running on each physical node (pNode) in the cluster, and each physical node owns a set of virtual nodes (vNodes), where the request is to create a snapshot for the stripe hosted by that virtual node. The request to create a snapshot on the FT vNode freezes the current in-memory delta tree state of the source object stripe that is being cloned. The delta tree captures the modification to the on-disk storage that is logged into the write log; col. 9 lines 12-55, the File System Layer 214 provides a logical global namespace abstraction to organize and locate data in the cluster. The Data Service Layer 212 provides enterprise data services like disaster recovery, fine grained policy management, snapshots/clones, etc. The Write Cache 208 and the Read Cache 210 Layers provide acceleration for write and read I/O, respectively, using fast storage devices; and col. 10 lines 10-25, each of the individual datastores 310A-N includes a datastore identifier 314A-N and inode tree 316A-N. Each inode tree 316A-N points to the directory of one or more inode(s)); and
- using the managed directory group to apply a file system operation to the contents of the first managed directory and the contents of the second managed directory as a group based on the policies of the managed directory group (col. 8 line 43 to col. 9 line 12, the StorFS system stripes file and file system data across multiple storage nodes in the cluster. There can be a many-to-one relationship between metadata vNodes and cache vNodes. The number of metadata and cache vNodes is specified as a configuration parameter during cluster creation, and this number remains fixed for the entire lifecycle of the cluster. Data vNodes are created on demand when new data is written to the StorFS system. The redundancy level is configurable based on user-defined policies. Cache vNodes can only be placed on storage devices that allow fast random access; and col. 12 lines 1-10, the delta-delta tree 500 includes references between the two groups of inodes, wherein the parent inode pointer 524 of inode 520 points to inode 550N).
However, Shah does not disclose wherein the first managed directory and the second managed directory are distinct managed directories having respective contents arranged in non-overlapping directory trees.
Meanwhile, Botes discloses wherein the first managed directory and the second managed directory are distinct managed directories having respective contents arranged in non-overlapping directory trees (par. [0114], the first stage maps an entity identifier (ID), e.g., a segment number, inode number, or directory number to an authority identifier; and par. [0436], resynchronization may proceed by first transferring a more recent snapshot, such as one created at a beginning of an attach, by having the target storage system retrieve it from in-sync storage systems in the manner described above, where the target incrementally requests leaf and composite logical extents that it does not have. This process may include accounting for in-progress operations at the time of the detach, where at the end of this process, the content up to that more recent snapshot is synchronized between the in-sync storage systems for the pod and the attaching storage system).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Shah to include the features as disclosed by Botes in order to provide a complete storage service and to collaborate on various aspects of a storage service, including storage allocation and garbage collection.
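For context only, the three steps recited in independent claim 1 (determining that two managed directories satisfy one or more criteria, generating a managed directory group, and applying a file system operation to both directories' contents as a group) can be illustrated with a short sketch. All names (ManagedDirectory, satisfies_criteria, etc.), the shared-policy criterion, and the example paths are hypothetical assumptions for illustration; they are not drawn from the application, Shah, or Botes.

```python
# Illustrative sketch of the claimed sequence: determine -> group -> apply.
from dataclasses import dataclass, field

@dataclass
class ManagedDirectory:
    path: str
    policies: dict = field(default_factory=dict)

@dataclass
class ManagedDirectoryGroup:
    members: list
    policies: dict

def satisfies_criteria(d1: ManagedDirectory, d2: ManagedDirectory) -> bool:
    # Example criterion (an assumption): the directories share at least one
    # policy key, e.g. a common snapshot schedule.
    return bool(set(d1.policies) & set(d2.policies))

def generate_group(d1: ManagedDirectory, d2: ManagedDirectory) -> ManagedDirectoryGroup:
    # Group policies derived from the member directories' policies.
    merged = {**d1.policies, **d2.policies}
    return ManagedDirectoryGroup(members=[d1, d2], policies=merged)

def apply_group_operation(group: ManagedDirectoryGroup, operation):
    # Apply a single file system operation to every member as a group.
    return [operation(member) for member in group.members]

d1 = ManagedDirectory("/fs/app/data", {"snapshot": "hourly"})
d2 = ManagedDirectory("/fs/app/logs", {"snapshot": "hourly"})
if satisfies_criteria(d1, d2):
    group = generate_group(d1, d2)
    results = apply_group_operation(group, lambda m: f"snapshot:{m.path}")
```

This sketch only mirrors the claim's logical structure; it says nothing about how Shah's StorFS system or any actual file system implements these steps.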
As per claim 3, Shah further discloses the one or more criteria comprises the first and second managed directories having a dependency (col. 10 lines 9-26 and col. 12 lines 1-10, each inode 318A-N includes an inode identifier 320A-N, parent inode pointer 322A-N, and the dirty tree pointer 324A-N. The inode identifier 320A-N identifies the respective inode 320A-N, wherein the parent inode pointer 322A-N references the parent inode for this inode 320A-N).
Botes also discloses the one or more criteria comprises the first and second managed directories having a non-hierarchical dependency (par. [0376], a sequence number based model, recovery could query for all operations on any in-sync storage system associated with a sequence number larger than the last completed sequence number. In a symmetric implementation without a leader, each storage system that receives request for the pod could define its own sliding window and sliding window identity space; and par. [0508], the host (3402) may identify (3504) a particular storage system of the plurality of storage systems (3424, 3426, 3428) as a preferred storage system for receiving the I/O operation (3416) in dependence upon the respective response times for multiple storage systems of the plurality of storage systems).
As per claim 4, Botes further discloses the non-hierarchical dependency is determined based on a sequence of data operations between the first and second managed directories satisfying the one or more criteria (par. [0376], a sequence number based model, recovery could query for all operations on any in-sync storage system associated with a sequence number larger than the last completed sequence number. In a symmetric implementation without a leader, each storage system that receives request for the pod could define its own sliding window and sliding window identity space; and par. [0508], the host (3402) may identify (3504) a particular storage system of the plurality of storage systems (3424, 3426, 3428) as a preferred storage system for receiving the I/O operation (3416) in dependence upon the respective response times for multiple storage systems of the plurality of storage systems (3424, 3426, 3428), for example, by selecting the storage system associated with the fastest response times as the preferred storage system, by selecting any storage system whose response times satisfy a predetermined quality of service threshold as the preferred storage system).
As per claim 5, Shah further discloses the one or more criteria comprises one or more configuration files indicating that the first and second managed directories are related (col. 8 lines 42-61, multiple cache vNodes are grouped together to form an abstraction called ‘Write Log Group’; col. 9 lines 12-55, the File System Layer 214 provides a logical global namespace abstraction to organize and locate data in the cluster. The Data Service Layer 212 provides enterprise data services like disaster recovery, fine grained policy management, snapshots/clones, etc. The Write Cache 208 and the Read Cache 210 Layers provide acceleration for write and read I/O, respectively, using fast storage devices).
As per claim 6, Shah further discloses the one or more criteria comprises user-specified data indicating that the first and second managed directories belong to the managed directory group (col. 8 line 43 to col. 9 line 12, the StorFS system stripes file and file system data across multiple storage nodes in the cluster. There can be a many-to-one relationship between metadata vNodes and cache vNodes. The number of metadata and cache vNodes is specified as a configuration parameter during cluster creation, and this number remains fixed for the entire lifecycle of the cluster. Data vNodes are created on demand when new data is written to the StorFS system. The redundancy level is configurable based on user-defined policies. Cache vNodes can only be placed on storage devices that allow fast random access; and col. 12 lines 1-10, the delta-delta tree 500 includes references between the two groups of inodes, wherein the parent inode pointer 524 of inode 520 points to inode 550N).
As per claim 7, Shah further discloses the first and second managed directories satisfying the one or more criteria based on the first and second managed directories being associated with one or more virtual machines configured as an application cluster (col. 4 lines 30-42, as datacenters are virtualized, virtual machine snapshotting and cloning has become a very important feature. The object can be one or more files, directories, volumes, filesystems, and/or a combination thereof; col. 5 lines 15-30, the StorFS system additionally has the ability to capture the point-in-time state of both the write log and the persistent storage of the object(s) in a snapshot. There is a storage controller server running on each physical node (pNode) in the cluster, and each physical node owns a set of virtual nodes (vNodes)).
As per claim 9, Shah further discloses the first and second managed directories satisfying the one or more criteria based on the first and second managed directories being associated with one or more containers generated from a single configuration file (col. 4 lines 30-42, virtual machine snapshotting and cloning has become a very important feature, wherein a snapshot is a copy of a source object stored in the StorFS system. The object can be one or more files, directories, volumes, filesystems, and/or a combination thereof. The snapshot can be read-only or writeable. In addition, a clone is a writeable snapshot).
As per claim 10, Shah further discloses generating the managed directory group comprises generating the policies of the managed directory group based on the policies of the first and second managed directories (col. 4 lines 30-45 and col. 5 lines 4-56, the StorFS system has the ability to snapshot an individual file, a set of files or directories, or an entire file system, and to capture the point-in-time state of both the write log and the persistent storage of the object(s) in a snapshot; col. 9 lines 12-55 and col. 6 lines 1-15, the File System Layer 214 provides a logical global namespace abstraction to organize and locate data in the cluster. The Data Service Layer 212 provides enterprise data services like disaster recovery, fine grained policy management, snapshots/clones, etc.).
As per claim 11, Shah further discloses the policies of the managed directory group are associated with user access control associated with the first and second managed directories (col. 4 lines 30-45 and col. 5 lines 4-56, the StorFS system has the ability to snapshot an individual file, a set of files or directories, or an entire file system, and to capture the point-in-time state of both the write log and the persistent storage of the object(s) in a snapshot; col. 6 lines 1-15, furthermore, the StorFS permits creating a large number of snapshots (or clones) without any deterioration in performance. The StorFS system does not have to sync the current write log to the backend storage, and the following update operations to the original object as well as snapshots are handled efficiently to keep a consistent state of both the original object and clone(s); col. 9 lines 12-55, the File System Layer 214 provides a logical global namespace abstraction to organize and locate data in the cluster. The Data Service Layer 212 provides enterprise data services like disaster recovery, fine grained policy management, snapshots/clones, etc. The Write Cache 208 and the Read Cache 210 Layers provide acceleration for write and read I/O, respectively, using fast storage devices).
As per claim 12, Shah further discloses using the managed directory group to apply a file system operation to the contents of the first managed directory and the contents of the second managed directory as a group based on the policies of the managed directory group comprises generating, based on the managed directory group, a coordinated group snapshot for the first and second managed directories (col. 4 lines 30-42, virtual machine snapshotting and cloning has become a very important feature, wherein a snapshot is a copy of a source object stored in the StorFS system. The object can be one or more files, directories, volumes, filesystems, and/or a combination thereof. The snapshot can be read-only or writeable. In addition, a clone is a writeable snapshot; col. 8 lines 42-61, multiple cache vNodes are grouped together to form an abstraction called ‘Write Log Group’; col. 9 lines 12-55, the File System Layer 214 provides a logical global namespace abstraction to organize and locate data in the cluster).
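For context only, the "coordinated group snapshot" recited in claim 12 can be illustrated with a minimal sketch: a single point-in-time label is shared by all member directories so that the pair is captured consistently as a group rather than independently. The function name, label format, and paths are hypothetical assumptions for illustration; they are not drawn from the application or the cited references.

```python
# Illustrative sketch: one shared timestamp coordinates the group snapshot.
import time

def coordinated_group_snapshot(directories):
    # A single timestamp shared by all members ensures the member
    # snapshots represent the same point in time for the group.
    ts = int(time.time())
    return {path: f"{path}@snap-{ts}" for path in directories}

snaps = coordinated_group_snapshot(["/fs/app/data", "/fs/app/logs"])
```

The design point the sketch illustrates is simply that coordination comes from deriving every member snapshot from one common reference point, not from snapshotting each directory on its own clock.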
As per claim 13, Shah further discloses the grouping of the first and second managed directories as the managed directory group is performed as a result of administrative requests to the file system based on one or more commands associated with the file system (col. 5 lines 4-56, the StorFS system has the ability to snapshot an individual file, a set of files or directories or an entire file system, capture point in time state of both the write log and the persistent storage of the object(s) in a snapshot. There is a storage controller server running on each physical node (pNodes) in the cluster and each physical node owns a set of virtual nodes (vNodes), where the request is to create a snapshot for the stripe hosted by that virtual node. The delta tree captures the modification to the on-disk storage that is logged into the write log; col. 8 lines 42-61, multiple cache vNodes are grouped together to form an abstraction called ‘Write Log Group’).
As per claim 21, Botes further discloses the sequence of data operations between the first and second managed directories comprises a sequence of application program interface (API) calls that satisfy the one or more criteria (par. [0376], a sequence number based model, recovery could query for all operations on any in-sync storage system associated with a sequence number larger than the last completed sequence number. In a symmetric implementation without a leader, each storage system that receives request for the pod could define its own sliding window and sliding window identity space; and par. [0508], the host (3402) may identify (3504) a particular storage system of the plurality of storage systems (3424, 3426, 3428) as a preferred storage system for receiving the I/O operation (3416) in dependence upon the respective response times for multiple storage systems of the plurality of storage systems (3424, 3426, 3428), for example, by selecting the storage system associated with the fastest response times as the preferred storage system, by selecting any storage system whose response times satisfy a predetermined quality of service threshold as the preferred storage system).
As per claim 22, Botes further discloses the sequence of data operations between the first and second managed directories is determined to satisfy the one or more criteria based on the sequence of data operations having a frequency that satisfies a threshold value (par. [0508], [0429] and [0436], the host (3402) may identify (3504) a particular storage system of the plurality of storage systems (3424, 3426, 3428) as a preferred storage system for receiving the I/O operation (3416) in dependence upon the respective response times for multiple storage systems of the plurality of storage systems (3424, 3426, 3428), for example, by selecting the storage system associated with the fastest response times as the preferred storage system, by selecting any storage system whose response times satisfy a predetermined quality of service threshold as the preferred storage system).
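For context only, the frequency criterion addressed for claim 22 (a sequence of data operations whose frequency satisfies a threshold value) can be illustrated with a short sketch. The window length, threshold, and timestamps are hypothetical assumptions for illustration; they are not drawn from the application or the cited references.

```python
# Illustrative sketch: a sequence of operations satisfies the criterion when
# enough operations fall within a recent time window.
def frequency_satisfies(op_timestamps, window_seconds, threshold):
    # Count operations within the most recent window and compare the
    # count against the threshold value.
    if not op_timestamps:
        return False
    latest = max(op_timestamps)
    recent = [t for t in op_timestamps if latest - t <= window_seconds]
    return len(recent) >= threshold

# Four of the six operations fall within the 15-second window ending at t=30.
result = frequency_satisfies([0, 10, 20, 25, 28, 30], window_seconds=15, threshold=4)
```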
As per claim 23, Shah further discloses determining that the first managed directory and the second managed directory satisfy one or more criteria comprises determining that the first managed directory and the second managed directory are generated within a same parent directory (col. 10 lines 9-26, col. 12 lines 1-10, col. 5 lines 4-56, and col. 9 lines 12-55, each inode 318A-N includes an inode identifier 320A-N, parent inode pointer 322A-N, and the dirty tree pointer 324A-N. The inode identifier 320A-N identifies the respective inode 320A-N, wherein the parent inode pointer 322A-N references the parent inode for this inode 320A-N).
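For context only, the same-parent-directory criterion recited in claim 23 can be illustrated with a one-function sketch that compares the parent paths of the two directories. The function name and example paths are hypothetical assumptions for illustration; they are not drawn from the application or the cited references.

```python
# Illustrative sketch: two directories satisfy the criterion when they share
# the same parent directory path.
import posixpath

def same_parent(dir_a: str, dir_b: str) -> bool:
    # posixpath.dirname yields the parent path of each directory.
    return posixpath.dirname(dir_a) == posixpath.dirname(dir_b)

result = same_parent("/fs/app/data", "/fs/app/logs")
```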
As per claims 14-19, these claims recite a computer program product embodied in a non-transitory computer readable storage medium corresponding to the methods of claims 1-3, 5, 7, and 10 above. Therefore, claims 14-19 are rejected under the same rationale as claims 1-3, 5, 7, and 10.
As per claim 20, claim 20 is a system claim corresponding to the method of claim 1 above. Therefore, claim 20 is rejected under the same rationale as claim 1.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (see PTO-892).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Loan T. Nguyen whose telephone number is (571) 270-3103. The examiner can normally be reached on Monday from 10:00 am - 6:00 pm and Thursday-Friday from 10:00 am - 2:00 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aleksandr Kerzhner, can be reached at (571) 270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-270-4103.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
2/10/2026
/LOAN T NGUYEN/Examiner, Art Unit 2165
/ALEKSANDR KERZHNER/Supervisory Patent Examiner, Art Unit 2165