Prosecution Insights
Last updated: April 19, 2026
Application No. 18/595,799

Elastic Node Growth and Shrinkage within a Distributed Storage System using Disaggregated Storage

Non-Final OA: §102, §103
Filed
Mar 05, 2024
Examiner
ALSIP, MICHAEL
Art Unit
2139
Tech Center
2100 — Computer Architecture & Software
Assignee
NetApp, Inc.
OA Round
1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability with Interview: 80%

Examiner Intelligence

Career Allow Rate: 75% (above average); 481 granted / 645 resolved; +19.6% vs TC avg
Interview Lift: +5.1% (moderate); measured across resolved cases with an interview
Typical Timeline: 2y 11m avg prosecution; 30 applications currently pending
Career History: 675 total applications across all art units
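The figures above imply simple arithmetic: the career allow rate is granted cases divided by resolved cases, and the Tech Center comparison is a delta from that rate. A minimal sketch of that calculation, using the counts shown above; treating the "+19.6% vs TC avg" figure as additive percentage points is an assumption:

```python
# Career allow rate from the resolved-case counts shown above.
granted = 481
resolved = 645
allow_rate = granted / resolved          # ~0.746, displayed as 75%

# Implied Tech Center average, back-solved from the "+19.6% vs TC avg"
# figure; reading the delta as additive percentage points is an assumption.
delta_vs_tc = 0.196
implied_tc_avg = allow_rate - delta_vs_tc  # ~0.55

print(f"Career allow rate: {allow_rate:.1%}")      # 74.6%
print(f"Implied TC average: {implied_tc_avg:.1%}") # 55.0%
```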

Statute-Specific Performance

§101: 2.8% (-37.2% vs TC avg)
§103: 39.6% (-0.4% vs TC avg)
§102: 37.3% (-2.7% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)
Baseline is the Tech Center average estimate; based on career data from 645 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-6, 8-15, 17-24 and 26-33 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Sundaram et al. (US 8,832,363).

Consider claim 1, Sundaram et al. discloses a method comprising: providing a storage pod having a group of disks containing a plurality of Redundant Array of Independent Disks (RAID) groups, wherein an entirety of a global physical volume block number (PVBN) space associated with the storage pod is visible and accessible to all nodes of a plurality of nodes of a cluster representing a distributed storage system via their respective dynamically extensible file systems (DEFSs) and wherein storage space associated with the group of disks is partitioned into a plurality of allocation areas (AAs), in which a given AA of the plurality of AAs is owned by a given DEFS of a plurality of DEFSs of the cluster (abstract, background, Col. 3 lines 21-35, Col. 5 lines 46-52, Sundaram et al. discloses a cluster based system made up of nodes. The storage is formed into RAID groups which contain containers and containers are assigned to nodes. The storage arrays are organized as one large pool of storage.); and when a node is to be removed from the plurality of nodes of the cluster: moving one or more volumes associated with one or more DEFSs of the plurality of DEFSs owned by the node to one or more DEFSs of the plurality of DEFSs owned by one or more other nodes of the plurality of nodes of the cluster; and parking the one or more DEFSs owned by the node in another node of the plurality of nodes of the cluster by changing ownership of the one or more DEFSs of the plurality of DEFSs owned by the node (abstract, background, Col. 14 lines 53-67, Col. 17-18 lines 41-13, Sundaram et al. discloses that when a node is removed, the containers associated with that node are distributed to the other nodes.).

Consider claim 2, Sundaram et al. discloses the method of claim 1, wherein said moving one or more volumes does not involve copying of data of the one or more volumes (abstract, background, Col. 14 lines 53-67, Col. 17-18 lines 41-13, Sundaram et al. discloses that ownership is transferred without the contents being affected.).

Consider claim 3, Sundaram et al. discloses the method of claim 1, further comprising changing ownership of a majority of a plurality of AAs owned by the one or more DEFSs to one or more other DEFSs within the cluster (abstract, background, Col. 14 lines 53-67, Col. 17-18 lines 41-13, Sundaram et al. discloses that ownership is transferred for the containers assigned to the removed node.).

Consider claim 4, Sundaram et al. discloses the method of claim 1, wherein the parked one or more DEFSs remain online (abstract, background, Col. 14 lines 53-67, Col. 17-18 lines 41-13, Sundaram et al. discloses the new nodes use the transferred containers.).

Consider claim 5, Sundaram et al. discloses the method of claim 1, wherein as a result of lack of activity on the parked one or more DEFSs there is no need for the parked one or more DEFSs to go through consistency points (abstract, background, Col. 14 lines 53-67, Col. 17-18 lines 41-13, Sundaram et al. does not disclose forcing the newly owned containers through any consistency points.).

Consider claim 6, Sundaram et al. discloses a method comprising: providing a storage pod having a group of disks containing a plurality of Redundant Array of Independent Disks (RAID) groups, wherein an entirety of a global physical volume block number (PVBN) space associated with the storage pod is visible and accessible to all nodes of a plurality of nodes of a cluster representing a distributed storage system via their respective dynamically extensible file systems (DEFSs) and wherein storage space associated with the group of disks is partitioned into a plurality of allocation areas (AAs), in which a given AA of the plurality of AAs is owned by a given DEFS of a plurality of DEFSs of the cluster (abstract, background, Col. 3 lines 21-35, Col. 5 lines 46-52, Sundaram et al. discloses a cluster based system made up of nodes. The storage is formed into RAID groups which contain containers and containers are assigned to nodes. The storage arrays are organized as one large pool of storage.); and based on addition of a new node to the plurality of nodes of the cluster, creating one or more new DEFSs for the new node by: creating the one or more new DEFSs within an existing node of the plurality of nodes of the cluster with a plurality of AAs donated from one or more existing DEFSs of the existing node; and changing ownership of the one or more new DEFSs to the new node (abstract, background, Fig. 2 and 7B, Col. 14 lines 53-67, when a new node is added, the amount of storage increases and is added/incorporated into the one large pool of storage, then new RAID configurations are created and the new node is provided with storage.).

Consider claim 8, Sundaram et al. discloses the method of claim 6, wherein the addition of the new node increases one or more of computer resources, networking resources, and storage resources available to the cluster for performing storage operations by the cluster (abstract, background, Fig. 2 and 7B, Col. 14 lines 53-67, when a new node is added, the amount of storage increases.).

Consider claim 9, Sundaram et al. discloses the method of claim 6, wherein the addition of the new node increases one or more of computer resources, networking resources, and storage resources available to the cluster for performing data management operations by the cluster (abstract, background, Fig. 2 and 7B, Col. 14 lines 53-67, when a new node is added, the amount of storage increases.).

Consider claim 28, Sundaram et al. discloses a method comprising: providing a scale-out storage system in a form of a cluster of a plurality of nodes that allows for independent scaling of storage resources and compute resources; servicing, by the plurality of nodes, storage operations on behalf of clients of the cluster; and supporting data services without impacting performance of the storage operations by adding a compute node to the cluster to perform the data services (abstract, background, Fig. 2 and 7B, Col. 3 lines 21-35, Col. 5 lines 46-52, Col. 14 lines 53-67, Sundaram et al. discloses a cluster based system made up of nodes. The storage is formed into RAID groups which contain containers and containers are assigned to nodes. The storage arrays are organized as one large pool of storage. When a new node is added, the amount of storage increases and is added/incorporated into the one large pool of storage, then new RAID configurations are created and the new node is provided with storage.).

Consider claim 29, Sundaram et al. discloses the method of claim 28, wherein each node of the plurality of nodes has access to a disaggregated storage space within a storage pod that includes a group of disks containing a plurality of Redundant Array of Independent Disks (RAID) groups in which storage space of the group of disks is divided into a plurality of allocation areas (AAs), wherein each AA of the plurality of AAs includes a plurality of RAID stripes of a given RAID group of the plurality of RAID groups (abstract, background, Col. 3 lines 21-35, Col. 5 lines 46-52, Sundaram et al. discloses a cluster based system made up of nodes. The storage is formed into RAID groups which contain containers and containers are assigned to nodes. The storage arrays are organized as one large pool of storage.).

Consider claim 30, Sundaram et al. discloses the method of claim 28, wherein the compute node includes a light-weight data adaptor to facilitate access to the storage pod and has visibility into an entirety of a global physical volume block number (PVBN) space associated with the storage pod via the light-weight data adaptor (abstract, background, Fig. 2, Col. 3 lines 21-35, Col. 4 lines 23-38, Col. 5 lines 46-52, Sundaram et al. discloses a cluster based system made up of nodes. The storage is formed into RAID groups which contain containers and containers are assigned to nodes. The storage arrays are organized as one large pool of storage.).

Consider claim 31, Sundaram et al. discloses the method of claim 28, wherein the compute node does not participate in handling of the storage operations and is dedicated to performance of the data services (abstract, background, Fig. 2, Col. 3 lines 21-35, Col. 4 lines 23-38, Col. 5 lines 46-52, Sundaram et al. discloses one or more CPUs dedicated to the performance of the system.).

Consider claim 32, Sundaram et al. discloses the method of claim 28, wherein the compute node includes a set of one or more types of compute resources (abstract, background, Fig. 2, Col. 3 lines 21-35, Col. 4 lines 23-38, Col. 5 lines 46-52, Sundaram et al. discloses one or more CPUs.).

Consider claim 33, Sundaram et al. discloses the method of claim 32, wherein the data services include performance of one or more of file system analytics and cataloging of user data assets and wherein the set of one or more types of compute resources include one or more central processing units (abstract, background, Fig. 2, Col. 3 lines 21-35, Col. 4 lines 23-38, Col. 5 lines 46-52, Sundaram et al. discloses one or more CPUs dedicated to the performance of the system.).

Claims 10-14 are the medium claims to method claims 1-5 above and are rejected using the same rationale. Claims 19-23 are the system claims to method claims 1-5 above and are rejected using the same rationale. Claims 15, 17 and 18 are the medium claims to method claims 6, 8 and 9 above and are rejected using the same rationale. Claims 24, 26 and 27 are the system claims to method claims 6, 8 and 9 above and are rejected using the same rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7, 16 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Sundaram et al. (US 8,832,363) as applied to claims 6, 15 and 24 above, and further in view of Gupta et al. (US 9,122,398).

Consider claim 7, Sundaram et al. discloses the method of claim 6; however, Sundaram et al. does not explicitly disclose monitoring and automatically balancing space within the cluster, but Gupta et al. does teach these features. Gupta et al. discloses (abstract, Fig. 1, Col. 4 lines 16-20, Col. 6 lines 3-15) a similar system to Sundaram et al. where resources are scaled out and released automatically with resource usage monitored and controlled. Therefore Sundaram et al. in view of Gupta et al. teaches: "increasing an amount of the storage space associated with the one or more new DEFSs based on periodic space monitoring and automatic space balancing performed within the cluster." It would have been obvious to a person of ordinary skill in the art at the time the invention was made to modify the Sundaram et al. reference to include resource usage monitoring and automatic resource distribution as is done in Gupta et al. because doing so allows for rapid and elastic resource scaling when needed, leading to improved system performance and latency (Gupta et al. Col. 6 lines 3-15).

Claim 16 is the medium claim to method claim 7 above and is rejected using the same rationale. Claim 25 is the system claim to method claim 7 above and is rejected using the same rationale.

Claim 34 is rejected under 35 U.S.C. 103 as being unpatentable over Sundaram et al. (US 8,832,363) as applied to claim 32 above, and further in view of Beloussov et al. (US 11,023,133).

Consider claim 34, Sundaram et al. discloses the method of claim 32, but does not teach the use of AI powered analytics or specifically discuss the processing of image/video data and thus does not alone teach: "wherein the data services include artificial-intelligence (AI)-powered data analytics and wherein the set of one or more types of compute resources include one or more graphics processing units." However, Beloussov et al. teaches using AI and machine learning to monitor system performance in a cluster based storage system that can process image/graphical data (Beloussov et al.: abstract, Col. 1 lines 39-53, Col. 5 lines 31-48, Col. 6 line 20, Col. 8 lines 16-44). It would have been obvious to a person of ordinary skill in the art at the time the invention was made to modify the Sundaram et al. reference to include data analytics monitoring with AI as is done in Beloussov et al., because doing so allows for an improved and convenient way to configure complex storage systems and minimize the time period of degraded performance (Beloussov et al.: Col. 1 lines 39-49 and Col. 4 lines 44-49).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL ALSIP whose telephone number is (571)270-1182. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Reginald G. Bragdon, can be reached at (571)272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL ALSIP/
Primary Examiner, Art Unit 2139

Prosecution Timeline

Mar 05, 2024
Application Filed
Feb 27, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596685
SYSTEM AND METHODS FOR BANDWIDTH-EFFICIENT DATA ENCODING
2y 5m to grant Granted Apr 07, 2026
Patent 12591518
VALIDITY MAPPING TECHNIQUES
2y 5m to grant Granted Mar 31, 2026
Patent 12591545
SYSTEM AND METHOD FOR SECURING HIGH-SPEED INTRACHIP COMMUNICATIONS
2y 5m to grant Granted Mar 31, 2026
Patent 12585950
METHOD AND ELECTRONIC DEVICE FOR PERFORMING DEEP NEURAL NETWORK OPERATION
2y 5m to grant Granted Mar 24, 2026
Patent 12578856
SYSTEM AND METHOD FOR DATA COMPACTION AND SECURITY USING MULTIPLE ENCODING ALGORITHMS WITH PRE-CODING AND COMPLEXITY ESTIMATION
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 80% (+5.1%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 645 resolved cases by this examiner. Grant probability derived from career allow rate.
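The with-interview figure appears to be the base grant probability plus the interview lift; a minimal sketch under that additive assumption (both inputs come from the projections above):

```python
# Base grant probability (the career allow rate) and the interview lift
# from the projections above; treating the lift as additive percentage
# points is an assumption about how the dashboard combines them.
base_grant = 0.75
interview_lift = 0.051

with_interview = base_grant + interview_lift  # 0.801, displayed as 80%
print(f"Grant probability with interview: {with_interview:.0%}")  # 80%
```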
