Prosecution Insights
Last updated: April 19, 2026
Application No. 18/595,785

Automatic Space Sharing of Disaggregated Storage of a Storage Pod by Multiple Nodes of a Distributed Storage System

Non-Final OA: §103, §112

Filed: Mar 05, 2024
Examiner: GOLDSCHMIDT, CRAIG S
Art Unit: 2132
Tech Center: 2100 — Computer Architecture & Software
Assignee: NetApp Inc.
OA Round: 1 (Non-Final)

Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73%, above average (293 granted / 401 resolved; +18.1% vs TC avg)
Interview Lift: strong, +32.1% among resolved cases with interview
Typical Timeline: 2y 10m average prosecution; 21 currently pending
Career History: 422 total applications across all art units

Statute-Specific Performance

§101: 6.9% (-33.1% vs TC avg)
§102: 9.4% (-30.6% vs TC avg)
§103: 46.4% (+6.4% vs TC avg)
§112: 29.4% (-10.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 401 resolved cases
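The black-line baseline can be recovered from each statute-specific figure and its "vs TC avg" delta (examiner rate minus delta equals the Tech Center average). A minimal sketch of that arithmetic, using only the numbers shown above:

```python
# Recover the implied Tech Center average from each statute-specific rate and
# its "vs TC avg" delta shown above (TC average = rate - delta).
stats = {
    "101": (6.9, -33.1),
    "102": (9.4, -30.6),
    "103": (46.4, +6.4),
    "112": (29.4, -10.6),
}
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```

Notably, all four deltas resolve to the same 40.0% baseline, suggesting the dashboard compares each statute against a single Tech Center average rather than per-statute averages.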

Office Action

§103, §112
DETAILED ACTION

This action responds to Application No. 18/595785, filed 03/05/2024. Claims 1-25 are presented for examination. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 05/28/2024, 04/02/2024, 10/30/2024 and 09/15/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

Claim 13 is objected to because of the following informalities: the language “one or more other node” (line 2) appears to be a typographical error, and should presumably read “one or more other nodes”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention, as follows:

Claims 1, 7, 14, and 20: language “a storage pod having a group of disks containing multiple Redundant Array of Independent Disks (RAID) groups, wherein the storage pod is accessible to all nodes of a plurality of nodes of a cluster representing a distributed storage system via a global physical volume block number (PVBN) space” (e.g. claim 1, lines 2-6). This limitation is indefinite for two reasons. First, it includes multiple dependent clauses (e.g. “representing a distributed storage system” and “via a global physical volume block number (PVBN) space”) which are ambiguously worded and lack clear reference to their respective subjects. For example, “representing a distributed storage system” could refer to 1) the storage pod, 2) all nodes of a plurality of nodes, or 3) a cluster. Similarly, “via a global physical volume block number (PVBN) space” could refer to 1) “accessible” or 2) “representing a distributed storage system”. Second, it is unclear whether the “distributed storage system” means that 1) the storage system is distributed across the nodes themselves, 2) the storage pod is a distributed storage system, or 3) the storage pod is comprised of the nodes, and thus constitutes the distributed storage system. If the distributed storage system is distributed across the nodes, and is not the storage pod, then the “storage pod” would appear to be an extraneous feature that is not related to the rest of the invention, as there is no other reference to it in the claims.

Claims 5, 18, and 24: language “the space request message” (e.g. claim 5, line 1), “the donor DEFS” (line 2), “the done DEFS” (line 2). These limitations lack sufficient antecedent basis in the claims.
Claims 2, 15, and 21 disclose these limitations, respectively, but claims 5, 18, and 24 depend on claims 1, 14, and 20, respectively, which do not contain similar antecedent basis. Language “donate to the donee DEFS from among a plurality of AAs owned by the donee” (e.g. claim 5, lines 2-3): this limitation is indefinite, as it is unclear how the donor can donate AAs already owned by the donee to the donee.

Claims 6, 19, and 25: language “the second node” (e.g. claim 6, line 2), “the first node” (line 2), “the donee DEFS” (line 2). These limitations lack sufficient antecedent basis in the claims. Claims 2, 15, and 21 disclose these limitations, respectively, but claims 6, 19, and 25 depend on claims 5, 18, and 24, respectively, which do not contain similar antecedent basis.

Claims 2-6, 8-13, 15-19, and 21-25 are rejected as being dependent upon one of claims 1, 7, 14, and 20 above, respectively. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 7-8, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Patel (US 2019/0114117 A1) in view of Li et al (US 2003/0093509 A1).

Re claim 1, Patel discloses the following: A method comprising: providing a storage pod having a group of disks containing multiple Redundant Array of Independent Disks (RAID) groups (¶ 3). The storage array (storage pod) contains one or more storage volumes comprising one or more groups of disks which are organized into one or more (i.e. including “multiple”) RAID groups; wherein the storage pod is accessible concurrently to all nodes of a plurality of nodes of a cluster (¶ 19). This limitation is indefinite, as noted above. Examiner interprets it to mean that the storage pod is made up of a cluster of interconnected nodes. The storage may be set up as a cluster of interconnected storage nodes (a plurality of nodes of a cluster representing a distributed storage system). Applicant has not explicitly defined “accessible concurrently to all nodes” in the claims, and the specification merely discloses that “As a result, of creating an distributing the disaggregated storage across a cluster in this manner, all disks and all RAID groups can theoretically to be accessed concurrently by all nodes and the issue discussed with reference to Fig. 5 in which the entirety of any given disk and the entirety of any given RAID group is owned by a single node is avoided” (¶ 116). Accordingly, Examiner interprets this limitation to mean that the storage pod is not owned by a single node. In Patel, the storage array (storage pod) is not owned by a single node, as it comprises a cluster of interconnected nodes; representing a distributed storage system via a global physical volume block number (PVBN) space (¶ 19 and 41).
An aggregate represents a distributed storage system, as it comprises one or more RAID groups comprising one or more disks (¶ 41), and may comprise a cluster of interconnected nodes (¶ 19); accordingly, the storage system is distributed across these elements. The aggregate utilizes a PVBN space, which is global for the aggregate, as each aggregate has one PVBN space (¶ 41); one or more allocation areas (AA) within the global PVBN space (¶ 4). The global PVBN space comprises blocks (allocation areas).

Patel does not explicitly disclose reallocating resources between nodes, and does not explicitly disclose a dynamically extensible file system. Li discloses the following: a plurality of nodes of a cluster representing a distributed storage system monitoring, by a node of the cluster, storage space availability or usage by one or more dynamically extensible file systems (DEFSs) of the node (¶ 20 and 641). Each node contains a respective agent which monitors information about the respective node (¶ 20) and its associated file system, which “the illustrated embodiment dynamically extends” (i.e. it is a dynamically extensible file system) (¶ 641); based on the storage availability or usage meeting a predetermined or configurable threshold in relation to storage space or usage of one or more [file systems] of one or more other nodes of the plurality of nodes of the cluster, requesting ownership of one or more allocation areas (AA) within the […] space currently owned by the one or more DEFSs of the one or more other nodes to be transferred to the node (¶ 289-290). When an agent determines that its host has hit a utilization threshold, it triggers an extension of the file system to include new LUNs (¶ 641-642). The agents for hosts maintain LUN allocation for their respective hosts; the LUNs (allocation areas) may be deallocated from one host and freed up to be reallocated to another during the extension process (¶ 289-290).
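The claim-1 mechanism as mapped above (monitor DEFS space usage; on crossing a configurable threshold, request ownership of allocation areas from a peer node) can be sketched as follows. This is an illustrative sketch only; all names and the threshold value are hypothetical and appear in neither reference:

```python
# Hypothetical sketch of the claim-1 flow: each node monitors its space usage
# and, on crossing a configurable threshold, requests ownership of an
# allocation area (AA) from a peer node. Names are illustrative only.

USAGE_THRESHOLD = 0.85  # configurable utilization threshold (assumed value)

def check_and_request(node, peers):
    """Return the AA-ownership requests this node would issue."""
    requests = []
    usage = node["used"] / node["capacity"]
    if usage >= USAGE_THRESHOLD:
        for peer in peers:
            # Ask only peers that still own free allocation areas.
            if peer["free_aas"]:
                requests.append({"from": node["id"], "to": peer["id"],
                                 "aa": peer["free_aas"][0]})
                break
    return requests

node = {"id": "n1", "used": 90, "capacity": 100}
peers = [{"id": "n2", "free_aas": ["aa-17", "aa-18"]}]
print(check_and_request(node, peers))  # one request, for aa-17 owned by n2
```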
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the distributed storage system of Patel to perform resource monitoring, reallocation, and dynamic extension of the file system as in Li, because it would be applying a known technique to a known method ready for improvement, to yield predictable results. Patel discloses a storage system distributed over a plurality of nodes, which is ready for the improvement of managing storage reallocation using dynamically extensible file systems. Li discloses dynamically extensible file systems for managing storage reallocation across distributed nodes, which is applicable to the distributed nodes of Patel. It would have been obvious to integrate the dynamically extensible file system resource management of Li into the distributed storage system of Patel, because it would yield the predictable result of ensuring resources are efficiently allocated to the nodes.

Re claim 7, Patel discloses the following: A method comprising: providing a storage pod having a group of disks containing multiple Redundant Array of Independent Disks (RAID) groups (¶ 3). The storage array (storage pod) contains one or more storage volumes comprising one or more groups of disks which are organized into one or more (i.e. including “multiple”) RAID groups; wherein the storage pod is accessible concurrently to all nodes of a plurality of nodes of a cluster (¶ 19). This limitation is indefinite, as noted above. Examiner interprets it to mean that the storage pod is made up of a cluster of interconnected nodes. The storage may be set up as a cluster of interconnected storage nodes (a plurality of nodes of a cluster representing a distributed storage system). Applicant has not explicitly defined “accessible concurrently to all nodes” in the claims, and the specification merely discloses that “As a result, of creating an distributing the disaggregated storage across a cluster in this manner, all disks and all RAID groups can theoretically to be accessed concurrently by all nodes and the issue discussed with reference to Fig. 5 in which the entirety of any given disk and the entirety of any given RAID group is owned by a single node is avoided” (¶ 116). Accordingly, Examiner interprets this limitation to mean that the storage pod is not owned by a single node. In Patel, the storage array (storage pod) is not owned by a single node, as it comprises a cluster of interconnected nodes; representing a distributed storage system via a global physical volume block number (PVBN) space (¶ 19 and 41). An aggregate represents a distributed storage system, as it comprises one or more RAID groups comprising one or more disks (¶ 41), and may comprise a cluster of interconnected nodes (¶ 19); accordingly, the storage system is distributed across these elements. The aggregate utilizes a PVBN space, which is global for the aggregate, as each aggregate has one PVBN space (¶ 41).

Patel does not explicitly disclose nodes tracking storage space availability or usage, and does not explicitly disclose a dynamically extensible file system. Li discloses the following: a plurality of nodes of a cluster representing a distributed storage system tracking, by each node of the cluster, a space metric indicative of storage space availability or usage by a first set of one or more DEFSs of the node; and (¶ 20 and 641). Each node contains a respective agent which monitors information about the respective node including its LUNs (space availability/usage) (¶ 20) and its associated file system, which “the illustrated embodiment dynamically extends” (i.e. it is a dynamically extensible file system) (¶ 641); notifying, by each node of the cluster, one or more other nodes of the plurality of nodes of the cluster regarding the space metric of the node (¶ 477 and 641-642). The agents provide LUN availability/unavailability (usage/availability) to the SAN manager, such that other nodes may access it to determine LUNs to utilize for extending the DEFS. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to combine Patel and Li, for the reasons noted in claim 1 above.

Re claim 8, Patel and Li disclose the method of claim 1, and Patel further discloses the global PVBN space (¶ 41). Li further discloses the following: the [storage] space is partitioned into a plurality of allocation areas (AAs) (¶ 71). The storage space is partitioned into a plurality of LUNs (allocation areas); and wherein the method further comprises, based on the tracking and the notifying, performing space balancing among the first set of one or more DEFSs and a second set of one or more DEFSs of a second node of the plurality of nodes of the cluster by changing ownership of one or more of the plurality of AAs owned by a first DEFS of the first set of one or more DEFSs to a second DEFS of the second set of one or more DEFSs (¶ 561 and 641-642). LUNs (AAs) can be unassigned and then reassigned (changing ownership of an AA). AAs can be reassigned to a new host based on the host usage crossing a threshold; accordingly, this is performing “space balancing”, as the additional LUN space balances usage back below the threshold. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to combine Patel and Li, for the reasons noted in claim 1 above.
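The claims 7-8 flow described above (track a per-DEFS space metric, notify peers, then balance space by changing ownership of an AA from a lightly used DEFS to a heavily used one) can be sketched as follows. All names, values, and the threshold are hypothetical; this is not code from any cited reference:

```python
# Hypothetical sketch of claims 7-8 as mapped above: compute each DEFS's
# space metric, then move one AA from the less-utilized DEFS to the one
# over a configurable threshold (space balancing via AA ownership change).

def space_metric(defs_):
    """Space metric indicative of storage usage for one DEFS."""
    return defs_["used"] / defs_["capacity"]

def balance(defs_a, defs_b, threshold=0.8):
    """Move one AA from the less-utilized DEFS to the over-threshold one."""
    donor, donee = sorted([defs_a, defs_b], key=space_metric)
    if space_metric(donee) >= threshold and donor["aas"]:
        donee["aas"].append(donor["aas"].pop())  # change of AA ownership

a = {"used": 90, "capacity": 100, "aas": ["aa-1"]}
b = {"used": 20, "capacity": 100, "aas": ["aa-2", "aa-3"]}
balance(a, b)
print(a["aas"], b["aas"])  # a gains aa-3 from the lightly used b
```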
Re claim 14, Patel and Li disclose the method of claim 1; accordingly, they also disclose a storage medium storing instructions executing that method, as in claim 14 (See Patel, ¶ 64). Re claim 20, Patel and Li disclose the method of claim 1; accordingly, they also disclose a storage system implementing that method, as in claim 20 (See Patel, ¶ 64).

Claims 2-4, 15-17, and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Patel in view of Li, further in view of Hu et al (US 2006/0253856 A1).

Re claim 2, Patel and Li disclose the method of claim 1, and Li further discloses DEFS (¶ 641), but they do not explicitly disclose the details of how resources are requested from one node to another. Hu discloses wherein said requesting ownership includes sending a space request message from the node on behalf of a donee [node] of the one or more […] node to a donor [node] of the one or more […] second node of the one or more other nodes (¶ 10-12). Each node contains a lock manager, wherein when a node wants to request a lock for a resource (storage) to be transferred, it places the request (space request message) in a queue of the lock manager, which is replicated across the nodes; accordingly, it is sent to a donee node. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the storage reallocation of Patel (combined with Li) to allow nodes to request resources using a queue, as in Hu, because Hu suggests that requesting resources using multicast protocols to queues has performance and scalability advantages, and improves upon the poor performance of previous lock transfer protocols (¶ 42).

Re claim 3, Patel, Li, and Hu disclose the method of claim 2, and Hu further discloses that the space request message is sent via an on-wire internode communication mechanism to the second node (Abstract). The system utilizes a multicast ability of a network (on-wire internode communication mechanism) to send the requests between nodes. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to combine Patel, Li, and Hu, for the reasons noted in claim 2 above.

Re claim 4, Patel, Li, and Hu disclose the method of claim 3, and Hu further discloses that the space request message is posted to a persistent message queue of the donor DEFS or the second node (¶ 10-12). The requests are placed into respective queues. Elements of the queues are placed into durable storage (persistent). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to combine Patel, Li, and Hu, for the reasons noted in claim 2 above.

Re claims 15-17, Patel, Li, and Hu disclose the methods of claims 2-4 above, respectively; accordingly, they also disclose storage media storing instructions executing those methods, as in claims 15-17, respectively (See Patel, ¶ 64). Re claims 21-23, Patel, Li, and Hu disclose the methods of claims 2-4 above, respectively; accordingly, they also disclose storage systems implementing those methods, as in claims 21-23, respectively (See Patel, ¶ 64).

Claims 5-6, 18-19, and 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over Patel in view of Li, further in view of Hu, further in view of Danilov et al (US 2022/0222002 A1).

Re claim 5, Patel and Li disclose the method of claim 1, and Li further discloses DEFS (¶ 641) and allocating LUNs (AAs) (¶ 641), but Patel and Li do not specifically disclose sending a space message, and do not specifically disclose prioritizing free AAs over partial AAs. Hu discloses that, after receiving the space request message, selecting, by the donor […], a [resource] to donate to the donee […] from among a plurality of [resources] owned by the donee […] (¶ 10-12).
Each node contains a lock manager, wherein when a node wants to request a lock for a resource (storage) to be transferred, it places the request (space request message) in a queue of the lock manager, which is replicated across the nodes; accordingly, it is sent to a donee node, and the donee selects a resource to transfer the lock for. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to combine Patel, Li, and Hu, for the reasons noted in claim 2 above.

Danilov discloses prioritizing free AAs over partial AAs (¶ 64). The node containing relatively little data (donor node) selects a block (AA) to donate to the remainder of nodes, and selects a block that is initially empty, thus prioritizing free AAs over partial AAs (as partial AAs are not empty). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the node storage resource reallocation of Patel (combined with Li and Hu) to prioritize reallocating empty blocks over partial blocks, as in Danilov, because it would be applying a known technique to improve a similar method in the same way. Patel (combined with Li and Hu) discloses transferring storage resources between nodes. Danilov also discloses transferring storage resources between nodes, and has been improved in a similar way to the claimed invention, to prioritize transferring empty blocks (AAs) over partial ones. It would have been obvious to modify the node storage transfer of Patel (combined with Li and Hu) to prioritize empty AAs, because it would yield the predictable improvement of keeping data within a block (AA) from being split between nodes, as only completely empty blocks would be donated to other nodes.

Re claim 6, Patel, Li, Hu, and Danilov disclose the method of claim 5, and Li further discloses an AA ownership change (¶ 641). Hu further discloses sending a [resource] change message from the second node to the first node, wherein the [resource] change message includes a [resource] identifier of the selected [resource] (¶ 10-12). The donor node (second node) releases its lock and commits the change to the respective queues in the respective FTDLM modules, which includes pushing the change to the donee node (first node). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to combine Patel, Li, Hu, and Danilov, for the reasons noted in claim 5 above.

Re claims 18-19, Patel, Li, Hu, and Danilov disclose the methods of claims 5-6 above, respectively; accordingly, they also disclose storage media storing instructions executing those methods, as in claims 18-19, respectively (See Patel, ¶ 64). Re claims 24-25, Patel, Li, Hu, and Danilov disclose the methods of claims 5-6 above, respectively; accordingly, they also disclose storage systems implementing those methods, as in claims 24-25, respectively (See Patel, ¶ 64).

Claims 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Patel in view of Li, further in view of Peake et al (US 2014/0122636 A1).

Re claim 9, Patel and Li disclose the method of claim 7, but do not disclose a cluster-wide space metric. Peake discloses determining a cluster-wide space metric based at least in part on space metrics received from all other nodes of the cluster (¶ 289). The storage system determines a total memory space utilization rate by monitoring usage of the respective nodes.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the storage allocation of Patel (combined with Li) to utilize an average utilization (cluster-wide space metric) to balance utilization, as in Peake, because it would be applying a known technique to improve a similar method in the same way. Patel (combined with Li) discloses reallocating storage between nodes. Peake also discloses reallocating storage between nodes, which has been improved in a similar way to the claimed invention, to utilize a cluster-wide average utilization metric. It would have been obvious to modify the storage allocation of Patel (combined with Li) to balance allocation based on average utilization, as in Peake, because it would yield the predictable improvement of balancing performance across the nodes, by reducing the amount of node over- or under-utilization.

Re claim 10, Patel, Li, and Peake disclose the method of claim 9, and Peake further discloses determining an average node space metric by dividing the cluster-wide space metric by a number of the plurality of nodes (¶ 289). The total threshold of memory space utilization may be divided by the over/under usages (a number) of the plurality of nodes. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to combine Patel, Li, and Peake, for the reasons noted in claim 9 above.

Re claim 11, Patel, Li, and Peake disclose the method of claim 10, and Li further discloses DEFSs (¶ 641). Peake further discloses that, based on the space metric meeting a predetermined or configurable threshold in relation to the average node space metric, requesting storage space from one or more […] other nodes (Abstract). In response to determining that a node is over-utilized, it can request reassignment of one or more logical address blocks from an under-utilized node (other node).
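The claims 9-11 arithmetic as mapped above (sum per-node space metrics into a cluster-wide metric, divide by the node count for the average, and request space when a node exceeds the average by a threshold) can be sketched as follows. The margin value and all names are hypothetical, not taken from Peake:

```python
# Hypothetical sketch of claims 9-11 as mapped above: derive a cluster-wide
# space metric from per-node metrics, average it over the node count, and
# flag a node as needing space when it exceeds the average by a margin.

def needs_space(metrics, node_id, margin=0.10):
    cluster_metric = sum(metrics.values())   # cluster-wide space metric
    avg = cluster_metric / len(metrics)      # average node space metric
    return metrics[node_id] > avg + margin   # over-utilized vs the average

metrics = {"n1": 0.92, "n2": 0.40, "n3": 0.45}
print(needs_space(metrics, "n1"), needs_space(metrics, "n2"))  # True False
```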
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to combine Patel, Li, and Peake, for the reasons noted in claim 9 above.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Patel in view of Li, further in view of Martinez Lerin (US 10810054 B1).

Re claim 12, Patel and Li disclose the method of claim 1, but do not specifically disclose a polling request. Martinez Lerin discloses that said notifying comprises responding to a polling request for the space metric received from the one or more other nodes (col. 11, lines 21-42). Each storage node may notify a service node (one or more other nodes) in response to a polling request for capacity/usage information (space metric). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the storage node method of Patel (combined with Li) to include polling for a space metric, as in Martinez Lerin, because it would be applying a known technique to improve a similar method in the same way. Patel (combined with Li) discloses a storage node method. Martinez Lerin also discloses a storage node method, which has been improved in a similar way to the claimed invention, to poll for storage availability. It would have been obvious to modify the storage node method of Patel (combined with Li) to poll for storage node availability, as in Martinez Lerin, because it would yield the predictable improvement of providing a node with up-to-date allocation information for reallocation decisions.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Patel in view of Li, further in view of Moulton et al (US 2001/0042221 A1).
Re claim 13, Patel and Li disclose the method of claim 7, and Li further discloses that the respective nodes are notified of a space metric; however, Patel and Li do not specifically disclose sending a space reporting message, including the space metric, to each of the one or more other node. Moulton discloses that said notifying comprises sending a space reporting message, including the space metric, to each of the one or more other node (¶ 4). A plurality of storage nodes each broadcast their heartbeat messages (reporting messages), which include available storage (space metric), to the other nodes. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the storage node method of Patel (combined with Li) to include a broadcast space reporting message, as in Moulton, because it would be applying a known technique to improve a similar method in the same way. Patel (combined with Li) discloses a storage node method. Moulton also discloses a storage node method, which has been improved in a similar way to the claimed invention, to broadcast node storage availability. It would have been obvious to modify the storage node method of Patel (combined with Li) to broadcast storage node availability, as in Moulton, because it would yield the predictable improvement of providing the nodes with allocation information which they can use for their reallocation decisions.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Naor et al (US 2014/0025909 A1) discloses a unified distributed storage platform comprising a plurality of distributed storage nodes allocated based on service level specification requirements (¶ 13).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CRAIG S GOLDSCHMIDT whose telephone number is (571)270-3489. The examiner can normally be reached M-F 10-6.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hosain Alam, can be reached at 571-272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CRAIG S GOLDSCHMIDT/
Primary Examiner, Art Unit 2132

Prosecution Timeline

Mar 05, 2024
Application Filed
Jan 10, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596650: Preemptive Flushing of Processing-in-Memory Data Structures (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596481: Prefetching Data Using Predictive Analysis (granted Apr 07, 2026; 2y 5m to grant)
Patent 12585411: Optics-Based Distributed Unified Memory System (granted Mar 24, 2026; 2y 5m to grant)
Patent 12578854: Composite Operations Using Multiple Hierarchical Data Spaces (granted Mar 17, 2026; 2y 5m to grant)
Patent 12578883: Elastic External Storage for Diskless Hosts in a Cloud (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview: 99% (+32.1%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 401 resolved cases by this examiner. Grant probability derived from career allow rate.
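These projections are internally consistent with the examiner's career counts shown above. A minimal check of the arithmetic, assuming the interview lift is additive in percentage points on top of the no-interview rate (an assumption; the page does not define how the lift is applied):

```python
# Recompute the dashboard's headline numbers from the raw counts shown above.
# Assumes the +32.1% interview lift is additive in percentage points
# (an assumption; the page does not define the lift's arithmetic).
granted, resolved = 293, 401
career_allow_rate = round(100 * granted / resolved)  # -> 73 (%)

with_interview = 99.0   # "With Interview" figure shown on the page
interview_lift = 32.1   # percentage points
implied_without = round(with_interview - interview_lift, 1)  # -> 66.9 (%)
print(career_allow_rate, implied_without)
```

So the 99% with-interview figure implies roughly a 66.9% allow rate without an interview for this examiner, under the additive-lift assumption.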
