Prosecution Insights
Last updated: April 19, 2026
Application No. 18/982,911

SINGLE INPUT/OUTPUT WRITES IN A FILE SYSTEM HOSTED ON A CLOUD, VIRTUAL, OR COMMODITY-SERVER PLATFORM

Office action: Non-Final, §103 and nonstatutory double patenting (§DP)
Filed: Dec 16, 2024
Examiner: BENNER, JANE WEI
Art Unit: 2139
Tech Center: 2100 — Computer Architecture & Software
Assignee: NetApp, Inc.
OA Round: 1 (Non-Final)

Grant probability: 84% (Favorable)
Expected OA rounds: 1-2
Expected time to grant: 2y 6m
Grant probability with interview: 92%

Examiner Intelligence

Career allow rate: 84% (above average; 249 granted / 298 resolved; +28.6% vs. TC average)
Interview lift: +8.9% (a moderate lift in allow rate for resolved cases with an interview vs. without)
Typical timeline: 2y 6m average prosecution; 15 applications currently pending
Career history: 313 total applications across all art units

Statute-Specific Performance

§101: 2.6% (-37.4% vs. TC average)
§103: 49.3% (+9.3% vs. TC average)
§102: 18.0% (-22.0% vs. TC average)
§112: 23.0% (-17.0% vs. TC average)

Comparisons are against an estimated Tech Center average. Based on career data from 298 resolved cases.

Office Action

Grounds of rejection: 35 U.S.C. § 103; nonstatutory double patenting (§DP)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 4/3/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claim 5 is objected to because of the following informalities: Line 5 recites "the collection of disks," which should be --a collection of disks--. Appropriate correction is required. Claim 15 is objected to for the same reasons as claim 5, as outlined above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6-7, 11-12, 16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Satoyama et al. (US 2019/0121549 A1), hereinafter Satoyama et al., in view of Tokuda et al. (US 2008/0222214 A1), hereinafter Tokuda et al.

Regarding claim 1, Satoyama et al. teaches a non-transitory machine readable medium storing instructions, which when executed by one or more processing resources of a node of storage system, cause the node to: based on compressibility of a data payload of a write operation received from a client, perform a single input/output (I/O) write operation including writing the data payload in compressed form to a data block associated with a particular block number within persistent block storage (it can be determined that a compression process is to be performed by the FMPK, and when a write request is transmitted in this manner, the I/O processing program calculates an in-pool page number corresponding to the virtual page and a block number corresponding to the page number, Paragraph [0145]).

Satoyama et al. does not appear to explicitly teach, however, Tokuda et al. teaches after completion of the single I/O write operation, initiate journaling of an operation header, containing information identifying the write operation and the particular block number, to a journal storage medium (a journal is created each time the storage system processes the write I/O processing request to the journal acquisition storage device, which includes a processing order number and time, a write destination address and the write data, Paragraphs [0151]-[0154]); and prior to completion of the journaling, acknowledge receipt of the write operation to the client (the controller may notify completion of the write I/O process to the host computer without waiting for an end of the journal creation process, Paragraphs [0139], [0235]).

The disclosures of Satoyama et al. and Tokuda et al., hereinafter ST, are analogous art to the claimed invention because they are in the same field of endeavor of write command processing and/or journaling in memory systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of ST before them, to modify the teachings of Satoyama et al. to include the teachings of Tokuda et al., since both references teach processing writes in conjunction with journaling. This amounts to applying a known technique (initiating journaling of the write data and block number and notifying the host of completion of the write I/O process, [0139], [0154] of Tokuda et al.) to a known device (performing compression of write data, of Satoyama et al.) ready for improvement to yield predictable results (the write data and block number are journaled and the host has been notified that the write I/O has completed). KSR; MPEP 2143.

Regarding claim 6, ST teaches all of the features with respect to claim 1, as outlined above. Tokuda et al. further teaches wherein the node comprises a virtual storage system or a commodity computer system in which latency of the journal storage medium is plus or minus 10% of latency of the persistent block storage (journal storage devices 31-32 and persistent storage devices 33 are provided from storage volume spaces created by the plurality of physical storage devices, Paragraph [0049]; that is, the latency of the journal storage device is equal to that of the persistent storage device).

Claims 7 and 12 are rejected under 35 USC 103 for the same reasons as claim 1, as outlined above. Claim 11 is rejected under 35 USC 103 for the same reasons as claim 6, as outlined above. Claim 16 is rejected under 35 USC 103 for the same reasons as claim 6, as outlined above.

Regarding claim 18, ST teaches all of the features with respect to claim 16, as outlined above. Satoyama et al. further teaches wherein the persistent block storage comprises a collection of one or more SSDs (the FMPK is a solid state drive (SSD), Paragraph [0058]).

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over ST in further view of Holtz et al. (US 2015/0355985 A1), hereinafter Holtz et al.

Regarding claim 14, ST teaches all of the features with respect to claim 12, as outlined above. ST does not appear to explicitly teach, however, Holtz et al. teaches wherein the instructions further cause the storage node to, during recovery from a crash of the storage node, identify (i) those of a plurality of single I/O write operations performed by the storage node prior to performance of a last CP that are to be reconstructed and replayed based on the data structure (when a node failure occurs, recovery procedures may be performed to restore the containers using information logged in the recovery logs, Paragraph [0030], where accumulated entries in the logs are recoverable through replay of recovery logs from the previous CP, Paragraphs [0037]-[0038]), (ii) information regarding the last CP (the file system layer includes a CP process that implements/tracks the occurrence of CPs, Paragraph [0037]), and (iii) operation headers contained in the journal (CP information is stored as metadata within the log, Paragraph [0037]).

The disclosures of ST and Holtz et al., hereinafter STH, are analogous art to the claimed invention because they are in the same field of endeavor of write command processing and/or journaling in memory systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of STH before them, to modify the teachings of ST to include the teachings of Holtz et al., since both teach processing writes in conjunction with journaling. This amounts to applying a known technique (using consistency points during recovery from a node failure event, [0037]-[0038] of Holtz et al.) to a known device (performing compression of write data, of Satoyama et al.) ready for improvement to yield predictable results (CPs are used to reconstruct data during a node failure event, of Holtz). KSR; MPEP 2143.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over ST in further view of Singh (US 10,152,481 B1), hereinafter Singh.

Regarding claim 17, ST teaches all of the features with respect to claim 16, as outlined above. ST does not appear to explicitly teach, however, Singh teaches wherein the journal storage medium comprises a solid-state drive (SSD) (an SSD may be used for large journals instead of NVRAMs, Column 11, Lines 26-30). The disclosures of ST and Singh, hereinafter STS, are analogous art to the claimed invention because they are in the same field of endeavor of write command processing and/or journaling in memory systems. Because both teach the use of journaling in different storage devices (e.g., the journal storage devices of Tokuda comprising disks), it would have been obvious to one skilled in the art to substitute one type of memory for another to achieve the predictable result of availability of the journal stored in the particular type of memory as disclosed by Singh, in this case an SSD. KSR; MPEP 2143.

Claims 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over ST in further view of Trika (US 2019/0042143 A1), hereinafter Trika.

Regarding claim 19, ST teaches all of the features with respect to claim 18, as outlined above. ST does not appear to explicitly teach, however, Trika teaches further comprising a file system layer and an intermediate storage layer interposed between the file system layer and the persistent block storage, and wherein the intermediate storage layer performs the single I/O write operation (Fig. 8 depicts application layers 802, which include a file-system layer and additional intermediate layers (i.e., a RAID driver layer and/or NVMe driver layer) that sit between the file-system layer and the data storage devices in the bottom layer, Paragraph [0065], and which requests to perform I/O operations flow through before reaching a given data storage device, [0033]).

The disclosures of ST and Trika, hereinafter STT, are analogous art to the claimed invention because they are in the same field of endeavor of write command processing and/or journaling in memory systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of STT before them, to modify the teachings of ST to include the teachings of Trika, since both teach processing writes in conjunction with journaling. This amounts to applying a known technique (a file system layer and an intermediate storage layer interposed before the block storage device, [0033], [0065] of Trika) to a known device (performing compression of write data, of Satoyama et al.) ready for improvement to yield predictable results (an intermediate storage layer sits between the file-system layer and the data storage devices, [0033], [0065]). KSR; MPEP 2143.

Regarding claim 20, STT teaches all of the features with respect to claim 19, as outlined above. Trika further teaches wherein the intermediate storage layer comprises a redundant array of independent disks (RAID) layer (the RAID (Redundant Array of Independent Disks) driver is the intermediate layer, [0033], [0065]) and wherein the collection of one or more SSDs is managed by the RAID layer (the data storage device can be an SSD, Paragraphs [0031], [0034]).
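The independent claims and the claim-14 recovery limitation describe a concrete write path: compress the payload, persist a packed block header plus the data in a single I/O, initiate journaling of the operation header, and acknowledge the client before journaling completes; on recovery, replay un-journaled writes from the last consistency point (CP) whose blocks carry a valid packed block header. The following is a minimal hypothetical sketch of that flow for orientation only; every identifier, the "PBH" magic value, and the 0.9 compressibility threshold are invented here and are not drawn from the application or the cited references:

```python
import zlib

COMPRESSIBILITY_THRESHOLD = 0.9  # assumed: take the single-I/O path only when compression helps

class StorageNode:
    def __init__(self):
        self.block_storage = {}    # block number -> bytes (stand-in for persistent block storage)
        self.journal = []          # operation headers whose journaling has completed
        self.pending_journal = []  # journaling initiated but not yet complete
        self.cp_map = {}           # block number -> CP active at the time of the write
        self.current_cp = 0
        self._next_block = 0

    def write(self, op_id, payload, ack):
        """Single-I/O write: header + compressed payload persisted in one write,
        journaling initiated afterwards, client acked before journaling completes."""
        compressed = zlib.compress(payload)
        if len(compressed) >= COMPRESSIBILITY_THRESHOLD * len(payload):
            raise NotImplementedError("incompressible data would take a normal multi-I/O path")
        block_no = self._next_block
        self._next_block += 1
        header = {"op": op_id, "block": block_no, "cp": self.current_cp}
        # One I/O: a packed block header ("PBH" magic) and the compressed payload together.
        self.block_storage[block_no] = b"PBH" + repr(header).encode() + compressed
        self.cp_map[block_no] = self.current_cp
        self.pending_journal.append(header)  # journaling of the operation header is initiated...
        ack(op_id)                           # ...but the client is acknowledged first
        return block_no

    def complete_journaling(self):
        """Models the asynchronous journal flush finishing some time after the ack."""
        self.journal.extend(self.pending_journal)
        self.pending_journal.clear()

    def recover(self, last_cp):
        """Claim-14-style recovery: identify single-I/O writes from the last CP that
        lack a journal entry and whose persisted block carries a valid packed block
        header, so they can be reconstructed and replayed."""
        journaled = {h["block"] for h in self.journal}
        return [
            b for b, cp in self.cp_map.items()
            if cp == last_cp and b not in journaled
            and self.block_storage.get(b, b"").startswith(b"PBH")
        ]
```

Acknowledging before `complete_journaling()` runs is what the claims phrase as acknowledging "prior to completion of the journaling," and the `recover()` check mirrors the validity test of instant claim 5 (existence of a packed block header within the persisted data block).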
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection.
A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA /25, or PTO/AIA /26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer. Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over the claims of U.S. Patent No. 12,169,630 B2 as outlined in the table below. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are anticipated by the claims of Patent No. 12,169,630 B2. Instant Application 18/982,911 US Patent No. 12,169,630 B2 Parent App. 18/523,747 1. 
A non-transitory machine readable medium storing instructions, which when executed by one or more processing resources of a node of storage system, cause the node to: based on compressibility of a data payload of a write operation received from a client, perform a single input/output (I/O) write operation including writing the data payload in compressed form to a data block associated with a particular block number within persistent block storage; after completion of the single I/O write operation, initiate journaling of an operation header, containing information identifying the write operation and the particular block number, to a journal storage medium; and prior to completion of the journaling, acknowledge receipt of the write operation to the client. 1. A non-transitory machine readable medium storing instructions, which when executed by one or more processing resources of a node of distributed storage system, cause the node to: receive a write operation from a client; based on compressibility of a data payload of the write operation, perform a single input/output (I/O) write operation including writing the data payload in compressed form to a data block associated with a particular block number within a block storage media; after completion of the single I/O write operation, initiate journaling of an operation header, containing information identifying the write operation and the particular block number, to a journal storage medium; and without waiting for completion of the journaling, acknowledge by the node receipt of the write operation to the client. 2. 
The non-transitory machine readable medium of claim 1, wherein the single I/O write operation further includes writing a packed block header containing an operation header entry corresponding to the write operation to the data block and wherein the instructions further cause the node to: maintain a data structure containing information regarding a plurality of block numbers that are available for single I/O write operations; mark the particular block number within the data structure as being associated with a particular consistency point (CP) active at a time of the single I/O write operation; and prior to performing the single I/O write operation, store information regarding the particular CP within metadata of the packed block header. 2. The non-transitory machine readable medium of claim 1, wherein the single I/O write operation further includes writing a packed block header containing an operation header entry corresponding to the write operation to the data block and wherein the instructions further cause the node to: maintain a data structure containing information regarding a plurality of block numbers that are available for single I/O write operations; mark the particular block number within the data structure as being associated with a particular consistency point (CP) active at a time of the single I/O write operation; and prior to performing the single I/O write operation, store information regarding the particular CP within metadata of the packed block header. 3. The non-transitory machine readable medium of claim 2, wherein the node is operating in a high-availability (HA) configuration with another node that represents an HA partner of the node and wherein the journaling includes logging to a journal and mirroring of the journal to the HA partner. 3. 
The non-transitory machine readable medium of claim 2, wherein the journaling includes logging to a journal and mirroring of the journal to a high-availability (HA) partner node of a second distributed storage system… 4. The non-transitory machine readable medium of claim 2, wherein the instructions further cause the node to during recovery from a crash of the node, identify (i) those of a plurality of single I/O write operations performed by the node prior to performance of a last CP by the node that are to be reconstructed and replayed based on the data structure, (ii) information regarding the last CP, and (iii) operation headers contained in the journal. 3. The non-transitory machine readable medium of claim 2, … wherein the instructions further cause the node to during recovery from a crash of the node, identify (i) those of a plurality of single I/O write operations performed by the node prior to performance of a last CP by the node that are to be reconstructed and replayed based on the data structure, (ii) information regarding the last CP, and (iii) operation headers contained in the journal. 5. The non-transitory machine readable medium of claim 4, wherein identification of said those of a plurality of single I/O write operations comprises for any block numbers marked in the data structure as being associated with the last CP, that are not present in the journal, includes determining whether a corresponding data block persisted to the collection of disks represents a valid single I/O data block based on existence of a packed block header within the corresponding data block. 4. 
The non-transitory machine readable medium of claim 3, wherein identification of said those of a plurality of single I/O write operations comprises for any block numbers marked in the data structure as being associated with the last CP, that are not present in the journal, determining whether a corresponding data block persisted to the collection of disks represents a valid single I/O data block based on existence of a packed block header within the corresponding data block. 6. The non-transitory machine readable medium of claim 1, wherein the node comprises a virtual storage system or a commodity computer system in which latency of the journal storage medium is plus or minus 10% of latency of the persistent block storage. 6. The non-transitory machine readable medium of claim 1, wherein the node comprises a virtual storage system or a commodity computer system in which latency of the journal storage medium is plus or minus 10% of latency of the block storage media. 7. A method comprising: receiving, by a storage node, a write operation from a client; performing, by the storage node, a single input/output (I/O) write operation including writing a data payload of the write operation in compressed form to a data block associated with a particular block number within persistent block storage; after completion of the single I/O write operation, in parallel: causing, by the storage node, journaling of an operation header, containing information identifying the write operation and the particular block number, to be stored to a journal storage medium by performing an asynchronous journaling operation; and sending, by the storage node, an acknowledgement to the client regarding receipt of the write operation. 7. 
A method comprising: receiving, by a node of a distributed storage system, a write operation from a client; based on compressibility of a data payload of the write operation, performing, by the node, a single input/output (I/O) write operation including writing the data payload in compressed form to a data block associated with a particular block number within a block storage media; after completion of the single I/O write operation, initiating journaling of an operation header, containing information identifying the write operation and the particular block number, to a journal storage medium; and without waiting for completion of the journaling, acknowledge by the node receipt of the write operation to the client. 8. The method of claim 7, wherein the single I/O write operation further includes writing a packed block header containing an operation header entry corresponding to the write operation to the data block and wherein the method further comprises: maintaining, by the storage node, a data structure containing information regarding a plurality of block numbers that are available for single I/O write operations; marking, by the storage node, the particular block number within the data structure as being associated with a particular consistency point (CP) active at a time of the single I/O write operation; and prior to performing the single I/O write operation, including, by the storage node, information regarding the particular CP within metadata of the packed block header. 8. 
The method of claim 7, wherein the single I/O write operation further includes writing a packed block header containing an operation header entry corresponding to the write operation to the data block and wherein the method further comprises: maintaining, by the node, a data structure containing information regarding a plurality of block numbers that are available for single I/O write operations; marking, by the node, the particular block number within the data structure as being associated with a particular consistency point (CP) active at a time of the single I/O write operation; and prior to performing the single I/O write operation, storing information regarding the particular CP within metadata of the packed block header. 10. The method of claim 8, further comprising, during recovery from a crash of the storage node, identifying, by the storage node, (i) those of a plurality of single I/O write operations performed by the storage node prior to performance of a last CP by the storage node that are to be reconstructed and replayed based on the data structure, (ii) information regarding the last CP, and (iii) operation headers contained in the journal. 9. The method of claim 8, wherein the journaling includes logging to a journal and mirroring of the journal to a high-availability (HA) partner node of a second distributed storage system and wherein the method further comprises during recovery from a crash of the node, identifying those of a plurality of single I/O write operations performed by the node prior to performance of a last CP by the node that are to be reconstructed and replayed based on the data structure, information regarding the last CP, and operation headers contained in the journal. 12. 
A storage node comprising: one or more processors; and instructions that when executed by the one or more processors cause the storage node to: receive a write operation from a client; based on compressibility of a data payload of the write operation, perform a single input/output (I/O) write operation including writing the data payload in compressed form to a data block associated with a particular block number within persistent block storage; after completion of the single I/O write operation, initiate journaling of an operation header, containing information identifying the write operation and the particular block number, to a journal storage medium; and prior to completion of the journaling, acknowledge receipt of the write operation to the client. 12. A distributed storage system comprising: one or more processors; and a non-transitory computer-readable medium, coupled to the one or more processors, having stored therein instructions that when executed by the one or more processors cause a node of the distributed storage system to: receive a write operation from a client; based on compressibility of a data payload of the write operation, perform a single input/output (I/O) write operation including writing the data payload in compressed form to a data block associated with a particular block number within a block storage media; after completion of the single I/O write operation, initiate journaling of an operation header, containing information identifying the write operation and the particular block number, to a journal storage medium; and without waiting for completion of the journaling, acknowledge by the node receipt of the write operation to the client. 
Regarding claims 12-20, it is noted that while the claims of the parent is a system and the claims of the instant application is a storage node, the instant application merely applies the teaching to a different technological environment, and it would be obvious to derive the storage node from a system comprising the storage node. 13. The storage node of claim 12, wherein the single I/O write operation further includes writing a packed block header containing an operation header entry corresponding to the write operation to the data block and wherein the instructions further cause the storage node to: maintain a data structure containing information regarding a plurality of block numbers that are available for single I/O write operations; mark the particular block number within the data structure as being associated with a particular consistency point (CP) active at a time of the single I/O write operation; and prior to performing the single I/O write operation, store information regarding the particular CP within metadata of the packed block header. 13. The distributed storage system of claim 12, wherein the single I/O write operation further includes writing a packed block header containing an operation header entry corresponding to the write operation to the data block and wherein the instructions further cause the node to: maintain a data structure containing information regarding a plurality of block numbers that are available for single I/O write operations; mark the particular block number within the data structure as being associated with a particular consistency point (CP) active at a time of the single I/O write operation; and prior to performing the single I/O write operation, store information regarding the particular CP within metadata of the packed block header. 14. 
The storage node of claim 12, wherein the instructions further cause the storage node to, during recovery from a crash of the storage node, identify (i) those of a plurality of single I/O write operations performed by the storage node prior to performance of a last CP that are to be reconstructed and replayed based on the data structure, (ii) information regarding the last CP, and (iii) operation headers contained in the journal. 14. The distributed storage system of claim 13, wherein the journaling includes logging to a journal and mirroring of the journal to a high-availability (HA) partner node of a second distributed storage system and wherein the instructions further cause the node to during recovery from a crash of the node, identify (i) those of a plurality of single I/O write operations performed by the node prior to performance of a last CP by the node that are to be reconstructed and replayed based on the data structure, (ii) information regarding the last CP, and (iii) operation headers contained in the journal. 15. The storage node of claim 14, wherein identification of said those of a plurality of single I/O write operations comprises for any block numbers marked in the data structure as being associated with the last CP, that are not present in the journal, includes determining whether a corresponding data block persisted to the collection of disks represents a valid single I/O data block based on existence of a packed block header within the corresponding data block. 15. The distributed storage system of claim 14, wherein identification of said those of a plurality of single I/O write operations comprises for any block numbers marked in the data structure as being associated with the last CP, that are not present in the journal, determining whether a corresponding data block persisted to the collection of disks represents a valid single I/O data block based on existence of a packed block header within the corresponding data block. 16. 
The storage node of claim 12, wherein the storage node comprises a virtual storage system or a commodity computer system in which latency of the journal storage medium is within plus or minus 10% of latency of the persistent block storage. 17. The distributed storage system of claim 13, wherein latency of the journal storage medium is plus or minus 10% of latency of the block storage media. 17. The storage node of claim 16, wherein the journal storage medium comprises a solid-state drive (SSD). 18. The distributed storage system of claim 17, the journal storage medium comprises a solid-state drive (SSD)... 18. The storage node of claim 16, wherein the persistent block storage comprises a collection of one or more SSDs. 18. The distributed storage system of claim 17, … the block storage media comprises a collection of one or more SSDs. 19. The storage node of claim 18, further comprising a file system layer and an intermediate storage layer interposed between the file system layer and the persistent block storage, and wherein the intermediate storage layer performs the single I/O write operation. 19. The distributed storage system of claim 13, wherein an intermediate storage layer of the node performs the single I/O write operation, wherein the intermediate storage layer is interposed between a file system layer of the node and the block storage media. 20. The storage node of claim 19, wherein the intermediate storage layer comprises a redundant array of independent disks (RAID) layer and wherein the collection of one or more SSDs is managed by the RAID layer. 20. The distributed storage system of claim 19, wherein the intermediate storage layer comprises a redundant array of independent disks (RAID) layer and wherein the collection of one or more SSDs is managed by the RAID layer. Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over the claims of U.S. Patent No. No. 11,861,172 B2 as outlined in the table below. 
Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are anticipated by the claims of Patent No. 11,861,172 B2.

Instant Application 18/982,911 | US Patent No. 11,861,172 B2 (Parent App. 17/672,401)

1. A non-transitory machine readable medium storing instructions, which when executed by one or more processing resources of a node of storage system, cause the node to: based on compressibility of a data payload of a write operation received from a client, perform a single input/output (I/O) write operation including writing the data payload in compressed form to a data block associated with a particular block number within persistent block storage; after completion of the single I/O write operation, initiate journaling of an operation header, containing information identifying the write operation and the particular block number, to a journal storage medium; and prior to completion of the journaling, acknowledge receipt of the write operation to the client. 1. 
A non-transitory machine readable medium storing instructions, which when executed by a processing resource of a node of distributed storage system, cause the node to: responsive to receiving a write operation from a client by a file system layer of the node and determining a data payload of the write operation meets a compressibility threshold, cause an intermediate storage layer of the node logically interposed between the file system layer and a block storage media to perform a single input/output (I/O) write operation, wherein the single I/O write operation involves writing a packed block header containing an operation header entry corresponding to the write operation, and the data payload in compressed form to a data block associated with a particular block number within the block storage media; and responsive to completion of the single I/O write: initiate, by the file system layer, journaling of an operation header containing the particular block number; and without waiting for completion of the journaling, acknowledge, by the file system layer, receipt of the write operation to the client. 2. The non-transitory machine readable medium of claim 1, wherein the single I/O write operation further includes writing a packed block header containing an operation header entry corresponding to the write operation to the data block and wherein the instructions further cause the node to: maintain a data structure containing information regarding a plurality of block numbers that are available for single I/O write operations; mark the particular block number within the data structure as being associated with a particular consistency point (CP) active at a time of the single I/O write operation; and prior to performing the single I/O write operation, store information regarding the particular CP within metadata of the packed block header. 2. 
The non-transitory machine readable medium of claim 1, wherein the instructions further cause the node to: maintain, by the file system layer, a persistent on-disk data structure containing information regarding a plurality of block numbers that are available for single I/O write operations; mark the particular block number within the persistent on-disk data structure as being associated with a particular consistency point (CP) active at a time of the single I/O write operation; and prior to performing the single I/O write operation, store information regarding the particular CP within metadata of the packed block header. 3. The non-transitory machine readable medium of claim 2, wherein the node is operating in a high-availability (HA) configuration with another node that represents an HA partner of the node and wherein the journaling includes logging to a journal and mirroring of the journal to the HA partner. 3. The non-transitory machine readable medium of claim 2, wherein the journaling includes logging to a journal and mirroring of the journal to a high-availability (HA) partner node of a second distributed storage system… 4. The non-transitory machine readable medium of claim 2, wherein the instructions further cause the node to during recovery from a crash of the node, identify (i) those of a plurality of single I/O write operations performed by the node prior to performance of a last CP by the node that are to be reconstructed and replayed based on the data structure, (ii) information regarding the last CP, and (iii) operation headers contained in the journal. 3. 
The non-transitory machine readable medium of claim 2, wherein the journaling includes logging to a journal and mirroring of the journal to a high-availability (HA) partner node of a second distributed storage system and wherein the instructions further cause the node to during recovery from a crash of the node, identify (i) those of a plurality of single I/O write operations performed by the node prior to performance of a last CP by the node that are to be reconstructed and replayed based on the persistent on-disk data structure, (ii) information regarding the last CP, and (iii) operation headers contained in the journal. 5. The non-transitory machine readable medium of claim 4, wherein identification of said those of a plurality of single I/O write operations comprises for any block numbers marked in the data structure as being associated with the last CP, that are not present in the journal, includes determining whether a corresponding data block persisted to the collection of disks represents a valid single I/O data block based on existence of a packed block header within the corresponding data block. 4. The non-transitory machine readable medium of claim 3, wherein identification of said those of a plurality of single I/O write operations comprises for any block numbers marked in the persistent on-disk data structure as being associated with the last CP, that are not present in the journal, determining whether a corresponding data block persisted to the collection of disks represents a valid single I/O data block based on existence of a packed block header within the corresponding data block. 7. 
A method comprising: receiving, by a storage node, a write operation from a client; performing, by the storage node, a single input/output (I/O) write operation including writing a data payload of the write operation in compressed form to a data block associated with a particular block number within persistent block storage; after completion of the single I/O write operation, in parallel: causing, by the storage node, journaling of an operation header, containing information identifying the write operation and the particular block number, to be stored to a journal storage medium by performing an asynchronous journaling operation; and sending, by the storage node, an acknowledgement to the client regarding receipt of the write operation. 7. A method comprising: responsive to receiving a write operation from a client by a file system layer of a node of a distributed storage system and determining a data payload of the write operation meets a compressibility threshold, causing an intermediate storage layer of the node logically interposed between the file system layer and a block storage media to perform a single input/output (I/O) write operation, wherein the single I/O write operation involves writing a packed block header containing an operation header entry corresponding to the write operation, and the data payload in compressed form to a data block associated with a particular block number within the block storage media; and responsive to completion of the single I/O write: initiating, by the file system layer, journaling of an operation header containing the particular block number; and without waiting for completion of the journaling, acknowledging, by the file system layer, receipt of the write operation to the client. 8. 
The method of claim 7, wherein the single I/O write operation further includes writing a packed block header containing an operation header entry corresponding to the write operation to the data block and wherein the method further comprises: maintaining, by the storage node, a data structure containing information regarding a plurality of block numbers that are available for single I/O write operations; marking, by the storage node, the particular block number within the data structure as being associated with a particular consistency point (CP) active at a time of the single I/O write operation; and prior to performing the single I/O write operation, including, by the storage node, information regarding the particular CP within metadata of the packed block header. 8. The method of claim 7, further comprising: maintaining, by the file system layer, a data structure containing information regarding a plurality of block numbers that are available for single I/O write operations; marking the particular block number within the data structure as being associated with a particular consistency point (CP) active at a time of the single I/O write operation; and prior to performing the single I/O write operation, storing information regarding the particular CP within metadata of the packed block header. 12. 
A storage node comprising: one or more processors; and instructions that when executed by the one or more processors cause the storage node to: receive a write operation from a client; based on compressibility of a data payload of the write operation, perform a single input/output (I/O) write operation including writing the data payload in compressed form to a data block associated with a particular block number within persistent block storage; after completion of the single I/O write operation, initiate journaling of an operation header, containing information identifying the write operation and the particular block number, to a journal storage medium; and prior to completion of the journaling, acknowledge receipt of the write operation to the client. 12. A distributed storage system comprising: a processing resource; and a non-transitory computer-readable medium, coupled to the processing resource, having stored therein instructions that when executed by the processing resource cause a node of the distributed storage system to: responsive to receiving a write operation from a client by a file system layer of the node and determining a data payload of the write operation meets a compressibility threshold, cause an intermediate storage layer of the node logically interposed between the file system layer and a block storage media to perform a single input/output (I/O) write operation, wherein the single I/O write operation involves writing a packed block header containing an operation header entry corresponding to the write operation, and the data payload in compressed form to a data block associated with a particular block number within the block storage media; and responsive to completion of the single I/O write: initiate, by the file system layer, journaling of an operation header containing the particular block number; and without waiting for completion of the journaling, acknowledge, by the file system layer, receipt of the write operation to the client. 
Regarding claims 12-20, it is noted that while the claims of the parent are directed to a system and the claims of the instant application are directed to a storage node, the instant application merely applies the teaching to a different technological environment, and it would be obvious to derive the storage node from a system comprising the storage node. 13. The storage node of claim 12, wherein the single I/O write operation further includes writing a packed block header containing an operation header entry corresponding to the write operation to the data block and wherein the instructions further cause the storage node to: maintain a data structure containing information regarding a plurality of block numbers that are available for single I/O write operations; mark the particular block number within the data structure as being associated with a particular consistency point (CP) active at a time of the single I/O write operation; and prior to performing the single I/O write operation, store information regarding the particular CP within metadata of the packed block header. 13. The distributed storage system of claim 12, wherein the instructions further cause the node to: maintain, by the file system layer, a persistent on-disk data structure containing information regarding a plurality of block numbers that are available for single I/O write operations; mark the particular block number within the persistent on-disk data structure as being associated with a particular consistency point (CP) active at a time of the single I/O write operation; and prior to performing the single I/O write operation, store information regarding the particular CP within metadata of the packed block header. 14. 
The storage node of claim 12, wherein the instructions further cause the storage node to, during recovery from a crash of the storage node, identify (i) those of a plurality of single I/O write operations performed by the storage node prior to performance of a last CP that are to be reconstructed and replayed based on the data structure, (ii) information regarding the last CP, and (iii) operation headers contained in the journal. 14. The distributed storage system of claim 13, wherein the journaling includes logging to a journal and mirroring of the journal to a high-availability (HA) partner node of a second distributed storage system and wherein the instructions further cause the node to during recovery from a crash of the node, identify (i) those of a plurality of single I/O write operations performed by the node prior to performance of a last CP by the node that are to be reconstructed and replayed based on the persistent on-disk data structure, (ii) information regarding the last CP, and (iii) operation headers contained in the journal. 15. The storage node of claim 14, wherein identification of said those of a plurality of single I/O write operations comprises for any block numbers marked in the data structure as being associated with the last CP, that are not present in the journal, includes determining whether a corresponding data block persisted to the collection of disks represents a valid single I/O data block based on existence of a packed block header within the corresponding data block. 15. 
The distributed storage system of claim 14, wherein identification of said those of a plurality of single I/O write operations comprises for any block numbers marked in the persistent on-disk data structure as being associated with the last CP, that are not present in the journal, determining whether a corresponding data block persisted to the collection of disks represents a valid single I/O data block based on existence of a packed block header within the corresponding data block. 19. The storage node of claim 18, further comprising a file system layer and an intermediate storage layer interposed between the file system layer and the persistent block storage, and wherein the intermediate storage layer performs the single I/O write operation. 12… responsive to receiving a write operation from a client by a file system layer of the node and determining a data payload of the write operation meets a compressibility threshold, cause an intermediate storage layer of the node logically interposed between the file system layer and a block storage media to perform a single input/output (I/O) write operation… 20. The storage node of claim 19, wherein the intermediate storage layer comprises a redundant array of independent disks (RAID) layer and wherein the collection of one or more SSDs is managed by the RAID layer. 19. The distributed storage system of claim 12, wherein the intermediate storage layer comprises a redundant array of independent disks (RAID) layer. 20. The distributed storage system of claim 19, wherein the block storage media comprises a collection of disks managed by the RAID layer. Allowable Subject Matter Claims 2-5, 8-10, 13 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. 
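The single I/O write path common to the independent claims quoted above (write the packed block header and compressed payload to one data block in a single I/O, then initiate journaling of the operation header and acknowledge the client without waiting for the journal to complete) can be sketched as follows. This is an illustrative Python sketch, not the applicant's actual implementation; the header layout, `MAGIC` value, and the `block_store`/`journal` interfaces are hypothetical.

```python
import struct
import threading
import zlib

BLOCK_SIZE = 4096
HEADER_FMT = "!IHI"    # magic, compressed length, block number (illustrative layout)
MAGIC = 0x50424B48     # hypothetical packed-block-header magic value

def single_io_write(payload: bytes, block_number: int, block_store, journal):
    """Perform the claimed single I/O write: packed block header plus
    compressed payload in one block write, then journal asynchronously."""
    compressed = zlib.compress(payload)
    header = struct.pack(HEADER_FMT, MAGIC, len(compressed), block_number)
    if len(header) + len(compressed) > BLOCK_SIZE:
        # Payload fails the compressibility threshold; the claimed path
        # only applies when everything fits in a single data block.
        raise ValueError("payload not compressible enough for a single-block write")
    # The single I/O: header and compressed data land in one data block.
    block_store[block_number] = (header + compressed).ljust(BLOCK_SIZE, b"\x00")
    # Journaling of the operation header is initiated after the write...
    threading.Thread(target=journal.append, args=((block_number, len(compressed)),)).start()
    # ...and the client is acknowledged without waiting for it to complete.
    return "ACK"
```

The ordering is the point of the claims: because the data block is durable before the acknowledgement, the journal entry can complete lazily, and recovery can reconstruct any entry the journal missed.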
Specifically regarding claim 2, “wherein the single I/O write operation further includes writing a packed block header containing an operation header entry corresponding to the write operation to the data block and wherein the instructions further cause the node to: maintain a data structure containing information regarding a plurality of block numbers that are available for single I/O write operations; mark the particular block number within the data structure as being associated with a particular consistency point (CP) active at a time of the single I/O write operation; and prior to performing the single I/O write operation, store information regarding the particular CP within metadata of the packed block header,” is not taught by the prior art. The closest prior art of record is ST, as outlined above, in further view of Subramanian et al. (US 2017/0031772 A1). Subramanian et al. teaches that a file system can maintain an active map of a snapshot, i.e., an image of the claimed consistency point. However, neither ST nor Subramanian, individually or in combination, discloses storing information regarding a particular CP within metadata of the packed block header prior to performing the single I/O write operation itself, wherein the single I/O write operation involves writing a packed block header containing an operation header entry corresponding to the single write operation. Claims 3-5 would be allowable at least due to their dependency on claim 2. Claims 8 and 13 recite substantially similar claim limitations to those of claim 2 and are allowable for the same reasons as discussed above. Claims 9-10 would be allowable at least due to their dependency on claim 8. 
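The feature the examiner identifies as allowable in claim 2 has three parts: maintain a data structure of block numbers available for single I/O writes, mark the claimed block with the CP active at write time, and store that CP information in the packed block header's metadata before the write is performed. A minimal sketch of those three steps, assuming hypothetical names (`SingleIOAllocator`, `build_packed_header`) that do not come from the application:

```python
class SingleIOAllocator:
    """Illustrative tracker for block numbers available for single-I/O
    writes and the consistency point (CP) each claimed block belongs to."""

    def __init__(self, block_numbers):
        self.available = set(block_numbers)  # blocks free for single-I/O writes
        self.cp_marks = {}                   # block number -> CP active at write time

    def claim_block(self, current_cp):
        block_number = self.available.pop()       # take any free block number
        self.cp_marks[block_number] = current_cp  # mark the CP association
        return block_number

def build_packed_header(op_id, block_number, cp_id):
    # Per the allowable feature, CP information goes into the packed block
    # header's metadata *before* the single I/O write is performed.
    return {"magic": "PBKH", "op": op_id, "block": block_number, "cp": cp_id}
```

The CP mark in the data structure and the CP copy inside the on-disk header are deliberately redundant: together they let recovery decide, per block, whether a write from the last CP is valid even when its journal entry never landed.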
Specifically regarding claim 15, “wherein identification of said those of a plurality of single I/O write operations comprises for any block numbers marked in the data structure as being associated with the last CP, that are not present in the journal, includes determining whether a corresponding data block persisted to the collection of disks represents a valid single I/O data block based on existence of a packed block header within the corresponding data block,” is not taught by the prior art of record. Subramanian et al. generally teaches that a file system can maintain an active map of a snapshot, i.e., an image of the claimed consistency point; however, it is silent with regard to the remaining limitations.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Chatterjee (US 8554734) teaches that a logging journal internal to the storage server can perform an I/O operation completed in parallel, resulting in a latency substantially equal to one standard data storage write. Pawar et al. (US 9542396) teaches managing a transaction log of a file system stored on a non-volatile storage medium, where changes are tracked by examining the metadata. Kumar et al. (US 2021/0294515 A1) teaches generating metadata for journal entries for batching.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JANE W BENNER whose telephone number is (571)270-0067. The examiner can normally be reached Mon - Thurs (8 AM - 5 PM). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, REGINALD BRAGDON, can be reached at (571) 272-4204. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

JANE W. BENNER
Primary Examiner, Art Unit 2139

/JANE W BENNER/
Primary Examiner, Art Unit 2139
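The crash-recovery identification recited in claims 4-5 and 14-15 above can be sketched as follows: replay candidates are block numbers marked in the data structure as belonging to the last CP that are absent from the journal, each validated by checking the persisted data block for a packed block header. This is an illustrative sketch under assumed data shapes (dicts and sets), not the claimed implementation; `PACKED_HEADER_MAGIC` is a hypothetical on-disk marker.

```python
PACKED_HEADER_MAGIC = b"PBKH"  # hypothetical magic identifying a packed block header

def writes_to_replay(cp_marks, last_cp, journaled_blocks, block_store):
    """Identify single-I/O writes to reconstruct and replay after a crash:
    block numbers marked for the last CP that never made it into the
    journal, validated by the presence of a packed block header in the
    block persisted to disk."""
    replay = []
    for block_number, cp in cp_marks.items():
        if cp != last_cp or block_number in journaled_blocks:
            continue  # wrong CP, or already covered by a journal entry
        block = block_store.get(block_number, b"")
        if block.startswith(PACKED_HEADER_MAGIC):  # valid single-I/O data block
            replay.append(block_number)
    return replay
```

This is the mechanism that makes the early acknowledgement in the write path safe: even if the node crashes before the journal entry lands, the CP mark plus the self-describing header let recovery find and replay the write.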

Prosecution Timeline

Dec 16, 2024
Application Filed
Mar 20, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602327
COUNTER-BASED PREFETCH MANAGEMENT FOR MEMORY
2y 5m to grant Granted Apr 14, 2026
Patent 12602180
STORAGE DEVICE FOR REDUCING MULTI STREAM WRITE IMBALANCE AND OPERATION METHOD THEREOF
2y 5m to grant Granted Apr 14, 2026
Patent 12602183
CONTROLLER AND METHODS FOR MANAGING TIMING CHARACTERISTICS USING COLLECTED METRICS AND A TELEMETRY RATIO
2y 5m to grant Granted Apr 14, 2026
Patent 12585410
STORAGE DEVICE AND OPERATING METHOD THEREOF
2y 5m to grant Granted Mar 24, 2026
Patent 12566566
SSD SYSTEM WITH CONSISTENT READ PERFORMANCE
2y 5m to grant Granted Mar 03, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
84%
Grant Probability
92%
With Interview (+8.9%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 298 resolved cases by this examiner. Grant probability derived from career allow rate.
