Prosecution Insights
Last updated: April 19, 2026
Application No. 18/752,435

LIVE WRITES TO ERASURE CODED VOLUMES WITHOUT PRIOR REPLICATION

Final Rejection — §103
Filed: Jun 24, 2024
Examiner: JUNG, ANDREW J
Art Unit: 2175
Tech Center: 2100 — Computer Architecture & Software
Assignee: Dropbox Inc.
OA Round: 2 (Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 58% — grants 58% of resolved cases (80 granted / 139 resolved; +2.6% vs TC avg)
Interview Lift: +37.3% — strong lift among resolved cases with an interview
Typical Timeline: 3y 1m average prosecution; 9 applications currently pending
Career History: 148 total applications across all art units

Statute-Specific Performance

§101: 6.2% allow rate (-33.8% vs TC avg)
§103: 53.4% allow rate (+13.4% vs TC avg)
§102: 11.4% allow rate (-28.6% vs TC avg)
§112: 22.0% allow rate (-18.0% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 139 resolved cases
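The per-statute deltas above are all consistent with a single Tech Center baseline. A quick sketch backs that baseline out; the values are copied from the cards, and the relationship (delta = examiner rate minus TC average) is an inference about how the chart is computed, not documented behavior:

```python
# Back out the Tech Center average implied by each statute's allow rate
# and its "vs TC avg" delta. Values come from the cards above; treating
# delta as (examiner rate - TC average) is an assumption.
examiner_rate = {"101": 6.2, "103": 53.4, "102": 11.4, "112": 22.0}
delta_vs_tc = {"101": -33.8, "103": 13.4, "102": -28.6, "112": -18.0}

tc_average = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_average)  # every statute backs out to the same 40.0% baseline
```

That every statute implies the same 40.0% figure supports reading the four deltas as offsets from one shared Tech Center average.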

Office Action — §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to the amendment filed on September 23, 2025. Claims 1-3, 6-7, 9-10, 12-14, and 16-19 have been amended. Claims 8, 11, and 15 have been canceled. Claims 21-23 have been added. The objections and rejections from the prior correspondence that are not restated herein are withdrawn.

Response to Arguments

Applicant's arguments filed on October 1, 2025 have been fully considered but are not persuasive. Applicant argues that the RAID storage options discussed in KNAUFT are not chosen based on "identifying a source of the first data block as a storage front end, by the first data block being a request to make a live write, or the first data block includes a request that needs to be accessed". The Examiner respectfully disagrees. KNAUFT [0040] teaches writing out data from in-memory bank 120 to an available segment of CapObjO on capacity tier 114, where a "segment" is a region of space in the LFS disk layout of CapObjO that can hold the contents of the bank, and [0071] teaches flushing the contents of in-memory metadata cache 122 to MetaObjO on performance tier 112 (i.e., the determination is based on a source of the first data block); [0043] also teaches updating and managing the metadata for each storage object on performance tier 112 due to the high I/O throughput and low I/O latency of performance tier 112 and the small size of the metadata object relative to the capacity object (i.e., an indication that the first data block needs to be accessed). Therefore, KNAUFT teaches the amended limitation of claim 1 as outlined in the rejection below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over KNAUFT (Pub. No.: US 20210311631 A1), hereafter KNAUFT, in view of LEE (Pub. No.: US 20200274556 A1), hereafter LEE.

Regarding claim 1, KNAUFT teaches: A non-transitory computer-readable storage medium comprising instructions that when executed by at least one processor, cause the at least one processor to: determine that a first data block is received in association with a latency-sensitive request of a data storage system [...]; in response to determining that the first data block is received in association with the latency-sensitive request of the data storage system: replicate the first data block to a first plurality of disks of the data storage system (KNAUFT [0019] teaches the capacity object is stored on the capacity tier of the distributed storage system using an erasure coding scheme (e.g., RAID-5 or RAID-6), while the metadata object is stored on the performance tier of the distributed storage system using a mirroring scheme (e.g., RAID-1), where [0020] teaches write commands of metadata for the storage object are written to its corresponding metadata object on the performance tier using mirroring), wherein the determination is based on a source of the first data block, a type of a write request associated with the first data block, or an indication that the first data block needs to be accessed (KNAUFT [0040] teaches writing out data from in-memory bank 120 to an available segment of CapObjO on capacity tier 114, where a "segment" is a region of space in the LFS disk layout of CapObjO that can hold the contents of the bank, and [0071] teaches flushing the contents of in-memory metadata cache 122 to MetaObjO on performance tier 112 (i.e., the determination is based on a source of the first data block); [0043] also teaches updating and managing the metadata for each storage object on performance tier 112 due to the high I/O throughput and low I/O latency of performance tier 112 and the small size of the metadata object relative to the capacity object (i.e., an indication that the first data block needs to be accessed)); the first plurality of disks of the data storage system are distributed across multiple servers (KNAUFT [0037] teaches metadata object MetaObjO is created/managed using a traditional, overwrite-based file system disk layout and is mirrored (via, e.g., RAID-1) across some, or all, of storage devices 108(1)-(N) of performance tier 112); determine that a second data block is received in association with a non-latency-sensitive request of the data storage system; in response to determining that the second data block is received in association with the non-latency-sensitive request of the data storage system: erasure code the second data block to the second plurality of disks without replicating the second data block (KNAUFT [0019] teaches the capacity object is stored on the capacity tier of the distributed storage system using an erasure coding scheme (e.g., RAID-5 or RAID-6) and is managed using a LFS disk layout, where [0020] teaches write commands that are directed to stripes of the storage object are issued as writes to its corresponding capacity object on the capacity tier); the second plurality of disks of the data storage system are distributed across multiple servers (KNAUFT [0037] teaches capacity object CapObjO is created/managed using a LFS disk layout and is striped across some, or all, of storage devices 110(1)-(N) of capacity tier 114 in accordance with storage object O's provisioned erasure coding scheme).

KNAUFT does not appear to explicitly teach after replicating the first data block to the first plurality of disks, erasure coding the first data block to a second plurality of disks of the data storage system. However, LEE teaches the limitation (LEE [0060] teaches the data management circuit 104 may decide to replace the redundant data associated with the data set with less resilient data; [0061] teaches changes to previously stored user data and their redundancy levels may occur; [0072] teaches another redundancy scheme to provide resiliency when disks fail in the system 100 may be to have a second copy of the data be erasure coded across a set of storage devices 106 (e.g., devices 106b, 106c, 106d, 106e, 106f, 106g, and 106h), with a first copy of the data set fully stored on device 106a. In such an embodiment, the data set 101 may be stored in its hashed location in its original form (the first copy on device 106a), but the second copy will be erasure coded and distributed across the defined set of devices 106b-106h).

Accordingly, it would have been obvious to a person having ordinary skill in the art at the time of the effective filing of the invention, having the teachings of KNAUFT and LEE before them, to include LEE's change of redundancy levels in KNAUFT's distributed storage system.
One would have been motivated to make such a combination in order to employ a less expensive redundancy scheme, such as erasure or error encoding, as the storage fills up and the free space decreases as taught by LEE [0049], and to allow the system to support more than a single storage device failure while not incurring the latency delay needed to reassemble the data set during a read/Get operation (LEE [0073]), providing higher levels of performance and availability/reliability without sacrificing usable capacity, thus allowing significantly lower system cost (LEE [0087]).

Regarding claim 2, KNAUFT in view of LEE teaches the elements of claim 1 as outlined above. KNAUFT also teaches wherein the instructions further configure the at least one processor to: send a put() request acknowledgment to the non-latency-sensitive request prior to the second data blocks being accessible from the data storage system (see KNAUFT Fig. 4 #418 & [0052] for sending an acknowledgment to the client which originated the write request indicating that the write request has been processed (thereby allowing the client to proceed with its operation), and in #428 the contents of the in-memory bank are written via a full stripe write to the allocated segment).

Regarding claim 3, KNAUFT in view of LEE teaches the elements of claim 1 as outlined above. KNAUFT also teaches wherein the non-latency-sensitive request comes with a callback address to be notified when the first data block is accessible from the data storage system (KNAUFT [0055] teaches once the data blocks have been reordered, full stripe write handler 118 can calculate and fill in the parity blocks for each stripe of data blocks in in-memory bank 120 (step 424), allocate a new segment in CapObjO for holding the contents of in-memory bank 120 (or find an existing free segment via the SUT) (step 426), and write out in-memory bank 120 via a full stripe write to that segment (step 428). Full stripe write handler 118 can further update the logical map in in-memory metadata cache 122 so that the LBAs of the logical data blocks in the bank/segment now point to the PBAs on capacity tier 114 where the data blocks now reside, and update the SUT in in-memory metadata cache 122 to identify the new segment of CapObjO and the number of live data blocks in that segment (step 430)).

Regarding claim 4, KNAUFT in view of LEE teaches the elements of claim 1 as outlined above. KNAUFT also teaches wherein it is determined that the first data block is from a non-latency-sensitive request when the non-latency-sensitive request is received by an acceptor service (see KNAUFT [0020] as outlined in claim 1 above).

Regarding claim 5, KNAUFT in view of LEE teaches the elements of claim 1 as outlined above. KNAUFT also teaches wherein it is determined that the first data block is from the latency-sensitive request when the latency-sensitive client is received by a storage front end (see KNAUFT [0020] as outlined in claim 1 above).

Regarding claim 21, KNAUFT in view of LEE teaches the elements of claim 1 as outlined above. KNAUFT also teaches wherein the determination is based on identifying the source of the first data block as a storage front end or the type of the write request is a live write (see KNAUFT [0040] & [0071] as taught above in claim 1, where data from in-memory bank 120 is written to CapObjO on capacity tier 114, and data from in-memory metadata cache 122 is written to MetaObjO on performance tier 112).

Claims 6-7, 9-10, 12-14, 16-20, and 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over KNAUFT in view of LEE and GUPTA (Pub. No.: US 20230315303 A1), hereafter GUPTA.

Regarding claim 6, the claim recites similar limitations as corresponding claim 1 and is rejected for similar reasons as claim 1 using similar teachings and rationale. KNAUFT also teaches: collecting a second plurality of data blocks in a buffer (see KNAUFT Fig. 1 in-memory bank 120 & in-memory metadata cache 122, where [0039] teaches caching the write request in an in-memory bank 120, and caching certain metadata of O that is modified due to the write request in an in-memory metadata cache 122); erasure coding the second plurality of data blocks onto a second plurality of disks of a storage system, without replicating the second plurality of data blocks to any one of the second plurality of disks of the storage system (see KNAUFT [0019]-[0020] as outlined in claim 1 above). KNAUFT in view of LEE does not appear to explicitly teach wherein the erasure coding is performed using a local reconstruction code (LRC) erasure coding scheme. However, GUPTA teaches the limitation (GUPTA [0046] teaches determining parity stripes using LRC). Accordingly, it would have been obvious to a person having ordinary skill in the art at the time of the effective filing of the invention, having the teachings of KNAUFT, LEE, and GUPTA before them, to include GUPTA's local reconstruction code scheme in KNAUFT's distributed storage system. One would have been motivated to make such a combination in order to optimize rebuild reads and storage space overhead.

Regarding claim 7, KNAUFT in view of LEE and GUPTA teaches the elements of claim 6 as outlined above. KNAUFT also teaches wherein the erasure coding of the plurality of data blocks onto the second plurality of disks begins after there is a threshold amount of data blocks present in the buffer (see KNAUFT Fig. 5).

Regarding claim 9, KNAUFT in view of LEE and GUPTA teaches the elements of claim 6 as outlined above. KNAUFT also teaches sending a put() request acknowledgment in response to the non-latency-sensitive request prior to the first plurality of data being accessible from the data storage system (see KNAUFT Fig. 4 #418 & [0052] as outlined in claim 2 above).

Regarding claim 10, KNAUFT in view of LEE and GUPTA teaches the elements of claim 9 as outlined above.
KNAUFT also teaches wherein the non-latency-sensitive request provides a callback address to be notified when the second plurality of data is accessible from the data storage system (see KNAUFT [0055] as outlined in claim 3 above).

Regarding claim 12, KNAUFT in view of LEE and GUPTA teaches the elements of claim 6 as outlined above. KNAUFT also teaches: determining that a first data block is part of a latency-sensitive request of a data storage system (see KNAUFT [0019]-[0020] as outlined in claim 1 above); replicating the first data block to at least two of the plurality of disks (see KNAUFT [0037] as outlined in claim 1 above); sending a put() request acknowledgment indicating that the first data block is stored and accessible from the storage system (see KNAUFT Fig. 10 #1026, sending an ACK to the client). LEE also teaches after replicating the first data block to the at least two of the plurality of disks, erasure coding the first data block to additional disks of the plurality of disks (see LEE [0060]-[0061] and [0072] as outlined in claim 1 above). The same motivation that was utilized for combining KNAUFT, LEE, and GUPTA as set forth in claim 6 is equally applicable to claim 12.

Regarding claim 13, KNAUFT in view of LEE and GUPTA teaches the elements of claim 12 as outlined above. KNAUFT also teaches wherein it is determined that the first data block is part of the latency-sensitive request when the client accesses the data storage system through a storage front end (see KNAUFT [0020] as outlined in claim 5 above).

Regarding claim 14, the claim recites similar limitations as corresponding claim 6 and is rejected for similar reasons as claim 6 using similar teachings and rationale. KNAUFT also teaches a computing system comprising: at least one processor; and a memory storing instructions that, when executed by the at least one processor, configure the computing system (see KNAUFT Fig. 1).

Regarding claim 16, KNAUFT in view of LEE and GUPTA teaches the elements of claim 14 as outlined above. KNAUFT also teaches send a put() request acknowledgment in response to the non-latency-sensitive request prior to the second plurality of data being accessible from the storage system (see KNAUFT Fig. 4 #418 & [0052] as outlined in claim 2 above).

Regarding claim 17, KNAUFT in view of LEE and GUPTA teaches the elements of claim 16 as outlined above. KNAUFT also teaches wherein the non-latency-sensitive request includes a callback address to be notified when the second plurality of data is accessible from the storage system (see KNAUFT [0055] as outlined in claim 3 above).

Regarding claim 18, KNAUFT in view of LEE and GUPTA teaches the elements of claim 16 as outlined above. KNAUFT also teaches wherein it is determined that the second plurality of data is part of the non-latency-sensitive request when the non-latency-sensitive request is received through an acceptor service (see KNAUFT [0020] as outlined in claim 1 above).

Regarding claim 19, KNAUFT in view of LEE and GUPTA teaches the elements of claim 14 as outlined above. KNAUFT also teaches: determine that a first data block is received as part of a latency-sensitive request of a data storage system (see KNAUFT [0019]-[0020] as outlined in claim 1 above); replicate the first data block to at least two of the plurality of disks (see KNAUFT [0037] as outlined in claim 1 above); send a put() request acknowledgment indicating that the first data block is stored and accessible from the storage system (see KNAUFT Fig. 10 #1026, sending an ACK to the client). LEE also teaches after replicating the first data block to the at least two of the plurality of disks, erasure coding the first data block to additional disks of the plurality of disks (see LEE [0060]-[0061] and [0072] as outlined in claim 1 above).
The same motivation that was utilized for combining KNAUFT, LEE, and GUPTA as set forth in claim 14 is equally applicable to claim 19.

Regarding claim 20, KNAUFT in view of LEE and GUPTA teaches the elements of claim 14 as outlined above. GUPTA also teaches wherein the erasure coding is performed using a local reconstruction code (LRC) erasure coding scheme (GUPTA [0046] teaches determining parity stripes using LRC). The same motivation that was utilized for combining KNAUFT, LEE, and GUPTA as set forth in claim 14 is equally applicable to claim 20.

Regarding claim 22, the claim recites similar limitations as corresponding claim 21 and is rejected for similar reasons as claim 21 using similar teachings and rationale.

Regarding claim 23, the claim recites similar limitations as corresponding claim 21 and is rejected for similar reasons as claim 21 using similar teachings and rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

JUCH (Patent No.: US 11947814 B2), "Optimizing Resiliency Group Formation Stability", relates to utilizing mirroring and/or erasure coding schemes as part of storing data into addressable fast write storage.

WANG (Pub. No.: US 20210311652 A1), "Using Segment Pre-Allocation to Support Large Segments", relates to employing one of two data redundancy schemes, such as mirroring or erasure coding, thus improving the efficiency of executing write operations depending on whether the writes are directed to a partial stripe or to a full stripe (all of the data blocks within a stripe).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW J JUNG, whose telephone number is (571) 270-3779. The examiner can normally be reached Monday through Friday from 9am to 5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Wiley, can be reached at 571-272-4150. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW J JUNG/
Supervisory Patent Examiner, Art Unit 2175
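For readers skimming the rejection, the storage technique at issue in claim 1 — mirror latency-sensitive blocks first, erasure code everything else directly, then later re-encode the mirrored blocks (the gap LEE is cited to fill) — can be sketched in a few lines. This is a minimal illustration assuming toy two-way striping and single XOR parity in place of the RAID-5/6 and LRC schemes the references actually use; every name below is hypothetical, and nothing is taken from KNAUFT, LEE, GUPTA, or the application itself.

```python
from dataclasses import dataclass, field

@dataclass
class Volume:
    mirrors: list = field(default_factory=list)  # fast, replicated writes
    stripes: list = field(default_factory=list)  # erasure-coded stripes

def xor_parity(shards):
    """Single XOR parity over the shards (toy stand-in for RAID-5/6 or LRC)."""
    parity = bytearray(max(len(s) for s in shards))
    for shard in shards:
        for i, b in enumerate(shard):
            parity[i] ^= b
    return bytes(parity)

def write(volume, block, latency_sensitive, n_mirrors=3):
    """Claim-1-style routing: mirror latency-sensitive blocks across disks,
    erasure code everything else directly (no replication of the block)."""
    if latency_sensitive:
        volume.mirrors.append([block] * n_mirrors)
    else:
        shards = [block[0::2], block[1::2]]  # toy two-way striping
        volume.stripes.append(shards + [xor_parity(shards)])

def destage(volume):
    """Later pass (cf. LEE's change of redundancy levels): re-encode each
    mirrored block into an erasure-coded stripe and drop the extra copies."""
    while volume.mirrors:
        copies = volume.mirrors.pop()
        write(volume, copies[0], latency_sensitive=False)

vol = Volume()
write(vol, b"HOTDATA!", latency_sensitive=True)   # mirrored first
write(vol, b"COLDDATA", latency_sensitive=False)  # straight to erasure coding
destage(vol)                                      # hot block re-encoded later
print(len(vol.mirrors), len(vol.stripes))         # prints "0 2"
```

The point of contention in prosecution is not this mechanism itself but what triggers the routing decision (source of the block, write-request type, or access indication), which the sketch reduces to the single `latency_sensitive` flag.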

Prosecution Timeline

Jun 24, 2024 — Application Filed
Jun 27, 2025 — Non-Final Rejection (§103)
Sep 18, 2025 — Applicant Interview (Telephonic)
Sep 18, 2025 — Examiner Interview Summary
Oct 01, 2025 — Response Filed
Jan 22, 2026 — Final Rejection (§103)
Mar 04, 2026 — Interview Requested
Mar 24, 2026 — Applicant Interview (Telephonic)
Mar 30, 2026 — Request for Continued Examination
Mar 31, 2026 — Examiner Interview Summary
Apr 02, 2026 — Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572787 — NEUROMORPHIC DEVICE THAT PROVIDES A LOOKUP TABLE BASED RECONFIGURABLE NEURAL NETWORK ARCHITECTURE — Granted Mar 10, 2026 (2y 5m to grant)
Patent 12524622 — SYSTEMS AND METHODS RELATING TO KNOWLEDGE DISTILLATION IN NATURAL LANGUAGE PROCESSING MODELS — Granted Jan 13, 2026 (2y 5m to grant)
Patent 11663144 — LRU LIST REORGANIZATION FOR FAVORED AND UNFAVORED VOLUMES — Granted May 30, 2023 (2y 5m to grant)
Patent 11662929 — SYSTEMS, METHODS, AND COMPUTER READABLE MEDIA PROVIDING ARBITRARY SIZING OF DATA EXTENTS — Granted May 30, 2023 (2y 5m to grant)
Patent 11620220 — CACHE SYSTEM WITH A PRIMARY CACHE AND AN OVERFLOW CACHE THAT USE DIFFERENT INDEXING SCHEMES — Granted Apr 04, 2023 (2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 95% (+37.3%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 139 resolved cases by this examiner. Grant probability derived from career allow rate.
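The with-interview figure appears to be the career allow rate plus the interview lift, capped at 100%. A sketch of that inferred arithmetic follows; the formula is an assumption read off the displayed numbers, not the tool's documented method:

```python
def with_interview(base_rate_pct, interview_lift_pct):
    # Inferred from the cards: base allow rate plus interview lift,
    # rounded and capped at 100. Assumed, not documented.
    return min(100, round(base_rate_pct + interview_lift_pct))

print(with_interview(58.0, 37.3))  # prints 95, matching the "95% With Interview" card
```

Note the cap matters: for examiners with high base allow rates, adding the full lift would otherwise exceed 100%.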
