Prosecution Insights
Last updated: April 19, 2026
Application No. 18/487,328

Management of File System Requests in a Distributed Storage System

Final Rejection §103
Filed: Oct 16, 2023
Examiner: GEORGANDELLIS, ANDREW C
Art Unit: 2459
Tech Center: 2400 — Computer Networks
Assignee: WekaIO Ltd.
OA Round: 4 (Final)

Grant Probability: 56% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 4y 0m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 56% (grants 56% of resolved cases; 274 granted / 490 resolved; -2.1% vs TC avg)
Interview Lift: +40.4% (strong; allowance rate among resolved cases with vs. without an interview)
Typical Timeline: 4y 0m average prosecution; 18 applications currently pending
Career History: 508 total applications across all art units

Statute-Specific Performance

§101: 0.8% (-39.2% vs TC avg)
§103: 84.3% (+44.3% vs TC avg)
§102: 6.0% (-34.0% vs TC avg)
§112: 3.1% (-36.9% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 490 resolved cases.

Office Action

§103
DETAILED ACTION

Introduction

Claims 21-40 are pending. Claims 1-20 are cancelled. Claims 21 and 31 are amended. No new claims are added. This Office action is in response to Applicant’s request for continued examination (RCE) filed on 10/10/2025.

Other Prior Art

Colgrove (US 2012/0066449) teaches splitting a read request into a reconstruction request (i.e., comprising a plurality of requests each directed to a different storage device) in response to predicting that a storage system is in a long-latency response (i.e., congested) state. See par. 69.

Huang teaches generating a single transfer request corresponding to a requested chunk when the node storing the chunk is not congested, and generating multiple transfer requests corresponding to portions of the requested chunk, to be sent to the other nodes storing the portions, when the node storing the chunk is congested, thereby allowing the system to reconstruct the chunk from the portions without having to obtain the chunk directly from the congested node. See Huang et al., “Erasure Coding in Windows Azure Storage.”

Response to Arguments

Examiner discusses the arguments of Applicant’s representative below.

Rejection of claims 21 and 31 under 35 U.S.C. 103

Applicant’s representative has amended claims 21 and 31 to recite new features and now argues that the combination of Liang and either Ellis or Hasegawa does not teach the system of claims 21 and 31, as amended. However, Examiner respectfully disagrees for the reasons provided in the rejection below.

Claim Objections

Claim 21 recites the phrase “transmit the DESS requests via a network interface using congestion mitigation techniques,” but it is not clear whether the claimed “congestion mitigation techniques” are distinct from the step of splitting the file system requests into DESS requests, which is itself a congestion mitigation technique, as evidenced by claim 31.
Specifically, claim 31 recites instructions to “execute congestion mitigation by dynamically splitting requests according to in-queue workload conditions and predicted congestion states,” which makes it clear that the claimed “congestion mitigation technique” of claim 21 is in fact the step of splitting the file system requests into DESS requests.

Claims 23 and 33 recite a “state of the DESS,” but it is not clear whether this state refers to one of the “congestion states” referenced in claims 21 and 31.

Claim Rejections: 35 U.S.C. 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21-40 are rejected under 35 U.S.C. 103 because they are unpatentable over Liang (US 2015/0309874) in view of either Ellis (US 2006/0031600) or Hasegawa (US 2013/0073795).[1]

Regarding claims 21 and 31, Liang teaches a system, the system comprising: a first device operable in a distributed electronic storage system (DESS), wherein the first device comprises: a receive buffer operable to receive and store a plurality of file system requests (a key-value store client 302 includes a request queue 300 for buffering file system requests from applications 301; see par. 46; fig. 3); a transmit buffer operable to store and transmit a plurality of DESS requests (the key-value store client includes a task queue 440 that stores and transmits a plurality of tasks to storage devices of a distributed key-value store 303; see par. 51); and a DESS processor operable to: generate DESS requests (the key-value store client generates tasks for storage in the task queue; see par. 73), dynamically split and schedule the plurality of file system requests into DESS requests according to predicted congestion states and resource availability (Liang teaches splitting each file system request into a number of tasks that is determined based on request queue backlog (i.e., in-queue workload) information, which functions as a prediction of future system utilization, and task delay statistics, which are a function of resource availability; see par. 23, 70-71), and transmit the DESS requests via a network interface using congestion mitigation techniques (the step of generating a variable number of tasks for each file system request is itself a congestion mitigation technique because it results in transmission of the optimal number of tasks for a given request queue backlog level and task delay level).

However, Liang does not teach that the DESS processor is operable to: generate a single DESS request corresponding to each of the plurality of file system requests, if a total size of all file system requests of the plurality of file system requests combined fails to exceed a threshold; and generate the plurality of DESS requests corresponding to a first file system request of the plurality of file system requests, if the total size of all file system requests of the plurality of file system requests combined exceeds the threshold.
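Before turning to the secondary references, the backlog- and delay-driven task generation that Liang is cited for above can be sketched as follows. This is a minimal illustration only: the load score, the cutoffs, and all names are invented for the sketch and are not Liang's actual formula.

```python
def task_count(backlog_len, avg_task_delay_ms, max_parallel=8):
    """Choose how many parallel tasks to generate for one file system
    request, from request-queue backlog and task-delay statistics.

    A heavier backlog and longer task delays suggest congestion, so the
    client generates fewer tasks; a lightly loaded system gets full
    parallelism.  The scoring rule below is purely illustrative.
    """
    load = backlog_len + avg_task_delay_ms / 10.0
    if load < 5:
        return max_parallel          # lightly loaded: split aggressively
    if load < 20:
        return max(1, max_parallel // 2)
    return 1                         # congested: one task per request
```

For example, an empty queue with negligible delays yields the maximum split, while a deep backlog with long delays collapses to a single task per request.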
Nonetheless, Ellis teaches a storage system whereby the system generates a single transfer instruction for a newly received DMA request if the size of the newly received DMA request plus the size of the previously received DMA requests is less than the capacity of a transfer buffer, and whereby the system generates multiple transfer instructions for the newly received DMA request if that combined size exceeds the capacity of the transfer buffer. See par. 7.

Alternatively, Hasegawa teaches a storage system whereby the system generates a single data transfer operation for a newly received write operation if the size of the newly received write operation plus the size of previously received write operations is less than a threshold, and whereby the system generates multiple data transfer operations for the newly received write operation if that combined size exceeds the threshold. See par. 76-84; fig. 14.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Liang so that the system generates a single task for a newly received request if the size of the newly received request plus the size of other requests fails to exceed a threshold, and generates multiple tasks for the newly received request if that combined size exceeds the threshold, because doing so enables the system to optimize the number of tasks generated for each request based on the state of the system.
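The threshold behavior recited in the claims, and the analogous buffer logic attributed to Ellis and Hasegawa, reduces to a simple size test, sketched below. The function name, parameters, and the equal-sized splitting strategy are hypothetical illustrations, not taken from the claims or references.

```python
def split_request(pending_bytes, request_bytes, threshold, max_piece):
    """Return the (offset, length) byte ranges of the DESS requests
    generated for one new file system request of request_bytes bytes.
    """
    if pending_bytes + request_bytes <= threshold:
        # Combined size within the threshold: a single DESS request.
        return [(0, request_bytes)]
    # Combined size exceeds the threshold: split into multiple
    # DESS requests of at most max_piece bytes each.
    return [(off, min(max_piece, request_bytes - off))
            for off in range(0, request_bytes, max_piece)]
```

So a 100-byte request against an empty buffer and a 200-byte threshold yields one request, while the same request arriving behind 150 pending bytes is split into 64-byte pieces.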
Regarding claims 22 and 32, Liang and Ellis/Hasegawa teach the system of claim 21, wherein: the plurality of file system requests are for access to a storage resource on a second device of the DESS, and the plurality of DESS requests are transmitted to the second device via a network link (Liang teaches that the key-value store client receives read/write requests from applications, generates tasks corresponding to the read/write requests, and sends the tasks to storage devices of a distributed key-value store over a plurality of network links; see par. 68-73).

Regarding claims 23 and 33, Liang and Ellis/Hasegawa teach the system of claim 21, wherein: a state of the DESS is predicted according to resources required for servicing the first file system request and characteristics of a second file system request, and the quantity of the plurality of DESS requests corresponding to the first file system request is determined according to the predicted state of the DESS (Liang teaches that for each request, the system determines a number of tasks based on a predicted state of the system, which in turn is based on characteristics of the request, i.e., the type and size of the request (see par. 32), and backlog information, which includes characteristics of other requests, such as the size and type of those requests; see par. 80, 115).

Regarding claims 24 and 34, Liang and Ellis/Hasegawa teach the system of claim 23, wherein the resources required for servicing the first file system request are determined according to an amount of information to be read during servicing of the first file system request (Liang teaches that for each request, the system determines a number of tasks based on a predicted state of the system, which in turn is based on characteristics of the request, including whether the request is a read or write request and the size of the read or write request; see par. 32).
Regarding claims 25 and 35, Liang and Ellis/Hasegawa teach the system of claim 23, wherein the resources required for servicing the first file system request are determined according to an amount of information to be written during servicing of the first file system request (Liang teaches that for each request, the system determines the number of tasks based on a predicted state of the system, which in turn is based on characteristics of the request, including whether the request is a read or write request and the size of the read or write request; see par. 32).

Regarding claims 26 and 36, Liang and Ellis/Hasegawa teach the system of claim 21, wherein the DESS processor is operable to: determine a level of congestion of the DESS; and determine the quantity of the plurality of DESS requests corresponding to the first file system request according to the level of congestion of the DESS (Liang teaches that the key-value store client determines backlog information (i.e., utilization information) and uses the backlog information to determine the number of parallel read/write tasks to generate; see par. 70).

Regarding claims 27 and 37, Liang and Ellis/Hasegawa teach the system of claim 26, wherein the determination of the level of congestion of the DESS comprises a determination of a load on one or more resources of the DESS (Liang teaches that the backlog information represents a level of load or utilization of the resources of the key-value store; see par. 23).

Regarding claims 28 and 38, Liang and Ellis/Hasegawa teach the system of claim 27, wherein the one or more resources comprises one or more of: processor resources, memory resources, storage resources, and networking resources (Liang teaches that the utilized resources of the key-value store must include at least one of the resources enumerated in claim 27).
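The congestion-level logic of claims 26-28, as mapped above to Liang's backlog information, might look like the following sketch. The resource names, the worst-resource rule, and the level cutoffs are all hypothetical choices made for illustration.

```python
def dess_request_quantity(resource_loads, max_split=8):
    """Map the load on DESS resources (e.g., processor, memory, storage,
    networking; each normalized to 0.0-1.0) to the number of DESS
    requests generated for the first file system request.

    More congestion leads to more, smaller requests, so no single
    resource is saturated by one large transfer.
    """
    level = max(resource_loads.values())  # worst-loaded resource dominates
    if level < 0.5:
        return 1                  # lightly loaded: a single DESS request
    if level < 0.8:
        return max_split // 2     # moderate congestion
    return max_split              # heavy congestion: finest split
```

The worst-resource rule is one plausible aggregation; a weighted average over the enumerated resource types would serve the same claim language.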
Regarding claims 29 and 39, Liang and Ellis/Hasegawa teach the system of claim 21, wherein the quantity of the plurality of DESS requests corresponding to the first file system request is determined according to whether a file system request, in the plurality of file system requests, is a data request or a metadata request (Liang teaches that metadata requests do not require writing/reading a data object to/from memory and are therefore relatively small in size, if not the smallest type of request. The system may take into account the number of pending read or write requests of different sizes in the queue, and will therefore take into account the number of small metadata requests. See par. 80).

Claims 30 and 40 are rejected under 35 U.S.C. 103 because they are unpatentable over Liang and Ellis/Hasegawa, as applied to claims 21 and 31 above, in further view of Pillai (US 8,090,801).

Regarding claims 30 and 40, Liang and Ellis/Hasegawa do not teach the system of claim 21, wherein the DESS processor is operable to: determine a DESS usage metric, and determine the threshold according to the DESS usage metric. However, Pillai teaches a system for performing remote access commands between nodes, whereby the system adjusts available resources based on a usage metric, and whereby the usage metric comprises an average request size over a period of time. See col. 15, ln. 32-41. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Liang and Ellis/Hasegawa so that the threshold is dynamically determined based on an average task size over a determined period of time, because doing so allows the threshold to change over time to account for changing system conditions.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew Georgandellis, whose telephone number is 571-270-3991.
The examiner can normally be reached Monday through Friday, 7:30 AM to 5:00 PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tonia Dollinger, can be reached at 571-272-4170. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW C GEORGANDELLIS/
Primary Examiner, Art Unit 2459

[1] Examiner collectively refers to Ellis and Hasegawa as “Ellis/Hasegawa.”

Prosecution Timeline

Oct 16, 2023: Application Filed
Oct 16, 2023: Response after Non-Final Action
Jan 27, 2025: Non-Final Rejection — §103
Jun 27, 2025: Response Filed
Jul 16, 2025: Final Rejection — §103
Sep 17, 2025: Response after Non-Final Action
Oct 10, 2025: Request for Continued Examination
Oct 22, 2025: Response after Non-Final Action
Feb 27, 2026: Non-Final Rejection — §103
Mar 05, 2026: Response Filed
Apr 10, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574425
SYSTEMS AND METHODS FOR APPLICATION OF CONTEXT-BASED POLICIES TO VIDEO COMMUNICATION CONTENT
2y 5m to grant; granted Mar 10, 2026
Patent 12549510
METHODS AND SYSTEMS FOR ACCESSING CONTENT
2y 5m to grant; granted Feb 10, 2026
Patent 12526335
NONSTOP VIRTUAL REMOTE DIRECT MEMORY ACCESS
2y 5m to grant; granted Jan 13, 2026
Patent 12493537
SYSTEM AND METHOD FOR BOOTING SERVERS IN A DISTRIBUTED STORAGE TO IMPROVE FAULT TOLERANCE
2y 5m to grant; granted Dec 09, 2025
Patent 12476870
DATA COLLECTION METHOD AND DEVICE
2y 5m to grant; granted Nov 18, 2025
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 56% (96% with interview, +40.4%)
Median Time to Grant: 4y 0m
PTA Risk: High

Based on 490 resolved cases by this examiner. Grant probability derived from career allow rate.
