Prosecution Insights
Last updated: April 19, 2026
Application No. 17/886,397

Resource Monitoring in a Distributed Storage System

Final Rejection — §102, §103, §112
Filed: Aug 11, 2022
Examiner: MAI, KEVIN S
Art Unit: 2499
Tech Center: 2400 — Computer Networks
Assignee: Weka.IO Ltd.
OA Round: 5 (Final)
Grant Probability: 29% (At Risk)
Expected OA Rounds: 6-7
Time to Grant: 5y 3m
With Interview: 55%

Examiner Intelligence

Career Allow Rate: 29% (125 granted / 428 resolved; -28.8% vs TC avg)
Interview Lift: +25.5% on resolved cases with interview
Avg Prosecution: 5y 3m (typical timeline)
Total Applications: 467 across all art units (39 currently pending)

Statute-Specific Performance

§101: 16.5% (-23.5% vs TC avg)
§103: 52.5% (+12.5% vs TC avg)
§102: 7.4% (-32.6% vs TC avg)
§112: 21.8% (-18.2% vs TC avg)
Tech Center average is an estimate • Based on career data from 428 resolved cases

Office Action

§102 §103 §112
DETAILED ACTION

This Office Action has been issued in response to Applicant's arguments filed December 3, 2025. Claims 23-42 have been examined and are pending. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed December 3, 2025 have been fully considered but they are not persuasive.

Applicant's arguments with respect to the rejection under 35 U.S.C. § 112 have been considered but are not persuasive. Applicant's arguments do not identify specific sections in the specification that provide support for the claimed subject matter. In the absence of these citations and an explanation of the supporting disclosure, the rejection is maintained.

Applicant argues that Hanko does not disclose the claimed overperformance condition distinct from overloaded. Paragraph [0107] of Hanko discloses that the server agent calculates whether the path is overloaded and needs to shed load, and what is the path's available (unused) bandwidth. The server agent determines that the path is overloaded if either the server's interface or the adapter's interface is overloaded. This discloses detecting being overloaded. Paragraph [0109] of Hanko discloses that if there is another path available (between the server and the adapter coupled along the overloaded path) which is not overloaded and which has sufficient available bandwidth for another storage device, the server agent selects such other path for subsequent use. This discloses detecting another path that is not overloaded (overperformance).

Applicant argues that Haga does not distinguish overperformance from overload. Figure 4 of Haga discloses that if the system is in overperformance, the policy is to reduce resources by releasing resources assigned. Haga further discloses a performance degradation condition (overloaded) where the policy is to assign resources.
Claim Rejections - 35 U.S.C. § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 23-42 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Independent claims 23 and 33 recite “the overperformance condition indicates that the performance metric exceeds an optimal operating range without resource overload; and performing a load adjustment operation, via the first computing device, according to the determination of the overperformance condition, wherein the load adjustment operation reallocates one or both of data and task assignments, to mitigate excess performance while avoiding resource overload.” The Examiner was unable to find support for this limitation in the specification. The specification recites an overperformance condition but does not appear to define it as exceeding an optimal operating range without resource overload. The dependent claims are similarly rejected.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 23-29, 32-39, and 42 are rejected under 35 U.S.C. 103 as being unpatentable over US Pub. No. 2017/0041182 to Hanko et al. (hereinafter “Hanko”) in view of US Pub. No. 2005/0192969 to Haga et al. (hereinafter “Haga”).

As to Claim 23, Hanko discloses a method for managing a distributed electronic storage system (DESS), the method comprising: generating, via a first network adapter, an indication of a load on a first set of one or more resources associated with the first network adapter (Paragraph [0115] of Hanko discloses server agent 28 performs statistical characterization of traffic which agent 28 itself generates for its own server interfaces); receiving, via a network link, an indication of a load on a second set of one or more resources (Paragraph [0115] of Hanko discloses requesting (from each adapter agent) reports indicative of current overload status.
Paragraph [0038] of Hanko discloses first and second adapters); receiving an indication of a performance parameter over the network link, wherein the performance parameter is determined according to one or more of bandwidth utilization and latency (Paragraph [0045] of Hanko discloses the adapter agent is configured to report at least one said consumed bandwidth indication and/or at least one said available bandwidth indication); determining a condition of the DESS based on the performance over the network link, the indication of the load on the first set of one or more resources and the indication of the load on the second set of one or more resources, wherein a first computing device is operable to determine the condition as an overperformance condition distinct from an overloaded condition (Paragraph [0107] of Hanko discloses the server agent calculates whether the path is overloaded and needs to shed load, and what is the path's available (unused) bandwidth. The server agent determines that the path is overloaded if either the server's interface or the adapter's interface is overloaded. Paragraph [0109] of Hanko discloses if there is another path available (between the server and the adapter coupled along the overloaded path) which is not overloaded and which has sufficient available bandwidth for another storage device, the server agent selects such other path for subsequent use), the overperformance condition indicates that the performance metric exceeds an optimal operating range without resource overload (Paragraph [0109] of Hanko discloses if there is another path available (between the server and the adapter coupled along the overloaded path) which is not overloaded and which has sufficient available bandwidth for another storage device); and performing a load adjustment operation, via the first computing device, according to the determination of the overperformance condition, wherein the load adjustment operation reallocates one or both of data and task assignments, to mitigate excess performance while avoiding resource overload (Paragraph [0109] of Hanko discloses if there is another path available (between the server and the adapter coupled along the overloaded path) which is not overloaded and which has sufficient available bandwidth for another storage device, the server agent selects such other path for subsequent use).

Haga further discloses load adjusting due to an overperformance condition. Figure 4 of Haga discloses that if the system is in overperformance, the policy is to reduce resources by releasing resources assigned. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the storage system as disclosed by Hanko with load adjusting due to an overperformance condition as disclosed by Haga. One of ordinary skill in the art would have been motivated to combine in order to apply a known technique to a known device ready for improvement to yield predictable results. Hanko and Haga are directed toward storage systems, and as such it would be obvious to use the techniques of one in the other. Implementing the teachings of Haga into Hanko would improve Hanko's ability to load balance for more conditions.

As to Claim 24, Hanko-Haga discloses the method of claim 23, wherein the first set of one or more resources resides on the first computing device (Paragraph [0115] of Hanko discloses server agent 28 performs statistical characterization of traffic which agent 28 itself generates for its own server interfaces. The own server interfaces are understood to reside on the device).

As to Claim 25, Hanko-Haga discloses the method of claim 23, wherein the first set of one or more resources comprises the first network adapter (Paragraph [0115] of Hanko discloses server agent 28 performs statistical characterization of traffic which agent 28 itself generates for its own server interfaces. The own server interfaces are understood to be the server's network adapters).

As to Claim 26, Hanko-Haga discloses the method of claim 23, wherein the first network adapter is operable to store DESS traffic in a virtual container (Paragraph [0011] of Haga discloses the resource operations management system comprising a virtual resource management unit for setting virtual resources by assigning the real resources). Examiner recites the same rationale to combine used for claim 23.

As to Claim 27, Hanko-Haga discloses the method of claim 23, wherein the second set of one or more resources resides on a second computing device (Paragraph [0115] of Hanko discloses requesting (from each adapter agent) reports indicative of current overload status. Paragraph [0038] of Hanko discloses first and second adapters).

As to Claim 28, Hanko-Haga discloses the method of claim 27, wherein the second set of one or more resources comprises a second network adapter operable to store DESS traffic (Paragraph [0038] of Hanko discloses first and second adapters.
Paragraph [0043] of Hanko discloses the adapters monitor data traffic (e.g., receive traffic and transmit traffic) occurring on each said adapter interface).

As to Claim 29, Hanko-Haga discloses the method of claim 23, wherein the condition is adaptable over time (Paragraph [0037] of Hanko discloses ongoing monitoring by each said adapter agent of traffic on each adapter interface of the adapter agent, and after the wait, to begin to evaluate (e.g., reevaluate)).

As to Claim 32, Hanko-Haga discloses the method of claim 23, wherein the method comprises: in response to the condition of the DESS being the overperformance condition, performing automatic provisioning of additional network bandwidth for use by the DESS, wherein the automatic provisioning of additional resources comprises one or more of: automatic provisioning of an additional processing core for use by the DESS; automatic provisioning of additional memory for use by the DESS; and automatic provisioning of additional nonvolatile storage for use by the DESS (Paragraph [0073] of Haga discloses this allows the software management unit 210 to upgrade the real resources assigned to the virtual resources of the applications with higher priorities). Examiner recites the same rationale to combine used for claim 23.

As to Claim 33, Hanko discloses a system for managing a distributed electronic storage system (DESS), the system comprising: a first network adapter comprising hardware circuitry (Paragraph [0115] of Hanko discloses requesting (from each adapter agent) reports indicative of current overload status. Paragraph [0038] of Hanko discloses first and second adapters); a network link (Paragraph [0107] of Hanko discloses the server agent calculates whether the path is overloaded and needs to shed load, and what is the path's available (unused) bandwidth); and a first computing device operable to: generate, via a first network adapter, an indication of a load on a first set of one or more resources associated with the first network adapter (Paragraph [0115] of Hanko discloses server agent 28 performs statistical characterization of traffic which agent 28 itself generates for its own server interfaces); receive, via a network link, an indication of a load on a second set of one or more resources (Paragraph [0115] of Hanko discloses requesting (from each adapter agent) reports indicative of current overload status. Paragraph [0038] of Hanko discloses first and second adapters); receive an indication of a performance parameter over the network link, wherein the performance parameter is determined according to one or more of bandwidth utilization and latency (Paragraph [0045] of Hanko discloses the adapter agent is configured to report at least one said consumed bandwidth indication and/or at least one said available bandwidth indication); determine a condition of the DESS based on the performance over the network link, the indication of the load on the first set of one or more resources and the indication of the load on the second set of one or more resources, wherein the first computing device is operable to determine the condition as an overperformance condition distinct from an overloaded condition (Paragraph [0107] of Hanko discloses the server agent calculates whether the path is overloaded and needs to shed load, and what is the path's available (unused) bandwidth. The server agent determines that the path is overloaded if either the server's interface or the adapter's interface is overloaded. Paragraph [0109] of Hanko discloses if there is another path available (between the server and the adapter coupled along the overloaded path) which is not overloaded and which has sufficient available bandwidth for another storage device, the server agent selects such other path for subsequent use), the overperformance condition indicates that the performance metric exceeds an optimal operating range without resource overload; and performing a load adjustment operation, via the first computing device, according to the determination of the overperformance condition, wherein the load adjustment operation reallocates one or both of data and task assignments, to mitigate excess performance while avoiding resource overload.

As to Claim 34, Hanko-Haga discloses the system of claim 33, wherein the first set of one or more resources resides on the first computing device (Paragraph [0115] of Hanko discloses server agent 28 performs statistical characterization of traffic which agent 28 itself generates for its own server interfaces. The own server interfaces are understood to reside on the device).

As to Claim 35, Hanko-Haga discloses the system of claim 33, wherein the first set of one or more resources comprises the first network adapter (Paragraph [0115] of Hanko discloses server agent 28 performs statistical characterization of traffic which agent 28 itself generates for its own server interfaces. The own server interfaces are understood to be the server's network adapters).

As to Claim 36, Hanko-Haga discloses the system of claim 33, wherein the first network adapter is operable to store DESS traffic in a virtual container (Paragraph [0011] of Haga discloses the resource operations management system comprising a virtual resource management unit for setting virtual resources by assigning the real resources). Examiner recites the same rationale to combine used for claim 23.

As to Claim 37, Hanko-Haga discloses the system of claim 33, wherein the second set of one or more resources resides on a second computing device (Paragraph [0115] of Hanko discloses requesting (from each adapter agent) reports indicative of current overload status. Paragraph [0038] of Hanko discloses first and second adapters).

As to Claim 38, Hanko-Haga discloses the system of claim 37, wherein the second set of one or more resources comprises a second network adapter operable to store DESS traffic (Paragraph [0038] of Hanko discloses first and second adapters. Paragraph [0043] of Hanko discloses the adapters monitor data traffic (e.g., receive traffic and transmit traffic) occurring on each said adapter interface).

As to Claim 39, Hanko-Haga discloses the system of claim 33, wherein the first computing device is operable to adapt the condition over time (Paragraph [0037] of Hanko discloses ongoing monitoring by each said adapter agent of traffic on each adapter interface of the adapter agent, and after the wait, to begin to evaluate (e.g., reevaluate)).

As to Claim 42, Hanko-Haga discloses the system of claim 33, wherein the first computing device is operable to, in response to the condition of the DESS being the overperformance condition, perform automatic provisioning of additional network bandwidth for use by the DESS, wherein the automatic provisioning of additional resources comprises one or more of: automatic provisioning of an additional processing core for use by the DESS; automatic provisioning of additional memory for use by the DESS; and automatic provisioning of additional nonvolatile storage for use by the DESS (Paragraph [0073] of Haga discloses this allows the software management unit 210 to upgrade the real resources assigned to the virtual resources of the applications with higher priorities). Examiner recites the same rationale to combine used for claim 23.

Claims 30, 31, 40, and 41 are rejected under 35 U.S.C.
103 as being unpatentable over Hanko-Haga and further in view of US Pub. No. 2013/0227111 to Wright et al. (hereinafter “Wright”).

As to Claim 30, Hanko-Haga discloses the method of claim 23. Hanko-Haga does not explicitly disclose wherein the method comprises: reducing a network congestion by changing a priority of DESS traffic according to the condition of the DESS. However, Wright discloses this. Paragraph [0319] of Wright discloses the different QoS Management Policy Sets which are implemented for each respective client may have the effect of prioritizing some Clients over others. Paragraph [0230] of Wright discloses each node in the Cluster reports to each other node its calculated load values. In this way each node (and/or Service) may be informed about each other node's (and/or Service's) load values. Paragraph [0241] of Wright discloses upon receiving the LOAD(Service) value notification update, the other node(s) may automatically and dynamically update their respective local LOAD-Service Tables using the updated LOAD(Service) value information. Paragraph [0268] of Wright discloses at least a portion of the various types of functions, operations, actions, and/or other features provided by the QoS Client Policy Management Procedure may be implemented at one or more nodes and/or volumes of the storage system. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the traffic management system as disclosed by Hanko with prioritizing traffic as disclosed by Wright. One of ordinary skill in the art would have been motivated to combine in order to apply a known technique to a known device. Hanko and Wright are directed toward traffic management systems, and as such it would be obvious to use the techniques of one in the other. Paragraph [0004] of Wright discloses prioritization provides a better client experience.

As to Claim 31, Hanko-Haga discloses the method of claim 23. Hanko-Haga does not explicitly disclose wherein the method comprises: adjusting, according to the condition of the DESS, one or more of: a read batch timing setting, a read batch size setting, a write batch timing setting, and a write batch size setting. However, Wright discloses this. Paragraph [0225] of Wright discloses the LOAD(Write) values may be automatically and/or dynamically calculated (e.g., in real-time) based, at least partially, on measured amount(s) of write I/O latency and/or write cache queue depth(s) which are associated with the identified Service. Paragraph [0188] of Wright discloses temporarily throttling read and write IOPS for one or more selected services, nodes, volumes, clients, and/or connections. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the traffic management system as disclosed by Hanko with adjusting read and write IOPS as disclosed by Wright. One of ordinary skill in the art would have been motivated to combine in order to apply a known technique to a known device. Hanko and Wright are directed toward traffic management systems, and as such it would be obvious to use the techniques of one in the other. Paragraph [0004] of Wright discloses prioritization provides a better client experience, and paragraph [0188] of Wright discloses prioritizing by throttling.

As to Claim 40, Hanko-Haga discloses the system of claim 33. Hanko-Haga does not explicitly disclose wherein the first computing device is operable to reduce a network congestion by changing a priority of DESS traffic according to the condition of the DESS. However, Wright discloses this. Paragraph [0319] of Wright discloses the different QoS Management Policy Sets which are implemented for each respective client may have the effect of prioritizing some Clients over others. Paragraph [0230] of Wright discloses each node in the Cluster reports to each other node its calculated load values. In this way each node (and/or Service) may be informed about each other node's (and/or Service's) load values. Paragraph [0241] of Wright discloses upon receiving the LOAD(Service) value notification update, the other node(s) may automatically and dynamically update their respective local LOAD-Service Tables using the updated LOAD(Service) value information. Paragraph [0268] of Wright discloses at least a portion of the various types of functions, operations, actions, and/or other features provided by the QoS Client Policy Management Procedure may be implemented at one or more nodes and/or volumes of the storage system. Examiner recites the same rationale to combine used for claim 30.

As to Claim 41, Hanko-Haga discloses the system of claim 33. Hanko-Haga does not explicitly disclose wherein the first computing device is operable to adjust, according to the condition of the DESS, one or more of: a read batch timing setting, a read batch size setting, a write batch timing setting, and a write batch size setting. However, Wright discloses this. Paragraph [0225] of Wright discloses the LOAD(Write) values may be automatically and/or dynamically calculated (e.g., in real-time) based, at least partially, on measured amount(s) of write I/O latency and/or write cache queue depth(s) which are associated with the identified Service. Paragraph [0188] of Wright discloses temporarily throttling read and write IOPS for one or more selected services, nodes, volumes, clients, and/or connections. Examiner recites the same rationale to combine used for claim 31.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kevin S. Mai, whose telephone number is (571) 270-5001. The examiner can normally be reached Monday to Friday, 9 AM to 5 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Philip Chea, can be reached at 571-272-3951. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEVIN S MAI/
Primary Examiner, Art Unit 2499
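The rejection turns on whether an "overperformance" condition (a performance metric above an optimal operating range while no resource is overloaded) is genuinely distinct from an "overloaded" condition. As a purely illustrative sketch of that claimed two-way determination, with all names, thresholds, and structure hypothetical and not drawn from the application or the cited references (Hanko, Haga, Wright):

```python
# Hypothetical sketch of the claimed condition determination.
# Names and thresholds are invented for illustration only; they do not
# come from the application or from Hanko, Haga, or Wright.

from dataclasses import dataclass


@dataclass
class ResourceLoad:
    """Load report for one monitored resource (e.g., a network adapter)."""
    utilization: float  # fraction of capacity in use, 0.0-1.0


def classify_condition(metric: float,
                       optimal_range: tuple,
                       loads: list,
                       overload_threshold: float = 0.9) -> str:
    """Classify the system condition from a performance metric and loads.

    "overloaded":      some monitored resource exceeds its load threshold.
    "overperformance": the metric exceeds the optimal range while NO
                       resource is overloaded (the distinction the claims
                       draw against a plain overload condition).
    "nominal":         otherwise.
    """
    lo, hi = optimal_range
    if any(r.utilization > overload_threshold for r in loads):
        return "overloaded"
    if metric > hi:
        # Excess performance without resource overload: a load adjustment
        # policy could release or reallocate resources here.
        return "overperformance"
    return "nominal"
```

Under this sketch, the two conditions trigger opposite policies: an overloaded result would shed or reassign load, while an overperformance result would release excess resources, which is the kind of Figure 4 policy the Office Action attributes to Haga.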

Prosecution Timeline

Aug 11, 2022
Application Filed
Aug 22, 2023
Non-Final Rejection — §102, §103, §112
Nov 09, 2023
Response Filed
Feb 21, 2024
Non-Final Rejection — §102, §103, §112
May 22, 2024
Response Filed
Sep 13, 2024
Final Rejection — §102, §103, §112
Oct 02, 2024
Response after Non-Final Action
Nov 04, 2024
Response after Non-Final Action
Nov 04, 2024
Notice of Allowance
Nov 12, 2024
Request for Continued Examination
Nov 20, 2024
Response after Non-Final Action
Nov 26, 2024
Response Filed
Aug 29, 2025
Non-Final Rejection — §102, §103, §112
Dec 03, 2025
Response Filed
Mar 19, 2026
Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12506731
Conference Data Sharing Method and Conference Data Sharing System Capable of Communicating with Remote Conference Members
2y 5m to grant • Granted Dec 23, 2025
Patent 12413610
ASSESSING SECURITY OF SERVICE PROVIDER COMPUTING SYSTEMS
2y 5m to grant • Granted Sep 09, 2025
Patent 12406064
PRE-BOOT CONTEXT-BASED SECURITY MITIGATION
2y 5m to grant • Granted Sep 02, 2025
Patent 12363200
PROVIDING EVENT STREAMS AND ANALYTICS FOR ACTIVITY ON WEB SITES
2y 5m to grant • Granted Jul 15, 2025
Patent 12204570
SYSTEM AND METHOD FOR PROVIDING MESSAGE CONTENT BASED ROUTING
2y 5m to grant • Granted Jan 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 6-7
Grant Probability: 29%
With Interview: 55% (+25.5%)
Median Time to Grant: 5y 3m
PTA Risk: High
Based on 428 resolved cases by this examiner. Grant probability derived from career allow rate.
