Prosecution Insights
Last updated: April 19, 2026
Application No. 18/224,970

NETWORK SCHEDULER IN A DISTRIBUTED STORAGE SYSTEM

Non-Final OA (§103, §112)
Filed: Jul 21, 2023
Examiner: TALIOUA, ABDELBASST
Art Unit: 2445
Tech Center: 2400 (Computer Networks)
Assignee: VMware, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 58% (Moderate)
Predicted OA Rounds: 3-4
Time to Grant: 3y 5m
Grant Probability With Interview: 94%

Examiner Intelligence

Career Allow Rate: 58% of resolved cases (62 granted / 106 resolved; +0.5% vs TC avg)
Interview Lift: +35.2% (strong; based on resolved cases with interview)
Typical Timeline: 3y 5m avg prosecution; 42 currently pending
Career History: 148 total applications

Statute-Specific Performance

§101: 2.5% (-37.5% vs TC avg)
§103: 70.9% (+30.9% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 12.8% (-27.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 106 resolved cases.

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on October 31, 2025 has been entered.

This Office action is responsive to a response filed on October 31, 2025. In this Office action:
Claims 1-20 are pending.
Claims 1-20 are rejected.

Response to Amendment

The amendments filed on October 31, 2025 have been entered. Claims 1, 8, and 15 have been amended.

Response to Arguments

Applicant's arguments filed on October 31, 2025 have been fully considered but are not persuasive.

Applicant's Argument: Claim 1 has been amended to recite, in pertinent part, determining a first network congestion category at a first host, wherein the first congestion category is based on a first network congestion condition; based on at least the first network congestion category and a target network congestion category, determining a first packet delay time. Claims 8 and 15 have been amended to recite features generally similar to those of claim 1. Nonetheless, in the interest of compact prosecution, claims 1, 8, and 15 have been amended as indicated above, and the Applicant respectfully submits that the claims, at least as amended, are allowable over the cited combination(s). For example, claim 1 recites, in pertinent part, "based on at least the first network congestion category and a target network congestion category, determining a first packet delay time."
None of Knauft, Xu, Nadas, Mallick, Xiang, and Dhanabalan has been shown to disclose this feature. In rejecting claim 5, the Office Action alleges that Knauft discloses "selecting the first packet delay time to drive the network congestion condition toward a target congestion category" but does not identify any disclosure in Knauft that discloses a target congestion category, instead relying on the disclosure of the instant application as teaching the concept of a congestion category.

Examiner's response: The Examiner respectfully disagrees. Knauft discloses: determining a first network congestion category, wherein the first congestion category is based on a first network congestion condition (See Parag. [0030]; Depending on the amount of different storage I/O requests coming to the VSAN module 114 to be processed, the queues may fill up at different rates. For each queue, the backpressure congestion controller generates a backpressure signal when the storage I/O requests in that queue reach a certain threshold (congestion condition) ... Each backpressure signal may include the class of storage I/O requests, a backpressure value (congestion category), and identification of the host computer. The backpressure value indicates the fullness or the number of storage I/O requests currently stored in the queue corresponding to the indicated class of storage I/O requests. The backpressure value may be linearly increased as the number of storage I/O requests in the respective queue increases from the minimum threshold number up to the maximum threshold number. See Parag. [0031]; the backpressure congestion controller 426 transmits the backpressure signals to the respective clients or sources that had issued the corresponding storage I/O requests, which were placed in the different queues. See Parag. [0039]; More delay can be applied as the backpressure value in the received independent backpressure signal indicates a higher level of fullness for the corresponding queue.
See also Parag. [0033-0034], Parag. [0037-0039], and Fig. 2. Examiner's interpretation: The Examiner reasonably interprets "congestion category" to be equivalent to the backpressure value, which indicates a level of fullness (e.g., high, low, etc.). The Examiner's interpretation is consistent with the Applicant's definition in the Specification, Parag. [0040], stating: "Congestion categories 500 includes a low congestion category 501, a medium congestion category 502, and a high congestion category 503"); [and]

based on at least the first network congestion category and a target network congestion category, determining a first packet delay time (See Parag. [0031]; host computer that receives a backpressure signal will implement a delay based on the received congestion signal, which may be a time-averaged latency-based delay; each backpressure signal is based on the current fullness of the corresponding queue for a particular class of storage I/O requests. See Parag. [0032]; an independent congestion signal for each of the queues is generated from the VSAN module when needed, and transmitted to the relevant sources of storage I/O requests of the class corresponding to that queue so that an appropriate delay can be applied to reduce congestion for that class of storage I/O requests. See also Parag. [0039]; independent backpressure signal can then be used by each source, which may be a host computer, to delay issuing the class of I/O requests identified in the independent backpressure signal. The amount of delay applied may depend on the backpressure value (congestion category) in the received independent backpressure signal. More delay can be applied as the backpressure value in the received independent backpressure signal indicates a higher level of fullness for the corresponding queue ... Examiner's note: The applicant discloses in the specification, Parag. [0043], that "if network congestion condition rises above target congestion category, controller will increase packet delay time in an attempt to reduce congestion").

Knauft doesn't explicitly disclose that the first network congestion category is determined at the first host. However, Xu discloses determining a first network congestion category at a first host (See Parag. [0026]; The congestion sensor operates to collect storage congestion data for a particular class of storage I/O requests, e.g., resync I/O storage requests. In an embodiment, the congestion sensor collects storage device latency data, congestion level data (congestion category), and fairness index data. Examiner's interpretation: The congestion sensor is included in each host computer in the cluster (See Parag. [0023], Fig. 2, and Fig. 3). Thus, the network congestion category is determined in each host computer (first host)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the determination of the network congestion category, taught by Knauft, to be determined at a first host, as taught by Xu. This would be convenient to control the congestion of at least one class of storage I/O requests by adaptively adjusting the bandwidth limit on another class of storage I/O requests based on current congestion data related to processing of storage I/O requests (Xu, Parag. [0025]). The Examiner's interpretation of the claim regarding "congestion category" is consistent with the Applicant's definition in the specification, as explained above.

Regarding claim 5, the Examiner has interpreted "a target congestion category" as a desired congestion level (i.e., an acceptable level to reduce congestion) to be reached by an appropriate delay that can be applied to reduce congestion. Therefore, claim 5 is rejected under Knauft, as also presented in this Office Action.
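The mechanism the Examiner maps between Knauft and the claims can be summarized as a small feedback rule: a backpressure value (read as the claimed congestion category) rises linearly with queue fullness between two thresholds, and more delay is applied the further the category sits above a target. Below is a minimal Python sketch of that reading; the names, thresholds, and delay step are illustrative assumptions, not values from Knauft or the application.

```python
# Illustrative sketch only: thresholds and the delay step are assumed,
# not taken from Knauft (US 2019/0303308) or the instant application.

MIN_THRESHOLD = 10   # queue depth at which backpressure starts (assumed)
MAX_THRESHOLD = 100  # queue depth of maximum backpressure (assumed)

def congestion_category(queue_depth: int) -> int:
    """Backpressure value in [0, 100], rising linearly with queue fullness
    between the minimum and maximum thresholds (per Knauft [0030] as quoted)."""
    if queue_depth <= MIN_THRESHOLD:
        return 0
    if queue_depth >= MAX_THRESHOLD:
        return 100
    span = MAX_THRESHOLD - MIN_THRESHOLD
    return round(100 * (queue_depth - MIN_THRESHOLD) / span)

def packet_delay_ms(category: int, target_category: int,
                    step_ms: float = 0.5) -> float:
    """More delay the further the category exceeds the target; no delay at or
    below it (mirroring the specification's [0043] passage as quoted)."""
    return max(0, category - target_category) * step_ms
```

Under this sketch, a queue at depth 55 yields category 50, and with a target category of 20 the source would hold packets of that class for 15 ms before transmitting.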
Claim Objections

Claim 8 is objected to because of the following informality: "... a processor; and; a non-transitory computer storage medium ..." should read (Examiner's suggestion) "... a processor; and a non-transitory computer storage medium ...". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 10 recites the term "the sensor" in "wherein the sensor is further configured to determine ..." The term "the sensor" has never been introduced in the instant claim or in claim 8, on which the instant claim depends. Therefore, there is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4-5, 8-9, 11-12, 15-16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Knauft et al. (Pub. No. US 2019/0303308), hereinafter Knauft; in view of Xu et al. (Pub. No. US 2019/0317665), hereinafter Xu; and further in view of Nadas et al. (Pub. No. US 2017/0085487), hereinafter Nadas.

Claim 1. Knauft discloses [a] computerized method comprising: determining a first network congestion category, wherein the first congestion category is based on a first network congestion condition (See Parag. [0030]; Depending on the amount of different storage I/O requests coming to the VSAN module 114 to be processed, the queues may fill up at different rates. For each queue, the backpressure congestion controller generates a backpressure signal when the storage I/O requests in that queue reach a certain threshold (congestion condition) ...
Each backpressure signal may include the class of storage I/O requests, a backpressure value (congestion category), and identification of the host computer. The backpressure value indicates the fullness or the number of storage I/O requests currently stored in the queue corresponding to the indicated class of storage I/O requests. The backpressure value may be linearly increased as the number of storage I/O requests in the respective queue increases from the minimum threshold number up to the maximum threshold number. See Parag. [0031]; the backpressure congestion controller 426 transmits the backpressure signals to the respective clients or sources that had issued the corresponding storage I/O requests, which were placed in the different queues. See Parag. [0039]; More delay can be applied as the backpressure value in the received independent backpressure signal indicates a higher level of fullness for the corresponding queue. See also Parag. [0033-0034], Parag. [0037-0039], and Fig. 2. Examiner's interpretation: The Examiner reasonably interprets "congestion category" to be equivalent to the backpressure value, which indicates a level of fullness (e.g., high, low, etc.). The Examiner's interpretation is consistent with the Applicant's definition in the Specification, Parag. [0040], stating: "Congestion categories 500 includes a low congestion category 501, a medium congestion category 502, and a high congestion category 503");

based on at least the first network congestion category and a target network congestion category, determining a first packet delay time (See Parag. [0031]; host computer that receives a backpressure signal will implement a delay based on the received congestion signal, which may be a time-averaged latency-based delay; each backpressure signal is based on the current fullness of the corresponding queue for a particular class of storage I/O requests. See Parag. [0032]; an independent congestion signal for each of the queues is generated from the VSAN module when needed, and transmitted to the relevant sources of storage I/O requests of the class corresponding to that queue so that an appropriate delay can be applied to reduce congestion for that class of storage I/O requests. See also Parag. [0039]; independent backpressure signal can then be used by each source, which may be a host computer, to delay issuing the class of I/O requests identified in the independent backpressure signal. The amount of delay applied may depend on the backpressure value (congestion category) in the received independent backpressure signal. More delay can be applied as the backpressure value in the received independent backpressure signal indicates a higher level of fullness for the corresponding queue ... Examiner's note: The applicant discloses in the specification, Parag. [0043], that "if network congestion condition rises above target congestion category, controller will increase packet delay time in an attempt to reduce congestion");

based on at least a first data packet belonging to a first traffic class of a plurality of traffic classes, delaying transmitting the first data packet, from the first host across a network (See Fig. 1; "virtual storage area network" (VSAN) 102) to a second host, by the first packet delay time (See Parag. [0031]; Each host (first host) computer that receives a backpressure signal will implement a delay based on the received congestion signal, which may be a time-averaged latency-based delay. Since each backpressure signal is based on the current fullness of the corresponding queue for a particular class of storage I/O requests, the backpressure signals provide independent backpressure congestion controls for the different classes of storage I/O requests.
If one class (first traffic class of a plurality of traffic classes) of storage I/O requests is overwhelming the VSAN module 114, that particular class of storage I/O requests will get backpressure ... See also Parag. [0039]; independent backpressure signal can then be used by each source, which may be a host computer (first host), to delay issuing the class of I/O requests identified in the independent backpressure signal (delay transmitting the first data packet, from the first host across a network to a second host, by the first packet delay time). The amount of delay applied may depend on the backpressure value in the received independent backpressure signal. Examiner's interpretation: the host computer (first host) delays transmitting the class of I/O requests to the second host); and

based on at least a second data packet belonging to a second traffic class of the plurality of traffic classes, transmitting the second data packet from the first host to the second host without a delay (See Parag. [0031]; the backpressure signals provide independent backpressure congestion controls for the different classes of storage I/O requests. Thus, if one class of storage I/O requests is overwhelming the VSAN module 114, that particular class of storage I/O requests will get backpressure. However, other less backlogged classes of storage I/O requests (second data packet belonging to a second traffic class of the plurality of traffic classes) will still be able to fill up their corresponding queues and get access to the dispatch scheduler 422 without being bottlenecked.
Examiner's interpretation: The host (first host) that transmits I/O requests (second data packet) that don't result in congestion in the queue associated with the class of I/O requests will still be transmitting I/O requests to the second host without a need for a delay);

based on at least a third data packet belonging to the first traffic class, delaying transmitting the third data packet, from the first host across the network, by the first packet delay time (See also Parag. [0039]; independent backpressure signal can then be used by each source, which may be a host computer (first host), to delay issuing the class of I/O requests (packets including a third data packet belonging to the first traffic class) identified in the independent backpressure signal. The amount of delay applied may depend on the backpressure value in the received independent backpressure signal. Examiner's interpretation: delaying the transmission of a plurality of packets (including a third packet) belonging to the first class by the first packet delay time).

Knauft doesn't explicitly disclose that the first network congestion category is determined at the first host; and delaying transmitting the third data packet from the first host across the network to a third host. However, Xu discloses determining a first network congestion category at a first host (See Parag. [0026]; The congestion sensor operates to collect storage congestion data for a particular class of storage I/O requests, e.g., resync I/O storage requests. In an embodiment, the congestion sensor collects storage device latency data, congestion level data (congestion category), and fairness index data. Examiner's interpretation: The congestion sensor is included in each host computer in the cluster (See Parag. [0023], Fig. 2, and Fig. 3). Thus, the network congestion category is determined in each host computer (first host)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the determination of the network congestion category, taught by Knauft, to be determined at a first host, as taught by Xu. This would be convenient to control the congestion of at least one class of storage I/O requests by adaptively adjusting the bandwidth limit on another class of storage I/O requests based on current congestion data related to processing of storage I/O requests (Xu, Parag. [0025]).

Knauft in view of Xu doesn't explicitly disclose delaying transmitting the third data packet from the first host across the network to a third host. However, Nadas discloses based on at least a third data packet belonging to the first traffic class, delaying transmitting the third data packet, from the first host across the network to a third host, by the first packet delay time (See Parag. [0055-0059], Fig. 5, and Fig. 6; The forwarding node 120 may compare the estimated delay to the first value extracted from the data packet. Depending on the comparison, the forwarding node may then detect whether or not a delay class to which the data packet is assigned (third data packet belonging to the first traffic class) is subject to congestion ... See Parag. [0038]; In accordance with this delay class assignment, the input stage 310 provides the data packets (third data packet) to different queues. In particular, for each delay class, a corresponding queue is provided, in which the data packets are temporarily stored before being forwarded from the forwarding node 120. See also Parag. [0026]; the sending node 100 could send data packets via the forwarding node to additional receivers (including third host). See also Parag. [0024], [0027], [0039].
Examiner's interpretation: Nadas discloses delaying transmitting packets (including the third packet) from the sending node (first host) to a plurality of receivers (including the third host) based on the packets (including the third packet) belonging to a delay class).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify delaying transmitting the third data packet, by the first packet delay time, based on at least the third data packet belonging to the first traffic class, taught by Knauft in view of Xu, to be transmitted from the first host across the network to a third host, as taught by Nadas. This would be convenient to allow for efficiently controlling the handling of different kinds of data traffic (Nadas, Parag. [0005]).

Claim 2. Knauft in view of Xu and Nadas discloses [t]he computerized method of claim 1, further comprising: Knauft further discloses wherein: the first, second, and third packets are transmitted by one of a plurality of virtual machines running on the first host (See Parag. [0031]; The sources of storage I/O requests include the host computers 104 of the cluster 106, the VMs 124 running on the host computers 104, and software processes or routines (not shown) operating in the host computers 104. Thus, for the queue 424A holding VM I/O requests, the backpressure signal will be sent to the VMs that are issuing the VM I/O requests. See also Parag. [0039]; independent backpressure signal can then be used by each source, which may be a host computer (first host), to delay issuing the class of I/O requests identified in the independent backpressure signal (delay transmitting from the first host). The amount of delay applied may depend on the backpressure value in the received independent backpressure signal); the second node is a storage node of a virtual storage area network (virtual SAN) (As shown in Fig. 1, the distributed storage system 100 provides a software-based "virtual storage area network" (VSAN) 102 that leverages local storage resources of host computers 104, which are part of a logically defined cluster 106 of host computers (including the second node) that is managed by a cluster management server 108. The VSAN 102 allows local storage resources of the host computers 104 to be aggregated to form a shared pool of storage resources, which allows the host computers 104, including any software entities running on the host computers, to use the shared storage resources); and the method further comprises: shaping, with a scheduler running on the first host, network traffic transmitted by the plurality of virtual machines running on the first host (See also Parag. [0039]; independent backpressure signal can then be used by each source, which may be a host computer (first host), to delay issuing the class of I/O requests identified in the independent backpressure signal (delay transmitting from the first host). The amount of delay applied may depend on the backpressure value in the received independent backpressure signal).

Claim 4. Knauft in view of Xu and Nadas discloses [t]he computerized method of claim 1, Knauft further discloses wherein the first traffic class comprises resync input/output operations (I/Os) and the second traffic class comprises non-resync traffic I/Os (See Parag. [0026]; the queues 424A, 424B, 424C and 424D are used for VM I/O requests, resync I/O requests, namespace I/O requests and internal metadata I/O requests, respectively. See Parag. [0031] and Fig. 4; For the queue 424B holding resync I/O requests (the first traffic class), the backpressure signal will be sent to the owner of the resync process, which may be one of the host computers in the cluster ... other less backlogged classes (interpreted as other than the resync I/O requests class) of storage I/O requests (second traffic class) will still be able to fill up their corresponding queues and get access to the dispatch scheduler 422 without being bottlenecked. See also Parag. [0018], [0022]. Examiner's note: The applicant discloses in the specification, Parag. [0039], that "Non-resync I/Os has three sub-classes: VM I/Os (guest I/Os), namespace I/Os, and metadata I/Os").

Claim 5. Knauft in view of Xu and Nadas discloses [t]he computerized method of claim 1, Knauft further discloses wherein determining the first packet delay time comprises: selecting the first packet delay time to drive the network congestion condition toward a target congestion category (See Parag. [0032]; an independent congestion signal for each of the queues is generated from the VSAN module when needed, and transmitted to the relevant sources of storage I/O requests of the class corresponding to that queue so that an appropriate delay can be applied to reduce congestion for that class of storage I/O requests. Examiner's note: The applicant discloses in the specification, Parag. [0043], that "if network congestion condition rises above target congestion category, controller will increase packet delay time in an attempt to reduce congestion").

Claim 8. Knauft discloses [a] computer system comprising: a processor; and a non-transitory computer storage medium having stored thereon program code executable by the processor (See Parag. [0014]), the program code embodying a method comprising: determining a first network congestion category, wherein the first congestion category characterizes a first network congestion condition (See Parag. [0030]; Depending on the amount of different storage I/O requests coming to the VSAN module 114 to be processed, the queues may fill up at different rates. For each queue, the backpressure congestion controller generates a backpressure signal when the storage I/O requests in that queue reach a certain threshold (congestion condition) ... Each backpressure signal may include the class of storage I/O requests, a backpressure value (congestion category), and identification of the host computer. The backpressure value indicates the fullness or the number of storage I/O requests currently stored in the queue corresponding to the indicated class of storage I/O requests. The backpressure value may be linearly increased as the number of storage I/O requests in the respective queue increases from the minimum threshold number up to the maximum threshold number. See Parag. [0031]; the backpressure congestion controller 426 transmits the backpressure signals to the respective clients or sources that had issued the corresponding storage I/O requests, which were placed in the different queues. See Parag. [0039]; More delay can be applied as the backpressure value in the received independent backpressure signal indicates a higher level of fullness for the corresponding queue. See also Parag. [0033-0034], Parag. [0037-0039], and Fig. 2. Examiner's interpretation: The Examiner reasonably interprets "congestion category" to be equivalent to the backpressure value, which indicates a level of fullness (e.g., high, low, etc.). The Examiner's interpretation is consistent with the Applicant's definition in the Specification, Parag. [0040], stating: "Congestion categories 500 includes a low congestion category 501, a medium congestion category 502, and a high congestion category 503");

based on at least the first network congestion category and a target network congestion category, determining a first packet delay time (See Parag. [0031]; host computer that receives a backpressure signal will implement a delay based on the received congestion signal, which may be a time-averaged latency-based delay; each backpressure signal is based on the current fullness of the corresponding queue for a particular class of storage I/O requests. See Parag. [0032]; an independent congestion signal for each of the queues is generated from the VSAN module when needed, and transmitted to the relevant sources of storage I/O requests of the class corresponding to that queue so that an appropriate delay can be applied to reduce congestion for that class of storage I/O requests. See also Parag. [0039]; independent backpressure signal can then be used by each source, which may be a host computer, to delay issuing the class of I/O requests identified in the independent backpressure signal. The amount of delay applied may depend on the backpressure value (congestion category) in the received independent backpressure signal. More delay can be applied as the backpressure value in the received independent backpressure signal indicates a higher level of fullness for the corresponding queue ... Examiner's note: The applicant discloses in the specification, Parag. [0043], that "if network congestion condition rises above target congestion category, controller will increase packet delay time in an attempt to reduce congestion");

based on at least a first data packet belonging to a first traffic class of a plurality of traffic classes, delaying transmitting the first data packet, from the first host across a network (See Fig. 1; "virtual storage area network" (VSAN) 102) to a second host, by the first packet delay time (See Parag. [0031]; Each host (first host) computer that receives a backpressure signal will implement a delay based on the received congestion signal, which may be a time-averaged latency-based delay.
Since each backpressure signal is based on the current fullness of the corresponding queue for a particular class of storage I/O requests, the backpressure signals provide independent backpressure congestion controls for the different classes of storage I/O requests. If one class (first traffic class of a plurality of traffic classes) of storage I/O requests is overwhelming the VSAN module 114, that particular class of storage I/O requests will get backpressure ... See also Parag. [0039]; independent backpressure signal can then be used by each source, which may be a host computer (first host), to delay issuing the class of I/O requests identified in the independent backpressure signal (delay transmitting the first data packet, from the first host across a network to a second host, by the first packet delay time). The amount of delay applied may depend on the backpressure value in the received independent backpressure signal. Examiner's interpretation: the host computer (first host) delays transmitting the class of I/O requests to the second host); and

based on at least a second data packet belonging to a second traffic class of the plurality of traffic classes, transmitting the second data packet from the first host to the second host without a delay (See Parag. [0031]; the backpressure signals provide independent backpressure congestion controls for the different classes of storage I/O requests. Thus, if one class of storage I/O requests is overwhelming the VSAN module 114, that particular class of storage I/O requests will get backpressure. However, other less backlogged classes of storage I/O requests (second data packet belonging to a second traffic class of the plurality of traffic classes) will still be able to fill up their corresponding queues and get access to the dispatch scheduler 422 without being bottlenecked.
Examiner’s interpretation: The host (first host) transmitting I/O requests (second data packet) that do not result in congestion in the queue associated with that class of I/O requests will still transmit I/O requests to the second host without a need for a delay); and based on at least a third data packet belonging to the first traffic class, delaying transmitting the third data packet, from the first host across the network, by the first packet delay time (See also Parag. [0039]; independent backpressure signal can then be used by each source, which may be a host computer (first host), to delay issuing the class of I/O requests (packets including a third data packet belonging to the first traffic class) identified in the independent backpressure signal. The amount of delay applied may depend on the backpressure value in the received independent backpressure signal. Examiner’s interpretation: delaying the transmission of a plurality of packets (including a third packet) belonging to the first class by the first packet delay time). Knauft doesn’t explicitly disclose the first network congestion category is determined at the first host; and delaying transmitting the third data packet from the first host across the network to a third host. However, Xu discloses determining a first network congestion category at a first host (See Parag. [0026]; The congestion sensor operates to collect storage congestion data for a particular class of storage I/O requests, e.g., resync I/O storage requests. In an embodiment, the congestion sensor collects storage device latency data, congestion level data (congestion category) and fairness index data. Examiner’s interpretation: The congestion sensor is included in each host computer in the cluster (See Parag. [0023], Fig. 2, and Fig. 3). Thus, the network congestion category is determined in each host computer (first host)). 
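The per-class backpressure delay the Examiner maps from Knauft (Parag. [0030], [0031], [0039]) can be sketched as follows. This is a minimal illustration only: the thresholds, the millisecond delay unit, and all function and class names are assumptions for readability, not disclosures of the reference.

```python
# Hypothetical sketch of Knauft's per-class backpressure delay: each traffic
# class has its own queue, the backpressure value rises linearly with queue
# fullness between a minimum and maximum threshold, and the source host delays
# packets of a congested class while uncongested classes pass without delay.
# All constants and names here are illustrative assumptions.

BASE_DELAY_MS = 5  # illustrative unit of delay per backpressure level


def backpressure_value(queue_len, min_thresh, max_thresh):
    """Linearly increasing value as the queue fills from min to max threshold."""
    if queue_len <= min_thresh:
        return 0
    if queue_len >= max_thresh:
        return max_thresh - min_thresh
    return queue_len - min_thresh


def delay_for_class(queue_len, min_thresh=10, max_thresh=50):
    """More delay is applied as the backpressure value indicates higher fullness."""
    return backpressure_value(queue_len, min_thresh, max_thresh) * BASE_DELAY_MS


# A congested class ("resync") is delayed; an uncongested class ("guest") is not.
delays = {"resync": delay_for_class(40), "guest": delay_for_class(5)}
```

Under this sketch, only the class whose queue has crossed the minimum threshold receives a non-zero delay, mirroring the Examiner's reading that "other less backlogged classes ... will still be able to fill up their corresponding queues ... without being bottlenecked."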
It would be obvious to one of ordinary skill in the art at the time before the effective filing date of the claimed invention to modify the determination of the network congestion category, taught by Knauft, to be determined at a first host, as taught by Xu. This would be convenient to control the congestion of at least one class of storage I/O requests by adaptively adjusting the bandwidth limit on another class of storage I/O requests based on current congestion data related to processing of storage I/O requests (Xu, Parag. [0025]). Knauft in view of Xu doesn’t explicitly disclose delaying transmitting the third data packet from the first host across the network to a third host. However, Nadas discloses wherein the scheduler is configured to delay transmitting a third data packet, from the first host across the network to a third host, by the first packet delay time, based on at least the third data packet belonging to the first traffic class (See Parag. [0055-0059], Fig. 5 and Fig. 6; The forwarding node 120 may compare the estimated delay to the first value extracted from the data packet. Depending on the comparison, the forwarding node may then detect whether or not a delay class to which the data packet is assigned (third data packet belonging to the first traffic class) is subject to congestion … See Parag. [0038]; In accordance with this delay class assignment, the input stage 310 provides the data packets (third data packet) to different queues. In particular, for each delay class, a corresponding queue is provided, in which the data packets are temporarily stored before being forwarded from the forwarding node 120. See also Parag. [0026]; the sending node 100 could send data packets via the forwarding node to additional receivers (including third host). See also Parag. [0024] [0027] [0039]. 
Examiner’s interpretation: Nadas discloses delaying transmitting packets (including third packet) from the sending node (first host) to a plurality of receivers (including third host) based on the packets (including third packet) belonging to a delay class). It would be obvious to one of ordinary skill in the art at the time before the effective filing date of the claimed invention to modify delaying transmitting the third data packet, by the first packet delay time, based on at least the third data packet belonging to the first traffic class, taught by Knauft in view of Xu, to be transmitted from the first host across the network to a third host, as taught by Nadas. This would be convenient to allow for efficiently controlling the handling of different kinds of data traffic (Nadas, Parag. [0005]). Claim 9 is taught by Knauft in view of Xu and Nadas as described for claim 2. Claim 11 is taught by Knauft in view of Xu and Nadas as described for claim 4. Claim 12 is taught by Knauft in view of Xu and Nadas as described for claim 5. Claim 15. Knauft discloses [a] non-transitory computer storage medium having stored thereon program code executable by a processor (See Parag. [0014]), the program code embodying a method comprising: determining a first network congestion category, wherein the first congestion category characterizes a first network congestion condition (See Parag. [0030]; Depending on the amount of different storage I/O requests coming to the VSAN module 114 to be processed, the queues may fill up at different rates. For each queue, the backpressure congestion controller generates a backpressure signal when the storage I/O requests in that queue reaches a certain threshold (congestion condition) ... 
Each backpressure signal may include the class of storage I/O requests, a backpressure value (congestion category), and identification of the host computer. The backpressure value indicates the fullness or the number of storage I/O requests currently stored in the queue corresponding to the indicated class of storage I/O requests. The backpressure value may be linearly increased as the number of storage I/O requests in the respective queue increases from the minimum threshold number up to the maximum threshold number. See Parag. [0031]; the backpressure congestion controller 426 transmits the backpressure signals to the respective clients or sources that had issued the corresponding storage I/O requests, which were placed in the different queues. See Parag. [0039]; More delay can be applied as the backpressure value in the received independent backpressure signal indicates a higher level of fullness for the corresponding queue. See also Parag. [0033-0034], Parag. [0037-0039], and Fig. 2. Examiner’s interpretation: The Examiner reasonably interprets “congestion category” to be equivalent to the backpressure value, which indicates a level of fullness (e.g., high, low, etc.). The Examiner’s interpretation is consistent with the Applicant’s definition in the Specification, Parag. [0040], stating: “Congestion categories 500 includes a low congestion category 501, a medium congestion category 502, and a high congestion category 503”); based on at least the first network congestion category and a target network congestion category, determining a first packet delay time (See Parag. [0031]; host computer that receives a backpressure signal will implement a delay based on the received congestion signal, which may be a time-averaged latency-based delay; each backpressure signal is based on the current fullness of the corresponding queue for particular class of storage I/O requests. See Parag. 
[0032]; an independent congestion signal for each of the queues is generated from the VSAN module when needed, and transmitted to the relevant sources of storage I/O requests of the class corresponding to that queue so that an appropriate delay can be applied to reduce congestion for that class of storage I/O requests. See also Parag. [0039]; independent backpressure signal can then be used by each source, which may be a host computer, to delay issuing the class of I/O requests identified in the independent backpressure signal. The amount of delay applied may depend on the backpressure value (congestion category) in the received independent backpressure signal. More delay can be applied as the backpressure value in the received independent backpressure signal indicates a higher level of fullness for the corresponding queue … Examiner’s note: The applicant discloses in the specification, Parag. [0043], that “if network congestion condition rises above target congestion category, controller will increase packet delay time in an attempt to reduce congestion”); based on at least a first data packet belonging to a first traffic class of a plurality of traffic classes, delaying transmitting the first data packet, from the first host across a network (See Fig. 1; “virtual storage area network” (VSAN) 102) to a second host, by the first packet delay time (See Parag. [0031]; Each host (first host) computer that receives a backpressure signal will implement a delay based on the received congestion signal, which may be a time-averaged latency-based delay. Since each backpressure signal is based on the current fullness of the corresponding queue for particular class of storage I/O requests, the backpressure signals provide independent backpressure congestion controls for the different classes of storage I/O requests. 
If one class (first traffic class of a plurality of traffic classes) of storage I/O requests is overwhelming the VSAN module 114, that particular class of storage I/O requests will get backpressure… See also Parag. [0039]; independent backpressure signal can then be used by each source, which may be a host computer (first host), to delay issuing the class of I/O requests identified in the independent backpressure signal (delay transmitting the first data packet, from the first host across a network to a second host, by the first packet delay time). The amount of delay applied may depend on the backpressure value in the received independent backpressure signal. Examiner’s interpretation: the host computer (first host) delays transmitting the class of I/O requests to the second host); and based on at least a second data packet belonging to a second traffic class of the plurality of traffic classes, transmitting the second data packet from the first host to the second host without a delay (See Parag. [0031]; the backpressure signals provide independent backpressure congestion controls for the different classes of storage I/O requests. Thus, if one class of storage I/O requests is overwhelming the VSAN module 114, that particular class of storage I/O requests will get backpressure. However, other less backlogged classes of storage I/O requests (second data packet belonging to a second traffic class of the plurality of traffic classes) will still be able to fill up their corresponding queues and get access to the dispatch scheduler 422 without being bottlenecked. 
Examiner’s interpretation: The host (first host) transmitting I/O requests (second data packet) that do not result in congestion in the queue associated with that class of I/O requests will still transmit I/O requests to the second host without a need for a delay); and based on at least a third data packet belonging to the first traffic class, delaying transmitting the third data packet, from the first host across the network, by the first packet delay time (See also Parag. [0039]; independent backpressure signal can then be used by each source, which may be a host computer (first host), to delay issuing the class of I/O requests (packets including a third data packet belonging to the first traffic class) identified in the independent backpressure signal. The amount of delay applied may depend on the backpressure value in the received independent backpressure signal. Examiner’s interpretation: delaying the transmission of a plurality of packets (including a third packet) belonging to the first class by the first packet delay time). Knauft doesn’t explicitly disclose the first network congestion category is determined at the first host; and delaying transmitting the third data packet from the first host across the network to a third host. However, Xu discloses determining a first network congestion category at a first host (See Parag. [0026]; The congestion sensor operates to collect storage congestion data for a particular class of storage I/O requests, e.g., resync I/O storage requests. In an embodiment, the congestion sensor collects storage device latency data, congestion level data (congestion category) and fairness index data. Examiner’s interpretation: The congestion sensor is included in each host computer in the cluster (See Parag. [0023], Fig. 2, and Fig. 3). Thus, the network congestion category is determined in each host computer (first host)). 
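The claimed comparison against a target congestion category, as characterized in the Applicant's own specification (Parag. [0040], [0043], both quoted by the Examiner above), can be sketched as a simple feedback rule: raise the packet delay time when the measured category exceeds the target, relax it when below. The category ordering, step size, and function names below are illustrative assumptions, not claim language.

```python
# Illustrative sketch of the target-category controller described in the
# Applicant's specification: measured congestion category is compared to a
# target category, and the packet delay time is increased when congestion
# rises above the target. Step size and names are hypothetical.

CATEGORIES = ["low", "medium", "high"]  # Parag. [0040] names three categories
STEP_MS = 10  # hypothetical adjustment step per category of difference


def adjust_delay(current_delay_ms, measured, target):
    """Raise delay when measured congestion exceeds the target category."""
    diff = CATEGORIES.index(measured) - CATEGORIES.index(target)
    if diff > 0:                           # above target: back off harder
        return current_delay_ms + diff * STEP_MS
    if diff < 0 and current_delay_ms > 0:  # below target: relax the delay
        return max(0, current_delay_ms + diff * STEP_MS)
    return current_delay_ms
```

The sketch makes concrete why the claim recites determining the delay "based on at least the first network congestion category and a target network congestion category": both inputs are needed to decide the direction and size of the adjustment.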
It would be obvious to one of ordinary skill in the art at the time before the effective filing date of the claimed invention to modify the determination of the network congestion category, taught by Knauft, to be determined at a first host, as taught by Xu. This would be convenient to control the congestion of at least one class of storage I/O requests by adaptively adjusting the bandwidth limit on another class of storage I/O requests based on current congestion data related to processing of storage I/O requests (Xu, Parag. [0025]). Knauft in view of Xu doesn’t explicitly disclose delaying transmitting the third data packet from the first host across the network to a third host. However, Nadas discloses based on at least a third data packet belonging to the first traffic class, delaying transmitting the third data packet, from the first host across the network to a third host, by the first packet delay time (See Parag. [0055-0059], Fig. 5 and Fig. 6; The forwarding node 120 may compare the estimated delay to the first value extracted from the data packet. Depending on the comparison, the forwarding node may then detect whether or not a delay class to which the data packet is assigned (third data packet belonging to the first traffic class) is subject to congestion … See Parag. [0038]; In accordance with this delay class assignment, the input stage 310 provides the data packets (third data packet) to different queues. In particular, for each delay class, a corresponding queue is provided, in which the data packets are temporarily stored before being forwarded from the forwarding node 120. See also Parag. [0026]; the sending node 100 could send data packets via the forwarding node to additional receivers (including third host). See also Parag. [0024] [0027] [0039]. 
Examiner’s interpretation: Nadas discloses delaying transmitting packets (including third packet) from the sending node (first host) to a plurality of receivers (including third host) based on the packets (including third packet) belonging to a delay class). It would be obvious to one of ordinary skill in the art at the time before the effective filing date of the claimed invention to modify delaying transmitting the third data packet, by the first packet delay time, based on at least the third data packet belonging to the first traffic class, taught by Knauft in view of Xu, to be transmitted from the first host across the network to a third host, as taught by Nadas. This would be convenient to allow for efficiently controlling the handling of different kinds of data traffic (Nadas, Parag. [0005]). Claim 16 is taught by Knauft in view of Xu and Nadas as described for claim 2. Claim 18 is taught by Knauft in view of Xu and Nadas as described for claim 4. Claim 19 is taught by Knauft in view of Xu and Nadas as described for claim 5. Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Knauft et al. (Pub. No. US 2019/0303308), hereinafter Knauft; in view of Xu et al. (Pub. No. US 2019/0317665), hereinafter Xu; further in view of Nadas et al. (Pub. No. US 2017/0085487), hereinafter Nadas; and further in view of Mallick et al. (Pub. No. US 2019/0334987), hereinafter Mallick. Claim 3. Knauft in view of Xu and Nadas discloses [t]he computerized method of claim 1, Knauft further discloses the computerized method further comprising: wherein the first network congestion condition is between the first host and the second host (See Parag. [0030]; Depending on the amount of different storage I/O requests coming to the VSAN module 114 (in the second host) to be processed, the queues may fill up at different rates. 
For each queue, the backpressure congestion controller generates a backpressure signal when the storage I/O requests in that queue reaches a certain threshold. See Parag. [0031]; the backpressure congestion controller 426 transmits the backpressure signals to the respective clients or sources (first host) that had issued the corresponding storage I/O requests, which were placed in the different queues. Examiner’s interpretation: the determined congestion is based on I/O requests issued from the first host to the second host (the first network congestion condition is between the first host and the second host)). Knauft in view of Xu doesn’t explicitly disclose: determining a second network congestion condition at the first host, wherein the second network congestion condition is between the first host and a third host; based on at least the second network congestion condition, determining a second packet delay time; and based on at least the third data packet belonging to the first traffic class, delaying transmitting the third data packet, from the first host across the network to the third host, by the second packet delay time. However, Mallick discloses: determining a second network congestion condition at the first host, wherein the first network congestion condition is between the first host and the second host, and wherein the second network congestion condition is between the first host and a third host (See Parag. [0030]; The MPIO driver of the host device (first host) in determining latencies of the storage devices (second host and third host) is illustratively configured to determine a current latency for a given one of the storage devices based at least in part on an average response time of the given storage device to IO operations delivered to the given storage device over a designated period of time. Similar determinations are made for other ones of the storage devices. See Parag. 
[0098]; configuring an MPIO driver of a host device to interact with a storage array or other storage system in implementing SLO-based IO selection functionality can limit the situations in which the artificial delays are introduced in storage system queues, thereby reducing storage system queue congestion. See Parag. [0083-0085]; The expected response time (determining latency) is in association with different SLO (service level objective) categories (Silver, Gold, and Diamond). See also Parag. [0004] [0009] [0044-0046]. Examiner’s interpretation: The applicant discloses in the specification, in Parag. [0041] and Fig. 5, that network congestion conditions are defined according to network latency. Mallick discloses determining a network latency of a storage device (second host) based on IO operations delivered to the given storage device from the host (first host) over a designated period of time; and determining a network latency of another storage device (third host) based on IO operations delivered to the given storage device from the host (first host) over a designated period of time); based on at least the second network congestion condition, determining a second packet delay time (See Parag. [0031]; The MPIO driver (within first host) may be configured to compute initial TTBU (initial time-to-become-urgent) values for assignment to respective ones of the IO operations by decrementing an average response time of a corresponding one of the storage devices (third host) from an expected response time of an SLO of the source of each IO operation to obtain as the TTBU value for that IO operation an amount of time that the IO operation can remain in the set of IO queues without adversely impacting its expected response time. See Parag. [0035]; TTBU values are construed as encompassing values of various types that are each indicative of an amount of time that a given IO operation can be queued (packet delay time) in a host device. See Parag. 
[0083-0085]; The expected response time (determining latency) is in association with different SLO (service level objective) categories (Silver, Gold, and Diamond); where TTBU (initial time-to-become-urgent) values of zero are assigned to the Gold and Diamond categories, while a non-zero TTBU is assigned to the Silver category. See also Parag. [0032-0034]. Examiner’s interpretation: A TTBU indicative of an amount of time that a given IO operation can be queued in a host device is computed based on the current latency (e.g., silver category) corresponding to the storage device (third host, in this case)); and based on at least the third data packet belonging to the first traffic class, delaying transmitting the third data packet, from the first host across the network to the third host, by the second packet delay time (See Parag. [0033]; the MPIO driver (within first host) is configured to select IO operations from the set of IO queues for delivery to the storage array (in this case, to the third host)... The MPIO driver may assign initial TTBU values of zero to respective ones of the IO operations having a highest priority level so as to ensure that any such operations are selected for delivery to the storage array prior to other IO operations having non-zero TTBU values (e.g., the silver category) (third data packet belonging to the first traffic class). See also Parag. [0083-0085]. Examiner’s interpretation: IO operations (third packet) belonging to a low priority level (first traffic class) are assigned non-zero TTBU values corresponding to the low priority level indicative of an amount of time (delay) that a given IO operation can be queued for delivery to the storage (in this case, the third host), where IO operations (third packet) are delivered from the host (first host)). 
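Mallick's TTBU ("time-to-become-urgent") computation, as paraphrased by the Examiner (Parag. [0031], [0083-0085]), can be sketched as the SLO's expected response time minus the storage device's average response time, with the highest-priority SLO categories pinned to zero so their IOs dispatch immediately. The numeric SLO targets and names below are hypothetical values chosen only to make the sketch runnable.

```python
# Hedged sketch of the TTBU computation attributed to Mallick: initial TTBU
# is the expected response time of the IO's SLO category minus the device's
# average response time, i.e., how long the IO can sit queued at the host
# without missing its SLO. Gold/Diamond get TTBU = 0 (dispatched first).
# All values here are illustrative assumptions, not from the reference.

EXPECTED_RT_MS = {"silver": 20, "gold": 5, "diamond": 2}  # hypothetical SLO targets
ZERO_TTBU = {"gold", "diamond"}  # highest-priority categories bypass queuing


def initial_ttbu(slo, device_avg_rt_ms):
    """Time an IO may remain queued without impacting its expected response."""
    if slo in ZERO_TTBU:
        return 0
    return max(0, EXPECTED_RT_MS[slo] - device_avg_rt_ms)
```

This mirrors the mapping in the rejection: a non-zero TTBU for the Silver category acts as the claimed per-destination "packet delay time," while Gold and Diamond IOs are never artificially held.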
It would be obvious to one of ordinary skill in the art at the time before the effective filing date of the claimed invention to modify the determination of the network congestion condition, taught by Knauft in view of Xu, to determine a second network congestion condition at the first host between the first host and a third host, based on at least the second network congestion condition, determining a second packet delay time, and based on at least a third data packet belonging to the first traffic class, delaying transmitting the third data packet, from the first host across the network to the third host, by the second packet delay time, as taught by Mallick. This would be convenient for reducing storage system queue congestion. As a result, the number of “queue full” messages received in the host devices from the storage system is reduced, and overall storage system performance is improved (Mallick, Parag. [0098]). Claim 10 is taught by Knauft in view of Xu, Nadas, and Mallick as described for claim 3. Claim 17 is taught by Knauft in view of Xu, Nadas, and Mallick as described for claim 3. Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Knauft et al. (Pub. No. US 2019/0303308), hereinafter Knauft; in view of Xu et al. (Pub. No. US 2019/0317665), hereinafter Xu; further in view of Nadas et al. (Pub. No. US 2017/0085487), hereinafter Nadas; and further in view of Xiang et al. (Pub. No. US 2019/0312925), hereinafter Xiang. Claim 6. Knauft in view of Xu and Nadas discloses [t]he computerized method of claim 1, Knauft in view of Xu doesn’t explicitly disclose the computerized method further comprising: reducing a delay of a data packet belonging to the first traffic class to ensure a minimum bandwidth allocation for the first traffic class. However, Xiang discloses reducing a delay of a data packet belonging to the first traffic class to ensure a minimum bandwidth allocation for the first traffic class (See Parag. 
[0041-0042]; adjusted congestion signal is transmitted to sources of storage I/O requests so that discounted delay can be applied to new storage I/O requests issued from the sources. The adjusted or discounted congestion signal will help resync I/O requests delay less (reducing a delay of a data packet belonging to the first traffic class), balance off the single OM limit of the resync I/O pattern, increase its I/O bandwidth and reach the expected I/O fairness ratio for the different classes of storage I/O requests … rebalances more bandwidth to the low OM resync I/O once its bandwidth is squelched (suppressed) too much (ensure a minimum bandwidth allocation for the first traffic class) by high OM guest VM I/O, caused by the resource constraint congestion, and guarantees IO fairness under the per-component resource constraint conditions). It would be obvious to one of ordinary skill in the art at the time before the effective filing date of the claimed invention to modify the host computer, taught by Knauft in view of Xu, to reduce a delay of a data packet belonging to the first traffic class to ensure a minimum bandwidth allocation for the first traffic class, as taught by Xiang. This would be convenient to reach the expected I/O fairness ratio for the different classes of storage I/O requests (Xiang, Parag. [0042]). Claim 13 is taught by Knauft in view of Xu, Nadas, and Xiang as described for claim 6. Claim 20 is taught by Knauft in view of Xu, Nadas, and Xiang as described for claim 6. Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Knauft et al. (Pub. No. US 2019/0303308), hereinafter Knauft; in view of Xu et al. (Pub. No. US 2019/0317665), hereinafter Xu; further in view of Nadas et al. (Pub. No. US 2017/0085487), hereinafter Nadas; and further in view of Dhanabalan et al. (Pub. No. US 2016/0373361), hereinafter Dhanabalan. Claim 7. 
Knauft in view of Xu and Nadas discloses [t]he computerized method of claim 1, Knauft in view of Xu doesn’t explicitly disclose wherein determining the first network congestion condition comprises: determining a net round trip transit time through the network for the second traffic class. However, Dhanabalan discloses determining the first network congestion condition comprises: determining a net round trip transit time through the network for the second traffic class (See Parag. [0059]; A value for queuing delay dq is determined by: dq=RTTcurrent−RTTlow, where RTTcurrent is the round trip time currently experienced for a particular flow, and RTTlow is the shortest round trip time experienced by the plurality of packets sent and received by TCP controller during the predetermined time interval. Appliance calculates values for dq for each of a plurality of active P1 flows (High priority) (the second traffic class). See Parag. [0060]; appliance can assign one or more flow priorities at traffic prioritization module, which are fed back to traffic priority controller. Traffic priority controller can determine link congestion for the active TCP link using the values for dq, and the number of packets queued. For example, if the queuing delay or the number of packets queued in the network for a P1 flow (High priority) is high, appliance can determine that early congestion is imminent for the P1 flow. When early congestion is detected for P1 traffic, it means a P1 packet drop could occur in the near future. Examiner’s note: the P1 flow is high/first priority traffic (See Parag. [0002] [0047] [0049] [0054]). See also Parag. [0004] [0061]. Examiner’s interpretation: The network congestion is determined based on the values of the queuing delay, determined by the round trip time, for each of a plurality of active High priority flows (P1)). 
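The queuing-delay computation quoted from Dhanabalan (Parag. [0059-0060]) is explicit: dq = RTTcurrent − RTTlow for each high-priority (P1) flow, with early congestion flagged when queuing delay grows large. A runnable sketch follows; the threshold value and the flow data structure are assumptions added for illustration.

```python
# Sketch of Dhanabalan's queuing-delay signal: dq is the current round trip
# time minus the lowest RTT observed for the flow, and a large dq on a P1
# (high-priority) flow indicates imminent congestion. The threshold and the
# dict-of-flows shape are illustrative assumptions, not from the reference.


def queuing_delay(rtt_current_ms, rtt_low_ms):
    """dq per Parag. [0059]: current RTT minus the shortest observed RTT."""
    return rtt_current_ms - rtt_low_ms


def early_congestion(p1_flows, dq_threshold_ms=30):
    """Return the P1 flows whose queuing delay suggests an imminent drop."""
    return [flow_id for flow_id, (cur, low) in p1_flows.items()
            if queuing_delay(cur, low) > dq_threshold_ms]


flows = {"flow-a": (90, 40), "flow-b": (55, 40)}  # (RTT_current, RTT_low) in ms
```

Here "flow-a" has dq = 50 ms and would be flagged, while "flow-b" (dq = 15 ms) would not, matching the reference's idea that a high queuing delay on a P1 flow signals early congestion before packets are actually dropped.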
It would be obvious to one of ordinary skill in the art at the time before the effective filing date of the claimed invention to modify the first network congestion condition determination, taught by Knauft in view of Xu, to comprise determining a net round trip transit time through the network for the second traffic class, as taught by Dhanabalan. This would be convenient to mitigate network congestion for higher priority traffic and improve the efficiency of the network data flow through optimization of the bandwidth (Dhanabalan, Parag. [0017]). Claim 14 is taught by Knauft in view of Xu, Nadas, and Dhanabalan as described for claim 7. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Geng et al. (Pub. No. US 2022/0345412) – Related art in the area of network transmissions and coordinated control of network traffic within data flows, (Abstract; A regular buffer and a shadow buffer are maintained at a receiver host. Responsive to receiving a data flow from a sender host that is clock-synchronized with the receiver host using a common reference clock, a first indication of data of the data flow is stored to the regular buffer, the shadow buffer is transitioned from an idle state to an active state, and a counter of the shadow buffer is incremented that indicates a unit of data traffic received. A dynamic drain rate is determined based on a number of units of the data removed from the regular buffer per unit of time while the shadow buffer is in the active state, where the shadow buffer reverts to an idle state responsive to a break in the receiver host receiving the data flow. Dwell time is calculated as a function of the counter of the shadow buffer and the dynamic drain rate, and a congestion signal for the data flow is determined based on the dwell time.). 
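The Geng abstract cited above only states that dwell time is "a function of the counter of the shadow buffer and the dynamic drain rate" and that the congestion signal derives from the dwell time. One plausible reading, sketched below with an assumed dwell = counter / drain-rate form and an assumed target threshold (neither is specified by the abstract):

```python
# Hypothetical sketch of the dwell-time congestion signal summarized in the
# Geng abstract: the shadow buffer counts units of traffic received, a
# dynamic drain rate is measured from the regular buffer, and the congestion
# signal fires when the estimated dwell time exceeds a target. The division
# form and the threshold are assumptions, not disclosures of the reference.


def dwell_time(shadow_counter, drain_rate_units_per_ms):
    """Estimated lingering time, assuming dwell = counter / drain rate."""
    if drain_rate_units_per_ms <= 0:
        return float("inf")  # nothing draining: traffic dwells indefinitely
    return shadow_counter / drain_rate_units_per_ms


def congestion_signal(shadow_counter, drain_rate_units_per_ms, target_ms=10):
    """Signal congestion when the estimated dwell time exceeds the target."""
    return dwell_time(shadow_counter, drain_rate_units_per_ms) > target_ms
```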
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDELBASST TALIOUA whose telephone number is (571)272-4061. The examiner can normally be reached on Monday-Thursday 7:30 am - 5:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Oscar Louie can be reached on 571-270-1684. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Abdelbasst Talioua/Examiner, Art Unit 2445

Prosecution Timeline

Jul 21, 2023: Application Filed
Dec 28, 2024: Non-Final Rejection — §103, §112
Apr 03, 2025: Response Filed
Jul 12, 2025: Final Rejection — §103, §112
Oct 31, 2025: Request for Continued Examination
Nov 07, 2025: Response after Non-Final Action
Dec 27, 2025: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12445386
Mesh network system and communication method of the same having data flow transmission sorting mechanism
2y 5m to grant Granted Oct 14, 2025
Patent 12401608
PORTABLE DOCUMENT FILE COMMUNICATION SYSTEM
2y 5m to grant Granted Aug 26, 2025
Patent 12388882
Detecting Interactive Content In A Media Conference
2y 5m to grant Granted Aug 12, 2025
Patent 12388724
NETWORK DEGRADATION PREDICTION
2y 5m to grant Granted Aug 12, 2025
Patent 12381792
SOFTWARE SERVICE PLATFORM
2y 5m to grant Granted Aug 05, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
58%
Grant Probability
94%
With Interview (+35.2%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 106 resolved cases by this examiner. Grant probability derived from career allow rate.
