Prosecution Insights
Last updated: April 19, 2026
Application No. 18/322,556

DIFFERENTIATED ADMISSION CONTROL FOR SINGULAR FLOW WITH BIFURCATED PRIORITIES

Status: Final Rejection (§103)
Filed: May 23, 2023
Examiner: FENNER, RAENITA ANN
Art Unit: 2468
Tech Center: 2400 — Computer Networks
Assignee: Microsoft Technology Licensing, LLC
OA Round: 2 (Final)

Predictions
Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 10m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 83% (20 granted / 24 resolved; +25.3% vs Tech Center average, above average)
Interview Lift: +6.3% (moderate; among resolved cases with interview)
Avg Prosecution: 2y 10m (typical timeline)
Currently Pending: 41
Total Applications: 65 (across all art units)

Statute-Specific Performance

§101: 0.5% (-39.5% vs TC avg)
§103: 62.8% (+22.8% vs TC avg)
§102: 26.8% (-13.2% vs TC avg)
§112: 9.5% (-30.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 24 resolved cases.

Office Action (§103)
DETAILED ACTION

The action is responsive to claims filed on 01/06/2026. Claims 1-20 are pending for evaluation. Note: The claims are presented with independent claims listed first in numerical order, followed by dependent claims also in numerical order; any dual or mirror claims are grouped with the lowest-numbered claim in their respective pairing.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/12/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

The Amendment filed on 01/06/2026 has been entered. Claims 1, 7, 8, 14 and 15 have been amended; Claims 1-20 remain pending for evaluation. Applicant’s amendments to the Claims have overcome each and every §112(b) rejection previously set forth in the Non-Final Office Action mailed on 10/10/2025.

Response to Arguments

Applicant's arguments filed 01/06/2026 have been fully considered but they are not persuasive. In response to Applicant’s argument on pg. 14-15 of Applicant Remarks that, in substance, neither Prakash nor Liu teaches or suggests at least “each bandwidth request indicating a primary bandwidth request portion and a deferrable bandwidth request portion for each of the plurality of data flows” in Claim 1, Examiner respectfully disagrees. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
As set forth in the Non-Final Rejection mailed on 10/10/2025, Prakash is relied upon for teaching receiving a bandwidth request for each of a plurality of data flows of a WAN, where WM FLOW REQUESTS 3401 are generated per individual flow (Prakash Fig. 3B, Para. [0106]). Liu is relied upon for teaching that each bandwidth request includes service flow attributes distinguishing delay-sensitive and delay-insensitive traffic, which correspond to primary and deferrable bandwidth request portions associated with each service flow (Liu Fig. 2, Para. [0036-0040]). When the teachings are considered in combination, the cited art teaches bandwidth requests received per data flow, each request indicating bandwidth subject to prioritized and deferrable allocation treatment as recited. Accordingly, Applicant’s arguments addressing Prakash and Liu separately do not overcome the rejection.

Applicant’s arguments with respect to the limitations of “generate bandwidth demand predictions based on history of the plurality of data flows” and “determine, based on at least the aggregate bandwidth request, a granted primary aggregate bandwidth and a granted deferrable aggregate bandwidth using network topology and the bandwidth demand predictions” in Claim(s) 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Applicant’s arguments presented with respect to independent Claim(s) 8 and 15 and the dependent claims are substantively the same as those set forth for Claim 1. Accordingly, the same reasoning and supporting explanation provided for Claim 1 are equally applicable to independent Claim(s) 8 and 15 and the dependent claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Prakash et al. (US 2016/0080207, previously presented), Prakash hereinafter, and Liu et al. (US 2018/0048586), Liu hereinafter, and Nag et al. (US 2006/0020694), Nag hereinafter. Liu was presented in the IDS submitted on 10/03/2024.

Regarding Claim 1, Prakash teaches a system comprising (Fig. 14, Para. [0328, 0333-0343]): a processor (Fig. 14, element 1410; Para. [0328, 0333-0343]); and a computer-readable medium storing instructions that are operative upon execution by the processor to (Fig. 14, element 1430; Para.
[0328, 0333-0343]): receive a bandwidth request for each of a plurality of data flows of a wide area network (WAN) (Fig. 3B, 3401; Para. [0106] - The WM module 340 sends WM FLOW REQUESTS 3401, representing bandwidth requests per flow, to the QOS manager module 330. The flow logic module 362 determines the amount of bandwidth to be requested in the WM FLOW REQUEST 3401. In various embodiments, the amount of bandwidth requested in a WM FLOW REQUEST 3401 equals W_MAX/RTT_2 for an individual flow. In various embodiments, computing the bandwidth request of W_MAX/RTT_2 for the first individual flow is based on an estimated rate at which a receiving host is receiving data packets from the BW management system 125 for an individual flow; Para. [0341] - In various example embodiments, one or more portions of the network 1480 may be an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, the Internet, a portion of the Internet, a portion of the PSTN, a POTS network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks; See also Fig. 3A, Para. [0090-0099]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]), aggregate the bandwidth requests for the plurality of data flows into an aggregate bandwidth request, the aggregate bandwidth request indicating a primary aggregate bandwidth request portion and a deferrable aggregate bandwidth request portion (Fig. 3B, 3301; Para. [0107-0108] - [0107] The formula for W_MAX/RTT_2 is described in further detail with respect to FIGS. 10A-10F. The QOS logic module 361 processes the WM FLOW REQUEST 3401 and computes a QOS COLLECTION REQUEST 3301 representing an aggregate bandwidth amount for a collection of flows associated with a traffic class.
The QOS logic module 361 aggregates the total request size (referred to as R) for an application priority class across all application priority classes which are associated with a collection of flows associated with a traffic class. The aggregate total request size R represents the aggregate bandwidth amount requested or indicated in the QOS COLLECTION REQUEST 3301. In some embodiments, the QOS logic module 361 tracks the bandwidth amount for the WM FLOW REQUESTS 3401 by each application priority class (for a collection of flows associated with a traffic class) and then aggregates the bandwidth requested per flow across all application priority classes (for the collection of flows associated with a traffic class) to compute R. [0108] In an example embodiment, there may be three application priority classes referred to as P1, P2 and P3. P1 may represent a high application priority class, P2 may represent a medium priority class and P3 may represent a low application priority class. Each priority application class has an aggregate total request size referred to as R_P1 for P1, R_P2 for P2 and R_P3 for P3, where the aggregate total request size R across all three application priority classes equals R_P1+R_P2+R_P3. FIG. 7C illustrates an example of a portion of a QOS manager 331 having three application priority classes P1, P2, and P3; See also Fig. 3A, Para. [0090-0099]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]); Prakash Para. [0107] teaches the QoS Logic Module aggregates flow requests by application priority class into a total request size R, which, when the classes are interpreted as primary and deferrable portions, constitutes an aggregate bandwidth request indicating both a primary and a deferrable portion.
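To make the cited aggregation concrete, the per-class total request size R described for Prakash (R = R_P1 + R_P2 + R_P3) can be sketched in a few lines. This is an editorial illustration only: the record structure, the class labels, and the explicit primary/deferrable split on each request are assumptions for illustration, not code from Prakash or from the application.

```python
# Illustrative sketch: aggregate per-flow bandwidth requests into a total
# request size R per application priority class (R = R_P1 + R_P2 + R_P3),
# with a hypothetical primary/deferrable split on each request.
from collections import defaultdict

def aggregate_requests(flow_requests):
    """flow_requests: list of dicts with 'priority_class', 'primary', 'deferrable' (bps)."""
    per_class = defaultdict(lambda: {"primary": 0, "deferrable": 0})
    for req in flow_requests:
        per_class[req["priority_class"]]["primary"] += req["primary"]
        per_class[req["priority_class"]]["deferrable"] += req["deferrable"]
    total_primary = sum(c["primary"] for c in per_class.values())
    total_deferrable = sum(c["deferrable"] for c in per_class.values())
    # R aggregates the requested bandwidth across all priority classes.
    return {"per_class": dict(per_class),
            "primary": total_primary,
            "deferrable": total_deferrable,
            "R": total_primary + total_deferrable}

requests = [
    {"priority_class": "P1", "primary": 40, "deferrable": 10},
    {"priority_class": "P2", "primary": 20, "deferrable": 20},
    {"priority_class": "P3", "primary": 5,  "deferrable": 25},
]
agg = aggregate_requests(requests)  # agg["R"] == 120
```

Under the examiner's reading, `agg["primary"]` and `agg["deferrable"]` would play the roles of the primary and deferrable aggregate request portions.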
determine, based on at least the aggregate bandwidth request, a granted primary aggregate bandwidth and a granted deferrable aggregate bandwidth (Fig. 3B, 3302; Para. [0109-0112] - [0109] The BW logic module 360 processes the QOS COLLECTION REQUEST 3301 received by the BW manager module system 320. In various embodiments, multiple communications may be sent between the BW managers (e.g., 321-329) associated with a HBT before the BW manager module 320 sends a BW COLLECTION RESPONSE 3302 in response to the QOS COLLECTION REQUEST 3301. [0110] In various examples, the BW logic module 360 is responsible for controlling the bandwidth utilization of the collection of individual flows associated with the universal traffic class by controlling the bandwidth utilization of the collection of individual flows associated with each of the traffic classes such that each of the traffic classes conforms to the bandwidth limits assigned to the node representing the traffic class. In various embodiments the bandwidth amount specified in a WM FLOW REQUEST 3401 represents the current utilization of an individual flow at a specific point in time, and the aggregate bandwidth request amount specified in the QOS COLLECTION REQUEST 3301 represents the current utilization of a collection of flows for a traffic class at a specific point in time, where the traffic class represents a node having assigned bandwidth limits. [0111] The QOS COLLECTION REQUEST 3301 and the BW COLLECTION RESPONSE 3302 represent communications associated with a collection of flows associated with traffic classes (or traffic subclasses of the universal traffic class). The BW logic module 360 determines the amount of bandwidth to allocate in the BW COLLECTION RESPONSE 3302 and sends the BW COLLECTION RESPONSE 3302 to the QOS manager module 330. The amount of bandwidth allocated by the BW manager module 320 to the QOS manager module 330 is referred to as the ALLOCATED BANDWIDTH (B). 
The BW COLLECTION RESPONSE 3302 specifying the ALLOCATED BANDWIDTH (B) received by the QOS manager module 330 is processed by the QOS logic module 361 based on application priority information. In various embodiments, the user configures or assigns the application priorities, for example, high priority, medium priority or low priority. Generally, the flows associated with the higher priority applications are allocated a larger share of the ALLOCATED BANDWIDTH (B). The QOS logic module 361 determines the allocation for the application priority shares and the flow shares. In various embodiments, the portion of the ALLOCATED BANDWIDTH (B) assigned to each of the application priority classes may be referred to as the percentage share of the ALLOCATED BANDWIDTH (B). The allocation of percentage shares will be discussed further with FIG. 7C. [0112] In various embodiments, the percentage share of the allocated bandwidth limit for each priority application class (also referred to as the application priority share) is a dynamic value that may be modified as new data packets are received by the BW management system 125. In various embodiments, the QOS logic module 361 is responsible for allocating flow shares to the individual flows of the application priority shares. In some embodiments, the QOS FLOW RESPONSE 3402 specifies a flow share having a bandwidth amount of QOS ALLOCATE. The various factors in determining the amount of bandwidth to allocate to each flow share is discussed further in conjunction with FIG. 3E; See also Fig. 3A, Para. [0090-0099]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320] ); Prakash Para. 
[0109-0112] teach determining bandwidth based on the aggregate bandwidth request (i.e., QoS Collection Request 3301), allocating a granted aggregate bandwidth (allocated bandwidth B), and assigning portions to application priorities (granted primary aggregate bandwidth and granted deferrable aggregate bandwidth), satisfying the limitation. Yet, Prakash does not expressly teach each bandwidth request indicating a primary bandwidth request portion and a deferrable bandwidth request portion for each of the plurality of data flows and based on at least the granted primary aggregate bandwidth and the granted deferrable aggregate bandwidth, allocate, for each bandwidth request for the plurality of data flows, a granted primary bandwidth and a granted deferrable bandwidth. However, Liu teaches each bandwidth request indicating a primary bandwidth request portion and a deferrable bandwidth request portion for each of the plurality of data flows (Fig. 2, S101; Para. [0036-0040] - [0036] FIG. 2 is a flowchart of a first embodiment of an upstream bandwidth allocation method according to the present invention. As shown in FIG. 2, based on the networking architecture shown in FIG. 1, the solution is performed by a CMTS. The upstream bandwidth allocation method specifically includes the following steps. [0037] S101: The CMTS obtains a service flow attribute of each online CM, where the service flow attribute includes a delay-sensitive service and a delay-insensitive service. [0038] In this embodiment, a service flow is classified into a delay-sensitive service and a delay-insensitive service according to sensitivity of a service to a transmission delay. For example, the delay-sensitive service may be a conversational service (such as a voice service or a videophone service). For example, the delay-insensitive service may be an interactive service (such as web page access). 
[0039] When detecting that a CM gets online, the CMTS first needs to obtain service flow attributes of all online CMs, so as to allocate a bandwidth subsequently according to different types. Specifically, a specific manner of obtaining a service flow attribute of a CM includes detecting, by the CMTS, an identifier of each online CM, and querying, by the CMTS, a preconfigured mapping relationship between an identifier of each CM and a service flow attribute according to the identifier of each online CM, to obtain the service flow attribute of each online CM. [0040] That is, the CMTS stores a mapping relationship between an identifier of each CM and a service flow attribute, and only needs to query the mapping relationship according to an identifier of a CM; See Also Fig. 3a-3b, Para. [0051-0058]; Fig. 4a-4b, Para. [0059-0078]); Liu in Fig. 2, S101 and Para. [0036-0040] teaches the limitation by disclosing that the CMTS obtains a service flow attribute for each online CM, where the service flow attribute distinguishes between delay-sensitive (voice, video) and delay-insensitive (web) services. For purposes of this analysis, the service flow attribute is interpreted as a bandwidth request message, and each online CM is interpreted as a data flow. Under this interpretation, a delay-sensitive attribute corresponds to a primary portion and a delay-insensitive attribute to a deferrable portion, thereby meeting the requirement that each bandwidth request indicate both a primary and a deferrable portion. and based on at least the granted primary aggregate bandwidth and the granted deferrable aggregate bandwidth, allocate, for each bandwidth request for the plurality of data flows, a granted primary bandwidth and a granted deferrable bandwidth (Fig. 2, S102-S103; Para. [0041-0046] - [0041] S102: The CMTS allocates an upstream bandwidth to each online CM according to a received service request, and obtains a remaining bandwidth. 
[0042] In this embodiment, each CM first sends a service request to the CMTS according to a bandwidth requirement of the CM, so that the CMTS allocates a resource on which interaction can be performed. That is, the CMTS first allocates a preset total bandwidth according to a service request sent by each online CM; and after allocating a corresponding bandwidth to each CM, the CMTS obtains an entire remaining bandwidth after allocation. That is, the remaining bandwidth is a remaining part in the preset total bandwidth except the upstream bandwidth that is allocated to each online CM according to the service request. [0043] S103: The CMTS allocates at least a part of the remaining bandwidth to a CM whose service flow attribute is a delay-sensitive service in the online CMs. [0044] In this embodiment, the entire remaining bandwidth may be allocated to the CM whose service flow attribute is the delay-sensitive service in the online CM, or a part of the remaining bandwidth may be allocated to the CM whose service flow attribute is the delay-sensitive service in the online CM. [0045] Specifically, the at least a part of the remaining bandwidth may be allocated evenly or unevenly to an online CM corresponding to a delay-sensitive service, as long as it is ensured that the remaining bandwidth except a requested bandwidth is allocated to each CM corresponding to a delay-sensitive service. This is not specifically limited. [0046] Optionally, if the CMTS allocates a part of the remaining bandwidth to each online CM whose service flow attribute is a delay-sensitive service, the CMTS may configure a remaining part of the remaining bandwidth as a contention-based bandwidth. The contention-based bandwidth is a bandwidth that is specially provided for a bandwidth contention request, and is shared by CMs sending bandwidth requests when each flow has an extra burst. 
That is, a problem that a burst of a CM corresponding to a delay-sensitive service causes a delay is resolved, and a burst status of a CM corresponding to a delay-insensitive service is also resolved. An extra bandwidth is provided, an upstream throughput is increased, and resource utilization is improved; See Also Fig. 2, S104, Para. [0047-0050]; Fig. 3a-3b, Para. [0051-0058]; Fig. 4a-4b, Para. [0059-0078]). Liu Para. [0041-0046] teach allocating a granted primary bandwidth and a granted deferrable bandwidth. The CMTS first allocates upstream bandwidth to each CM based on its service request (granted primary bandwidth) and then allocates at least part of the remaining bandwidth to delay-sensitive service flows (granted deferrable bandwidth), satisfying the limitation.

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine Prakash’s invention of “a system and method” for a bandwidth management system (Prakash §Abstract) with Liu’s invention of an “upstream bandwidth allocation method, apparatus, and system” (Liu Para. [0002]) because Liu’s invention provides a solution to reduce large upstream transmission delay caused by insufficient resource allocation (Liu Para. [0006]).

Yet, neither Prakash nor Liu expressly teaches “generate bandwidth demand predictions based on history of the plurality of data flows” and “using network topology and the bandwidth demand predictions.” However, Nag teaches “generate bandwidth demand predictions based on history of the plurality of data flows” (Fig. 9, Para. [0075-0077] - [0075] FIG. 9 is a flow chart depicting a process of analyzing a selected path according to one embodiment of the present invention. In embodiments including the bandwidth allocation screen of FIG. 7, FIG. 9 demonstrates what happens when the analyze button 790 is selected. At block 910, a schedule of bandwidth allocation is determined for the selected path.
At block 920, after the predicted schedule for the selected path has been determined, the schedule of increased bandwidth allocation is determined by overlaying the bandwidth requirements of the selected path on top of the bandwidth previously reserved on the nodes of the path. Finally, at block 940, the combined schedule of usage is optionally displayed to the administrator to allow the administrator to evaluate the impact of establishing a reservation protocol session over the selected path; See also Fig. 3, Para. [0059-0065]; Fig. 9, Para. [0075-0077]; Fig. 10, Para. [0078-0080]; Fig. 11, Para. [0082-0084]; Fig. 14, Para. [0098-0105]; Fig. 15, Para. [0106-0109]; Fig. 17, Para. [0123-0129]; Fig. 18, Para. [0130-0134]; Fig. 19, Para. [0135-0139]; Fig. 20, Para. [0140-0144]; Fig. 21, Para. [0145-0146]) The Examiner interprets “a schedule of bandwidth allocation” as bandwidth demand predictions. using network topology and the bandwidth demand predictions (Fig. 3, steps 330-380 and steps 330-410; Para. [0059-0065] - [0060] … In either case, a Bandwidth Allocation screen is presented to the user enabling him or her to select a pair of media aggregation managers and indicate the number of users capable of communicating via the selected media aggregation managers 330. Once the user indicates which media aggregation managers are to be allocated and how many users are expected/predicted to utilize the reservation protocol session being initialized, one or more potential paths through the communication network coupling the two media aggregation managers are displayed on the bandwidth allocation interface. The user may select one of the potential paths for analysis and, through the graphical user interface, indicate that the selected path is to be analyzed. [0061] At processing block 340, the selected path is analyzed to determine projected bandwidth utilization for each link of the selected path. 
Once analyzed, the administrator may select BW on Link 206 from the menu or the BW on Link screen may automatically appear after analysis has completed. [0062] On the BW on Link screen, the user may select a node within the network to view a schedule of usage. Specifically of interest to an administrator, would be those nodes affected by the predicted increase in usage. Responsive to the node selection, a GUI screen displays a schedule of usage for the selected node and optionally a projection indicating if the predicted usage increase is within an acceptable range 350. When the predicted usage is within an acceptable range, the media aggregation managers may be initialized. [0063] In one embodiment, assuming all nodes fall within an acceptable range, the user may initiate configuration of the routers on the selected path and allocates the bandwidth for the selected media aggregation managers 360 by selecting Bandwidth Allocation 203 from the menu; See also Fig. 3, Para. [0059-0065]; Fig. 9, Para. [0075-0077]; Fig. 10, Para. [0078-0080]; Fig. 11, Para. [0082-0084]; Fig. 14, Para. [0098-0105]; Fig. 15, Para. [0106-0109]; Fig. 17, Para. [0123-0129]; Fig. 18, Para. [0130-0134]; Fig. 19, Para. [0135-0139]; Fig. 20, Para. [0140-0144]; Fig. 21, Para. [0145-0146]) The examiner notes that Nag Fig. 3 and associated paragraphs teach bandwidth allocation (Fig. 3, step 360) informed by network topology (i.e., “path” described in Fig. 3, steps 330 and 340) and bandwidth demand predictions (i.e., “projected bandwidth utilization”). 
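Read together, the claim elements at issue (aggregate per-flow requests, predict demand from history, grant primary and deferrable aggregates, then allocate per flow) suggest a pipeline along the following lines. This is a hedged sketch of the general technique, not the applicant's claimed implementation or any reference's disclosure: the moving-average predictor, the serve-primary-first grant rule, the fixed link capacity standing in for topology, and the pro-rata per-flow split are all illustrative assumptions (the two-step grant loosely mirrors the Liu-style allocation the examiner cites).

```python
# Hypothetical sketch of the claimed pipeline: predict demand from history,
# grant aggregates against a link capacity, then split grants per flow.

def predict_demand(history, window=3):
    """Simple moving average over recent aggregate demand samples (assumption)."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def grant_aggregate(primary_req, deferrable_req, capacity):
    """Serve the primary aggregate first; deferrable gets remaining capacity.
    A fuller sketch would derive capacity from topology and reserve headroom
    using the demand prediction."""
    granted_primary = min(primary_req, capacity)
    granted_deferrable = min(deferrable_req, capacity - granted_primary)
    return granted_primary, granted_deferrable

def allocate_per_flow(flows, granted_primary, granted_deferrable):
    """Pro-rata split of each granted aggregate across per-flow request portions."""
    total_p = sum(f["primary"] for f in flows) or 1
    total_d = sum(f["deferrable"] for f in flows) or 1
    return [{"flow": f["flow"],
             "primary": granted_primary * f["primary"] / total_p,
             "deferrable": granted_deferrable * f["deferrable"] / total_d}
            for f in flows]

flows = [{"flow": "A", "primary": 60, "deferrable": 20},
         {"flow": "B", "primary": 20, "deferrable": 60}]
demand = predict_demand([100, 120, 140])          # 120.0 predicted aggregate demand
gp, gd = grant_aggregate(80, 80, capacity=100)    # (80, 20)
grants = allocate_per_flow(flows, gp, gd)
```

The split into `grant_aggregate` and `allocate_per_flow` mirrors the claim's distinction between determining aggregate grants and allocating a granted primary and deferrable bandwidth to each individual request.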
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to provide “generate bandwidth demand predictions based on history of the plurality of data flows” and “using network topology and the bandwidth demand predictions,” as taught by Nag, in the combined system of Prakash/Liu, so that it would provide apparatus and methods for “initializing, allocating and de-allocating reservation protocol sessions between a plurality of network devices in a communication network,” which facilitate “allocating and de-allocating bandwidth and establishing reservation protocol sessions between network devices,” thereby providing means for the network administrator “to analyze various repercussions of increasing/decreasing demand over various paths through a communication network and viewing the bandwidth effects at all nodes on the path for a schedule that may vary based on usage deviations at various times of the day, week, month or year” (Nag Para. [0037]).

Regarding Claim 8, Prakash teaches a computer-implemented method comprising (Fig. 14, Para. [0328, 0333-0343]; See also Fig. 3A, Para. [0090-0099]; Fig. 3B, Para. [0100-0134]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]): receiving a bandwidth request for each of a plurality of data flows of a wide area network (WAN) (Fig. 3B, 3401; Para. [0106]; Para. [0341]; See also Fig. 3A, Para. [0090-0099]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]), aggregating the bandwidth requests for the plurality of data flows into an aggregate bandwidth request, the aggregate bandwidth request indicating a primary aggregate bandwidth request portion and a deferrable aggregate bandwidth request portion (Fig.
3B, 3301; Para. [0107-0108] ; See also Fig. 3A, Para. [0090-0099]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]); determining, based on at least the aggregate bandwidth request, a granted primary aggregate bandwidth and a granted deferrable aggregate bandwidth (Fig. 3B, 3302; Para. [0109-0112]; See also Fig. 3A, Para. [0090-0099]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320] ); Yet, Prakash does not expressly teach each bandwidth request indicating a primary bandwidth request portion and a deferrable bandwidth request portion for each of the plurality of data flows and based on at least the granted primary aggregate bandwidth and the granted deferrable aggregate bandwidth, allocating, for each bandwidth request for the plurality of data flows, a granted primary bandwidth and a granted deferrable bandwidth. However, Liu teaches each bandwidth request indicating a primary bandwidth request portion and a deferrable bandwidth request portion for each of the plurality of data flows (Fig. 2, S101; Para. [0036-0040]; See Also Fig. 3a-3b, Para. [0051-0058]; Fig. 4a-4b, Para. [0059-0078]); and based on at least the granted primary aggregate bandwidth and the granted deferrable aggregate bandwidth, allocating, for each bandwidth request for the plurality of data flows, a granted primary bandwidth and a granted deferrable bandwidth (Fig. 2, S102-S103; Para. [0041-0046]; See Also Fig. 2, S104, Para. [0047-0050]; Fig. 3a-3b, Para. [0051-0058]; Fig. 4a-4b, Para. [0059-0078] ). 
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine Prakash’s invention of “a system and method” for a bandwidth management system (Prakash §Abstract) with Liu’s invention of an “upstream bandwidth allocation method, apparatus, and system” (Liu Para. [0002]) because Liu’s invention provides a solution to reduce large upstream transmission delay caused by insufficient resource allocation (Liu Para. [0006]). Yet, neither Prakash nor Liu expressly teaches “generate bandwidth demand predictions based on history of the plurality of data flows” and “using network topology and the bandwidth demand predictions.” However, Nag teaches “generate bandwidth demand predictions based on history of the plurality of data flows” (Fig. 9, Para. [0075-0077]; See also Fig. 3, Para. [0059-0065]; Fig. 9, Para. [0075-0077]; Fig. 10, Para. [0078-0080]; Fig. 11, Para. [0082-0084]; Fig. 14, Para. [0098-0105]; Fig. 15, Para. [0106-0109]; Fig. 17, Para. [0123-0129]; Fig. 18, Para. [0130-0134]; Fig. 19, Para. [0135-0139]; Fig. 20, Para. [0140-0144]; Fig. 21, Para. [0145-0146]) and “using network topology and the bandwidth demand predictions” (Fig. 3, steps 330-380 and steps 330-410; Para. [0059-0065]; See also Fig. 3, Para. [0059-0065]; Fig. 9, Para. [0075-0077]; Fig. 10, Para. [0078-0080]; Fig. 11, Para. [0082-0084]; Fig. 14, Para. [0098-0105]; Fig. 15, Para. [0106-0109]; Fig. 17, Para. [0123-0129]; Fig. 18, Para. [0130-0134]; Fig. 19, Para. [0135-0139]; Fig. 20, Para. [0140-0144]; Fig. 21, Para.
[0145-0146])

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to provide “generate bandwidth demand predictions based on history of the plurality of data flows” and “using network topology and the bandwidth demand predictions,” as taught by Nag, in the combined system of Prakash/Liu, so that it would provide apparatus and methods for “initializing, allocating and de-allocating reservation protocol sessions between a plurality of network devices in a communication network,” which facilitate “allocating and de-allocating bandwidth and establishing reservation protocol sessions between network devices,” thereby providing means for the network administrator “to analyze various repercussions of increasing/decreasing demand over various paths through a communication network and viewing the bandwidth effects at all nodes on the path for a schedule that may vary based on usage deviations at various times of the day, week, month or year” (Nag Para. [0037]).

Regarding Claim 15, Prakash teaches a computer storage device having computer-executable instructions stored thereon, which, on execution by a computer, cause the computer to perform operations comprising: (Fig. 14, Para. [0328, 0333-0343]; See also Fig. 3A, Para. [0090-0099]; Fig. 3B, Para. [0100-0134]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]): receiving a bandwidth request for each of a plurality of data flows of a wide area network (WAN) (Fig. 3B, 3401; Para. [0106]; Para. [0341]; See also Fig. 3A, Para. [0090-0099]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para.
[0315-0320]), aggregating the bandwidth requests for the plurality of data flows into an aggregate bandwidth request, the aggregate bandwidth request indicating a primary aggregate bandwidth request portion and a deferrable aggregate bandwidth request portion (Fig. 3B, 3301; Para. [0107-0108] ; See also Fig. 3A, Para. [0090-0099]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]); determining, based on at least the aggregate bandwidth request, a granted primary aggregate bandwidth and a granted deferrable aggregate bandwidth (Fig. 3B, 3302; Para. [0109-0112]; See also Fig. 3A, Para. [0090-0099]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320] ); Yet, Prakash does not expressly teach each bandwidth request indicating a primary bandwidth request portion and a deferrable bandwidth request portion for each of the plurality of data flows and based on at least the granted primary aggregate bandwidth and the granted deferrable aggregate bandwidth, allocating, for each bandwidth request for the plurality of data flows, a granted primary bandwidth and a granted deferrable bandwidth. However, Liu teaches each bandwidth request indicating a primary bandwidth request portion and a deferrable bandwidth request portion for each of the plurality of data flows (Fig. 2, S101; Para. [0036-0040]; See Also Fig. 3a-3b, Para. [0051-0058]; Fig. 4a-4b, Para. [0059-0078]); and based on at least the granted primary aggregate bandwidth and the granted deferrable aggregate bandwidth, allocating, for each bandwidth request for the plurality of data flows, a granted primary bandwidth and a granted deferrable bandwidth (Fig. 2, S102-S103; Para. [0041-0046]; See Also Fig. 2, S104, Para. [0047-0050]; Fig. 3a-3b, Para. [0051-0058]; Fig. 4a-4b, Para. 
[0059-0078]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine Prakash’s invention of “a system and method” for a bandwidth management system (Prakash §Abstract) with Liu’s invention of an “upstream bandwidth allocation method, apparatus, and system” (Liu Para. [0002]) because Liu’s invention provides a solution to reduce large upstream transmission delay caused by insufficient resource allocation (Liu Para. [0006]). Yet, neither Prakash nor Liu expressly teaches generating bandwidth demand predictions based on the history of the plurality of data flows and using network topology and the bandwidth demand predictions. However, Nag teaches generating bandwidth demand predictions based on the history of the plurality of data flows (Fig. 9, Para. [0075-0077]; See also Fig. 3, Para. [0059-0065]; Fig. 10, Para. [0078-0080]; Fig. 11, Para. [0082-0084]; Fig. 14, Para. [0098-0105]; Fig. 15, Para. [0106-0109]; Fig. 17, Para. [0123-0129]; Fig. 18, Para. [0130-0134]; Fig. 19, Para. [0135-0139]; Fig. 20, Para. [0140-0144]; Fig. 21, Para. [0145-0146]) and using network topology and the bandwidth demand predictions (Fig. 3, steps 330-380 and steps 330-410; Para. [0059-0065]; See also Fig. 9, Para. [0075-0077]; Fig. 10, Para. [0078-0080]; Fig. 11, Para. [0082-0084]; Fig. 14, Para. [0098-0105]; Fig. 15, Para. [0106-0109]; Fig. 17, Para. [0123-0129]; Fig. 18, Para. [0130-0134]; Fig. 19, Para. [0135-0139]; Fig. 20, Para. [0140-0144]; Fig. 21, Para. [0145-0146]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to generate bandwidth demand predictions based on the history of the plurality of data flows and to use network topology and the bandwidth demand predictions, as taught by Nag, in the combined system of Prakash/Liu, because doing so would provide apparatus and methods for “initializing, allocating and de-allocating reservation protocol sessions between a plurality of network devices in a communication network,” which facilitate “allocating and de-allocating bandwidth and establishing reservation protocol sessions between network devices,” thereby providing means for the network administrator “to analyze various repercussions of increasing/decreasing demand over various paths through a communication network and viewing the bandwidth effects at all nodes on the path for a schedule that may vary based on usage deviations at various times of the day, week, month or year” (Nag Para. [0037]). Regarding Claims 2, 9, and 16, Prakash in view of Liu and Nag teaches Claims 1, 8, and 15. Prakash further teaches transmit data through the WAN (Para. [0341]; Para. [0100] - FIG. 3B illustrates another embodiment of the BW management system 125. The BW management system 125 shown in FIG. 3B illustrates an example of the communication and interaction between the BW manager module 320, the QOS manager module 330, and the WM module 340. In various embodiments, communications are sent from the WM module 340 to the QOS manager module 330, from the QOS manager module 330 to the BW manager module 320, and then from the BW manager module 320 to the QOS manager module 330, from the QOS manager module 330 to the WM module 340, as packets are sent from a sending host.
The BW management system 125 actively manages the collection of flows associated with a universal traffic class by controlling the rate that data packets are transmitted for the individual flows in the collection of flows. In example embodiments, the communications between the modules 340, 330 and 320 include a WM FLOW REQUEST 3401, a QOS COLLECTION REQUEST 3301 (in response to the WM FLOW REQUEST 3401), a BW COLLECTION RESPONSE 3302 (in response to the QOS COLLECTION REQUEST 3301), and a QOS FLOW RESPONSE 3402 (in response to the WM FLOW REQUEST 3302); See also Fig. 3A, Para. [0090-0099]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]). Regarding Claims 3, 10, and 17, Prakash in view of Liu and Nag teaches Claims 1, 8, and 15. Prakash further teaches wherein the broker backend aggregates the bandwidth requests for the plurality of data flows into the aggregate bandwidth request (Fig. 11E, step 1142; Para. [0311] - FIG. 11E is a flow diagram 1140 illustrating an example method for determining a bandwidth request for a collection of flows, according to one embodiment. The flow diagram 1140 includes operations 1141-1143. At operation 1141, receiving, by the QOS manager module 330, the bandwidth requests indicating the bandwidth amounts. In various embodiments, the WM FLOW REQUEST 3401 represents the bandwidth request. At operation 1142, aggregating the bandwidth amounts for the individual flows in the collection of individual flows associated with the traffic class to create an aggregated bandwidth amount. At operation 114, sending, to the BW manager module 320, a bandwidth request indicating the aggregated bandwidth amount. In various embodiments, the QOS COLLECTION REQUEST 3301 represents the bandwidth request indicating the aggregated bandwidth amount; See also Fig. 3A, Para. [0090-0099]; Fig. 3b, Para. [0100-0134]; Fig. 3C, Para. 
[0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]); wherein the broker backend allocates the granted primary bandwidth and the granted deferrable bandwidth (Fig. 11E, step 1142; Para. [0311] - FIG. 11E is a flow diagram 1140 illustrating an example method for determining a bandwidth request for a collection of flows, according to one embodiment. The flow diagram 1140 includes operations 1141-1143. At operation 1141, receiving, by the QOS manager module 330, the bandwidth requests indicating the bandwidth amounts. In various embodiments, the WM FLOW REQUEST 3401 represents the bandwidth request. At operation 1142, aggregating the bandwidth amounts for the individual flows in the collection of individual flows associated with the traffic class to create an aggregated bandwidth amount. At operation 114, sending, to the BW manager module 320, a bandwidth request indicating the aggregated bandwidth amount. In various embodiments, the QOS COLLECTION REQUEST 3301 represents the bandwidth request indicating the aggregated bandwidth amount; See also Fig. 3A, Para. [0090-0099]; Fig. 3b, Para. [0100-0134]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]); wherein an admission controller determines the granted primary aggregate bandwidth and the granted deferrable aggregate bandwidth (Fig. 11F, step 1152; Para. [0312] - FIG. 11F is a flow diagram 1150 illustrating an example method for determining a bandwidth response per a collection of flows, according to one embodiment. The flow diagram 1150 includes operations 1151-1154. At operation 1151, receiving, by the BW manager module 320, the bandwidth request indicating the aggregate bandwidth amount. 
For various embodiments, the QOS COLLECTION REQUEST 3301 represents the bandwidth request indicating the aggregate bandwidth amount. At operation 1152, determining available bandwidth for the collection of individual flows associated with the traffic class based on the bandwidth utilization of the collection of individual flows associated with the traffic class and the bandwidth limits assigned to the root node and the plurality of nodes in the HBT. At operation 1153, allocating bandwidth to the collection of individual flows associated with the traffic class based on the available bandwidth for the collection of individual flows associated with the traffic class. At operation 1154, sending, to the QOS manager module 330, a bandwidth response indicating the bandwidth allocated. For various embodiments, the QOS FLOW RESPONSE 3402 represents the bandwidth response indicating the bandwidth allocated; See also Fig. 3A, Para. [0090-0099]; Fig. 3b, Para. [0100-0134]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]); and (wherein the instructions are further operative to)/ (wherein the method further comprises)/(wherein the operations further comprise): transmit(ing), by the broker agent, to the broker backend, the bandwidth requests for the plurality of data flows (Fig. 3B, 3401; Para. [0106] - The WM module 340 sends WM FLOW REQUESTS 3401, representing bandwidth requests per flow, to the QOS manager module 330.; See also Fig. 3A, Para. [0090-0099]; Fig. 3b, Para. [0100-0134]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]); transmit(ing), by the broker backend, to the admission controller, the aggregate bandwidth request (Fig. 3B, 3301; Para. 
[0109] - The BW logic module 360 processes the QOS COLLECTION REQUEST 3301 received by the BW manager module system 320. In various embodiments, multiple communications may be sent between the BW managers (e.g., 321-329) associated with a HBT before the BW manager module 320 sends a BW COLLECTION RESPONSE 3302 in response to the QOS COLLECTION REQUEST 3301; See also Fig. 3A, Para. [0090-0099]; Fig. 3b, Para. [0100-0134]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]); transmit(ing), by the admission controller, to the broker backend, an indication of the granted primary aggregate bandwidth and the granted deferrable aggregate bandwidth (Fig. 3B, 3302; Para. [0111] - The QOS COLLECTION REQUEST 3301 and the BW COLLECTION RESPONSE 3302 represent communications associated with a collection of flows associated with traffic classes (or traffic subclasses of the universal traffic class). The BW logic module 360 determines the amount of bandwidth to allocate in the BW COLLECTION RESPONSE 3302 and sends the BW COLLECTION RESPONSE 3302 to the QOS manager module 330. The amount of bandwidth allocated by the BW manager module 320 to the QOS manager module 330 is referred to as the ALLOCATED BANDWIDTH (B). The BW COLLECTION RESPONSE 3302 specifying the ALLOCATED BANDWIDTH (B) received by the QOS manager module 330 is processed by the QOS logic module 361 based on application priority information. In various embodiments, the user configures or assigns the application priorities, for example, high priority, medium priority or low priority. Generally, the flows associated with the higher priority applications are allocated a larger share of the ALLOCATED BANDWIDTH (B). The QOS logic module 361 determines the allocation for the application priority shares and the flow shares. 
In various embodiments, the portion of the ALLOCATED BANDWIDTH (B) assigned to each of the application priority classes may be referred to as the percentage share of the ALLOCATED BANDWIDTH (B). The allocation of percentage shares will be discussed further with FIG. 7C.; See also Fig. 3A, Para. [0090-0099]; Fig. 3b, Para. [0100-0134]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]); transmit(ing), by the broker backend, to the broker agent, an indication of the granted primary bandwidth and the granted deferrable bandwidth (Fig. 3B, 3402; Para. [0113] - The QOS manager module 330 provides a QOS FLOW RESPONSE 3402 to the WM module 340 for the flows from the collection of flows associated with the traffic class. The QOS FLOW RESPONSES 3042 specify the bandwidth allocated to the flows in a collection of flows associated with a traffic class. The allocated bandwidth for the flow shares may be referred to as QOS ALLOCATE, as shown in FIG. 3B; See also Fig. 3A, Para. [0090-0099]; Fig. 3b, Para. [0100-0134]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]). Examiner’s Notes: The examiner interprets “traffic class” found throughout Prakash as primary and deferrable bandwidths. Figure 1 provides further explanation used to map the system components in Prakash to Applicant’s invention. Figure 1: Fig. 3B from Prakash (US 2016/0080207). Annotations provide the mappings between Applicant's invention and the reference. Regarding Claims 4, 11, and 18, Prakash in view of Liu and Nag teaches Claims 3, 10, and 17. Prakash further teaches receive/receiving, by the broker agent, from an application, (Fig. 3A, 311; Para. [0086] - FIG.
3A illustrates the BW management system 125, according to example embodiments. The BW management system 125 is implemented in the vTCP module 109 in various embodiments…The BW management system 125 accepts the data packets on behalf of the receiving host (e.g., data packets to a TCP receiver at 312)…; See also Fig. 3A, Para. [0090-0099]; Fig. 3b, Para. [0100-0134]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]); The examiner interprets data packets transmitted from a TCP sender as traffic received from an application since the TCP sender operates on behalf of the application to deliver its data across the network. based on at least the indication of deferrable traffic, create/creating, by the broker agent, the bandwidth requests for the plurality of data flows (Fig. 3B, 3401; Para. [0106-0107] - The WM module 340 sends WM FLOW REQUESTS 3401, representing bandwidth requests per flow, to the QOS manager module 330. The flow logic module 362 determines the amount of bandwidth to be requested in the WM FLOW REQUEST 3401. In various embodiments, the amount of bandwidth requested in a WM FLOW REQUEST 3401 equals W.sub.MAX/RTT.sub.2 for an individual flow. In various embodiments, computing the bandwidth request of W.sub.MAX/RTT.sub.2 for the first individual flow is based on an estimated rate at which a receiving host is receiving data packets from the BW management system 125 for an individual flow. [0107] The formula for W.sub.MAX/RTT.sub.2 is described in further detail with respect to FIGS. 10A-10F. The QOS logic module 361 processes the WM FLOW REQUEST 3401 and computes a QOS COLLECTION REQUEST 3301 representing an aggregate bandwidth amount for a collection of flows associated with a traffic class. 
The QOS logic module 361 aggregates the total request size (referred to as R) for an application priority class across all application priority classes which are associated with a collection flows associated with a traffic class. The aggregate total request size R represents the aggregate bandwidth amount requested or indicated in the QOS COLLECTION REQUEST 3301. In some embodiments, the QOS logic module 361 tracks the bandwidth amount for the WM FLOW REQUESTS 3401 by each application priority class (for a collection of flows associated with a traffic class) and then aggregates the bandwidth requested per flow across all application priority classes (for the collection of flows associated with a traffic class) to compute R; See also Fig. 3A, Para. [0090-0099]; Fig. 3b, Para. [0100-0134]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]); The examiner interprets “The QOS logic module 361 processes the WM FLOW REQUEST 3401 and computes a QOS COLLECTION REQUEST 3301 representing an aggregate bandwidth amount for a collection of flows associated with a traffic class” as the WM FLOW REQUEST contained information regarding traffic class (i.e., deferrable bandwidth). transmit/transmitting, by the broker agent, to the application, an indication of a granted total bandwidth for the first data flow (Fig. 3A, 312; Para. [0086] - FIG. 3A illustrates the BW management system 125, according to example embodiments. The BW management system 125 is implemented in the vTCP module 109 in various embodiments. In alternative embodiments, the BW management system 125 may reside in other types of accelerated protocol modules. For example, the protocol acceleration module may not be limited to TCP protocols or used in a virtualized environment. As shown in FIGS. 
2A-2E, the vTCP module 109 may reside within a VMM hypervisor 240, VM (e.g., VM 210, VM 220, or VM 230), host operating system 101, guest operating system 212 or 222, or VM 235 having the vTCP module 109 deployed as a server. In some embodiments, the BW management system 125 is configured to accept data packets from a sending host (e.g., data packets from a TCP sender at 311 in FIG. 3A). The BW management system 125 accepts the data packets on behalf of the receiving host (e.g., data packets to a TCP receiver at 312). The data packets accepted by the BW management system 125 may be temporarily stored by the BW management system 125 before the BW management system 125 forwards the data packets to the receiving host; Fig. 11H, step 1172; Para. [0314] - FIG. 11H is a flow diagram 1170 illustrating an example method for generating a window size for an individual flow, according to one embodiment. The flow diagram 1130 includes operations 1171-1172. At operation 1171, receiving, by the WM module 340, the bandwidth responses indicating the allocated flow shares. In various embodiments, the QOS FLOW RESPONSE 3402 represents the bandwidth responses indicating the allocated flow shares. At operation 1172, generating, for the first individual flow, the window size W.sub.A to be advertised to the sending host. In various embodiments, W.sub.A represents the window size to be advertised to the sending host; See also Fig. 3A, Para. [0090-0099]; Fig. 3b, Para. [0100-0134]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]), The examiner interprets the sending host in Para. [0314] as the application. 
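The bandwidth/window relationship relied on in the mapping above (a per-flow request computed from a maximum window and round-trip time, and an advertised window derived from the allocated rate) can be illustrated with a minimal Python sketch. All names are hypothetical and illustrative only; this is the standard bandwidth-delay relationship, not code from Prakash, Liu, or Nag.

```python
# Minimal sketch of the request/window relationship described above
# (hypothetical names; illustrative only).

def requested_bandwidth(w_max_bytes: float, rtt_s: float) -> float:
    """Per-flow bandwidth request in bytes/second, analogous to W_MAX / RTT."""
    return w_max_bytes / rtt_s

def advertised_window(allocated_bw: float, rtt_s: float) -> float:
    """Window size (bytes) that limits the sender to the allocated rate."""
    return allocated_bw * rtt_s

# A 64 KiB maximum window over a 100 ms RTT requests 640 KiB/s;
# granting half that rate corresponds to a 32 KiB advertised window.
req = requested_bandwidth(65536, 0.1)   # 655360.0 bytes/s
win = advertised_window(req / 2, 0.1)   # 32768.0 bytes
```

Because the advertised window caps the data in flight per round trip, shrinking or growing it is one mechanism by which an allocated bandwidth can be enforced on a sender.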
Yet, Prakash does not expressly teach an indication of deferrable traffic for a first data flow of the plurality of data flows, the granted total bandwidth for the first data flow including, for the first data flow, a sum of the granted primary bandwidth and the granted deferrable bandwidth, and assign/assigning, by the application, to the first data flow, the granted total bandwidth for the first data flow. However, Liu teaches an indication of deferrable traffic for a first data flow of the plurality of data flows (Fig. 2, S101; Para. [0036-0040]; See Also Fig. 3a-3b, Para. [0051-0058]; Fig. 4a-4b, Para. [0059-0078]); the granted total bandwidth for the first data flow including, for the first data flow, a sum of the granted primary bandwidth and the granted deferrable bandwidth (Fig. 2, S102-S103; Para. [0041-0046]; See Also Fig. 2, S104, Para. [0047-0050]; Fig. 3a-3b, Para. [0051-0058]; Fig. 4a-4b, Para. [0059-0078]); and assign/assigning, by the application, to the first data flow, the granted total bandwidth for the first data flow (Fig. 2, S104; Para. [0047-0050] - [0047] S1004: The CMTS informs each online CM of an upstream bandwidth allocation result, so that each online CM performs upstream data transmission according to the allocated bandwidth. [0048] In this embodiment, after the remaining bandwidth is allocated in the foregoing manner, each CM is informed of the bandwidth allocation result by means of an upstream bandwidth allocation map (MAP), and each CM performs upstream data transmission according to an allocated bandwidth. 
[0049] In the upstream bandwidth allocation method provided in this embodiment, it is obtained whether a service flow attribute of an online CM is a delay-sensitive service or a delay-insensitive service, a remaining bandwidth is obtained after allocation is performed according to a preset bandwidth and a received service request of the CM, at least a part of the remaining bandwidth is allocated to each CM corresponding to a delay-sensitive service, and each online CM is informed of an allocation result. The remaining bandwidth is actively allocated to a CM corresponding to a delay-sensitive service, so that upstream throughputs of some CMs are increased, and upstream transmission delays are reduced; See Also Fig. 3a-3b, Para. [0051-0058]; Fig. 4a-4b, Para. [0059-0078]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine Prakash’s invention of “a system and method” for a bandwidth management system (Prakash §Abstract) with Liu’s invention of an “upstream bandwidth allocation method, apparatus, and system” (Liu Para. [0002]) because Liu’s invention provides a solution to reduce large upstream transmission delay caused by insufficient resource allocation (Liu Para. [0006]). Regarding Claims 5, 12, and 19, Prakash in view of Liu and Nag teaches Claims 3, 10, and 17. Prakash further teaches maintain/maintaining, by the broker backend, a first pool for the granted primary bandwidth and a second pool for the granted deferrable bandwidth (Fig. 3B, 330; Para. [0109-0112] - [0111] The QOS COLLECTION REQUEST 3301 and the BW COLLECTION RESPONSE 3302 represent communications associated with a collection of flows associated with traffic classes (or traffic subclasses of the universal traffic class). The BW logic module 360 determines the amount of bandwidth to allocate in the BW COLLECTION RESPONSE 3302 and sends the BW COLLECTION RESPONSE 3302 to the QOS manager module 330.
The amount of bandwidth allocated by the BW manager module 320 to the QOS manager module 330 is referred to as the ALLOCATED BANDWIDTH (B). The BW COLLECTION RESPONSE 3302 specifying the ALLOCATED BANDWIDTH (B) received by the QOS manager module 330 is processed by the QOS logic module 361 based on application priority information. In various embodiments, the user configures or assigns the application priorities, for example, high priority, medium priority or low priority. Generally, the flows associated with the higher priority applications are allocated a larger share of the ALLOCATED BANDWIDTH (B). The QOS logic module 361 determines the allocation for the application priority shares and the flow shares. In various embodiments, the portion of the ALLOCATED BANDWIDTH (B) assigned to each of the application priority classes may be referred to as the percentage share of the ALLOCATED BANDWIDTH (B). The allocation of percentage shares will be discussed further with FIG. 7C; See also Fig. 3A, Para. [0090-0099]; Fig. 3b, Para. [0100-0134]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]; Fig. 7A-7D, Para. [0117, 0178, 0229-0266]); wherein allocating the granted primary bandwidth comprises allocating the granted primary bandwidth from the first pool (Fig. 3B, 330; Para. [0109-0112] - [0112] In various embodiments, the percentage share of the allocated bandwidth limit for each priority application class (also referred to as the application priority share) is a dynamic value that may be modified as new data packets are received by the BW management system 125. In various embodiments, the QOS logic module 361 is responsible for allocating flow shares to the individual flows of the application priority shares. In some embodiments, the QOS FLOW RESPONSE 3402 specifies a flow share having a bandwidth amount of QOS ALLOCATE. 
The various factors in determining the amount of bandwidth to allocate to each flow share is discussed further in conjunction with FIG. 3E; See also Fig. 3A, Para. [0090-0099]; Fig. 3b, Para. [0100-0134]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]; Fig. 7A-7D, Para. [0117, 0178, 0229-0266]); and wherein allocating the granted deferrable bandwidth comprises allocating the granted deferrable bandwidth from the second pool (Fig. 3B, 330; Para. [0109-0112] - [0112] In various embodiments, the percentage share of the allocated bandwidth limit for each priority application class (also referred to as the application priority share) is a dynamic value that may be modified as new data packets are received by the BW management system 125. In various embodiments, the QOS logic module 361 is responsible for allocating flow shares to the individual flows of the application priority shares. In some embodiments, the QOS FLOW RESPONSE 3402 specifies a flow share having a bandwidth amount of QOS ALLOCATE. The various factors in determining the amount of bandwidth to allocate to each flow share is discussed further in conjunction with FIG. 3E; See also Fig. 3A, Para. [0090-0099]; Fig. 3b, Para. [0100-0134]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]; Fig. 7A-7D, Para. [0117, 0178, 0229-0266]). Prakash Para. [0109-0112] teach that the QoS Manager Module 330 (broker backend) allocates the allocated bandwidth B into shares for different application priority classes. 
The division of B into percentage shares for high, medium, and low-priority traffic corresponds to maintaining separate sets (i.e., pools) of bandwidth resources, where higher-priority allocations are interpreted as granted primary bandwidth from a first pool and lower-priority/dynamic allocations interpreted as granted deferrable bandwidth from a second pool. The QoS Flow Response 3402 further specifies individual flow shares drawn from these pools. Regarding Claims 6, 13, and 20, Prakash in view of Liu and Nag teaches Claims 1, 8, and 15. Prakash further teaches wherein each of the bandwidth requests, each of the granted primary bandwidths, and each of the granted deferrable bandwidths is for a specific time period (Fig. 3B, 3401; Para. [0110] - In various examples, the BW logic module 360 is responsible for controlling the bandwidth utilization of the collection of individual flows associated with the universal traffic class by controlling the bandwidth utilization of the collection of individual flows associated with each of the traffic classes such that each of the traffic classes conforms to the bandwidth limits assigned to the node representing the traffic class. In various embodiments the bandwidth amount specified in a WM FLOW REQUEST 3401 represents the current utilization of an individual flow at a specific point in time, and the aggregate bandwidth request amount specified in the QOS COLLECTION REQUEST 3301 represents the current utilization of a collection of flows for a traffic class at a specific point in time, where the traffic class represents a node having assigned bandwidth limits; See also Para. [0143, 0148, 0153, 0164, 0254, 0287, 0292, 0299, 0307]; Fig. 3A, Para. [0090-0099]; Fig. 3b, Para. [0100-0134]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]). 
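The two-pool scheme mapped for Claims 5, 12, and 19 above, with per-period grants for primary and deferrable portions, can be illustrated with a short Python sketch. All names are hypothetical; this is one possible reading of the claimed mechanism, not code from Prakash, Liu, or Nag.

```python
# Illustrative sketch (hypothetical names): per-flow requests carry a primary
# and a deferrable portion; grants for one time period are drawn from two
# separate pools and scaled down proportionally when a pool is oversubscribed.

def allocate(requests, primary_pool, deferrable_pool):
    """requests: {flow_id: (primary_req, deferrable_req)} in Mbps.
    Returns {flow_id: (granted_primary, granted_deferrable)}."""
    # Aggregate the per-flow portions into an aggregate request per pool.
    total_primary = sum(p for p, _ in requests.values())
    total_deferrable = sum(d for _, d in requests.values())
    # Scale factor is 1.0 when the pool covers the aggregate, less otherwise.
    scale_p = min(1.0, primary_pool / total_primary) if total_primary else 0.0
    scale_d = min(1.0, deferrable_pool / total_deferrable) if total_deferrable else 0.0
    return {f: (p * scale_p, d * scale_d) for f, (p, d) in requests.items()}

grants = allocate({"flow1": (40, 20), "flow2": (60, 20)},
                  primary_pool=50, deferrable_pool=40)
# The primary pool (50) covers half the aggregate primary request (100),
# so primary grants are halved; the deferrable pool covers its requests in full.
```

Under this reading, deferrable traffic simply competes in a separate pool whose grants may be shifted to later periods, while primary traffic is admitted against its own budget.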
wherein each of the bandwidth requests, each of the granted primary bandwidths, and each of the granted deferrable bandwidths is for a specific time period (Para. [0148] - In example embodiments, these communications related to bandwidth requests (for individual flows and collections of flows) represent the requested bandwidth or allocated bandwidth at a particular point in time. In one embodiment, the bandwidth requests change as each of the flows in the collection of flows associated with traffic sub-class changes. In some embodiments, the responses (based on the requests) are generated almost instantaneously. Once the communications are received by the modules 320, 330 and 340, their respective logic modules 360, 361, and 362 performs processes the requests and generates responses. In various example embodiments, the flow logic module 362 associated with the WM module 340 computes the WM FLOW REQUESTS 3401. The computations performed by the QOS logic module 361 and the BW logic module 360 are used to generate the QOS FLOW RESPONSES 3402, in response to the WM FLOW REQUESTS 3401. The flow logic module 362 receives the QOS FLOW REPONSES 3402, which may be a value equal to, less than, or greater than the WM FLOW REQUESTS 3401. The QOS ALLOCATE, which represents the bandwidth allocated to the individual flows from the QOS RESPONSES 3402 is used by the flow logic module 362 to compute the window size W.sub.A advertised to the sending host; See also Para. [0143, 0148, 0153, 0164, 0254, 0287, 0292, 0299, 0307]; Fig. 3A, Para. [0090-0099]; Fig. 3b, Para. [0100-0134]; Fig. 3C, Para. [0135-0169]; Fig. 3D, Para. [0170-0176]; Fig. 3E, Para. [0177-0181]; Fig. 3F, Para. [0182-0187]; Fig. 11A-11H, Para. [0303-0314]; Fig. 12, Para. [0315-0320]). Prakash Para. [0148] teaches that QoS Flow Responses specify the allocated bandwidth to individual flows, which is interpreted as the granted primary and granted deferrable bandwidths. 
The BW Management System computes a window size based on the receiving host’s acceptance rate and advertises the window size to the TCP sender to control its transmission rate. Because the window size governs how much data can be sent during a given interval, the granted primary and deferrable bandwidths are applied for a specific time period defined by the validity of the advertised window size. Regarding Claims 7 and 14, Prakash in view of Liu and Nag teaches Claims 1 and 8. Yet, Prakash does not expressly teach wherein each bandwidth request indicates its primary bandwidth request portion and its deferrable bandwidth request portion separately. However, Liu teaches wherein each bandwidth request includes a value for the primary bandwidth request portion and a separate value for the deferrable bandwidth request portion (Fig. 2, S101; Para. [0036-0040]; See Also Fig. 3a-3b, Para. [0051-0058]; Fig. 4a-4b, Para. [0059-0078]). Liu Para. [0036-0040] teaches that each service flow attribute specifies delay-sensitive (primary) and delay-insensitive (deferrable) services. Interpreted as a bandwidth request, this indicates a primary portion and a deferrable portion separately. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine Prakash’s invention of “a system and method” for a bandwidth management system (Prakash §Abstract) with Liu’s invention of an “upstream bandwidth allocation method, apparatus, and system” (Liu Para. [0002]) because Liu’s invention provides a solution to reduce large upstream transmission delay caused by insufficient resource allocation (Liu Para. [0006]). Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAENITA ANN FENNER whose telephone number is (571) 270-0880. The examiner can normally be reached 8:00 - 5:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Marcus Smith, can be reached at (571) 270-1096. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /R.A.F./Examiner, Art Unit 2468 /Thomas R Cairns/Primary Examiner, Art Unit 2468

Prosecution Timeline

May 23, 2023
Application Filed
Oct 03, 2025
Non-Final Rejection — §103
Dec 02, 2025
Interview Requested
Dec 08, 2025
Examiner Interview Summary
Dec 08, 2025
Applicant Interview (Telephonic)
Jan 06, 2026
Response Filed
Feb 25, 2026
Final Rejection — §103
Mar 26, 2026
Applicant Interview (Telephonic)
Mar 26, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604165
COMMUNICATION CONTROL METHOD AND COMMUNICATION DEVICE
2y 5m to grant Granted Apr 14, 2026
Patent 12592751
METHOD AND DEVICE FOR RECEIVING FEEDBACK FRAME IN WIRELESS LAN SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12574156
DYNAMICALLY CONFIGURING RETRY POLICIES OF NETWORK FUNCTIONS
2y 5m to grant Granted Mar 10, 2026
Patent 12542638
SOUNDING REFERENCE SIGNAL RESOURCE ALLOCATION AND OPPORTUNISTIC BEAMFORMING PREPARATION FOR A USER DEVICE
2y 5m to grant Granted Feb 03, 2026
Patent 12532347
PRIORITY MANAGEMENT FOR A SIDELINK
2y 5m to grant Granted Jan 20, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
83%
Grant Probability
90%
With Interview (+6.3%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 24 resolved cases by this examiner. Grant probability derived from career allow rate.
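The dashboard's headline figures can be reproduced from the stated career data. The formulas below are assumptions about how the tool derives them (the note only says the probability comes from the career allow rate):

```python
# Assumed derivation of the projection figures from the examiner's record.
granted, resolved = 20, 24            # "20 granted / 24 resolved"
allow_rate = granted / resolved       # ~0.833 -> shown as "83% Grant Probability"

interview_lift = 0.063                # "+6.3% Interview Lift"
with_interview = allow_rate + interview_lift  # ~0.896 -> shown as "90% With Interview"
```

If the lift were applied multiplicatively rather than added, the "with interview" figure would differ slightly, so the additive form is only a plausible reading of the displayed numbers.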
