Prosecution Insights
Last updated: April 19, 2026
Application No. 17/961,347

DYNAMIC TRANSMISSION BANDWIDTH CONTRACTS BETWEEN HUB AND SPOKE DEVICES

Non-Final OA (§103)
Filed: Oct 06, 2022
Examiner: PHAM, NHU
Art Unit: 2479
Tech Center: 2400 (Computer Networks)
Assignee: Hewlett Packard Enterprise Development LP
OA Round: 2 (Non-Final)
Grant Probability: 90% (Favorable)
OA Rounds: 2-3
To Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% (17 granted / 19 resolved; +31.5% vs TC avg, above average)
Interview Lift: +12.5% (moderate), across resolved cases with interview
Avg Prosecution: 3y 2m (typical timeline)
Currently Pending: 21
Total Applications: 40 (across all art units)

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§103: 52.0% (+12.0% vs TC avg)
§102: 36.0% (-4.0% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 19 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, see pages 2-3, filed on 09/18/2025, with respect to claim 1 have been fully considered and are persuasive.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 6, 7, 11-16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Arumugam et al. (US 11223538 B1; hereinafter “Arumugam”) in view of Leigh et al. (US 20210021473; hereinafter “Leigh”).

Regarding claim 1, Arumugam discloses:

A method for configuring a plurality of branch gateway devices located at branch offices coupled to a virtual private network (VPN) concentrator located at a central office, the method comprising: (Column 11, row 37-44: WAN uplinks 404 may be coupled with one or more headend gateways of different LAN sites, and the responses received from the headend gateways may include information and meta-information that can be used to determine the health and available bandwidth of each uplink.)

negotiating, by the VPN concentrator with a respective branch gateway device, a transmission-bandwidth contract for the respective branch gateway device, wherein the transmission-bandwidth contract specifies an upper bandwidth limit for the respective branch gateway device to transmit data to the VPN concentrator; (Column 3, row 58-64: The machine learning model also predicts the amount of available bandwidth for each uplink, and reserves a portion of the bandwidth based on each application type. This bandwidth reservation is taken into account when generating the DPS policies, and may result in some applications not being routed through the “best” uplink as measured by SLA bucket.)

receiving, from the respective branch gateway device, a request for additional transmission bandwidth; (Column 3, row 19-22: the machine learning model may receive network operation data from Rio indicating that the current SLA for Office 365 applications is not being met most of the time.)
analyzing, by the VPN concentrator, traffic patterns associated with the plurality of branch gateway devices to identify one or more branch gateway devices with unused transmission bandwidth according to corresponding transmission-bandwidth contracts; (Column 6, row 48-58: In order to generate accurate results, machine learning model 114 preprocesses certain network operating information. In some examples, the information used as inputs into the preprocessing algorithm (per application and per uplink) includes total bandwidth percentage used by the DPS policy associated with the respective application class, underlay traffic used percentage, overlay traffic used percentage, overlay traffic used percentage per headend gateway, number of headend gateways terminating WAN connections for the application type, average total available bandwidth percentage for the respective uplink.)

transmitting contract-update notifications to the respective branch gateway device and the identified one or more branch gateway devices to allow each branch gateway device to update a corresponding transmission-bandwidth contract, which comprises increasing the upper bandwidth limit at the respective branch gateway device while reducing the upper bandwidth limit at the identified one or more branch gateway devices; (Column 7, row 13-20: For example, the thresholds for altering the SLA for an application class may be to change the SLA when (1) all uplinks are in bucket E (for the application class) for the prior 30 minutes, increase the SLA thresholds (i.e. reduce the difficulty of meeting the SLA) by 20%; (2) all uplinks are in bucket D and/or E for the prior 30 minutes, increase the SLA by 10%; (3) all uplinks are in bucket A for the prior 3 hours, decrease the SLA by 10%.)
and in response to expiration of a predetermined timer, revoking the additional transmission bandwidth allocated to the respective branch gateway device by replacing, at the respective branch gateway device, the updated transmission-bandwidth contract with the negotiated transmission-bandwidth contract. (Column 4, row 18-29: Over relatively short periods of time (e.g. 10 minute increments), uplinks are bucketized on a per-policy basis, and adjustments are made to DPS policies where beneficial. Over longer periods of time, if the network is struggling to meet SLA for an application across all uplinks (or is consistently meeting SLA with a large margin across all uplinks) the SLA is adjusted to either reduce the demands of the application (in the case where the network is struggling to meet SLA for the application) or increase the buffer beyond minimal application requirements (in the case where the network is consistently meeting SLA with a large margin across all uplinks).)

However, Arumugam does not disclose:

allocating the requested additional transmission bandwidth to the respective branch gateway device by reducing transmission bandwidth allocated to the identified one or more branch gateway devices by an amount corresponding to the requested additional transmission bandwidth.

Leigh discloses:

allocating the requested additional transmission bandwidth to the respective branch gateway device by reducing transmission bandwidth allocated to the identified one or more branch gateway devices by an amount corresponding to the requested additional transmission bandwidth; ([0022] That is, the DRAG 120 can reduce the bandwidth provided to server 125a via server interface port 140a from its initial allocation, using a portion of the difference, or unused bandwidth (e.g., 48 G), to supplement the additional bandwidth requested by server 125b, thereby increasing the bandwidth allocation for server 125b from 50 G to 60G, for example.)
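For readers less familiar with the claimed mechanism, the hub-side logic mapped above (negotiate per-spoke upper limits, lend unused bandwidth from donor spokes to a requesting spoke, revert on timer expiry) can be sketched in Python. This is a minimal illustrative model only; the class, method, and variable names are hypothetical and nothing in it is drawn from the cited references.

```python
import time

class BandwidthHub:
    """Toy model of the hub-side contract logic recited in claim 1:
    negotiate an upper limit per spoke, lend unused bandwidth to a
    requesting spoke, and restore the negotiated contracts on expiry."""

    def __init__(self):
        self.contracts = {}   # spoke_id -> current upper limit (Mbps)
        self.usage = {}       # spoke_id -> observed average usage (Mbps)
        self.loans = []       # (expires_at, borrower, [(donor, amount), ...])

    def negotiate(self, spoke_id, upper_limit):
        self.contracts[spoke_id] = upper_limit

    def request_additional(self, borrower, amount, ttl_s):
        # Identify donor spokes whose observed usage leaves headroom
        # under their contracts, and reallocate that headroom.
        taken, donors = 0.0, []
        for spoke, limit in self.contracts.items():
            if spoke == borrower or taken >= amount:
                continue
            unused = limit - self.usage.get(spoke, 0.0)
            give = min(unused, amount - taken)
            if give > 0:
                self.contracts[spoke] -= give   # contract update: lower donor limit
                donors.append((spoke, give))
                taken += give
        self.contracts[borrower] += taken        # contract update: raise borrower limit
        self.loans.append((time.monotonic() + ttl_s, borrower, donors))
        return taken

    def tick(self):
        # On timer expiry, revoke the loan and restore the
        # originally negotiated contracts.
        now = time.monotonic()
        live = []
        for expires, borrower, donors in self.loans:
            if now >= expires:
                for donor, amt in donors:
                    self.contracts[donor] += amt
                    self.contracts[borrower] -= amt
            else:
                live.append((expires, borrower, donors))
        self.loans = live
```

A borrower's grant is capped by the headroom actually found across donors, mirroring the limitation that the increase corresponds to the amount taken from the identified devices.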
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Arumugam with the teachings of Leigh, to include allocating the requested additional transmission bandwidth to the respective branch gateway device by reducing transmission bandwidth allocated to the identified one or more branch gateway devices by an amount corresponding to the requested additional transmission bandwidth. The motivation would have been to adapt the server connections to high-density switches in a manner that can make better use of the high-density switches’ full capabilities. (¶ [0002]).

Regarding claims 3, 13, Arumugam discloses:

further comprising: discarding, at the respective branch gateway device, traffic exceeding the upper bandwidth limit specified by the transmission-bandwidth contract to cause traffic loss at the respective branch gateway device; (Column 6, row 48 - Column 7, row 17: For example, machine learning model 114 may classify each policy-uplink in one of five buckets: A, B, C, D, E… Bucket E is for uplinks that are not consistently meeting SLA, by a medium margin (e.g. −50% to −10%) or a large margin (−50% and below)… As previously mentioned, there are multiple techniques for adjusting the network … For example, the thresholds for altering the SLA for an application class may be to change the SLA when (1) all uplinks are in bucket E (for the application class) for the prior 30 minutes, increase the SLA thresholds (i.e. reduce the difficulty of meeting the SLA) by 20%;)

in response to determining that the respective branch gateway device has experienced traffic loss for a predetermined duration, estimating an amount of the additional transmission bandwidth to prevent the traffic loss; (Id.)

and transmitting, by the respective branch gateway device to the VPN concentrator, the request for additional transmission bandwidth. (Id.)

Regarding claim 6, Arumugam discloses:

wherein a duration of the predetermined time window is between 30 minutes and one hour.
(Column 7, row 10-17: Thus, machine learning model 114 may compare the classification results for each application class to a set of thresholds before determining whether to alter the SLA from the current values. For example, the thresholds for altering the SLA for an application class may be to change the SLA when (1) all uplinks are in bucket E (for the application class) for the prior 30 minutes, increase the SLA thresholds (i.e. reduce the difficulty of meeting the SLA) by 20%, …; in other words, machine learning model 114 analyzes the traffic and classifies each policy-uplink in one of five buckets: A, B, C, D, E (column 6, row 48 - column 7, row 5). To determine whether to alter the SLA from current values, all uplinks must be in bucket E (for the application class) for the prior 30 minutes. Bucket E is for uplinks that are not consistently meeting SLA, by a medium margin (e.g. −50% to −10%) or a large margin (−50% and below) (column 7, row 3-5).)

Regarding claims 7, 16, Arumugam discloses:

wherein the predetermined timer expires after a duration that is between 20% and 60% of a duration of the predetermined time window. (Column 7, row 10-20: machine learning model 114 may compare the classification results for each application class to a set of thresholds before determining whether to alter the SLA from the current values. For example, the thresholds for altering the SLA for an application class may be to change the SLA when (1) all uplinks are in bucket E (for the application class) for the prior 30 minutes, increase the SLA thresholds (i.e. reduce the difficulty of meeting the SLA) by 20%; (2) all uplinks are in bucket D and/or E for the prior 30 minutes, increase the SLA by 10%; (3) all uplinks are in bucket A for the prior 3 hours, decrease the SLA by 10%.)
Regarding claim 11, Arumugam discloses:

A first gateway device to connect to a network hub device connected to a plurality of gateway devices, the first gateway device comprising: a processor; and a non-transitory storage medium storing instructions executable on the processor to: negotiate, with the network hub device, a transmission-bandwidth contract specifying an upper bandwidth limit for the first gateway device to transmit data to the network hub device over a tunnel; (Column 3, row 58-64: The machine learning model also predicts the amount of available bandwidth for each uplink, and reserves a portion of the bandwidth based on each application type. This bandwidth reservation is taken into account when generating the DPS policies, and may result in some applications not being routed through the “best” uplink as measured by SLA bucket.)

send, from the first gateway device, a request to the network hub device for additional transmission bandwidth; (Column 3, row 19-22: the machine learning model may receive network operation data from Rio indicating that the current SLA for Office 365 applications is not being met most of the time.)
However, Arumugam does not disclose:

receive, at the first gateway device, a first contract-update notification from the network hub device, the first contract-update notification to increase the upper bandwidth limit for the first gateway device by the additional transmission bandwidth comprising a portion of unused transmission bandwidth of a second gateway device of the plurality of gateway devices;

a contract update unit to update, based on the first contract-update notification, the transmission-bandwidth contract to increase the upper bandwidth limit by the additional transmission bandwidth in response to receiving a response from the VPN concentrator, thereby increasing the upper bandwidth limit;

receive, at the first gateway device, a second contract-update notification from the network hub device, the second contract-update notification to reduce the upper bandwidth limit for the first gateway device by a respective transmission bandwidth to allocate to another gateway device that has requested more transmission bandwidth;

Leigh discloses:

receive, at the first gateway device, a first contract-update notification from the network hub device, the first contract-update notification to increase the upper bandwidth limit for the first gateway device by the additional transmission bandwidth comprising a portion of unused transmission bandwidth of a second gateway device of the plurality of gateway devices ([0022] That is, the DRAG 120 can reduce the bandwidth provided to server 125a via server interface port 140a from its initial allocation, using a portion of the difference, or unused bandwidth (e.g., 48 G), to supplement the additional bandwidth requested by server 125b, thereby increasing the bandwidth allocation for server 125b from 50 G to 60G, for example.)
a contract update unit to update, based on the first contract-update notification, the transmission-bandwidth contract to increase the upper bandwidth limit by the additional transmission bandwidth in response to receiving a response from the VPN concentrator, thereby increasing the upper bandwidth limit; (Id. ¶ [0022].)

receive, at the first gateway device, a second contract-update notification from the network hub device, the second contract-update notification to reduce the upper bandwidth limit for the first gateway device by a respective transmission bandwidth to allocate to another gateway device that has requested more transmission bandwidth; (Id.)

and update, based on the second contract-update notification, the transmission-bandwidth contract to decrease the upper bandwidth limit by the respective transmission bandwidth. (Id.)
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Arumugam with the teachings of Leigh, to receive a first contract-update notification, update the transmission-bandwidth contract to increase the upper bandwidth limit by the additional transmission bandwidth in response to receiving a response from the VPN concentrator, thereby increasing the upper bandwidth limit, and receive a second contract-update notification to reduce the upper bandwidth limit for the first gateway device by a respective transmission bandwidth to allocate to another gateway device that has requested more transmission bandwidth. The motivation would have been to adapt the server connections to high-density switches in a manner that can make better use of the high-density switches’ full capabilities. (¶ [0002]).

Regarding claim 12, Arumugam discloses:

wherein the network hub device comprises a virtual private network (VPN) concentrator, and the first gateway device comprises a branch gateway device connected over the tunnel to the VPN concentrator. (Col 13, row 26-28: Headend gateways (sometimes referred to as VPN concentrators) are network infrastructure devices that are placed at the edge of a core site LAN; Col 13, row 10-16: Branch gateways are network infrastructure devices that are placed at the edge of a branch LAN. Often branch gateways are routers that interface between the LAN and a wider network, whether it be directly to other LANs of the WAN via dedicated network links (e.g. MPLS) or to the other LANs of the WAN via the Internet through links provided by an Internet Service Provider connection.)

Regarding claim 14, Arumugam does not disclose:

wherein the additional transmission bandwidth further comprises a portion of unused transmission bandwidth of a third gateway device of the plurality of gateway devices.

Leigh discloses:

wherein the additional transmission bandwidth further comprises a portion of unused transmission bandwidth of a third gateway device of the plurality of gateway devices ([0022] That is, the DRAG 120 can reduce the bandwidth provided to server 125a via server interface port 140a from its initial allocation, using a portion of the difference, or unused bandwidth (e.g., 48 G), to supplement the additional bandwidth requested by server 125b, thereby increasing the bandwidth allocation for server 125b from 50 G to 60G, for example.)

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Arumugam with the teachings of Leigh, to include the additional transmission bandwidth further comprising a portion of unused transmission bandwidth of a third gateway device of the plurality of gateway devices. The motivation would have been to adapt the server connections to high-density switches in a manner that can make better use of the high-density switches’ full capabilities. (¶ [0002]).

Regarding claim 15, Arumugam discloses:

further comprising: a timer, wherein the instructions are executable on the processor to respond to an expiration of the timer by revoking the additional transmission bandwidth allocated to the first gateway device. (Column 6, row 48 - Column 7, row 17: For example, machine learning model 114 may classify each policy-uplink in one of five buckets: A, B, C, D, E… Bucket E is for uplinks that are not consistently meeting SLA, by a medium margin (e.g. −50% to −10%) or a large margin (−50% and below)… As previously mentioned, there are multiple techniques for adjusting the network … For example, the thresholds for altering the SLA for an application class may be to change the SLA when (1) all uplinks are in bucket E (for the application class) for the prior 30 minutes, increase the SLA thresholds (i.e. reduce the difficulty of meeting the SLA) by 20%;)

Regarding claim 18, based on claim 1, Arumugam discloses:

A network hub device to connect to a plurality of gateway devices, the network hub device comprising: a timer; (Column 4, row 18-29: Over relatively short periods of time (e.g. 10 minute increments), uplinks are bucketized on a per-policy basis, and adjustments are made to DPS policies where beneficial.) a processor; (Column 14, row 36-37: Processing circuitry may include one processor or multiple processors.)

Regarding claim 20, Arumugam discloses:

wherein the network hub device comprises a virtual private network (VPN) concentrator. (Col 13, row 26-28: Headend gateways (sometimes referred to as VPN concentrators) are network infrastructure devices that are placed at the edge of a core site LAN.)

Claims 2, 4, 5, 19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Arumugam et al. (US 11223538 B1; hereinafter “Arumugam”) in view of Leigh et al. (US 20210021473; hereinafter “Leigh”) further in view of Singh et al. (US 20190199646; hereinafter “Singh”).

Regarding claim 2, the combination of Arumugam and Leigh does not disclose:

wherein the request for the additional transmission bandwidth comprises a request for a corresponding number of transmission credits, wherein a transmission credit of the transmission credits corresponds to a predetermined amount of transmission bandwidth.
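The credit abstraction recited in claim 2 (one transmission credit standing for a fixed quantum of bandwidth) can be illustrated with a minimal sketch. The 5 Mbps quantum and the function names below are hypothetical and are not taken from the claims or the cited references.

```python
import math

# Hypothetical quantum: one transmission credit = 5 Mbps.
CREDIT_MBPS = 5.0

def to_credits(requested_mbps: float) -> int:
    """Convert a bandwidth request into a whole number of credits,
    rounding up so the granted amount always covers the request."""
    return math.ceil(requested_mbps / CREDIT_MBPS)

def to_bandwidth(credits: int) -> float:
    """Convert a credit count back into the bandwidth it represents."""
    return credits * CREDIT_MBPS
```

Rounding up at the requester means a 12 Mbps request becomes 3 credits (15 Mbps), so the request message need only carry a small integer rather than a raw rate.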
However, Singh discloses:

wherein the request for the additional transmission bandwidth comprises a request for a corresponding number of transmission credits, wherein a transmission credit of the transmission credits corresponds to a predetermined amount of transmission bandwidth. ([0044] At 422, an amount of bandwidth credits allocated per user is determined. [0062] At 434, bandwidth credits are allocated to user packets… In some examples, packets of a highest priority user are allocated more bandwidth credits than packets of a next highest priority user and so forth. A credit can represent one or more bytes.)

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combined teachings of Arumugam and Leigh with the teachings of Singh, to include the request for the additional transmission bandwidth comprising a request for a corresponding number of transmission credits, wherein a transmission credit of the transmission credits corresponds to a predetermined amount of transmission bandwidth. The motivation would have been to ensure sufficient bandwidth is allocated for users based on their subscriptions or service level agreements. (¶ [0002]).

Regarding claims 4, 21, the combination of Arumugam and Leigh does not disclose:

wherein identifying the one or more branch gateway devices with the unused transmission bandwidth comprises identifying a minimum number of branch gateway devices to provide the additional transmission bandwidth.

However, Singh discloses:

wherein identifying the one or more branch gateway devices with the unused transmission bandwidth comprises identifying a minimum number of branch gateway devices to provide the additional transmission bandwidth.
([0067] For example, a particular scheduling instance for a priority level consumes 100% of the allocated bandwidth in the previous time slot, while the instance with a higher priority level consumes only 80% bandwidth in that same time slot. Packet dequeue 510 can allow lower priority traffic-class instance to increase the bandwidth allocation and decrease the bandwidth allocation to the high-priority traffic class for the current time-slice. Amount of bandwidth adjustments can depend upon the applies policy. In this example, a policy will allocate the total unused bandwidth among all the priority levels which have consumed relatively more bandwidth in a certain proportion.)

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combined teachings of Arumugam and Leigh with the teachings of Singh, to include identifying the one or more branch gateway devices with the unused transmission bandwidth by identifying a minimum number of branch gateway devices to provide the additional transmission bandwidth. The motivation would have been to ensure sufficient bandwidth is allocated for users based on their subscriptions or service level agreements. (¶ [0002]).

Regarding claims 5, 19, the combination of Arumugam and Leigh does not disclose:

wherein analyzing the traffic patterns comprises determining, for each branch gateway device of the plurality of branch gateway devices, an average transmission bandwidth usage within a predetermined time window; and wherein identifying the one or more branch gateway devices with the unused transmission bandwidth comprises comparing the average transmission bandwidth usage of each branch gateway device with a corresponding transmission-bandwidth contract.
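The analysis recited in claims 5 and 19 (average usage over a time window compared against each device's contract to find donors) can be sketched as follows. The margin parameter and all names are hypothetical; the claims require only the comparison itself.

```python
from statistics import mean

def find_donors(contracts, usage_windows, margin=0.1):
    """Identify spokes whose average usage over the sampling window
    sits below their contracted upper limit by at least `margin`
    (a hypothetical fraction of the limit).

    contracts:     spoke_id -> contracted upper limit (Mbps)
    usage_windows: spoke_id -> list of usage samples (Mbps) in the window
    Returns spoke_id -> lendable headroom (Mbps)."""
    donors = {}
    for spoke, limit in contracts.items():
        samples = usage_windows.get(spoke, [])
        if not samples:
            continue  # no observations in the window; skip
        avg = mean(samples)
        if avg < limit * (1.0 - margin):
            donors[spoke] = limit - avg  # headroom available to lend
    return donors
```

Averaging over a window rather than using an instantaneous sample keeps a momentarily idle but normally busy spoke from being misidentified as a donor.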
However, Singh discloses:

wherein analyzing the traffic patterns comprises determining, for each branch gateway device of the plurality of branch gateway devices, an average transmission bandwidth usage within a predetermined time window; and wherein identifying the one or more branch gateway devices with the unused transmission bandwidth comprises comparing the average transmission bandwidth usage of each branch gateway device with a corresponding transmission-bandwidth contract. ([0067] For example, a particular scheduling instance for a priority level consumes 100% of the allocated bandwidth in the previous time slot, while the instance with a higher priority level consumes only 80% bandwidth in that same time slot. Packet dequeue 510 can allow lower priority traffic-class instance to increase the bandwidth allocation and decrease the bandwidth allocation to the high-priority traffic class for the current time-slice. Amount of bandwidth adjustments can depend upon the applies policy. In this example, a policy will allocate the total unused bandwidth among all the priority levels which have consumed relatively more bandwidth in a certain proportion.)

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combined teachings of Arumugam and Leigh with the teachings of Singh, to include analyzing the traffic patterns by determining, for each branch gateway device, an average transmission bandwidth usage within a predetermined time window. The motivation would have been to ensure sufficient bandwidth is allocated for users based on their subscriptions or service level agreements. (¶ [0002]).

Claims 8, 9, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Arumugam et al. (US 11223538 B1; hereinafter “Arumugam”) in view of Leigh et al. (US 20210021473; hereinafter “Leigh”) further in view of Yadav et al. (US 20160080502; hereinafter “Yadav”).
Regarding claim 8, Arumugam discloses:

wherein the respective branch gateway device is connected to the VPN concentrator. (Column 13, row 10-11: Branch gateways are network infrastructure devices that are placed at the edge of a branch LAN. Column 13, row 26-27: Headend gateways (sometimes referred to as VPN concentrators) are network infrastructure devices that are placed at the edge of a core site LAN.)

However, the combination of Arumugam and Leigh does not disclose:

wherein the respective branch gateway device is connected to the VPN concentrator by an Internet Protocol Security (IPSec)-based VPN tunnel.

Yadav discloses:

wherein the respective branch gateway device is connected to the VPN concentrator by an Internet Protocol Security (IPSec)-based VPN tunnel. ([0806] For each private and public WAN circuit, a spoke device may establish an Internet Protocol Security (IPSEC or IPSec) VPN Tunnel to both hub elements in the hub element HA pair assigned to it. The spoke device may use an algorithm to decide which hub element in the pair should be its primary. The algorithm may take as input the bandwidth per private and public WAN circuit, the pricing model per circuit, the health of the individual IPSEC VPN tunnels to both the hub sites over each of the public and private WAN circuits, the routing reachability of the hub devices to the hub core router 178 and the like.)

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combined teachings of Arumugam and Leigh with the teachings of Yadav, to include wherein the respective branch gateway device is coupled to the VPN concentrator via an Internet Protocol Security (IPSec)-based VPN tunnel. The motivation would have been to establish an IPSec VPN tunnel and instruct one or more devices of the first network and the partner network to establish an IPSec data tunnel between themselves. (¶ [0043]).
Regarding claims 9, 17, Arumugam discloses:

wherein the request for additional transmission bandwidth comprises a probe request frame, and wherein a respective contract-update notification comprises a probe response frame. (Column 11, row 37-44: Instructions 408b cause device 400 to transmit a plurality of probes across WAN uplinks 404 and receive responses from destination devices across the links. For example, WAN uplinks 404 may be coupled with one or more headend gateways of different LAN sites, and the responses received from the headend gateways may include information and meta-information that can be used to determine the health and available bandwidth of each uplink.)

However, the combination of Arumugam and Leigh does not disclose:

an IPSec Encapsulating Security Payload (ESP) probe.

Yadav discloses:

an IPSec Encapsulating Security Payload (ESP) probe ([0723] Internet Protocol Security (IPSEC) is a protocol suite for securing Internet Protocol (IP) communications by authenticating and encrypting each IP packet of a communication session. Enterprise Site to Site IPSEC Virtual Private Network (VPN) over Wide Area Networks (WAN) uses a suite of encapsulation, data encryption and data authentication for data path tunnels (e.g. encapsulating security payload (ESP) and Authentication Header (AH)) and a separate control channel protocol such as, for example, internet key exchange (IKE) and IKEv2 for the derivation of key exchange and for decisions related to what traffic to encrypt between the two gateways in each site.)

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combined teachings of Arumugam and Leigh with the teachings of Yadav, to include an IPSec Encapsulating Security Payload (ESP) probe.
The motivation would have been to establish an IPSec VPN tunnel and instruct one or more devices of the first network and the partner network to establish an IPSec data tunnel between themselves (¶ [0043]).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Arumugam et al. (US 11223538 B1; hereinafter “Arumugam”) in view of Leigh et al. (US 20210021473; hereinafter “Leigh”) in view of Yadav et al. (US 20160080502; hereinafter “Yadav”), and further in view of Singh et al. (US 20190199646; hereinafter “Singh”).

Regarding claim 10, the combination of Arumugam, Leigh, and Yadav does not disclose: wherein the IPSec ESP probe request or response frame comprises a vendor-defined special opcode.

However, Singh discloses: wherein the IPSec ESP probe request or response frame comprises a vendor-defined special opcode. ([0044] At 422, an amount of bandwidth credits allocated per user is determined. [0062] At 434, bandwidth credits are allocated to user packets… In some examples, packets of a highest priority user are allocated more bandwidth credits than packets of a next highest priority user and so forth. A credit can represent one or more bytes.)

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of the combination of Arumugam, Leigh, and Yadav with the teachings of Singh, to include the IPSec ESP probe request or response frame comprising a vendor-defined special opcode. The motivation would have been to ensure sufficient bandwidth is allocated for users based on their subscriptions or service level agreements (¶ [0002]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NHU PHAM, whose telephone number is (703) 756-4511. The examiner can normally be reached Monday - Friday, 8:30 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jae Y. Lee, can be reached at (571) 270-3936. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NHU PHAM/
Examiner, Art Unit 2479

/JAE Y LEE/
Supervisory Patent Examiner, Art Unit 2479

Prosecution Timeline

Oct 06, 2022 · Application Filed
May 17, 2025 · Non-Final Rejection — §103
Sep 16, 2025 · Applicant Interview (Telephonic)
Sep 16, 2025 · Examiner Interview Summary
Sep 18, 2025 · Response Filed
Dec 12, 2025 · Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574832
METHOD AND SYSTEM FOR PROVIDING BACK-OFF TIMER TO UES DURING NETWORK SLICE ADMISSION CONTROL
2y 5m to grant · Granted Mar 10, 2026
Patent 12574161
METHOD AND APPARATUS FOR DETERMINING TIME DENSITY RELATED TO PT-RS IN NR V2X
2y 5m to grant · Granted Mar 10, 2026
Patent 12550033
COMMUNICATION METHOD, APPARATUS, AND SYSTEM
2y 5m to grant · Granted Feb 10, 2026
Patent 12519698
METAVERSE END-TO-END (E2E) NETWORK ARCHITECTURE
2y 5m to grant · Granted Jan 06, 2026
Patent 12513622
POWER CONTROL METHOD AND DEVICE
2y 5m to grant · Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

2-3 · Expected OA Rounds
90% · Grant Probability
99% · With Interview (+12.5%)
3y 2m · Median Time to Grant
Moderate · PTA Risk
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
