Response to Amendment
This Office action is responsive to the communication filed on 11/13/2025.
Claims 9-10 and 18-19 are canceled.
Claims 1 and 12 are currently amended.
Claims 23-24 are new.
Claims 1-8, 11-17, and 20-24 are pending in this application.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in the Republic of India on 09/21/2022. It is noted, however, that applicant has not filed a certified copy of the priority application as required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/13/2025 was filed before the 12/11/2025 mailing date of this final Office action. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant’s arguments, see Remarks, pages 6-8, filed 11/13/2025, with respect to the rejection of claims 1 and 12 under 35 U.S.C. 103 have been considered and, regarding the amended feature of “determining, via a policy, whether the traffic flow is a candidate for the first accelerator system, based on subscriber mapping to the traffic flow,” are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made over Marashi (WO 2006/078953) in view of Liu et al. (US 2021/0176070), and further in view of Sidebottom et al. (US 8,675,488 B1).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-5, 7-8, 11-12, 16-17, 20, and 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Marashi (WO 2006/078953) in view of Liu et al. (US 2021/0176070), hereinafter “Liu”, and further in view of Sidebottom et al. (US 8,675,488 B1), hereinafter “Sidebottom”. An English translation of the WIPO document has been made of record in this application.
With respect to claim 1, Marashi discloses a method for distributed traffic management on a computer network, the method comprising:
receiving an initial communication of a traffic flow by a packet processor of a first accelerator system from one side of a TCP connection (page 4, ll. 6-7, teaches that the application traffic is routed through the server network and accelerated along a portion of the network path; page 9, ll. 8-11, teaches that TCP acceleration (i.e., the traffic flow may be a Transmission Control Protocol traffic flow) is probably the most common and is used to improve the throughput of any TCP session, and that, since the acceleration techniques must interoperate with existing network stacks in the clients and application servers, the original TCP session must be restored at another accelerator associated with the application server; page 10, l. 10, teaches that the data packet is sent to accelerator 414 (i.e., first accelerator));
retrieving message parameters from the initial communication, via a logic node of the first accelerator system (page 8, ll. 13-15, teaches that the measurement servers 218, 228 measure the availability and performance metrics of each server in the SDP, as well as network performance such as loss, latency, and jitter to both the application servers and the clients or client LDNS servers, wherein the measurements are the message parameters; page 9, ll. 8-9, teaches that TCP acceleration (i.e., the traffic flow may be a Transmission Control Protocol traffic flow) is probably the most common and is used to improve the throughput of any TCP session; see Fig. 4, step 414; page 10, l. 10, teaches that the data packet is sent to accelerator 414 (i.e., logic node of the first accelerator); page 16, ll. 7-9 and ll. 28-29, teaches that the accelerated data is sent to the matching (i.e., retrieving) accelerator in the server SDP where the same acceleration technique(s) are applied to the data in step 607 to restore the initial data stream, and that the measurement servers in each candidate server SDP also collect network measurements (i.e., message parameters) from the SDP towards the client SDP);
pairing the first accelerator system and the second accelerator system to provide for traffic management of the traffic flow (page 9, ll. 10-13, teaches that the original TCP session must be restored at another accelerator associated with the application server, and that acceleration is an end-to-end stateful function and requires that traffic is routed through a compatible pair of accelerators for the duration of the session; page 11, ll. 16-20, teaches that the matching accelerator 424 modifies the data stream and the original session is restored, that once accelerator 424 processes the traffic it is sent to G/W server 423 for additional address translation, and that this translation ensures that the resulting communication from the application server 431 for this session is routed back through the same set of infrastructure, i.e., SDPs and servers); and
sending at least one sync-up protocol message between the first and the second accelerator system at predetermined time intervals (page 11, ll. 25-33 and page 12, ll. 2-5, teaches that data packets from the client have a source address of 'A' and a destination address of 'B' 501, that the packets are sent through the accelerator (not shown) in the client SDP and routed to a matching accelerator in the server SDP, that the packets are then sent to the application provider and further routed to the specific application server 531, and that return traffic follows the reverse path and the reverse set of translations occur, until the traffic sent back to the client has the source address of 'B' and the destination address of 'A'; page 12, ll. 18-20, teaches an end-to-end process that must communicate with another matching accelerator of the same type before being engaged and altering network traffic, and that accelerators typically synchronize (i.e., sync-up message as synchronize) with each other to ensure this process occurs properly; page 12, ll. 24-26, teaches that TCP acceleration is a series of techniques designed to improve the throughput of TCP traffic under network conditions of high latency or high packet loss, and that TCP throughput has an inverse relationship with the round trip time (i.e., predetermined time intervals) or network latency; page 20, ll. 1-6, teaches that the measurement servers initiate reverse DNS and other network measurements back to the client LDNS from every configured SDP, that these measurements (i.e., RTT as predetermined time intervals) assess the network quality between any given SDP and the client LDNS as shown in step 1006, and that, once all of the measurements have been collected, the best SDP for the client LDNS is selected in step 1007 and the record for that client LDNS is configured in the SDP DNS servers associated with the client LDNS for future requests to use in step 1008), wherein the predetermined time interval is based on a round trip time of the traffic flow (page 12, ll. 24-26, teaches that TCP acceleration is a series of techniques designed to improve the throughput of TCP traffic under network conditions of high latency or high packet loss, and that TCP throughput has an inverse relationship with the round trip time or network latency; page 20, ll. 1-6, teaches that the measurement servers initiate reverse DNS and other network measurements back to the client LDNS from every configured SDP, that these measurements (i.e., RTT as predetermined time intervals) assess the network quality between any given SDP and the client LDNS as shown in step 1006, and that, once all of the measurements have been collected, the best SDP for the client LDNS is selected in step 1007 and the record for that client LDNS is configured in the SDP DNS servers associated with the client LDNS for future requests to use in step 1008).
However, Marashi remains silent on broadcasting the message parameters, via a trigger module of the first accelerator system, to determine a second accelerator system receiving a reply to the initial communication.
Liu discloses broadcasting the message parameters, via a trigger module of the first accelerator system, to determine a second accelerator system receiving a reply to the initial communication (¶0021, teaches broadcasting a message among virtual DP accelerators (DPAs): in response to receiving a broadcast instruction from an application via a communication switch, the broadcast instruction designating one or more virtual DP accelerators of a plurality of virtual DP accelerators to receive a broadcast message, a system encrypts the broadcast message based on a broadcast session key for a broadcast communication session; ¶0074, teaches that the VDP accelerators that are communicatively coupled to host 104, e.g., 105A-105D, 106A-106D, and 107A-107D, can each have a unique virtual device ID (VID) 501, e.g., VDP_105A_ID, and that the message can be any payload specified by the sender; example payloads include instructions to a VDP accelerator to configure itself for secure communications with another node (host or VDP accelerator), a computational task transmitted from a VDP accelerator to another VDP accelerator, or the other VDP accelerator sending a result back to the VDP accelerator that assigned it a task, wherein sending a result back is a retransmission parameter as message parameter; see Fig. 12, step 1528 as trigger module; ¶0129, teaches that processing module/unit/logic 1528 (i.e., trigger module of the first accelerator) may also reside, completely or at least partially, within memory 1503 and/or within processor 1501 during execution thereof by data processing system 1500).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Marashi's network measurements (i.e., message parameters) from the SDP towards the client SDP with the broadcasting of the message parameters to determine a second accelerator system of Liu, in order to enable one or more virtual DP accelerators of a plurality of virtual DP accelerators to receive a broadcast message (Liu, ¶0021).
However, Marashi in view of Liu remains silent on determining, via a policy, whether the traffic flow is a candidate for the first accelerator system, based on subscriber mapping to the traffic flow.
Sidebottom discloses determining, via a policy, whether the traffic flow is a candidate for the first accelerator system, based on subscriber mapping to the traffic flow (col. 4, ll. 3-7, teaches a service unit to apply the enforcement policy to the packet flow, wherein the session resource controller comprises an attachment sessions table comprising one or more attachment session records that each map packet flow information to a subscriber identifier; col. 7, ll. 43-54, teaches that an enforcement policy includes a set of actions that a service node is to perform upon the occurrence of a condition that characterizes the packet flow in some way; for example, a condition may relate to an application protocol (i.e., accelerator system) or other application information, or a source address; the actions implement the various exemplary services listed above that service nodes 12 apply to subscriber data traffic; upon receiving the policies, the service node applies the subscriber-specific enforcement policies to the new packet flow; in this way, service nodes 12 apply policies to packet flows based on the identity of the subscribers associated with the packet flows).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Marashi's TCP acceleration, in which the data packet is sent to an accelerator (i.e., first accelerator), in view of Liu's system, with the determining, via a policy, whether the traffic flow is a candidate for the first accelerator system, based on subscriber mapping to the traffic flow, of Sidebottom, in order to allow the network to apply acceleration only to authorized, eligible, and appropriate flows, ensuring correct enforcement of subscription rules and fairness across users (Sidebottom).
Claim 12 is a system claim corresponding to the method of claim 1. Therefore, claim 12 is rejected on the same ground as claim 1.
With respect to claim 4, Marashi in view of Liu, and further in view of Sidebottom discloses the method of claim 1, further comprising using early retransmission from the local cache on determination of packet loss for the traffic flow (Marashi, page 12, ll. 24-25, teaches that TCP acceleration is a series of techniques designed to improve the throughput of TCP traffic under network conditions of high latency or high packet loss; page 13, l. 25, teaches that data segment caching can also be used to accelerate application performance).
With respect to claim 5, Marashi in view of Liu, and further in view of Sidebottom discloses the method of claim 1, further comprising advertising a window size associated with the traffic flow to be higher than an initial size to increase the available bandwidth for the traffic flow (Marashi, page 12, ll. 26-28, teaches that various network stacks have a preconfigured maximum window size, which also limits the amount of data that can be in transit without an acknowledgement; page 13, ll. 4-7, teaches that another acceleration technique is session splitting, whereby large sessions are split into a number of smaller sessions and transmitted concurrently end to end, which permits the delay bandwidth product to be multiplied by the number of split sessions, increasing overall throughput).
With respect to claims 7 and 16, Marashi in view of Liu, and further in view of Sidebottom discloses the method of claim 1, further comprising: retrieving further policies from a policy engine to influence the behavior of the first and second accelerator system (Sidebottom, col. 13, ll. 35-38, teaches that the SAE converts the newly activated service to a set of enforcement policies and installs the policies to the service node that applies services (i.e., first and second accelerator) to the affected subscriber sessions (286)).
With respect to claims 8 and 17, Marashi in view of Liu, and further in view of Sidebottom discloses the method of claim 1, wherein the traffic flow is a Transmission Control Protocol, User Datagram Protocol, or QUIC traffic protocol traffic flow (Marashi, page 9, ll. 8-9, teaches that TCP acceleration is probably the most common and is used to improve the throughput of any TCP session; page 8, l. 21, teaches that alternative embodiments could use a web service protocol, such as UDDI).
With respect to claims 11 and 20, Marashi in view of Liu, and further in view of Sidebottom discloses the method of claim 1, wherein messages between the first and the second accelerator system comprise batched messages (Liu, ¶0108, teaches that host 104 encrypts the broadcast session key with each of the public keys to generate a set of messages (i.e., batched messages) and host 104 sends the set of messages to VDPA 105A).
With respect to claims 23 and 24, Marashi in view of Liu, and further in view of Sidebottom discloses the method of claim 1, wherein the policy is updated dynamically during runtime (Sidebottom, col. 1, ll. 63-67, teaches that the service nodes include additional functionality that allows the service nodes to easily be dynamically configured by other components within the service provider network to apply services to subscriber traffic based on the subscriber identity; col. 13, ll. 10-16, teaches that updating the subscriber session involves installing enforcement policies to the service node using a PTSP interface session with the service node).
Claims 2 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Marashi in view of Liu, in view of Sidebottom, and further in view of Yang (US 10,237,153).
With respect to claims 2 and 13, Marashi in view of Liu, and further in view of Sidebottom discloses the method of claim 1 further comprising:
receiving a data segment at the first accelerator system (Marashi, page 14, l. 25, teaches that data segment caching can also be used to accelerate application performance);
adding the data segment to a local cache of the first accelerator system (Marashi, page 14, ll. 25-27, teaches that data segment caching can also be used to accelerate application performance, and that data segment caching is a form of caching where small elements of the data stream are stored on the disk or in the memory of each accelerator).
However, Marashi in view of Liu, and further in view of Sidebottom remain silent on triggering an acknowledgement message from the second accelerator system; and sending the acknowledgement to a sender of the data segment from the second accelerator system.
Yang discloses triggering an acknowledgement message from the second accelerator system (col. 16, ll. 5-10, teaches that when a retransmission packet is transmitted on a second path on the TCP control connection, a transmission status of the retransmission packet may be detected in a timely manner by sending acknowledgment information, where the acknowledgment information is, for example, the foregoing ACK information); and
sending the acknowledgement to a sender of the data segment from the second accelerator system (col. 16, ll. 5-13, teaches that a transmission status of the retransmission packet may be detected in a timely manner by sending acknowledgment information, where the acknowledgment information is, for example, the foregoing ACK information, and that the ACK message may be encapsulated according to a TCP protocol format of the response message in S140).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Marashi's accelerators, which typically synchronize with each other, in view of Liu's system, with the triggering of an acknowledgement message and the sending of the acknowledgement to a sender of the data segment of Yang, in order to ensure reliable communication, synchronization, error management, and efficient coordination between the two systems (Yang, col. 1, ll. 55-67).
Claims 3 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Marashi in view of Liu, in view of Sidebottom, in view of Yang, and further in view of Murgia (US 2014/0101306).
With respect to claims 3 and 14, Marashi in view of Liu, in view of Sidebottom, and further in view of Yang discloses the method of claim 2; however, the combination remains silent on wherein sending an acknowledgement comprises sending pre-acknowledgement to provide for flow acceleration for the traffic flow.
Murgia discloses wherein sending an acknowledgement comprises sending pre-acknowledgement to provide for flow acceleration for the traffic flow (¶0135, teaches that the appliance 200 may regulate the flow of packets from the sender, for example when the appliance's 200 buffer is becoming full, by appropriately setting the TCP window size in each preack; ¶0141, teaches that window virtualization is to insert a preacking appliance 200 into a TCP session, and that, in reference to any of the environments of FIG. 1A or 1B, initiation of a data communication session between a source node, e.g., client 102 (for ease of discussion, now referenced as source node 102), and a destination node, e.g., server 106 (for ease of discussion, now referenced as destination node 106), is established).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Marashi's accelerators, which typically synchronize with each other, in view of Liu's system, with the sending of pre-acknowledgement to provide for flow acceleration for the traffic flow of Murgia, in order to allow transmitting more data without waiting for the actual end-to-end acknowledgement (Murgia).
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Marashi in view of Liu, in view of Sidebottom, and further in view of Li et al. (US 2022/0255692), hereinafter “Li”.
With respect to claims 6 and 15, Marashi in view of Liu, and further in view of Sidebottom discloses the method of claim 1; however, the combination remains silent on further comprising: receiving an acknowledgment for a data segment from a recipient of the traffic flow at the second accelerator system; and triggering a release from cache of the acknowledged segment from the first accelerator system.
Li discloses receiving an acknowledgment for a data segment from a recipient of the traffic flow at the second accelerator system (¶0013, teaches that after receiving the first negotiation packet, the second communications apparatus (i.e., second accelerator) may determine, by parsing the first permitted option, the ACK mechanism supported by the first communications apparatus; ¶0015, teaches receiving an ACK frequency frame from the second communications apparatus; ¶0095, teaches that the data packet is sent from the sender to the receiver, and the ACK packet is returned from the receiver to the sender); and
triggering a release from cache of the acknowledged segment from the first accelerator system (¶0254, teaches that an acknowledgement number field in a TCP packet header in the TACK packet has the same meaning as an acknowledgement number in a TCP ACK, is a maximum number of consecutive sequence numbers currently received, and is used to notify the sender that all data packets whose sequence numbers are before the sequence number are received, and the sender may release the buffer occupied by the data packets).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Marashi's accelerators, which typically synchronize with each other, in view of Liu's and Sidebottom's systems, with the receiving of an acknowledgment for a data segment from a recipient of the traffic flow at the second accelerator system and the triggering of a release from cache of the acknowledged segment from the first accelerator system of Li, in order to free up resources by releasing the segment from the cache and to help maintain efficiency, reliability, and network performance (Li, ¶0189, ¶0216, ¶0244).
Claims 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Marashi in view of Liu, in view of Sidebottom, and further in view of Mathewson, II et al. (US 2003/0018724), hereinafter “Mathewson”.
With respect to claims 21 and 22, Marashi in view of Liu, and further in view of Sidebottom discloses the method of claim 11 (Liu, ¶0108, teaches that host 104 encrypts the broadcast session key with each of the public keys to generate a set of messages (i.e., batched messages) and host 104 sends the set of messages to VDPA 105A). However, Marashi in view of Liu, and further in view of Sidebottom remains silent on wherein a time sensitivity of the sync-up message is determined prior to creating the batched messages.
Mathewson discloses wherein a time sensitivity of the sync-up message is determined prior to creating the batched messages (¶0024, ¶004-¶0047, teaches determining whether a selected one of the received electronic messages is time-sensitive; see FIGS. 3 and 4: FIG. 3 illustrates logic which pertains to message creation and delivery, and FIG. 4 illustrates logic which may be used to handle incoming (or previously-received) messages (i.e., batched messages) at a receiver, determining whether the message (i.e., sync-up message) is time-sensitive).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Marashi's system, in view of Liu's and Sidebottom's systems, with the determination of the time sensitivity of the sync-up message prior to creating the batched messages of Mathewson, in order to ensure the system can make informed decisions about prioritization, batching strategy, and delivery timing, which is crucial for preserving the integrity and performance of time-critical communications (Mathewson, ¶0043).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GOLAM MAHMUD whose telephone number is (571) 270-0385. The examiner can normally be reached Mon-Fri, 8:00 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Umar Cheema, can be reached at 571-270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GOLAM MAHMUD/Examiner, Art Unit 2458
/UMAR CHEEMA/Supervisory Patent Examiner, Art Unit 2458