Prosecution Insights
Last updated: April 19, 2026
Application No. 17/202,017

Delegated Services Platform System and Method

Status: Final Rejection (§103, §112)
Filed: Mar 15, 2021
Examiner: ZHAO, YU
Art Unit: 2169
Tech Center: 2100 — Computer Architecture & Software
Assignee: Pavlov Media Inc.
OA Round: 4 (Final)

Grant Probability: 51% (Moderate)
OA Rounds: 5-6
To Grant: 4y 4m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 51% of resolved cases (185 granted / 360 resolved; -3.6% vs TC avg)
Interview Lift: +42.3% (resolved cases with interview vs without)
Avg Prosecution: 4y 4m (typical timeline); 8 applications currently pending
Total Applications: 368 (career history, across all art units)

Statute-Specific Performance

§101: 20.1% (-19.9% vs TC avg)
§103: 55.5% (+15.5% vs TC avg)
§102: 4.8% (-35.2% vs TC avg)
§112: 12.8% (-27.2% vs TC avg)

TC averages are Tech Center estimates; based on career data from 360 resolved cases.
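For readers auditing these figures, the headline rates above reduce to simple arithmetic over per-case records. A minimal sketch (the field names and sample data are hypothetical illustrations; the underlying case dataset is not part of this report):

```python
# Sketch: deriving the dashboard's headline examiner statistics from
# per-case records. Field names ("resolved", "granted", "interview")
# are hypothetical, not from any real USPTO data schema.

def allow_rate(cases):
    """Share of resolved cases that were granted."""
    resolved = [c for c in cases if c["resolved"]]
    return sum(c["granted"] for c in resolved) / len(resolved)

def interview_lift(cases):
    """Allowance-rate gap between resolved cases with and without an interview."""
    with_iv = [c for c in cases if c["resolved"] and c["interview"]]
    without_iv = [c for c in cases if c["resolved"] and not c["interview"]]
    rate = lambda group: sum(c["granted"] for c in group) / len(group)
    return rate(with_iv) - rate(without_iv)

cases = [
    {"resolved": True,  "granted": True,  "interview": True},
    {"resolved": True,  "granted": False, "interview": False},
    {"resolved": True,  "granted": True,  "interview": False},
    {"resolved": True,  "granted": False, "interview": False},
    {"resolved": False, "granted": False, "interview": False},  # pending: excluded
]
print(allow_rate(cases))      # 2 of 4 resolved cases granted -> 0.5
print(interview_lift(cases))  # 1.0 with interview minus 1/3 without
```

Pending cases are excluded from both rates, which is why the dashboard reports 360 resolved out of 368 total applications.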

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Acknowledgment is made of applicant's amendment filed on 29 September 2025. Claims 6-10 are presented for examination. Claims 1-5 were cancelled.

Priority

It is acknowledged that the pending application claims priority to non-provisional application 14/206,952, filed on 12 March 2014. A priority date of 12 March 2014 is given.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 6 May 2025, 17 July 2025, and 9 September 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.

Remarks

Communication: The examiner strongly recommends that applicant(s) contact the examiner using two communication methods (e.g., phone and email). If applicant contacts the examiner using only one communication method, applicant should follow up with the examiner after one business day, especially if applicant has not received a response. The examiner receives many spam messages and emails every day, and emails may be overlooked, accidentally deleted, or blocked by a spam filter.

Claim Objections

Claim 6 is objected to because of the following informalities: Applicant did not follow MPEP 714 and 37 CFR 1.121(c), "Manner of making amendments in application."
Claim 6, filed on 27 November 2023, which was entered, recited the claim limitation "determining at the central service endpoint, when a previously selected […]". Claim 6, filed on 28 February 2025, which was entered, recited the claim limitation "determining at the central service endpoint, when a previously selected […] response to the end user device;". The most recent Claim 6, filed on 29 September 2025, recited the same claim limitation with the same amendment and markings, "determining at the central service endpoint, when a previously selected […]".

The limitation filed on 27 November 2023, "determining at the central service endpoint, when a previously selected delegate appliance is no longer successfully transmitting portions of the response to the end user device," is "the immediate prior version" of the claims filed on 28 February 2025, which already deleted the word "[…]". Similarly, the limitation filed on 28 February 2025, "determining at the central service endpoint, when a previously selected delegate appliance is no longer successfully transmitting portions of the response to the end user device," is "the immediate prior version" of the claims filed on 29 September 2025, which already deleted the word "[…]". Therefore, the claims filed on 29 September 2025 should be amended based on "the immediate prior version" of the claims, i.e., those filed on 27 November 2023 and 28 February 2025.

MPEP 714, 37 CFR 1.121, "Manner of making amendments in application," (c): "(c) Claims. Amendments to a claim must be made by rewriting the entire claim with all changes (e.g., additions and deletions) as indicated in this subsection, except when the claim is being canceled. Each amendment document that includes a change to an existing claim, cancellation of an existing claim or addition of a new claim, must include a complete listing of all claims ever presented, including the text of all pending and withdrawn claims, in the application.
The claim listing, including the text of the claims, in the amendment document will serve to replace all prior versions of the claims, in the application. In the claim listing, the status of every claim must be indicated after its claim number by using one of the following identifiers in a parenthetical expression: (Original), (Currently amended), (Canceled), (Withdrawn), (Previously presented), (New), and (Not entered)…

(2) When claim text with markings is required. All claims being currently amended in an amendment paper shall be presented in the claim listing, indicate a status of "currently amended," and be submitted with markings to indicate the changes that have been made relative to the immediate prior version of the claims. The text of any added subject matter must be shown by underlining the added text. The text of any deleted matter must be shown by strike-through except that double brackets placed before and after the deleted characters may be used to show deletion of five or fewer consecutive characters. The text of any deleted subject matter must be shown by being placed within double brackets if strike-through cannot be easily perceived. Only claims having the status of "currently amended," or "withdrawn" if also being amended, shall include markings. If a withdrawn claim is currently amended, its status in the claim listing may be identified as "withdrawn—currently amended.""

Appropriate correction is required.

Response to Arguments

Applicant's arguments filed in the amendment filed on 29 September 2025 have been fully considered but are not deemed persuasive.

Applicant argued: "Non-Compliant Office Action. The Office Action dated 3/27/2025 is non-compliant as required by MPEP §707.07(f) because it fails to respond to applicant's arguments.
Applicant's response filed 2/28/2025 specifically pointed out that the Examiner's failure to properly respond to applicant's arguments in the response dated 11/27/2023 constituted an admission and created an estoppel." Examiner respectfully disagrees.

1) Applicant did not identify which argument(s) in applicant's response dated 11/27/2023 the examiner did not respond to.

2) Further, applicant filed a Request for Continued Examination (RCE) on 28 February 2025. In response to the RCE, the examiner issued a Non-Final Office Action (27 March 2025) with further detailed responses to all arguments. Therefore, the Non-Final Office Action (27 March 2025) is compliant.

3) Both of those cases (In re Hermann and In re Soni) relate to what the Federal Circuit was limited to considering when those cases were on appeal to that Court. MPEP 706.04 explicitly allows "rejection of previously allowed claims." If a claim that was previously indicated as allowable can later be rejected after a primary examiner has considered all the facts, this supports that an examiner can also reject, or change the rejections of, previously rejected claims. Just as the examiner is not estopped from rejecting claims an earlier action said were allowable, the examiner is not estopped from rejecting a previously rejected claim. If the rejections made in the latest action properly and validly reflect the reasons the claims are unpatentable, Applicant has not pointed to any authority that would negate that finding merely because of the Examiner's change in view. There is no authority supporting any legal theory of examiner estoppel during examination (and, as above, Applicant's cited cases do not support this either). Even if the current rejections do show that the Examiner changed positions in this case as compared to the previous actions, the Examiner is permitted to do so as prosecution progresses or as warranted by the evidence, even if that is not ideal.
See In re Ruschig, 379 F.2d 990 (CCPA 1967); In re Ellis, 86 F.2d 412 (CCPA 1936); and In re Becker, 101 F.2d 557 (CCPA 1939).

4) Further, MPEP 707.07(f) merely says, "Where the applicant traverses any rejection, the examiner should, if they repeat the rejection, take note of the applicant's argument and answer the substance of it." Notably, "should" is not "shall" or "must," and nothing in MPEP 707.07(f) speaks to the specific manner in which an Examiner "should … answer the substance" of such traversals. The Examiner's response to traversed material is not set forth as a requirement for making a rejection final under MPEP 706.07(a) or 37 CFR 1.113. Thus, failing to respond to arguments is not grounds for withdrawing an Office action, nor for changing the finality of an Office action (if that were at issue).

Applicant argued: "Applicant's representative has checked all call logs for both the office phone and his cell phone. That careful check shows that there have been no such calls from the examiner since June 2023, and that call related to a different case. However, to avoid any potential miscommunication, applicant's representative authorized email communications in this case and sent the examiner an email suggesting that as a method of communication. Further, on 9/19/2025, applicant's representative emailed the Examiner requesting a phone interview in the hope of resolving at least some of the issues identified above. The Examiner never responded. To help avoid future miscommunication, a copy of this response is also being emailed to Sherief Badawi.…"

Communication records: The examiner called James E. Eakin on 9/30/2025 to check the status of the application, because the examiner did not see a response in the system filed within 6 months of the last Office action, mailed on 27 March 2025. The examiner left messages on both the office line and the cell phone line.
Later that day, the examiner opened the application and noticed that the recent responses were now showing in the system, but they were not divided into sub-files (divided into "Argument" and "Claims"); they were saved as one single large bundled file. The examiner called James E. Eakin and left a message with the new updates.

Power of Attorney form, interview request, and Internet communication authorization form: On 10/1/2025, the examiner found more issues. The examiner called James E. Eakin and left further messages with more updates, e.g., that no Power of Attorney form had been submitted, that no name appears on the Attorneys of Record, and the interview issue.

Power of Attorney form: As the examiner mentioned in the recent voice message and email (emailed on 8/28/2025), James E. Eakin is NOT on the Attorneys of Record.

Interview request: Because of the above issue, James E. Eakin needs to file a Power of Attorney form before requesting an interview. If James E. Eakin declines to file a Power of Attorney form, James E. Eakin can request an interview by other methods as provided in the MPEP; see MPEP 713.01, "IV. SCHEDULING AND CONDUCTING AN INTERVIEW," for how to submit a proper interview request.

MPEP 713.01, "IV. SCHEDULING AND CONDUCTING AN INTERVIEW": "An interview should be arranged in advance to ensure that the primary examiner and/or the examiner in charge of the application will be available. Use of the USPTO's Automated Interview Request (AIR) Form…but in the alternative, the examiner may be contacted by letter, facsimile, electronic mail, telephone or the "Applicant Initiated Interview Request" form (PTOL-413A) to schedule the interview. The AIR form or the PTOL-413A form may be submitted to the examiner prior to the interview in order to permit the examiner to prepare in advance and to focus on the issues to be discussed.
These forms should identify the participants of the interview, the proposed date of the interview, the communication mode (e.g., telephonic, video conference, or in-person), and should include a brief description of the issues to be discussed. Upon completion of the interview, a copy of the completed Interview Summary form (PTOL-413/413b) should be given or mailed to the applicant (or applicant's attorney or agent) along with any attachments. See MPEP § 713.04."

MPEP 713.05, "Interviews Prohibited or Granted, Special Situations": "…Interviews are generally not granted to registered individuals to whom there is no power of attorney or authorization to act in a representative capacity. Registered practitioners, when acting in a representative capacity under 37 CFR 1.34, can show authorization to conduct an interview by completing, signing and filing an Applicant Initiated Interview Request Form (PTOL-413A). This eliminates the need to file, and have accepted, a power of attorney before having an interview. However, an interview concerning an application that has not been published under 35 U.S.C. 122(b) with an attorney or agent not of record who obtains authorization through use of the interview request form will be conducted based on the information and files supplied by the attorney or agent in view of the confidentiality requirements of 35 U.S.C. 122(a). See MPEP § 405. Authorization to conduct an interview is not established merely by having previously filed a paper, such as a reply, in an application because the filing of a paper is only a representation of being authorized to act in a representative capacity with respect to that particular filing."

Internet communication authorization: Initially, the two submitted Internet Communication Authorization forms were mixed in with multiple Information Disclosure Statements. After going over all recently submitted documents, the examiner found these two Internet Communication Authorization forms.
However, the examiner found an issue after carefully reviewing them. Of these two Internet Communication Authorization forms, one states "James E. Eakin, Reg. No. 70723" and the other states "James E. Eakin, Reg. No. 27874." It appears James E. Eakin has two registration numbers, 70723 and 27874. This raises a concern, and clarification is needed from James E. Eakin.

Applicant argued: "The §112 Rejection is a red herring. The Examiner should review, as just one example, paragraph [0009] of the application. This rejection is baseless and should be withdrawn…" Examiner respectfully disagrees.

Applicant argued that the Specification, paragraph [0009], supports the claim limitation "in the event an assignment to a delegate appliance has not yet been made or the previously assigned delegate appliance is unable to continue transmitting one or more portions of the response, transmitting from the central service endpoint to the end user device one or more portions of the response associated with the request for seamless service" addressed in the 35 U.S.C. 112 rejection. However, the Specification, paragraph [0009], merely recites "In an aspect of the invention, the system comprises a processing appliance together with associated storage deployed proximate to, or within, an end-consumer's local network. In an embodiment, this local appliance communicates directly with consumer devices over a high capacity low latency local network, and also communicates with a central service endpoint over the Internet. The local appliance comprises functionality for emulating exactly the communication from a central service endpoint to the end-consumer, given identification and service metadata from a cooperating central service endpoint," which does not mention the subject matter at issue in the 112 rejection.
For example, Specification [0009] does not disclose "in the event an assignment to a delegate appliance has not yet been made or the previously assigned delegate appliance is unable to continue transmitting one or more portions of the response, transmitting from the central service endpoint to the end user device one or more portions of the response associated with the request for seamless service"; for instance, it contains no "assignment," "assign," or "one or more portions."

(Note: Although the original claim recited "portions," it did not recite "a portion." At most, it supports only "two or more portions," not "one or more portions." Further, "portion" and "segment" have different technical meanings: a "portion" can be any part of data, with no particular requirement, whereas a "segment" is a more specific term for a named, discrete portion with a distinct purpose, especially within memory management, networking, or computer graphics. For example, a "portion" of data can be as small as a single character of the whole data, but each "segment" must have a segment attribute, such as the same length or size. A person having ordinary skill in the art would have understood that receiving three portions of data is different from, and broader than, receiving three segments of data.)

For the above reasons, the rejections are maintained. The replies to the above arguments apply equally to similar arguments for other claims.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 6-10 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 6 recites "in the event an assignment to a delegate appliance has not yet been made or the previously assigned delegate appliance is unable to continue transmitting one or more portions of the response, transmitting from the central service endpoint to the end user device one or more portions of the response associated with the request for seamless service"; however, the examiner could not find support for this limitation in the specification. Dependent claims 7-10 are rejected for fully incorporating the deficiencies of their respective base claims by dependency.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 6, 7, 8 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over WEI (U.S. Pub. No. US 20100223364), in view of Sau et al. (U.S. Pub. No. US 20100214921, hereinafter Sau), in view of Cisco ("Cisco IOS Server Load Balancing Configuration Guide," 23 March 2011), and further in view of ANG et al. (U.S. Pub. No. US 20090119087, hereinafter ANG).

For claim 6, WEI discloses a method of adapting delegated content processing and delivery to a mobile end user device in emerging conditions comprising: receiving at a central service endpoint from a mobile end user device a request for service requiring a response to the end user device (WEI: paragraph [0023], "…providing a computer service that is hosted at one or more servers comprised in a set of computing nodes and is accessible to clients via a first network. Providing a second network including a plurality of traffic processing nodes and load balancing means. The load balancing means is configured to provide load balancing among the set of computing nodes running the computer service. Providing means for redirecting network traffic comprising client requests to access the computer service from the first network to the second network.
Providing means for selecting a traffic processing node of the second network for receiving the redirected network traffic comprising the client requests to access the computer service and redirecting the network traffic to the traffic processing node via the means for redirecting network traffic. For every client request for access to the computer service, determining an optimal computing node among the set of computing nodes running the computer service by the traffic processing node via the load balancing means, and then routing the client request to the optimal computing node by the traffic processing node via the second network.” WHERE “central service endpoint” is broadly interpreted as “a computer service that is hosted at one or more servers…via a first network”, WHERE “a mobile end user device” is broadly interpreted as “clients” WHERE “a request for service requiring a response to the end user device” is broadly interpreted as “client request”), issuing a probe to one or more delegate appliances of a plurality of delegate appliances to identify a delegate appliance capable of providing at least a portion of the response to the end user device (WEI: paragraph [0008], “…An effective way to address performance, scalability and availability concerns is to host a web application on multiple servers (server clustering)…including documents, data, code and all other software, to two different data centers (site mirroring), and load balance client requests among these servers (or sites). Load balancing spreads the load among multiple servers. If one server fails, the load balancing mechanism will direct traffic away from the failed server so that the site is still operational…” paragraph [0024], “Implementations of this aspect of the invention may include one or more of the following. 
The load balancing means is a load balancing and failover algorithm…The load balancing algorithm utilizes optimal computing node performance, lowest computing cost, round robin or weighted traffic distribution as computing criteria. The method may further include providing monitoring means for monitoring the status of the traffic processing nodes and the computing nodes. Upon detection of a failed traffic processing node or a failed computing node…The optimal computing node is determined in real-time based on feedback from the monitoring means.”); assigning to a delegate appliance that responds to the probe the task of delivering one or more portions of the response to the end user device that have not been already transmitted to the end user by the central station endpoint (WEI: paragraph [0008], “…Load balancing spreads the load among multiple servers. If one server fails, the load balancing mechanism will direct traffic away from the failed server so that the site is still operational…” where “no longer successfully transmitting portions of the response” is broadly interpreted as “server fails” or “failed server,” paragraph [0024], “Implementations of this aspect of the invention may include one or more of the following. The load balancing means is a load balancing and failover algorithm…The load balancing algorithm utilizes optimal computing node performance, lowest computing cost, round robin or weighted traffic distribution as computing criteria. The method may further include providing monitoring means for monitoring the status of the traffic processing nodes and the computing nodes. 
Upon detection of a failed traffic processing node or a failed computing node…The optimal computing node is determined in real-time based on feedback from the monitoring means.”), determining at the central service endpoint, when a previously selected delegate appliance is no longer successfully transmitting portions of the response to the end user device (WEI: paragraph [0008], “…Load balancing spreads the load among multiple servers. If one server fails, the load balancing mechanism will direct traffic away from the failed server so that the site is still operational…” where “no longer successfully transmitting portions of the response” is broadly interpreted as “server fails” or “failed server,” paragraph [0024], “Implementations of this aspect of the invention may include one or more of the following. The load balancing means is a load balancing and failover algorithm…The load balancing algorithm utilizes optimal computing node performance, lowest computing cost, round robin or weighted traffic distribution as computing criteria. The method may further include providing monitoring means for monitoring the status of the traffic processing nodes and the computing nodes. Upon detection of a failed traffic processing node or a failed computing node…The optimal computing node is determined in real-time based on feedback from the monitoring means.” where “no longer successfully transmitting portions of the response” is broadly interpreted as “a failed traffic processing node or a failed computing node”); in the event an assignment to a delegate appliance has not yet been made or the previously assigned delegate appliance is unable to continue transmitting one or more portions of the response, transmitting from the central service endpoint to the end user device one or more portions of the response associated with the request for seamless service (WEI: paragraph [0024], “Implementations of this aspect of the invention may include one or more of the following. 
The load balancing means is a load balancing and failover algorithm. The second network is an overlay network superimposed over the first network. The traffic processing node inspects the redirected network traffic and routes all client requests originating from the same client session to the same optimal computing node. The method may further include directing responses from the computer service to the client requests originating from the same client session to the traffic processing node of the second network and then directing the responses by the traffic processing node to the same client…The load balancing algorithm utilizes optimal computing node performance, lowest computing cost, round robin or weighted traffic distribution as computing criteria…Upon detection of a failed traffic processing node or a failed computing node, redirecting in real-time network traffic to a non-failed traffic processing node or routing client requests to a non-failed computing node, respectively. The optimal computing node is determined in real-time based on feedback from the monitoring means…” paragraph [0027], “The present invention is a scalable, fault-tolerant traffic management system that performs load balancing and failover. 
Failure of individual nodes within the traffic management system does not cause the failure of the system…" paragraph [0059], "…These traffic processing nodes run specialized traffic handling software to perform functions such as traffic re-direction, traffic splitting, load balancing…" where "portions of the response associated with the request for seamless service" is broadly interpreted as "detection of a failed traffic processing node or a failed computing node, redirecting in real-time network traffic to a non-failed traffic processing node or routing client requests to a non-failed computing node…traffic splitting, load balancing" paragraph [0126], "…Further, the system uses a multi-tiered DNS hierarchy so that it naturally spreads loads onto different YTM nodes to efficiently distribute load and be highly scalable, while being able to adjust the TTL value for different nodes and be responsive to node status changes");

when the previously selected delegate appliance is no longer successfully transmitting portions of the response to the end user, selecting from the remainder of the plurality of processing appliances available to the central service endpoint, in accordance with a network performance criteria, at least one more optimal delegate appliance to act as a delegate for continuing transmission to the end user device one or more portions for the response (WEI: paragraph [0024], "Implementations of this aspect of the invention may include one or more of the following. The load balancing means is a load balancing and failover algorithm… The traffic processing node is selected based on geographic proximity of the traffic processing node to the request originating client. The traffic processing node is selected based on metrics related to load conditions of the traffic processing nodes of the second network.
The traffic processing node is selected based on metrics related to performance statistics of the traffic processing nodes of the second network…The optimal computing node is determined based on the load balancing algorithm…The load balancing algorithm utilizes optimal computing node performance, lowest computing cost, round robin or weighted traffic distribution as computing criteria. The method may further include providing monitoring means for monitoring the status of the traffic processing nodes and the computing nodes. Upon detection of a failed traffic processing node or a failed computing node, redirecting in real-time network traffic to a non-failed traffic processing node or routing client requests to a non-failed computing node, respectively. The optimal computing node is determined in real-time based on feedback from the monitoring means…The second network scales its processing capacity and network capacity by dynamically adjusting the number of traffic processing nodes. The computer service is a web application, web service or email service…” paragraph [0059], “…These traffic processing nodes run specialized traffic handling software to perform functions such as traffic re-direction, traffic splitting, load balancing, traffic inspection, traffic cleansing, traffic optimization, route selection, route optimization, among others. A typical configuration of such nodes includes virtual machines at various cloud computing data centers…enables the virtual network to scale both its processing capacity and network bandwidth capacity. 
A cloud routing network contains a traffic management system 330 that redirects network traffic to its traffic processing units (TPU), a traffic processing mechanism 334 that inspects and processes the network traffic and a global data store 332 that gathers data from different sources and provides global decision support and means to configure and manage the system.” paragraph [0068], “More specifically, when a client issues a request to a server (for example, a consumer enters a web URL into a web browser to access a web site), the default Internet routing mechanism would route the request through the network hops along a certain network path from the client to the target server ("default path"). Using a cloud routing network, if there are multiple server nodes, the cloud routing network first selects an "optimal" server node from the multiple server nodes as the target serve node to serve the request. This server node selection process takes into consideration factors including load balancing, performance, cost, and geographic proximity, among others. 
Secondly, instead of going through the default path, the traffic management service redirects the request to an "optimal" Traffic Processing Unit (TPU) within the overlay network "Optimal" is defined by the system's routing policy, such as being geographically nearest, most cost effective, or a combination of a few factors.” WHERE “selecting from the plurality of processing appliances available to the central service endpoint in physical locations” is broadly interpreted as “traffic processing node is selected based on geographic proximity of the traffic processing node to the request originating client.” WHERE “network performance criteria” is broadly interpreted as “The load balancing algorithm utilizes optimal computing node performance…weighted traffic distribution as computing criteria,” “network bandwidth capacity” and “most cost effective”); transmitting from the central service endpoint to the newly selected delegate, metadata describing the request, the central service endpoint and the progress of the transmission of the response (WEI: paragraph [0024], “Implementations of this aspect of the invention may include one or more of the following. The load balancing means is a load balancing and failover algorithm…The load balancing algorithm utilizes optimal computing node performance, lowest computing cost, round robin or weighted traffic distribution as computing criteria. The method may further include providing monitoring means for monitoring the status of the traffic processing nodes and the computing nodes. Upon detection of a failed traffic processing node or a failed computing node, redirecting in real-time network traffic to a non-failed traffic processing node or routing client requests to a non-failed computing node, respectively. The optimal computing node is determined in real-time based on feedback from the monitoring means. The second network comprises virtual machines nodes. 
The second network scales its processing capacity and network capacity by dynamically adjusting the number of traffic processing nodes. The computer service is a web application, web service or email service”). However, WEI does not explicitly disclose issuing a probe to one or more delegate appliances of a plurality of delegate appliances; the delegate appliance configured to emulate the transmission details of the central service endpoint such that the end user device perceives each of the one or more portions of the response as originating from the central service endpoint, receiving from the end user device at the central service endpoint an acknowledgement of each of the one or more portions of the response successfully received by the end user device; analyzing over time at the central service endpoint, the acknowledgements from the end user device to enable the central station endpoint to maintain the status and tracking of the server. Sau discloses receiving from the end user device at the central service endpoint an acknowledgement of each of the one or more portions of the response successfully received by the end user device (Sau: Abstract, “…selective acknowledgements are received at a sending computer from a receiving computer. The sending computer is configured to analyze patterns in the selective acknowledgements and infer a type of packet loss. 
As a result of the inference, the packet delivery strategy from the sending computer can be adjusted.” paragraphs [0015]-[0019], “…receiving, at a sending computer, selective acknowledgements from a receiving computer…[0016] determining from said selective acknowledgements whether any of said packets were lost; [0017] if said selective acknowledgements indicate none of said packets were lost, maintaining said delivery strategy; [0018] if said selective acknowledgements indicate packets were lost, determining if any of said lost packets were clustered; [0019] if said lost packets were clustered, adjusting said delivery strategy using a first factor to accommodate a first type of packet loss”); analyzing over time at the central service endpoint, the acknowledgements from the end user device to enable the central station endpoint to maintain the status and tracking of the server (Sau: Abstract, “…selective acknowledgements are received at a sending computer from a receiving computer. The sending computer is configured to analyze patterns in the selective acknowledgements and infer a type of packet loss. As a result of the inference, the packet delivery strategy from the sending computer can be adjusted.” paragraphs [0015]-[0019], “…receiving, at a sending computer, selective acknowledgements from a receiving computer…[0016] determining from said selective acknowledgements whether any of said packets were lost; [0017] if said selective acknowledgements indicate none of said packets were lost, maintaining said delivery strategy; [0018] if said selective acknowledgements indicate packets were lost, determining if any of said lost packets were clustered; [0019] if said lost packets were clustered, adjusting said delivery strategy using a first factor to accommodate a first type of packet loss”). 
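The selective-acknowledgement analysis quoted above from Sau infers the type of packet loss from whether lost packets are clustered, then adjusts the delivery strategy accordingly. A minimal sketch of that idea follows; the function name, the SACK-bitmap representation, and the clustering threshold are invented here for illustration and are not taken from Sau:

```python
def classify_loss(sacked: list[bool], cluster_threshold: int = 3) -> str:
    """Classify packet loss from a SACK bitmap (True = acknowledged, False = lost).

    Illustrative only: returns which delivery-strategy adjustment applies.
    """
    lost_runs = []  # lengths of consecutive-loss runs
    run = 0
    for acked in sacked:
        if not acked:
            run += 1
        elif run:
            lost_runs.append(run)
            run = 0
    if run:
        lost_runs.append(run)

    if not lost_runs:
        return "none"       # no loss: maintain the current delivery strategy
    if any(r >= cluster_threshold for r in lost_runs):
        return "clustered"  # e.g. congestion-like loss: adjust with one factor
    return "scattered"      # e.g. isolated loss: adjust with a different factor
```

This mirrors the quoted steps: no loss maintains the strategy, clustered loss triggers one adjustment, and scattered loss another.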
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to improve upon “SYSTEM AND METHOD FOR NETWORK TRAFFIC MANAGEMENT AND LOAD BALANCING” as taught by WEI by implementing “METHOD, APPARATUS AND SYSTEM FOR IMPROVING PACKET THROUGHPUT BASED ON CLASSIFICATION OF PACKET LOSS IN DATA TRANSMISSIONS” as taught by Sau, because it would provide WEI’s method with the enhanced capability of “The sending computer is configured to analyze patterns in the selective acknowledgements and infer a type of packet loss. As a result of the inference, the packet delivery strategy from the sending computer can be adjusted.” (Sau: Abstract). However, WEI and Sau do not explicitly disclose issuing a probe to one or more delegate appliances of a plurality of delegate appliances; the delegate appliance configured to emulate the transmission details of the central service endpoint such that the end user device perceives each of the one or more portions of the response as originating from the central service endpoint. Cisco discloses issuing a probe to one or more delegate appliances of a plurality of delegate appliances (Cisco: page 19, “Probes Probes determine the status of each real server in a server farm, or each firewall in a firewall farm. The Cisco IOS SLB Probe feature supports DNS, HTTP, ping, TCP, custom UDP, and WSP probes…”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to improve upon “SYSTEM AND METHOD FOR NETWORK TRAFFIC MANAGEMENT AND LOAD BALANCING” as taught by WEI by implementing “Cisco IOS Server Load Balancing Configuration Guide” as taught by Cisco, because it would provide WEI’s modified method with the enhanced capability of “…determine the status of each real server…” (Cisco: page 19). 
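The Cisco IOS SLB probes cited above determine the status of each real server by attempting protocol-level exchanges (DNS, HTTP, ping, TCP, custom UDP, WSP). A minimal sketch of a TCP-style health probe, assuming only that a server counts as "up" when the TCP connection completes — this is an illustration, not Cisco's implementation:

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Minimal TCP health probe: the target is 'up' if the handshake completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, etc.
        return False
```

A load balancer would run such probes periodically against every real server in the farm and remove servers whose probes fail.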
However, WEI, Sau and Cisco do not explicitly disclose the delegate appliance configured to emulate the transmission details of the central service endpoint such that the end user device perceives each of the one or more portions of the response as originating from the central service endpoint. ANG discloses the delegate appliance configured to emulate the transmission details of the central service endpoint such that the end user device perceives each of the one or more portions of the response as originating from the central service endpoint, (ANG: paragraph [0058], “…virtual devices can switch between emulation and pass-through (that is, from emulation to pass-through, and from pass-through to emulation). For example, infrequent operations can be handled via emulation, but pass-through can be used for frequent, and perhaps, performance critical operations. Other policies can be employed to decide when to use emulation and when to use pass-through. In addition, the virtual device can be exposed first to one physical device then to another physical device such that, in essence, the virtual device is switched from one hardware device to another hardware device, e.g., for failover or load balancing.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to improve upon “SYSTEM AND METHOD FOR NETWORK TRAFFIC MANAGEMENT AND LOAD BALANCING” as taught by WEI by implementing “PASS-THROUGH AND EMULATION IN A VIRTUAL MACHINE ENVIRONMENT” as taught by ANG, because it would provide WEI’s modified method with the enhanced capability of “…instead of providing virtual devices that are only emulated in software, virtual devices can switch between emulation and pass-through (that is, from emulation to pass-through, and from pass-through to emulation)…” (ANG: paragraph [0058]). 
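The ANG passage cited above describes a virtual device that switches between software emulation and hardware pass-through, handling infrequent operations via emulation and frequent, performance-critical ones via pass-through. A rough sketch of such a switching policy; the class, the backend callables, and the frequency heuristic are all hypothetical:

```python
class VirtualDevice:
    """Hypothetical sketch of an emulation/pass-through switching policy."""

    def __init__(self, physical, emulator, hot_threshold: int = 100):
        self.physical = physical        # pass-through backend (real hardware)
        self.emulator = emulator        # software-emulation backend
        self.hot_threshold = hot_threshold
        self.op_counts: dict[str, int] = {}

    def handle(self, op: str, *args):
        # Count each operation; frequent ("hot") operations go to hardware,
        # infrequent ones stay in emulation.
        count = self.op_counts.get(op, 0) + 1
        self.op_counts[op] = count
        backend = self.physical if count > self.hot_threshold else self.emulator
        return backend(op, *args)
```

The same indirection also permits repointing the virtual device at a different physical device, e.g. for failover or load balancing, as the quoted paragraph notes.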
For claim 7, WEI, Sau, Cisco and ANG disclose the method of claim 6 wherein a more optimal processing appliance is available due to a change in content availability (WEI: paragraph [0024], “Implementations of this aspect of the invention may include one or more of the following. The load balancing means is a load balancing and failover algorithm. The second network is an overlay network superimposed over the first network. The traffic processing node inspects the redirected network traffic and routes all client requests originating from the same client session to the same optimal computing node. The method may further include directing responses from the computer service to the client requests originating from the same client session to the traffic processing node of the second network and then directing the responses by the traffic processing node to the same client…The traffic processing node is selected based on metrics related to performance statistics of the traffic processing nodes of the second network. The traffic processing node is selected based on a sticky-session table mapping clients to the traffic processing nodes. The optimal computing node is determined based on the load balancing algorithm. The load balancing algorithm utilizes optimal computing node performance, lowest computing cost, round robin or weighted traffic distribution as computing criteria. The method may further include providing monitoring means for monitoring the status of the traffic processing nodes and the computing nodes. Upon detection of a failed traffic processing node or a failed computing node, redirecting in real-time network traffic to a non-failed traffic processing node or routing client requests to a non-failed computing node, respectively. The optimal computing node is determined in real-time based on feedback from the monitoring means. The second network comprises virtual machines nodes. 
The second network scales its processing capacity and network capacity by dynamically adjusting the number of traffic processing nodes. The computer service is a web application, web service or email service.” WHERE “a change in content availability” is broadly interpreted as “failover” and “detection of a failed traffic processing node or a failed computing node, redirecting in real-time network traffic to a non-failed traffic processing node or routing client requests to a non-failed computing node” (e.g. after nodes are failed and cannot provide content, therefore, “content availability” is changed). For claim 8, WEI, Sau, Cisco and ANG disclose the method of claim 6 wherein a more optimal processing appliance is available due to device capacity or failure (WEI: paragraph [0024], “Implementations of this aspect of the invention may include one or more of the following. The load balancing means is a load balancing and failover algorithm. The second network is an overlay network superimposed over the first network. The traffic processing node inspects the redirected network traffic and routes all client requests originating from the same client session to the same optimal computing node. The method may further include directing responses from the computer service to the client requests originating from the same client session to the traffic processing node of the second network and then directing the responses by the traffic processing node to the same client…The traffic processing node is selected based on metrics related to performance statistics of the traffic processing nodes of the second network. The traffic processing node is selected based on a sticky-session table mapping clients to the traffic processing nodes. The optimal computing node is determined based on the load balancing algorithm. The load balancing algorithm utilizes optimal computing node performance, lowest computing cost, round robin or weighted traffic distribution as computing criteria. 
The method may further include providing monitoring means for monitoring the status of the traffic processing nodes and the computing nodes. Upon detection of a failed traffic processing node or a failed computing node, redirecting in real-time network traffic to a non-failed traffic processing node or routing client requests to a non-failed computing node, respectively. The optimal computing node is determined in real-time based on feedback from the monitoring means. The second network comprises virtual machines nodes. The second network scales its processing capacity and network capacity by dynamically adjusting the number of traffic processing nodes. The computer service is a web application, web service or email service.” WHERE “device capacity or failure” is broadly interpreted as “failover” and “detection of a failed traffic processing node or a failed computing node, redirecting in real-time network traffic to a non-failed traffic processing node or routing client requests to a non-failed computing node”). For claim 10, WEI, Sau, Cisco and ANG disclose the method of claim 6 wherein a more optimal processing appliance is available due to network congestion (WEI: paragraph [0024], “Implementations of this aspect of the invention may include one or more of the following. The load balancing means is a load balancing and failover algorithm. The second network is an overlay network superimposed over the first network. The traffic processing node inspects the redirected network traffic and routes all client requests originating from the same client session to the same optimal computing node. 
The method may further include directing responses from the computer service to the client requests originating from the same client session to the traffic processing node of the second network and then directing the responses by the traffic processing node to the same client…The traffic processing node is selected based on metrics related to performance statistics of the traffic processing nodes of the second network. The traffic processing node is selected based on a sticky-session table mapping clients to the traffic processing nodes. The optimal computing node is determined based on the load balancing algorithm. The load balancing algorithm utilizes optimal computing node performance, lowest computing cost, round robin or weighted traffic distribution as computing criteria. The method may further include providing monitoring means for monitoring the status of the traffic processing nodes and the computing nodes. Upon detection of a failed traffic processing node or a failed computing node, redirecting in real-time network traffic to a non-failed traffic processing node or routing client requests to a non-failed computing node, respectively. The optimal computing node is determined in real-time based on feedback from the monitoring means. The second network comprises virtual machines nodes. The second network scales its processing capacity and network capacity by dynamically adjusting the number of traffic processing nodes. The computer service is a web application, web service or email service.” Paragraph [0056], “When traffic grows to a certain level, the virtual network starts up more TPUs as a way to increase its processing power as well as bandwidth capacity.” WHERE “processing appliance is available due to network congestion” is broadly interpreted as “When traffic grows to a certain level, the virtual network starts up more TPUs as a way to increase its processing power as well as bandwidth capacity”.) Claim 9 is rejected under 35 U.S.C. 
103 as being unpatentable over WEI (U.S. Pub. No.: US 20100223364), in view of Sau et al. (U.S. Pub. No.: US 20100214921, hereinafter Sau), Cisco (“Cisco IOS Server Load Balancing Configuration Guide,” 23 March 2011), and further in view of ANG et al. (U.S. Pub. No.: US 20090119087, hereinafter ANG), and further in view of Diwan et al. (U.S. Pub. No.: US 20020198937, hereinafter Diwan). For claim 9, WEI, Sau, Cisco and ANG disclose the method of claim 6. However, WEI, Sau, Cisco and ANG do not explicitly disclose wherein a more optimal processing appliance is available due to a mobile end user device migration. Diwan discloses wherein a more optimal processing appliance is available due to a mobile end user device migration (Diwan: paragraph [0018], “…a content-request redirection system of the present invention receives near real-time information on the disposition of content items at servers and services and receives near real-time information on the operational status and load of servers and other components and services at the edges. This information can include, for example, how many content streams the server can transmit, the delivery rate of each stream, and an amount of headroom assigned to the server in order to avoid overloads. Thus, in embodiments, selection of the location, server or service for serving content items in response to a user request may be based on one or a combination of factors, including the proximity of the server and the user, the availability and load of the server, the availability of the content item, and the cost of transmitting the content item from a server to the user…” WHERE “a mobile end user device migration” is broadly interpreted as “proximity of the server and the user” (e.g. a person having ordinary skill in the art would have understood that, as the user moves, the serving server is changed correspondingly to maintain “proximity of the server and the user”)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to improve upon “SYSTEM AND METHOD FOR NETWORK TRAFFIC MANAGEMENT AND LOAD BALANCING” as taught by WEI by implementing “METHOD, APPARATUS AND SYSTEM FOR IMPROVING PACKET THROUGHPUT BASED ON CLASSIFICATION OF PACKET LOSS IN DATA TRANSMISSIONS” as taught by Sau, because it would provide WEI’s modified method with the enhanced capability of “Thus, in embodiments, selection of the location, server or service for serving cont

Prosecution Timeline

Mar 15, 2021: Application Filed
Mar 15, 2021: Response after Non-Final Action
May 20, 2023: Non-Final Rejection — §103, §112
Nov 27, 2023: Response Filed
Jan 26, 2024: Final Rejection — §103, §112
Jul 31, 2024: Notice of Allowance
Feb 28, 2025: Request for Continued Examination
Mar 05, 2025: Response after Non-Final Action
Mar 22, 2025: Non-Final Rejection — §103, §112
Sep 29, 2025: Response Filed
Oct 04, 2025: Final Rejection — §103, §112
Apr 06, 2026: Request for Continued Examination
Apr 09, 2026: Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602427: TRANSFORMING DATA FROM STREAMING MEDIA (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602393: Modular Execution and Management of Extract, Transform, Load (ETL) Processes (granted Apr 14, 2026; 2y 5m to grant)
Patent 12585720: INTERACTIVE GEOGRAPHICAL MAP (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579105: DELETING DATA IN A VERSIONED DATABASE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572576: RECOMMENDER METHODS AND SYSTEMS FOR PATENT PROCESSING (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 51%
With Interview: 94% (+42.3%)
Median Time to Grant: 4y 4m
PTA Risk: High
Based on 360 resolved cases by this examiner. Grant probability derived from career allow rate.
