Prosecution Insights
Last updated: April 19, 2026
Application No. 18/608,604

TRACKING BEHIND-THE-SERVICE ENDPOINTS IN A SERVICE CHAIN

Non-Final OA §103
Filed: Mar 18, 2024
Examiner: KATSIKIS, KOSTAS J
Art Unit: 2441
Tech Center: 2400 — Computer Networks
Assignee: Cisco Technology Inc.
OA Round: 1 (Non-Final)

Grant Probability: 81% (Favorable)
Predicted OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (above average; 613 granted / 758 resolved; +22.9% vs TC avg)
Interview Lift: +28.9% among resolved cases with interview (strong)
Typical Timeline: 2y 9m average prosecution
Career History: 766 total applications across all art units; 8 currently pending

Statute-Specific Performance

Statute   Rate    vs TC avg
§101      14.2%   -25.8%
§103      43.1%   +3.1%
§102      14.2%   -25.8%
§112      16.4%   -23.6%

TC averages are estimates • Based on career data from 758 resolved cases

Office Action

§103
DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This communication is in response to the Application filed on March 18, 2024, in which claims 1-20 have been presented for examination.

Status of Claims

3. Claims 1-20 are pending, of which claims 1-6, 8-14 and 16-20 are rejected under 35 U.S.C. 103.

Priority

4. Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 119(e) as follows: The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original non-provisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).

The disclosure of the prior-filed application, Provisional Application No. 63/609,831, having a filing date of 12/13/2023, hereinafter “Provisional Application,” fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. Indeed, Examiner has checked the entire Provisional Application in detail and can find no explanation or related disclosure of how Applicant has chosen to achieve the claimed limitations or to program such a machine to perform the claimed limitations. 
Examiner notes that the entire 3-page Provisional Application is devoid of any substantive detail explaining how the inventive concepts are to be implemented. Examiner wishes to draw Applicant’s attention to the subject matter discussed in the Provisional Application, particularly spanning paragraphs [0009]-[0010] and [0012]-[0013]. For example, paragraph [0009] recites, “FIG. 1 illustrates an example of a service chain deployment. In FIG. 1, the way to track the service endpoints can be by tracking IP1, IP2, IP3 (which is reachable via the service), IP4 (which is reachable via the service), and IP5 (reachable through the tunnel). Tracking IP1 and IP2 can tell if the service Endpoint is reachable but does not tell if the service is up. IP3, IP4 and IP5 are called Behind-the-Service IPs. The tracker packet needs to go to the service and then through the service to imply that the service is in-fact, up. Without this, a user would have to deploy additional mechanisms to determine if the service is truly up or not. Which would be an extra maintenance and orchestration for the user. An additional problem is routing to IP3, IP4 and IP5 would require custom routing to be setup in the SC-Hub by the user. This is an orchestration inconvenience. In the case of IP3, the user may have to setup custom routing in the firewall” (Recited from paragraph [0009] of Provisional Application).

In addition, paragraph [0010] recites, “This disclosure provide a solution to avoid this step (as we have the knowledge of where the tracker packet will return from). With this disclosure, a feature is provided to solve the issue to track IP3, IP4 and IP5. This feature provides a way to track the Behind-the-Service endpoints IP. 
Also, to force the tracer packet over the interface where the service endpoint is configured, which will avoid additional route lookup on device and user need not configure any additional routes for the Behind-the-Service” (Recited from paragraph [0010] of Provisional Application). Importantly, while Applicant has indicated that without a “tracker packet” going to the service, and then through the service, to imply that the service is in-fact up, a user would have to deploy additional mechanisms to determine if the service is truly up or not, which would be an extra maintenance and orchestration for the user, and that, an additional problem is routing to IP3, IP4 and IP5 would require the orchestration inconvenience of custom routing to be setup in the SC-Hub by the user, the Provisional Application is nonetheless devoid of any detail as to how mechanisms are implemented to mitigate against this. In particular, while the Provisional Application goes on to describe that a solution is provided to avoid this step (as we have the knowledge of where the tracker packet will return from), and that a “feature” is provided to solve the issue to track IP3, IP4 and IP5, thus providing a way to track the Behind-the-Service endpoints IP, as well as forcing the tracker packet over the interface where the service endpoint is configured, which will avoid additional route lookup on device, and precluding the user from having to configure any additional routes for the Behind-the-Service, nevertheless, again, the Provisional Application is devoid of any detail and/or the subject matter disclosed in the instant Application, and delineated by the independent claims. 
While paragraph [0012] further recites, “using the solution described herein, tracker Ip for each of the HA-PAIR can be overridden, this will help user to have multiple paths to test towards the service and also since the IP is behind the Service, this will make sure the service is really up, before declaring it up or vice versa” (Recited from paragraph [0012] of Provisional Application), and while paragraph [0013] goes on to recite, “Also, if we need to send tracker packet towards the service, a custom route over the interface should exist. With our innovation additional custom routing is not needed. This solution includes using the routing from the Service IP endpoint and Service Outgoing interface to force the tracker packets to the Behind-the-Service IP. This reduces work on admin and prevents networking errors. With the innovation described herein, the tracker packet would be sent over the same interface over which the service is configured” (Recited from paragraph [0013] of Provisional Application), nevertheless, the Provisional Application fails to describe how the routing from the Service IP endpoint and Service Outgoing interface forces the tracker packets to the Behind-the-Service IP. 
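For orientation only, the mechanism the Provisional Application gestures at (deriving the tracker route from the service endpoint's own address and outgoing interface, so the user configures no custom Behind-the-Service routes) can be sketched as follows. This is a hypothetical reading, not a disclosed implementation; every name here (`ServiceEndpoint`, `tracker_route`, the sample addresses and interface) is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ServiceEndpoint:
    service_ip: str          # tracked service endpoint address (IP1/IP2 in FIG. 1)
    outgoing_interface: str  # interface over which the service is configured

def tracker_route(endpoint: ServiceEndpoint, behind_service_ip: str) -> dict:
    """Derive a host route that forces tracker packets for a
    Behind-the-Service IP (IP3/IP4/IP5 in FIG. 1) over the service's
    own outgoing interface, reusing the service endpoint as next hop,
    so no additional route lookup or custom route is needed."""
    return {
        "destination": f"{behind_service_ip}/32",
        "next_hop": endpoint.service_ip,
        "interface": endpoint.outgoing_interface,
    }

# A reply to a tracker packet sent along this route traverses the service
# itself, so reachability of the Behind-the-Service IP implies the service
# is actually up.
fw = ServiceEndpoint(service_ip="203.0.113.1", outgoing_interface="GigabitEthernet0/1")
route = tracker_route(fw, "198.51.100.7")
```

The point of the sketch is only that the route is computed from state the controller already holds, which is the step the Examiner finds undescribed in the Provisional Application.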
Importantly, the Provisional Application lacks any support for the claimed limitations of identifying, by a network controller associated with a computing resource network, a service executing on a service endpoint device associated with the computing resource network; determining a first internet protocol (IP) address associated with the service endpoint device; determining an outgoing interface of the service endpoint device, the outgoing interface being configured to transmit network traffic to the service; installing, by the network controller, a second IP address in association with the service; and installing, by the network controller, a route in association with the service, the route being configured to transmit packets addressed to the first IP address through the outgoing interface and to the second IP address, as recited in independent claim 1, identifying, by a network controller associated with a computing resource network, a service executing on a service endpoint device associated with the computing resource network; determining a tunnel interface associated with the service endpoint device, the tunnel interface configured to transmit network traffic to the service; installing, by the network controller, an IP address in association with the service; and installing, by the network controller, a route in association with the service, the route being configured to transmit packets addressed to the service endpoint device through the tunnel interface and to the IP address, as recited in independent claim 9, and identifying, by a network controller associated with a computing resource network, a service executing on a service endpoint device associated with the computing resource network; determining a first internet protocol (IP) address associated with the service endpoint device; determining an outgoing interface associated with the service endpoint device, the outgoing interface being configured to transmit network traffic to the service; 
installing, by the network controller, a second IP address in association with a service hub associated with the computing resource network; and installing, by the network controller, a route in association with the service, the route being configured to transmit packets addressed to the first IP address through the outgoing interface and to the second IP address, as recited in independent claim 16. Examiner further notes that FIGS. 1 and 2 of instant Specification are missing from the Provisional Application, and the subject matter spanning paragraphs [0030]-[0047], which reference FIGS. 1 and 2, is equally missing from the Provisional Application. Accordingly, the instant Application fails to comply with the requirements in the manner provided by 112(a), and cannot be afforded a filing date of 12/13/2023, but will rather be afforded an effective filing date of 03/18/2024.

Information Disclosure Statement

5. The information disclosure statements, filed on March 18, 2024, and March 24, 2025, are in compliance with the provisions of 37 CFR 1.97, 1.98 and MPEP § 609. They have been placed in the application file, and the information referred to therein has been considered as to the merits.

Claim Rejections - 35 USC § 103

6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

7. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

8. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

9. Claims 1-4, 6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Berenberg et al. (United States Patent No. US 11,838,199 B1), hereinafter “Berenberg,” in view of Wells et al. (United States Patent Application Publication No. US 2020/0389427 A1), hereinafter “Wells.”

Regarding claim 1, Berenberg discloses a system comprising: one or more processors (wherein FIG. 6 illustrates a network endpoint group (NEG) controller and an example Load Balance Controller. Each controller may include, e.g., a memory storing data and instructions, and one or more processors) (Berenberg, FIG. 6, col. 16, ll. 
46-50); and one or more computer-readable media storing computer-executable instructions that, when executed by the one or more processors (again, the memory storing data and instructions. In addition, Berenberg teaches that the subject matter described in this specification can be implemented as one or more computer programs embodied on a tangible medium, i.e., one or more modules of computer program instructions, encoded on one or more computer storage media for execution by, or to control the operation of, a data processing apparatus) (FIG. 6 col. 16, ll. 46-50, col. 19, ll. 2-7), cause the one or more processors to perform operations comprising: identifying, by a network controller associated with a computing resource network, a service executing on a service endpoint device associated with the computing resource network (wherein Berenberg teaches that various backend service management platforms or tools can be used to manage network endpoint groups (NEGs) implemented in environments of virtualized computing instances. These tools operate as backend service orchestrators and expose public application programming interfaces (APIs), which can be used to manage the virtual machines or containers within the cloud computing environment. Example tools for backend service orchestration and management include Kubernetes, Apache Mesos, and Cloud Foundry. More specifically, these tools are designed to automate deploying, scaling and operation of backend services configured as application containers. For example, using Kubernetes, a load balancer may target container pods. A pod may include a group of one or more containers. A pod may encapsulate an application container or multiple application containers. Berenberg teaches that the services, or containers in a pod, may be deployed together, and may be started, stopped, and replicated as a group. See FIG. 1, illustrating backend services configured as application containers, part of virtualized environment. 
In addition, as further shown in FIG. 2, each container in a NEG receives and transmits control data with a NEG Controller so that the operational status and processing capacity of each container or virtual computing instance in the NEG are known. For example, if Container 4 experiences a faulty port, a buffer overload, an out-of-memory condition, or any other issue that may affect the operational status of the container, the NEG Controller may receive control data indicating the change of operational status of Container 4 and may transmit control data to the Load Balancer Controller. The Load Balance Controller may recalculate the load balancing of data requests and transmit updated forwarding rules to the Application 2 Load Balancer causing the Application 2 Load Balancer to redistribute the data requests from user devices (e.g., user device A and/or B)) (Berenberg, FIGS. 1 and 2, col. 6, l. 57-col.7, l. 6, col. 10, l. 58-col. 11, l. 5). Berenberg further notes that because the network endpoints in the NEG have unique IP or IP address:port tuples, the network endpoints within a NEG can be a specific target for a backend service application load balancer. In this way, the network endpoints that are grouped to form a NEG provide greater load balancing capabilities and require less redundant load balancing at different architectural levels than is commonly required in backend service architectures implemented on virtual machines which do not uniquely address each containerized instance of an application. For example, as shown in FIG. 2, NEG 1 includes two containers associated with Application 1. Container 1 is implemented with Application 1-1 and has a unique IP address:port tuple of 192.168.0.1:80. Container 2 is implemented with Application 1-2 and has a unique IP::Port address of 192.168.1.1:82. The two containers are implemented on different virtual machines. Container 1 is implemented on VM1, while Container 2 is implemented on VM2. 
The unique IP address:port tuples of each container are aliased from a range of IP addresses and ports that are available on each VM. Berenberg teaches that by using IP address aliasing, a container may be configured with a specific IP address:port tuple that can be used by the Application Load Balancer to distribute load balanced data requests or service traffic to the containers in the NEG. For example, Application Load Balancer 2 can distribute data requests and service traffic to Container 3 and Container 4, respectively based on the backend service (e.g., Application 2) that is associated with NEG 2. As further shown in FIG. 2, Container 3 is implemented on the same VM as Container 1 even though the two containers are servicing different backend service applications (See Berenberg, col. 10, ll.27-57). Berenberg does not explicitly disclose determining a first internet protocol (IP) address associated with the service endpoint device; determining an outgoing interface of the service endpoint device, the outgoing interface being configured to transmit network traffic to the service; installing, by the network controller, a second IP address in association with the service; and installing, by the network controller, a route in association with the service, the route being configured to transmit packets addressed to the first IP address through the outgoing interface and to the second IP address. However in an analogous art, Wells discloses determining a first internet protocol (IP) address associated with a service endpoint device (wherein with reference to FIG. 2A, Wells discloses a server 210 configured to provide a service 212. As such, server 210 is a service endpoint device. Server 210 is configured with two interfaces, interface 214 and interface 216. As illustrated in FIG. 2, interface 214 and interface 216 both connect to router 220. 
According to other example embodiments, router 220 could be replaced by, e.g., a DHCP server, such as DHCP server 140 of FIG. 1. In addition, Wells suggests that router 220 could also be replaced by another device, such as a Software Defined Network (SDN) controller. When server 210 initially connects through router 220 to a network environment, it may use an automatic configuration protocol, such as DHCP, to configure interfaces 214 and 216 with appropriate Internet Protocol (IP) addresses. A message exchange implementing such a configuration is illustrated through pseudo code 250 and 252 for each of interfaces 214 and 216, respectively. Specifically, interface 214 broadcasts a discovery message 262 which indicates that server 210 has connected to the network environment via interface 214. A configuration controller will respond with an offer of an Internet Protocol (IP) address through message 264. According to the example embodiment of FIG. 2A, router 220 serves as a DHCP server, and therefore, router 220 sends offer 264, which includes a proposed IP address, in this case an address of “10.0.0.2.” Server 210 responds to offer 264 with request 266 via which server 210 requests that the address from offer 264 be assigned to interface 214. Router 220 responds with acknowledgement 268, at which point server 210 configures interface 214 with the address from offer 264. An analogous process for interface 216 is implemented through pseudocode 252 via messages 272-278. The IP address associated with offer message 274 is “10.0.1.2.” Accordingly, at the completion of the configurations, interface 214 is configured with an IP address of “10.0.0.2” and interface 216 is configured with an IP address of 10.0.1.2,” as illustrated in FIG. 2B) (Wells, FIGS. 
2A and 2B, paragraphs [0015], [0018] and [0019]); determining an outgoing interface of the service endpoint device, the outgoing interface being configured to transmit network traffic to a service (wherein with continued reference to FIG. 2C, depicted therein is the configuration of server 210 with a loopback address for service 212 from router 220. Specifically, each of interfaces 214 and 216 sends a discovery message, discovery messages 282 and 292, respectively. Discovery messages 282 and 292 indicate that a loopback address is needed for service 212, as well as the lowest IP address associated with server 210, i.e., the lower of the two IP addresses associated with interfaces 214 and 216. This information may be communicated in discovery messages 282 and 292 through, e.g., a DHCP option value. As illustrated in the pseudocode of message exchange 280, discovery message 282 from interface 214 indicates that the discovery message is intended to receive an offer for a loopback address for service 212 (i.e., function “XYZ”) and that the lowest IP address associated with the interfaces of server 210 is “10.0.0.2,” which is the address associated with interface 214, the source of message 282. In addition, message 292 of message exchange 290 also indicates that that discovery message is intended to receive an offer for a loopback address for service 212 (i.e., function “XYZ”) and that the lowest IP address associated with the interfaces of server 210 is “10.0.0.2,” which again, is the address associated with interface 214, not the address associated with interface 216) (Wells, FIG. 
2C, paragraphs [0020] and [0023]); installing, by the network controller, a second IP address in association with the service (wherein as discussed and shown above, discovery message 282 indicates that the discovery message is intended to receive an offer for a loopback address for service 212 (i.e., function “XYZ”) and that the lowest IP address associated with the interfaces of server 210 is “10.0.0.2,” which is the address associated with interface 214, the source of message 282. In addition, message 292 of message exchange 290 also indicates that that discovery message is intended to receive an offer for a loopback address for service 212 (i.e., function “XYZ”) and that the lowest IP address associated with the interfaces of server 210 is “10.0.0.2,” which is the address associated with interface 214, not the address associated with interface 216. Both messages 282 and 292 include the lowest IP address associated with either of interfaces 214 and 216 to ensure that router 220 is aware that messages 282 and 292 are both originating from the same server 210. Wells teaches that because both of messages 282 and 292 indicate the same service 212 (i.e., function “XYZ”), router 220 provides the same address when it responds with offer messages 284 and 294, respectively. Router 220 responds to both interfaces 214 and 216 through offer messages 286 and 296, respectively, which offer the same IP address of “10.1.1.2” to both of interfaces 214 and 216 for assignment to service 212. Accordingly IP address “10.1.1.2” is the second IP address installed in association with service 212) (Wells, FIG. 2C, paragraph [0023]); and installing, by the network controller, a route in association with the service, the route being configured to transmit packets addressed to the first IP address through the outgoing interface and to the second IP address (wherein packets addressed to IP address “10.0.0.2” are routed through IP address “10.1.1.2” for service 212. 
That is, subsequent to the sending of acknowledgement messages 288 and 298, router 220 installs a route to service 212 through interface 214, which includes the address for interface 214 of “10.0.0.2” and the loopback address “10.1.1.2.,” and likewise installs a route to service 212 through interface 216, which includes the address for interface 216 of “10.0.1.2” and the same loopback address “10.1.1.2.”) (Wells, paragraph [0028]). Berenberg is analogous art because Berenberg is reasonably pertinent to the particular problem with which the inventor was concerned, as one of the aspects of the disclosure of Berenberg is directed towards scaling network endpoint groups (NEGs), particularly by adding a network endpoint, as well as assigning an associated IP address thereto, and updating forwarding rules based on the assigned IP address (See Berenberg, col. 3, ll. 7-21), while Wells is analogous art, because Wells is from the same problem solving area, namely, programming routes for a service executing on a service endpoint device using a loopback address (See Wells, paragraph [0011]). 
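As a reading aid only, the route state the Office Action attributes to Wells (para. [0028]) can be modeled as two installed routes, one per server interface, both resolving to the same loopback address. The dictionary shape and variable names below are invented for illustration, not drawn from Wells.

```python
# Hypothetical model of the routes router 220 installs to service 212:
# each route is keyed by the server interface address it traverses, and
# both carry the shared loopback address offered for the service.
routes_to_service = [
    {"service": "212", "via_interface_ip": "10.0.0.2", "loopback": "10.1.1.2"},
    {"service": "212", "via_interface_ip": "10.0.1.2", "loopback": "10.1.1.2"},
]

# Because both routes terminate at one loopback, service 212 stays
# reachable over either physical path.
shared_loopbacks = {r["loopback"] for r in routes_to_service}
```

This is the state the rejection maps onto the claimed "route being configured to transmit packets addressed to the first IP address through the outgoing interface and to the second IP address."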
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Berenberg and Wells before him or her, to modify the system of Berenberg to include the additional limitations of determining a first internet protocol (IP) address associated with the service endpoint device; determining an outgoing interface of the service endpoint device, the outgoing interface being configured to transmit network traffic to the service; installing, by the network controller, a second IP address in association with the service; and installing, by the network controller, a route in association with the service, the route being configured to transmit packets addressed to the first IP address through the outgoing interface and to the second IP address, as disclosed in Wells, with reasonable expectation that this would result in a system with additional redundancy techniques for traffic destined for a [backend] service.

That is, because router 220 has two installed routes to service 212, one through interface 214 and another through interface 216, router 220 may implement Equal-Cost Multipath (ECMP) routing load balancing and link redundancy techniques for traffic destined for service 212. For example, Wells teaches that if both interfaces 214 and 216 are operational, traffic received at router 220 destined for service 212 may be sent through either of interfaces 214 and 216 according to ECMP techniques. Furthermore, if either of interfaces 214 or 216 becomes inoperable, the other interface may serve as a redundant path through which traffic may be transmitted from router 220 to server 210 (See Wells, paragraph [0029]). This method of improving the architecture of Berenberg was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Wells. 
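The ECMP behavior the rejection cites from Wells (para. [0029]) amounts to hashing each flow onto one of the operational routes and falling back to the surviving route on failure. The sketch below is a generic illustration of that technique under assumed names (`pick_route`, the sample addresses), not Wells' implementation.

```python
import hashlib

def pick_route(flow_id: str, routes: list[dict], up: set[str]) -> dict:
    """Hash a flow onto one operational route (ECMP); if an interface
    goes down, remaining routes absorb its traffic."""
    candidates = [r for r in routes if r["via_interface_ip"] in up]
    if not candidates:
        raise RuntimeError("service unreachable: no operational route")
    h = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16)
    return candidates[h % len(candidates)]

routes = [
    {"via_interface_ip": "10.0.0.2", "loopback": "10.1.1.2"},
    {"via_interface_ip": "10.0.1.2", "loopback": "10.1.1.2"},
]

# Both interfaces up: the flow hashes onto one of the two equal-cost paths.
r = pick_route("flowA", routes, up={"10.0.0.2", "10.0.1.2"})
# Interface 10.0.0.2 down: the redundant path carries the traffic.
r2 = pick_route("flowA", routes, up={"10.0.1.2"})
```

Either way the packet reaches the same loopback address, which is the redundancy rationale the Examiner relies on for the combination.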
Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of Berenberg with Wells to obtain the invention as specified in claim 1.

Regarding claim 2, Berenberg-Wells discloses the system of claim 1, wherein the second IP address is configured as a loopback address associated with the service endpoint device (wherein again, the destination address of traffic sent to services 112/212 may be configured as a loopback address within servers 110/210) (Wells, FIGS. 1 and 2C, paragraphs [0015] and [0020]). As discussed and shown above, Berenberg is analogous art because Berenberg is reasonably pertinent to the particular problem with which the inventor was concerned, as one of the aspects of the disclosure of Berenberg is directed towards scaling network endpoint groups (NEGs), particularly by adding a network endpoint, as well as assigning an associated IP address thereto, and updating forwarding rules based on the assigned IP address (See Berenberg, col. 3, ll. 7-21), while Wells is analogous art, because Wells is from the same problem solving area, namely, programming routes for a service executing on a service endpoint device using a loopback address (See Wells, paragraph [0011]). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Berenberg and Wells before him or her, to modify the system of Berenberg to include the additional limitation of wherein the second IP address is configured as a loopback address associated with the service endpoint device, as disclosed in Wells, with reasonable expectation that this would result in a system that efficiently routes traffic from the server interfaces to the [backend] services, applications and/or functions of the given endpoint (See Wells, paragraph [0003]). 
This method of improving the architecture of Berenberg was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Wells. Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of Berenberg with Wells to obtain the invention as specified in claim 2.

Regarding claim 3, Berenberg-Wells discloses the system of claim 1, wherein the second IP address is provisioned as an endpoint executing behind the service on the service endpoint device (again, the second IP address is provisioned for the service endpoint of the [backend] service) (Wells, paragraph [0015]). Again, Berenberg is analogous art because Berenberg is reasonably pertinent to the particular problem with which the inventor was concerned, as one of the aspects of the disclosure of Berenberg is directed towards scaling network endpoint groups (NEGs), particularly by adding a network endpoint, as well as assigning an associated IP address thereto, and updating forwarding rules based on the assigned IP address (See Berenberg, col. 3, ll. 7-21), while Wells is analogous art, because Wells is from the same problem solving area, namely, programming routes for a service executing on a service endpoint device using a loopback address (See Wells, paragraph [0011]). 
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Berenberg and Wells before him or her, to modify the system of Berenberg to include the additional limitation of wherein the second IP address is provisioned as an endpoint executing behind the service on the service endpoint device, as disclosed in Wells, with reasonable expectation that this would result in traffic traversing the network environment destined for a backend service being addressed with the appropriate loopback address, thus ensuring that the loopback address utilized by the server does not conflict with other addresses within the network environment (See Wells, paragraph [0015]). This method of improving the architecture of Berenberg was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Wells. Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of Berenberg with Wells to obtain the invention as specified in claim 3.

Regarding claim 4, Berenberg-Wells discloses the system of claim 1, wherein the route is a first route (again, multiple routes installed, one via interface 214, one via interface 216) (Wells, paragraph [0029]), and the operations further comprising: installing a third IP address in association with the service (wherein again, IP address “10.1.1.2” is also installed for interface 216) (Wells, paragraphs [0029] and [0030]); and installing a second route in association with the service, the second route being configured to transmit network traffic addressed to the first IP address through the outgoing interface and to the third IP address (wherein again, second route is provisioned to service 212 from interface 216. 
Examiner notes that while the claim language recites “installing a third IP address,” nevertheless, the claim fails to indicate that the third address is in fact a different loopback address. Accordingly, Wells does teach that two addresses are installed, one for each interface 214, 216, though they happen to be the same loopback address for service 212) (Wells, paragraphs [0029] and [0030]). Again, Berenberg is analogous art because Berenberg is reasonably pertinent to the particular problem with which the inventor was concerned, as one of the aspects of the disclosure of Berenberg is directed towards scaling network endpoint groups (NEGs), particularly by adding a network endpoint, as well as assigning an associated IP address thereto, and updating forwarding rules based on the assigned IP address (See Berenberg, col. 3, ll. 7-21), while Wells is analogous art, because Wells is from the same problem solving area, namely, programming routes for a service executing on a service endpoint device using a loopback address (See Wells, paragraph [0011]). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Berenberg and Wells before him or her, to modify the system of Berenberg to include the additional limitations of wherein the route is a first route, and the operations further comprising: installing a third IP address in association with the service; and installing a second route in association with the service, the second route being configured to transmit network traffic addressed to the first IP address through the outgoing interface and to the third IP address, as disclosed in Wells, with reasonable expectation that this would result in implementing ECMP techniques through the two routes, the two routes providing increased bandwidth and the two routes providing redundancy in the event of a route failure (See Wells, paragraph [0030]). 
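For illustration only, the examiner's reading of Wells for claim 4 (two routes installed through different interfaces, both resolving to the same loopback address for the service) can be sketched as follows. This is a hypothetical sketch; the data structure is assumed, and only the interface numbers and addresses come from the cited passages of Wells.

```python
# Two installed routes to service 212, one per interface; both point to
# the same loopback address. Values mirror Wells's FIG. 2C example as
# quoted in the Office Action; the dict layout is purely illustrative.
routes = [
    {"service": "212", "interface": "214", "next_hop": "10.0.0.2", "loopback": "10.1.1.2"},
    {"service": "212", "interface": "216", "next_hop": "10.0.1.2", "loopback": "10.1.1.2"},
]

# The "second" and "third" installed addresses are the loopback entries;
# on the examiner's reading, nothing in the claim requires them to differ.
installed = [r["loopback"] for r in routes]
assert installed[0] == installed[1]  # same loopback address on both routes
```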
This method of improving the architecture of Berenberg was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Wells. Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of Berenberg with Wells to obtain the invention as specified in claim 4. Regarding claim 6, Berenberg-Wells discloses the system of claim 1, the operations further comprising: receiving route information from a customer device associated with the network traffic (wherein Berenberg further teaches that in some implementations, a user may create a NEG and/or delete a NEG by interacting with a user interface or a command line programming interface. The user interface or command line programming interface may be provided by the NEG Controller. In other implementations, the user interface or command line programming interface may be provided by the backend service orchestrator platform. In some implementations, a user may list all the NEGs in a project by interacting with the user interface or command line programming interface. Additionally, or alternatively, in some implementations, a user may interact with the user interface or command line programming interface to add network endpoints to an existing NEG. In other implementations, a user may interact with the user interface or command line programming interface to remove network endpoints from an existing NEG. In some implementations, a user may interact with the user interface or command line programming interface to list all the network endpoints in a NEG. In other implementations, a user may interact with the user interface or command line programming interface to attach a NEG as a backend service of an Application Load Balancer. With regard to attaching additional network endpoints, a list of endpoints may be attached to a specified NEG. According to some load balancing examples, one or more conditions may apply. 
Examples of such conditions include that the VM instance be specified with each network endpoint, that the IP address for a network endpoint belong to the specified VM instance, that the specified VM instance belong to a zone and network in the NEG resource, that the port be specified with each network endpoint or a default port be specified in the NEG, and that all IP:port values in the NEG are unique) (Berenberg, col. 15, ll. 43-52, col. 15, l. 58-col. 16, l. 2, col. 16, ll. 22-32), the route information indicating the outgoing interface (again, conditions include that the port be specified with each network endpoint or a default port be specified in the NEG) (Berenberg, col. 16, ll. 29-31), and the first IP address (again, conditions include that the IP address for a network endpoint belong to the specified VM instance, that the specified VM instance belong to a zone and network in the NEG resource) (Berenberg, col. 16, ll. 27-29). Berenberg does not expressly disclose that the route information indicates the second IP address; and based at least in part on receiving the route information: determining the outgoing interface; determining the first IP address; and installing the second IP address. However Wells discloses that route information indicates the second IP address (again, using DHCP messaging, the offer includes the loopback address) (Wells, paragraphs [0020] and [0023]); and based at least in part on receiving the route information: determining the outgoing interface (again, determining that interface 214 is that with the lowest IP address) (Wells, paragraph [0023]); determining the first IP address (again, first IP addresses are assigned with DHCP) (Wells, paragraph [0019]); and installing the second IP address (again, loopback address is installed) (Wells, paragraph [0023]). 
Again, Berenberg is analogous art because Berenberg is reasonably pertinent to the particular problem with which the inventor was concerned, as one of the aspects of the disclosure of Berenberg is directed towards scaling network endpoint groups (NEGs), particularly by adding a network endpoint, as well as assigning an associated IP address thereto, and updating forwarding rules based on the assigned IP address (See Berenberg, col. 3, ll. 7-21), while Wells is analogous art, because Wells is from the same problem solving area, namely, programming routes for a service executing on a service endpoint device using a loopback address (See Wells, paragraph [0011]). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Berenberg and Wells before him or her, to modify the system of Berenberg to include the additional limitations of route information indicating the second IP address; and based at least in part on receiving the route information: determining the outgoing interface; determining the first IP address; and installing the second IP address, as disclosed in Wells, with reasonable expectation that this would result in enabling dynamic configuration for both sides of the routes between server and router (See Wells, paragraph [0030]). This method of improving the architecture of Berenberg was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Wells. Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of Berenberg with Wells to obtain the invention as specified in claim 6. Regarding claim 8, Berenberg-Wells discloses the system of claim 1, wherein the first IP address is one of an IP version 4 (IPv4) address or an IP version 6 (IPv6) address (wherein the IP addresses are IPv4 addresses) (Berenberg, col. 10, ll. 40-42).
The motivation regarding the obviousness of claim 1 is also applied to claim 8. 10. Claims 5, 9-14 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Berenberg-Wells and further in view of Schornig et al. (United States Patent No. US 12,047,425 B1), hereinafter “Schornig”. As to claim 5, Berenberg-Wells discloses the system of claim 1, but does not expressly disclose wherein the network traffic is received from a service hub communicatively coupled to the service endpoint device, and the second IP address is configured as an endpoint executing on the service hub. In an analogous art, however, Schornig discloses wherein network traffic is received from a service hub communicatively coupled to a service endpoint device (wherein with respect to a software-defined WAN (SD-WAN), Schornig teaches that a router 110 may provide connectivity to software-as-a-service (SaaS) provider(s) 308 via tunnels across any number of networks 306. This allows clients located in the LAN of a remote site 302 to access cloud applications (e.g., Office 365™, Dropbox™, etc.) served by SaaS provider(s) 308. Schornig further teaches that a third interface (Int 3) of a router 110, located at the edge of remote site 302, (See in particular, network deployment 310 illustrated in FIG. 3B) may establish a third path to SaaS provider(s) 308 via a private corporate network 306c (e.g., an MPLS network) to a private data center or regional hub 304 which, in turn, provides connectivity to SaaS provider(s) 308 via another network, such as a third ISP 306d) (Schornig, FIG. 3B, col. 7, ll. 41-46, col. 7, l. 65-col. 8, l. 3), and a second IP address is configured as an endpoint executing on the service hub (wherein Schornig further discloses that a SDN controller 408 (See also FIG. 
4A) may oversee the operations of routers 110a-110b in SD-WAN service point 406 and SD-WAN fabric 404, and may be responsible for monitoring the operations thereof, promulgating policies (e.g., security policies, etc.), installing or adjusting IPsec routes/tunnels between LAN core 402 and remote destinations such as the regional hub 304 and/or SaaS provider(s) 308 in FIGS. 3A-3B, and the like. As such, Schornig teaches configuring IP addresses on regional hub 304, as installing IPsec routes/tunnels involves installing/configuring IP addresses, as readily understood by the skilled artisan) (Schornig, FIG. 4A, col. 8, ll. 21-32). Berenberg is analogous art because Berenberg is reasonably pertinent to the particular problem with which the inventor was concerned, as one of the aspects of the disclosure of Berenberg is directed towards scaling network endpoint groups (NEGs), particularly by adding a network endpoint, as well as assigning an associated IP address thereto, and updating forwarding rules based on the assigned IP address (See Berenberg, col. 3, ll. 7-21), while Wells and Schornig are analogous art, because Wells and Schornig are from the same problem solving area, namely, programming and adding routes/paths for a service executing on a service endpoint device (See Wells, paragraph [0011], Schornig, col. 2, ll. 4-17). 
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Berenberg-Wells and Schornig before him or her, to modify the system of Berenberg-Wells to include the additional limitations of wherein the network traffic is received from a service hub communicatively coupled to the service endpoint device, and the second IP address is configured as an endpoint executing on the service hub, as disclosed in Schornig, with reasonable expectation that this would result in providing connectivity from yet an additional avenue, thus further adding to the robustness of the architecture (See Schornig, col. 7, l. 65-col. 8, l. 3). This method of improving the architecture of Berenberg-Wells was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Schornig. Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of Berenberg-Wells with Schornig to obtain the invention as specified in claim 5. Regarding claim 9, Berenberg discloses a method comprising: identifying, by a network controller associated with a computing resource network, a service executing on a service endpoint device associated with the computing resource network (wherein as discussed and shown above with respect to independent claim 1, Berenberg teaches that various backend service management platforms or tools can be used to manage network endpoint groups (NEGs) implemented in environments of virtualized computing instances. These tools operate as backend service orchestrators and expose public application programming interfaces (APIs), which can be used to manage the virtual machines or containers within the cloud computing environment. Example tools for backend service orchestration and management include Kubernetes, Apache Mesos, and Cloud Foundry. 
More specifically, these tools are designed to automate deploying, scaling and operation of backend services configured as application containers. For example, using Kubernetes, a load balancer may target container pods. A pod may include a group of one or more containers. A pod may encapsulate an application container or multiple application containers. Berenberg teaches that the services, or containers in a pod, may be deployed together, and may be started, stopped, and replicated as a group. See FIG. 1, illustrating backend services configured as application containers, part of virtualized environment. In addition, as further shown in FIG. 2, each container in a NEG receives and transmits control data with a NEG Controller so that the operational status and processing capacity of each container or virtual computing instance in the NEG are known. For example, if Container 4 experiences a faulty port, a buffer overload, an out-of-memory condition, or any other issue that may affect the operational status of the container, the NEG Controller may receive control data indicating the change of operational status of Container 4 and may transmit control data to the Load Balancer Controller. The Load Balance Controller may recalculate the load balancing of data requests and transmit updated forwarding rules to the Application 2 Load Balancer causing the Application 2 Load Balancer to redistribute the data requests from user devices (e.g., user device A and/or B)) (Berenberg, FIGS. 1 and 2, col. 6, l. 57-col.7, l. 6, col. 10, l. 58-col. 11, l. 5). As further pointed out above, Berenberg further teaches that because the network endpoints in the NEG have unique IP or IP address:port tuples, the network endpoints within a NEG can be a specific target for a backend service application load balancer. 
In this way, the network endpoints that are grouped to form a NEG provide greater load balancing capabilities and require less redundant load balancing at different architectural levels than is commonly required in backend service architectures implemented on virtual machines which do not uniquely address each containerized instance of an application. For example, as shown in FIG. 2, NEG 1 includes two containers associated with Application 1. Container 1 is implemented with Application 1-1 and has a unique IP address:port tuple of 192.168.0.1:80. Container 2 is implemented with Application 1-2 and has a unique IP::Port address of 192.168.1.1:82. The two containers are implemented on different virtual machines. Container 1 is implemented on VM1, while Container 2 is implemented on VM2. The unique IP address:port tuples of each container are aliased from a range of IP addresses and ports that are available on each VM. Berenberg teaches that by using IP address aliasing, a container may be configured with a specific IP address:port tuple that can be used by the Application Load Balancer to distribute load balanced data requests or service traffic to the containers in the NEG. For example, Application Load Balancer 2 can distribute data requests and service traffic to Container 3 and Container 4, respectively based on the backend service (e.g., Application 2) that is associated with NEG 2. As further shown in FIG. 2, Container 3 is implemented on the same VM as Container 1 even though the two containers are servicing different backend service applications (See Berenberg, col. 10, ll.27-57). 
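For illustration only, Berenberg's NEG addressing model as characterized above (each container aliased to a unique IP:port tuple so a load balancer can target individual containers) can be sketched as follows. The dict layout and the `pick_backend` helper are assumptions for the sketch; the addresses, ports, and application/VM names follow Berenberg's FIG. 2 as quoted in the Office Action.

```python
# NEG 1: containers keyed by their unique IP:port tuple, per the quoted
# condition that all IP:port values in a NEG be unique.
neg_1 = {
    ("192.168.0.1", 80): {"app": "Application 1-1", "vm": "VM1"},
    ("192.168.1.1", 82): {"app": "Application 1-2", "vm": "VM2"},
}

# Dict keys are unique by construction; the check mirrors the condition.
assert len(neg_1) == len(set(neg_1.keys()))

def pick_backend(neg, request_id):
    """Toy load-balancing choice over the NEG's uniquely addressed endpoints."""
    endpoints = sorted(neg.keys())
    return endpoints[request_id % len(endpoints)]
```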
Berenberg does not explicitly disclose determining a tunnel interface associated with the service endpoint device, the tunnel interface configured to transmit network traffic to the service; installing, by the network controller, an IP address in association with the service; and installing, by the network controller, a route in association with the service, the route being configured to transmit packets addressed to the service endpoint device through the tunnel interface and to the IP address. However in an analogous art, Wells discloses determining an interface associated with the service endpoint device, the interface configured to transmit network traffic to the service (wherein with continued reference to FIG. 2C, depicted therein is the configuration of server 210 with a loopback address for service 212 from router 220. Specifically, each of interfaces 214 and 216 sends a discovery message, discovery messages 282 and 292, respectively. Discovery messages 282 and 292 indicate that a loopback address is needed for service 212, as well as the lowest IP address associated with server 210, i.e., the lower of the two IP addresses associated with interfaces 214 and 216. This information may be communicated in discovery messages 282 and 292 through, e.g., a DHCP option value. As illustrated in the pseudocode of message exchange 280, discovery message 282 from interface 214 indicates that the discovery message is intended to receive an offer for a loopback address for service 212 (i.e., function “XYZ”) and that the lowest IP address associated with the interfaces of server 210 is “10.0.0.2,” which is the address associated with interface 214, the source of message 282. 
In addition, message 292 of message exchange 290 also indicates that the discovery message is intended to receive an offer for a loopback address for service 212 (i.e., function “XYZ”) and that the lowest IP address associated with the interfaces of server 210 is “10.0.0.2,” which again, is the address associated with interface 214, not the address associated with interface 216) (Wells, FIG. 2C, paragraphs [0020] and [0023]); installing, by the network controller, an IP address in association with the service (wherein as discussed and shown above, discovery message 282 indicates that the discovery message is intended to receive an offer for a loopback address for service 212 (i.e., function “XYZ”) and that the lowest IP address associated with the interfaces of server 210 is “10.0.0.2,” which is the address associated with interface 214, the source of message 282. In addition, message 292 of message exchange 290 also indicates that the discovery message is intended to receive an offer for a loopback address for service 212 (i.e., function “XYZ”) and that the lowest IP address associated with the interfaces of server 210 is “10.0.0.2,” which is the address associated with interface 214, not the address associated with interface 216. Both messages 282 and 292 include the lowest IP address associated with either of interfaces 214 and 216 to ensure that router 220 is aware that messages 282 and 292 are both originating from the same server 210. Wells teaches that because both of messages 282 and 292 indicate the same service 212 (i.e., function “XYZ”), router 220 provides the same address when it responds with offer messages 284 and 294, respectively. Router 220 responds to both interfaces 214 and 216 through offer messages 286 and 296, respectively, which offer the same IP address of “10.1.1.2” to both of interfaces 214 and 216 for assignment to service 212.
Accordingly IP address “10.1.1.2” is the second IP address installed in association with service 212) (Wells, FIG. 2C, paragraph [0023]); and installing, by the network controller, a route in association with the service, the route being configured to transmit packets addressed to the service endpoint device through the interface and to the IP address (wherein again, packets addressed to IP address “10.0.0.2” are routed through IP address “10.1.1.2” for service 212. That is, subsequent to the sending of acknowledgement messages 288 and 298, router 220 installs a route to service 212 through interface 214, which includes the address for interface 214 of “10.0.0.2” and the loopback address “10.1.1.2.,” and likewise installs a route to service 212 through interface 216, which includes the address for interface 216 of “10.0.1.2” and the same loopback address “10.1.1.2.”) (Wells, paragraph [0028]). As discussed and shown above, Berenberg is analogous art because Berenberg is reasonably pertinent to the particular problem with which the inventor was concerned, as one of the aspects of the disclosure of Berenberg is directed towards scaling network endpoint groups (NEGs), particularly by adding a network endpoint, as well as assigning an associated IP address thereto, and updating forwarding rules based on the assigned IP address (See Berenberg, col. 3, ll. 7-21), while Wells is analogous art, because Wells is from the same problem solving area, namely, programming routes for a service executing on a service endpoint device using a loopback address (See Wells, paragraph [0011]). 
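For illustration only, the discovery/offer exchange described above (both server interfaces report the same lowest interface address so the router can tell the requests originate from one server, and offers one shared loopback) can be sketched as follows. The message shapes and the `offer_loopback` function are assumptions; only the interface numbers, service identifier, and addresses come from the Office Action's account of Wells's FIG. 2C.

```python
def offer_loopback(discoveries, loopback_pool):
    """Group discovery messages by (service, lowest_ip) and offer one
    loopback address per group, per the quoted description of Wells."""
    offers = {}
    assigned = {}
    for msg in discoveries:
        key = (msg["service"], msg["lowest_ip"])
        if key not in assigned:
            # First request for this server/service: draw a fresh loopback.
            assigned[key] = loopback_pool.pop(0)
        offers[msg["interface"]] = assigned[key]
    return offers

# Both interfaces of server 210 advertise the same lowest IP ("10.0.0.2"),
# signalling the router that they belong to one server.
discoveries = [
    {"interface": "214", "service": "XYZ", "lowest_ip": "10.0.0.2"},
    {"interface": "216", "service": "XYZ", "lowest_ip": "10.0.0.2"},
]
offers = offer_loopback(discoveries, ["10.1.1.2", "10.1.1.3"])
assert offers["214"] == offers["216"] == "10.1.1.2"  # one shared loopback
```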
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Berenberg and Wells before him or her, to modify the system of Berenberg to include the additional limitations of determining an interface associated with the service endpoint device, the interface configured to transmit network traffic to the service; installing, by the network controller, an IP address in association with the service; and installing, by the network controller, a route in association with the service, the route being configured to transmit packets addressed to the service endpoint device through the interface and to the IP address, as disclosed in Wells, with reasonable expectation that this would result in a system with additional redundancy techniques for traffic destined for a [backend] service. That is, because router 220 has two installed routes to service 212, one through interface 214 and another through interface 216, router 220 may implement Equal-Cost Multipath (ECMP) routing load balancing and link redundancy techniques for traffic destined for service 212. For example, Wells teaches that if both interfaces 214 and 216 are operational, traffic received at router 220 destined for service 212 may be sent through either of interfaces 214 and 216 according to ECMP techniques. Furthermore, if either of interfaces 214 or 216 becomes inoperable, the other interface may serve as a redundant path through which traffic may be transmitted from router 220 to server 210 (See Wells, paragraph [0029]). This method of improving the architecture of Berenberg was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Wells. Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of Berenberg with Wells to obtain the invention as specified in claim 9. 
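For illustration only, the ECMP rationale quoted from Wells (two installed routes allow traffic to be split across both interfaces, and either route survives the other's failure) can be sketched as follows. The hashing scheme and data structures are assumptions for the sketch; the interface numbers and next-hop addresses follow the Office Action's mapping.

```python
routes = {"214": "10.0.0.2", "216": "10.0.1.2"}  # interface -> next hop
operational = {"214": True, "216": True}

def next_hop(flow_hash):
    """Pick among live routes ECMP-style; fall back to the survivor."""
    live = [i for i in sorted(routes) if operational[i]]
    if not live:
        raise RuntimeError("no route to service")
    return routes[live[flow_hash % len(live)]]

# Both links up: flows are spread across the two equal-cost routes.
assert {next_hop(0), next_hop(1)} == {"10.0.0.2", "10.0.1.2"}

# Interface 214 becomes inoperable: the redundant route carries all traffic.
operational["214"] = False
assert next_hop(0) == "10.0.1.2"
```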
Berenberg-Wells does not explicitly disclose determining a tunnel interface associated with the service endpoint device, the tunnel interface configured to transmit network traffic to the service; and installing, by the network controller, a route in association with the service, the route being configured to transmit packets addressed to the service endpoint device through the tunnel interface and to the IP address. However in an analogous art, Schornig discloses determining a tunnel interface associated with the service endpoint device, the tunnel interface configured to transmit network traffic to the service (wherein as discussed above with respect to dependent claim 5, in the case of a software-defined WAN (SD-WAN), Schornig teaches that a router 110 may provide connectivity to software-as-a-service (SaaS) provider(s) 308 via tunnels across any number of networks 306. This allows clients located in the LAN of a remote site 302 to access cloud applications (e.g., Office 365™, Dropbox™, etc.) served by SaaS provider(s) 308. More particularly, Schornig teaches that a third interface (Int 3) of a router 110, which is a tunnel interface, located at the edge of remote site 302, (See in particular, network deployment 310 illustrated in FIG. 3B) may establish a third path to SaaS provider(s) 308 via a private corporate network 306c (e.g., an MPLS network) to a private data center or regional hub 304 which, in turn, provides connectivity to SaaS provider(s) 308 via another network, such as a third ISP 306d) (Schornig, FIG. 3B, col. 7, ll. 41-46, col. 7, l. 65-col. 8, l. 3); and installing, by the network controller, a route in association with the service, the route being configured to transmit packets addressed to the service endpoint device through the tunnel interface and to the IP address (wherein again, Schornig further discloses that a SDN controller 408 (See also FIG. 
4A) may oversee the operations of routers 110a-110b in SD-WAN service point 406 and SD-WAN fabric 404, and may be responsible for monitoring the operations thereof, promulgating policies (e.g., security policies, etc.), installing or adjusting IPsec routes/tunnels between LAN core 402 and remote destinations such as the regional hub 304 and/or SaaS provider(s) 308 in FIGS. 3A-3B, and the like. As such, Schornig teaches configuring IP addresses on regional hub 304, as well as on the SaaS provider(s) themselves, which are the endpoints for providing the services, as installing IPsec routes/tunnels involves installing/configuring IP addresses, as readily understood by the skilled artisan) (Schornig, FIG. 4A, col. 8, ll. 21-32). Again, Berenberg is analogous art because Berenberg is reasonably pertinent to the particular problem with which the inventor was concerned, as one of the aspects of the disclosure of Berenberg is directed towards scaling network endpoint groups (NEGs), particularly by adding a network endpoint, as well as assigning an associated IP address thereto, and updating forwarding rules based on the assigned IP address (See Berenberg, col. 3, ll. 7-21), while Wells and Schornig are analogous art, because Wells and Schornig are from the same problem solving area, namely, programming and adding routes/paths for a service executing on a service endpoint device (See Wells, paragraph [0011], Schornig, col. 2, ll. 4-17). 
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Berenberg-Wells and Schornig before him or her, to modify the system of Berenberg-Wells to include the additional limitations of determining a tunnel interface associated with the service endpoint device, the tunnel interface configured to transmit network traffic to the service; and installing, by the network controller, a route in association with the service, the route being configured to transmit packets addressed to the service endpoint device through the tunnel interface and to the IP address, as disclosed in Schornig, with reasonable expectation that this would result in providing connectivity from yet an additional avenue, thus further adding to the robustness of the architecture (See Schornig, col. 7, l. 65-col. 8, l. 3). This method of improving the architecture of Berenberg-Wells was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Schornig. Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of Berenberg-Wells with Schornig to obtain the invention as specified in claim 9. Claims 10, 11 and 12 include “method” claims that perform limitations substantially as recited in “system” claims 2, 3 and 5, respectively, and do not appear to contain any additional features with regard to novelty and/or nonobviousness; therefore, they are rejected under the same rationale. Regarding claim 13, Berenberg-Wells-Schornig discloses the method of claim 9, further comprising: receiving route information from a client device associated with the network traffic (wherein as discussed and shown above with respect to dependent claim 6, Berenberg further teaches that in some implementations, a user may create a NEG and/or delete a NEG by interacting with a user interface or a command line programming interface. 
The user interface or command line programming interface may be provided by the NEG Controller. In other implementations, the user interface or command line programming interface may be provided by the backend service orchestrator platform. In some implementations, a user may list all the NEGs in a project by interacting with the user interface or command line programming interface. Additionally, or alternatively, in some implementations, a user may interact with the user interface or command line programming interface to add network endpoints to an existing NEG. In other implementations, a user may interact with the user interface or command line programming interface to remove network endpoints from an existing NEG. In some implementations, a user may interact with the user interface or command line programming interface to list all the network endpoints in a NEG. In other implementations, a user may interact with the user interface or command line programming interface to attach a NEG as a backend service of an Application Load Balancer. With regard to attaching additional network endpoints, a list of endpoints may be attached to a specified NEG. According to some load balancing examples, one or more conditions may apply. Examples of such conditions include that the VM instance be specified with each network endpoint, that the IP address for a network endpoint belong to the specified VM instance, that the specified VM instance belong to a zone and network in the NEG resource, that the port be specified with each network endpoint or a default port be specified in the NEG, and that all IP:port values in the NEG are unique) (Berenberg, col. 15, ll. 43-52, col. 15, l. 58-col. 16, l. 2, col. 16, ll. 
22-32), the route information indicating the interface and the IP address (again, conditions include that the port be specified with each network endpoint or a default port be specified in the NEG, and that the IP address for a network endpoint belong to the specified VM instance, that the specified VM instance belong to a zone and network in the NEG resource) (Berenberg, col. 16, ll. 27-31). Berenberg does not expressly disclose the route information indicating the tunnel interface; based at least in part on receiving the route information: determining the tunnel interface; and installing the IP address. However Wells discloses based at least in part on receiving the route information: determining the interface (again, determining that interface 214 is that with the lowest IP address) (Wells, paragraph [0023]); and installing the IP address (again, loopback address is installed) (Wells, paragraph [0023]). Again, Berenberg is analogous art because Berenberg is reasonably pertinent to the particular problem with which the inventor was concerned, as one of the aspects of the disclosure of Berenberg is directed towards scaling network endpoint groups (NEGs), particularly by adding a network endpoint, as well as assigning an associated IP address thereto, and updating forwarding rules based on the assigned IP address (See Berenberg, col. 3, ll. 7-21), while Wells is analogous art, because Wells is from the same problem solving area, namely, programming routes for a service executing on a service endpoint device using a loopback address (See Wells, paragraph [0011]). 
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Berenberg and Wells before him or her, to modify the system of Berenberg to include the additional limitations of based at least in part on receiving the route information: determining the interface; and installing the IP address, as disclosed in Wells, with reasonable expectation that this would result in enabling dynamic configuration for both sides of the routes between server and router (See Wells, paragraph [0030]). This method of improving the architecture of Berenberg was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Wells. Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of Berenberg with Wells to obtain the invention as specified in claim 13. Berenberg-Wells does not expressly disclose the route information indicating the tunnel interface; and based at least in part on receiving the route information: determining the tunnel interface. However Schornig discloses route information indicating a tunnel interface (wherein Schornig teaches that a Bypass Forwarding Protocol (BFP) is introduced to achieve tunnel-less path engineering between endpoint agents and applications (via MMO gateways) or, alternatively, tunneled to the MMO gateway. More particularly, with reference to FIG. 9, Schornig teaches that during a flow announcement phase 902 and using BFP, an endpoint agent 608 may advertise to MMO gateway 626 the existence of a new TCP or UDP flow that needs to be rerouted and include details such as: endpoint, flowID, application name, destination IP and Port, application metadata (e.g., appID, name, etc.)) (Schornig, FIG. 9, col. 19, ll. 18-22, col. 19, ll.
31-36); and based at least in part on receiving the route information: determining the tunnel interface (wherein responsive to the advertisement from the endpoint agent 608 during the flow announcement phase 902, MMO gateway 626 creates an internal flow record that consists of the public IP address of client endpoint 606, a locally designated port number and, optionally, a local IP address (Schornig notes that there should be more than one available). Once endpoint agent 608 receives the mapping details from MMO gateway 626, it is ready to rewrite the IP headers and forward traffic, tunneled through the gateway) (Schornig, FIG. 9, col. 19, ll. 36-43). Again, Berenberg is analogous art because Berenberg is reasonably pertinent to the particular problem with which the inventor was concerned, as one of the aspects of the disclosure of Berenberg is directed towards scaling network endpoint groups (NEGs), particularly by adding a network endpoint, as well as assigning an associated IP address thereto, and updating forwarding rules based on the assigned IP address (See Berenberg, col. 3, ll. 7-21), while Wells and Schornig are analogous art, because Wells and Schornig are from the same problem solving area, namely, programming and adding routes/paths for a service executing on a service endpoint device (See Wells, paragraph [0011], Schornig, col. 2, ll. 4-17). 
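As an illustrative aside, the BFP-style flow announcement described above (agent advertises a flow; the gateway builds an internal flow record from the client's public IP, a locally designated port, and one of several local IPs; the agent then rewrites headers toward the gateway) might be sketched as follows. All names (`Gateway`, `FlowRecord`, `rewrite_headers`) and addresses are hypothetical, not drawn from Schornig:

```python
from dataclasses import dataclass
import itertools

@dataclass
class FlowRecord:
    client_public_ip: str   # public IP address of the client endpoint
    mapped_port: int        # locally designated port number
    local_ip: str           # local IP chosen by the gateway (more than one available)

class Gateway:
    """Stands in for the MMO-gateway role; a sketch, not Schornig's implementation."""
    def __init__(self, local_ips):
        self._local_ips = itertools.cycle(local_ips)
        self._ports = itertools.count(40000)
        self.flows = {}

    def announce(self, flow_id, client_public_ip, dest_ip, dest_port, app_name):
        # dest_ip/dest_port/app_name mirror the advertised details; a fuller
        # record would retain them as well.
        record = FlowRecord(client_public_ip, next(self._ports), next(self._local_ips))
        self.flows[flow_id] = record
        return record

def rewrite_headers(packet, record):
    """Agent-side rewrite once the mapping details arrive from the gateway."""
    rewritten = dict(packet)
    rewritten["dst_ip"] = record.local_ip
    rewritten["dst_port"] = record.mapped_port
    return rewritten

gw = Gateway(local_ips=["198.51.100.10", "198.51.100.11"])
rec = gw.announce("flow-1", "203.0.113.5", "192.0.2.80", 443, "appX")
pkt = rewrite_headers({"dst_ip": "192.0.2.80", "dst_port": 443}, rec)
```

After the exchange, traffic for the flow is addressed to the gateway-designated mapping rather than the original destination, matching the "rewrite the IP headers and forward traffic" step.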
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Berenberg-Wells and Schornig before him or her, to modify the system of Berenberg-Wells to include the additional limitations of the route information indicating the tunnel interface; and based at least in part on receiving the route information: determining the tunnel interface, as disclosed in Schornig, with reasonable expectation that this would result in reducing network overhead, as well as reducing CPU/memory overhead when a bypass path is required for improved QoE and addressing packet loss (See Schornig, col. 19, ll. 1-17). This method of improving the architecture of Berenberg-Wells was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Schornig. Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of Berenberg-Wells with Schornig to obtain the invention as specified in claim 13. Regarding claim 14, Berenberg-Wells-Schornig discloses the method of claim 9. Berenberg does not expressly disclose, but Wells discloses wherein the route is a first route (again, two routes to service 212 are installed, the first reachable to service 212 through interface 214, the second reachable through interface 216) (Wells, paragraph [0029]) and the IP address is a first IP address (again, the first assigned IP address is that of interface 214, namely, “10.0.0.1”) (Wells, paragraph [0019]), and the method further comprising: installing a second IP address associated with the computing resource network (again, IP address “10.1.1.2” is the loopback address) (Wells, FIG. 
2C, paragraphs [0020] and [0023]); and installing a second route in association with the service, the second route being configured to transmit network traffic addressed to the service endpoint device through the interface and to the second IP address (wherein again, second route is provisioned to service 212 from interface 216) (Wells, paragraphs [0029] and [0030]). Berenberg is analogous art because Berenberg is reasonably pertinent to the particular problem with which the inventor was concerned, as one of the aspects of the disclosure of Berenberg is directed towards scaling network endpoint groups (NEGs), particularly by adding a network endpoint, as well as assigning an associated IP address thereto, and updating forwarding rules based on the assigned IP address (See Berenberg, col. 3, ll. 7-21), while Wells is analogous art, because Wells is from the same problem solving area, namely, programming routes for a service executing on a service endpoint device using a loopback address (See Wells, paragraph [0011]). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Berenberg and Wells before him or her, to modify the system of Berenberg to include the additional limitations of wherein the route is a first route and the IP address is a first IP address, and the method further comprising: installing a second IP address associated with the computing resource network; and installing a second route in association with the service, the second route being configured to transmit network traffic addressed to the service endpoint device through the interface and to the second IP address, as disclosed in Wells, with reasonable expectation that this would result in implementing ECMP techniques through the two routes, the two routes providing increased bandwidth and the two routes providing redundancy in the event of a route failure (See Wells, paragraph [0030]). 
This method of improving the architecture of Berenberg was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Wells. Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of Berenberg with Wells to obtain the invention as specified in claim 14. Berenberg-Wells does not expressly disclose installing a second IP address in association with a service hub associated with the computing resource network; and installing a second route in association with the service, the second route being configured to transmit network traffic addressed to the service endpoint device through the tunnel interface and to the second IP address. However Schornig discloses installing a second IP address in association with a service hub associated with the computing resource network (wherein Schornig teaches that a Bypass Forwarding Protocol (BFP) is introduced to achieve tunnel-less path engineering between endpoint agents and applications (via MMO gateways) or, alternatively, tunneled to the MMO gateway. More particularly, with reference to FIG. 9, Schornig teaches that during a flow announcement phase 902 and using BFP, an endpoint agent 608 may advertise to MMO gateway 626 the existence of a new TCP or UDP flow that needs to be rerouted and include details such as: endpoint, flowID, application name, destination IP and Port, application metadata (e.g., appID, name, etc.). Responsive to the advertisement from the endpoint agent 608 during the flow announcement phase 902, MMO gateway 626 creates an internal flow record that consists of the public IP address of client endpoint 606, a locally designated port number and, optionally, a local [second] IP address (Schornig notes that there should be more than one available). Once endpoint agent 608 receives the mapping details from MMO gateway 626, it is ready to rewrite the IP headers and forward traffic, tunneled through the gateway) (Schornig, FIG. 
9, col. 19, ll. 18-22, col. 19, ll. 31-43); and installing a second route in association with the service, the second route being configured to transmit network traffic addressed to the service endpoint device through the tunnel interface and to the second IP address (again, the traffic is forwarded through the tunnel of the MMO gateway) (Schornig, FIG. 9, col. 19, ll. 40-43). Again, Berenberg is analogous art because Berenberg is reasonably pertinent to the particular problem with which the inventor was concerned, as one of the aspects of the disclosure of Berenberg is directed towards scaling network endpoint groups (NEGs), particularly by adding a network endpoint, as well as assigning an associated IP address thereto, and updating forwarding rules based on the assigned IP address (See Berenberg, col. 3, ll. 7-21), while Wells and Schornig are analogous art, because Wells and Schornig are from the same problem solving area, namely, programming and adding routes/paths for a service executing on a service endpoint device (See Wells, paragraph [0011], Schornig, col. 2, ll. 4-17). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Berenberg-Wells and Schornig before him or her, to modify the system of Berenberg-Wells to include the additional limitations of installing a second IP address in association with a service hub associated with the computing resource network; and installing a second route in association with the service, the second route being configured to transmit network traffic addressed to the service endpoint device through the tunnel interface and to the second IP address, as disclosed in Schornig, with reasonable expectation that this would result in reducing network overhead, as well as reducing CPU/memory overhead when a bypass path is required for improved QoE and addressing packet loss (See Schornig, col. 19, ll. 1-17). 
This method of improving the architecture of Berenberg-Wells was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Schornig. Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of Berenberg-Wells with Schornig to obtain the invention as specified in claim 14. Regarding claim 16, Berenberg discloses a system comprising: one or more processors (wherein as discussed and shown above with respect to independent claim 1, FIG. 6 illustrates a network endpoint group (NEG) controller and an example Load Balance Controller. Each controller may include, e.g., a memory storing data and instructions, and one or more processors) (Berenberg, FIG. 6 col. 16, ll. 46-50); and one or more computer-readable media storing computer-executable instructions that, when executed by the one or more processors (again, the memory storing data and instructions. In addition, Berenberg teaches that the subject matter described in this specification can be implemented as one or more computer programs embodied on a tangible medium, i.e., one or more modules of computer program instructions, encoded on one or more computer storage media for execution by, or to control the operation of, a data processing apparatus) (FIG. 6 col. 16, ll. 46-50, col. 19, ll. 2-7), cause the one or more processors to perform operations comprising: identifying, by a network controller associated with a computing resource network, a service executing on a service endpoint device associated with the computing resource network (wherein Berenberg teaches that various backend service management platforms or tools can be used to manage network endpoint groups (NEGs) implemented in environments of virtualized computing instances. 
These tools operate as backend service orchestrators and expose public application programming interfaces (APIs), which can be used to manage the virtual machines or containers within the cloud computing environment. Example tools for backend service orchestration and management include Kubernetes, Apache Mesos, and Cloud Foundry. More specifically, these tools are designed to automate deploying, scaling and operation of backend services configured as application containers. For example, using Kubernetes, a load balancer may target container pods. A pod may include a group of one or more containers. A pod may encapsulate an application container or multiple application containers. Berenberg teaches that the services, or containers in a pod, may be deployed together, and may be started, stopped, and replicated as a group. See FIG. 1, illustrating backend services configured as application containers, part of virtualized environment. In addition, as further shown in FIG. 2, each container in a NEG receives and transmits control data with a NEG Controller so that the operational status and processing capacity of each container or virtual computing instance in the NEG are known. For example, if Container 4 experiences a faulty port, a buffer overload, an out-of-memory condition, or any other issue that may affect the operational status of the container, the NEG Controller may receive control data indicating the change of operational status of Container 4 and may transmit control data to the Load Balancer Controller. The Load Balance Controller may recalculate the load balancing of data requests and transmit updated forwarding rules to the Application 2 Load Balancer causing the Application 2 Load Balancer to redistribute the data requests from user devices (e.g., user device A and/or B)) (Berenberg, FIGS. 1 and 2, col. 6, l. 57-col.7, l. 6, col. 10, l. 58-col. 11, l. 5). 
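The control loop described above (each container in a NEG reports operational status to the NEG Controller, and the Load Balance Controller redistributes requests away from a faulty container) can be sketched in miniature as follows. The class and method names are invented for illustration and do not come from Berenberg:

```python
class NetworkEndpointGroup:
    """Tracks operational status per endpoint; endpoints are (IP, port) tuples."""
    def __init__(self, endpoints):
        self.status = {ep: "healthy" for ep in endpoints}

    def report(self, endpoint, status):
        # e.g. control data indicating a faulty port, buffer overload,
        # or out-of-memory condition in a container
        self.status[endpoint] = status

    def healthy(self):
        return [ep for ep, s in self.status.items() if s == "healthy"]

class LoadBalancer:
    """Round-robins data requests over the currently healthy endpoints."""
    def __init__(self, neg):
        self.neg = neg
        self._i = 0

    def pick(self):
        targets = self.neg.healthy()
        ep = targets[self._i % len(targets)]
        self._i += 1
        return ep

# Two containers on different VMs, each a unique IP:port tuple (per FIG. 2)
neg = NetworkEndpointGroup([("192.168.0.1", 80), ("192.168.1.1", 82)])
lb = LoadBalancer(neg)
first, second = lb.pick(), lb.pick()
neg.report(("192.168.1.1", 82), "faulty")    # container signals a problem
after_fault = [lb.pick() for _ in range(3)]  # traffic shifts to the healthy endpoint
```

Once the fault is reported, every subsequent request lands on the remaining healthy endpoint, mirroring the redistribution of data requests the Office Action attributes to the Load Balance Controller.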
As further discussed supra, Berenberg further notes that because the network endpoints in the NEG have unique IP or IP address:port tuples, the network endpoints within a NEG can be a specific target for a backend service application load balancer. In this way, the network endpoints that are grouped to form a NEG provide greater load balancing capabilities and require less redundant load balancing at different architectural levels than is commonly required in backend service architectures implemented on virtual machines which do not uniquely address each containerized instance of an application. For example, as shown in FIG. 2, NEG 1 includes two containers associated with Application 1. Container 1 is implemented with Application 1-1 and has a unique IP address:port tuple of 192.168.0.1:80. Container 2 is implemented with Application 1-2 and has a unique IP::Port address of 192.168.1.1:82. The two containers are implemented on different virtual machines. Container 1 is implemented on VM1, while Container 2 is implemented on VM2. The unique IP address:port tuples of each container are aliased from a range of IP addresses and ports that are available on each VM. Berenberg teaches that by using IP address aliasing, a container may be configured with a specific IP address:port tuple that can be used by the Application Load Balancer to distribute load balanced data requests or service traffic to the containers in the NEG. For example, Application Load Balancer 2 can distribute data requests and service traffic to Container 3 and Container 4, respectively based on the backend service (e.g., Application 2) that is associated with NEG 2. As further shown in FIG. 2, Container 3 is implemented on the same VM as Container 1 even though the two containers are servicing different backend service applications (See Berenberg, col. 10, ll.27-57). 
Berenberg does not explicitly disclose determining a first internet protocol (IP) address associated with the service endpoint device; determining an outgoing interface associated with the service endpoint device, the outgoing interface being configured to transmit network traffic to the service; installing, by the network controller, a second IP address in association with a service hub associated with the computing resource network; and installing, by the network controller, a route in association with the service, the route being configured to transmit packets addressed to the first IP address through the outgoing interface and to the second IP address. In an analogous art, however, Wells discloses determining a first internet protocol (IP) address associated with the service endpoint device (wherein with reference to FIG. 2A, Wells discloses a server 210 configured to provide a service 212. As such, server 210 is a service endpoint device. Server 210 is configured with two interfaces, interface 214 and interface 216. As illustrated in FIG. 2, interface 214 and interface 216 both connect to router 220. According to other example embodiments, router 220 could be replaced by, e.g., a DHCP server, such as DHCP server 140 of FIG. 1. In addition, Wells suggests that router 220 could also be replaced by another device, such as a Software Defined Network (SDN) controller. When server 210 initially connects through router 220 to a network environment, it may use an automatic configuration protocol, such as DHCP, to configure interfaces 214 and 216 with appropriate Internet Protocol (IP) addresses. A message exchange implementing such a configuration is illustrated through pseudo code 250 and 252 for each of interfaces 214 and 216, respectively. Specifically, interface 214 broadcasts a discovery message 262 which indicates that server 210 has connected to the network environment via interface 214. 
A configuration controller will respond with an offer of an Internet Protocol (IP) address through message 264. According to the example embodiment of FIG. 2A, router 220 serves as a DHCP server, and therefore, router 220 sends offer 264, which includes a proposed IP address, in this case an address of “10.0.0.2.” Server 210 responds to offer 264 with request 266 via which server 210 requests that the address from offer 264 be assigned to interface 214. Router 220 responds with acknowledgement 268, at which point server 210 configures interface 214 with the address from offer 264. An analogous process for interface 216 is implemented through pseudocode 252 via messages 272-278. The IP address associated with offer message 274 is “10.0.1.2.” Accordingly, at the completion of the configurations, interface 214 is configured with an IP address of “10.0.0.2” and interface 216 is configured with an IP address of “10.0.1.2,” as illustrated in FIG. 2B) (Wells, FIGS. 2A and 2B, paragraphs [0015], [0018] and [0019]); determining an outgoing interface associated with the service endpoint device, the outgoing interface being configured to transmit network traffic to the service (wherein with continued reference to FIG. 2C, depicted therein is the configuration of server 210 with a loopback address for service 212 from router 220. Specifically, each of interfaces 214 and 216 sends a discovery message, discovery messages 282 and 292, respectively. Discovery messages 282 and 292 indicate that a loopback address is needed for service 212, as well as the lowest IP address associated with server 210, i.e., the lower of the two IP addresses associated with interfaces 214 and 216. This information may be communicated in discovery messages 282 and 292 through, e.g., a DHCP option value. 
As illustrated in the pseudocode of message exchange 280, discovery message 282 from interface 214 indicates that the discovery message is intended to receive an offer for a loopback address for service 212 (i.e., function “XYZ”) and that the lowest IP address associated with the interfaces of server 210 is “10.0.0.2,” which is the address associated with interface 214, the source of message 282. In addition, message 292 of message exchange 290 also indicates that the discovery message is intended to receive an offer for a loopback address for service 212 (i.e., function “XYZ”) and that the lowest IP address associated with the interfaces of server 210 is “10.0.0.2,” which again, is the address associated with interface 214, not the address associated with interface 216) (Wells, FIG. 2C, paragraphs [0020] and [0023]); installing, by the network controller, a second IP address associated with the computing resource network (wherein as discussed and shown above, discovery message 282 indicates that the discovery message is intended to receive an offer for a loopback address for service 212 (i.e., function “XYZ”) and that the lowest IP address associated with the interfaces of server 210 is “10.0.0.2,” which is the address associated with interface 214, the source of message 282. In addition, message 292 of message exchange 290 also indicates that the discovery message is intended to receive an offer for a loopback address for service 212 (i.e., function “XYZ”) and that the lowest IP address associated with the interfaces of server 210 is “10.0.0.2,” which is the address associated with interface 214, not the address associated with interface 216. Both messages 282 and 292 include the lowest IP address associated with either of interfaces 214 and 216 to ensure that router 220 is aware that messages 282 and 292 are both originating from the same server 210. 
Wells teaches that because both of messages 282 and 292 indicate the same service 212 (i.e., function “XYZ”), router 220 provides the same address when it responds with offer messages 284 and 294, respectively. Router 220 responds to both interfaces 214 and 216 through offer messages 286 and 296, respectively, which offer the same IP address of “10.1.1.2” to both of interfaces 214 and 216 for assignment to service 212. Accordingly IP address “10.1.1.2” is the second IP address installed in association with service 212) (Wells, FIG. 2C, paragraph [0023]); and installing, by the network controller, a route in association with the service, the route being configured to transmit packets addressed to the first IP address through the outgoing interface and to the second IP address (wherein packets addressed to IP address “10.0.0.2” are routed through IP address “10.1.1.2” for service 212. That is, subsequent to the sending of acknowledgement messages 288 and 298, router 220 installs a route to service 212 through interface 214, which includes the address for interface 214 of “10.0.0.2” and the loopback address “10.1.1.2.,” and likewise installs a route to service 212 through interface 216, which includes the address for interface 216 of “10.0.1.2” and the same loopback address “10.1.1.2.”) (Wells, paragraph [0028]). Berenberg is analogous art because Berenberg is reasonably pertinent to the particular problem with which the inventor was concerned, as one of the aspects of the disclosure of Berenberg is directed towards scaling network endpoint groups (NEGs), particularly by adding a network endpoint, as well as assigning an associated IP address thereto, and updating forwarding rules based on the assigned IP address (See Berenberg, col. 3, ll. 7-21), while Wells is analogous art, because Wells is from the same problem solving area, namely, programming routes for a service executing on a service endpoint device using a loopback address (See Wells, paragraph [0011]). 
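The loopback exchange recounted above (both interfaces tag their discovery messages with the server's lowest interface IP, so the router recognizes a single server and offers the same loopback address to both, then installs a route per interface) might be sketched as follows. `Router`, `offer_loopback`, and `install_route` are illustrative names; the addresses follow Wells's example:

```python
from ipaddress import ip_address

class Router:
    """Plays the router/DHCP-server role of Wells's router 220 (sketch only)."""
    def __init__(self, loopback_pool):
        self._pool = iter(loopback_pool)
        self._offers = {}   # (service, lowest_ip) -> loopback already offered
        self.routes = []    # (service, interface_ip, loopback)

    def offer_loopback(self, service, lowest_ip):
        # Discovery messages carrying the same (service, lowest IP) key are
        # recognized as coming from the same server and get the same offer.
        key = (service, lowest_ip)
        if key not in self._offers:
            self._offers[key] = next(self._pool)
        return self._offers[key]

    def install_route(self, service, interface_ip, loopback):
        self.routes.append((service, interface_ip, loopback))

router = Router(loopback_pool=["10.1.1.2", "10.1.1.3"])
lowest = min("10.0.0.2", "10.0.1.2", key=ip_address)  # lower of the two interface IPs
lb1 = router.offer_loopback("XYZ", lowest)  # discovery from interface 214
lb2 = router.offer_loopback("XYZ", lowest)  # discovery from interface 216
router.install_route("XYZ", "10.0.0.2", lb1)
router.install_route("XYZ", "10.0.1.2", lb2)
```

Both interfaces end up with the same loopback address, “10.1.1.2,” and the router holds one route to the service through each interface, as in Wells's FIG. 2C discussion.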
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Berenberg and Wells before him or her, to modify the system of Berenberg to include the additional limitations of determining a first internet protocol (IP) address associated with the service endpoint device; determining an outgoing interface associated with the service endpoint device, the outgoing interface being configured to transmit network traffic to the service; installing, by the network controller, a second IP address associated with the computing resource network; and installing, by the network controller, a route in association with the service, the route being configured to transmit packets addressed to the first IP address through the outgoing interface and to the second IP address, as disclosed in Wells, with reasonable expectation that this would result in a system with additional redundancy techniques for traffic destined for a [backend] service. That is, because router 220 has two installed routes to service 212, one through interface 214 and another through interface 216, router 220 may implement Equal-Cost Multipath (ECMP) routing load balancing and link redundancy techniques for traffic destined for service 212. For example, Wells teaches that if both interfaces 214 and 216 are operational, traffic received at router 220 destined for service 212 may be sent through either of interfaces 214 and 216 according to ECMP techniques. Furthermore, if either of interfaces 214 or 216 becomes inoperable, the other interface may serve as a redundant path through which traffic may be transmitted from router 220 to server 210 (See Wells, paragraph [0029]). This method of improving the architecture of Berenberg was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Wells. 
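The ECMP behavior invoked in the rationale above (traffic may use either installed route while both interfaces are operational, and fails over to the surviving interface when one becomes inoperable) can be sketched as follows; the class name and the per-flow hashing choice are hypothetical, not from Wells:

```python
import zlib

class EcmpRouter:
    """Sketch of ECMP selection over two installed routes with failover."""
    def __init__(self, next_hops):
        self.next_hops = list(next_hops)
        self.up = {hop: True for hop in next_hops}

    def next_hop(self, flow_key):
        live = [hop for hop in self.next_hops if self.up[hop]]
        if not live:
            raise RuntimeError("no route to service")
        # A per-flow hash keeps a given flow pinned to one path while both
        # paths are live, a common ECMP strategy.
        return live[zlib.crc32(flow_key.encode()) % len(live)]

router = EcmpRouter(["10.0.0.2", "10.0.1.2"])    # routes via interfaces 214 and 216
pinned = router.next_hop("client-A:443")
router.up["10.0.0.2"] = False                     # interface 214 becomes inoperable
failover = router.next_hop("client-A:443")        # redundant path takes over
```

While both routes are up the same flow always maps to the same interface; once one interface goes down, all traffic shifts to the other, giving the increased bandwidth and redundancy the rationale cites.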
Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of Berenberg with Wells to obtain the invention as specified in claim 16. Wells does not explicitly disclose installing, by the network controller, a second IP address in association with a service hub associated with the computing resource network. However in an analogous art, Schornig discloses installing, by a network controller, a second IP address in association with a service hub associated with a computing resource network (wherein with respect to a software-defined WAN (SD-WAN), Schornig teaches that a router 110 may provide connectivity to software-as-a-service (SaaS) provider(s) 308 via tunnels across any number of networks 306. This allows clients located in the LAN of a remote site 302 to access cloud applications (e.g., Office 365™, Dropbox™, etc.) served by SaaS provider(s) 308. Schornig further teaches that a third interface (Int 3) of a router 110, located at the edge of remote site 302, (See in particular, network deployment 310 illustrated in FIG. 3B) may establish a third path to SaaS provider(s) 308 via a private corporate network 306c (e.g., an MPLS network) to a private data center or regional hub 304 which, in turn, provides connectivity to SaaS provider(s) 308 via another network, such as a third ISP 306d. Schornig further discloses that a SDN controller 408 (See also FIG. 4A) may oversee the operations of routers 110a-110b in SD-WAN service point 406 and SD-WAN fabric 404, and may be responsible for monitoring the operations thereof, promulgating policies (e.g., security policies, etc.), installing or adjusting IPsec routes/tunnels between LAN core 402 and remote destinations such as the regional hub 304 and/or SaaS provider(s) 308 in FIGS. 3A-3B, and the like. 
As such, Schornig teaches configuring IP addresses on regional hub 304, as installing IPsec routes/tunnels involves installing/configuring IP addresses, as readily understood by the skilled artisan) (Schornig, FIGS. 3B and 4A, col. 7, ll. 41-46, col. 7, l. 65-col. 8, l. 3, col. 8, ll. 21-32). Berenberg is analogous art because Berenberg is reasonably pertinent to the particular problem with which the inventor was concerned, as one of the aspects of the disclosure of Berenberg is directed towards scaling network endpoint groups (NEGs), particularly by adding a network endpoint, as well as assigning an associated IP address thereto, and updating forwarding rules based on the assigned IP address (See Berenberg, col. 3, ll. 7-21), while Wells and Schornig are analogous art, because Wells and Schornig are from the same problem solving area, namely, programming and adding routes/paths for a service executing on a service endpoint device (See Wells, paragraph [0011], Schornig, col. 2, ll. 4-17). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Berenberg-Wells and Schornig before him or her, to modify the system of Berenberg-Wells to include the additional limitation of installing, by a network controller, a second IP address in association with a service hub associated with a computing resource network, as disclosed in Schornig, with reasonable expectation that this would result in providing connectivity from yet an additional avenue, thus further adding to the robustness of the architecture (See Schornig, col. 7, l. 65-col. 8, l. 3). This method of improving the architecture of Berenberg-Wells was well within the ordinary ability of one of ordinary skill in the art based on the teachings of Schornig. 
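The SD-WAN arrangement summarized above, in which an SDN controller installs routes/tunnels over multiple paths including a third path through a regional hub, might be sketched as follows. All names (`SdnController`, `paths_to_saas`) and addresses are invented for illustration and are not Schornig's:

```python
class SdnController:
    """Sketch of a controller installing routes/tunnels per egress path."""
    def __init__(self):
        self.routes = {}   # path name -> (egress interface, next-hop IP)

    def install(self, path, interface, next_hop_ip):
        self.routes[path] = (interface, next_hop_ip)

def paths_to_saas(ctrl):
    """All avenues of connectivity, including the hub path, for robustness."""
    return sorted(ctrl.routes)

ctrl = SdnController()
ctrl.install("isp-a", "Int 1", "203.0.113.1")    # direct path via a first ISP
ctrl.install("isp-b", "Int 2", "203.0.113.65")   # direct path via a second ISP
ctrl.install("hub", "Int 3", "198.51.100.4")     # tunnel to the regional hub
```

The hub entry carries its own next-hop IP, loosely paralleling the rationale that configuring the hub path adds a further avenue of connectivity.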
Therefore, it would have been obvious to one having ordinary skill in the art to combine the teachings of Berenberg-Wells with Schornig to obtain the invention as specified in claim 16. Claims 17, 18, 19 and 20 are “system” claims that perform limitations substantially as recited in claims 2, 4, 6 and 5, respectively, and do not appear to contain any additional features with regard to novelty and/or nonobviousness; therefore, they are rejected under the same rationale.

Allowable Subject Matter

11. Claims 7 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

12. Further references of interest are cited on Form PTO-892, which is an attachment to this Office Action. For instance, NAVON (USPGPUB 2023/0117218) discloses a system and method for cloud-edge forwarding in a network. A packet is received via a first network interface of a first network device in an underlay network, the packet having been originated by a first endpoint device and including a first network address indicating a destination of the first packet. The first network device, without analyzing the first network address in the first packet, adds, to the first packet, a second network address corresponding to a cloud edge network device implemented at the cloud edge and information identifying the first network interface via which the first packet was received by the first network device. The first network device transmits the packet, via an overlay network layered over the underlay network, to the cloud edge network device to enable forwarding of the packet to the destination of the packet, based on the first network address included in the packet, by the cloud edge network device (See Abstract).

13. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KOSTAS J. 
KATSIKIS whose telephone number is (571)270-5434. The examiner can normally be reached Monday-Friday, 9:00am-5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian J. Gillis can be reached at 571-272-7952. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /KOSTAS J KATSIKIS/Primary Examiner, Art Unit 2441

Prosecution Timeline

Mar 18, 2024
Application Filed
Feb 04, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596780
TECHNIQUES TO PERFORM DYNAMIC CALL CENTER AUTHENTICATION UTILIZING A CONTACTLESS CARD
2y 5m to grant Granted Apr 07, 2026
Patent 12592840
BLOCKCHAIN-BASED DATA PROCESSING METHOD, DEVICE, AND READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12566624
COMMUNICATION BETWEEN CONTROL PLANES IN A VIRTUALIZED COMPUTING SYSTEM HAVING AN AUTONOMOUS CLUSTER
2y 5m to grant Granted Mar 03, 2026
Patent 12568112
DISTRIBUTED DENIAL OF SERVICE (DDOS) BASED ACCELERATED SOLUTION
2y 5m to grant Granted Mar 03, 2026
Patent 12563051
Assistance method for managing a cyber attack, and device and system thereof
2y 5m to grant Granted Feb 24, 2026
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+28.9%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 758 resolved cases by this examiner. Grant probability derived from career allow rate.
