DETAILED ACTION
Claims 1, 11 and 17 have been amended.
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/12/25 has been entered.
Response to Arguments
Applicant’s arguments with respect to the 103 rejection of claims 1, 11 and 17 (see applicant’s remarks; pages 7-9) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
In particular, the examiner has introduced Tracy to disclose the amended limitations, as shown in the rejection below.
The applicant states the same argument for the corresponding dependent claims (see applicant’s remarks; page 9). As such, the argument is considered moot for the same reason discussed above.
Claim Interpretation
Regarding claim 18, the claim recites alternative language, i.e. using the term “or”, and as such, the Examiner interprets certain features as not being required due to the claim language listing the features in the alternative. The rejection below specifies the particular limitations.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 10, the limitation “wherein the public load balancer and the gateway load balancer are implemented within a single network device” (emphasis added) is recited. However, claim 1 from which claim 10 depends recites "providing the unprocessed data packets from the public load balancer of the first virtual network to a gateway load balancer of a second virtual network...via an external encapsulation tunnel" (emphasis added).
As such, claim 10 is rendered indefinite and unclear since the "encapsulation tunnel" cannot be "external" when the public load balancer and gateway load balancer are a "single network device" as recited in claim 10.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Rolando et al. (U.S. 2021/0314277 A1) (applicant admitted prior art; see IDS filed 05/08/23) in view of Mankad et al., “Centralized inspection architecture with AWS Gateway Load Balancer and AWS Transit Gateway” (NPL), and Tracy et al. (U.S. 2023/0396539 A1), and further in view of Shen et al. (U.S. 2021/0314239 A1).
Regarding claims 1 and 17, Rolando discloses a computer-implemented method and system for transparently inserting network virtual appliances into a networking service chain comprising:
identifying unprocessed data packets at a public load balancer of a first virtual network that provides data packets to one or more virtual machines of a cloud computing system (see Rolando; paragraphs 0055, 0060, 0064, 0069, 0221 and 0222; Rolando discloses a data message in the form of IP packets received based on a load balancing operation from an external network, i.e. “public load balancer of a first virtual network”, as part of a processing pipeline for service to virtual machines implemented in a cloud environment, i.e. “provides data packets to one or more virtual machines”. It is determined, i.e. “identifying”, that the data message requires a service and is sent for processing. Therefore, the data message is not processed yet, i.e. “unprocessed data packets”); and
sending the processed data packets from the public load balancer of the first virtual network to the one or more virtual machines (see Rolando; paragraphs 0069, 0221, 0222 and 0246; Rolando discloses identifying the virtual machine, and after the data message is sent to be processed, i.e. “processed data packets”, the load balancer sends the encapsulated data message over the network, i.e. “the first virtual network”, to the client virtual machine).
While Rolando discloses the data message being encapsulated via a tunnel (see Rolando; paragraphs 0076 and 0219-0221), Rolando does not explicitly disclose intercepting, from the public load balancer, the unprocessed data packets; providing the encapsulated data packets from the gateway load balancer to a network virtual appliance to generate processed data packets; and causing the processed data packets to be transmitted to the public load balancer via the external encapsulation tunnel.
In analogous art, Mankad discloses intercepting, from the public load balancer, the unprocessed data packets (see Mankad; page 3 steps 1, 4, 5 and 5a; Mankad discloses a VPC communicates with a resource on the internet and sends traffic, i.e. “data packets”, to a transit gateway, i.e. “public load balancer”. A GWLB, e.g. gateway load balancer, receives, i.e. “intercepting...”, and encapsulates the traffic using GENEVE, e.g. generic network virtualization encapsulation, i.e. “via an external encapsulation tunnel”, before sending the traffic to an appliance, such as a firewall. Therefore, the traffic is “unprocessed” since it has not been sent to the firewall yet);
providing the encapsulated data packets from the gateway load balancer to a network virtual appliance to generate processed data packets (see Mankad; pages 3 and 4 steps 5, 5a and 6; Mankad discloses the GWLB, i.e. “the gateway load balancer”, forwards the encapsulated traffic to a virtual appliance, such as a firewall, to make a decision on the traffic, i.e. “generate processed data packets”); and
causing the processed data packets to be transmitted to the public load balancer via the external encapsulation tunnel (see Mankad; page 4 steps 7-10; Mankad discloses the virtual appliance, e.g. firewall, re-encapsulates, i.e. “via the external encapsulation tunnel”, the traffic, i.e. “the processed data packets”, and forwards the traffic to the GWLB, which further forwards the traffic to an endpoint, e.g. GWLBE, and then the traffic is routed to the gateway, i.e. “public load balancer”. The examiner notes this interpretation is supported by the applicant’s specification where it states that the appliance system returns the processed data packets to the public load balancer via the gateway load balancer; see applicant’s specification as filed; paragraph 0058).
One of ordinary skill in the art would have been motivated to combine Rolando and Mankad because they both disclose features of load balancing, and as such are within the same environment.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the feature of a gateway load balancer as taught by Mankad into the system of Rolando in order to provide the benefit of efficiency by allowing the service insertion pre-processor which intercepts the data message for forwarding to service nodes for processing (see Rolando; paragraphs 0221-0223 and 0246) to be implemented as a gateway load balancer thereby allowing deploying, scaling and running virtual appliances easier (see Mankad; page 1).
While Mankad discloses “intercepting, from the public load balancer of the first virtual network, the unprocessed data packets”, “providing the encapsulated data packets from the gateway load balancer to a network virtual appliance…” and “causing the processed data packets to be transmitted to the public load balancer via the external encapsulation tunnel”, as discussed above, the combination of Rolando and Mankad does not explicitly disclose encapsulating the unprocessed data packets intercepted at the public load balancer; and providing the unprocessed data packets from the public load balancer of the first virtual network to a gateway load balancer of a second virtual network as encapsulated data packets via an external encapsulation tunnel.
In analogous art, Tracy discloses encapsulating the unprocessed data packets intercepted at the public load balancer (see Tracy; paragraphs 0055, 0121 and 0242; Tracy discloses a customer network includes public compute instances, such as a load balancer, i.e. “public load balancer”. Intercepting customer network packets, i.e. “unprocessed data packets” since no function has been done to the packets yet, then encapsulating the customer network packets before they are traversed. The network packets are associated with the compute instances, such as, the load balancer, i.e. “intercepted at the public load balancer”);
providing the unprocessed data packets from the public load balancer of the first virtual network to a gateway load balancer of a second virtual network as encapsulated data packets via an external encapsulation tunnel (see Tracy; paragraphs 0055, 0077, 0121, 0169 and 0207; Tracy discloses the public load balancer of the customer network, i.e. “first virtual network”, providing packets and intercepting the customer’s network packets, i.e. “unprocessed data packets from the public load balancer of the first virtual network”, using tunneling for encapsulation, such as generic routing encapsulation, i.e. “external encapsulation tunnel”. For example, sending the packets to a load balancer that may be a gateway, i.e. “gateway load balancer”, of a service virtual cloud network, i.e. “a second virtual network”, using the encapsulation, i.e. “…as encapsulated data packets via an external encapsulation tunnel”).
One of ordinary skill in the art would have been motivated to combine Rolando, Mankad and Tracy because they all disclose features of load balancing, and as such are within the same environment.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the feature of providing packets from a customer network to a service network as taught by Tracy into the combined system of Rolando and Mankad in order to provide the benefit of efficiency by allowing the service insertion pre-processor which intercepts the data message for forwarding to service nodes for processing (see Rolando; paragraphs 0221-0223 and 0246) to be implemented as a compute instance that can communicate with various different endpoints, such as a compute instance in a customer network to an endpoint in a service network (see Tracy; paragraphs 0055, 0077 and 0207), thus providing communication across different networks.
While Mankad discloses “…to generate processed data packets”, and Tracy discloses “the public load balancer of the first virtual network”, as discussed above, the combination of Rolando, Mankad and Tracy does not explicitly disclose causing the processed data packets to be transmitted from the second virtual network to the public load balancer of the first virtual network.
In analogous art, Shen discloses causing the processed data packets to be transmitted from the second virtual network to the public load balancer of the first virtual network (see Shen; paragraphs 0074, 0083, 0091 and 0097; Shen discloses packet processing at the logical networks and the VPC gateway forwarding packets directly to the other gateway, such as, from the default virtual network, i.e. “the second virtual network”, to the public virtual network, i.e. “the first virtual network”).
One of ordinary skill in the art would have been motivated to combine Rolando, Mankad, Tracy and Shen because they all disclose features of load balancing, and as such are within the same environment.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the feature of a gateway load balancer as taught by Shen into the combined system of Rolando, Mankad and Tracy in order to provide the benefit of efficiency by allowing the service insertion pre-processor which intercepts the data message for forwarding to service nodes for processing (see Rolando; paragraphs 0221-0223 and 0246) to be implemented as a gateway load balancer thereby allowing deploying, scaling and running virtual appliances easier (see Mankad; page 1) across different virtual network types to achieve the most efficiency by robustly redistributing across the different virtual networks (see Shen; paragraphs 0097 and 0114).
Further, Rolando discloses the additional limitations of claim 17, at least one processor (see Rolando; paragraph 0251; Rolando discloses single or multi-core processors); and a non-transitory computer memory comprising instructions (see Rolando; paragraph 0252; Rolando discloses a system memory stores some instructions and data that the processor needs at runtime).
Regarding claim 2, Rolando, Mankad, Tracy and Shen disclose all the limitations of claim 1, as discussed above, and further the combination of Rolando, Mankad, Tracy and Shen clearly discloses unencapsulating the processed data packets transmitted to the public load balancer to generate unencapsulated processed data packets (see Mankad; page 4 steps 6 and 7; Mankad discloses decapsulating, i.e. “unencapsulating”, and re-encapsulating the traffic, i.e. “processed data packets”),
wherein sending the processed data packets from the public load balancer to the one or more virtual machines comprises sending the unencapsulated processed data packets from the public load balancer to the one or more virtual machines without the one or more virtual machines detecting that the processed data packets were processed by the network virtual appliance (see Rolando; paragraphs 0069, 0222, 0223 and 0246; Rolando discloses identifying the virtual machine, and after the data message is sent to be processed, i.e. “processed data packets”, a tag is removed once the data message has completed processing then the load balancer sends the data message to the client virtual machine. In other words, by the tag being removed after completed processing and before being sent to the virtual machine, the virtual machine would not “detect that the processed data packets were processed by the network virtual appliance”).
The prior art used in the rejection of the current claim is combined using the same motivation as was applied in claim 1.
Regarding claim 3, Rolando, Mankad, Tracy and Shen disclose all the limitations of claim 1, as discussed above, and further the combination of Rolando, Mankad, Tracy and Shen clearly discloses wherein intercepting the unprocessed data packets comprises providing the unprocessed data packets from the public load balancer to a private network address of the gateway load balancer via the external encapsulation tunnel (see Mankad; page 3 steps 3 and 4; Mankad discloses receiving the traffic before it is sent to an appliance, i.e. the traffic is “unprocessed data”, and using a route table and private link, i.e. “private network address”, to provide the traffic to the GWLB).
The prior art used in the rejection of the current claim is combined using the same motivation as was applied in claim 1.
Regarding claim 4, Rolando, Mankad, Tracy and Shen disclose all the limitations of claim 1, as discussed above, and further the combination of Rolando, Mankad, Tracy and Shen clearly discloses providing an additional set of unprocessed data packets from the gateway load balancer to the network virtual appliance via the external encapsulation tunnel (see Mankad; page 3 and 4 steps 5, 5a and 6 and Figure 2; Mankad discloses the GWLB, i.e. “the gateway load balancer”, forwards the encapsulated traffic, which can come from different availability zones, i.e. therefore “an additional set of unprocessed data packets”, to a virtual appliance, such as a firewall, to make a decision on the traffic, i.e. “generate processed data packets”); and
determining to drop the additional set of unprocessed data packets based on the network virtual appliance processing the additional set of unprocessed data packets (see Rolando; paragraphs 0083, 0129 and 0154; Rolando discloses dropping subsequent data messages, i.e. “unprocessed data packets”, after processing).
The prior art used in the rejection of the current claim is combined using the same motivation as was applied in claim 1.
Regarding claim 5, Rolando, Mankad, Tracy and Shen disclose all the limitations of claim 1, as discussed above, and further the combination of Rolando, Mankad, Tracy and Shen clearly discloses receiving the processed data packets from the network virtual appliance at the gateway load balancer (see Mankad; page 4 steps 6 and 7; Mankad discloses the GWLB receiving the traffic from the virtual appliance); and
providing the processed data packets from the gateway load balancer to an additional network virtual appliance for additional processing, wherein the additional network virtual appliance provides different packet processing from the network virtual appliance (see Mankad; page 4 steps 8 and 9; Mankad discloses the GWLB routes the traffic, i.e. “the processed data packets”, to a NAT gateway, i.e. “an additional network virtual appliance”. As is known, a NAT gateway provides “different packet processing” than a firewall).
The prior art used in the rejection of the current claim is combined using the same motivation as was applied in claim 1.
Regarding claim 6, Rolando, Mankad, Tracy and Shen disclose all the limitations of claim 1, as discussed above, and further the combination of Rolando, Mankad, Tracy and Shen clearly discloses generating an internal encapsulation tunnel for encapsulating sets of data packets between the public load balancer and the gateway load balancer for the sets of data packets initiated at a virtual machine of the one or more virtual machines (see Rolando; paragraphs 0076, 0077, 0110, 0243 and 0246; Rolando discloses redirecting the data message, i.e. “data packets”, from a virtual machine, as encapsulated for transport, i.e. “via an internal encapsulation tunnel”, across an intervening network to a load balancer, i.e. “gateway load balancer”).
Regarding claim 7, Rolando, Mankad, Tracy and Shen disclose all the limitations of claim 1, as discussed above, and further the combination of Rolando, Mankad, Tracy and Shen clearly discloses wherein intercepting the unprocessed data packets comprises redirecting sets of unprocessed data packets from a plurality of public load balancers associated with one or more public IP addresses to the gateway load balancer (see Rolando; paragraphs 0076, 0077, 0110, 0243 and 0246; Rolando discloses redirecting data messages, i.e. “sets of unprocessed data packets”, as encapsulated for transport, i.e. “via an internal encapsulation tunnel”, across an intervening network to a load balancer, i.e. “gateway load balancer”).
Regarding claim 8, Rolando, Mankad, Tracy and Shen disclose all the limitations of claim 1, as discussed above, and further the combination of Rolando, Mankad, Tracy and Shen clearly discloses identifying additional unprocessed data packets at an additional public load balancer of an additional cloud computing system that differs from the cloud computing system (see Rolando; paragraphs 0055, 0060, 0064, 0069, 0221 and 0222; Rolando discloses data messages from different sources, i.e. “additional unprocessed data”, in the form of IP packets received based on a load balancing operation from an external network, i.e. “public load balancer”, as part of a processing pipeline for service to virtual machines implemented in a cloud environment, i.e. “provides data packets to one or more virtual machines”. It is determined, i.e. “identifying”, that the data messages require a service and are sent for processing. Therefore, the data messages are not processed yet, i.e. “unprocessed data packets”);
intercepting the additional unprocessed data packets from the additional public load balancer at an additional gateway load balancer (see Mankad; page 3 steps 1, 4, 5 and 5a; Mankad discloses a VPC communicates with a resource on the internet and sends traffic, i.e. “data packets”, to a transit gateway, i.e. “public load balancer”. A GWLB, e.g. gateway load balancer, receives, i.e. “intercepting...”, and encapsulates the traffic using GENEVE, e.g. generic network virtualization encapsulation, i.e. “via an external encapsulation tunnel”, before sending the traffic to an appliance, such as a firewall. Therefore, the traffic is “unprocessed” since it has not been sent to the firewall yet);
providing the additional unprocessed data packets to the network virtual appliance for processing of the data packets to generate additional processed data packets (see Mankad; pages 3 and 4 steps 5, 5a and 6; Mankad discloses the GWLB, i.e. “the gateway load balancer”, forwards the encapsulated traffic to a virtual appliance, such as a firewall, to make a decision on the traffic, i.e. “generate additional processed data packets”);
causing the additional processed data packets to be transmitted to the additional public load balancer (see Mankad; page 4 steps 7-10; Mankad discloses the virtual appliance, e.g. firewall, re-encapsulates, i.e. “via the external encapsulation tunnel”, the traffic, i.e. “the additional processed data packets”, and forwards the traffic to the GWLB, which further forwards the traffic to an endpoint, e.g. GWLBE, and then the traffic is routed to the gateway, i.e. “public load balancer”); and
sending the additional processed data packets from the additional public load balancer to one or more additional virtual machines of the additional cloud computing system (see Rolando; paragraphs 0069, 0222 and 0246; Rolando discloses identifying the virtual machine, and after the data message is sent to be processed, i.e. “additional processed data packets”, the load balancer sends the data message to the client virtual machine).
The prior art used in the rejection of the current claim is combined using the same motivation as was applied in claim 1.
Regarding claim 9, Rolando, Mankad, Tracy and Shen disclose all the limitations of claim 1, as discussed above, and further the combination of Rolando, Mankad, Tracy and Shen clearly discloses reconfiguring the network virtual appliance via an administrator device that is separate from the cloud computing system, wherein reconfiguring the network virtual appliance does not reconfigure the public load balancer and the one or more virtual machines (see Rolando; paragraphs 0100, 0120, 0129 and 0187; Rolando discloses an administrator configures, through a user interface, i.e. “via an administrator device”, a firewall to be skipped and provides service chain definitions. Further, the load balancing mechanism is not reconfigured, i.e. “wherein reconfiguring…does not reconfigure the public load balancer…”).
Regarding claim 10, Rolando, Mankad, Tracy and Shen disclose all the limitations of claim 1, as discussed above, and further the combination of Rolando, Mankad, Tracy and Shen clearly discloses wherein the public load balancer and the gateway load balancer are implemented within a single network device (see Mankad; page 3 Figure 2; Mankad discloses the transit gateway and GWLB can be within the same device).
The prior art used in the rejection of the current claim is combined using the same motivation as was applied in claim 1.
Regarding claim 11, Rolando discloses a computer-implemented method for transparently inserting network virtual appliances into a networking service chain comprising:
identifying data packets at a public load balancer from a virtual machine of a cloud computing system to be sent to an external computing device that is external to the cloud computing system (see Rolando; paragraphs 0055, 0060, 0064, 0069, 0221 and 0222; Rolando discloses a data message in the form of IP packets received based on a load balancing operation from an external network, i.e. “public load balancer”, as part of a processing pipeline for service to virtual machines implemented in a cloud environment, i.e. “a virtual machine of a cloud computing system”);
sending the processed data packets from the gateway load balancer of the second virtual network to the external computing device (see Rolando; paragraphs 0069, 0222 and 0246; Rolando discloses identifying the client and after the data message is sent to be processed, i.e. “processed data packets”, the load balancer of the network, i.e. “second virtual network”, sends the data message to the client, i.e. “external computing device”).
While Rolando discloses the data message being encapsulated via a tunnel (see Rolando; paragraphs 0076 and 0219-0221), Rolando does not explicitly disclose providing the encapsulated data packets from the gateway load balancer to a network virtual appliance to generate processed data packets; and causing the processed data packets to be transmitted to the gateway load balancer via the internal encapsulation tunnel.
In analogous art, Mankad discloses providing the encapsulated data packets from the gateway load balancer to a network virtual appliance to generate processed data packets (see Mankad; pages 3 and 4 steps 5, 5a and 6; Mankad discloses a GWLB, i.e. “the gateway load balancer”, forwards the encapsulated traffic to a virtual appliance, such as a firewall, to make a decision on the traffic, i.e. “generate processed data packets”); and
causing the processed data packets to be transmitted to the gateway load balancer via the internal encapsulation tunnel (see Mankad; page 4 steps 7-10; Mankad discloses the virtual appliance, e.g. firewall, re-encapsulates, i.e. “via the internal encapsulation tunnel”, the traffic, i.e. “the processed data packets”, and forwards the traffic to the GWLB, which further forwards the traffic to an endpoint, e.g. GWLBE, and then the traffic is routed to the gateway, i.e. “public load balancer”. The examiner notes this interpretation is supported by the applicant’s specification where it states that the appliance system returns the processed data packets to the public load balancer via the gateway load balancer; see applicant’s specification as filed; paragraph 0058).
One of ordinary skill in the art would have been motivated to combine Rolando and Mankad because they both disclose features of load balancing, and as such are within the same environment.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the feature of a gateway load balancer as taught by Mankad into the system of Rolando in order to provide the benefit of efficiency by allowing the service insertion pre-processor which intercepts the data message for forwarding to service nodes for processing (see Rolando; paragraphs 0221-0223 and 0246) to be implemented as a gateway load balancer thereby allowing deploying, scaling and running virtual appliances easier (see Mankad; page 1).
While Rolando discloses redirecting the data message as encapsulated for transport across an intervening network to a load balancer of another network (see Rolando; paragraphs 0076, 0077, 0110, 0243, and 0246), the combination of Rolando and Mankad does not explicitly disclose encapsulating the data packets intercepted at the public load balancer; and redirecting the encapsulated data packets via an internal encapsulation tunnel from the public load balancer of the first virtual network to a gateway load balancer of a second virtual network.
In analogous art, Tracy discloses encapsulating the data packets intercepted at the public load balancer (see Tracy; paragraphs 0055, 0121 and 0242; Tracy discloses a customer network includes public compute instances, such as a load balancer, i.e. “public load balancer”. Intercepting customer network packets, i.e. “data packets”, then encapsulating the customer network packets before they are traversed. The network packets are associated with the compute instances, such as, the load balancer, i.e. “intercepted at the public load balancer”); and
redirecting the encapsulated data packets via an internal encapsulation tunnel from the public load balancer of the first virtual network to a gateway load balancer of a second virtual network (see Tracy; paragraphs 0055, 0077, 0121, 0169 and 0207; Tracy discloses the public load balancer of the customer network, i.e. “first virtual network”, providing packets and intercepting the customer’s network packets, i.e. “the encapsulated data packets from the public load balancer of the first virtual network”, using tunneling for encapsulation, such as generic routing encapsulation, i.e. “internal encapsulation tunnel”. For example, sending the packets to a load balancer that may be a gateway, i.e. “gateway load balancer”, of a service virtual cloud network, i.e. “a second virtual network”, using the encapsulation, i.e. “…via an internal encapsulation tunnel”).
One of ordinary skill in the art would have been motivated to combine Rolando, Mankad and Tracy because they all disclose features of load balancing, and as such are within the same environment.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the feature of providing packets from a customer network to a service network as taught by Tracy into the combined system of Rolando and Mankad in order to provide the benefit of efficiency by allowing the service insertion pre-processor which intercepts the data message for forwarding to service nodes for processing (see Rolando; paragraphs 0221-0223 and 0246) to be implemented as a compute instance that can communicate with various different endpoints, such as a compute instance in a customer network to an endpoint in a service network (see Tracy; paragraphs 0055, 0077 and 0207), thus providing communication across different networks.
While Mankad discloses “…to generate processed data packets”, and Tracy discloses “the gateway load balancer of the second virtual network”, as discussed above, the combination of Rolando and Mankad does not explicitly disclose the public load balancer of the first virtual network; a gateway load balancer of a second virtual network; and causing the processed data packets to be transmitted to the gateway load balancer of the second virtual network.
In analogous art, Shen discloses causing the processed data packets to be transmitted to the gateway load balancer of the second virtual network (see Shen; paragraphs 0074, 0083, 0091 and 0097; Shen discloses packet processing at the logical networks and forwarding packets directly to the other gateway, such as to the default virtual network, i.e. “the second virtual network”).
One of ordinary skill in the art would have been motivated to combine Rolando, Mankad, Tracy and Shen because they all disclose features of load balancing, and as such are within the same environment.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the feature of a gateway load balancer as taught by Shen into the combined system of Rolando, Mankad and Tracy in order to provide the benefit of efficiency by allowing the service insertion pre-processor, which intercepts the data message for forwarding to service nodes for processing (see Rolando; paragraphs 0221-0223 and 0246), to be implemented as a gateway load balancer, thereby making it easier to deploy, scale and run virtual appliances (see Mankad; page 1) across different virtual network types to achieve the most efficiency by robustly redistributing across the different virtual networks (see Shen; paragraphs 0097 and 0114).
Regarding claim 12, Rolando, Mankad, Tracy and Shen disclose all the limitations of claim 11, as discussed above, and further the combination of Rolando, Mankad, Tracy and Shen clearly discloses identifying an additional set of data packets at the public load balancer from the virtual machine to be sent to the external computing device (see Rolando; paragraphs 0055, 0060, 0064, 0069, 0221 and 0222; Rolando discloses data messages from different sources, i.e. “additional set of data packets”, in the form of IP packets that are received based on a load balancing operation from an external network, i.e. “public load balancer”, as part of a processing pipeline for service to virtual machines implemented in a cloud environment, i.e. “provides data packets to one or more virtual machines”. It is determined, i.e. “identifying”, that the data messages require a service and they are sent for processing);
providing the additional set of data packets from the gateway load balancer that intercepts the additional set of data packets to the network virtual appliance (see Mankad; page 3 steps 1, 4, 5 and 5a; Mankad discloses that a couple of VPCs can communicate with a resource on the internet and send traffic, i.e. “additional set of data packets”, to a transit gateway, i.e. “public load balancer”. A GWLB, i.e. gateway load balancer, receives, i.e. “intercepting...”, and encapsulates the traffic using GENEVE, i.e. generic network virtualization encapsulation, before sending the traffic to an appliance, such as a firewall);
retrieving requested content from a local storage device based on the network virtual appliance processing the additional set of data packets (see Rolando; paragraph 0245; Rolando discloses the GVM provides content, i.e. “retrieving requested content…”); and
returning the requested content to the virtual machine without sending the processed data packets to the external computing device (see Rolando; paragraph 0245; Rolando discloses the content is provided, i.e. “returning the requested content…”, by the GVM).
The prior art used in the rejection of the current claim is combined using the same motivation as was applied in claim 11.
Regarding claim 13, Rolando, Mankad, Tracy and Shen disclose all the limitations of claim 11, as discussed above, and further the combination of Rolando, Mankad, Tracy and Shen clearly discloses receiving the processed data packets from the network virtual appliance at the gateway load balancer (see Mankad; page 4 steps 6 and 7; Mankad discloses the GWLB receiving the traffic from the virtual appliance); and
providing the processed data packets to an additional network virtual appliance for additional processing, wherein the additional network virtual appliance provides different packet processing from the network virtual appliance (see Mankad; page 4 steps 8 and 9; Mankad discloses the GWLB routes the traffic, i.e. “the processed data packets”, to a NAT gateway, i.e. “an additional network virtual appliance”. As is known, a NAT gateway provides “different packet processing” than a firewall).
The prior art used in the rejection of the current claim is combined using the same motivation as was applied in claim 11.
Regarding claim 14, Rolando, Mankad, Tracy and Shen disclose all the limitations of claim 11, as discussed above, and further the combination of Rolando, Mankad, Tracy and Shen clearly discloses wherein sending the processed data packets comprises sending the processed data packets from the gateway load balancer via the public load balancer (see Rolando; paragraphs 0069, 0222 and 0246; Rolando discloses identifying the virtual machine, and after the data message is sent to be processed, i.e. “processed data packets”, the load balancer sends the data message to the client virtual machine).
Regarding claim 15, Rolando, Mankad, Tracy and Shen disclose all the limitations of claim 11, as discussed above, and further the combination of Rolando, Mankad, Tracy and Shen clearly discloses generating an external encapsulation tunnel for encapsulating sets of data packets between the public load balancer and the gateway load balancer for the sets of data packets received at the public load balancer from computing devices that are external to the cloud computing system (see Mankad; page 3 steps 1, 4, 5 and 5a; Mankad discloses that a VPC communicates with a resource on the internet and sends traffic, i.e. “data packets”, to a transit gateway, i.e. “public load balancer”. A GWLB, i.e. gateway load balancer, receives, i.e. “intercepting...”, and encapsulates the traffic using GENEVE, i.e. generic network virtualization encapsulation, i.e. “via an external encapsulation tunnel”, before sending the traffic to an appliance, such as a firewall. Therefore, the traffic is “unprocessed” since it has not yet been sent to the firewall).
The prior art used in the rejection of the current claim is combined using the same motivation as was applied in claim 11.
Regarding claim 16, Rolando, Mankad, Tracy and Shen disclose all the limitations of claim 11, as discussed above, and further the combination of Rolando, Mankad, Tracy and Shen clearly discloses removing the gateway load balancer from intercepting sets of data packets without disrupting data packet traffic flow between the public load balancer and the virtual machine (see Mankad; page 3 steps 2 and 3, and pages 5-7 and Figure 4; Mankad discloses configurations in which the GWLB is a routable target and configurations in which it is not. In other words, the GWLB can be removed from being routed to).
The prior art used in the rejection of the current claim is combined using the same motivation as was applied in claim 11.
Regarding claim 18, Rolando, Mankad, Tracy and Shen disclose all the limitations of claim 17, as discussed above, and further the combination of Rolando, Mankad, Tracy and Shen clearly discloses wherein the one or more network virtual appliances comprise a firewall (see Mankad; page 4 step 6; Mankad discloses the virtual appliance as a firewall), a cache, a packet duplicator, a threat detector, or a deep packet inspector. (The claim lists features in the alternative. While the claim lists a number of optional limitations, only one limitation from the list is required and needs to be met by the prior art. The Examiner has chosen the “firewall” alternative).
The prior art used in the rejection of the current claim is combined using the same motivation as was applied in claim 17.
Regarding claim 19, Rolando, Mankad, Tracy and Shen disclose all the limitations of claim 17, as discussed above, and further the combination of Rolando, Mankad, Tracy and Shen clearly discloses additional instructions that, when executed by the at least one processor, cause the system to redirect sets of data packets from a plurality of public load balancers associated with one or more public internet protocol (IP) addresses to the gateway load balancer (see Rolando; paragraphs 0076, 0077, 0110, 0243 and 0246; Rolando discloses redirecting data messages, i.e. “sets of data packets”, as encapsulated for transport, i.e. “via an internal encapsulation tunnel”, across an intervening network to a load balancer, i.e. “gateway load balancer”).
Regarding claim 20, Rolando, Mankad, Tracy and Shen disclose all the limitations of claim 17, as discussed above, and further the combination of Rolando, Mankad, Tracy and Shen clearly discloses additional instructions that, when executed by the at least one processor, cause the system to provide data packets from a plurality of gateway load balancers associated with a plurality of cloud computing systems to the one or more network virtual appliances (see Mankad; pages 3 and 4 steps 5, 5a and 6 and Figure 2; Mankad discloses multiple GWLBs, i.e. “plurality of gateway load balancers”, that forward the encapsulated traffic, i.e. “data packets”, to a virtual appliance, such as a firewall, to make a decision on the traffic).
The prior art used in the rejection of the current claim is combined using the same motivation as was applied in claim 17.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Gadgil et al. (U.S. 2020/0159634 A1) discloses load balancing for traffic in one or more virtual networks and encapsulating the traffic.
K N et al. (U.S. 11,336,570 B1) discloses creating service chains and load balancing using tunnel encapsulation.
Vaidya et al. (U.S. 2020/0076685 A1) discloses tunnel encapsulation in virtual networks.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADAM A COONEY whose telephone number is (571)270-5653. The examiner can normally be reached M-F 7:30am-5:00pm (every other Fri off).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Umar Cheema can be reached at 571-270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.A.C/Examiner, Art Unit 2458 12/23/25
/ALINA A BOUTAH/Primary Examiner, Art Unit 2458