Prosecution Insights
Last updated: April 19, 2026
Application No. 18/645,120

ARCHITECTURE OF A MULTICLOUD NETWORK LINK

Final Rejection §102
Filed: Apr 24, 2024
Examiner: REYNOLDS, DEBORAH J
Art Unit: 2400
Tech Center: 2400 — Computer Networks
Assignee: Oracle International Corporation
OA Round: 2 (Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
With Interview: 80%

Examiner Intelligence

Career Allow Rate: 67% (111 granted / 166 resolved), +8.9% vs TC avg (above average)
Interview Lift: +13.6% among resolved cases with interview (moderate, ~+14%)
Typical Timeline: 2y 5m avg prosecution; 80 applications currently pending
Career History: 246 total applications across all art units
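The headline figures above can be reproduced from the raw counts, assuming the interview lift is additive in percentage points on top of the career allow rate (a reading the displayed 80% is consistent with). A minimal sketch; the numbers come from this report, the formula is an assumption:

```python
# Reproducing the examiner dashboard figures from the counts shown above.
# Assumption (not stated by the report): the interview lift is additive in
# percentage points on top of the career allow rate.
granted, resolved = 111, 166
allow_rate = granted / resolved           # career allow rate
interview_lift = 0.136                    # +13.6 points for cases with interview

print(f"Career allow rate: {allow_rate:.1%}")                   # 66.9%, shown as 67%
print(f"With interview:    {allow_rate + interview_lift:.1%}")  # 80.5%, shown as 80%
```

Under that reading, 111/166 ≈ 66.9% rounds to the displayed 67%, and 66.9% + 13.6 points ≈ 80.5% rounds to the displayed 80%.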

Statute-Specific Performance

§101: 6.9% (-33.1% vs TC avg)
§103: 47.6% (+7.6% vs TC avg)
§102: 19.1% (-20.9% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 166 resolved cases
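A quick cross-check on the table: subtracting each statute's "vs TC avg" delta from the examiner's rate recovers the implied Tech Center baseline, which works out to 40.0% for every statute listed. A small sketch using the values copied from the table above:

```python
# Recovering the implied Tech Center baseline from the per-statute deltas:
# tc_avg = examiner_rate - delta (all values in percentage points).
stats = {
    "101": (6.9, -33.1),
    "103": (47.6, +7.6),
    "102": (19.1, -20.9),
    "112": (17.9, -22.1),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: examiner {rate}% vs TC estimate {tc_avg:.1f}%")  # 40.0% each
```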

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This office action is in response to Applicant’s amendment filed January 7, 2026. Claims 1, 13, and 20 have been amended. Claims 2 and 14 have been cancelled. Claims 1, 3-13, and 15-20 are pending.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 3-13, and 15-20 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Choi et al. (US 20230246962, hereinafter referred to as “Choi”). The applied reference has a common assignee with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2). This rejection under 35 U.S.C.
102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C. 102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B) if the same invention is not being claimed; or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed in the reference and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement. Regarding claim 1, Choi teaches a method comprising: receiving, by a multi-cloud infrastructure included in a first cloud environment, a request to create a network link between a first customer virtual network in the first cloud environment and a second customer virtual network in a second cloud environment (figure 12C, step 1250); deploying, in the second cloud environment, a first link-enabling virtual network (abstract - the network-link is created based on one or more link-enabling virtual networks being deployed in the first cloud environment and the second cloud environment.) 
including a plurality of virtual network interface cards (VNICS), wherein the first link-enabling virtual network is communicatively coupled: (a) via a first VNIC to a first data plane hub virtual network ([0178] The multi-cloud infrastructure 720B includes a multi-cloud platform data plane 722 and multi-cloud platform data plane 728.), (b) via a second VNIC to a first control plane hub virtual network ([0178] The first cloud infrastructure 720A includes a control plane 724), and (c) via a third VNIC to the second customer virtual network, wherein the first data plane hub virtual network and the first control plane hub virtual network are deployed in the second cloud environment ([0178] the multi-cloud infrastructure 720B provisions for users of other cloud environments (e.g., the second cloud environment 710) to access services provided by the first cloud environment with a user experience as close as possible to that of the native cloud environments of the users (e.g., the second cloud environment 710), while providing simple integration between the cloud environments.); and enabling traffic to be communicated from the second customer virtual network in the second cloud environment to the first customer virtual network in the first cloud environment via the first link-enabling virtual network, wherein the first link-enabling virtual network receives network configuration information from the first control plane hub virtual network and transmits traffic to first cloud environment via the first data plane hub virtual network based on the network configuration information ([0139] The control plane functions include functions used for configuring a network (e.g., setting up routes and route tables, configuring VNICs, etc.) that controls how data is to be forwarded. 
In certain embodiments, a VCN Control Plane is provided that computes all the overlay-to-substrate mappings centrally and publishes them to the NVDs and to the virtual network edge devices such as various gateways such as the DRG, the SGW, the IGW, etc. Firewall rules may also be published using the same mechanism. In certain embodiments, an NVD only gets the mappings that are relevant for that NVD. The data plane functions include functions for the actual routing/forwarding of a packet based upon configuration set up using control plane. A VCN data plane is implemented by encapsulating the customer’s network packets before they traverse the substrate network. The encapsulation/decapsulation functionality is implemented on the NVDs. In certain embodiments, an NVD is configured to intercept all network packets in and out of host machines and perform network virtualization functions.). Regarding claim 3, Choi teaches the method of claim 1, wherein the first link-enabling virtual network includes a first pair of virtual network adaptors, each of which is configured to encapsulate traffic received from the second customer virtual network to generate encapsulated traffic ([0180] The multi-cloud infrastructure 720B included in the first cloud infrastructure 720A includes a plurality of microservices such as an authority module 722A, a proxy module 722B, a platform services module 722C, a cloud-link adaptor 722D, a pool of adaptors 722E including adaptor 1, adaptor 2, adaptor 3, and adaptor 4, and a network link adaptor 722F. The pool of adaptors 722E can include adaptors such as an Exa-data cloud service adaptor, an autonomous database-shared adaptor, an autonomous database-dedicated adaptor, and a virtual machine database adaptor.). 
Regarding claim 4, Choi teaches the method of claim 1, further comprising: encapsulating, based on the network configuration information, by the first link-enabling virtual network in the second cloud environment, traffic received from the second customer virtual network to generate encapsulated traffic ([0020] encapsulating, by a first link-enabling virtual network in the second cloud environment, traffic received from the second virtual network to generate encapsulated traffic); and forwarding, by the first link-enabling virtual network in the second cloud environment the encapsulated traffic to the first data plane hub virtual network ([0020] forwarding, by a second link-enabling virtual network in the second cloud environment, the encapsulated traffic received from the first link-enabling virtual network to a hub virtual network included in the second cloud environment). Regarding claim 5, Choi teaches the method of claim 1, further comprising: deploying in the first cloud environment, a second control plane hub virtual network including one or more distribution service nodes, wherein the second control plane hub virtual network is communicatively coupled to the first control plane hub virtual network in the second cloud environment via a public interconnect link ([0236] According to some embodiments, encapsulated traffic received by the VNIC (e.g., VNIC 1122A) included in the Hub VCN 1122 is forwarded to a third link-enabling virtual network 1123 (labeled as Spoke VCN) included in the region of the first cloud environment 1135. The third link-enabling virtual network 1123 includes a pair of virtual network adaptors 1123A (labeled as local virtual network adaptors (LVNAs)), each of which is configured to decapsulate, the encapsulated traffic received from the VNIC 1122A included in the Hub VCN 1122.). 
Regarding claim 6, Choi teaches the method of claim 5, wherein at least one of the one or more distribution service nodes is configured to transmit the network configuration information to the first control plane hub virtual network in the second cloud environment, wherein the network configuration information is forwarded to the first link-enabling virtual network via the first control plane hub virtual network, and wherein the network configuration information includes at least (i) tunnel encapsulation-decapsulation parameters ([0264] Further, the multi-cloud service VCN 1560 may setup a communication channel e.g., a tunnel (e.g., IPsec tunnel) with the multi-cloud service VNET 1510 in order to transmit the network configuration information to the outpost 1510A (included in the multi-cloud service VNET 1510), so that the network configuration information may be eventually obtained by the VNF 1505A and RVNA 1507A (via their respective private endpoints) that are each located in the second cloud environment.), and health-status information of a second link-enabling virtual network deployed in the first cloud environment ([0264] It is appreciated that the distribution service 1560D may receive information indicative of a health status corresponding to each of the RVNAs, LVNAs, and VNFs in the polling requests issued by the respective packet processors.). Regarding claim 7, Choi teaches the method of claim 1, wherein the second cloud environment includes a plurality of customer tenancies ([0324] the data plane VCN 2018 can be integrated with customer tenancies 2070.), and wherein for each customer tenancy included in the plurality of customer tenancies:—(i) the first link-enabling virtual network is created in the second cloud environment, and (ii) the second link-enabling virtual network is created in the first cloud environment (claim 3 of Choi). 
Regarding claim 8, Choi teaches The method of claim 1, further comprising: deploying, in the first cloud environment, a second link-enabling virtual network that is communicatively coupled to the first customer virtual network, the second link-enabling virtual network including a second pair of virtual network adaptors, each of which is configured to decapsulate, encapsulated traffic received from the first link-enabling virtual network included in the second cloud environment ([0236] According to some embodiments, encapsulated traffic received by the VNIC (e.g., VNIC 1122A) included in the Hub VCN 1122 is forwarded to a third link-enabling virtual network 1123 (labeled as Spoke VCN) included in the region of the first cloud environment 1135. The third link-enabling virtual network 1123 includes a pair of virtual network adaptors 1123A (labeled as local virtual network adaptors (LVNAs)), each of which is configured to decapsulate, the encapsulated traffic received from the VNIC 1122A included in the Hub VCN 1122. Further, as shown in FIG. 11, the pair of virtual network adaptors 1123A included in the third link-enabling virtual network 1123 of the first cloud environment 1135 transmit the decapsulated traffic to the customer 1 VCN 1131 (e.g., to a resource 1131A that is deployed in the customer 1 VCN 1131) via a dynamic routing gateway (DRG) attachment.). 
Regarding claim 9, Choi teaches the method of claim 8, further comprising: decapsulating, by the second link-enabling virtual network in the first cloud environment, the encapsulated traffic received from the first data plane hub virtual network included in the second cloud environment to generate decapsulated traffic, and transmitting, by the second link-enabling virtual network in the first cloud environment, the decapsulated traffic to the first customer virtual network in the first cloud environment ([0236] According to some embodiments, encapsulated traffic received by the VNIC (e.g., VNIC 1122A) included in the Hub VCN 1122 is forwarded to a third link-enabling virtual network 1123 (labeled as Spoke VCN) included in the region of the first cloud environment 1135. The third link-enabling virtual network 1123 includes a pair of virtual network adaptors 1123A (labeled as local virtual network adaptors (LVNAs)), each of which is configured to decapsulate, the encapsulated traffic received from the VNIC 1122A included in the Hub VCN 1122. Further, as shown in FIG. 11, the pair of virtual network adaptors 1123A included in the third link-enabling virtual network 1123 of the first cloud environment 1135 transmit the decapsulated traffic to the customer 1 VCN 1131 (e.g., to a resource 1131A that is deployed in the customer 1 VCN 1131) via a dynamic routing gateway (DRG) attachment.). 
Regarding claim 10, Choi teaches the method of claim 9, wherein the second link-enabling virtual network in the first cloud environment receives the encapsulated traffic from a second data plane hub virtual network included in the first cloud environment, the second data plane hub virtual network being communicatively coupled to the first data plane hub virtual network via a private high-bandwidth interconnect that couples the second cloud environment to the first cloud environment ([0230] Further, the region of the second cloud environment 1105 includes a Hub VNET 1110 that is shared between the different customer virtual networks included in the second cloud environment. In other words, Hub VNET 1110 processes traffic for the plurality of customer tenancies included in the region of the second cloud environment 1105. Similarly, the region of the first cloud environment 1135 includes a Hub VNET 1122 that is shared between the different customer’s virtual cloud networks included in the first cloud environment i.e., Hub VNET 1122 processes traffic for the plurality of customer VCNs included in the region of the first cloud environment 1135. As shown in FIG. 11, the region of the second cloud environment 1105 is communicatively connected to the region of the first cloud environment 1135 by a high-bandwidth network interconnect 1115). 
Regarding claim 11, Choi teaches the method of claim 8, wherein each of the first link-enabling virtual network and the second link-enabling virtual network is assigned a unique classless inter-domain routing IP address ([0237] it is noted that although customer virtual networks in the first cloud environment and the second cloud environment may share an IP address space, each of the first link-enabling virtual network, the second link-enabling virtual network, the third link-enabling virtual network, and the HUB virtual networks included in the first cloud environment and the second cloud environment are assigned a unique classless inter-domain routing IP address space to avoid traffic collision.).

Regarding claim 12, Choi teaches the method of claim 1, wherein the multi-cloud infrastructure includes a first portion deployed in the first cloud environment and a second portion deployed in the second cloud environment, each of the first portion and the second portion of the multi-cloud infrastructure being controlled by a first cloud services provider of the first cloud environment that is different than a second cloud services provider of the second cloud environment ([0265] Turning to FIG. 16A, there is depicted a swim diagram illustrating interactions of the multi-cloud service control plane with different cloud environments, according to certain embodiments. The swim diagram of FIG. 16A depicts the interactions between the following entities: a client 1601, a multi-cloud platform control plane 1602, a second cloud environment IaaS API 1603, and an IaaS API corresponding to the first cloud environment 1604.).

Claims 13 and 15-19 are similar to claims 1 and 3-7, respectively, and are therefore rejected under the same rationale. Claim 20 is similar to claim 1 and is therefore rejected under the same rationale.

Response to Arguments

Applicant's arguments filed January 7, 2026 have been fully considered but they are not persuasive.
In response to Applicant’s argument that Choi does not describe a first link-enabling virtual network including a plurality of virtual network interface cards that is communicatively coupled via different VNICs to the respective claimed entities, the Patent Office respectfully disagrees and submits that Choi does teach this limitation. Choi teaches one or more link-enabling virtual networks being deployed in the first cloud environment and second cloud environment (abstract). This corresponds to the claimed “deploying, in the second environment, a first link-enabling virtual network…” Figure 7 of Choi teaches that the link-enabling virtual network is communicatively coupled to (a) the customer virtual network, (b) the data plane hub virtual network, and (c) the control plane hub virtual network. These networks are distinct virtual networks, not subnets of the same virtual cloud network. In order for one virtual network to connect to different virtual networks, it must include a plurality of VNICs. Therefore, even if Choi does not explicitly teach a first VNIC, a second VNIC, or a third VNIC, Choi implicitly teaches a plurality of VNICs, which can reasonably be interpreted as the first, second, and third VNICs as claimed.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALINA N BOUTAH whose telephone number is (571) 272-3908. The examiner can normally be reached M-F 7:00 AM - 3:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Umar Cheema, can be reached at (571) 270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

ALINA BOUTAH
Primary Examiner
Art Unit 2458

/ALINA A BOUTAH/
Primary Examiner, Art Unit 2458
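For readers less familiar with the claimed architecture, the claim-1 data path at issue (traffic from the second cloud's customer network is encapsulated using control-plane-supplied parameters, forwarded via the data plane hub, and decapsulated before delivery to the first cloud's customer network) can be sketched as follows. This is purely illustrative; every name, address, and parameter below is hypothetical and comes from neither the application nor Choi:

```python
# Hypothetical sketch of the claim-1 data path (all names/values invented).
def encapsulate(packet: bytes, config: dict) -> dict:
    """Link-enabling network: wrap customer traffic per control-plane config."""
    return {
        "outer_dst": config["data_plane_hub"],  # data plane hub address (hypothetical)
        "tunnel_id": config["tunnel_id"],       # encap/decap parameter (hypothetical)
        "payload": packet,
    }

def decapsulate(frame: dict, config: dict) -> bytes:
    """Far-side link-enabling network: unwrap and deliver to the customer network."""
    assert frame["tunnel_id"] == config["tunnel_id"]  # same control-plane config
    return frame["payload"]

config = {"data_plane_hub": "10.0.1.5", "tunnel_id": 42}  # hypothetical values
inner = b"app-data"
assert decapsulate(encapsulate(inner, config), config) == inner  # round-trip intact
```

The sketch only captures the round-trip invariant the claim relies on: both link-enabling networks act on configuration distributed by the control plane, so the inner packet emerges unchanged on the far side.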

Prosecution Timeline

Apr 24, 2024
Application Filed
Sep 24, 2025
Non-Final Rejection — §102
Jan 07, 2026
Response Filed
Feb 07, 2026
Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12534225: SATELLITE DISPENSING SYSTEM (granted Jan 27, 2026; 2y 5m to grant)
Patent 12441265: Mechanisms for moving a pod out of a vehicle (granted Oct 14, 2025; 2y 5m to grant)
Patent 12434638: VEHICLE INTERIOR PANEL WITH ONE OR MORE DAMPING PADS (granted Oct 07, 2025; 2y 5m to grant)
Patent 12372654: Adaptive Control of Ladar Systems Using Spatial Index of Prior Ladar Return Data (granted Jul 29, 2025; 2y 5m to grant)
Patent 12365469: AIRCRAFT PROPULSION SYSTEM WITH INTERMITTENT COMBUSTION ENGINE(S) (granted Jul 22, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 80% (+13.6%)
Median Time to Grant: 2y 5m
PTA Risk: Moderate
Based on 166 resolved cases by this examiner. Grant probability derived from career allow rate.
