Prosecution Insights
Last updated: April 19, 2026
Application No. 18/137,532

DYNAMIC SCALING OF CLOUD GATEWAYS CONNECTING CLOUD MACHINES TO ON-PREMISES MACHINES BASED ON STATISTICS OF THE CLOUD GATEWAYS

Final Rejection — §101, §103
Filed
Apr 21, 2023
Examiner
DASCOMB, JACOB D
Art Unit
2198
Tech Center
2100 — Computer Architecture & Software
Assignee
VMware, Inc.
OA Round
2 (Final)
Grant Probability: 86% (Favorable)
OA Rounds: 3-4
To Grant: 3y
With Interview: 99%

Examiner Intelligence

Grants 86% — above average
Career Allow Rate: 86% (379 granted / 440 resolved; +31.1% vs TC avg)
Strong interview lift: +20.5% (resolved cases with interview)
Typical timeline: 3y avg prosecution; 43 currently pending
Career history: 483 total applications across all art units

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 55.0% (+15.0% vs TC avg)
§102: 3.5% (-36.5% vs TC avg)
§112: 18.2% (-21.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 440 resolved cases
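Each "vs TC avg" delta can be checked against the Tech Center baseline it implies. A quick sketch, assuming delta = examiner rate − TC average (all values in percent, as reported above):

```python
# Statute-specific allow rates and deltas vs. the Tech Center average.
rates = {"§101": (11.8, -28.2), "§103": (55.0, +15.0),
         "§102": (3.5, -36.5), "§112": (18.2, -21.8)}

# Recover the implied TC baseline for each statute: rate - delta.
implied = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(implied)  # every statute implies the same 40.0% baseline
```

All four pairs recover the same 40.0% estimate, consistent with the single "black line" Tech Center average noted above.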

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claim(s) 1, 3-15, and 17-22 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 21 and 22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Specifically, claim 21 is directed to software per se. See MPEP 2106.03. The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter because the system of claim 21, under its broadest reasonable interpretation, may be entirely directed toward software. For example, the "computer system in a first network that is programmed to execute a method" may be comprised entirely by software programmed to execute a method. Claim 22 depends from claim 21; therefore, they are rejected for the same reason.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 3-7, 9-12, 14, 15, and 17-22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nandoori (US 10,721,097) and further in view of Towster (US 2009/0046583) and further in view of Raether (US 2017/0373938).

Regarding claim 1, Nandoori teaches: A method for deploying cloud gateways between a set of cloud machines in a first network and a set of on-premises machines in an external network, the method comprising: collecting a set of statistics for a first cloud gateway (col. 11:31-33, “The monitor component 120 can be configured to monitor one or more operating parameters of the first and/or second instance 114a and 114b of the VPN gateway 114”) used to connect the set of cloud machines (col. 8:45-48, “the on-premise gateway 105 is connected to the first instance 114a via the VPN connection 116 to transmit/receive network traffic from the virtual network 146 at the cloud computing system 110”) and the set of on-premises machines (col. 6:64-67, “the private network 102 can be an on-premise network that includes a local area network 107 interconnecting one or more endpoints such as servers 104 and client devices 103”); analyzing the set of statistics (col. 11:52-54, “The monitor component 120 can then forward the obtained operating parameters to the analysis component 122 for further processing”) to determine that a second cloud gateway is needed to connect the set of cloud machines and the set of on-premises machines (col. 11:63-66, “In response to determining that the total number of packets 111 exceeds the threshold, the analysis component 122 can indicate that the first instance 114a is overloaded” and col. 3:3-7, “In response to determining that the currently used instance of the VPN gateway is overloaded, the gateway scaler can be configured to create one or more new instances of VPN gateway and/or scale the data channels to use additional existing instances of the VPN gateway”);

Nandoori does not teach; however, Towster discloses: identifying a subset of the set of cloud machines (¶ 18, “The routing tables implemented by the example switches 120A-D and/or the example routers 125A-D and/or 130A-B associate each virtual circuit with a particular physical route through the packet-switched communication system”); and distributing a set of one or more forwarding rules to the subset of cloud machines to forward a set of data message flows from the subset of cloud machines to the set of on-premises machines through the second cloud gateway (¶ 20, “If the example circuit manager 135 determines that a particular virtual circuit should be assigned to a different route through the packet-switched network 115, the circuit manager 135: i) selects the different route (e.g., by selecting a route with a lower load and/or cost), and ii) updates the routing tables of the switches 120A-D and/or the routers 125A-D and/or 130A-B to begin routing the virtual circuit via the new route (e.g., updates the routing table of one or more of the switches 120A-D and/or the routers 125A-D and/or 130A-B)”).

It would have been obvious to a person having ordinary skill in the art, at the effective filing date of the invention, to have applied the known technique of identifying a subset of the set of cloud machines; and distributing a set of one or more forwarding rules to the subset of cloud machines to forward a set of data message flows from the subset of cloud machines to the set of on-premises machines through the second cloud gateway, as taught by Towster, in the same way to the method, as taught by Nandoori. Both inventions are in the field of routing data in a networked system, and combining them would have predictably resulted in a system configured to “control traffic in [a] network in real-time,” as indicated by Towster (¶ 12).

Nandoori and Towster do not teach; however, Raether discloses: predict that the first cloud gateway will exceed a load threshold at a future time (¶ 36, “The steps of method 400 may be described as phases of predictive auto-scaling” and “A trigger may be a specific date, a day each year, a network condition typified by resource usage, network delays, etc”) and that a second cloud gateway is needed to connect the set of cloud machines and the set of on-premises machines (¶ 70, “With a validated and tested rule set in its repository, adaptive engine 200 may now use the set of auto-scaling rules 500 to scale-up or scale-down one or more VNFs 321-325 in network 320” and ¶ 2, “a physical network architecture to deploy network functions, such as routers, switches, gateways, servers, etc”); and deploying the second cloud gateway in the first network to connect the set of cloud machines and the set of on-premises machines prior to the future time at which the first cloud gateway is predicted to exceed the load threshold (¶ 32, “A scale-up refers to deploying more resources (e.g., CPU, memory, hard disk, and network) for one or more VNFs, while scale-out refers to adding more nodes to the system”);

It would have been obvious to a person having ordinary skill in the art, at the effective filing date of the invention, to have applied the known technique of predicting that the first cloud gateway will exceed a load threshold at a future time and that a second cloud gateway is needed to connect the set of cloud machines and the set of on-premises machines; and deploying the second cloud gateway in the first network to connect the set of cloud machines and the set of on-premises machines prior to the future time at which the first cloud gateway is predicted to exceed the load threshold, as taught by Raether, in the same way to the method, as taught by Nandoori and Towster. Both inventions are in the field of scaling network resources, and combining them would have predictably resulted in a method that “predicts a future event in the network that will activate scaling of the VNF, and auto-scales the VNF for the network based on the set of auto-scaling rules before occurrence of the future event,” as indicated by Raether (abstract).

Regarding claim 3, Nandoori teaches: The method of claim 1, wherein the set of data message flows is a first set of data message flows (col. 8:60-64, “the load balancer 112 can identify the received packets 111 as belonging to one or more outer flows and forward the packets 111 of certain outer flows to a suitable corresponding destination, for instance, the first VPN instance 114a or the second VPN instance 114b”), and cloud machines in the set of cloud machines not in the identified subset of cloud machines continue to forward a second set of data message flows to the set of on-premises machines through the first cloud gateway (col. 12:60-62, “During creation of the new SA, the network traffic corresponding to the second inner flow 111b can still be processed by the first instance 114a”).

Regarding claim 4, Nandoori teaches: The method of claim 1, wherein the set of statistics comprises statistics specifying bandwidth use of the first cloud gateway by the set of cloud machines at different times (col. 2:40-42, “Example operating parameters can include a network bandwidth consumed”).

Regarding claim 5, Nandoori teaches: The method of claim 4, wherein the statistics specify bandwidth use of a particular uplink interface of the first cloud gateway by the set of cloud machines to connect to the set of on-premises machines (col. 2:45-50, “the gateway scalar can also be configured to identify one or more inner flows between pairs of computers from the on-premise network and VMs at the virtual network, a high bandwidth consuming by each inner flow, or a longevity of one or more of the inner flows”).

Regarding claim 6, Nandoori teaches: The method of claim 4, wherein analyzing the set of statistics comprises determining that a predicted future bandwidth use of the first cloud gateway by the set of cloud machines exceeds a particular threshold (col. 2:56-62, “when a processor load on the host exceeds 90% or other suitable values, the gateway scaler can indicate that the first instance is overloaded. In another example, when an instantaneous or averaged bandwidth consumed by the VPN tunnel exceeds another threshold, the gateway scaler can indicate that the first instance is overloaded”).

Regarding claim 7, Nandoori teaches: The method of claim 4, wherein the set of statistics is a first set of statistics, the method further comprising collecting a second set of statistics for the set of cloud machines (col. 11:31-33, “The monitor component 120 can be configured to monitor one or more operating parameters of the first and/or second instance 114a and 114b of the VPN gateway 114”), wherein analyzing the first set of statistics comprises analyzing the first and second set of statistics to determine that the second cloud gateway is needed (col. 12:14-19, “In accordance with embodiments of the disclosed technology, the control component 124 can be configured to create new data channels by, for example, establishing additional security logic groups for the inner flows in response to receiving the indication that the first instance 114a is overloaded”).

Regarding claim 9, Towster teaches: The method of claim 1, wherein collecting the set of statistics comprises retrieving the set of statistics from a data store (¶ 29, “The example performance database 230 of FIG. 2 stores current and/or historical data and/or information concerning bandwidth used by various applications, and/or the performance of communication paths and/or virtual circuits of one or more packet-switched networks”).

Regarding claim 10, Towster teaches: The method of claim 9 further comprising: before retrieving the set of statistics, iteratively collecting subsets of statistics for the first cloud gateway at specified time intervals (¶ 23, “The example SLA probes 140 aggregate the data collected over a time period (e.g., a fifteen minute time period), and provide the aggregated data to the example circuit manager 135”); and storing the collected subsets of statistics in the data store, wherein the set of statistics comprises different subsets of statistics (¶ 28, “The example probe interface 225 stores data and/or information received from the SLA probes 140 in a performance database 230”).

Regarding claim 11, Towster teaches: The method of claim 9, wherein the data store is a time-series database (¶ 23, “The example SLA probes 140 aggregate the data collected over a time period (e.g., a fifteen minute time period), and provide the aggregated data to the example circuit manager 135”).

Regarding claim 12, Nandoori teaches: The method of claim 1, wherein the first network is a cloud datacenter and the external network is an on-premises datacenter (col. 6:55-58, “As shown in FIG. 1, the computing framework 100 can include a private network 102 interconnected to a cloud computing system 110 via a public network 108” and col. 6:64-67, “the private network 102 can be an on-premise network that includes a local area network 107 interconnecting one or more endpoints such as servers 104 and client devices 103”).

Regarding claim 14, Nandoori teaches: The method of claim 1, wherein the cloud machines are virtual machines (VMs) operating on host computers in the first network (col. 5:4-6, “A “host” generally refers to a computing device configured to implement, for instance, one or more virtual machines or other suitable virtualized components”).

Claims 15 and 17-22 recite commensurate subject matter as claims 1, 3, 4, 6, 7, 12, and 14. Therefore, they are rejected for the same reasons.

Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nandoori, Towster, and Raether, as applied above, and further in view of Bozek (US 8,352,953).

Regarding claim 8, Nandoori and Towster do not teach; however, Bozek discloses: analyzing the second set of statistics to determine that the subset of cloud machines consumes more bandwidth of the first cloud gateway than other cloud machines in the set of cloud machines (col. 6:51-55, “the bandwidth required by VM1-VM5 totals 1150 Mbps. However, Physical server #2 with a similar number of virtual machines has extra bandwidth capacity of 450 Mbps (i.e., 1 Gbps-550 Mbps)”). It would have been obvious to a person having ordinary skill in the art, at the effective filing date of the invention, to have applied the known technique of analyzing the second set of statistics to determine that the subset of cloud machines consumes more bandwidth of the first cloud gateway than other cloud machines in the set of cloud machines, as taught by Bozek, in the same way to the identifying of the subset of cloud machines, as taught by Nandoori, Towster, and Raether. Both inventions are in the field of managing cloud resources, and combining them would have predictably resulted in “management of the network bandwidth in a virtual machine environment,” as indicated by Bozek (col. 1:8-9).

Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nandoori, Towster, and Raether, as applied above, and further in view of Hall (US 2021/0117175).

Regarding claim 13, Nandoori, Towster, and Raether do not teach; however, Hall discloses: the cloud datacenter and the on-premises datacenter are respectively first and second software defined datacenters (SDDCs) (¶ 18, “Each on-premise hyper-converged system is a complete network infrastructure with the necessary software components already installed and configured to support and operate one or more software-defined data centers (SDDCs) at any on-premise sites”). It would have been obvious to a person having ordinary skill in the art, at the effective filing date of the invention, to have applied the known technique of the cloud datacenter and the on-premises datacenter being respectively first and second software defined datacenters (SDDCs), as taught by Hall, in the same way to the identifying of the subset of cloud machines, as taught by Nandoori, Towster, and Raether. Both inventions are in the field of managing cloud resources, and combining them would have predictably resulted in “supplying and managing on-premise hyper-converged systems,” as indicated by Hall (¶ 18).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB D DASCOMB whose telephone number is (571) 272-9993. The examiner can normally be reached M-F 9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Vital, can be reached at (571) 272-4215. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACOB D DASCOMB/ Primary Examiner, Art Unit 2198
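For orientation, the claimed loop the §103 combination targets (collect gateway statistics, predict a future overload, deploy a second gateway before the threshold is crossed, and steer a subset of flows to it) can be sketched roughly as below. This is an illustrative reading of claim 1, not the applicant's or any cited reference's actual implementation; the linear predictor, names, and thresholds are all hypothetical.

```python
def predict_load(samples, horizon):
    """Naive linear extrapolation of per-interval bandwidth samples (Mbps)."""
    if len(samples) < 2:
        return samples[-1] if samples else 0.0
    slope = (samples[-1] - samples[0]) / (len(samples) - 1)
    return samples[-1] + slope * horizon

def scale_gateways(samples, threshold, horizon, vm_bandwidth):
    """If the first gateway is predicted to exceed its load threshold,
    'deploy' a second gateway and return forwarding rules steering the
    heaviest-consuming VMs to it; otherwise return None."""
    if predict_load(samples, horizon) <= threshold:
        return None  # the first gateway is predicted to suffice
    # Identify the subset of cloud machines consuming the most bandwidth.
    subset = sorted(vm_bandwidth, key=vm_bandwidth.get, reverse=True)
    subset = subset[: max(1, len(subset) // 2)]
    return {vm: "gateway-2" for vm in subset}

# Rising trend: 400 -> 600 Mbps extrapolates to 900 Mbps three intervals
# out, above the 800 Mbps threshold, so the heaviest VM is redirected.
rules = scale_gateways([400, 500, 600], threshold=800, horizon=3,
                       vm_bandwidth={"vm-a": 300, "vm-b": 200, "vm-c": 100})
print(rules)  # {'vm-a': 'gateway-2'}
```

The key distinction the claims draw is acting before the predicted overload, rather than reacting to an observed one as in Nandoori's monitor/analysis components.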

Prosecution Timeline

Apr 21, 2023
Application Filed
Sep 05, 2025
Non-Final Rejection — §101, §103
Dec 01, 2025
Interview Requested
Dec 08, 2025
Examiner Interview Summary
Dec 08, 2025
Applicant Interview (Telephonic)
Dec 09, 2025
Response Filed
Jan 13, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591462
INFERENCE SERVICE DEPLOYMENT METHOD, DEVICE, AND STORAGE MEDIUM
2y 5m to grant • Granted Mar 31, 2026
Patent 12585487
CANCELLATION OF A MIGRATION-BASED UPGRADE USING A NETWORK SWAP WORKFLOW
2y 5m to grant • Granted Mar 24, 2026
Patent 12578906
STORAGE VIRTUALIZATION DEVICE SUPPORTING VIRTUAL MACHINE, OPERATION METHOD THEREOF, AND OPERATION METHOD OF SYSTEM HAVING THE SAME
2y 5m to grant • Granted Mar 17, 2026
Patent 12578985
HYBRID VIRTUAL MACHINE ALLOCATION OPTIMIZATION SYSTEM AND METHOD
2y 5m to grant • Granted Mar 17, 2026
Patent 12566645
PREDICTED-TEMPERATURE-BASED VIRTUAL MACHINE MANAGEMENT SYSTEM
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 86%
With Interview: 99% (+20.5%)
Median Time to Grant: 3y
PTA Risk: Moderate
Based on 440 resolved cases by this examiner. Grant probability derived from career allow rate.
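The headline probability is a direct ratio of the career data cited above. A quick check, assuming grant probability is simply granted divided by resolved:

```python
granted, resolved = 379, 440  # examiner's career totals, from above

# Career allow rate, used as the baseline grant probability.
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 86.1%, shown rounded as 86%
```

How the +20.5% interview lift combines with this baseline to reach the 99% "with interview" figure is not stated, so no formula for it is assumed here.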
