DETAILED ACTION
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
Claim 14 recites “A computer-readable storage medium having computer-executable instructions for allocating computing and network capacity in a computing environment provided by a virtualized computing service provider, the computer-readable storage medium having computer-executable instructions stored thereupon which, when executed by one or more processors of a computing device, cause the computing device to perform operations…”
The Examiner is reading “computer-readable storage medium” to be non-transitory, because the specification states, “For purposes of the claims, the phrase ‘computer storage medium,’ ‘computer-readable storage medium’ and variations thereof, does not include waves, signals, and/or other transitory and/or intangible communication media, per se,” ¶ 0084.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 7, and 11-17 are rejected under 35 U.S.C. 103 as being unpatentable over Tamvada (US 20220386393 A1) in view of Gu (US 20150334696 A1) and Mada (US 12009974 B1).
Regarding Claim 1, Tamvada teaches a method for allocating computing and network capacity in a telecommunications network environment provided by a virtualized computing service provider (
Tamvada discloses, “As indicated above, users can connect to virtualized computing devices and other cloud provider network 203 resources and services, and configure and manage telecommunications networks such as 5G networks, using various interfaces 206 (e.g., APIs) via intermediate network(s) 212,” ¶ 0045,
“A provider substrate extension 224 (“PSE”) provides resources and services of the cloud provider network 203 within a separate network, such as a telecommunications network, thereby extending functionality of the cloud provider network 203 to new locations (e.g., for reasons related to latency in communications with customer devices, legal compliance, security, etc.). In some implementations, a PSE 224 can be configured to provide capacity for cloud-based workloads to run within the telecommunications network. In some implementations, a PSE 224 can be configured to provide the core and/or RAN functions of the telecommunications network, and may be configured with additional hardware (e.g., radio access hardware),” ¶ 0054.
The claimed “virtualized computing service provider” is mapped to the provider of the disclosed “virtualized computing devices” that provides the environment of the disclosed “telecommunications network”.), the method comprising: receiving a call model and a user service type (
Tamvada discloses, “These components may be based on the 3GPP specifications by following an application architecture in which control plane and user plane processing is separated (CUPS Architecture),” ¶ 0023,
“In some embodiments, messages (e.g., packets) sent over the cloud provider network 203 include a flag to indicate whether the traffic is control plane traffic or data plane traffic. In some embodiments, the payload of traffic may be inspected to determine its type (e.g., whether control or data plane). Other techniques for distinguishing traffic types are possible,” ¶ 0050,
“The data stored in the data store 415 includes, for example, one or more network plans 439, one or more cellular topologies 442, one or more spectrum assignments 445, device data 448, one or more RBN metrics 451…,” ¶ 0099,
“The network plan 439 is a specification of a radio-based network 103 to be deployed for a customer. For example, a network plan 439 may include premises locations or geographic areas to be covered, a number of cells, device identification information and permissions, a desired maximum network latency, a desired bandwidth or network throughput for one or more classes of devices, one or more quality of service parameters for applications or services, and/or other parameters that can be used to create a radio-based network 103,” ¶ 0100,
“The RBN metrics 451 include various metrics or statistics that indicate the performance or health of the radio-based network 103. Such RBN metrics 451 may include bandwidth metrics, dropped packet metrics, signal strength metrics, latency metrics, and so on. The RBN metrics 451 may be aggregated on a per-device basis, a per-cell basis, a per-customer basis, etc.,” ¶ 0104.
The claimed “call model” is mapped to the disclosed “RBN metrics 451”, which is a representation of the amount/size and types of the disclosed “traffic” of the cloud provider network 203, measurable via the messages / packets sent over the network over a period of time. It is aggregated and then received by a data store.
This is consistent with paragraph 16 of the present application’s specification, which states “The call model is generally a representation of user behavior at a given site, and models and mimics the traffic type and volume during a given period of time.”
The claimed “user service type” is mapped to the combination of the disclosed “CUPS Architecture” and a disclosed “network plan” consisting of parameters such as “a desired maximum network latency, a desired bandwidth or network throughput for one or more classes of devices”. Said “network plan” is received by a data store.
This is consistent with paragraph 18 of the present application’s specification, which states “the customer or user service type 115 includes customer specific requirements such as targeted number of sessions and throughput as well as the type of deployment (e.g., Control and User Plane Separation of EPC nodes (CUPS) / Integrated).”);
running (
Tamvada discloses, “For live migration, the disclosed techniques can dynamically determine an amount of memory state data to pre-copy (e.g., while the instance is still running on the source host) and to post-copy (e.g., after the instance begins running on the destination host), based for example on latency between the locations, network bandwidth/usage patterns, and/or on which memory pages are used most frequently by the instance. Further, a particular time at which the memory state data is transferred can be dynamically determined based on conditions of the network between the locations. This analysis may be performed by a migration management component in the region, or by a migration management component running locally in the source edge location,” ¶ 0061.
Here, processing and storage usage patterns (the disclosed “network bandwidth/usage patterns”) are used to measure network traffic via a migration management component.
Said migration management component is an optimization model because it migrates data between nodes of a network in order to optimize performance of the nodes.);
(
Tamvada discloses, “The network plan 439 is a specification of a radio-based network 103 to be deployed for a customer. For example, a network plan 439 may include premises locations or geographic areas to be covered, a number of cells, device identification information and permissions, a desired maximum network latency, a desired bandwidth or network throughput for one or more classes of devices…,” ¶ 0100.),
deploy[ing] a user plane and control plane to implement a network service based on the call model and user service type (
Tamvada discloses, “The described ‘elastic 5G’ service provides and manages all the hardware, software, and network functions, required to build a network, and can orchestrate network functions across different physical sites as described herein. In some embodiments the network functions may be developed and managed by the cloud service provider, however the described control plane can manage network functions across a range of providers so that customers can use a single set of APIs to call and manage their choice of network functions on cloud infrastructure,” ¶ 0021,
“The core network typically aggregates data traffic from end devices, authenticates subscribers and devices, applies personalized policies, and manages the mobility of the devices before routing the traffic to operator services or the Internet. A 5G Core for example can be decomposed into a number of microservice elements with control and user plane separation,” ¶ 0026,
“The UPF-U.sub.rs 320 correspond to a user plane component of a UPF 286 (FIG. 2B). … The UPF-U.sub.rs 320 then route or forward the processed network traffic to the network 121 or to other wireless devices 106 on the RAN 143,” ¶ 0087,
“UPF-C.sub.c 328, which is a control plane component of the UPF 286, operates in the core network 118,” ¶ 0089,
“The computing environment 403 as part of a cloud provider network offering utility computing services includes computing devices 418 and other types of computing devices 418. The computing devices 418 may correspond to different types of computing devices 418 and may have different computing architectures… The computing devices 418 may differ also in hardware resources available, such as local storage, graphics processing units (GPUs), machine learning extensions, and other characteristics,” ¶ 0095,
“The data stored in the data store 415 includes, for example, one or more network plans 439, one or more cellular topologies 442, one or more spectrum assignments 445, device data 448, one or more RBN metrics 451,” ¶ 0099,
“The network plan 439 is a specification of a radio-based network 103 to be deployed for a customer. For example, a network plan 439 may include premises locations or geographic areas to be covered, a number of cells, device identification information and permissions, a desired maximum network latency, a desired bandwidth or network throughput for one or more classes of devices, one or more quality of service parameters for applications or services, and/or other parameters that can be used to create a radio-based network 103,” ¶ 0100.
Here, the computing environment 403 can be used to deploy a user plane and control plane for implementing a network service based on the network plan 439, which is associated with the RBN metrics 451 (call model) via “bandwidth metrics, dropped packet metrics, signal strength metrics, latency metrics, and so on”, and desired network bandwidth and latency (part of user service type). The radio network access networks 143, the computing environment 403, and the UPF 286 (which contains a user plane component and a control plane component) are connected to each other in this environment.),
and sending instructions for allocating computing and network capacity in the telecommunications network environment (
Tamvada discloses, “The computing devices 418 may have various forms of allocated computing capacity 421, which may include virtual machine (VM) instances, containers, serverless functions, and so forth,” ¶ 0096, “The data stored in the data store 415 includes, for example … one or more network function workloads 466,” ¶ 0099, and “The network function workloads 466 correspond to machine images, containers, or functions to be launched in the allocated computing capacity 421 to perform one or more of the network functions,” ¶ 0108.
Here, the different types of computing devices are used to allocate computing and network capacity. Instructions for allocating the computing and network capacity are stored in the network function workloads, which are stored in the disclosed “data store 415”.).
Tamvada does not teach running an AI-based optimization model to quantify current network traffic in the telecommunications network environment based on processing and storage usage patterns using key performance indicators (KPIs),
calculating, using the quantified current network traffic by a sizing and capacity model of the AI-based optimization model, a number, types, and sizes of disk storage and processing resources based on estimated cost,
wherein the number, types, and sizes of disk storage and processing resources are usable to deploy a user plane and control plane to implement a network service based on the call model and user service type,
wherein the sizing and capacity model operates in a feedback cycle with a resource predictor to dynamically compute the number, types, and sizes of disk storage and processing resources,
and sending instructions for allocating, in accordance with the calculated number, types, and sizes of disk storage and processing resources, computing and network capacity in the telecommunications network environment.
However, Gu teaches calculating, using specification information of a virtual machine, (
Gu discloses, “Virtual machine description parameter list: each item in the description parameter list includes a requirement for creating a virtual machine… specification information of a virtual machine can be obtained, such as central processing unit (CPU) main frequency of the virtual machine, the number of CPUs, memory size, the number of disks, storage space of each disk, QoS level of disk access (used to select a disk storage type: solid state disk, directly-connected disk, external storage, and so on)… physical computing resource or virtual computing resource…,” ¶ 0201,
“network resource specification parameters: network name, subnet name, subnet CIDR, subnet IP address version number, subnet gateway IP address, whether to enable DHCP to allocate an IP address, DHCP IP address pool, list of external network IP addresses, subnet QoS parameters (including network bandwidth lower limit, network bandwidth upper limit, network jitter upper limit, network jitter lower limit, network delay upper limit, network delay lower limit, upper limit of network packet loss rate, and lower limit of network packet loss rate),” ¶ 0202,
“If there are sufficient computing resources and storage resources, the intelligent resource routing module checks whether there are network resource specification parameters… According to the computing resource, storage resource, and network resource requirements, whether the virtual machine needs to be created in a specific host cluster during allocation can be analyzed. If the data centers in a data center list meet the parameter requirements of computing resources, storage resources, and networks between data centers, the data list is a data center list that meets requirements. If any parameter of the computing resource, storage resource, or network resource specification parameters does not meet requirements, the data center list is not a data center list that meets requirements,” ¶ 0206,
“The foregoing analysis on computing resources, storage resources, and network resources is repeated until all data center lists in the set of data center lists are analyzed, and data center list sets that meet requirements are sorted in a descending order by a fulfillment degree to obtain a destination data center list set that meets resource provisioning requirements,” ¶ 0207.
The claimed “number” of disk storage and processing resources is mapped to the disclosed “number of disks” and “number of CPUs”.
The claimed “types” of disk storage and processing resources are mapped to the disclosed “disk storage type” and a resource type of either “physical computing resource or virtual computing resource”.
The claimed “sizes” of disk storage and processing resources are mapped to the disclosed “storage space of each disk” and “memory size”.
The claimed “estimated cost” is mapped to the disclosed “resource provisioning requirements”, which is the computing-resource expenditure/cost estimated to be needed to complete a task, as determined by the “analysis on computing resources, storage resources, and network resources”, and which is used to select a host that meets each and every condition. This is a cost because, if any parameter such as the subnet QoS parameters (e.g., network bandwidth upper limit) or the “number of CPUs, memory size, the number of disks, storage space of each disk” cannot be satisfied, then a data center cannot be selected.
After the combination of Tamvada with Gu, the “quantified current network traffic” from Tamvada replaces Gu’s “specification information of a virtual machine”; said “quantified current network traffic” is now used to calculate Gu’s parameters based on an estimated cost.),
wherein the number, types, and sizes of disk storage and processing resources are usable to deploy a user plane and control plane to implement a network service based on the call model and user service type (
Gu discloses, “The computing resource scheduler in data center 2 schedules computing resources: obtain virtual machine specifications according to a virtual machine specification identifier, including CPU main frequency of the virtual machine, the number of CPUs, memory size, the number of disks, and storage size of each disk; determine the resource conditions for creating a virtual machine, including QoS level of disk access, physical computing resource or virtual computing resource, CPU type (for physical computing resources, a CPU type of a physical machine is designated), and HyperVisor CPU type (if virtual computing resources are requested, a HyperVisor type is designated); and, select a host that meets all the conditions,” ¶ 0251.
Here, a host is selected using each of the storage and resource-related parameters. After the combination of Tamvada with Gu, Tamvada’s user plane and control plane are deployed in a similar manner using each of Gu’s storage and resource-related parameters.),
and sending instructions for allocating, in accordance with the calculated number, types, and sizes of disk storage and processing resources, computing and network capacity in the telecommunications network environment (
Gu discloses, “The computing resource scheduler in data center 2 schedules computing resources: obtain virtual machine specifications according to a virtual machine specification identifier, including CPU main frequency of the virtual machine, the number of CPUs, memory size, the number of disks, and storage size of each disk; determine the resource conditions for creating a virtual machine, including QoS level of disk access, physical computing resource or virtual computing resource, CPU type (for physical computing resources, a CPU type of a physical machine is designated), and HyperVisor CPU type (if virtual computing resources are requested, a HyperVisor type is designated); and, select a host that meets all the conditions,” ¶ 0251.
Here, a host is selected using each of the storage and resource-related parameters. After the combination of Tamvada with Gu, Tamvada’s computing and network capacity is allocated in a similar manner using each of Gu’s storage and resource-related parameters.).
Tamvada and Gu are both considered to be analogous to the claimed invention because they are in the same field of computer resource scheduling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tamvada to incorporate the teachings of Gu and provide calculating, using the quantified current network traffic, a number, types, and sizes of disk storage and processing resources based on estimated cost; wherein the number, types, and sizes of disk storage and processing resources are usable to deploy a user plane and control plane to implement a network service based on the call model and user service type; and sending instructions for allocating, in accordance with the calculated number, types, and sizes of disk storage and processing resources, computing and network capacity in the telecommunications network environment. Doing so would help increase the efficiency of the computer network (Gu discloses, “By implementing the method, the resource utilization of a data center is improved, administration, maintenance, and operation are simplified, and an occurrence probability of network connection fault or traffic congestion is reduced,” Abstract.).
Tamvada in view of Gu does not teach running an AI-based optimization model to quantify current network traffic in the telecommunications network environment based on processing and storage usage patterns using key performance indicators (KPIs),
calculating, using the quantified current network traffic by a sizing and capacity model of the AI-based optimization model, a number, types, and sizes of disk storage and processing resources based on estimated cost,
or wherein the sizing and capacity model operates in a feedback cycle with a resource predictor to dynamically compute the number, types, and sizes of disk storage and processing resources.
However, Mada teaches running an AI-based optimization model to quantify current network traffic in the telecommunications network environment based on processing and storage usage patterns using key performance indicators (KPIs) (
Mada discloses, “In some implementations, predictions of the machine learning model 132 include one or more key performance indicators (KPIs),” Col 6, Lines 38-40.
The claimed “AI-based optimization model” is mapped to the disclosed machine learning model.
After the combination of Tamvada in view of Gu, with Mada, the machine learning model from Mada is used to quantify network traffic as specified by Tamvada in view of Gu.),
calculating, using the quantified current network traffic by a sizing and capacity model of the AI-based optimization model, a number, types, and sizes of disk storage and processing resources based on estimated cost (
Mada discloses, “In some implementations, the machine learning model 132 generates one or more suggested resource adjustments. For example, the control unit 110 can provide a snapshot, such as the snapshot 130. The machine learning model 132 can generate a prediction that indicates a predicted resource adjustment, e.g., the T2 prediction 134. The predicted resource adjustment can be based on averting one or more negative KPIs, such as element failures, network delays, among others. Network adjustments can include starting or initiating computing resources, turning off or deleting resources, changing an allocation of processing tasks, changing an allocation of performance tasks to general or vice versa, among others,” Col 8, Lines 7-19.
The claimed “sizing and capacity model” is mapped to the aspect of the disclosed “machine learning model” that determines the storage and resource related parameters based on predicted costs.
After the combination of Tamvada in view of Gu, with Mada, the machine learning model from Mada is used to predict the number, types, and sizes of the disk storage and processing resources from Tamvada in view of Gu, in order to determine the predicted resource adjustment.),
and wherein the sizing and capacity model operates in a feedback cycle with a resource predictor to dynamically compute the number, types, and sizes of disk storage and processing resources (
Mada discloses, “The machine-learning model can be optimized, for example, using feedback based retraining, using subsequently obtained network information indicating, e.g., whether or not a predicted failure and corresponding adjustment successfully alleviated network disruptions,” Col 2, Lines 18-23.
The claimed “resource predictor” is mapped to the aspect of the disclosed “machine learning model” that determines the resources to add based on a predicted resource adjustment.
After the combination of Tamvada in view of Gu, with Mada, the machine learning model from Mada is used to predict the number, types, and sizes of the disk storage and processing resources from Tamvada in view of Gu, in order to determine the resources to add as part of the predicted resource adjustment.).
Tamvada in view of Gu, and Mada, are considered to be analogous to the claimed invention because they are in the same field of computer networks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tamvada in view of Gu to incorporate the teachings of Mada and provide running an AI-based optimization model to quantify current network traffic in the telecommunications network environment based on processing and storage usage patterns using key performance indicators (KPIs); calculating, using the quantified current network traffic by a sizing and capacity model of the AI-based optimization model, a number, types, and sizes of disk storage and processing resources based on estimated cost; and wherein the sizing and capacity model operates in a feedback cycle with a resource predictor to dynamically compute the number, types, and sizes of disk storage and processing resources. Doing so would help increase the efficiency of the computer network (Mada discloses, “By optimizing the computing resources of a communication network, the techniques described in this document, compared to traditional techniques, can help reduce power consumption in the sector, reduce data or voice request failures or blackouts, reduce latency, enable low-latency applications of the communication network, increase lifespan of computing elements, among others,” Col 2, Lines 35-41.).
Regarding Claim 2, Tamvada in view of Gu and Mada teaches the method of claim 1, wherein the AI-based optimization model includes a seasonality of the call model and user service type (
Tamvada discloses, “For live migration, the disclosed techniques can dynamically determine an amount of memory state data to pre-copy (e.g., while the instance is still running on the source host) and to post-copy (e.g., after the instance begins running on the destination host), based for example on latency between the locations, network bandwidth/usage patterns, and/or on which memory pages are used most frequently by the instance… This analysis may be performed by a migration management component in the region, or by a migration management component running locally in the source edge location,” ¶ 0061,
“The network plan 439 is a specification of a radio-based network 103 to be deployed for a customer. For example, a network plan 439 may include premises locations or geographic areas to be covered, a number of cells, device identification information and permissions, a desired maximum network latency, a desired bandwidth or network throughput for one or more classes of devices, one or more quality of service parameters for applications or services, and/or other parameters that can be used to create a radio-based network 103,” ¶ 0100,
“The RBN metrics 451 include various metrics or statistics that indicate the performance or health of the radio-based network 103. Such RBN metrics 451 may include bandwidth metrics, dropped packet metrics, signal strength metrics, latency metrics, and so on. The RBN metrics 451 may be aggregated on a per-device basis, a per-cell basis, a per-customer basis, etc.,” ¶ 0104.
The claimed “seasonality of the call model and user service type” is mapped to the disclosed “network bandwidth/usage patterns” of the disclosed “RBN metrics 451” (call model) and disclosed “network plan” (part of the user service type). In Tamvada, the disclosed “migration management component” (optimization model) uses the “network bandwidth/usage patterns” for optimization.
After the combination of Tamvada in view of Gu, with Mada, Mada’s AI-based optimization model now includes Tamvada in view of Gu’s seasonality (network bandwidth/usage patterns) of the call model and user service type.).
Regarding Claim 3, Tamvada in view of Gu and Mada teaches the method of claim 1, wherein the user service type includes one or more of targeted number of sessions, throughput, or type of deployment (
Tamvada discloses, “The network plan 439 is a specification of a radio-based network 103 to be deployed for a customer. For example, a network plan 439 may include premises locations or geographic areas to be covered, a number of cells, device identification information and permissions, a desired maximum network latency, a desired bandwidth or network throughput for one or more classes of devices, one or more quality of service parameters for applications or services, and/or other parameters that can be used to create a radio-based network 103,” ¶ 0100.
Here, a desired maximum network latency, desired bandwidth or network throughput, and quality of service parameters are targeted.).
Regarding Claim 7, Tamvada in view of Gu and Mada teaches the method of claim 1, wherein the processing resources include virtual machines (VMs) (
Tamvada discloses, “Hardware virtualization technology can enable multiple operating systems to run concurrently on a host computer, for example as virtual machines (VMs) on a compute server. A hypervisor, or virtual machine monitor (VMM), on a host allocates the host's hardware resources amongst various VMs on the host and monitors the execution of VMs,” ¶ 0048.).
Regarding Claim 11, Tamvada teaches a system for allocating computing and network capacity in a computing environment provided by a virtualized computing service provider (
Tamvada discloses, “As indicated above, users can connect to virtualized computing devices and other cloud provider network 203 resources and services, and configure and manage telecommunications networks such as 5G networks, using various interfaces 206 (e.g., APIs) via intermediate network(s) 212,” ¶ 0045,
“A provider substrate extension 224 (“PSE”) provides resources and services of the cloud provider network 203 within a separate network, such as a telecommunications network, thereby extending functionality of the cloud provider network 203 to new locations (e.g., for reasons related to latency in communications with customer devices, legal compliance, security, etc.). In some implementations, a PSE 224 can be configured to provide capacity for cloud-based workloads to run within the telecommunications network. In some implementations, a PSE 224 can be configured to provide the core and/or RAN functions of the telecommunications network, and may be configured with additional hardware (e.g., radio access hardware),” ¶ 0054.
The claimed “virtualized computing service provider” is mapped to the provider of the disclosed “virtualized computing devices” that provides the environment of the disclosed “telecommunications network”.),
the system comprising: one or more processors; and a memory in communication with the one or more processors, the memory having computer-readable instructions stored thereupon that, when executed by the one or more processors, cause the system to perform operations comprising (
Tamvada discloses, “With reference to FIG. 7, shown is a schematic block diagram of the computing environment 403 according to an embodiment of the present disclosure. The computing environment 403 includes one or more computing devices 700. Each computing device 700 includes at least one processor circuit, for example, having a processor 703 and a memory 706, both of which are coupled to a local interface 709,” ¶ 0126.):
receiving a call model and a user service type (
Tamvada discloses, “These components may be based on the 3GPP specifications by following an application architecture in which control plane and user plane processing is separated (CUPS Architecture),” ¶ 0023,
“In some embodiments, messages (e.g., packets) sent over the cloud provider network 203 include a flag to indicate whether the traffic is control plane traffic or data plane traffic. In some embodiments, the payload of traffic may be inspected to determine its type (e.g., whether control or data plane). Other techniques for distinguishing traffic types are possible,” ¶ 0050,
“The data stored in the data store 415 includes, for example, one or more network plans 439, one or more cellular topologies 442, one or more spectrum assignments 445, device data 448, one or more RBN metrics 451…,” ¶ 0099,
“The network plan 439 is a specification of a radio-based network 103 to be deployed for a customer. For example, a network plan 439 may include premises locations or geographic areas to be covered, a number of cells, device identification information and permissions, a desired maximum network latency, a desired bandwidth or network throughput for one or more classes of devices, one or more quality of service parameters for applications or services, and/or other parameters that can be used to create a radio-based network 103,” ¶ 0100,
“The RBN metrics 451 include various metrics or statistics that indicate the performance or health of the radio-based network 103. Such RBN metrics 451 may include bandwidth metrics, dropped packet metrics, signal strength metrics, latency metrics, and so on. The RBN metrics 451 may be aggregated on a per-device basis, a per-cell basis, a per-customer basis, etc.,” ¶ 0104.
The claimed “call model” is mapped to the disclosed “RBN metrics 451”, which are a representation of the amount/size and types of the disclosed “traffic” of the cloud provider network 203, measurable via the messages/packets sent over the network over a period of time. These metrics are aggregated and then received by a data store.
This is consistent with paragraph 16 of the present application’s specification, which states “The call model is generally a representation of user behavior at a given site, and models and mimics the traffic type and volume during a given period of time.”
The claimed “user service type” is mapped to the combination of the disclosed “CUPS Architecture” and a disclosed “network plan” consisting of parameters such as “a desired maximum network latency, a desired bandwidth or network throughput for one or more classes of devices”. Said “network plan” is received by a data store.
This is consistent with paragraph 18 of the present application’s specification, which states “the customer or user service type 115 includes customer specific requirements such as targeted number of sessions and throughput as well as the type of deployment (e.g., Control and User Plane Separation of EPC nodes (CUPS) / Integrated).”);
running (
Tamvada discloses, “For live migration, the disclosed techniques can dynamically determine an amount of memory state data to pre-copy (e.g., while the instance is still running on the source host) and to post-copy (e.g., after the instance begins running on the destination host), based for example on latency between the locations, network bandwidth/usage patterns, and/or on which memory pages are used most frequently by the instance. Further, a particular time at which the memory state data is transferred can be dynamically determined based on conditions of the network between the locations. This analysis may be performed by a migration management component in the region, or by a migration management component running locally in the source edge location,” ¶ 0061.
Here, processing and storage usage patterns (network bandwidth/usage patterns) are used to measure network traffic via a migration management component.
Said migration management component is an optimization model because it migrates data between nodes of a network in order to optimize performance of the nodes.);
using the quantified current network traffic to deploy a user plane and control plane to implement a network service based on the call model and a user service type (
Tamvada discloses, “The described ‘elastic 5G’ service provides and manages all the hardware, software, and network functions, required to build a network, and can orchestrate network functions across different physical sites as described herein. In some embodiments the network functions may be developed and managed by the cloud service provider, however the described control plane can manage network functions across a range of providers so that customers can use a single set of APIs to call and manage their choice of network functions on cloud infrastructure,” ¶ 0021,
“The core network typically aggregates data traffic from end devices, authenticates subscribers and devices, applies personalized policies, and manages the mobility of the devices before routing the traffic to operator services or the Internet. A 5G Core for example can be decomposed into a number of microservice elements with control and user plane separation,” ¶ 0026,
“The UPF-U.sub.rs 320 correspond to a user plane component of a UPF 286 (FIG. 2B). … The UPF-U.sub.rs 320 then route or forward the processed network traffic to the network 121 or to other wireless devices 106 on the RAN 143,” ¶ 0087,
“UPF-C.sub.c 328, which is a control plane component of the UPF 286, operates in the core network 118,” ¶ 0089,
“The computing environment 403 as part of a cloud provider network offering utility computing services includes computing devices 418 and other types of computing devices 418. The computing devices 418 may correspond to different types of computing devices 418 and may have different computing architectures… The computing devices 418 may differ also in hardware resources available, such as local storage, graphics processing units (GPUs), machine learning extensions, and other characteristics,” ¶ 0095,
“The data stored in the data store 415 includes, for example, one or more network plans 439, one or more cellular topologies 442, one or more spectrum assignments 445, device data 448, one or more RBN metrics 451,” ¶ 0099,
“The network plan 439 is a specification of a radio-based network 103 to be deployed for a customer. For example, a network plan 439 may include premises locations or geographic areas to be covered, a number of cells, device identification information and permissions, a desired maximum network latency, a desired bandwidth or network throughput for one or more classes of devices, one or more quality of service parameters for applications or services, and/or other parameters that can be used to create a radio-based network 103,” ¶ 0100.
Here, the computing environment 403 can be used to deploy a user plane and control plane for implementing a network service based on the network plan 439, which is associated with the RBN metrics 451 (call model) via “bandwidth metrics, dropped packet metrics, signal strength metrics, latency metrics, and so on”, and desired network bandwidth and latency (part of user service type). The radio network access networks 143, the computing environment 403, and the UPF 286 (which contains a user plane component and a control plane component) are connected to each other in this environment.),
and sending instructions for allocating (
Tamvada discloses, “The computing devices 418 may have various forms of allocated computing capacity 421, which may include virtual machine (VM) instances, containers, serverless functions, and so forth,” ¶ 0096, “The data stored in the data store 415 includes, for example … one or more network function workloads 466,” ¶ 0099, and “The network function workloads 466 correspond to machine images, containers, or functions to be launched in the allocated computing capacity 421 to perform one or more of the network functions,” ¶ 0108.
Here, the different types of computing devices are used to allocate computing and network capacity. Instructions for allocating the computing and network capacity are stored in the network function workloads, which are stored in the disclosed “data store 415”.).
Tamvada does not teach running an AI-based optimization model to quantify current network traffic in the telecommunications network environment based on processing and storage usage patterns using key performance indicators (KPIs),
using the quantified current network traffic to calculate, by a sizing and capacity model of the AI-based optimization model, a number, types, and sizes of disk storage and processing resources based on estimated cost to deploy a user plane and control plane to implement a network service based on the call model and a user service type,
and sending instructions for allocating, in accordance with the number, types, and sizes of disk storage and processing resources, computing and network capacity in the computing environment provided by the virtualized computing service provider.
However, Gu teaches using the quantified current network traffic to calculate a number, types, and sizes of disk storage and processing resources based on estimated cost to deploy a user plane and control plane to implement a network service based on the call model and a user service type (
Gu discloses, “Virtual machine description parameter list: each item in the description parameter list includes a requirement for creating a virtual machine… specification information of a virtual machine can be obtained, such as central processing unit (CPU) main frequency of the virtual machine, the number of CPUs, memory size, the number of disks, storage space of each disk, QoS level of disk access (used to select a disk storage type: solid state disk, directly-connected disk, external storage, and so on)… physical computing resource or virtual computing resource…,” ¶ 0201,
“network resource specification parameters: network name, subnet name, subnet CIDR, subnet IP address version number, subnet gateway IP address, whether to enable DHCP to allocate an IP address, DHCP IP address pool, list of external network IP addresses, subnet QoS parameters (including network bandwidth lower limit, network bandwidth upper limit, network jitter upper limit, network jitter lower limit, network delay upper limit, network delay lower limit, upper limit of network packet loss rate, and lower limit of network packet loss rate),” ¶ 0202,
“If there are sufficient computing resources and storage resources, the intelligent resource routing module checks whether there are network resource specification parameters… According to the computing resource, storage resource, and network resource requirements, whether the virtual machine needs to be created in a specific host cluster during allocation can be analyzed. If the data centers in a data center list meet the parameter requirements of computing resources, storage resources, and networks between data centers, the data list is a data center list that meets requirements. If any parameter of the computing resource, storage resource, or network resource specification parameters does not meet requirements, the data center list is not a data center list that meets requirements,” ¶ 0206,
“The foregoing analysis on computing resources, storage resources, and network resources is repeated until all data center lists in the set of data center lists are analyzed, and data center list sets that meet requirements are sorted in a descending order by a fulfillment degree to obtain a destination data center list set that meets resource provisioning requirements,” ¶ 0207.
The claimed “number” of disk storage and processing resources is mapped to the disclosed “number of disks” and “number of CPUs”.
The claimed “types” of disk storage and processing resources is mapped to the disclosed “disk storage type” and a resource type of either “physical computing resource or virtual computing resource”.
The claimed “sizes” of disk storage and processing resources is mapped to the disclosed “storage size of each disk” and “physical computing resource or virtual computing resource”.
The claimed “estimated cost” is mapped to the disclosed “resource provisioning requirements”, i.e., the computing resource expenditure/cost needed to complete a task, which is determined by “analysis on computing resources, storage resources, and network resources” and is used to select a host that meets each and every condition. This constitutes a cost because, if any parameter, such as the subnet QoS parameters (e.g., network bandwidth upper limit) or the “number of CPUs, memory size, the number of disks, storage space of each disk”, cannot be satisfied, then a data center cannot be selected.
After the combination of Tamvada with Gu, the “quantified current network traffic” from Tamvada replaces Gu’s “specification information of a virtual machine”; said “quantified current network traffic” is now used to calculate Gu’s parameters based on an estimated cost, in order to deploy a user plane and control plane as specified by Tamvada.),
and sending instructions for allocating, in accordance with the number, types, and sizes of disk storage and processing resources, computing and network capacity in the computing environment provided by the virtualized computing service provider (
Gu discloses, “The computing resource scheduler in data center 2 schedules computing resources: obtain virtual machine specifications according to a virtual machine specification identifier, including CPU main frequency of the virtual machine, the number of CPUs, memory size, the number of disks, and storage size of each disk; determine the resource conditions for creating a virtual machine, including QoS level of disk access, physical computing resource or virtual computing resource, CPU type (for physical computing resources, a CPU type of a physical machine is designated), and HyperVisor CPU type (if virtual computing resources are requested, a HyperVisor type is designated); and, select a host that meets all the conditions,” ¶ 0251.
Here, a host is selected using each of the storage and resource-related parameters. After the combination of Tamvada with Gu, Tamvada’s computing and network capacity is allocated in a similar manner using each of Gu’s storage and resource-related parameters.).
Tamvada and Gu are both considered to be analogous to the claimed invention because they are in the same field of computer resource scheduling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tamvada to incorporate the teachings of Gu and provide using the quantified current network traffic to calculate a number, types, and sizes of disk storage and processing resources based on estimated cost to deploy a user plane and control plane to implement a network service based on the call model and a user service type, and sending instructions for allocating, in accordance with the number, types, and sizes of disk storage and processing resources, computing and network capacity in the computing environment provided by the virtualized computing service provider. Doing so would help increase the efficiency of the computer network (Gu discloses, “By implementing the method, the resource utilization of a data center is improved, administration, maintenance, and operation are simplified, and an occurrence probability of network connection fault or traffic congestion is reduced,” Abstract.).
Tamvada in view of Gu does not teach running an AI-based optimization model to quantify current network traffic in the telecommunications network environment based on processing and storage usage patterns using key performance indicators (KPIs),
or using the quantified current network traffic to calculate, by a sizing and capacity model of the AI-based optimization model, a number, types, and sizes of disk storage and processing resources based on estimated cost to deploy a user plane and control plane to implement a network service based on the call model and a user service type.
However, Mada teaches running an AI-based optimization model to quantify current network traffic in the telecommunications network environment based on processing and storage usage patterns using key performance indicators (KPIs) (
Mada discloses, “In some implementations, predictions of the machine learning model 132 include one or more key performance indicators (KPIs),” Col 6, Lines 38-40.
The claimed “AI-based optimization model” is mapped to the disclosed machine learning model.
After the combination of Tamvada in view of Gu, with Mada, the machine learning model from Mada is used to quantify network traffic as specified by Tamvada in view of Gu.),
and using the quantified current network traffic to calculate, by a sizing and capacity model of the AI-based optimization model, a number, types, and sizes of disk storage and processing resources based on estimated cost to deploy a user plane and control plane to implement a network service based on the call model and a user service type (
Mada discloses, “In some implementations, the machine learning model 132 generates one or more suggested resource adjustments. For example, the control unit 110 can provide a snapshot, such as the snapshot 130. The machine learning model 132 can generate a prediction that indicates a predicted resource adjustment, e.g., the T2 prediction 134. The predicted resource adjustment can be based on averting one or more negative KPIs, such as element failures, network delays, among others. Network adjustments can include starting or initiating computing resources, turning off or deleting resources, changing an allocation of processing tasks, changing an allocation of performance tasks to general or vice versa, among others,” Col 8, Lines 7-19.
The claimed “sizing and capacity model” is mapped to the aspect of the disclosed “machine learning model” that determines the storage and resource related parameters based on predicted costs.
After the combination of Tamvada in view of Gu, with Mada, the machine learning model from Mada is used to predict the number, types, and sizes of the disk storage and processing resources from Tamvada in view of Gu, in order to determine the predicted resource adjustment.).
Tamvada in view of Gu, and Mada are both considered to be analogous to the claimed invention because they are in the same field of computer networks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tamvada in view of Gu to incorporate the teachings of Mada and provide running an AI-based optimization model to quantify current network traffic in the telecommunications network environment based on processing and storage usage patterns using key performance indicators (KPIs), and using the quantified current network traffic to calculate, by a sizing and capacity model of the AI-based optimization model, a number, types, and sizes of disk storage and processing resources based on estimated cost to deploy a user plane and control plane to implement a network service based on the call model and a user service type. Doing so would help increase the efficiency of the computer network (Mada discloses, “By optimizing the computing resources of a communication network, the techniques described in this document, compared to traditional techniques, can help reduce power consumption in the sector, reduce data or voice request failures or blackouts, reduce latency, enable low-latency applications of the communication network, increase lifespan of computing elements, among others,” Col 2, Lines 35-41.).
Claim 14 is a computer-readable storage medium claim corresponding to the system Claim 11. Therefore, Claim 14 is rejected for the same reasons set forth in the rejection of Claim 11.
Regarding Claim 12, Tamvada in view of Gu and Mada teaches the system of claim 11, wherein the sizing and capacity model works in a feedback cycle with a resource predictor to dynamically compute the number, types, and sizes of disk storage and processing resources (
Mada discloses, “The machine-learning model can be optimized, for example, using feedback based retraining, using subsequently obtained network information indicating, e.g., whether or not a predicted failure and corresponding adjustment successfully alleviated network disruptions,” Col 2, Lines 18-23.
The claimed “resource predictor” is mapped to the component of the disclosed “machine learning model” that determines the resources to add based on a predicted resource adjustment.
After the combination of Tamvada in view of Gu, with Mada, the machine learning model from Mada is used to predict the number, types, and sizes of the disk storage and processing resources from Tamvada in view of Gu, in order to determine the resources to add as part of the predicted resource adjustment.).
Tamvada in view of Gu, and Mada are both considered to be analogous to the claimed invention because they are in the same field of computer networks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tamvada in view of Gu to incorporate the teachings of Mada and provide wherein the sizing and capacity model operates in a feedback cycle with a resource predictor to dynamically compute the number, types, and sizes of disk storage and processing resources. Doing so would help increase the efficiency of the computer network (Mada discloses, “By optimizing the computing resources of a communication network, the techniques described in this document, compared to traditional techniques, can help reduce power consumption in the sector, reduce data or voice request failures or blackouts, reduce latency, enable low-latency applications of the communication network, increase lifespan of computing elements, among others,” Col 2, Lines 35-41.).
Claim 15 is a computer-readable storage medium claim corresponding to the system Claim 12. Therefore, Claim 15 is rejected for the same reasons set forth in the rejection of Claim 12.
Regarding Claim 13, Tamvada in view of Gu and Mada teaches the system of claim 11, wherein the AI-based optimization model includes a seasonality of the call model and user service type (
Tamvada discloses, “For live migration, the disclosed techniques can dynamically determine an amount of memory state data to pre-copy (e.g., while the instance is still running on the source host) and to post-copy (e.g., after the instance begins running on the destination host), based for example on latency between the locations, network bandwidth/usage patterns, and/or on which memory pages are used most frequently by the instance… This analysis may be performed by a migration management component in the region, or by a migration management component running locally in the source edge location,” ¶ 0061,
“The network plan 439 is a specification of a radio-based network 103 to be deployed for a customer. For example, a network plan 439 may include premises locations or geographic areas to be covered, a number of cells, device identification information and permissions, a desired maximum network latency, a desired bandwidth or network throughput for one or more classes of devices, one or more quality of service parameters for applications or services, and/or other parameters that can be used to create a radio-based network 103,” ¶ 0100,
“The RBN metrics 451 include various metrics or statistics that indicate the performance or health of the radio-based network 103. Such RBN metrics 451 may include bandwidth metrics, dropped packet metrics, signal strength metrics, latency metrics, and so on. The RBN metrics 451 may be aggregated on a per-device basis, a per-cell basis, a per-customer basis, etc.,” ¶ 0104.
The claimed “seasonality of the call model and user service type” is mapped to the disclosed “network bandwidth/usage patterns” of the disclosed “RBN metrics 451” (call model) and disclosed “network plan” (part of the user service type). In Tamvada, the disclosed “migration management component” (optimization model) uses the “network bandwidth/usage patterns” for optimization.
After the combination of Tamvada in view of Gu, with Mada, Mada’s AI-based optimization model now includes Tamvada in view of Gu’s seasonality (network bandwidth/usage patterns) of the call model and user service type.).
Claim 16 is a computer-readable storage medium claim corresponding to the system Claim 13. Therefore, Claim 16 is rejected for the same reasons set forth in the rejection of Claim 13.
Regarding Claim 17, Tamvada in view of Gu and Mada teaches the computer-readable storage medium of claim 14, wherein the user service type includes one or more of targeted number of sessions, throughput, or type of deployment (
Tamvada discloses, “The network plan 439 is a specification of a radio-based network 103 to be deployed for a customer. For example, a network plan 439 may include premises locations or geographic areas to be covered, a number of cells, device identification information and permissions, a desired maximum network latency, a desired bandwidth or network throughput for one or more classes of devices, one or more quality of service parameters for applications or services, and/or other parameters that can be used to create a radio-based network 103,” ¶ 0100.
Here, a desired maximum network latency, desired bandwidth or network throughput, and quality of service parameters are targeted.).
Claims 4-5 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Tamvada (US 20220386393 A1) in view of Gu (US 20150334696 A1), Mada (US 12009974 B1), and Sakai (US 20220027760 A1).
Regarding Claim 4, Tamvada in view of Gu and Mada teaches the method of claim 1. Tamvada in view of Gu and Mada does not teach wherein the sizing and capacity model uses multi-output regression.
However, Sakai teaches wherein the sizing and capacity model uses multi-output regression (
Sakai discloses, “Assume that as the predictor in operation, a predictor has been obtained that has learned a correspondence between input x and output y for each of seen tasks or seen classes, for example, by any statistical learning method or heuristics such as multi-output regression or deep learning,” ¶ 0025.
After the combination of Tamvada in view of Gu and Mada, with Sakai, the sizing and capacity model from Tamvada in view of Gu and Mada now uses multi-output regression as specified by Sakai.).
Tamvada in view of Gu and Mada, and Sakai are both considered to be analogous to the claimed invention because they are in the same field of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tamvada in view of Gu and Mada to incorporate the teachings of Sakai and provide wherein the sizing and capacity model uses multi-output regression. Doing so would provide more information in the outputs, enabling more efficient learning based on that output information (Sakai discloses, “Assume that as the predictor in operation, a predictor has been obtained that has learned a correspondence between input x and output y for each of seen tasks or seen classes, for example, by any statistical learning method or heuristics such as multi-output regression or deep learning,” ¶ 0025.).
Regarding Claim 18, Tamvada in view of Gu and Mada teaches the computer-readable storage medium of claim 14. Tamvada in view of Gu and Mada does not teach wherein the sizing and capacity model uses multi-output regression.
However, Sakai teaches wherein the sizing and capacity model uses multi-output regression (
Sakai discloses, “Assume that as the predictor in operation, a predictor has been obtained that has learned a correspondence between input x and output y for each of seen tasks or seen classes, for example, by any statistical learning method or heuristics such as multi-output regression or deep learning,” ¶ 0025.
After the combination of Tamvada in view of Gu and Mada, with Sakai, the sizing and capacity model from Tamvada in view of Gu and Mada now uses multi-output regression as specified by Sakai.).
Tamvada in view of Gu and Mada, and Sakai are both considered to be analogous to the claimed invention because they are in the same field of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tamvada in view of Gu and Mada to incorporate the teachings of Sakai and provide wherein the sizing and capacity model uses multi-output regression. Doing so would provide more information in the outputs, enabling more efficient learning based on that output information (Sakai discloses, “Assume that as the predictor in operation, a predictor has been obtained that has learned a correspondence between input x and output y for each of seen tasks or seen classes, for example, by any statistical learning method or heuristics such as multi-output regression or deep learning,” ¶ 0025.).
Regarding Claim 5, Tamvada in view of Gu and Mada teaches the method of claim 1. Tamvada in view of Gu and Mada does not teach wherein the resource predictor uses multi-class classification.
However, Sakai teaches wherein the resource predictor uses multi-class classification (
Sakai discloses, “The predictor h.sub.t in operation receives an input x. The predictor h.sub.t that implements real value prediction is a function that outputs a predicted value itself. The predictor h.sub.t that implements multi-class classification is a function that outputs a score (prediction score) that represents the degree to which the input x belongs to class t. The predictor that implements multi-class classification outputs a class with the highest score as the predicted class y,” ¶ 0024.).
Tamvada in view of Gu and Mada, and Sakai are both considered to be analogous to the claimed invention because they are in the same field of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tamvada in view of Gu and Mada to incorporate the teachings of Sakai and provide wherein the resource predictor uses multi-class classification. Doing so would allow for more flexibility in classifying data (Sakai discloses, “The predictor h.sub.t in operation receives an input x. The predictor h.sub.t that implements real value prediction is a function that outputs a predicted value itself. The predictor h.sub.t that implements multi-class classification is a function that outputs a score (prediction score) that represents the degree to which the input x belongs to class t. The predictor that implements multi-class classification outputs a class with the highest score as the predicted class y,” ¶ 0024.).
Regarding Claim 19, Tamvada in view of Gu and Mada teaches the computer-readable storage medium of claim 15. Tamvada in view of Gu and Mada does not teach wherein the resource predictor uses multi-class classification.
However, Sakai teaches wherein the resource predictor uses multi-class classification (
Sakai discloses, “The predictor h.sub.t in operation receives an input x. The predictor h.sub.t that implements real value prediction is a function that outputs a predicted value itself. The predictor h.sub.t that implements multi-class classification is a function that outputs a score (prediction score) that represents the degree to which the input x belongs to class t. The predictor that implements multi-class classification outputs a class with the highest score as the predicted class y,” ¶ 0024.).
Tamvada in view of Gu and Mada, and Sakai are both considered to be analogous to the claimed invention because they are in the same field of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tamvada in view of Gu and Mada to incorporate the teachings of Sakai and provide wherein the resource predictor uses multi-class classification. Doing so would allow for more flexibility in classifying data (Sakai discloses, “The predictor h.sub.t in operation receives an input x. The predictor h.sub.t that implements real value prediction is a function that outputs a predicted value itself. The predictor h.sub.t that implements multi-class classification is a function that outputs a score (prediction score) that represents the degree to which the input x belongs to class t. The predictor that implements multi-class classification outputs a class with the highest score as the predicted class y,” ¶ 0024.).
Claims 6 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Tamvada (US 20220386393 A1) in view of Gu (US 20150334696 A1), Mada (US 12009974 B1), Sakai (US 20220027760 A1), and Hushchyn (US 20180373564 A1).
Regarding Claim 6, Tamvada in view of Gu, Mada, and Sakai teaches the method of claim 5. Tamvada in view of Gu, Mada, and Sakai does not teach wherein the multi-class classification is one of support vector machines, Gaussian discriminant analysis, or convolutional neural networks.
However, Hushchyn teaches wherein the multi-class classification is one of support vector machines, Gaussian discriminant analysis, or convolutional neural networks (
Hushchyn discloses, “In some embodiments, the inventive ‘Scheduler’ module (102) may be configured/programmed to utilize a map-reduce paradigm on a variety of machine learning algorithms, including, but not limited to, linear and logistic regression, k-means, naive Bayes, SVM, PCA, Gaussian discriminant analysis, and artificial neural networks,” ¶ 0056.
After the combination of Tamvada in view of Gu, Mada, and Sakai with Hushchyn, Hushchyn’s Gaussian discriminant analysis is used for the multi-class classification.).
Tamvada in view of Gu, Mada, and Sakai, and Hushchyn, are considered analogous to the claimed invention because they are in the same field of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tamvada in view of Gu, Mada, and Sakai to incorporate the teachings of Hushchyn and provide wherein the multi-class classification is one of support vector machines, Gaussian discriminant analysis, or convolutional neural networks. Doing so would allow classification tasks to be performed more efficiently in certain scenarios (e.g., high efficiency with small datasets, the ability to model complex decision boundaries, and probabilistic predictions that measure uncertainty) (Hushchyn discloses, “In some embodiments, the inventive ‘Scheduler’ module (102) may be configured/programmed to utilize a map-reduce paradigm on a variety of machine learning algorithms, including, but not limited to, linear and logistic regression, k-means, naive Bayes, SVM, PCA, Gaussian discriminant analysis, and artificial neural networks. For example, the inventive ‘Scheduler’ module (102) may be configured/programmed to utilize the same allocation and rebalancing method in pairs of subtasks, while utilize algorithm-specific subtasks,” ¶ 0056.).
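For illustration only (this sketch is the Examiner's, not part of Hushchyn's disclosure; the training data and class labels are hypothetical), a minimal one-dimensional Gaussian discriminant analysis classifier, which fits a Gaussian per class and predicts the class with the highest log-likelihood, can be sketched as:

```python
import math

def fit_gda(samples_by_class):
    # Fit a one-dimensional Gaussian (mean, variance) per class;
    # equal class priors are assumed for simplicity.
    params = {}
    for label, xs in samples_by_class.items():
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        params[label] = (mu, var)
    return params

def gda_predict(x, params):
    # Predict the class whose Gaussian assigns x the highest log-likelihood.
    def log_lik(mu, var):
        return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
    return max(params, key=lambda label: log_lik(*params[label]))

# Hypothetical training data for two resource-demand classes.
params = fit_gda({"small": [1.0, 2.0, 1.0, 2.0], "large": [8.0, 9.0, 8.0, 9.0]})
print(gda_predict(8.7, params))  # large
```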
Regarding Claim 20, Tamvada in view of Gu, Mada, and Sakai teaches the computer-readable storage medium of claim 19. Tamvada in view of Gu, Mada, and Sakai does not teach wherein the multi-class classification is one of support vector machines, Gaussian discriminant analysis, or convolutional neural networks.
However, Hushchyn teaches wherein the multi-class classification is one of support vector machines, Gaussian discriminant analysis, or convolutional neural networks (
Hushchyn discloses, “In some embodiments, the inventive ‘Scheduler’ module (102) may be configured/programmed to utilize a map-reduce paradigm on a variety of machine learning algorithms, including, but not limited to, linear and logistic regression, k-means, naive Bayes, SVM, PCA, Gaussian discriminant analysis, and artificial neural networks,” ¶ 0056.
After the combination of Tamvada in view of Gu, Mada, and Sakai with Hushchyn, Hushchyn’s Gaussian discriminant analysis is used for the multi-class classification.).
Tamvada in view of Gu, Mada, and Sakai, and Hushchyn, are considered analogous to the claimed invention because they are in the same field of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tamvada in view of Gu, Mada, and Sakai to incorporate the teachings of Hushchyn and provide wherein the multi-class classification is one of support vector machines, Gaussian discriminant analysis, or convolutional neural networks. Doing so would allow classification tasks to be performed more efficiently in certain scenarios (e.g., high efficiency with small datasets, the ability to model complex decision boundaries, and probabilistic predictions that measure uncertainty) (Hushchyn discloses, “In some embodiments, the inventive ‘Scheduler’ module (102) may be configured/programmed to utilize a map-reduce paradigm on a variety of machine learning algorithms, including, but not limited to, linear and logistic regression, k-means, naive Bayes, SVM, PCA, Gaussian discriminant analysis, and artificial neural networks. For example, the inventive ‘Scheduler’ module (102) may be configured/programmed to utilize the same allocation and rebalancing method in pairs of subtasks, while utilize algorithm-specific subtasks,” ¶ 0056.).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Tamvada (US 20220386393 A1) in view of Gu (US 20150334696 A1), Mada (US 12009974 B1), and Zhao (US 20240020144 A1).
Regarding Claim 8, Tamvada in view of Gu and Mada teaches the method of claim 7. Tamvada in view of Gu and Mada does not teach wherein the VMs are optimized for processing and storage consumption based on a VM type comprising one or more of general purpose, compute optimized, or memory optimized.
However, Zhao teaches wherein the VMs are optimized for processing and storage consumption based on a VM type comprising one or more of general purpose, compute optimized, or memory optimized (
Zhao discloses, “Some cloud service providers provide options for different types of VM provisioning. For example, some cloud service providers provide general purpose, compute optimized, memory optimized, and accelerated computing provisioning,” ¶ 0045.).
Tamvada in view of Gu and Mada, and Zhao, are considered analogous to the claimed invention because they are in the same field of computer resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tamvada in view of Gu and Mada to incorporate the teachings of Zhao and provide wherein the VMs are optimized for processing and storage consumption based on a VM type comprising one or more of general purpose, compute optimized, or memory optimized. Doing so would provide users with a choice among different virtual machine types (Zhao discloses, “Some cloud service providers provide options for different types of VM provisioning. For example, some cloud service providers provide general purpose, compute optimized, memory optimized, and accelerated computing provisioning,” ¶ 0045.).
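For illustration only (this sketch is the Examiner's, not part of Zhao's disclosure; the thresholds and workload parameters are assumptions), a mapping from a workload profile to one of the VM types named in the claim can be sketched as:

```python
# Thresholds and parameter names below are assumptions, for illustration only.
def select_vm_type(cpu_utilization, memory_gb):
    # Map a workload profile to one of the VM types named in the claim:
    # general purpose, compute optimized, or memory optimized.
    if memory_gb > 64:
        return "memory optimized"
    if cpu_utilization > 0.75:
        return "compute optimized"
    return "general purpose"

print(select_vm_type(0.9, 8))    # compute optimized
print(select_vm_type(0.2, 128))  # memory optimized
```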
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Tamvada (US 20220386393 A1) in view of Gu (US 20150334696 A1), Mada (US 12009974 B1) and Kwan (US 12112287 B1).
Regarding Claim 9, Tamvada in view of Gu and Mada teaches the method of claim 1. Tamvada in view of Gu and Mada does not teach wherein an automated total cost optimizer (TCO) module receives an output from the resource predictor to determine the estimated cost.
However, Kwan teaches wherein an automated total cost optimizer (TCO) module receives an output from the resource predictor to determine the estimated cost (
Kwan discloses, “At 212, the estimation API 114 may consume the endpoint, e.g., the API uses the output from the first ML model, to provide an estimation or prediction of ‘soft costs,’ e.g., the needed resources of the service provider network 100, needed infrastructure of the service provider network 100, latency caused by testing, etc., and provide a prediction for needed resources and infrastructure, along with associated costs,” Col 12, Lines 31-38.
The claimed “an automated total cost optimizer (TCO) module” is mapped to the disclosed “estimation API 114”, which uses an output from a machine learning model to determine an estimated cost.).
Tamvada in view of Gu and Mada, and Kwan, are considered analogous to the claimed invention because they are in the same field of computer resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tamvada in view of Gu and Mada to incorporate the teachings of Kwan and provide wherein an automated total cost optimizer (TCO) module receives an output from the resource predictor to determine the estimated cost. Doing so would allow the estimated cost to be predicted automatically, without manual intervention (Kwan discloses, “At 212, the estimation API 114 may consume the endpoint, e.g., the API uses the output from the first ML model, to provide an estimation or prediction of ‘soft costs,’ e.g., the needed resources of the service provider network 100, needed infrastructure of the service provider network 100, latency caused by testing, etc., and provide a prediction for needed resources and infrastructure, along with associated costs,” Col 12, Lines 31-38.).
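For illustration only (this sketch is the Examiner's, not part of Kwan's disclosure; the unit prices and resource names are assumptions), a cost-estimation step that consumes a resource predictor's output to determine an estimated cost can be sketched as:

```python
# Unit prices and resource names below are assumptions, for illustration only.
UNIT_COSTS = {"vcpus": 0.04, "memory_gb": 0.005, "storage_gb": 0.0001}  # $ per hour

def estimate_cost(predicted_resources, hours):
    # Consume the resource predictor's output (resource -> quantity)
    # and return the estimated cost over the given duration.
    return sum(UNIT_COSTS[r] * qty for r, qty in predicted_resources.items()) * hours

# Hypothetical predictor output for one workload.
prediction = {"vcpus": 8, "memory_gb": 32, "storage_gb": 500}
print(round(estimate_cost(prediction, 24), 2))  # 12.72
```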
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Tamvada (US 20220386393 A1) in view of Gu (US 20150334696 A1), Mada (US 12009974 B1), Kwan (US 12112287 B1), and Mathukumar (US 20220318124 A1).
Regarding Claim 10, Tamvada in view of Gu, Mada, and Kwan teaches the method of claim 9. Tamvada in view of Gu, Mada, and Kwan does not teach wherein the estimated cost is one of a fixed cost or a seasonal cost.
However, Mathukumar teaches wherein the estimated cost is one of a fixed cost or a seasonal cost (
Mathukumar discloses, “Further, in one embodiment, the cost prediction subsystem 204 may analyse a growth associated with the one or more users to predict a future cost of the one or more cloud computing assets 108A-N associated with the one or more new users. The cost prediction subsystem 204 primarily uses date and time of the year to predict the future cost. Hence, any weekly or monthly patterns are predicted along with seasonal spikes,” ¶ 0042.).
Tamvada in view of Gu, Mada, and Kwan, and Mathukumar, are considered analogous to the claimed invention because they are in the same field of computer resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tamvada in view of Gu, Mada, and Kwan to incorporate the teachings of Mathukumar and provide wherein the estimated cost is one of a fixed cost or a seasonal cost. Doing so would allow for more accurate prediction by taking seasonal variations into account (Mathukumar discloses, “Further, in one embodiment, the cost prediction subsystem 204 may analyse a growth associated with the one or more users to predict a future cost of the one or more cloud computing assets 108A-N associated with the one or more new users. The cost prediction subsystem 204 primarily uses date and time of the year to predict the future cost. Hence, any weekly or monthly patterns are predicted along with seasonal spikes,” ¶ 0042.).
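For illustration only (this sketch is the Examiner's, not part of Mathukumar's disclosure; the baseline cost and seasonal factors are hypothetical), a seasonal cost that scales a fixed baseline by a factor derived from the date, as in the quoted passage's use of date and time of year, can be sketched as:

```python
from datetime import date

# Baseline and factors below are hypothetical, for illustration only.
BASE_COST = 100.0                     # assumed fixed monthly baseline
SEASONAL_FACTOR = {11: 1.5, 12: 1.8}  # hypothetical holiday-season spikes

def seasonal_cost(d):
    # Scale the fixed baseline by a month-dependent seasonal factor;
    # months without a listed spike fall back to the fixed cost.
    return BASE_COST * SEASONAL_FACTOR.get(d.month, 1.0)

print(seasonal_cost(date(2024, 12, 15)))
print(seasonal_cost(date(2024, 6, 15)))
```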
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Nanda et al. (US 20240031863 A1): Dynamic Traffic Control
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW SUN whose telephone number is (571)272-6735. The examiner can normally be reached Monday-Friday 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW NMN SUN/Examiner, Art Unit 2195
/Aimee Li/Supervisory Patent Examiner, Art Unit 2195