Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-25 are currently pending for examination.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5, 7, 9-12, 14-16, 19-20, and 22-25 are rejected under 35 U.S.C. 103 as being unpatentable over Kairali (US 20230217343 A1) in view of Alagna (US 20220342707 A1) in further view of Hockey (US 20230353392 A1).
As per claim 1, Kairali discloses:
A method for deploying network management services for a plurality of tenants, the method comprising: at a multi-tenant service executing in a container cluster ("The present disclosure relates generally to the field of microservice architecture, and more specifically to service meshes, communication between microservices and intelligent compression of data routed through microservice chains by the service mesh.", 0001; "Administrators of the service mesh 211 may be able to obtain an overview of applications 203 running on the service mesh 211, including a view of applications on each cluster, create or modify computing resources of the service mesh 211; deploy instances 213a-213n of services 215 which may be instantiated as part of a pod, container or cluster; scale service mesh 211 deployments; instances 213 of service 215; restart pods or containers and/or deploy new applications or services 215.", 0074; "Embodiments of the service mesh control plane 205 may organize instances 213 (such as one or more pods, containers or clusters), services 215, and/or proxies 217 into one or more networks or namespaces. The service mesh control plane 205 may enroll a set of namespaces to a service mesh 211 and upon enrolling a namespace, the service mesh control plane 205 may enable monitoring of resources within the namespace, including the monitoring of any applications deployed as pods, services 215 or other types of instances 213, and traffic policies", 0078; Examiner Note: the set of namespaces equates to a plurality of tenants)
Kairali discloses a method for deploying network management services for a plurality of tenants, but does not explicitly disclose the cluster being implemented in a public cloud or the services managing a group of datacenters.
However, Alagna discloses:
a container cluster implemented in a public cloud (“It is further to be understood that the foregoing architecture may be implemented in a public cloud as well, where data centers similar to data centers 210a-c are owned and operated by a public cloud vendor. In such an implementation, hosts (e.g., 121a-n) may be virtual machines and data centers 210a-c may be provided with or without a container environment, similar to container platform 220, to the organization as Infrastructure-as-a-Service, Platform-as-a-Service, or Software-as-a-Service.”, 0054)
for a first tenant, deploying a first set of network management services in the container cluster for managing a first group of datacenters of the first tenant (“As described further below, in embodiments, a system is provided including a cloud having multiple hosts that are part of a stretch cluster spanning multiple data centers. Each host of a first subset of the hosts may be operable to run multiple instances of a component of a Security Information and Event Management (SIEM) application, such as an indexer, within respective containers.", 0019; “To achieve such performance increases, various embodiments may, for example and among other things, optimize the configuration of processing, memory, and storage resources of a host (i.e., a server) so that multiple instances of containerized SIEM application components may be packed on the host and run efficiently and performantly. Additionally, exemplary systems described here may run ingress gateways (implementing service mesh functionality) on dedicated hosts, which may beneficially provide highly available traffic routing even in the presence of large amounts of traffic.”, 0021; Examiner Note: the services of the SIEM application equate to a first set of network management services, and the first subset of hosts equates to a first tenant)
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Kairali with the container cluster of Alagna in order to provide the benefits of a service mesh, such as highly available traffic routing even in the presence of large amounts of traffic, to a cluster of datacenters managed by a network management service, thereby improving the communication speed of the system (Alagna, [0021]).
Kairali in view of Alagna discloses the above limitations of claim 1, but does not explicitly disclose deploying a second set of network management services for a second tenant.
However, Hockey discloses:
for a second tenant, deploying a second set of network management services in the container cluster for managing a second group of datacenters of the second tenant. (“FIG. 2 shows a first cluster 206 and a second cluster 218. The first cluster 206 has a control plane 208 and a plurality of units 214 together forming a first service mesh. Each of the units 214 has a proxy and a client or server. At least one of the units 214 comprises a client 210 and a proxy 212. The second cluster 218 also has a control plane 216 and a plurality of units 224 together forming a second service mesh. Each of the units 224 has a proxy 220 and a client or server 222. The first and second service meshes are independent of one another since they do not share information except for a root certificate 200 as now explained.”, 0031; Examiner Note: the second cluster equates to a second tenant, and the second service mesh equates to a second set of network management services)
The combination of Kairali in view of Alagna in further view of Hockey would provide a system capable of deploying a second set of network management services for managing a second group of datacenters of a second tenant. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Kairali in view of the container cluster of Alagna with the second set of network management services of Hockey in order to provide the improvements to the functioning of the underlying communication network, such as increased security, to the network management service clusters (Hockey, [0043]).
As per claim 2, Kairali in view of Alagna in further view of Hockey fully discloses the limitations of claim 1.
Furthermore, Kairali discloses:
each of the network management services of the first and second sets of network management services is deployed in a separate namespace of the container cluster. ("Embodiments of the service mesh control plane 205 may organize instances 213 (such as one or more pods, containers or clusters), services 215, and/or proxies 217 into one or more networks or namespaces.", 0078)
As per claim 3, Kairali in view of Alagna in further view of Hockey fully discloses the limitations of claim 2.
Furthermore, Kairali discloses:
each of the network management services of the first and second sets of network management services is implemented as a respective plurality of microservices deployed in a respective namespace. (“In currently available microservice architectures, microservices can communicate through a service mesh comprising a plurality of microservices conducting service-to-service communication.”, 0023; “In some situations, a Friendly Neighbor Compression Protocol may be enabled by the service mesh and/or service mesh control plane. Friendly Neighbor Compression protocols route data through trusted services of the service mesh within the same network and namespace as a microservice chain being invoked. These trusted services may be microservices and/or proxies having the same, similar and/or equivalent standards and/or security requirements as the services within the microservice chain requested to receive the data being sent.”, 0025)
As per claim 5, Kairali in view of Alagna in further view of Hockey fully discloses the limitations of claim 3.
Furthermore, Hockey discloses:
the container cluster enforces rules allowing the microservices of different services in the first set of network management services to communicate with each other but not with any microservices of the second set of network management services. (“The service mesh at the first cluster is used to ensure 300 traffic within the first cluster is communicated within the first cluster using a secure communications protocol with mutual authentication accomplished using a root certificate. The service mesh achieves this by enforcing all traffic entering a unit to be processed by the proxy of the unit. The control plane of the service mesh configures the proxies so that the proxies process the traffic they receive according to rules. Thus, the proxies encrypt the traffic, or block traffic which is not encrypted already.", 0046)
The combination of Kairali in view of Alagna in further view of Hockey would provide a system capable of enforcing rules allowing the microservices of a first NMS to communicate with each other, but not with the microservices of a second NMS.
As per claim 7, Kairali in view of Alagna in further view of Hockey fully discloses the limitations of claim 1.
Furthermore, Hockey discloses:
the first and second sets of network management services have at least one type of network management service in common. ("the first cluster provides the service in the first cluster, in addition to the second cluster providing the service, and wherein the service is hidden from the first service mesh such that a control plane of the first service mesh does not configure the proxy with potentially conflicting rules.", 0096)
As per claim 9, Kairali in view of Alagna in further view of Hockey fully discloses the limitations of claim 1.
Furthermore, Hockey discloses:
deploying a set of multi-tenant services in the container cluster for managing registration of the first and second groups of datacenters with the network management services ("FIG. 2 is a schematic diagram of two clusters of the communications network of FIG. 1. FIG. 2 shows a first cluster 206 and a second cluster 218. The first cluster 206 has a control plane 208 and a plurality of units 214 together forming a first service mesh. Each of the units 214 has a proxy and a client or server. At least one of the units 214 comprises a client 210 and a proxy 212. The second cluster 218 also has a control plane 216 and a plurality of units 224 together forming a second service mesh. Each of the units 224 has a proxy 220 and a client or server 222. The first and second service meshes are independent of one another since they do not share information except for a root certificate 200 as now explained.", 0031)
The combination of Kairali in view of Alagna in further view of Hockey would provide a system capable of deploying a set of multi-tenant services in the container cluster for managing registration of the first and second groups of datacenters with the network management services (see Alagna, [0059]: “For example, in the context of Kubernetes® the CSI drivers may be deployed on Kubernetes® by registering them via the kubelet plugin registration mechanism.”).
As per claim 10, Kairali in view of Alagna in further view of Hockey fully discloses the limitations of claim 1.
Furthermore, Kairali discloses:
the container cluster is a Kubernetes cluster. (“The services 215 of the service mesh may run on infrastructure via a scheduling system (e.g., Kubernetes®), and the workload scheduler may be responsible for bootstrapping a service 215 along with a sidecar or proxy 217a-217n (referred to generally herein as proxy 217).", 0077)
As per claim 11, Kairali in view of Alagna in further view of Hockey fully discloses the limitations of claim 1.
Furthermore, Alagna discloses:
a set of ingress services of the container cluster receives a plurality of user requests for the network management services and routes each of the user requests to a particular network management service to which the user request is directed. ("Ingress gateways 233a-b may be responsible for guarding and controlling access to cluster 230 from traffic that originates outside of cluster 230 from external source 215. External source 215 may be a SIEM user issuing a search request via client 140a-x or monitored infrastructure (e.g., infrastructure 110a-x) providing event data (e.g., event data 111). With respect to monitored infrastructure, the event data may be provided directly by the monitored infrastructure or indirectly by an intermediate entity (e.g., a forwarder that may monitor log data and forward as appropriate). When traffic is accepted by one of ingress gateways 233a-b from external source 215, it may further handle, among other things, routing and balancing of the ingress request to appropriate SIEM application component instances (e.g., indexer applications 237a-b or search applications 235a-b) within the cluster 230", 0044)
As per claim 12, Kairali in view of Alagna in further view of Hockey fully discloses the limitations of claim 11.
Furthermore, Alagna discloses:
the set of ingress services also receives a plurality of messages from the datacenters of the first and second groups of datacenters and routes each of the messages to a particular network management service to which the message is directed. ("Ingress gateways 233a-b may be responsible for guarding and controlling access to cluster 230 from traffic that originates outside of cluster 230 from external source 215. External source 215 may be a SIEM user issuing a search request via client 140a-x or monitored infrastructure (e.g., infrastructure 110a-x) providing event data (e.g., event data 111). With respect to monitored infrastructure, the event data may be provided directly by the monitored infrastructure or indirectly by an intermediate entity (e.g., a forwarder that may monitor log data and forward as appropriate). When traffic is accepted by one of ingress gateways 233a-b from external source 215, it may further handle, among other things, routing and balancing of the ingress request to appropriate SIEM application component instances (e.g., indexer applications 237a-b or search applications 235a-b) within the cluster 230", 0044)
As per claim 14, Kairali in view of Alagna in further view of Hockey fully discloses the limitations of claim 12.
Furthermore, Hockey discloses:
the set of ingress services ensures that the user requests and messages are authenticated for the network management services to which the user requests and messages are directed ("A client in the first cluster originates traffic to a second cluster for processing, the second cluster having access to the root certificate. Using the first service mesh, routing the traffic to the second cluster is done using a secure communications protocol with mutual authentication. Mutual authentication is carried out between the first cluster and the second cluster using certificate chains having the root certificate; and in response to the mutual authentication being successful, application data is routed to the second cluster using the secure communications protocol such that the application data may be processed at the second cluster to provide the service.", 0006)
As per claim 15, Kairali in view of Alagna in further view of Hockey fully discloses the limitations of claim 1.
Furthermore, Alagna discloses:
the container cluster spans a plurality of geographic locations of the public cloud (“Cluster 230 may be a stretch cluster spanning data centers 210a-n in which the multiple hosts are geographically separated and distributed across data centers 210a-n. In the context of the present example, data centers 210a-n may include additional equipment, infrastructure and hosts (e.g., separate and apart from hosts 121a-n of cluster 230) to support container platform 220 and object stores (e.g., object stores 240a-b).", 0037)
As per claim 16, Kairali in view of Alagna in further view of Hockey fully discloses the limitations of claim 15.
Furthermore, Alagna discloses:
for a particular one of the network management services, each of a plurality of microservices is replicated across each of the geographic locations of the public cloud (“wherein the plurality of hosts is part of a stretch cluster spanning a plurality of data centers, wherein each host of a first subset of the plurality of hosts runs a plurality of containerized instances of a component of a Security Information and Event Management (SIEM) application within respective containers", claim 1; Examiner Note: a component of a SIEM application equates to a microservice of a network management service)
As per claim 19, Kairali discloses:
A non-transitory machine-readable medium storing a multi-tenant service which when executed by at least one processing unit deploys network management services for a plurality of tenants, the multi-tenant service executing in a container cluster (“Accordingly, the following detailed description of the embodiments of at least one of a method, apparatus, non-transitory computer readable medium and system, as represented in the attached Figures, is not intended to limit the scope of the application as claimed but is merely representative of selected embodiments.”, 0070; "The present disclosure relates generally to the field of microservice architecture, and more specifically to service meshes, communication between microservices and intelligent compression of data routed through microservice chains by the service mesh.", 0001; "Administrators of the service mesh 211 may be able to obtain an overview of applications 203 running on the service mesh 211, including a view of applications on each cluster, create or modify computing resources of the service mesh 211; deploy instances 213a-213n of services 215 which may be instantiated as part of a pod, container or cluster; scale service mesh 211 deployments; instances 213 of service 215; restart pods or containers and/or deploy new applications or services 215.", 0074; "Embodiments of the service mesh control plane 205 may organize instances 213 (such as one or more pods, containers or clusters), services 215, and/or proxies 217 into one or more networks or namespaces. The service mesh control plane 205 may enroll a set of namespaces to a service mesh 211 and upon enrolling a namespace, the service mesh control plane 205 may enable monitoring of resources within the namespace, including the monitoring of any applications deployed as pods, services 215 or other types of instances 213, and traffic policies", 0078; Examiner Note: the set of namespaces equates to a plurality of tenants)
Kairali discloses a non-transitory machine-readable medium storing a multi-tenant service that deploys network management services for a plurality of tenants, but does not explicitly disclose the cluster being implemented in a public cloud or the services managing a group of datacenters.
However, Alagna discloses:
a container cluster implemented in a public cloud (“It is further to be understood that the foregoing architecture may be implemented in a public cloud as well, where data centers similar to data centers 210a-c are owned and operated by a public cloud vendor. In such an implementation, hosts (e.g., 121a-n) may be virtual machines and data centers 210a-c may be provided with or without a container environment, similar to container platform 220, to the organization as Infrastructure-as-a-Service, Platform-as-a-Service, or Software-as-a-Service.”, 0054)
for a first tenant, deploying a first set of network management services in the container cluster for managing a first group of datacenters of the first tenant (“As described further below, in embodiments, a system is provided including a cloud having multiple hosts that are part of a stretch cluster spanning multiple data centers. Each host of a first subset of the hosts may be operable to run multiple instances of a component of a Security Information and Event Management (SIEM) application, such as an indexer, within respective containers.", 0019; “To achieve such performance increases, various embodiments may, for example and among other things, optimize the configuration of processing, memory, and storage resources of a host (i.e., a server) so that multiple instances of containerized SIEM application components may be packed on the host and run efficiently and performantly. Additionally, exemplary systems described here may run ingress gateways (implementing service mesh functionality) on dedicated hosts, which may beneficially provide highly available traffic routing even in the presence of large amounts of traffic.”, 0021; Examiner Note: the services of the SIEM application equate to a first set of network management services, and the first subset of hosts equates to a first tenant)
Kairali in view of Alagna discloses the above limitations of claim 19, but does not explicitly disclose deploying a second set of network management services to a second tenant.
However, Hockey discloses:
for a second tenant, deploying a second set of network management services in the container cluster for managing a second group of datacenters of the second tenant. (“FIG. 2 shows a first cluster 206 and a second cluster 218. The first cluster 206 has a control plane 208 and a plurality of units 214 together forming a first service mesh. Each of the units 214 has a proxy and a client or server. At least one of the units 214 comprises a client 210 and a proxy 212. The second cluster 218 also has a control plane 216 and a plurality of units 224 together forming a second service mesh. Each of the units 224 has a proxy 220 and a client or server 222. The first and second service meshes are independent of one another since they do not share information except for a root certificate 200 as now explained.”, 0031; Examiner Note: the second cluster equates to a second tenant, and the second service mesh equates to a second set of network management services)
As per claim 20, it is a non-transitory C.R.S.M. claim with substantially the same limitations as claims 2 and 3, and as such, it is rejected for substantially the same reasons.
As per claim 22, it is a non-transitory C.R.S.M. claim with substantially the same limitations as claim 5, and as such, it is rejected for substantially the same reasons.
As per claim 23, it is a non-transitory C.R.S.M. claim with substantially the same limitations as claims 11 and 12, and as such, it is rejected for substantially the same reasons.
As per claim 24, it is a non-transitory C.R.S.M. claim with substantially the same limitations as claim 14, and as such, it is rejected for substantially the same reasons.
As per claim 25, it is a non-transitory C.R.S.M. claim with substantially the same limitations as claims 15 and 16, and as such, it is rejected for substantially the same reasons.
Claims 4 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Kairali (US 20230217343 A1) in view of Alagna (US 20220342707 A1) in further view of Hockey (US 20230353392 A1) in further view of Szigeti (US 20230081708 A1).
As per claim 4, Kairali in view of Alagna in further view of Hockey fully discloses the limitations of claim 3, but does not disclose defining a set of rules allowing or blocking communication between microservices of network management services.
However, Szigeti discloses:
defining a first set of firewall rules allowing microservices of a first network management service of the first set of network management services deployed in a first namespace to communicate with microservices of a second network management service of the first set of network management services deployed in a second namespace ("Generally, the first set of access policies defines whether first applications 110 are allowed or restricted from communicating in the network service mesh 108 with second applications 110, or first microservices 110 are allowed or restricted from communicating with second microservices 110 (e.g., mesh access policies 118).", 0053)
defining a second set of firewall rules blocking communication between microservices of the first network management service and microservices of a third network management service of the first set of network management services deployed in a third namespace. ("Generally, the first set of access policies defines whether first applications 110 are allowed or restricted from communicating in the network service mesh 108 with second applications 110, or first microservices 110 are allowed or restricted from communicating with second microservices 110 (e.g., mesh access policies 118).", 0053)
The combination of Kairali in view of Alagna in further view of Hockey in further view of Szigeti would provide a system capable of defining a set of access policies, or firewall rules (see Alagna, [0043]), allowing or restricting microservices of a first NMS in a first namespace (see Kairali, [0078]) from communicating with microservices of a second or third NMS in a second or third namespace (Kairali, [0078]). It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Kairali (0074) in view of Alagna (0019) in further view of Hockey (0031) with those of Szigeti (0053) in order to provide a means for enforcing consistent access policies between architectures, thereby improving the reliability of the NMS system (Szigeti, [0019]).
As per claim 21, it is a non-transitory C.R.S.M. claim with substantially the same limitations as claim 4, and as such, it is rejected for substantially the same reasons.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Kairali (US 20230217343 A1) in view of Alagna (US 20220342707 A1) in further view of Hockey (US 20230353392 A1) in further view of Taft (US 20210392477 A1).
As per claim 6, Kairali in view of Alagna in further view of Hockey fully discloses the limitations of claim 1, but does not disclose different services being in different NMS instances.
However, Taft discloses:
the first set of network management services comprises at least one type of network management service not in the second set of network management services ("Different service meshes may be associated with different geographic regions, different network slices, different wireless communication networks, different providers of wireless communication services, different enterprises, etc.", 0024)
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Kairali (0074) in view of Alagna (0019) in further view of Hockey (0031) with those of Taft (0024) in order to provide more diverse network management services that differ across applications, and to leverage an analytics engine to balance traffic between service mesh systems, thereby reducing congestion, improving latency, and increasing packet delivery rates (see Taft, [0093]).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Kairali (US 20230217343 A1) in view of Alagna (US 20220342707 A1) in further view of Hockey (US 20230353392 A1) in further view of Patel (US 20220311707 A1).
As per claim 8, Kairali in view of Alagna in further view of Hockey fully discloses the limitations of claim 1, but does not disclose a group of datacenters comprising at least one virtual data center and at least one physical on-premises datacenter.
However, Patel discloses:
the first group of datacenters comprises at least one virtual datacenter operated in a public cloud for an entity and at least one physical on-premises datacenter of the entity. ("In some embodiments, the cloud management platform also allows the user to connect the virtual datacenters in a group to (i) native VPCs in a public cloud and/or (ii) on-premises datacenters. These native VPCs, in some embodiments, are not virtual datacenters in that they do not include management components and, in many cases, are not implemented on fully isolated hardware (e.g., the host computers that host DCNs of the VPC may also host DCNs for other VPCs of other public cloud tenants)", 0009; "In some embodiments, an additional gateway router (e.g., a specialized on-premises connection gateway) is defined in the public cloud in order to connect the on-premises datacenter to the one or more virtual datacenters in a connectivity group.", 0010)
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Kairali in view of the container cluster of Alagna in further view of the second cluster and second set of NMS of Hockey with the mix of datacenter types of Patel in order to provide a means for utilizing and coordinating communication between a cluster comprising both virtual and physical datacenters (see Patel, [0010]).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Kairali (US 20230217343 A1) in view of Alagna (US 20220342707 A1) in further view of Hockey (US 20230353392 A1) in further view of Jasperson (US 20150304236 A1).
As per claim 13, Kairali in view of Alagna in further view of Hockey fully discloses the limitations of claim 12, but does not disclose using layer 7 data to route each of the user requests.
However, Jasperson discloses:
the set of ingress services uses layer 7 data to route each of the user requests and each of the messages from the datacenters to the network management services. ("In another embodiment, the system may include a physical or virtual routing server configured to route messages based on layer 7 of the Open Systems Interconnection (OSI) communication model. In this manner, the routing server reads incoming requests for the content in the application layer (layer 7) of an OSI packet in the request. The routing server may route the message according to any of the layer 7 data, such as a URL, host identifier, HTTP header, and the like. In one implementation, some or all of the hosting servers being served by the routing server may be application servers, and the routing server may route requests to the appropriate application server based on the application requested.", 0098)
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Kairali in view of the container cluster of Alagna in further view of the second cluster and second set of NMS of Hockey with the ingress services which use layer 7 data to route requests of Jasperson in order to provide a means for routing user requests which is configured for improved performance and density (see Jasperson, [0101]).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Kairali (US 20230217343 A1) in view of Alagna (US 20220342707 A1) in further view of Hockey (US 20230353392 A1) in further view of Obeso Duque (US 20250133439 A1).
As per claim 17, Kairali in view of Alagna in further view of Hockey fully discloses the limitations of claim 1, but does not explicitly disclose a third set of network management services in the cluster for managing a third group of datacenters of the first tenant.
However, Obeso Duque discloses:
deploying a third set of network management services in the container cluster for managing a third group of datacenters of the first tenant. ("A domain may be understood to comprise hardware and software resources supporting a group of applications that may be running on a group of compute machines and interconnected by a group of service meshes comprised in a group of connected networks. A domain may comprise a mobile network (MN), and an edge cloud (EC) network, such as a service mesh-based edge cloud network.", 0006; "The telecommunications system may further support other technologies, such as Wideband Code Division Multiple Access (WCDMA), ...., any 3rd Generation Partnership Project (3GPP) cellular network, Wireless Local Area Network/s (WLAN) or WiFi network/s, Worldwide Interoperability for Microwave Access (WiMax), IEEE 802.15.4-based low-power short-range networks such as IPv6 over Low-Power Wireless Personal Area Networks (6LowPAN), Zigbee, Z-Wave, Bluetooth Low Energy (BLE), or any cellular network or system.", 0059; Examiner Note: the service meshes for 3GPP, WLAN, and WiFi networks are necessarily unique, thus the group of service meshes (i.e., network management services) may contain more than three service meshes)
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Kairali in view of the container cluster of Alagna in further view of the second cluster and second set of NMS of Hockey with the third set of network management services for the third group of datacenters of Obeso Duque, in order to provide advantages such as reducing cross-domain signaling in a service mesh/NMS system (Obeso Duque, [0033]).
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Kairali (US 20230217343 A1) in view of Alagna (US 20220342707 A1) in further view of Hockey (US 20230353392 A1) in further view of Obeso Duque (US 20250133439 A1) in further view of Kairali.2 (US 11570279 B1).
As per claim 18, Kairali in view of Alagna in further view of Hockey in further view of Obeso Duque fully discloses the limitations of claim 17, but does not explicitly disclose the first and third groups of datacenters being managed separately by the network management services for the first tenant.
However, Kairali.2 discloses:
the first and third groups of datacenters are managed separately by the network management services for the first tenant. ("Likewise, the second microservice chain comprising M7, M2 and M5 may communicate separately from the first microservice chain and be independently managed despite sharing a common microservice (M2)", 0090 ; Examiner Note: a microservice chain is necessarily distributed amongst hosts)
The combination of Kairali in view of Alagna in further view of Hockey in further view of Obeso Duque in further view of Kairali.2 would provide a system which manages the first and third groups of datacenters separately (Alagna, [0019]). It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Kairali in view of the container cluster of Alagna in further view of the second cluster and second set of NMS of Hockey in further view of the third set of network management services for the third group of datacenters of Obeso Duque with the independent management of groups of datacenters of Kairali.2, in order to provide improved security of the service mesh communications between microservices by avoiding unnecessary exposure of the ports (Kairali.2, [0016]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Sood (US 20240205198 A1) – discloses a system and method for securely managing, generating, and controlling access keys in a service mesh, such as signing key protection and communication key protection.
Vohra (US 20230231912 A1) – discloses a storage system proxy associated with a storage system which may receive a service mesh policy. The service mesh may include a control plane and a data plane.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROSS MICHAEL VINCENT whose telephone number is (703)756-1408. The examiner can normally be reached Mon-Fri 8:30AM-5:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Blair, can be reached at (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/R.M.V./
Examiner, Art Unit 2196
/APRIL Y BLAIR/Supervisory Patent Examiner, Art Unit 2196