DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to applicant’s amendment filed on 02/12/2026.
Claims 1-20 are pending and examined.
Response to Arguments
Applicant's arguments filed on 02/12/2026 with respect to 35 U.S.C. 101 have been fully considered and are persuasive. The 35 U.S.C. 101 rejections for claims 10-12 have been withdrawn.
Applicant's arguments filed on 02/12/2026 with respect to 35 U.S.C. 103 have been fully considered but are not persuasive. Applicant argues that “the combination of references fail to teach or suggest creating "a plurality of routing services, each routing service corresponding to a partition of the plurality of partitions, wherein each respective routing service of the plurality of routing services is configured to identify the node of at least one pod associated with the partition identified by the respective partition number based, at least in part, on a request" and "a routing API object configured with a plurality of routing rules, wherein each routing rule maps a request comprising the respective partition number to the respective routing service identified by the respective partition number" as recited in amended claim 1.” Examiner respectfully disagrees; see the 35 U.S.C. 103 rejections below for a detailed analysis. Examiner interprets McVeigh’s computing system configuring an API corresponding to a defined tenant by means of a tenant-specific virtual partition for the multiple tenants as correlating to creating a plurality of routing services, each routing service corresponding to a partition of the plurality of partitions. The API router component redirecting a function call to a customized defined service for a particular tenant of the multiple tenants, such as tenant J or tenant K, correlates to each respective routing service of the plurality of routing services identifying the node of at least one pod associated with the partition. The API router component receiving a message invoking a function call to a customized defined service for a particular tenant, which includes an attribute identifying the particular tenant, correlates to the partition being identified by the respective partition number based, at least in part, on a request.
Examiner further interprets McVeigh’s computing system configuring an API router component that exposes the API corresponding to a defined tenant as creating a routing API object. The API router component utilizes a multitenant SaaS model of access to a service to redirect function calls to tenant-specific customized defined services using attributes that identify a particular tenant in the request, such as a tenant ID; this involves logic to match a particular attribute to a particular tenant service and therefore correlates to creating a routing API object configured with a plurality of routing rules, wherein each routing rule maps a request comprising the respective partition number to the respective routing service identified by the respective partition number. Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to combine Nainar with McVeigh because the additional partitions can also be represented in the graph by applying primitives to indicate the order in which partitions are made. Tenant-specific modules allow customization of specific services and defined functions for a particular virtual partition. API router components also allow multiple single-tenant services to act as a multi-tenant computing platform.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 8-9, 13-15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Nainar et al. (U.S. Patent Application Publication No. US 2021/0328913 A1), hereinafter “Nainar,” in view of McVeigh et al. (U.S. Patent Application Publication No. US 2023/0409415 A1), hereinafter “McVeigh.”
With regards to Claim 1, Nainar teaches:
A method comprising:
receiving, by a processing device, a description of a data grid topology of a containerized computing cluster, wherein the containerized computing cluster comprises a plurality of nodes (Paragraphs 32, 57 and 59, “Referring to FIG. 2, in configurations, a cluster topology 200 for nodes 202 of the DC/WAN 100 may be derived. The cluster topology 200 provides information about which service container pod 102 runs on which node 202… The computing resources may be provided by the cloud computing networks and can be data processing resources such as VM instances or hardware computing systems, database clusters, computing clusters, storage clusters, data storage resources, database resources, networking resources, and others. For example, the computing resources may include instantiating service container pods 102… In some examples, the server computers 602 may each execute one or more virtual resources that support a service or application provisioned across a set or cluster of servers 602.” The cluster topology for pods running on nodes in the cloud computing network which may be provisioned across a cluster of servers correlates to receiving a description of a data grid topology of a containerized computing cluster comprising a plurality of nodes),
wherein the plurality of nodes are divided into a plurality of partitions (Paragraph 32, “The cluster topology 200 provides information about which service container pod 102 runs on which node 202, e.g., worker node, that is connected to which ToRs edge endpoint such as virtual extensible local area network (VxLAN) tunnel end points (VTEPs), MPLS tunnel end points, segment routing tunnel end points, etc. For example, service A container pod 102a may run on worker node 202a, service B container pod 102b and service B′ container pod 102c may run on worker node 202b, and service C container pod 102d and service C′ container pod 102e may run on worker node 202c.” Service B container pod 102b and service B’ container pod 102c running on worker node 202b and service C container pod 102d and service C′ container pod 102e running on worker node 202c correlate to the plurality of nodes being divided into a plurality of partitions) and each partition of the plurality of partitions is identified by a respective partition number of a plurality of partition numbers (Fig. 2, paragraphs 32 and 34, “The cluster topology 200 provides information about which service container pod 102 runs on which node 202, e.g., worker node, that is connected to which ToRs edge endpoint such as virtual extensible local area network (VxLAN) tunnel end points (VTEPs), MPLS tunnel end points, segment routing tunnel end points, etc. 
For example, service A container pod 102a may run on worker node 202a, service B container pod 102b and service B′ container pod 102c may run on worker node 202b, and service C container pod 102d and service C′ container pod 102e may run on worker node 202c.” The service names such as Service B container pod 102b and service B’ container pod 102c running on worker node 202b correlates to each partition being identified by a respective partition number of a plurality of partition numbers), and wherein the data grid topology maps the respective partition number to an address of corresponding nodes for the respective partition (Fig. 2, paragraphs 33-34, “In a configuration, the cluster topology 200 may be derived and kept up to date based on using init-container in a service container pod 102 (prior to running an application/service in the container pod). The init-container may run a utility that detects the ToRs/VTEPs to which the worker node is connected, e.g., using for example, link layer discovery protocol (LLDP)… In another configuration, the cluster topology 200 may be derived by periodically communicating with the K8S master controller or orchestrator (and/or Istio controller or orchestrator) (e.g., orchestrator 112, not illustrated in FIG. 2) to detect service container pod 102 creation or deletion events and obtaining details about the service container pods 102 such as, for example, IP/MAC addresses, the related worker node (such as, for example, IP/MAC address), etc. The SDN controller may already be aware of which ToR/VTEP 204 has the IP/MAC addresses of each worker node 202 and its address resolution protocol (ARP)/neighbor discovery protocol (NDP) table.” The cluster topology obtaining details on IP/MAC addresses of service container pods and the related worker node and updating the mapping with the service name correlates to the data grid topology mapping the partition number to an address of corresponding nodes for the partition);
labeling each of a plurality of pods with the respective partition number of the plurality of partitions associated with the plurality of pods according to the data grid topology, wherein each of the plurality of pods comprises one or more virtualized computing entities running on one or more host computer systems (Fig. 1 and 2, paragraphs 28 and 34-35, “FIG. 1 schematically illustrates a service flow in a DC/WAN environment 100 that includes 5 service container pods 102a-102e including corresponding sidecars 104a-104e… In another configuration, the cluster topology 200 may be derived by periodically communicating with the K8S master controller or orchestrator (and/or Istio controller or orchestrator) (e.g., orchestrator 112, not illustrated in FIG. 2) to detect service container pod 102 creation or deletion events and obtaining details about the service container pods 102 such as, for example, IP/MAC addresses, the related worker node (such as, for example, IP/MAC address), etc. The SDN controller may already be aware of which ToR/VTEP 204 has the IP/MAC addresses of each worker node 202 and its address resolution protocol (ARP)/neighbor discovery protocol (NDP) table... This allows the SDN controller to create/update the mapping of service container pods 102 and corresponding worker nodes 202, as well as ToR/VTEP 204 connectivity with sufficient details. For example, for service container pod 102,” The service names such as “ServiceA” correlate to the respective partition number of the plurality of partitions associated with the plurality of pods. The two partitions belonging to the same worker node with names “service-a-869854b446-9czvw” and “service-a-869854b446-bm8bc” share the same prefix of a partition number and therefore correlate to labeling each of a plurality of pods with a respective partition number of the plurality of partitions associated with the plurality of pods. 
The SDN controller creating and updating the mapping of service container pods based on the cluster topology correlates to labeling each pod with a partition number according to the data grid topology. The service container pods containing corresponding sidecars attached to the main container correlates to each of the plurality of pods comprising one or more virtualized computing entities), and wherein each of the plurality of pods corresponds to a node of the plurality of nodes (Paragraph 32, “The cluster topology 200 provides information about which service container pod 102 runs on which node 202, e.g., worker node, that is connected to which ToRs edge endpoint such as virtual extensible local area network (VxLAN) tunnel end points (VTEPs), MPLS tunnel end points, segment routing tunnel end points, etc. For example, service A container pod 102a may run on worker node 202a, service B container pod 102b and service B′ container pod 102c may run on worker node 202b, and service C container pod 102d and service C′ container pod 102e may run on worker node 202c.” Service B container pod 102b and service B’ container pod 102c running on worker node 202b and service C container pod 102d and service C′ container pod 102e running on worker node 202c, which are provided in the cluster topology mapping pods to nodes, correlate to each of the plurality of pods corresponding to a node of the plurality of nodes);
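For purposes of illustration only, and not as evidence of record, the labeling step recited in claim 1 may be sketched as follows. All pod names, node addresses, and data shapes below are hypothetical assumptions, not taken from the claims or the cited references:

```python
# Illustrative sketch only: labeling each pod with the partition number of
# the partition its node belongs to, according to a data grid topology.
# All names, addresses, and data shapes here are hypothetical.

def label_pods(topology, pods):
    """topology: partition number -> list of node addresses in that partition.
    pods: pod name -> address of the node the pod runs on.
    Returns: pod name -> partition number label."""
    labels = {}
    for pod_name, node_addr in pods.items():
        for partition_number, node_addrs in topology.items():
            if node_addr in node_addrs:
                labels[pod_name] = partition_number  # label pod with its partition
    return labels

topology = {0: ["10.0.0.1"], 1: ["10.0.0.2", "10.0.0.3"]}
pods = {"service-a-9czvw": "10.0.0.1", "service-b-bm8bc": "10.0.0.2"}
print(label_pods(topology, pods))  # {'service-a-9czvw': 0, 'service-b-bm8bc': 1}
```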
Nainar does not explicitly teach:
creating a plurality of routing services, each routing service corresponding to a partition of the plurality of partitions, wherein each respective routing service of the plurality of routing services is configured to identify the node of at least one pod associated with the partition identified by the respective partition number based, at least in part, on a request, and wherein the routing service comprises at least one application programming interface (API) object identifying the at least one pod; and
creating a routing API object configured with a plurality of routing rules, wherein each routing rule maps a request comprising the respective partition number to the respective routing service identified by the respective partition number.
However, McVeigh teaches:
creating a plurality of routing services, each routing service corresponding to a partition of the plurality of partitions, wherein each respective routing service of the plurality of routing services is configured to identify the node of at least one pod associated with the partition identified by the respective partition number based, at least in part, on a request (Paragraphs 35 and 107, “The API router component 110 exposes an API that permits access to the multiple single-tenant services as if the computing system 100 were a multi-tenant computing platform. To provide access in such a fashion, the API router component 110 can receive a message invoking a function call to a customized service for a particular tenant (e.g., tenant J or tenant K), where the message includes an attribute that identifies the particular tenant… The API router component 110 can then redirect the function call to the customized defined service based on the attribute… At block 920, the computing system can configure an API corresponding to a defined tenant. For example, the API can be the API J 130(J) (FIG. 1). The API can be configured by means of a tenant-specific virtual partition (and, in some cases, associated base virtual partitions) as part of building an executable package component that includes the multiple core modules and mapped extension module(s).” The computing system configuring an API corresponding to a defined tenant by means of a tenant-specific virtual partition for the multiple tenants correlates to creating a plurality of routing services, each routing service corresponding to a partition of the plurality of partitions. The API router component redirecting a function call to a customized defined service for a particular tenant of the multiple tenants such as tenant J or tenant K correlates to each respective routing service of the plurality of routing services identifying the node of at least one pod associated with the partition. 
The API router component receiving a message invoking a function call to a customized defined service for a particular tenant which includes an attribute identifying the particular tenant correlates to the partition being identified by the respective partition number based at least in part on a request),
and wherein the respective routing service comprises an application programming interface (API) object (Paragraphs 32, 35, 45 and 107, “Tenant-specific extension modules mapped to the at least one extension point can customize the core API to yield the API corresponding to the defined tenant… In cases where the exposed API is a REST API, the attribute that identifies the particular tenant can be either an authentication token or a REST parameter. In other cases, the attribute can be a tenant identifier (ID) that uniquely identifies the defined tenant. The tenant ID can be a universally unique identifier (UUID), for example. … As is described herein, each one of those extension modules configures additional custom functionality to the exemplified service core 140, resulting in a customized service for a particular tenant… The API can be configured by means of a tenant-specific virtual partition (and, in some cases, associated base virtual partitions) as part of building an executable package component that includes the multiple core modules and mapped extension module(s).” The unique tenant ID associated with a tenant-specific virtual partition correlates to a partition number. The extension modules configuring a custom service for a particular tenant using an API corresponding to the tenant corresponds to the respective routing service comprising at least one API object); and
creating a routing API object configured with a plurality of routing rules, wherein each routing rule maps a request comprising the respective partition number to the respective routing service identified by the respective partition number (Paragraphs 35 and 107-109, “The API router component 110 exposes an API that permits access to the multiple single-tenant services as if the computing system 100 were a multi-tenant computing platform. To provide access in such a fashion, the API router component 110 can receive a message invoking a function call to a customized service for a particular tenant (e.g., tenant J or tenant K), where the message includes an attribute that identifies the particular tenant… The API router component 110 can then redirect the function call to the customized defined service based on the attribute… At block 920, the computing system can configure an API corresponding to a defined tenant. For example, the API can be the API J 130(J) (FIG. 1). The API can be configured by means of a tenant-specific virtual partition (and, in some cases, associated base virtual partitions) as part of building an executable package component that includes the multiple core modules and mapped extension module(s)... At block 930, the computing system can configure an API router component that exposes the API… As is described herein, the API router component can expose a collection of single-tenant APIs as a multitenant arrangement according to a SaaS model of access to a service. At block 940, the computing system can receive, via the API router component, a message invoking a function call to the defined service. The message can include an attribute indicative of the defined tenant. In some cases, the attribute that identifies the particular tenant can be either an authentication token or a REST parameter. In other cases, the attribute can be a tenant ID that uniquely identifies the defined tenant. The tenant ID can be a UUID, for example. 
At block 950, the computing system can redirect, via the API router component, the function call to the customized defined service.” The computing system configuring an API router component that exposes the API which corresponds to a defined tenant correlates to creating a routing API object. The API router component utilizing a multitenant SaaS model of access to a service to redirect function calls to tenant specific customized defined services using attributes which identify a particular tenant in the request, such as a tenant ID, involves logic to match a particular attribute to a particular tenant service and therefore correlates to creating a routing API object configured with a plurality of routing rules, wherein each routing rule maps a request comprising the respective partition number to the respective routing service identified by the respective partition number)
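As an illustrative sketch only, and not as evidence of record, the claimed arrangement of per-partition routing services behind a routing API object configured with routing rules may be modeled as follows. The class names, request shape, and node-selection logic are hypothetical assumptions:

```python
# Illustrative sketch only: a routing API object whose routing rules map a
# request carrying a partition number to the routing service identified by
# that partition number. All names and data shapes are hypothetical.

class RoutingService:
    """One routing service per partition; knows the nodes of its pods."""
    def __init__(self, partition_number, pod_nodes):
        self.partition_number = partition_number
        self.pod_nodes = pod_nodes  # nodes of the pods in this partition

    def identify_node(self, request):
        # Identify the node of a pod in this partition based on the request.
        return self.pod_nodes[request["key"] % len(self.pod_nodes)]

class RoutingAPIObject:
    """Routing rules: partition number -> routing service."""
    def __init__(self, routing_services):
        self.rules = {s.partition_number: s for s in routing_services}

    def route(self, request):
        service = self.rules[request["partition"]]  # apply the matching rule
        return service.identify_node(request)

router = RoutingAPIObject([
    RoutingService(0, ["node-a"]),
    RoutingService(1, ["node-b", "node-c"]),
])
print(router.route({"partition": 1, "key": 0}))  # node-b
```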
McVeigh does not explicitly teach that the API object identifies at least one pod. However, identifying pods using an API object is a well-known method of identifying pods, as evidenced by Nainar above (Paragraphs 20 and 34, “In a configuration, the cluster topology may be derived and kept up to date based on using init-container in a container pod (prior to running an application/service in the container pod). The init-container may run a utility that detects the ToRs/VTEPs to which the worker node is connected, e.g., using for example, link layer discovery protocol (LLDP) and create a mapping. The init-container may then notify (via an application programming interface (API)) the mapping to the backend software defined networking (SDN) controller. This allows the SDN controller to create/update the mapping of each container POD and corresponding worker node and ToR/VTEP connectivity with sufficient details.” The init-container notifying the SDN via API of the mapping between each pod’s service name and corresponding worker node correlates to the API object identifying at least one pod).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Nainar with creating a plurality of routing services, each routing service corresponding to a partition of the plurality of partitions, wherein each respective routing service of the plurality of routing services is configured to identify the node of at least one pod associated with the partition identified by the respective partition number based, at least in part, on a request, and wherein the routing service comprises an application programming interface (API) object; and creating a routing API object configured with a plurality of routing rules, wherein each routing rule maps a request comprising the respective partition number to the respective routing service identified by the respective partition number, as taught by McVeigh, because the additional partitions can also be represented in the graph by applying primitives to indicate the order in which partitions are made. Tenant-specific modules allow customization of specific services and defined functions for a particular virtual partition. API router components also allow multiple single-tenant services to act as a multi-tenant computing platform (McVeigh: paragraphs 35, 115 and 117).
With regards to Claim 13, the method of Claim 1 performs the same steps as the machine of Claim 13, and Claim 13 is therefore rejected using the same rationale set forth above in the rejection of Claim 1.
With regards to Claim 2, Nainar in view of McVeigh teaches the method of Claim 1 above. Nainar further teaches:
updating the data grid topology at a predetermined time (Fig. 2, paragraph 34, “In another configuration, the cluster topology 200 may be derived by periodically communicating with the K8S master controller or orchestrator (and/or Istio controller or orchestrator) (e.g., orchestrator 112, not illustrated in FIG. 2) … This allows the SDN controller to create/update the mapping of service container pods 102 and corresponding worker nodes 202, as well as ToR/VTEP 204 connectivity with sufficient details.” The cluster topology periodically communicating with the orchestrator to update the mapping of service container pods and corresponding worker nodes correlates to updating the data grid topology at a predetermined time); and
updating the labeling of each of the plurality of pods corresponding to the updated data grid topology (Fig. 2, paragraph 34, “In another configuration, the cluster topology 200 may be derived by periodically communicating with the K8S master controller or orchestrator (and/or Istio controller or orchestrator) (e.g., orchestrator 112, not illustrated in FIG. 2) … This allows the SDN controller to create/update the mapping of service container pods 102 and corresponding worker nodes 202, as well as ToR/VTEP 204 connectivity with sufficient details.” The cluster topology updating the mapping of service container pods and corresponding worker nodes correlates to updating the labeling of each of the plurality of pods corresponding to the updated data grid topology).
With regards to Claim 14, the method of Claim 2 performs the same steps as the machine of Claim 14, and Claim 14 is therefore rejected using the same rationale set forth above in the rejection of Claim 2.
With regards to Claim 3, Nainar in view of McVeigh teaches the method of Claim 1 above. Nainar further teaches:
updating the data grid topology upon detecting a trigger event (Fig. 2, paragraph 34, “In another configuration, the cluster topology 200 may be derived by periodically communicating with the K8S master controller or orchestrator (and/or Istio controller or orchestrator) (e.g., orchestrator 112, not illustrated in FIG. 2) to detect service container pod 102 creation or deletion events and obtaining details about the service container pods 102 such as, for example, IP/MAC addresses, the related worker node (such as, for example, IP/MAC address), etc.… Alternatively, the SDN controller may poll the ToR/VTEPs as needed since worker nodes need to have communication (via layer two or layer three) with the K8S master controller or orchestrator via the ToR. This allows the SDN controller to create/update the mapping of service container pods 102 and corresponding worker nodes 202, as well as ToR/VTEP 204 connectivity with sufficient details.” The cluster topology updating the mapping of service container pods and corresponding worker nodes in response to detecting service container pod creation or deletion events correlates to updating the data grid topology upon detecting a trigger event); and
updating the labeling of each of the plurality of pods corresponding to the updated data grid topology (Fig. 2, paragraph 34, “In another configuration, the cluster topology 200 may be derived by periodically communicating with the K8S master controller or orchestrator (and/or Istio controller or orchestrator) (e.g., orchestrator 112, not illustrated in FIG. 2) to detect service container pod 102 creation or deletion events and obtaining details about the service container pods 102 such as, for example, IP/MAC addresses, the related worker node (such as, for example, IP/MAC address), etc.… Alternatively, the SDN controller may poll the ToR/VTEPs as needed since worker nodes need to have communication (via layer two or layer three) with the K8S master controller or orchestrator via the ToR. This allows the SDN controller to create/update the mapping of service container pods 102 and corresponding worker nodes 202, as well as ToR/VTEP 204 connectivity with sufficient details.” The cluster topology updating the mapping of service container pods and corresponding worker nodes correlates to updating the labeling of each of the plurality of pods corresponding to the updated data grid topology).
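As an illustrative sketch only, and not as evidence of record, the trigger-event-driven update of pod labels recited in claim 3 may be modeled as follows. The event shape, pod names, and handler logic are hypothetical assumptions:

```python
# Illustrative sketch only: relabeling pods when a trigger event (pod
# creation or deletion) updates the data grid topology. The event tuple
# shape and all names here are hypothetical.

def on_trigger_event(labels, event):
    """labels: pod name -> partition number.
    event: (kind, pod_name, partition_number), kind in {'create', 'delete'}."""
    kind, pod_name, partition_number = event
    if kind == "create":
        labels[pod_name] = partition_number   # label the newly created pod
    elif kind == "delete":
        labels.pop(pod_name, None)            # drop the deleted pod's label
    return labels

labels = {"service-a-9czvw": 0}
on_trigger_event(labels, ("create", "service-c-x1y2z", 2))
on_trigger_event(labels, ("delete", "service-a-9czvw", 0))
print(labels)  # {'service-c-x1y2z': 2}
```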
With regards to Claim 15, the method of Claim 3 performs the same steps as the machine of Claim 15, and Claim 15 is therefore rejected using the same rationale set forth above in the rejection of Claim 3.
With regards to Claim 8, Nainar in view of McVeigh teaches the method of Claim 1 above. McVeigh further teaches:
wherein the routing service and the routing API object each corresponds to an endpoint managed by a container orchestration service (Paragraphs 52 and 82, “As is described herein, core platform refers to a group of multiple core modules having respective sets of extension points, where the group of multiple core modules may be available in a repository, in computer-executable form… The group of multiple core modules provides a defined service that is common across tenants, and defines shared resources including a core API and a core data model… Further, in the core API, each core module adds function(s) and endpoint(s), but viewed from a client perspective, this is the API of the entire service) ... The index q represents a specific tenant. In some embodiments, that cluster includes a SaaS Kubernetes cluster and Kubernetes container orchestration subsystem, and the executable package component q is, or constitutes, a container.” The core modules which each have respective sets of extension points and adds endpoints to the core API correlates to the routing service corresponding to an endpoint managed by a container orchestration service. The API viewed from a client’s perspective covering the entire service correlates to the routing API object corresponding to an endpoint managed by a container orchestration service).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Nainar with wherein the routing service and the routing API object each corresponds to an endpoint managed by a container orchestration service, as taught by McVeigh, because the runtime subsystem can receive tenant-specific services for deployment and host a plurality of mutually isolated tenant-specific services. The subsystem can also assign resource quotas for specific tenants. From the client perspective, the API also appears to cover the entire service by using routing services and API objects (McVeigh: paragraphs 52 and 82).
With regards to Claim 9, Nainar in view of McVeigh teaches the method of Claim 1 above. Nainar further teaches:
wherein the data grid topology maps the respective partition number of each partition of the plurality of partitions to a pod of corresponding nodes for the respective partition (Fig. 2, paragraphs 34-35, “In another configuration, the cluster topology 200 may be derived by periodically communicating with the K8S master controller or orchestrator (and/or Istio controller or orchestrator) (e.g., orchestrator 112, not illustrated in FIG. 2) to detect service container pod 102 creation or deletion events and obtaining details about the service container pods 102 such as, for example, IP/MAC addresses, the related worker node (such as, for example, IP/MAC address), etc. The SDN controller may already be aware of which ToR/VTEP 204 has the IP/MAC addresses of each worker node 202 and its address resolution protocol (ARP)/neighbor discovery protocol (NDP) table... This allows the SDN controller to create/update the mapping of service container pods 102 and corresponding worker nodes 202, as well as ToR/VTEP 204 connectivity with sufficient details. For example, for service container pod 102,” The service names such as “ServiceA” correlate to the partition number. The two partitions belonging to the same worker node with names “service-a-869854b446-9czvw” and “service-a-869854b446-bm8bc” share the same prefix of a partition number and therefore correlate to mapping the partition number to a pod of corresponding nodes for the partition. The SDN controller creating and updating the mapping of service container pods based on the cluster topology correlates to the data grid topology mapping the partition number of each partition of the plurality of partitions to a pod of corresponding nodes for the respective partition).
With regards to Claim 20, the method of Claim 9 performs the same steps as the machine of Claim 20, and Claim 20 is therefore rejected using the same rationale set forth above in the rejection of Claim 9.
Claims 4, 6-7, 16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Nainar in view of McVeigh and Gleyzer et al. (U.S. Patent No. 10,873,627 B2), hereinafter “Gleyzer.”
With regards to Claim 4, Nainar in view of McVeigh teaches the method of Claim 1. McVeigh further teaches:
receiving the request (Paragraph 109, “At block 940, the computing system can receive, via the API router component, a message invoking a function call to the defined service. The message can include an attribute indicative of the defined tenant.” The computing system receiving a request for the defined service correlates to receiving the request);
identifying the owner node using a labeled partition, the routing service, and the routing API object (Paragraphs 107 and 109-110, “The API can be configured by means of a tenant-specific virtual partition (and, in some cases, associated base virtual partitions) as part of building an executable package component that includes the multiple core modules and mapped extension module(s)… At block 940, the computing system can receive, via the API router component, a message invoking a function call to the defined service. The message can include an attribute indicative of the defined tenant. In some cases, the attribute that identifies the particular tenant can be either an authentication token or a REST parameter. In other cases, the attribute can be a tenant ID that uniquely identifies the defined tenant. The tenant ID can be a UUID, for example. At block 950, the computing system can redirect, via the API router component, the function call to the customized defined service.” The tenant ID associated with the tenant-specific virtual partition correlates to identifying the owner node using a labeled partition. The API routing component redirecting the function call received by the computing system to the customized defined service correlates to identifying the owner node using the routing API object. The function call calling the customized defined service correlates to identifying the owner node using the routing service); and
routing the request to the owner node (Paragraphs 109-110, “At block 940, the computing system can receive, via the API router component, a message invoking a function call to the defined service. The message can include an attribute indicative of the defined tenant. In some cases, the attribute that identifies the particular tenant can be either an authentication token or a REST parameter. In other cases, the attribute can be a tenant ID that uniquely identifies the defined tenant. The tenant ID can be a UUID, for example. At block 950, the computing system can redirect, via the API router component, the function call to the customized defined service.” The API router component redirecting the function call to the customized defined service associated with the tenant correlates to routing the request to the owner node).
McVeigh does not explicitly teach that the owner node is identified using a labeled pod and that the request specifies to access an entry stored in an owner node or write the entry in the owner node. However, identifying owner nodes using a labeled pod is a popular method of identifying nodes as evidenced by Nainar above (Fig. 2, paragraphs 33-34, “In a configuration, the cluster topology 200 may be derived and kept up to date based on using init-container in a service container pod 102 (prior to running an application/service in the container pod). The init-container may run a utility that detects the ToRs/VTEPs to which the worker node is connected, e.g., using for example, link layer discovery protocol (LLDP)… In another configuration, the cluster topology 200 may be derived by periodically communicating with the K8S master controller or orchestrator (and/or Istio controller or orchestrator) (e.g., orchestrator 112, not illustrated in FIG. 2) to detect service container pod 102 creation or deletion events and obtaining details about the service container pods 102 such as, for example, IP/MAC addresses, the related worker node (such as, for example, IP/MAC address), etc. The SDN controller may already be aware of which ToR/VTEP 204 has the IP/MAC addresses of each worker node 202 and its address resolution protocol (ARP)/neighbor discovery protocol (NDP) table.” The cluster topology obtaining details on IP/MAC addresses of service container pods and the related worker node correlates to identifying an owner node using the labeled pod). Additionally, requests specifying access to an entry or writing an entry to an owner node are a popular form of request as evidenced by Gleyzer (Col. 9, lines 55-61, “In some embodiments, one or more data grid services, for example caches and clustered services, can be shared across multiple partitions.
Alternatively, the data grid can provide read-shared/write-specific access to data grid services, in which multiple partitions can share an initial set of data, but their subsequent modifications to that data are then isolated from each other.” Read or write access to data grid services unique to each partition correlates to requests for access or writing an entry to an owner node).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Nainar with receiving the request; identifying the owner node using the labeled pod, the routing service, and the routing API object; and routing the request to the owner node as taught by McVeigh because tenant-specific modules allow customization of specific services and defined functions for a particular virtual partition. API router components also allow multiple single-tenant services to act as a multi-tenant computing platform (McVeigh: paragraphs 115 and 117).
With regards to Claim 16, the machine of Claim 16 performs the same steps as the method of Claim 4, and Claim 16 is therefore rejected using the same rationale set forth above in the rejection of Claim 4.
With regards to Claim 6, Nainar in view of McVeigh and Gleyzer teaches the method of Claim 4. McVeigh further teaches:
wherein the request is received from a client device, and wherein a response from the owner node is sent to the client device (Paragraphs 35 and 164-165, “The message can be received from a client computing device (not depicted in FIG. 1) via a communication network 114 that functionally couples the client computing device and the API router component 110... The client device can include or can be functionally coupled to a display device (not depicted in FIG. 13) that can display various user interfaces in connection with configuration or otherwise customization of a tenant-specific service, as is provided, at least in part, by the software application contained in the software 1355. The one or multiple I/O interfaces 1352 can functionally couple (e.g., communicatively couple) the client device 1346 to another functional element (a component, a unit, server, gateway node, repository, a device, or similar). Functionality of the client device 1346 that is associated with data I/O or signaling I/O can be accomplished in response to execution, by a processor of the processor(s) 1348, of at least one I/O interface that can be retained in the memory 1356. In some embodiments, the at least one I/O interface embodies an API that permits exchange of data or signaling, or both, via an I/O interface. In some embodiments, the one or more I/O interfaces 1352 can include at least one port that can permit connection of the client device 1346 to another other device or functional element.” The message received from the client device correlates to receiving the request from the client device. The client device coupled to a display device that displays tenant-specific services or functional elements correlates to the response being sent to the client device).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Nainar with wherein the request is received from a client device, and wherein a response from the owner node is sent to the client device as taught by McVeigh because tenant-specific modules allow customization of specific services and defined functions for a particular virtual partition. API router components also allow multiple single-tenant services to act as a multi-tenant computing platform. I/O interfaces allow delivery of output to devices to represent an outcome of a specific operation (McVeigh: paragraphs 115, 117 and 166).
With regards to Claim 7, Nainar in view of McVeigh teaches the method of Claim 1. Nainar in view of McVeigh does not explicitly teach:
wherein the request comprises a uniform resource locator (URL).
However, Gleyzer teaches:
wherein the request comprises a uniform resource locator (URL) (Col. 8, lines 8-14, “In accordance with an embodiment, each partition 321, 331 can define a different virtual target on which to accept incoming traffic for that tenant environment, and a different URL 322, 332 for connecting to the partition and to its resources 324, 334, including in this example either a bayland urgent care database, or a valley health database respectively.” The URL used for connecting to the partition and its resources correlates to the request comprising a URL).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Nainar with wherein the request comprises a uniform resource locator (URL) as taught by Gleyzer because virtual target information associated with a particular partition can be used to define a partition-specific virtual target for use by the partition. Additionally, database instances can use compatible schemas because the same application code executes against both databases. This allows virtual targets and connection pools to be created for respective database instances when the partitions are started (Gleyzer: Col. 3, lines 41-46 and Col. 8, lines 14-18).
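As a purely illustrative aside (not part of the record), a partition-specific URL of the general kind Gleyzer describes might encode the target partition in its path; the URL scheme and helper below are hypothetical:

```python
from urllib.parse import urlparse

def partition_from_url(url: str) -> str:
    """Extract the partition identifier from a hypothetical partition URL."""
    path = urlparse(url).path              # e.g. "/partitions/3/cache"
    segments = [s for s in path.split("/") if s]
    if len(segments) < 2 or segments[0] != "partitions":
        raise ValueError("URL does not address a partition")
    return segments[1]
```

For example, `partition_from_url("https://grid.example.com/partitions/3/cache")` returns `"3"`.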
With regards to Claim 18, the machine of Claim 18 performs the same steps as the method of Claim 7, and Claim 18 is therefore rejected using the same rationale set forth above in the rejection of Claim 7.
Claims 5, 11 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Nainar in view of McVeigh, Gleyzer and Karumbunathan et al. (U.S. Patent Application Publication No. US 2023/0353635 A1), hereinafter “Karumbunathan.”
With regards to Claim 5, Nainar in view of McVeigh and Gleyzer teaches the method of Claim 4. McVeigh further teaches:
wherein the request includes a partition number (Paragraphs 107 and 109-110, “The API can be configured by means of a tenant-specific virtual partition (and, in some cases, associated base virtual partitions) as part of building an executable package component that includes the multiple core modules and mapped extension module(s)… At block 940, the computing system can receive, via the API router component, a message invoking a function call to the defined service. The message can include an attribute indicative of the defined tenant. In some cases, the attribute that identifies the particular tenant can be either an authentication token or a REST parameter. In other cases, the attribute can be a tenant ID that uniquely identifies the defined tenant. The tenant ID can be a UUID, for example. At block 950, the computing system can redirect, via the API router component, the function call to the customized defined service.” The message including a tenant ID which is representative of a tenant-specific partition correlates to the request including a partition number).
McVeigh does not explicitly teach that the partition number is computed by a consistent hash function based on a key of the entry. However, using consistent hash functions is a popular method of locating data as evidenced by Karumbunathan (Paragraph 121, “In order to locate a particular piece of data, embodiments calculate a hash value for a data segment or apply an inode number or a data segment number. The output of this operation points to a non-volatile solid state storage 152 having the authority 168 for that particular piece of data.” The hash value calculated for data that points to a specific authority for a particular piece of data correlates to a consistent hash value based on a key of the entry).
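For illustration only (no code appears in the cited references), the key-to-partition-number relationship at issue in this limitation could be sketched as follows; the hash function choice and partition count are hypothetical:

```python
import hashlib

NUM_PARTITIONS = 8  # hypothetical partition count

def partition_for_key(key: str) -> int:
    """Compute a partition number from an entry's key.

    A stable hash means the same key always yields the same partition,
    so the partition number alone suffices to find the entry's owner node.
    """
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_PARTITIONS
```

A production data grid would typically use a hash ring rather than a plain modulo so that changing the partition count relocates few keys; the modulo form is kept here only to show how a key deterministically maps to a partition number.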
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Nainar with wherein the request includes a partition number as taught by McVeigh because tenant-specific modules allow customization of specific services and defined functions for a particular virtual partition. API router components also allow multiple single-tenant services to act as a multi-tenant computing platform (McVeigh: paragraphs 115 and 117).
With regards to Claim 17, the machine of Claim 17 performs the same steps as the method of Claim 5, and Claim 17 is therefore rejected using the same rationale set forth above in the rejection of Claim 5.
With regards to Claim 11, Nainar in view of McVeigh and Karumbunathan teaches the system of Claim 10 below. Nainar in view of McVeigh and Karumbunathan does not explicitly teach:
wherein the request comprises a uniform resource locator (URL).
However, Gleyzer teaches:
wherein the request comprises a uniform resource locator (URL) (Col. 8, lines 8-14, “In accordance with an embodiment, each partition 321, 331 can define a different virtual target on which to accept incoming traffic for that tenant environment, and a different URL 322, 332 for connecting to the partition and to its resources 324, 334, including in this example either a bayland urgent care database, or a valley health database respectively.” The URL used for connecting to the partition and its resources correlates to the request comprising a URL).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Nainar with wherein the request comprises a uniform resource locator (URL) as taught by Gleyzer because virtual target information associated with a particular partition can be used to define a partition-specific virtual target for use by the partition. Additionally, database instances can use compatible schemas because the same application code executes against both databases. This allows virtual targets and connection pools to be created for respective database instances when the partitions are started (Gleyzer: Col. 3, lines 41-46 and Col. 8, lines 14-18).
Claims 10, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Nainar in view of McVeigh and Karumbunathan.
With regards to Claim 10, Nainar teaches:
A system comprising:
a memory;
a processing device coupled to the memory, the processing device to perform operations comprising (Paragraphs 64 and 66, “In one illustrative configuration, one or more central processing units (“CPUs”) 704 operate in conjunction with a chipset 706. The CPUs 704 can be standard programmable processors that perform arithmetic and logical operations necessary for the operation of the server computer 602… The chipset 706 provides an interface between the CPUs 704 and the remainder of the components and devices on the baseboard 702. The chipset 706 can provide an interface to a RAM 708, used as the main memory in the server computer 602.” The CPUs being standard programmable processors connected to chipsets and RAM correspond to a processing device coupled to the memory):
creating, in view of a data grid topology of a containerized computing cluster of the system, a plurality of routing systems as subcomponents of the system (Fig.3-4, paragraphs 37-38, “Referring to FIG. 3, in configurations, once the service flow definition 110 and cluster topology 200 are defined, the K8S master controller or orchestrator (and/or Istio controller or orchestrator) 112 may create granular IP/MAC route distribution policies 300. In configurations, the route distribution policies may be specific to service container pods 102 that are newly instantiated... For example, an example of an IP/MAC route distribution policy 300 may be formulated for the service A container pod 102a and the service B container pod 102b flow. A similar process may be followed for the route distribution policies among all service container pods 102.” The orchestrator creating granular IP/MAC route distribution policies for each service container pod based on the cluster topology definition correlates to a plurality of routing systems as subcomponents of the system being created in view of a data grid topology of a containerized computing cluster), wherein the containerized computing cluster comprises a plurality of nodes (Paragraphs 32, 57 and 59, “Referring to FIG. 2, in configurations, a cluster topology 200 for nodes 202 of the DC/WAN 100 may be derived. The cluster topology 200 provides information about which service container pod 102 runs on which node 202… The computing resources may be provided by the cloud computing networks and can be data processing resources such as VM instances or hardware computing systems, database clusters, computing clusters, storage clusters, data storage resources, database resources, networking resources, and others. 
For example, the computing resources may include instantiating service container pods 102… In some examples, the server computers 602 may each execute one or more virtual resources that support a service or application provisioned across a set or cluster of servers 602.” The cluster topology for pods running on nodes in the cloud computing network which may be provisioned across a cluster of servers correlates to the containerized computing cluster comprising a plurality of nodes), wherein the plurality of nodes are divided into a plurality of partitions (Paragraph 32, “The cluster topology 200 provides information about which service container pod 102 runs on which node 202, e.g., worker node, that is connected to which ToRs edge endpoint such as virtual extensible local area network (VxLAN) tunnel end points (VTEPs), MPLS tunnel end points, segment routing tunnel end points, etc. For example, service A container pod 102a may run on worker node 202a, service B container pod 102b and service B′ container pod 102c may run on worker node 202b, and service C container pod 102d and service C′ container pod 102e may run on worker node 202c.” Service B container pod 102b and service B’ container pod 102c running on worker node 202b and service C container pod 102d and service C′ container pod 102e running on worker node 202c correlate to the plurality of nodes being divided into a plurality of partitions) and each partition of the plurality of partitions is identified by a respective partition number (Fig. 2, paragraphs 32 and 34, “The cluster topology 200 provides information about which service container pod 102 runs on which node 202, e.g., worker node, that is connected to which ToRs edge endpoint such as virtual extensible local area network (VxLAN) tunnel end points (VTEPs), MPLS tunnel end points, segment routing tunnel end points, etc. 
For example, service A container pod 102a may run on worker node 202a, service B container pod 102b and service B′ container pod 102c may run on worker node 202b, and service C container pod 102d and service C′ container pod 102e may run on worker node 202c.” The service names such as Service B container pod 102b and service B’ container pod 102c running on worker node 202b correlates to each partition being identified by a respective partition number), and wherein the data grid topology maps the respective partition number to an address of corresponding nodes for the respective partition (Fig. 2, paragraphs 33-34, “In a configuration, the cluster topology 200 may be derived and kept up to date based on using init-container in a service container pod 102 (prior to running an application/service in the container pod). The init-container may run a utility that detects the ToRs/VTEPs to which the worker node is connected, e.g., using for example, link layer discovery protocol (LLDP)… In another configuration, the cluster topology 200 may be derived by periodically communicating with the K8S master controller or orchestrator (and/or Istio controller or orchestrator) (e.g., orchestrator 112, not illustrated in FIG. 2) to detect service container pod 102 creation or deletion events and obtaining details about the service container pods 102 such as, for example, IP/MAC addresses, the related worker node (such as, for example, IP/MAC address), etc. 
The SDN controller may already be aware of which ToR/VTEP 204 has the IP/MAC addresses of each worker node 202 and its address resolution protocol (ARP)/neighbor discovery protocol (NDP) table.” The cluster topology obtaining details on IP/MAC addresses of service container pods and the related worker node and updating the mapping with the service name correlates to the data grid topology mapping the partition number to an address of corresponding nodes for the partition), wherein each respective routing system of the plurality of routing systems is associated with a corresponding partition of the plurality of partitions (Fig.3-4, paragraphs 37-38, “Referring to FIG. 3, in configurations, once the service flow definition 110 and cluster topology 200 are defined, the K8S master controller or orchestrator (and/or Istio controller or orchestrator) 112 may create granular IP/MAC route distribution policies 300. In configurations, the route distribution policies may be specific to service container pods 102 that are newly instantiated... For example, an example of an IP/MAC route distribution policy 300 may be formulated for the service A container pod 102a and the service B container pod 102b flow. A similar process may be followed for the route distribution policies among all service container pods 102.” The orchestrator creating granular IP/MAC route distribution policies for each service container pod based on the cluster topology definition correlates to each respective routing system of the plurality of routing systems being associated with a corresponding partition of the plurality of partitions)
Nainar does not explicitly teach:
wherein each respective routing system of the plurality of routing systems comprises at least one application programmer interface (API) object identifying at least one pod of the corresponding partition;
receiving, by a routing API object of the system, a request to access an owner node,
wherein the request includes a partition number determined based on a consistent hash function of a key of an entry stored or to be stored in the owner node, wherein the routing API object is configured with a plurality of routing rules, wherein each routing rule maps a request comprising the respective partition number to the respective routing system associated with the respective partition number;
sending the request to a routing system of the plurality of routing systems, wherein the routing system identifies the owner node based on the partition number in the request, and
receiving, from the owner node, a response to the request.
However, McVeigh teaches:
wherein each respective routing system of the plurality of routing systems comprises at least one application programmer interface (API) object; receiving, by a routing API object of the system, a request to access an owner node (Paragraphs 32, 35, 45 and 107, “Tenant-specific extension modules mapped to the at least one extension point can customize the core API to yield the API corresponding to the defined tenant… In cases where the exposed API is a REST API, the attribute that identifies the particular tenant can be either an authentication token or a REST parameter. In other cases, the attribute can be a tenant identifier (ID) that uniquely identifies the defined tenant. The tenant ID can be a universally unique identifier (UUID), for example… The API router component 110 exposes an API that permits access to the multiple single-tenant services as if the computing system 100 were a multi-tenant computing platform. To provide access in such a fashion, the API router component 110 can receive a message invoking a function call to a customized service for a particular tenant (e.g., tenant J or tenant K), where the message includes an attribute that identifies the particular tenant… The API router component 110 can then redirect the function call to the customized defined service based on the attribute… As is described herein, each one of those extension modules configures additional custom functionality to the exemplified service core 140, resulting in a customized service for a particular tenant… The API can be configured by means of a tenant-specific virtual partition (and, in some cases, associated base virtual partitions) as part of building an executable package component that includes the multiple core modules and mapped extension module(s).” The extension modules configuring a custom service for a particular tenant using an API corresponding to the tenant corresponds to each respective routing system of the plurality of routing systems comprising at least one API object. The API router component receiving a message that includes an attribute identifying the particular tenant and invokes a function call to a customized defined service for that tenant correlates to receiving, by a routing API object of the system, a request to access an owner node);
wherein the request includes a partition number (Paragraphs 107 and 109-110, “The API can be configured by means of a tenant-specific virtual partition (and, in some cases, associated base virtual partitions) as part of building an executable package component that includes the multiple core modules and mapped extension module(s)… At block 940, the computing system can receive, via the API router component, a message invoking a function call to the defined service. The message can include an attribute indicative of the defined tenant. In some cases, the attribute that identifies the particular tenant can be either an authentication token or a REST parameter. In other cases, the attribute can be a tenant ID that uniquely identifies the defined tenant. The tenant ID can be a UUID, for example. At block 950, the computing system can redirect, via the API router component, the function call to the customized defined service.” The message including a tenant ID which is representative of a tenant-specific partition correlates to the request including a partition number), wherein the routing API object is configured with a plurality of routing rules, wherein each routing rule maps a request comprising the respective partition number to the respective routing system associated with the respective partition number (Paragraphs 35 and 107-109, “The API router component 110 exposes an API that permits access to the multiple single-tenant services as if the computing system 100 were a multi-tenant computing platform. 
To provide access in such a fashion, the API router component 110 can receive a message invoking a function call to a customized service for a particular tenant (e.g., tenant J or tenant K), where the message includes an attribute that identifies the particular tenant… The API router component 110 can then redirect the function call to the customized defined service based on the attribute… At block 920, the computing system can configure an API corresponding to a defined tenant. For example, the API can be the API J 130(J) (FIG. 1). The API can be configured by means of a tenant-specific virtual partition (and, in some cases, associated base virtual partitions) as part of building an executable package component that includes the multiple core modules and mapped extension module(s)... At block 930, the computing system can configure an API router component that exposes the API… As is described herein, the API router component can expose a collection of single-tenant APIs as a multitenant arrangement according to a SaaS model of access to a service. At block 940, the computing system can receive, via the API router component, a message invoking a function call to the defined service. The message can include an attribute indicative of the defined tenant. In some cases, the attribute that identifies the particular tenant can be either an authentication token or a REST parameter. In other cases, the attribute can be a tenant ID that uniquely identifies the defined tenant. The tenant ID can be a UUID, for example. 
At block 950, the computing system can redirect, via the API router component, the function call to the customized defined service.” The API router component utilizing a multitenant SaaS model of access to a service to redirect function calls to tenant-specific customized defined services using attributes which identify a particular tenant in the request, such as a tenant ID, involves logic to match a particular attribute to a particular tenant service and therefore correlates to the routing API object being configured with a plurality of routing rules, wherein each routing rule maps a request comprising the respective partition number to the respective routing system associated with the respective partition number);
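For illustration only (no code appears in the cited references), the routing-rule mapping addressed by this limitation, in which each rule maps a partition number carried in a request to the routing system for that partition, could be sketched as follows; the rule table and service names are hypothetical:

```python
# Hypothetical routing rules: each rule maps the partition number carried
# in a request to the routing service/system for that partition.
ROUTING_RULES = {
    0: "routing-service-0",
    1: "routing-service-1",
    2: "routing-service-2",
}

def route(request: dict) -> str:
    """Return the routing service for the partition named in the request."""
    return ROUTING_RULES[request["partition"]]
```

Under this sketch, `route({"partition": 1, "key": "order-1234"})` returns `"routing-service-1"`.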
sending the request to a routing system of the plurality of routing systems, wherein the routing system identifies the owner node based on the partition number in the request (Paragraph 109, “At block 940, the computing system can receive, via the API router component, a message invoking a function call to the defined service. The message can include an attribute indicative of the defined tenant… At block 950, the computing system can redirect, via the API router component, the function call to the customized defined service.” The computing system receiving a request for the defined service including an attribute indicative of the defined tenant correlates to identifying the owner node based on the partition number in the request. The API router component redirecting the function call to the specific customized defined service correlates to sending the request to a routing system of a plurality of routing systems associated with the partition number), and
receiving, from the owner node, a response to the request (Paragraphs 35 and 164-165, “The message can be received from a client computing device (not depicted in FIG. 1) via a communication network 114 that functionally couples the client computing device and the API router component 110... The client device can include or can be functionally coupled to a display device (not depicted in FIG. 13) that can display various user interfaces in connection with configuration or otherwise customization of a tenant-specific service, as is provided, at least in part, by the software application contained in the software 1355. The one or multiple I/O interfaces 1352 can functionally couple (e.g., communicatively couple) the client device 1346 to another functional element (a component, a unit, server, gateway node, repository, a device, or similar). Functionality of the client device 1346 that is associated with data I/O or signaling I/O can be accomplished in response to execution, by a processor of the processor(s) 1348, of at least one I/O interface that can be retained in the memory 1356. In some embodiments, the at least one I/O interface embodies an API that permits exchange of data or signaling, or both, via an I/O interface. In some embodiments, the one or more I/O interfaces 1352 can include at least one port that can permit connection of the client device 1346 to another other device or functional element.” The message received from the client device correlates to the request. The client device coupled to a display device that displays tenant-specific services or functional elements correlates to receiving a response to the request from the owner node).
McVeigh does not explicitly teach that the partition number is computed by a consistent hash function based on a key of the entry stored or to be stored in the owner node and that the API object identifies at least one pod. However, using a consistent hash function of a key of an entry stored or to be stored in the owner node is a popular method of locating data as evidenced by Karumbunathan (Paragraph 121, “In order to locate a particular piece of data, embodiments calculate a hash value for a data segment or apply an inode number or a data segment number. The output of this operation points to a non-volatile solid state storage 152 having the authority 168 for that particular piece of data… The first stage maps an entity identifier (ID), e.g., a segment number, inode number, or directory number to an authority identifier. This mapping may include a calculation such as a hash or a bit mask. The second stage is mapping the authority identifier to a particular non-volatile solid state storage 152, which may be done through an explicit mapping.” The hash value calculated for data such as an entity identifier that points to a specific authority for a particular piece of data correlates to computing a partition number using a consistent hash function based on a key of the entry. The output pointing to the non-volatile solid state storage correlates to the entry being stored in an owner node). Additionally, identifying pods using an API object is a popular method of identifying pods as evidenced by Nainar above (Paragraphs 20 and 34).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Nainar with computing a partition number by using a consistent hash function based on a key of an entry, wherein the entry is stored or to be stored in an owner node as taught by Karumbunathan because hashing can be used to locate a particular piece of data, which may have been moved around during a data move or data reconstruction process (Karumbunathan: paragraph 121).
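The consistent-hashing scheme evidenced by Karumbunathan may be illustrated by the following minimal sketch. This sketch is illustrative only; the node names, key strings, and use of an MD5-derived hash are assumptions for the example and are not drawn from any cited reference. A consistent hash of an entry's key deterministically selects an owner node, so the same key always resolves to the same node, and adding or removing a node remaps only the keys in that node's arc of the ring.

```python
import bisect
import hashlib

def _hash(value: str) -> int:
    # Stable 64-bit hash (MD5 prefix) so the mapping is identical across runs,
    # unlike Python's built-in hash(), which is salted per process.
    return int.from_bytes(hashlib.md5(value.encode("utf-8")).digest()[:8], "big")

class ConsistentHashRing:
    """Minimal consistent-hash ring: a key maps to the first node at or
    after the key's hash position on the ring."""

    def __init__(self, nodes):
        self._ring = sorted((_hash(n), n) for n in nodes)
        self._hashes = [h for h, _ in self._ring]

    def owner(self, key: str) -> str:
        # Find the first node clockwise from the key's hash (wrapping around).
        idx = bisect.bisect(self._hashes, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
# The same entry key deterministically resolves to the same owner node.
assert ring.owner("entry-key-1") == ring.owner("entry-key-1")
```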
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Nainar with wherein each of the plurality of routing systems comprises at least one application programmer interface (API) object; receiving a request to access an owner node; adding the partition number to the request to access the owner node; sending the request to a routing system of the plurality of routing systems, wherein the routing system is associated with the partition number, wherein the routing system identifies the owner node based on the partition number in the request; and receiving, from the owner node, a response to the request as taught by McVeigh because tenant-specific modules allow customization of specific services and defined functions for a particular virtual partition. API router components also allow multiple single-tenant services to act as a multi-tenant computing platform. The runtime subsystem can receive tenant-specific services for deployment to host a plurality of mutually isolated tenant-specific services. The subsystem can also assign resource quotas for specific tenants. From the client perspective, the API also appears to cover the entire service by using routing services and API objects. Additional partitions can also be represented in the graph by applying primitives to indicate the order in which partitions are made. I/O interfaces allow delivery of output to devices to represent an outcome of a specific operation (McVeigh: paragraphs 32, 35, 52, 82, 115, and 166).
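The routing flow addressed by the combination may be sketched as follows. This is a hypothetical sketch only; the rule table, service names, owner-node table, request shape, and the toy byte-sum stand-in for a consistent hash are illustrative assumptions, not drawn from any cited reference. The partition number added to the request selects, via a routing rule, the routing service associated with that partition, which in turn identifies the owner node.

```python
# Hypothetical routing rules: each rule maps a request's partition number
# to the routing service responsible for that partition.
ROUTING_RULES = {0: "routing-svc-0", 1: "routing-svc-1", 2: "routing-svc-2"}

# Hypothetical partition-to-owner-node table held by each routing service.
PARTITION_OWNERS = {0: "node-a", 1: "node-b", 2: "node-c"}

def send_request(key: str, num_partitions: int = 3) -> dict:
    """Add the partition number to the request, select the routing service
    associated with that partition, and resolve the owner node."""
    # Toy deterministic stand-in for a consistent hash of the entry key.
    partition = sum(key.encode("utf-8")) % num_partitions
    request = {"key": key, "partition": partition}
    service = ROUTING_RULES[partition]       # routing rule selects the service
    owner = PARTITION_OWNERS[partition]      # service identifies the owner node
    return {"routed_via": service, "owner_node": owner, "request": request}

result = send_request("entry-key")
assert result["owner_node"] == PARTITION_OWNERS[result["request"]["partition"]]
```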
With regards to Claim 12, Nainar in view of McVeigh and Karumbunathan teaches the system of Claim 10 above. Nainar further teaches:
wherein the data grid topology is received and maintained by the plurality of routing systems (Fig. 2-4, paragraphs 34 and 37-38, “In another configuration, the cluster topology 200 may be derived by periodically communicating with the K8S master controller or orchestrator (and/or Istio controller or orchestrator) (e.g., orchestrator 112, not illustrated in FIG. 2) to detect service container pod 102 creation or deletion events and obtaining details about the service container pods 102 such as, for example, IP/MAC addresses, the related worker node (such as, for example, IP/MAC address), etc…. Referring to FIG. 3, in configurations, once the service flow definition 110 and cluster topology 200 are defined, the K8S master controller or orchestrator (and/or Istio controller or orchestrator) 112 may create granular IP/MAC route distribution policies 300. In configurations, the route distribution policies may be specific to service container pods 102 that are newly instantiated... For example, an example of an IP/MAC route distribution policy 300 may be formulated for the service A container pod 102a and the service B container pod 102b flow. A similar process may be followed for the route distribution policies among all service container pods 102.” The orchestrator creating granular IP/MAC route distribution policies for service container pods based on the cluster topology definition correlates to the data grid topology being received by the plurality of routing systems. The cluster topology being derived through periodically communicating with the orchestrator correlates to the data grid topology being maintained by the plurality of routing systems).
With regards to Claim 19, Nainar in view of McVeigh teaches the machine of Claim 13 above.
Nainar further teaches:
the routing system is managed using a container orchestration service (Fig. 3-4, paragraphs 37-38, “Referring to FIG. 3, in configurations, once the service flow definition 110 and cluster topology 200 are defined, the K8S master controller or orchestrator (and/or Istio controller or orchestrator) 112 may create granular IP/MAC route distribution policies 300. In configurations, the route distribution policies may be specific to service container pods 102 that are newly instantiated. For example, an example of an IP/MAC route distribution policy 300 may be formulated for the service A container pod 102a and the service B container pod 102b flow. A similar process may be followed for the route distribution policies among all service container pods 102.” The orchestrator creating granular IP/MAC route distribution policies correlates to the routing system being managed by a container orchestration service),
McVeigh further teaches:
and wherein the service, and the routing API object each corresponds to an endpoint managed by the container orchestration service (Paragraphs 52 and 82, “As is described herein, core platform refers to a group of multiple core modules having respective sets of extension points, where the group of multiple core modules may be available in a repository, in computer-executable form… The group of multiple core modules provides a defined service that is common across tenants, and defines shared resources including a core API and a core data model… Further, in the core API, each core module adds function(s) and endpoint(s), but viewed from a client perspective, this is the API of the entire service) ... The index q represents a specific tenant. In some embodiments, that cluster includes a SaaS Kubernetes cluster and Kubernetes container orchestration subsystem, and the executable package component q is, or constitutes, a container.” The core modules which each have respective sets of extension points and adds endpoints to the core API correlates to the routing service corresponding to an endpoint managed by a container orchestration service. The API viewed from a client’s perspective covering the entire service correlates to the routing API object corresponding to an endpoint managed by a container orchestration service).
McVeigh does not explicitly teach that the pod corresponds to an endpoint managed by the container orchestration service. However, pods are a common resource corresponding to an endpoint managed by a container orchestration service as evidenced by Karumbunathan (Paragraph 487, “If part of a dataset associated with a pod is exported to a particular host through a host definition (meaning that it is provided to a host based on a host definition through a list of network endpoints, iSCSI IQNs, or initiator ports from one or more of a pod's current storage systems' own network endpoints, and SCSI targets), then when an additional storage system is added to the pod, the added storage system's host definitions can be examined.” The pod’s current storage systems each having network endpoints correlates to pods having an endpoint managed by a container orchestration service).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Nainar with wherein the service, and the routing API object each corresponds to an endpoint managed by the container orchestration service as taught by McVeigh because the runtime subsystem can receive tenant-specific services for deployment to host a plurality of mutually isolated tenant-specific services. The subsystem can also assign resource quotas for specific tenants. From the client perspective, the API also appears to cover the entire service by using routing services and API objects (McVeigh: paragraphs 52 and 82).
Prior Art Made of Record
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Oliver et al. (U.S. Patent No. US 9348668 B2); teaching a method of using a distributed data grid storing data partitions which are distributed throughout a cluster of nodes. The system uses event interceptors to handle events associated with operations and maps event interceptors to event dispatchers in the cluster. The data grid employs additional types of events and defines different event interceptors while avoiding client interaction overhead.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SELINA HU whose telephone number is (571)272-5428. The examiner can normally be reached Monday-Friday 8:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chat Do can be reached at (571) 272-3721. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
The Public PAIR and Private PAIR systems are no longer available. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SELINA ELISA HU/Examiner, Art Unit 2193
/Chat C Do/Supervisory Patent Examiner, Art Unit 2193