DETAILED ACTION
This Office action is in response to the communication filed on 12/15/2025.
Claim 24 is new.
Claims 1-4, 8, 10 and 18-19 are currently amended.
Claims 9, 11-17 and 20-21 are canceled.
Claims 1-8, 10, 18-19 and 22-24 are pending in this application.
Request for Continued Examination (RCE) under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/15/2025 has been entered.
Response to Arguments
Applicant: In the Remarks filed 12/15/2025, pages 7-8, Applicant argues that, “Claim 1 recites, "communicating, by the sidecar pattern container, with a network policy enforcer external to the pod to cause the network policy enforcer to check the consumer policy." Claim 1 further recites, "determining, by the network policy enforcer and based on the check, that the cumulative usage complies with the predefined consumption quota." Claim 1 also recites, "sending, by the network policy enforcer and to the sidecar pattern container, an approval of the first request."…Kairali does not disclose network policy being enforced outside of the proxies 527. Kairali does not discuss the service mesh control plane 505 or the policy optimizer 550 being involved in deciding whether a particular request is accepted or denied. Accordingly, Kairali fails to disclose allowing a request as now set forth in amended claim 1.”
Examiner: Applicant's arguments filed 12/15/2025 have been fully considered but they are not persuasive. Examiner respectfully disagrees.
Kairali teaches communicating, by the sidecar pattern container, with a network policy enforcer external to the pod to cause the network policy enforcer to check the consumer policy because ¶0004, teaches determining by the service mesh, whether the history of API calls indicates a timeout rate (i.e. cumulative usage) between the first microservice and the second microservice using the timeout value as configured in the timeout configuration is less than a threshold timeout rate, ¶0048, teaches Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service, ¶0070, teaches the data plane 509 may comprise a plurality of instances 523, which can be in the form of one or more clusters, pods, or containers hosting a service 525 within the instance 523, ¶0072-¶0073, teaches a proxy 527 a tasked with sending an outbound communication to the next service 525 b of a microservice chain knows where to send the communication, such as API calls…timeout controls and rate limiting between microservices 525. The observability tasks may include, for each API call, collecting detailed metrics about the service mesh 511, ¶0075, teaches rate limits applied to one or more microservices 525 of a microservice chain wherein a microservice 525 may be limited to fulfilling a number of API calls per unit of time, ¶0091, teaches the retry counts of the network policy may be configured to at least 10 retries automatically by the policy optimizer 550 and the network policy can be pushed to proxies P1 and/or P2 for enforcement.
Kairali also teaches determining, by the network policy enforcer and based on the check, that the cumulative usage complies with the predefined consumption quota because ¶0004, teaches determining by the service mesh, whether the history of API calls indicates a timeout rate (i.e. cumulative usage) between the first microservice and the second microservice using the timeout value as configured in the timeout configuration is less than a threshold timeout rate, ¶0083, teaches policy optimizer 550 may check the health, readiness and availability of one or more microservices 525 in a microservice chain being invoked by an API call.
Kairali further teaches sending, by the network policy enforcer and to the sidecar pattern container, an approval of the first request (¶0073, teaches a microservice chain invoked by an API call being observed and any rate limits being enforced on one or more microservices 525 fulfilling requests (i.e. approval of the first request) of the API call, ¶0093, teaches policy optimizer 550 may apply the user level rate limit at the ingress of the API gateway 605 and control the rate at which all API calls from the user 601 are sent from the API gateway 605 to the first microservice 525 or proxy 527, ¶0097, teaches one or more incoming API call(s) may be transmitted to the service mesh, invoking one or more microservice chains to fulfill the incoming request (i.e. approval of the first request) for microservices).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-7, 18-19, 22 and 24 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kairali et al. (US 2023/0259415), hereinafter “Kairali”.
With respect to claim 1, Kairali discloses a method comprising:
receiving, by a first proxy for a first microservice consumer of a plurality of microservice consumers (¶0020, teaches the proxy for the microservice that can control the various networking parameters), a first request from the first microservice consumer associated with a usage of a resource (¶0048, teaches Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service, ¶0069, teaches API calls may request the execution of one or more capabilities or functions of the microservices 525 a-525 n (generally referred to herein as microservices 525 or services 525)), wherein the resource is shared by the plurality of microservice consumers (¶0101, teaches a load balancer may help control the distribution (i.e. share) of the API calls (i.e. resource) between microservices and replicas thereof, in order to prevent a single microservice or replica from receiving too many retry attempts all at once);
allowing, by the first proxy (¶0078, teaches allow the proxies 527 to reach every instance 523 and service 525 of the service mesh 511), the first request responsive to a cumulative usage of the resource by the first microservice consumer complying with a predefined consumption quota for the first microservice consumer (¶0021, teaches the policy optimizer pushes the updated network policies and configurations to the proxies (i.e., sidecar) of the microservices within the service mesh and keeps the proxies up to date with network policies (i.e. compliance) as service mesh environment changes over time (i.e., increased loads, changes in resources, microservices are unreachable, etc.), ¶0074, teaches by optimizing network policies to set retries, timeouts, circuit breaking, rate limits, etc., in such a manner that maximizes successful API calls and/or reduces the amount of computing resources wasted on fulfilling failed API calls, ¶0075, teaches rate limits (i.e. predefined consumption quota) applied to one or more microservices 525 of a microservice chain wherein a microservice 525 may be limited to fulfilling a number of API calls per unit of time, ¶0090-¶0091, teaches identify timeouts that occur as a result of high resource utilization (i.e., resources being consumed above an average or median level for transactions executed by the same API call and/or microservice chain), wherein API consumed… the service mesh control plane 505 may observe that certain API (i.e. specific resource) calls (i.e. request) being sent to a particular microservice are successful after a set number of retry attempts (i.e., M number of retries), ¶0111, teaches the service mesh may auto scale the number of resources provisioned to the microservices experiencing high resource utilization (i.e. cumulative usage) and high timeout rates (i.e. predefined consumption quota)),
wherein the predefined consumption quota is specified by a consumer policy associated with the first microservice consumer (¶0021, teaches the policy optimizer pushes the updated network policies and configurations to the proxies (i.e., sidecar) of the microservices within the service mesh and keeps the proxies up to date with network policies as service mesh environment changes over time (i.e., increased loads, changes in resources, microservices are unreachable, etc.), ¶0074, teaches optimizing network policies to set retries, timeouts, circuit breaking, rate limits, etc., in such a manner that maximizes successful API calls and/or reduces the amount of computing resources wasted on fulfilling failed API calls, ¶0075, teaches rate limits (i.e. predefined consumption quota) applied to one or more microservices 525 of a microservice chain wherein a microservice 525 may be limited to fulfilling a number of API calls per unit of time);
wherein the first microservice consumer is associated with a pod of containers (¶0050, teaches the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure, ¶0070, teaches the data plane 509 may comprise a plurality of instances 523, which can be in the form of one or more clusters, pods, or containers hosting a service 525 within the instance 523);
wherein the pod of containers comprises a main container to provide a microservice and a sidecar pattern container corresponding to the first proxy (¶0070, teaches the data plane 509 may comprise a plurality of instances 523, which can be in the form of one or more clusters, pods, or containers hosting a service 525 within the instance 523, ¶0078, teaches sidecar proxy configuration APIs may describe the configuration of the proxies 527 mediating inbound and outbound communication to the service 525 attached to the proxies 527); and wherein allowing the first request comprises:
communicating, by the sidecar pattern container, with a network policy enforcer external to the pod to cause the network policy enforcer to check the consumer policy (¶0004, teaches determining by the service mesh, whether the history of API calls indicates a timeout rate (i.e. cumulative usage) between the first microservice and the second microservice using the timeout value as configured in the timeout configuration is less than a threshold timeout rate, ¶0048, teaches Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service, ¶0070, teaches the data plane 509 may comprise a plurality of instances 523, which can be in the form of one or more clusters, pods, or containers hosting a service 525 within the instance 523, ¶0072-¶0073, teaches a proxy 527 a tasked with sending an outbound communication to the next service 525 b of a microservice chain knows where to send the communication, such as API calls…timeout controls and rate limiting between microservices 525. The observability tasks may include, for each API call, collecting detailed metrics about the service mesh 511, ¶0075, teaches rate limits applied to one or more microservices 525 of a microservice chain wherein a microservice 525 may be limited to fulfilling a number of API calls per unit of time, ¶0091, teaches the retry counts of the network policy may be configured to at least 10 retries automatically by the policy optimizer 550 and the network policy can be pushed to proxies P1 and/or P2 for enforcement);
determining, by the network policy enforcer and based on the check, that the cumulative usage complies with the predefined consumption quota (¶0004, teaches determining by the service mesh, whether the history of API calls indicates a timeout rate (i.e. cumulative usage) between the first microservice and the second microservice using the timeout value as configured in the timeout configuration is less than a threshold timeout rate, ¶0083, teaches policy optimizer 550 may check the health, readiness and availability of one or more microservices 525 in a microservice chain being invoked by an API call); and
sending, by the network policy enforcer and to the sidecar pattern container, an approval of the first request (¶0073, teaches a microservice chain invoked by an API call being observed and any rate limits being enforced on one or more microservices 525 fulfilling requests (i.e. approval of the first request) of the API call, ¶0093, teaches policy optimizer 550 may apply the user level rate limit at the ingress of the API gateway 605 and control the rate at which all API calls from the user 601 are sent from the API gateway 605 to the first microservice 525 or proxy 527, ¶0097, teaches one or more incoming API call(s) may be transmitted to the service mesh, invoking one or more microservice chains to fulfill the incoming request (i.e. approval of the first request) for microservices);
responsive to allowance of the first request (¶0078, teaches allow the proxies 527 to reach every instance 523 and service 525 of the service mesh 511), receiving, by a second proxy for the resource, a second request associated with the usage of the resource (¶0048, teaches resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service, ¶0112, teaches new API calls (i.e. second request) that may be routed from the first microservice to the second microservice (i.e. included with second proxy 527b), may also be set to perform the number of retries M as established by the network policy pushed to the proxy); and
responsive to the second request (¶0112, teaches new API calls as second request), controlling, by the second proxy (Fig. 5, step 527 b is a second proxy), whether the second request is permitted responsive to a predefined capacity quota for the resource (¶0074, teaches optimizing network policies (i.e. predefined capacity quota) to set retries, timeouts, circuit breaking, rate limits, etc., in such a manner that maximizes successful API calls).
With respect to claim 2, Kairali discloses the method of claim 1, wherein:
the cumulative usage comprises a time rate of data communicated by the first microservice consumer (Kairali, ¶0111, teaches the service mesh may auto scale the number of resources provisioned to the microservices experiencing high resource utilization (i.e. cumulative usage) and high timeout rates (i.e. predefined consumption quota));
the predefined consumption quota comprises a limit on the time rate of data (Kairali, ¶0075, teaches rate limits (i.e. predefined consumption quota) applied to one or more microservices 525 of a microservice chain wherein a microservice 525 may be limited to fulfilling a number of API calls per unit of time); and
allowing the first request further comprises approving, by the network policy enforcer (Fig. 5, policy optimizer 550), the first request based on a comparison of the time rate of data to the limit (Kairali, ¶0109, teaches the service mesh control plane calculates the rate at which the microservices of the microservice chain are expected to timeout at the current timeout configuration while handling the API call and compare the timeout rate for the transaction of the incoming API call with a configured threshold).
With respect to claim 3, Kairali discloses the method of claim 1, wherein allowing the first request further comprises:
determining, by the network policy enforcer (Fig. 5, policy optimizer 550), that the predefined consumption quota corresponds to a hard limit (Kairali, ¶0081, teaches If the retry count between M1 and M2 is set by the network policy to conduct 25 retries for a specific API call and the service mesh's threshold level of failure for API call 607 is a 75% failure rate, then if the policy optimizer 550 tracks a failure rate of 90% at M1 to M2 for API call 607, policy optimizer 550 will re-configure the retry count set by the network policy to greater than 25 retries in an effort to reduce the failure rate from 90% for the API call down to less than the 75% threshold failure rate. ¶0093, teaches policy optimizer 550 may apply the user level rate limit at the ingress of the API gateway 605 and control the rate at which all API calls from the user 601 are sent from the API gateway 605 to the first microservice 525 or proxy 527 thereof in the microservice chain being invoked);
responsive to determining that the predefined consumption quota corresponds to the hard limit (Kairali, ¶0081 and ¶0093), determining, by the network policy enforcer (Fig. 5, policy optimizer 550), whether the cumulative usage exceeds the predefined consumption quota (Kairali, ¶0080, teaches policy optimizer 550 observes that a certain API call is failing at a rate that is higher than the threshold level after conducting the number of retries prescribed by the retry count in the network policy for the specified API call and therefore increases the retry count prescribed by the network policy and pushes the updated network policy to proxies 527); and
approving, by the network policy enforcer (Fig. 5, policy optimizer 550), the first request responsive to determining that the cumulative usage does not exceed the predefined consumption quota (Kairali, ¶0048, teaches Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service, ¶0073, teaches timeout controls and rate limiting between microservices 525. The observability tasks may include, for each API call, collecting detailed metrics about the service mesh 511, ¶0075, teaches rate limits applied to one or more microservices 525 of a microservice chain wherein a microservice 525 may be limited to fulfilling a number of API calls per unit of time).
With respect to claim 4, Kairali discloses the method of claim 1, wherein approving the first request comprises:
determining, by the network policy enforcer (Fig. 5, policy optimizer 550), that the predefined consumption quota corresponds to a soft limit (Kairali, ¶0063, teaches metering and pricing 482 provide cost tracking as resources are utilized within the cloud computing environment 300, and billing or invoicing for consumption of these resources, ¶0088, teaches if the service mesh control plane 505 observes that the degree of timeouts occurring for a particular API call executed on a microservice chain or globally across the service mesh 511 a below a first threshold level (i.e. soft limit) at the current timeout configuration); and
responsive to determining that the predefined consumption quota corresponds to the soft limit (Kairali, ¶0063 and ¶0088), approving, by the network policy enforcer (Fig. 5, policy optimizer 550), the first request based on the cumulative usage and a variance applied to the predefined consumption quota (Kairali, ¶0004, teaches determining by the service mesh, whether the history of API calls indicates a timeout rate (i.e. cumulative usage) between the first microservice and the second microservice using the timeout value as configured in the timeout configuration is less than a threshold timeout rate, ¶0048, teaches Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service, ¶0073, teaches timeout controls and rate limiting between microservices 525. The observability tasks may include, for each API call, collecting detailed metrics about the service mesh 511, ¶0075, teaches rate limits applied to one or more microservices 525 of a microservice chain wherein a microservice 525 may be limited to fulfilling a number of API calls per unit of time).
With respect to claim 5, Kairali discloses the method of claim 4, wherein the variance comprises a time-based deviation that allows noncompliance with the predefined consumption quota for a predetermined time period (Kairali, ¶0075, teaches rate limits (i.e. the predefined consumption quota for a predetermined time period) applied to one or more microservices 525 of a microservice chain wherein a microservice 525 may be limited to fulfilling a number of API calls per unit of time, ¶0082, teaches the policy optimizer 550 may not only automate the retry count between microservices 525 of the service mesh, ¶0109, teaches decreasing the timeout value, timeouts may occur more quickly, enabling the proxies of the service mesh to spend less time waiting for API calls to timeout before retrying the API call or sending an error).
With respect to claim 6, Kairali discloses the method of claim 4, wherein the variance comprises a value-based deviation that adjusts the predefined consumption quota (Kairali, ¶0082, teaches the policy optimizer 550 may not only automate the retry count between microservices 525 of the service mesh 511 but may also automatically adjust the configuration of a polling interval between the retries, ¶0088, teaches policy optimizer 550 may automate timeout (i.e. value) adjustments between microservices 525 within a service mesh 511).
With respect to claim 7, Kairali discloses the method of claim 1, wherein the predefined consumption quota comprises a limit corresponding to a network telemetry metric associated with a network transport layer or a network application layer (Kairali, ¶0072, teaches the user sending the requested call is authenticated by the proxy 527 using Mutual Transport Layer Security (mTLS) or another mechanism of authentication, ¶0075, teaches Proxies 527 of the service mesh 511 may collect and store a plurality of different metrics to the service mesh history DB 513 over time, along with user profiles associated with the metrics being collected… rate limits applied to one or more microservices 525 of a microservice chain wherein a microservice 525 may be limited to fulfilling a number of API calls per unit of time).
With respect to claim 18, Kairali discloses a non-transitory storage medium that stores machine-readable instructions that, when executed by a hardware processor, cause a proxy for a resource to:
monitor transport layer traffic associated with a cumulative usage of the resource by a first microservice consumer (¶0072, teaches the user sending the requested call is authenticated by the proxy 527 using Mutual Transport Layer Security (mTLS) or another mechanism of authentication, ¶0004, teaches determining by the service mesh, whether the history of API calls indicates a timeout rate (i.e. cumulative usage) between the first microservice and the second microservice using the timeout value as configured in the timeout configuration is less than a threshold timeout rate, ¶0048, teaches Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service, ¶0073, teaches timeout controls and rate limiting between microservices 525. The observability tasks may include, for each API call, collecting detailed metrics about the service mesh 511, ¶0075, teaches rate limits applied to one or more microservices 525 of a microservice chain wherein a microservice 525 may be limited to fulfilling a number of API calls per unit of time, ¶0088, teaches policy optimizer 550 may automate timeout adjustments between microservices 525 within a service mesh 511. Service mesh control plane 505 may continuously track timeout configurations 519 alongside the timeouts occurring for each API call being executed by microservice chains of the service mesh 511), wherein the resource is shared by a plurality of microservice consumers including the first microservice consumer (¶0085, teaches API call (i.e. specific resource) is expected to be routed between two microservices 525, ¶0101, teaches a load balancer may help control the distribution (i.e. share) of the API calls (i.e. specific resource) between microservices (i.e. plurality of microservice consumers) and replicas thereof, in order to prevent a single microservice or replica from receiving too many retry attempts all at once, see ¶0042, ¶0084);
receive a request associated with the first microservice consumer using the resource (¶0069, teaches API calls may request the execution of one or more capabilities or functions of the microservices 525 a-525 n (generally referred to herein as microservices 525 or services 525));
communicate with a network policy enforcer to receive an approval of the request (¶0073, teaches a microservice chain invoked by an API call being observed and any rate limits being enforced on one or more microservices 525 fulfilling requests (i.e. approval of request) of the API call, ¶0093, teaches policy optimizer 550 may apply the user level rate limit at the ingress of the API gateway 605 and control the rate at which all API calls from the user 601 are sent from the API gateway 605 to the first microservice 525 or proxy 527, ¶0097, teaches one or more incoming API call(s) may be transmitted to the service mesh, invoking one or more microservice chains to fulfill the incoming request (i.e. approval of the first request) for microservices), wherein:
the resource is associated with a predefined capacity quota specified by a resource policy associated with the resource (¶0074, teaches optimizing network policies to set retries, timeouts, circuit breaking, rate limits, etc., in such a manner that maximizes successful API calls and/or reduces the amount of computing resources wasted on fulfilling failed API calls, ¶0093, teaches policy optimizer (i.e. network policy enforcer) 550 may apply the user level rate limit at the ingress of the API gateway 605 and control the rate at which all API calls from the user 601 are sent from the API gateway 605 to the first microservice 525 or proxy 527);
the resource is associated with a pod of containers (¶0070, teaches the data plane 509 may comprise a plurality of instances 523, which can be in the form of one or more clusters, pods, or containers hosting a service 525 within the instance 523);
the network policy enforcer is external to the pod (¶0070, teaches the data plane 509 may comprise a plurality of instances 523, which can be in the form of one or more clusters, pods, or containers hosting a service 525 within the instance 523, ¶0073, teaches timeout controls and rate limiting between microservices 525. The observability tasks may include, for each API call, collecting detailed metrics about the service mesh 511, ¶0075, teaches rate limits applied to one or more microservices 525 of a microservice chain wherein a microservice 525 may be limited to fulfilling a number of API calls per unit of time, ¶0091, teaches the retry counts of the network policy may be configured to at least 10 retries automatically by the policy optimizer 550 and the network policy can be pushed to proxies P1 and/or P2 for enforcement);
the pod comprises a main container to provide the resource and a sidecar pattern container corresponding to the proxy (¶0070, teaches the data plane 509 may comprise a plurality of instances 523, which can be in the form of one or more clusters, pods, or containers hosting a service 525 within the instance 523, ¶0078, teaches sidecar proxy configuration APIs may describe the configuration of the proxies 527 mediating inbound and outbound communication to the service 525 attached to the proxies 527); and
the communicating comprises:
causing the network policy enforcer to check the resource policy and determine, based on the check, that the cumulative usage complies with the predefined capacity quota (¶0048, teaches Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service, ¶0070, teaches the data plane 509 may comprise a plurality of instances 523, which can be in the form of one or more clusters, pods, or containers hosting a service 525 within the instance 523, ¶0073, teaches timeout controls and rate limiting between microservices 525. The observability tasks may include, for each API call, collecting detailed metrics about the service mesh 511, ¶0075, teaches rate limits applied to one or more microservices 525 of a microservice chain wherein a microservice 525 may be limited to fulfilling a number of API calls per unit of time, ¶0091, teaches the retry counts of the network policy may be configured to at least 10 retries automatically by the policy optimizer 550 and the network policy can be pushed to proxies P1 and/or P2 for enforcement, ¶0093, teaches policy optimizer (i.e. network policy enforcer) 550 may apply the user level rate limit at the ingress of the API gateway 605 and control the rate at which all API calls from the user 601 are sent from the API gateway 605 to the first microservice 525 or proxy 527, ¶0111, teaches the service mesh may auto scale the number of resources provisioned to the microservices experiencing high resource utilization (i.e. cumulative usage) and high timeout rates (i.e. predefined consumption quota)); and
receiving, by the sidecar pattern container and from the network policy enforcer, the approval (¶0073, teaches a microservice chain invoked by an API call being observed and any rate limits being enforced on one or more microservices 525 fulfilling requests (i.e. approval of request) of the API call, ¶0093, teaches policy optimizer 550 may apply the user level rate limit at the ingress of the API gateway 605 and control the rate at which all API calls from the user 601 are sent from the API gateway 605 to the first microservice 525 or proxy 527, ¶0097, teaches one or more incoming API call(s) may be transmitted to the service mesh, invoking one or more microservice chains to fulfill the incoming request (i.e. approval of the first request) for microservices); and
permit the request responsive to receiving the approval (¶0072, teaches authentication and authorization tasks of the proxies 527 may include the performance of cryptographic attestation of incoming requests in order to determine if the request being invoked by an API call is valid and allowable, ¶0073, teaches a microservice chain invoked by an API call being observed and any rate limits being enforced on one or more microservices 525 fulfilling requests (i.e. approval of request) of the API call).
With respect to claim 19, Kairali discloses the storage medium of claim 18, wherein the instructions, when executed by the hardware processor, further cause the proxy to:
monitor transport layer traffic associated with a cumulative usage of the resource by the plurality of microservice consumers (Kairali, ¶0072, teaches the user sending the requested call is authenticated by the proxy 527 using Mutual Transport Layer Security (mTLS) or another mechanism of authentication, ¶0004, teaches determining by the service mesh, whether the history of API calls indicates a timeout rate (i.e. cumulative usage) between the first microservice and the second microservice using the timeout value as configured in the timeout configuration is less than a threshold timeout rate, ¶0048, teaches Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service, ¶0073, teaches timeout controls and rate limiting between microservices 525. The observability tasks may include, for each API call, collecting detailed metrics about the service mesh 511, ¶0075, teaches rate limits applied to one or more microservices 525 of a microservice chain wherein a microservice 525 may be limited to fulfilling a number of API calls per unit of time, ¶0088, teaches policy optimizer 550 may automate timeout adjustments between microservices 525 within a service mesh 511. Service mesh control plane 505 may continuously track timeout configurations 519 alongside the timeouts occurring for each API call being executed by microservice chains of the service mesh 511), wherein the resource is shared by a plurality of microservice consumers including the first microservice consumer (Kairali, ¶0042, teaches a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service, ¶0084, teaches Scaling up microservices 525 may include replicating one or more microservices 525 in order to distribute and handle the increased load of retries and/or increasing the number of resources provisioned to the microservices 525 of the microservice chains expected to receive an increased load of API calls following the microservice outage);
receive a second request associated with the usage of the resource (Kairali, ¶0048, teaches Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service, ¶0077, teaches service mesh control plane 505 may utilize the service mesh metrics collected from the proxies 527 and/or microservices 525 of the service mesh 511 to track all API calls between microservices of the service mesh 511, ¶0114, teaches incoming API call(s) is received from a user(s) invoking a microservice chain to fulfill one or more of the incoming requests (i.e. second request) of the API call); and
communicate with the network policy enforcer to receive an approval of the second request (Kairali, ¶0093, teaches policy optimizer (i.e. network policy enforcer) 550 may apply the user level rate limit at the ingress of the API gateway 605 and control the rate at which all API calls from the user 601 are sent from the API gateway 605 to the first microservice 525 or proxy 527, ¶0097, teaches one or more incoming API call(s) (i.e. approval of the second request) may be transmitted to the service mesh, invoking one or more microservice chains to fulfill the incoming request for microservices).
With respect to claims 20 and 22, Kairali discloses the method of claim 1, wherein the consumption quota comprises at least one of an ingress data per unit of time received by the first microservice, or an egress data per unit of time provided by the first microservice (Kairali, ¶0075, teaches the service mesh history DB 513 may store and calculate API call rates generally and/or API call rates at specific times of day…rate limits applied to one or more microservices 525 of a microservice chain wherein a microservice 525 may be limited to fulfilling a number of API calls per unit of time. For example, a microservice could be rate limited to fulfilling twenty-five API calls per second, 100 API calls per min, etc., ¶0093, teaches policy optimizer 550 may apply the user level rate limit at the ingress of the API gateway 605 and control the rate at which all API calls from the user 601 are sent from the API gateway 605 to the first microservice 525 or proxy 527 thereof in the microservice chain being invoked, ¶0111, teaches timeouts recorded and tracked by the service mesh experienced high resource utilization (i.e. consumption quota) of resources (i.e., CPU, memory, storage, etc.). For example, by comparing resource utilization for timed out transactions and the number of transactions timed out while consuming a high level of resources. If resource utilization is considered high (i.e., above a threshold level of resources) for a threshold percentage of the timed-out transactions executed by the microservices (i.e. first microservice consumer) of the microservice chain being invoked).
With respect to claim 24, Kairali discloses the storage medium of claim 18, wherein the communicating with the network policy enforcer further causes the network policy enforcer to determine whether a hard limit or a soft limit applies to the predefined capacity quota (Kairali, ¶0081, teaches If the retry count between M1 and M2 is set by the network policy to conduct 25 retries for a specific API call and the service mesh's threshold level of failure for API call 607 is a 75% failure rate, then if the policy optimizer 550 tracks a failure rate of 90% at M1 to M2 for API call 607, policy optimizer 550 will re-configure the retry count set by the network policy to greater than 25 retries in an effort to reduce the failure rate from 90% for the API call down to less than the 75% threshold failure rate, ¶0093, teaches policy optimizer 550 may apply the user level rate limit at the ingress of the API gateway 605 and control the rate at which all API calls from the user 601 are sent from the API gateway 605 to the first microservice 525 or proxy 527 thereof in the microservice chain being invoked, ¶0063, teaches metering and pricing 482 provide cost tracking as resources are utilized within the cloud computing environment 300, and billing or invoicing for consumption of these resources, ¶0088, teaches if the service mesh control plane 505 observes that the degree of timeouts occurring for a particular API call executed on a microservice chain or globally across the service mesh 511 is below a first threshold level (i.e. soft limit) at the current timeout configuration), and determine whether the request is permitted based on the determination of whether the hard limit or the soft limit applies to the predefined capacity quota (Kairali, ¶0075, teaches rate limits (i.e. predefined capacity quota) applied to one or more microservices 525 of a microservice chain wherein a microservice 525 may be limited to fulfilling a number of API calls per unit of time, ¶0088, teaches if the service mesh control plane 505 observes that the degree of timeouts occurring for a particular API call executed (i.e. the request is permitted) on a microservice chain or globally across the service mesh 511 is below a first threshold level (i.e. soft limit) at the current timeout configuration).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kairali in view of Fortier et al. (US 2012/0166604), hereinafter “Fortier”.
With respect to claim 8, Kairali discloses the method of claim 1. However, Kairali remains silent on wherein: the consumer policy further specifies at least one of a condition associated with a network session layer or a condition associated with a network transport layer; and allowing the first request comprises approving the first request based on the first request being associated with at least one characteristic that satisfies the at least one of the condition associated with a network session layer or the condition associated with a network transport layer.
Fortier discloses wherein: the consumer policy further specifies at least one of a condition associated with a network session layer or a condition associated with a network transport layer (¶0021, teaches the policy in such cases may determine the network transport used and the endpoint(s) to receive the message, ¶0035, teaches the system accesses the deployed policy to determine whether any policy conditions are satisfied that affect handling of the received request, ¶0036, teaches the system determines a network to select among multiple available networks accessible to the device, wherein the selected network satisfies at least one policy condition defined by the policy); and
allowing the first request further comprises approving, by the network policy enforcer, the first request based on the first request being associated with at least one characteristic that satisfies the at least one of the condition associated with a network session layer or the condition associated with a network transport layer (¶0035, teaches the system accesses the deployed policy (i.e. network policy enforcer) to determine whether any policy conditions are satisfied that affect handling of the received request, ¶0036, teaches the system determines a network to select among multiple available networks accessible to the device, wherein the selected network satisfies at least one policy condition defined by the policy).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kairali's policies and configurations pushed to the proxies (i.e., sidecars) of the microservices with the consumer policy further specifying at least one of a condition associated with a network session layer or a condition associated with a network transport layer, as taught by Fortier, in order to ensure security, data integrity, and performance across the network (Fortier, see ¶0014).
Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kairali in view of Roese et al. (US 2006/0048142), hereinafter “Roese”.
With respect to claim 10, Kairali discloses the method of claim 1, further comprising:
changing the consumer policy (Kairali, ¶0079, teaches policy optimizer 550 may optimize network policies of the service mesh by tracking and modifying network controls to automate the number of retries between microservices 525 of a service mesh 511);
detecting, by the network policy enforcer, the changing of the consumer policy (Kairali, ¶0079, teaches policy optimizer 550 may optimize network policies of the service mesh by tracking and modifying network controls to automate the number of retries between microservices 525 of a service mesh 511);
updating the network policy enforcer responsive to the detection without changing any configuration of the pod (Kairali, ¶0081, teaches the updated configuration for the retry count may be saved as part of an updated network policy and the policy optimizer 550 of the service mesh control plane 505 may push the updated network policy to one or more of the proxies of the microservice chain).
Kairali, ¶0002, teaches avoiding downtime as an application grows and changes over time, and ¶0033 teaches a containerized computing environment comprising one or more pods or clusters of containers, and/or a distributed cloud computing environment. However, Kairali remains silent on updating the network policy enforcer responsive to the detection without changing any program code associated with the pod; and updating the network policy enforcer responsive to the detection without introducing a down time for the pod.
Roese discloses updating the network policy enforcer responsive to the detection without changing any program code associated with the pod (¶0021, teaches the policy manager function may include one or more updateable databases of trigger information and policy and/or PER sets deemed responsive to such triggers, ¶0033, teaches determine whether that information includes one or more conditions, events, occurrences, etc. (“triggers”) for the purpose of implementing one or more policy enforcement changes. The analysis function further determines whether the one or more triggers require the implementation of one or more responses through the PEF 250, ¶0047, teaches If there is no match between information that may constitute a trigger and the database of triggers requiring responsive action, the monitoring process continues without change to a policy); and
updating the network policy enforcer responsive to the detection without introducing a down time for the pod (¶0021, teaches the policy manager function may include one or more updateable databases of trigger information and policy and/or PER sets deemed responsive to such triggers, ¶0033, teaches determine whether that information includes one or more conditions, events, occurrences, etc. (“triggers”) for the purpose of implementing one or more policy enforcement changes. The analysis function further determines whether the one or more triggers require the implementation of one or more responses through the PEF 250, ¶0047, teaches If there is no match between information that may constitute a trigger and the database of triggers requiring responsive action, the monitoring process continues without change to a policy).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kairali's pod or container of a service, which avoids downtime as an application grows and changes over time, with updating the network policy enforcer responsive to the detection without changing any program code associated with the pod, and without introducing a down time for the pod, as taught by Roese, in order to ensure continuous service availability, enhanced security, and operational flexibility (Roese, ¶0010).
Claim(s) 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kairali in view of Moore et al. (US 2022/0078075), hereinafter “Moore”.
With respect to claim 23, Kairali discloses the method of claim 1. However, Kairali remains silent on wherein the consumption quota comprises a number of open network connections for the first microservice consumer.
Moore discloses wherein the consumption quota comprises a number of open network connections for the first microservice consumer (¶0005, teaches Quotas may be used to limit the quantity of concurrent connections a client computing device may have with regard to one or more specific services, ¶0035, teaches the connection requests may be TCP/IP requests and/or requests using other protocols. The connection requests may be to receive concurrent services, ¶0038, teaches a client's quota may be referred to as the client's configured proportion of a total quantity of connections (i.e. number of open network connections) to a service source (i.e. microservice consumer), ¶0044, teaches T(max) may represent a target maximum quantity of connections the service provider seeks to allow to the service source).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kairali's first microservice as a microservice consumer with the consumption quota comprising a number of open network connections for the first microservice consumer, as taught by Moore, in order to adjust resource usage and ensure fair distribution of network capacity (Moore, ¶0070).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GOLAM MAHMUD whose telephone number is (571) 270-0385. The examiner can normally be reached Mon-Fri, 8:00 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Umar Cheema, can be reached at 571-270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/G.M/Examiner, Art Unit 2458
/UMAR CHEEMA/Supervisory Patent Examiner, Art Unit 2458