Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 10 is objected to because of the following informalities:
Claim 10 recites “the distributed network of claim 1”; however, it is recommended that this be revised to “the distributed processing system of claim 1.” Appropriate correction is required.
CLAIM INTERPRETATION
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.
Such claim limitations are: “exporter” in claims 1 and 20, “application analysis module” in claims 1 and 20, and “application scaler” in claim 5.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), first paragraph:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-10 and 20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
The recitations of “exporter” in claims 1 and 20, “application analysis module” in claims 1 and 20, and “application scaler” in claim 5 invoke 35 U.S.C. § 112(f) (formerly sixth paragraph), but the specification does not disclose any corresponding structure, material, or acts for performing the claimed functions, nor does it link any specific structure to those functions.
Instead, the specification (e.g., FIG. 1 and the corresponding paragraphs) merely restates the claimed functions without identifying any hardware, software, algorithm, or other means for carrying them out. Because the written description fails to convey possession of any structure that implements the claimed “providing,” “receiving,” “determining,” and “directing” functions, the claims are broader than the inventor’s disclosure and thus are not commensurate with the specification.
Claims 2-10 depend from claim 1 and are rejected for the same reasons.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of pre-AIA 35 U.S.C. 112, second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-10 and 20 are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant regards as the invention.
The limitations “exporter” in claims 1 and 20, “application analysis module” in claims 1 and 20, and “application scaler” in claim 5 each invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, but fail to recite any corresponding structure or structural equivalents. Because no structure is set forth in the claims, and because the specification does not identify any specific structure tied to the claimed functions, a person of ordinary skill cannot determine the metes and bounds of the claimed apparatus. As a result, the claim limitations are indefinite for failing to “distinctly claim” the invention.
Claims 2-10 depend from claim 1 and are rejected for the same reasons.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6, 9, 11, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Banka et al. (US 2025/0112851, hereinafter Banka) in view of Alluboyina et al. (US 2025/0310182, hereinafter Alluboyina).
Regarding claim 1, Banka discloses
A distributed processing system, comprising (fig. 1-5):
a cloud-based network having an orchestrator (paragraph [0036]: Orchestrator 130 implements a scheduler 148 for the computing infrastructure 100) and a plurality of application pods, each application pod including an application (paragraph [0038]: Each of services 122 may provide or implement one or more services, and where services 122 represent Pods … Compute nodes 110 may host services for multiple different distributed applications) and an exporter configured to provide telemetry information for the associated application (paragraph [0045]: Analytics system 140 may consume network information such as network telemetry obtained by telemetry collector 142; paragraph [0047]: analytics system 140 may utilize an application tracing toolkit such as OpenTelemetry and application tracing tool such as Jaegar to acquire tracing data of calls among services 122. The application tracing toolkit may use tools such as APIs integrated into services 122 and services 122 instrumented or built using a software development kit (SDK) of the application tracing toolkit);
a back-end system having an application analysis module configured to receive the telemetry information from the application pods (paragraph [0038]: Each of services 122 may provide or implement one or more services, and where services 122 represent Pods … Compute nodes 110 may host services for multiple different distributed applications; paragraph [0047]: analytics system 140 may utilize an application tracing toolkit such as OpenTelemetry and application tracing tool such as Jaegar to acquire tracing data of calls among services 122. The application tracing toolkit may use tools such as APIs integrated into services 122 and services 122 instrumented or built using a software development kit (SDK) of the application tracing toolkit), to determine an interdependency between the applications (paragraph [0046]: analytics system 140 may store one or more maps of service dependencies and network configurations in flow database 143; paragraph [0079]: application dependency graph 300 includes services 302. Services 302 may be services of a distributed application that call each other as part of providing the functionality of the distributed application), to determine a scaling between the applications (paragraph [0055]: Orchestrator 130, responsive to receiving an indication from analytics system 140 may cause scheduler 148 to adjust the scheduling of one or more services of services 122 to redeploy the service to a different compute node from the compute node on which the service is currently executing).
Banka does not disclose to direct the orchestrator to launch the applications based on the scaling. Alluboyina discloses to direct the orchestrator to launch the applications based on the scaling (fig. 1-34, paragraph [0049]: The manifest may define dynamic requirements defining the scaling up of a number of application instances and corresponding computing resources in response to usage. The orchestrator 106 may include or cooperate with a utility such as KUBERNETES to perform dynamic scaling up and scaling down the number of application instances; paragraph [0053]: the orchestrator 106 may ingest a manifest defining the provisioning of computing resources to and the instantiation of components such as a cluster 111, pod 112 (e.g., KUBERNETES pod), container 114 (e.g., DOCKER container), storage volume 116, and an application instance 118; paragraph [0056]: The orchestrator 106 may instruct a workflow orchestrator 122 to perform a task with respect to a component. In response, the workflow orchestrator 122 retrieves the workflow from the workflow repository 120 corresponding to the task (e.g., the type of task (instantiate, monitor, upgrade, replace, copy, restore, etc.) and the type of component; paragraph [0060]: The log processor 130 passes the AAI to the orchestrator 106. 
The orchestrator 106 may use the AAI to perform various functions with respect to the components such as adding, deleting, or re-deploying to a different location; paragraph [0211]: Since each application instance 118 of the set of application instances 118 has a dependency on every other application instance of the set, proper function may require that latency be below a maximum latency specified in terms of a time, e.g., 10 ms, 20 ms, or some other time value; paragraph [0213]: The triangle application specification 2700 may further include a replication requirement 2716 that specifies how many application instance 118 are included in the set of application instances, e.g., a value of 3 or more. In the event that an application instance 118 fails, the orchestrator 106 will therefore create a new application instance 118 to meet the replication requirement 2716).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Banka’s orchestrator so that, upon Banka’s back-end module identifying inter-service dependencies and performance telemetry, the orchestrator would invoke Alluboyina’s autoscaling logic (i.e., the AAI) and then launch or terminate application pods according to the AAI and the manifest-specified dynamic requirements. Both Banka and Alluboyina operate in the same Kubernetes-style orchestration domain and rely on telemetry to drive orchestration decisions. The motivation would have been to perform dynamic scaling up and scaling down of the number of application instances (Alluboyina paragraph [0049]).
Regarding claim 11, Banka discloses
A method, comprising:
providing, in a distributed processing system, a cloud-based network having an orchestrator (paragraph [0036]: Orchestrator 130 implements a scheduler 148 for the computing infrastructure 100) and a plurality of application pods, each application pod including an application (paragraph [0038]: Each of services 122 may provide or implement one or more services, and where services 122 represent Pods … Compute nodes 110 may host services for multiple different distributed applications) and an exporter configured to provide telemetry information for the associated application (paragraph [0045]: Analytics system 140 may consume network information such as network telemetry obtained by telemetry collector 142; paragraph [0047]: analytics system 140 may utilize an application tracing toolkit such as OpenTelemetry and application tracing tool such as Jaegar to acquire tracing data of calls among services 122. The application tracing toolkit may use tools such as APIs integrated into services 122 and services 122 instrumented or built using a software development kit (SDK) of the application tracing toolkit);
providing, in the distributed processing system, a back-end system having an application analysis module (paragraph [0045]: Analytics system 140 may consume network information such as network telemetry obtained by telemetry collector 142; paragraph [0047]: analytics system 140 may utilize an application tracing toolkit such as OpenTelemetry and application tracing tool such as Jaegar to acquire tracing data of calls among services 122. The application tracing toolkit may use tools such as APIs integrated into services 122 and services 122 instrumented or built using a software development kit (SDK) of the application tracing toolkit);
receiving, by the application analysis module, the telemetry information from the application pods (paragraph [0038]: Each of services 122 may provide or implement one or more services, and where services 122 represent Pods … Compute nodes 110 may host services for multiple different distributed applications; paragraph [0047]: analytics system 140 may utilize an application tracing toolkit such as OpenTelemetry and application tracing tool such as Jaegar to acquire tracing data of calls among services 122. The application tracing toolkit may use tools such as APIs integrated into services 122 and services 122 instrumented or built using a software development kit (SDK) of the application tracing toolkit);
determining, by the application analysis module, an interdependency between the applications (paragraph [0046]: analytics system 140 may store one or more maps of service dependencies and network configurations in flow database 143; paragraph [0079]: application dependency graph 300 includes services 302. Services 302 may be services of a distributed application that call each other as part of providing the functionality of the distributed application);
determining, by the application analysis module, a scaling between the applications (paragraph [0055]: Orchestrator 130, responsive to receiving an indication from analytics system 140 may cause scheduler 148 to adjust the scheduling of one or more services of services 122 to redeploy the service to a different compute node from the compute node on which the service is currently executing).
Banka does not disclose directing the orchestrator to launch the applications based on the scaling. Alluboyina discloses directing the orchestrator to launch the applications based on the scaling (paragraph [0049]: The manifest may define dynamic requirements defining the scaling up of a number of application instances and corresponding computing resources in response to usage. The orchestrator 106 may include or cooperate with a utility such as KUBERNETES to perform dynamic scaling up and scaling down the number of application instances; paragraph [0053]: the orchestrator 106 may ingest a manifest defining the provisioning of computing resources to and the instantiation of components such as a cluster 111, pod 112 (e.g., KUBERNETES pod), container 114 (e.g., DOCKER container), storage volume 116, and an application instance 118; paragraph [0056]: The orchestrator 106 may instruct a workflow orchestrator 122 to perform a task with respect to a component. In response, the workflow orchestrator 122 retrieves the workflow from the workflow repository 120 corresponding to the task (e.g., the type of task (instantiate, monitor, upgrade, replace, copy, restore, etc.) and the type of component; paragraph [0060]: The log processor 130 passes the AAI to the orchestrator 106. 
The orchestrator 106 may use the AAI to perform various functions with respect to the components such as adding, deleting, or re-deploying to a different location; paragraph [0211]: Since each application instance 118 of the set of application instances 118 has a dependency on every other application instance of the set, proper function may require that latency be below a maximum latency specified in terms of a time, e.g., 10 ms, 20 ms, or some other time value; paragraph [0213]: The triangle application specification 2700 may further include a replication requirement 2716 that specifies how many application instance 118 are included in the set of application instances, e.g., a value of 3 or more. In the event that an application instance 118 fails, the orchestrator 106 will therefore create a new application instance 118 to meet the replication requirement 2716).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Banka’s orchestrator so that, upon Banka’s back-end module identifying inter-service dependencies and performance telemetry, the orchestrator would invoke Alluboyina’s autoscaling logic (i.e., the AAI) and then launch or terminate application pods according to the AAI and the manifest-specified dynamic requirements. Both Banka and Alluboyina operate in the same Kubernetes-style orchestration domain and rely on telemetry to drive orchestration decisions. The motivation would have been to perform dynamic scaling up and scaling down of the number of application instances (Alluboyina paragraph [0049]).
Regarding claims 6 and 16, Banka discloses
wherein the telemetry information includes application calls to other applications in other application pods (paragraph [0047]: Analytics system 140 may utilize application trace tools to determine call paths among services 122. For example, analytics system 140 may utilize an application tracing toolkit such as OpenTelemetry and application tracing tool such as Jaegar to acquire tracing data of calls among services 122; paragraph [0048]: Analytics system 140 may use the application trace tool to determine a critical path of one or more call paths of services 122 that underpin a distributed application).
Regarding claims 9 and 19, Banka discloses
wherein the back-end system further includes a database configured to receive the telemetry information and to provide the telemetry information to the application analysis module (paragraph [0045]: Analytics system 140 may consume network information such as network telemetry obtained by telemetry collector 142; paragraph [0046]: Flow database 143 may include sFlow or other flow records provided by switches 16, 18 and that indicate flows processed by each of the switches, for instance over a time period; paragraph [0047]: analytics system 140 may utilize an application tracing toolkit such as OpenTelemetry and application tracing tool such as Jaegar to acquire tracing data of calls among services 122).
Claims 2-5, 7, 8, 10, 12-15, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Banka et al. (US 2025/0112851, hereinafter Banka) in view of Alluboyina et al. (US 2025/0310182, hereinafter Alluboyina) as applied to claims 1, 2, 11, and 12 above, and further in view of Enguehard et al. (US 2020/0322229, hereinafter Enguehard).
Regarding claims 2 and 12, Banka discloses
wherein in determining the interdependency between the applications (paragraph [0046]: analytics system 140 may store one or more maps of service dependencies and network configurations in flow database 143).
Banka does not disclose the application analysis module is further configured to create an application wants matrix that correlates a first application with a number of instantiations of a second application.
Alluboyina discloses the application analysis module is further configured to … correlates a first application with a number of instantiations of a second application (paragraph [0049]: The manifest may define dynamic requirements defining the scaling up of a number of application instances and corresponding computing resources in response to usage. The orchestrator 106 may include or cooperate with a utility such as KUBERNETES to perform dynamic scaling up and scaling down the number of application instances; paragraph [0053]: the orchestrator 106 may ingest a manifest defining the provisioning of computing resources to and the instantiation of components such as a cluster 111, pod 112 (e.g., KUBERNETES pod), container 114 (e.g., DOCKER container), storage volume 116, and an application instance 118; paragraph [0056]: The orchestrator 106 may instruct a workflow orchestrator 122 to perform a task with respect to a component. In response, the workflow orchestrator 122 retrieves the workflow from the workflow repository 120 corresponding to the task (e.g., the type of task (instantiate, monitor, upgrade, replace, copy, restore, etc.) and the type of component; paragraph [0211]: Since each application instance 118 of the set of application instances 118 has a dependency on every other application instance of the set, proper function may require that latency be below a maximum latency specified in terms of a time, e.g., 10 ms, 20 ms, or some other time value; paragraph [0213]: The triangle application specification 2700 may further include a replication requirement 2716 that specifies how many application instance 118 are included in the set of application instances, e.g., a value of 3 or more. In the event that an application instance 118 fails, the orchestrator 106 will therefore create a new application instance 118 to meet the replication requirement 2716).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Banka’s orchestrator so that, upon Banka’s back-end system identifying inter-service dependencies and performance telemetry, the orchestrator would invoke Alluboyina’s scaling logic to scale the number of application instances (i.e., create new application instances or adjust instances to satisfy replication requirements). The motivation would have been to perform dynamic scaling up and scaling down of the number of application instances (Alluboyina paragraph [0049]).
Enguehard discloses the application analysis module is further configured to create an application wants matrix that correlates a first application with … a second application (paragraph [0043]: For example, a first service can be deployed on a first node, and a second service can be deployed on the second node within the network. The netflow telemetry of data flow between the first service and the second service can be evaluated so that the inter-service communication can be mapped to a weighted graph that represents the inter-service dependency. In an optimization context, if an evaluation determines based on a linear solver or other approach is disclosed above, that there is a strong level of affinity between the first service and the second service compared to other ongoing service-to-service communications, then a placement module containing or operating an optimization module can migrate, for example, the first service to the second node. The affinity can be detected based on the weighted traffic matrix extracted by the netflow module).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Banka by converting Banka’s dependency/telemetry output into Enguehard’s weighted traffic (affinity) matrix derived from netflow telemetry, mapping the matrix entries to replication/instance requirements, and invoking Alluboyina’s orchestrator procedures to create or adjust application instances accordingly. The motivation would have been to increase the quality of service and save resources by optimizing service placement (Enguehard paragraph [0040]).
Regarding claims 3 and 13, Banka in view of Alluboyina does not disclose wherein the correlation between the applications is a static correlation. Enguehard discloses wherein the correlation between the applications is a static correlation (paragraph [0039]: The resulting component can be termed a traffic matrix in which each cell (i, j) is the weight of the graph edge from i to j. In addition to simply storing the average or moving average of pair wise traffic over time, higher orders statistics can be kept and used to deduce whether observed traffic between two services corresponds to singular diversity spikes or if important background traffic is exchanged between the services thus indicating a strong affinity; paragraph [0043]: For example, a first service can be deployed on a first node, and a second service can be deployed on the second node within the network. The netflow telemetry of data flow between the first service and the second service can be evaluated so that the inter-service communication can be mapped to a weighted graph that represents the inter-service dependency. In an optimization context, if an evaluation determines based on a linear solver or other approach is disclosed above, that there is a strong level of affinity between the first service and the second service compared to other ongoing service-to-service communications, then a placement module containing or operating an optimization module can migrate, for example, the first service to the second node. The affinity can be detected based on the weighted traffic matrix extracted by the netflow module; paragraph [0044]: When determining what a strong affinity means, the system can establish static thresholds which can indicate that when migration costs are taken into account, that improved performance will exist if one or more services are moved based on an evaluation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Banka by converting Banka’s dependency/telemetry output into Enguehard’s weighted traffic (affinity) matrix derived from netflow telemetry, to use static (time-averaged or thresholded) entries in that matrix to represent a “static correlation” between applications, and to invoke Alluboyina’s orchestrator procedures to create or adjust application instances accordingly. The motivation would have been to increase the quality of service and save resources by optimizing service placement (Enguehard paragraph [0040]).
Regarding claims 4 and 14, Banka in view of Alluboyina does not disclose wherein the correlation between the applications is a time-based correlation. Enguehard discloses wherein the correlation between the applications is a time-based correlation (paragraph [0039]: The resulting component can be termed a traffic matrix in which each cell (i, j) is the weight of the graph edge from i to j. In addition to simply storing the average or moving average of pair wise traffic over time, higher orders statistics can be kept and used to deduce whether observed traffic between two services corresponds to singular diversity spikes or if important background traffic is exchanged between the services thus indicating a strong affinity; paragraph [0043]: For example, a first service can be deployed on a first node, and a second service can be deployed on the second node within the network. The netflow telemetry of data flow between the first service and the second service can be evaluated so that the inter-service communication can be mapped to a weighted graph that represents the inter-service dependency. In an optimization context, if an evaluation determines based on a linear solver or other approach is disclosed above, that there is a strong level of affinity between the first service and the second service compared to other ongoing service-to-service communications, then a placement module containing or operating an optimization module can migrate, for example, the first service to the second node. The affinity can be detected based on the weighted traffic matrix extracted by the netflow module; paragraph [0044]: When determining what a strong affinity means, the system can establish static thresholds which can indicate that when migration costs are taken into account, that improved performance will exist if one or more services are moved based on an evaluation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Banka by converting Banka’s dependency/telemetry output into Enguehard’s weighted traffic (affinity) matrix derived from netflow telemetry, to use time-based (average or moving-average) entries in that matrix to represent a “time-based correlation” between applications, and to invoke Alluboyina’s orchestrator procedures to create or adjust application instances accordingly. The motivation would have been to increase the quality of service and save resources by optimizing service placement (Enguehard paragraph [0040]).
Regarding claims 5 and 15, Banka discloses
wherein the back-end system further includes an application scaler configured to direct the orchestrator to launch the number of instantiations (paragraph [0055]: Orchestrator 130, responsive to receiving an indication from analytics system 140 may cause scheduler 148 to adjust the scheduling of one or more services of services 122 to redeploy the service to a different compute node from the compute node on which the service is currently executing).
Banka does not disclose to direct the orchestrator to launch the number of instantiations of the second application when the first application is launched. Alluboyina discloses to direct the orchestrator to launch the number of instantiations of the second application when the first application is launched (paragraph [0049]: The manifest may define dynamic requirements defining the scaling up of a number of application instances and corresponding computing resources in response to usage. The orchestrator 106 may include or cooperate with a utility such as KUBERNETES to perform dynamic scaling up and scaling down the number of application instances; paragraph [0053]: the orchestrator 106 may ingest a manifest defining the provisioning of computing resources to and the instantiation of components such as a cluster 111, pod 112 (e.g., KUBERNETES pod), container 114 (e.g., DOCKER container), storage volume 116, and an application instance 118; paragraph [0056]: The orchestrator 106 may instruct a workflow orchestrator 122 to perform a task with respect to a component. In response, the workflow orchestrator 122 retrieves the workflow from the workflow repository 120 corresponding to the task (e.g., the type of task (instantiate, monitor, upgrade, replace, copy, restore, etc.) and the type of component; paragraph [0211]: Since each application instance 118 of the set of application instances 118 has a dependency on every other application instance of the set, proper function may require that latency be below a maximum latency specified in terms of a time, e.g., 10 ms, 20 ms, or some other time value; paragraph [0213]: The triangle application specification 2700 may further include a replication requirement 2716 that specifies how many application instance 118 are included in the set of application instances, e.g., a value of 3 or more. 
In the event that an application instance 118 fails, the orchestrator 106 will therefore create a new application instance 118 to meet the replication requirement 2716).
It would have been obvious to one of ordinary skill in the art at the time the claimed invention was effectively filed to modify Banka’s orchestrator so that, upon Banka’s back-end identifying inter-service dependencies and performance telemetry, the orchestrator would invoke Alluboyina’s scaling logic to scale the number of application instances (i.e., create new application instances or adjust instances to satisfy replication requirements). The motivation would have been to perform dynamic scaling up and scaling down the number of application instances (Alluboyina paragraph [0049]).
Regarding claims 7 and 17, Banka in view of Alluboyina does not disclose wherein the telemetry information further includes one of an application error rate, an application network latency, a processor load, and a memory load. Enguehard discloses wherein the telemetry information further includes one of an application error rate, an application network latency, a processor load, and a memory load (paragraph [0014]: In one aspect, the placement module can determine on which nodes within the multi-node network to place one or more services further based on usage metrics. The usage metrics can include one or more of processor usage, memory usage and bandwidth data; paragraph [0040]: While the affinity derived from the traffic matrix described above is one of the inputs to this module, the module can also take as input usage metrics such as central processor unit usage, memory, bandwidth use, or other metrics, collected on the machines). It would have been obvious to one of ordinary skill in the art at the time the claimed invention was effectively filed to modify Banka by converting Banka’s dependency/telemetry output into Enguehard’s weighted traffic (affinity) matrix based on usage metrics such as central processor unit usage, memory, bandwidth, etc. to determine on which nodes within the multi-node network to place one or more services further based on the usage metrics, and invoking Alluboyina’s orchestrator procedures to create or adjust application instances accordingly. The motivation would have been to increase the quality of service and save resources by optimizing service placement (Enguehard paragraph [0040]).
Regarding claims 8 and 18, Banka in view of Alluboyina does not disclose wherein the telemetry information is provided on a periodic basis. Enguehard discloses wherein the telemetry information is provided on a periodic basis (paragraph [0039]: The resulting component can be termed a traffic matrix in which each cell (i, j) is the weight of the graph edge from i to j. In addition to simply storing the average or moving average of pair wise traffic over time, higher orders statistics can be kept and used to deduce whether observed traffic between two services corresponds to singular diversity spikes or if important background traffic is exchanged between the services thus indicating a strong affinity; paragraph [0043]: For example, a first service can be deployed on a first node, and a second service can be deployed on the second node within the network. The netflow telemetry of data flow between the first service and the second service can be evaluated so that the inter-service communication can be mapped to a weighted graph that represents the inter-service dependency. In an optimization context, if an evaluation determines based on a linear solver or other approach is disclosed above, that there is a strong level of affinity between the first service and the second service compared to other ongoing service-to-service communications, then a placement module containing or operating an optimization module can migrate, for example, the first service to the second node. The affinity can be detected based on the weighted traffic matrix extracted by the netflow module; paragraph [0044]: When determining what a strong affinity means, the system can establish static thresholds which can indicate that when migration costs are taken into account, that improved performance will exist if one or more services are moved based on an evaluation). 
It would have been obvious to one of ordinary skill in the art at the time the claimed invention was effectively filed to modify Banka by converting Banka’s dependency/telemetry output into Enguehard’s weighted traffic (affinity) matrix derived from netflow telemetry, updating the matrix entries on a periodic basis (e.g., as averages or moving averages of pairwise traffic over time), and invoking Alluboyina’s orchestrator procedures to create or adjust application instances accordingly. The motivation would have been to increase the quality of service and save resources by optimizing service placement (Enguehard paragraph [0040]).
Regarding claim 10, Banka does not disclose wherein each application pod provides a containerized instantiation of the associated application. Alluboyina discloses wherein each application pod provides a containerized instantiation of the associated application (paragraph [0049]: The manifest may define dynamic requirements defining the scaling up of a number of application instances and corresponding computing resources in response to usage. The orchestrator 106 may include or cooperate with a utility such as KUBERNETES to perform dynamic scaling up and scaling down the number of application instances; paragraph [0053]: the orchestrator 106 may ingest a manifest defining the provisioning of computing resources to and the instantiation of components such as a cluster 111, pod 112 (e.g., KUBERNETES pod), container 114 (e.g., DOCKER container), storage volume 116, and an application instance 118; paragraph [0056]: The orchestrator 106 may instruct a workflow orchestrator 122 to perform a task with respect to a component. In response, the workflow orchestrator 122 retrieves the workflow from the workflow repository 120 corresponding to the task (e.g., the type of task (instantiate, monitor, upgrade, replace, copy, restore, etc.) and the type of component; paragraph [0060]: The log processor 130 passes the AAI to the orchestrator 106. 
The orchestrator 106 may use the AAI to perform various functions with respect to the components such as adding, deleting, or re-deploying to a different location; paragraph [0211]: Since each application instance 118 of the set of application instances 118 has a dependency on every other application instance of the set, proper function may require that latency be below a maximum latency specified in terms of a time, e.g., 10 ms, 20 ms, or some other time value; paragraph [0213]: The triangle application specification 2700 may further include a replication requirement 2716 that specifies how many application instance 118 are included in the set of application instances, e.g., a value of 3 or more. In the event that an application instance 118 fails, the orchestrator 106 will therefore create a new application instance 118 to meet the replication requirement 2716).
It would have been obvious to one of ordinary skill in the art at the time the claimed invention was effectively filed to modify Banka’s orchestrator so that, upon Banka’s back-end module identifying inter-service dependencies and performance telemetry, the orchestrator would invoke Alluboyina’s known autoscaling logic (i.e., AAI) and then launch or terminate containerized application instances according to the AAI and manifest-specified dynamic requirements. Both Banka and Alluboyina operate in the same “Kubernetes-style” orchestration domain and rely on telemetry to drive orchestration decisions. The motivation would have been to perform dynamic scaling up and scaling down the number of application instances (Alluboyina paragraph [0049]).
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Banka et al. (US 2025/0112851, hereinafter Banka) in view of Alluboyina et al. (US 2025/0310182, hereinafter Alluboyina) and Enguehard et al. (US 2020/0322229, hereinafter Enguehard).
Regarding claim 20, Banka discloses
A distributed processing system, comprising (fig. 1-5):
a cloud-based network having an orchestrator (paragraph [0036]: Orchestrator 130 implements a scheduler 148 for the computing infrastructure 100) and a plurality of application pods, each application pod including an application (paragraph [0038]: Each of services 122 may provide or implement one or more services, and where services 122 represent Pods … Compute nodes 110 may host services for multiple different distributed applications) and an exporter configured to provide telemetry information for the associated application (paragraph [0045]: Analytics system 140 may consume network information such as network telemetry obtained by telemetry collector 142; paragraph [0047]: analytics system 140 may utilize an application tracing toolkit such as OpenTelemetry and application tracing tool such as Jaegar to acquire tracing data of calls among services 122. The application tracing toolkit may use tools such as APIs integrated into services 122 and services 122 instrumented or built using a software development kit (SDK) of the application tracing toolkit);
a back-end system having an application analysis module configured to receive the telemetry information from the application pods (paragraph [0038]: Each of services 122 may provide or implement one or more services, and where services 122 represent Pods … Compute nodes 110 may host services for multiple different distributed applications; paragraph [0047]: analytics system 140 may utilize an application tracing toolkit such as OpenTelemetry and application tracing tool such as Jaegar to acquire tracing data of calls among services 122. The application tracing toolkit may use tools such as APIs integrated into services 122 and services 122 instrumented or built using a software development kit (SDK) of the application tracing toolkit), to determine an interdependency between the applications (paragraph [0046]: analytics system 140 may store one or more maps of service dependencies and network configurations in flow database 143; paragraph [0079]: application dependency graph 300 includes services 302. Services 302 may be services of a distributed application that call each other as part of providing the functionality of the distributed application), to determine a scaling between the applications, … (paragraph [0055]: Orchestrator 130, responsive to receiving an indication from analytics system 140 may cause scheduler 148 to adjust the scheduling of one or more services of services 122 to redeploy the service to a different compute node from the compute node on which the service is currently executing), wherein in determining the interdependency between the applications (paragraph [0046]: analytics system 140 may store one or more maps of service dependencies and network configurations in flow database 143).
Banka does not disclose to determine a scaling between the applications, and to direct the orchestrator to launch the applications based on the scaling … the application analysis module is further configured to create an application wants matrix that correlates a first application with a number of instantiations of a second application.
Alluboyina discloses to determine a scaling between the applications, and to direct the orchestrator to launch the applications based on the scaling … the application analysis module is further configured to … correlates a first application with a number of instantiations of a second application (paragraph [0049]: The manifest may define dynamic requirements defining the scaling up of a number of application instances and corresponding computing resources in response to usage. The orchestrator 106 may include or cooperate with a utility such as KUBERNETES to perform dynamic scaling up and scaling down the number of application instances; paragraph [0053]: the orchestrator 106 may ingest a manifest defining the provisioning of computing resources to and the instantiation of components such as a cluster 111, pod 112 (e.g., KUBERNETES pod), container 114 (e.g., DOCKER container), storage volume 116, and an application instance 118; paragraph [0056]: The orchestrator 106 may instruct a workflow orchestrator 122 to perform a task with respect to a component. In response, the workflow orchestrator 122 retrieves the workflow from the workflow repository 120 corresponding to the task (e.g., the type of task (instantiate, monitor, upgrade, replace, copy, restore, etc.) and the type of component; paragraph [0060]: The log processor 130 passes the AAI to the orchestrator 106. 
The orchestrator 106 may use the AAI to perform various functions with respect to the components such as adding, deleting, or re-deploying to a different location; paragraph [0211]: Since each application instance 118 of the set of application instances 118 has a dependency on every other application instance of the set, proper function may require that latency be below a maximum latency specified in terms of a time, e.g., 10 ms, 20 ms, or some other time value; paragraph [0213]: The triangle application specification 2700 may further include a replication requirement 2716 that specifies how many application instance 118 are included in the set of application instances, e.g., a value of 3 or more. In the event that an application instance 118 fails, the orchestrator 106 will therefore create a new application instance 118 to meet the replication requirement 2716).
It would have been obvious to one of ordinary skill in the art at the time the claimed invention was effectively filed to modify Banka’s orchestrator so that, upon Banka’s back-end module identifying inter-service dependencies and performance telemetry, the orchestrator would invoke Alluboyina’s known autoscaling logic (i.e., AAI) and then launch or terminate application pods according to the AAI and manifest-specified dynamic requirements. Both Banka and Alluboyina operate in the same “Kubernetes-style” orchestration domain and rely on telemetry to drive orchestration decisions. The motivation would have been to perform dynamic scaling up and scaling down the number of application instances (Alluboyina paragraph [0049]).
Enguehard discloses the application analysis module is further configured to create an application wants matrix that correlates a first application with … a second application (paragraph [0043]: For example, a first service can be deployed on a first node, and a second service can be deployed on the second node within the network. The netflow telemetry of data flow between the first service and the second service can be evaluated so that the inter-service communication can be mapped to a weighted graph that represents the inter-service dependency. In an optimization context, if an evaluation determines based on a linear solver or other approach is disclosed above, that there is a strong level of affinity between the first service and the second service compared to other ongoing service-to-service communications, then a placement module containing or operating an optimization module can migrate, for example, the first service to the second node. The affinity can be detected based on the weighted traffic matrix extracted by the netflow module).
It would have been obvious to one of ordinary skill in the art at the time the claimed invention was effectively filed to modify Banka by converting Banka’s dependency/telemetry output into Enguehard’s weighted traffic (affinity) matrix derived from netflow telemetry, and mapping the matrix entries to replication/instance requirements and invoking Alluboyina’s orchestrator procedures to create or adjust application instances accordingly. The motivation would have been to increase the quality of service and save resources by optimizing service placement (Enguehard paragraph [0040]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Jalal et al. (US 2023/0359455) discloses “Using basic scripts or complex “orchestrators” a developer can quickly recover crashed subservice or service components, add new instances to meet increased demand” (paragraph [0041]).
Patel et al. (US 2022/0021738) discloses “in response to determining an overload at one or more of servers 408, a corresponding ingress engine 410 may analyze network traffic and identify one or more new collector applications to instantiate based upon that analysis” (paragraph [0136]).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SISLEY N. KIM whose telephone number is (571)270-7832. The examiner can normally be reached M-F 11:30AM-7:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Y. Blair can be reached on (571)270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SISLEY N KIM/Primary Examiner, Art Unit 2196 02/13/2026