DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. Claims 1–24 are presented for examination in a non-provisional application filed on 03/09/2023.
Drawings
3. The drawings were received on 03/09/2023 with the initial filing. These drawings are acceptable.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
4. Claims 1–24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
5. As to independent claim 1, the claim recites:
“determining, by the application client, a plurality of simulated performance scores for each transformation unit in the one or more transformation units, the plurality of simulated performance scores corresponding to a plurality of types of worker nodes in the heterogeneous cluster, wherein each type of worker node corresponds to a distinct hardware configuration and wherein each simulated performance score for each transformation unit is determined based at least in part on resources required for the transformation performed by the transformation unit and the distinct hardware configuration of a corresponding type of worker node;
determining, by the application client, a plurality of aggregate simulated performance scores for the at least one job, the plurality of aggregate simulated performance scores corresponding to the plurality of types of worker nodes, wherein each aggregate simulated performance score corresponds to a type of worker node and is determined based at least in part on simulated performance scores corresponding to that type of worker node for transformation units within the at least one job;
… to schedule the at least one job on one or more nodes of the heterogeneous cluster based at least in part on the plurality of aggregate simulated performance scores.”
As to independent claims 9 and 17, they recite similar language of commensurate scope as claim 1.
These limitations, as currently drafted and within their respective claims, represent processes that, under a broadest reasonable interpretation, cover performance in the mind (including observation, evaluation, judgment, opinion, etc.) but for the recitation of generic computer components.
That is, other than reciting the use of
“one or more processors; and one or more memories” (claim 9) and
“At least one non-transitory computer-readable medium” (claim 17) to perform these steps, nothing in the claim elements precludes the steps from practically being performed in the mind or using pencil and paper (see MPEP 2106.04(a)(2) – Examples of Concepts The Courts Have Identified As Abstract Ideas, discussing abstract ideas or concepts relating to organizing or analyzing information in a way that can be performed mentally or is analogous to human mental work).
For example, but for the use of generic computers, the performance of these steps in the context of the claims reasonably encompasses the user mentally and/or manually performing the steps of:
1) mentally determining a plurality of performance scores; and
2) mentally “scheduling” jobs based on the scores (akin to assigning or mapping jobs to resources at a particular time or time period).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
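For illustration only, the following minimal sketch (hypothetical names and values, not drawn from Applicant's disclosure or from any cited reference) shows that the recited determining and scheduling steps encompass simple arithmetic and comparisons of the kind that can be carried out mentally or with pencil and paper:

# Hypothetical illustration only; names and values are not from Applicant's disclosure.
# Each "transformation unit" has a resource demand; each worker-node type has a
# hardware configuration. A per-unit simulated score is a simple function of both.

RESOURCE_DEMAND = {"unit_a": 4.0, "unit_b": 2.0}   # relative compute required per unit
HARDWARE = {"cpu_node": 1.0, "gpu_node": 3.0}      # relative throughput per node type

def simulate_score(demand: float, throughput: float) -> float:
    """Score one transformation unit on one node type (higher is better)."""
    return throughput / demand

# 1) "determining ... a plurality of simulated performance scores" -- simple arithmetic
scores = {
    node: {unit: simulate_score(d, tp) for unit, d in RESOURCE_DEMAND.items()}
    for node, tp in HARDWARE.items()
}

# 2) "schedule" the job -- akin to picking the node type with the best aggregate score
aggregate = {node: sum(per_unit.values()) for node, per_unit in scores.items()}
best_node_type = max(aggregate, key=aggregate.get)
print(scores, aggregate, best_node_type)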
This judicial exception is not integrated into a practical application (under Prong Two of Step 2A).
(I) Generic Computing Device
For instance, claim 9 recites the additional element of
“one or more processors; and one or more memories” and
claim 17 recites the additional element of
“At least one non-transitory computer-readable medium”
that perform these steps.
Additionally, each of these claims recites the use of “transmitting … instructions to” another computer device.
These computer components, functionalities, and/or services are all recited at a high level of generality (i.e., as a generic computing device performing a generic computer function of processing and outputting data, or sending data or commands across a network) such that they amount to no more than mere instructions to apply the exception using generic computer components such as processors, basic processor instructions, and/or software components or programs.
Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
(II) Data Collection
As presented, the claims also include the additional element of:
(a) “receiving, by the application client, an application comprising at least one job, the at least one job comprising one or more transformation units, wherein each transformation unit is configured to perform a transformation on data input to the transformation unit.”
However, merely obtaining or receiving data or input for processing (or other uses) simply does not “integrate” the abstract idea into a practical application which improves the functioning of a computer or other technology or technological field.
Moreover, the courts have also held that limitations which merely add insignificant extra-solution activity to the judicial exception do not integrate a judicial exception into a practical application.
As discussed below and set forth in MPEP § 2106.05(g), the mere collection and receiving of information for processing essentially amounts to data gathering and storing (using processors, basic processor instructions, and/or software components or programs) and is therefore considered an “insignificant extra-solution activity.”
Accordingly, the additional elements of the claims, viewed individually and as an ordered combination, add nothing to the implementation of a mental process on an unspecified, “generic” computer and therefore fail to transform the abstract nature of the claims into a patent-eligible application.
(III) Data Output
Additionally, as presented, the claims also include the additional element of:
(b) “transmitting, by the application client, instructions to the heterogeneous cluster.”
However, merely presenting or outputting information from one computing device to another does not “integrate” the abstract idea into a practical application that improves the functioning of a computer or other technology or technological field, absent a further step or activity that executes or performs tasks or controls, thereby effecting a change or improvement to the functioning of the computer or to the technological field or environment.
Moreover, the courts have also held that limitations which merely add insignificant extra-solution activity to the judicial exception do not integrate a judicial exception into a practical application.
As discussed below and set forth in MPEP § 2106.05(g), the mere outputting of data or information is considered an “insignificant extra-solution activity.”
Accordingly, the additional elements of the claims, viewed individually and as an ordered combination, add nothing to the implementation of a mental process on an unspecified, “generic” computer and therefore fail to transform the abstract nature of the claims into a patent-eligible application.
(IV) Particular Technological Environment or Field Of Use
As shown above, the claims also include the elements of:
(1) “an application comprising at least one job, the at least one job comprising one or more transformation units, wherein each transformation unit is configured to perform a transformation on data input to the transformation unit,”
(2) “a plurality of simulated performance scores for each transformation unit in the one or more transformation units, the plurality of simulated performance scores corresponding to a plurality of types of worker nodes in the heterogeneous cluster, wherein each type of worker node corresponds to a distinct hardware configuration,”
(3) “a plurality of aggregate simulated performance scores for the at least one job, the plurality of aggregate simulated performance scores corresponding to the plurality of types of worker nodes.”
These exemplary elements, however, merely describe the general technical or computing environment (within which the claimed steps or processes operate) and restrict the processed information or data to a particular type or category (without imposing any functional claim limitations, activities, or steps).
Limitations that generally link the use of the judicial exception to a particular technological environment or field of use neither meaningfully limit the claim nor transform (the abstract nature of) the claim into a particular useful application that improves the functioning of a computer or any other technology.
Under Step 2B of the § 101 analysis:
The claims do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than generic computing components and field of use/technological environment which do not amount to significantly more than the abstract idea.
As claimed, the “one or more processors; and one or more memories,” the “at least one non-transitory computer-readable medium,” and the “transmitting … instructions to” another computer device merely encompass generic computing components (e.g., processors, communications networks) recited at a high level of generality, executing one or more steps of the claims.
Moreover, the activity of “mere data gathering” has also been found by the courts to be “insignificant extra-solution activity,” as set forth in MPEP 2106.05(g)(3) (Insignificant Extra-Solution Activity), which describes that, in determining whether an additional element is insignificant extra-solution activity, one may factor into consideration whether the limitation amounts to necessary data gathering and outputting (i.e., all uses of the recited judicial exception require such data gathering or data output).
As recited, the step of
(a) “receiving, by the application client, an application comprising at least one job, the at least one job comprising one or more transformation units, wherein each transformation unit is configured to perform a transformation on data input to the transformation unit”
is a mere data-gathering activity for additional processing (i.e., obtaining or receiving inputs for processing).
Moreover, the activity of “outputting information” has also been found by the courts to be “insignificant extra-solution activity,” as set forth in MPEP 2106.05(g)(3) (Insignificant Extra-Solution Activity), which describes that, in determining whether an additional element is insignificant extra-solution activity, one may factor into consideration whether the limitation amounts to necessary data gathering and outputting (i.e., all uses of the recited judicial exception require such data gathering or data output).
As recited, the step of:
(b) “transmitting, by the application client, instructions to the heterogeneous cluster”
is merely a data output activity.
Additionally, the computing activities of (a) receiving or transmitting data over a network, and (b) storing and retrieving information in memory (for distribution and dissemination, as an example), are addressed in MPEP 2106.05(d)(II), which sets forth that courts have recognized such computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
As recited, the step of
(b) “transmitting, by the application client, instructions to the heterogeneous cluster.”
merely involves the sending, distribution, and dissemination of data or output across computing devices.
Accordingly, the additional step(s) or element(s) of the claims, viewed individually and as an ordered combination, add nothing to the implementation of a mental process on an unspecified, “generic” computer and therefore fail to transform the abstract nature of the claims into a patent-eligible application.
6. As to dependent claims 2–8, 10–16, and 18–24, each of these claims either (1) recites additional step(s) that cover performance in the mind; (2) merely restricts or links the process steps, information, or data to a particular type, technological environment, or field of use; (3) amounts to insignificant extra-solution activity to the judicial exception, such as data input and output/transmission; or (4) recites a function which amounts to no more than a recitation of the words “apply it” (or an equivalent) and is no more than mere instructions to implement an abstract idea or other exception on a computer; and thus each claim, as a whole, is also directed to and confined to the same process set forth in claims 1, 9, and 17. Therefore, these claims do not individually or collectively add an inventive concept or additional element(s) amounting to significantly more than the abstract idea itself. These claims are therefore not drawn to eligible subject matter as they are directed to an abstract idea without significantly more.
For instance, dependent claim 2, reciting “receiving … execution statistics … and determining … an aggregate runtime performance score,” merely amounts to insignificant extra-solution activity to the judicial exception, such as data input and output/transmission, and recites additional step(s) that cover performance in the mind.
Dependent claims 3 and 4, reciting additional “determining,” “adjusting,” and “modifying” steps or activities, merely recite additional step(s) that cover performance in the mind.
Dependent claim 5, reciting an additional “receiving” step, merely amounts to insignificant extra-solution activity to the judicial exception, such as data input and output/transmission.
Dependent claim 6, reciting “cause the heterogeneous cluster to schedule the at least one job on one or more nodes …,” merely recites additional step(s) that cover performance in the mind, but for the recitation of generic computer components.
Dependent claim 7, reciting “detecting … and transmitting, by the application client, instructions to the heterogeneous cluster configured to cause the heterogeneous cluster to modify …,” merely recites additional step(s) that cover performance in the mind but for the recitation of generic computer components, and further involves insignificant extra-solution activity to the judicial exception, such as data input and output/transmission.
Dependent claim 8, reciting “determining … and transmitting, by the application client, instructions to the heterogeneous cluster configured to cause the heterogeneous cluster to schedule …,” merely recites additional step(s) that cover performance in the mind but for the recitation of generic computer components, and further involves insignificant extra-solution activity to the judicial exception, such as data input and output/transmission.
As to dependent claims 10–16 and 18–24, they are the system and computer program product claims corresponding to at least one of claims 2–8. Therefore, these claims do not individually or collectively 1) integrate the abstract idea into a practical application, nor do they 2) include additional element(s) amounting to significantly more than the abstract idea itself.
Examiner’s Remarks
7. Examiner refers to and explicitly cites particular pages, sections, figures, paragraphs or columns and lines in the references as applied to Applicant’s claims to the extent practicable to streamline prosecution.
Although the cited portions of the references are representative of the best teachings in the art and are applied to meet the specific limitations of the claims, other uncited but related teachings of the references may be equally applicable as well. It is respectfully requested that, in preparing responses to the rejections, the Applicant fully consider not only the cited portions of the references but also the references in their entirety as potentially teaching, suggesting, or rendering obvious all or one or more aspects of the claimed invention.
Abbreviations
8. Where appropriate, the following abbreviations will be used when referencing Applicant’s submissions and specific teachings of the reference(s):
i. figure / figures: Fig. / Figs.
ii. column / columns: Col. / Cols.
iii. page / pages: p. / pp.
References Cited
9. (A) Li et al., US 2021/0073028 A1 (“Li”).
(B) Rathod et al., US 2023/0013797 A1 (“Rathod”).
(C) Chen et al., US 2017/0149681 A1 (“Chen”).
Notice re prior art available under both pre-AIA and AIA
10. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
A.
11. Claims 1–2, 5–6, 8–10, 13–14, 16–18, 21–22 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Li in view of (B) Rathod.
See “References Cited” section, above, for full citations of references.
12. Regarding claim 1, (A) Li teaches/suggests the invention substantially as claimed, including:
“A method for job scheduling on a heterogeneous cluster executed by one or more computing devices of an application client of the heterogeneous cluster, the method comprising:
receiving, by the application client, an application comprising at least one job, the at least one job comprising one or more transformation units, wherein each transformation unit is configured to perform a transformation on data input to the transformation unit”
(Fig. 2 and ¶ 46: an example scheduling system 200. The scheduling system 200 includes a recommendation engine 210, a scheduler engine 220, and a distributed computing network 230. The scheduling system 200 can receive, as input, a job 240 that includes a computational graph 250 and optional metadata 260. Although the example scheduling system 200 is shown as configured to receive the job 240 that includes operations represented as a computational graph, the scheduling system 200 can be configured in other implementations to receive data representing the operations for the job 240 in other formats, e.g., as a series of function calls);
“determining, by the application client, a plurality of simulated performance scores for each transformation unit in the one or more transformation units, the plurality of simulated performance scores corresponding to a plurality of types of worker nodes in the heterogeneous cluster, wherein each type of worker node corresponds to a distinct hardware configuration and wherein each simulated performance score for each transformation unit is determined based at least in part on resources required for the transformation performed by the transformation unit and the distinct hardware configuration of a corresponding type of worker node”
(¶ 47: The scheduler engine 220 is configured to send the computational graph 250 and the optional metadata 260 to the recommendation engine 210. The recommendation engine 210 is configured to predict a set of performance metrics for each type of computing device in the distributed computing network 230 and generate recommendations for scheduling operations to different types of hardware accelerators, using the performance metrics;
¶ 51: For example, the recommendation engine 210 can indicate that a first type of computing device is better performed to process the job 240, predicting a higher performance metric for the first type of computing device over other types of computing devices in the distributed computing network;
¶¶ 87–89: A simulator can be configured to generate a set of performance metrics according to an objective function corresponding to the type of performance metric sought. The simulator 420 can be configured to simulate performance of executing the computational graph with additional functionality and compatibility guarantees. In some implementations, the simulator 420 determines whether a type of computing device is compatible to execute the operations represented in the computational graph …. Then, the simulator 420 can use hardware specifications for a given type of computing device and predict a performance metric for executing the input job 410 on the distributed computing network;
¶¶ 10–11: The set of performance metrics can be ranked. Higher-ranked metrics can correspond to types of computing devices that are predicted to execute the operations of the computational graph more efficiently than types of computing devices that correspond to lower-ranked metrics);
“determining, by the application client, a plurality of [[aggregate]] simulated performance scores for the at least one job, the plurality of … simulated performance scores corresponding to the plurality of types of worker nodes”
(¶ 47, ¶ 51, ¶¶ 87–89, and ¶¶ 10–11, as applied above, teaching predicting and ranking a set of performance metrics for each type of computing device in the distributed computing network).
“transmitting, by the application client, instructions to the heterogeneous cluster configured to cause the heterogeneous cluster to schedule the at least one job on one or more nodes of the heterogeneous cluster based at least in part on the plurality of aggregate simulated performance scores”
(¶ 9: The recommendation can be provided as input to a scheduling system, which can use the predicted performance metrics and other metrics to generate a schedule for partitioning and assigning the computational graph across a plurality of computing devices of the distributed computing network;
¶ 100: program instructions, encoded on a computer program carrier, for execution by, or to control the operation of, data processing apparatus. The carrier may be a tangible non-transitory computer storage medium. Alternatively or in addition, the carrier may be an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus).
Li does not teach “determining … a plurality of AGGREGATE simulated performance scores for the at least one job … wherein each aggregate simulated performance score corresponds to a type of worker node and is determined based at least in part on simulated performance scores corresponding to that type of worker node for transformation units within the at least one job.”
(B) Rathod, in the context of Li’s teachings, however teaches or suggests:
“determining … a plurality of AGGREGATE simulated performance scores for the at least one job … wherein each aggregate simulated performance score corresponds to a type of worker node and is determined based at least in part on simulated performance scores corresponding to that type of worker node for transformation units within the at least one job”
(¶ 26: collect data relating to a plurality of performance metrics 138 related to performance of the software application in each of the model application environments 136 while running in the simulated environment;
¶ 27: tests may include performance testing to test the overall performance of a model application environment 136 and collect data on performance metrics 138 including availability, response time and stability of the software application 112. In one example, for each model application environment 136, application manager 130 carries out the testing in multiple cycles with different amounts of load;
¶ 30: to compare the total performance scores of multiple performance metrics 138 collected for each model application environment 136 and recommend a model application environment 136 that has the highest total performance score for the performance metrics 138. For example, application manager 130 may compare the total performance scores of all performance metrics 138 collected for each model application environment 136 and recommend a model application environment 136 that has the highest total performance score for the performance metrics 138).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (B) Rathod with those of (A) Li to provide for an aggregate total score incorporating a plurality of performance metrics for each type of computing device. The motivation or advantage to do so is to allow for the ranking of a set of (multiple) performance metrics using a single objective function or equation.
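For illustration only, a minimal sketch of the asserted combination follows; the metric names, weights, and objective function are illustrative assumptions and are not taken from Li or Rathod:

# Hypothetical sketch: a single objective function combining several predicted
# metrics into one aggregate score per worker-node type (higher is better).
# Metric names and weights are illustrative assumptions only.

WEIGHTS = {"throughput": 0.5, "latency": 0.3, "cost": 0.2}

predicted = {
    "cpu_node": {"throughput": 0.6, "latency": 0.7, "cost": 0.9},
    "gpu_node": {"throughput": 0.9, "latency": 0.8, "cost": 0.4},
}

def aggregate_score(metrics: dict) -> float:
    """Weighted sum acting as a single objective over multiple metrics."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

# Rank worker-node types by their aggregate (total) score.
ranking = sorted(predicted, key=lambda node: aggregate_score(predicted[node]), reverse=True)
print(ranking)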
13. Regarding claim 2, Rathod teaches or suggests:
“receiving, by the application client, execution statistics corresponding to execution of the at least one job on the one or more nodes of the heterogeneous cluster”
(¶ 28: analyze the performance metrics 138 collected for each model application environment 136 (including the current application environment 120);
¶ 31: sort the performance metrics 138 collected for each model application environment 136 (including the current application environment 120) based on one or more requirements of the software application 112, for example, by placing a higher priority on performance metrics 138 that are associated with the one or more requirements of the software application 112);
“determining, by the application client, an aggregate runtime performance score corresponding to the first type of worker node for the at least one job based at least in part on the execution statistics”
(¶ 32: compare one or more model application environments 136 with the current application environment 120 based on the performance scores of a prioritized performance metric 138 (e.g., database performance, application server performance etc.). If application manager 130 determines that a model application environment 136 has a higher score for the performance metric 138 as compared to the respective score of the performance metric 138 collected for the current application environment 120 …).
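For illustration only, a minimal sketch of determining an aggregate runtime performance score from execution statistics follows; the field names and scoring function are illustrative assumptions and are not taken from Rathod:

# Hypothetical sketch: derive an aggregate runtime performance score for one
# worker-node type from observed execution statistics. Field names are assumed.

execution_stats = [
    {"unit": "unit_a", "node_type": "gpu_node", "runtime_s": 12.0},
    {"unit": "unit_b", "node_type": "gpu_node", "runtime_s": 5.0},
]

def aggregate_runtime_score(stats: list, node_type: str) -> float:
    """Aggregate observed per-unit runtimes into a single score (higher is better)."""
    runtimes = [s["runtime_s"] for s in stats if s["node_type"] == node_type]
    return 1.0 / sum(runtimes) if runtimes else 0.0

print(aggregate_runtime_score(execution_stats, "gpu_node"))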
14. Regarding claim 5, Li teaches or suggests:
“receiving, by the application client, one or more of an application priority level for the application or a job priority level for the at least one job”
(¶ 45: metadata 120 can include information about the job 105, e.g., a job name, information about the user sending the job 105 to the recommendation engine 100, a priority level for assigning the job;
¶ 49: scheduler engine 220 can implement priority scheduling using a set of characteristics for the job 240, in addition to the recommendations generated by the recommendation engine 210. The set of characteristics the scheduler engine 220 uses to schedule the job 240 can include, for example, a user-assigned priority level and characteristics about a user submitting the job 240 for scheduling).
15. Regarding claim 6, Li teaches or suggests:
“wherein the instructions … are further configured to cause the heterogeneous cluster to schedule the at least one job on one or more nodes of the heterogeneous cluster based at least in part on the plurality of aggregate simulated performance scores and one or more of the application priority level or the job priority level”
(¶ 49: scheduler engine 220 can implement priority scheduling using a set of characteristics for the job 240, in addition to the recommendations generated by the recommendation engine 210. The set of characteristics the scheduler engine 220 uses to schedule the job 240 can include, for example, a user-assigned priority level and characteristics about a user submitting the job 240 for scheduling;
¶ 54: scheduler engine 230 can be configured to assign more highly recommended types of computing devices to the job 240 depending on a priority level assigned to the job 240).
16. Regarding claim 8, Li and Rathod teach or suggest:
“determining, by the application client, a plurality of stage-level aggregate simulated performance scores for each stage in the plurality of stages, the plurality of stage-level aggregate simulated performance scores corresponding to the plurality of types of worker nodes, wherein each stage-level aggregate simulated performance score corresponds to a type of worker node and is determined based at least in part on simulated performance scores corresponding to that type of worker node for transformation units within the stage”
(Li, ¶ 47, ¶ 51, ¶¶ 87–89, and ¶¶ 10–11, as applied above in rejecting claim 1, teaching predicting and ranking a set of performance metrics for each type of computing device in the distributed computing network;
¶ 51: the scheduler engine 220 can partition the computational graph 250 into a plurality of subgraphs. Each subgraph is linked to another subgraph by an edge, representing the flow of data as output from one subgraph to input for another subgraph;
¶ 52: scheduler engine 220 can decide which computing devices to assign a respective subgraph based on the recommendations generated by the recommendation engine;
Rathod, ¶ 26 and ¶ 27: tests may include performance testing to test the overall performance of a model application environment 136 and collect data on performance metrics 138 including availability, response time and stability of the software application 112. In one example, for each model application environment 136, application manager 130 carries out the testing in multiple cycles with different amounts of load;
¶ 30: to compare the total performance scores of multiple performance metrics 138 collected for each model application environment 136 and recommend a model application environment 136 that has the highest total performance score for the performance metrics 138. For example, application manager 130 may compare the total performance scores of all performance metrics 138 collected for each model application environment 136 and recommend a model application environment 136 that has the highest total performance score for the performance metrics 138).
“transmitting, by the application client, instructions to the heterogeneous cluster configured to cause the heterogeneous cluster to schedule the at least one job on one or more nodes of the heterogeneous cluster based at least in part on the plurality of stage-level aggregate simulated performance scores”
(Li, ¶ 9 and ¶ 100, as applied in rejecting claim 1 above, teaching assigning the computational graph across a plurality of computing devices of the distributed computing network and the transmission of instructions across a network).
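For illustration only, a minimal sketch of stage-level aggregation follows; the stage assignments, scores, and scheduling rule are illustrative assumptions and are not taken from Li or Rathod:

# Hypothetical sketch: aggregate per-unit simulated scores within each stage to
# obtain stage-level aggregate scores per worker-node type. Values are assumed.

stage_of = {"unit_a": "stage_1", "unit_b": "stage_1", "unit_c": "stage_2"}
unit_scores = {  # per node type, per transformation unit
    "cpu_node": {"unit_a": 0.3, "unit_b": 0.5, "unit_c": 0.9},
    "gpu_node": {"unit_a": 0.8, "unit_b": 0.7, "unit_c": 0.2},
}

stage_scores = {}
for node, per_unit in unit_scores.items():
    for unit, score in per_unit.items():
        stage = stage_of[unit]
        stage_scores.setdefault(stage, {}).setdefault(node, 0.0)
        stage_scores[stage][node] += score

# Each stage may then be scheduled on the node type with its best stage-level score.
plan = {stage: max(nodes, key=nodes.get) for stage, nodes in stage_scores.items()}
print(stage_scores, plan)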
17. Regarding claims 9–10, 13–14, and 16, they are the corresponding system claims reciting similar limitations of commensurate scope as method claims 1–2, 5–6, and 8, respectively. Therefore, they are rejected on the same basis as claims 1–2, 5–6, and 8 above, including the following rationale:
Li teaches/suggests:
“one or more processors; and one or more memories operatively coupled to at least one
of the one or more processors and having instructions stored thereon that, when executed …”
(¶¶ 100–103: processors and storage medium).
18. Regarding claims 17–18, 21–22, and 24, they are the corresponding computer program product claims reciting similar limitations of commensurate scope as the method of claims 1–2, 5–6, and 8 respectively. Therefore, they are rejected on the same basis as claims 1–2, 5–6, and 8 above.
B.
19. Claims 3–4, 7, 11–12, 15, 19–20, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Li in view of (B) Rathod, as applied to claims 1, 9, and 17 above, and further in view of (C) Chen.
20. Regarding claim 3, Rathod teaches or suggests:
“determining, by the application client, whether the aggregate simulated performance score corresponding to the first type of worker node for the at least one job exceeds the aggregate runtime performance score corresponding to the first type of worker node”
(¶ 32: compare one or more model application environments 136 with the current application environment 120 based on the performance scores of a prioritized performance metric 138 (e.g., database performance, application server performance etc.). If application manager 130 determines that a model application environment 136 has a higher score for the performance metric 138 as compared to the respective score of the performance metric 138 collected for the current application environment 120 …).
Li and Rathod do not teach “adjusting, by the application client, one or more variables to indicate underperformance of the first type of worker node for the at least one job.”
(C) Chen, in the context of Li's and Rathod's teachings, however, teaches or suggests implementing:
“adjusting, by the application client, one or more variables to indicate underperformance of the first type of worker node for the at least one job”
(¶ 114: to determine the optimum resource setting for each node of the application (e.g., application 105) is adjusted based on comparing output of the model to actual resource usage of the application;
¶ 120: proactively monitoring and assessing an application environment using a predefined model for generating a required computing resource (e.g., memory). The method may include performing data collection from a deployed application and determining if usage has increased or decreased, determining an improvement exists for the existing model, and using the model to determine the optimum computing resource settings based on user demand data including concurrent usage, existing topology, and configuration detail from existing nodes. The method may also include performing data collection on actual computing resource (e.g., memory) usage and comparing the actual usage against historical data and projected demands, determining an improvement exists for the existing model or node, and invoking available APIs to dynamically adjust the computing resource (e.g., memory) to the optimum settings).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (C) Chen with those of Li and Rathod to adjust resource settings or configurations based on monitored resource utilization and predicted performance. The motivation or advantage to do so is to provide for the dynamic provisioning and re-configuration (modification) of resources based on actual performance/utilization data, so as to optimize resource usage and job execution.
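For illustration only, a minimal sketch of comparing a simulated aggregate score against an observed runtime score and adjusting a variable to indicate underperformance follows; the variable names and comparison rule are illustrative assumptions and are not taken from Chen:

# Hypothetical sketch: compare the aggregate simulated score for a node type to
# the aggregate runtime score and set a variable indicating underperformance.
# Names and the comparison rule are illustrative assumptions only.

def flag_underperformance(simulated: float, runtime: float) -> dict:
    """Return adjusted variables indicating whether the node type underperformed."""
    underperforming = simulated > runtime  # predicted better than observed
    return {"underperforming": underperforming, "gap": simulated - runtime}

print(flag_underperformance(simulated=0.80, runtime=0.55))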
21. Regarding claim 4, Rathod and Chen teach or suggest:
“modifying, by the application client, a process used to determine each simulated performance score for the first type of worker node based at least in part on the aggregate runtime performance score”
(Rathod, ¶ 30, teaching determining and comparing total performance score for the performance metrics;
Chen, ¶ 118: adjusts the model based on the historical actual usage … an iterative parameter optimization process in which the system adjusts values of at least one of the parameters, determines new modeled setting using the adjusted parameters, compares the new modeled setting to the actual memory usage, and then again adjusts values of at least one of the parameters based on the comparing the new modeled setting to the actual memory usage;
¶ 120: performing data collection on actual computing resource (e.g., memory) usage and comparing the actual usage against historical data and projected demands, determining an improvement exists for the existing model or node, and invoking available APIs to dynamically adjust the computing resource (e.g., memory) to the optimum settings. The method may also include automatically creating a new version of the model with new values of parameters, and archiving the old version of the model for version control. The method may include repeating the steps based on scheduled cycle or defined triggering criteria).
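For illustration only, a minimal sketch of modifying the score-determination process based on observed runtime performance follows; the calibration parameter and update rule are illustrative assumptions and are not taken from Rathod or Chen:

# Hypothetical sketch: iteratively adjust a calibration factor used by the scoring
# process so that future simulated scores track observed runtime scores.
# The parameter and update rule are illustrative assumptions only.

def adjust_calibration(calibration: float, simulated: float, runtime: float,
                       learning_rate: float = 0.5) -> float:
    """Nudge the calibration factor toward agreement with the runtime score."""
    error = runtime - simulated
    return calibration + learning_rate * error

calibration = 1.0
for simulated, runtime in [(0.80, 0.55), (0.70, 0.60)]:
    calibration = adjust_calibration(calibration, simulated, runtime)
print(calibration)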
22. Regarding claim 7, Li and Chen teach or suggest:
“detecting, by the application client, a plurality of workloads associated with the plurality of types of worker nodes on the heterogeneous cluster”
(Chen, ¶ 15: performance of a multi-tiered application is monitored, and resources are automatically provisioned so as to optimize performance of the application;
¶ 16: enable proactive monitoring and assessment of a multi-tier application environment against real-time load and user demand within the application, and dynamically adjusting a model and auto-provisioning resources to optimize the application;
¶ 123: monitoring via a feedback loop to periodically check for additional cost improvement);
“transmitting, by the application client, instructions to the heterogeneous cluster configured to cause the heterogeneous cluster to modify a quantity of worker nodes of at least one type on the heterogeneous cluster based at least in part on the plurality of workloads”
(Li, ¶ 100: program instructions, encoded on a computer program carrier, for execution by, or to control the operation of, data processing apparatus. The carrier may be a tangible non-transitory computer storage medium. Alternatively or in addition, the carrier may be an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus;
Chen, ¶ 121: In the event the owner of the application wishes to increase capacity to a second number of tickets (e.g., 120,000 tickets), analytics may be used to determine that the first configuration has insufficient memory to handle the second number of tickets;
¶ 122: Therefore, instead of just adding a server using a standard template, this invention's model would recommend (a) adding a new server with the adjusted configuration, and (b) adjust the memory on the existing five servers).
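For illustration only, a minimal sketch of workload-based modification of worker-node quantities follows; the utilization values, thresholds, and instruction format are illustrative assumptions and are not taken from Li or Chen:

# Hypothetical sketch: detect per-node-type workload and build instructions to
# modify the quantity of worker nodes of a given type. Values are assumed.

workloads = {"cpu_node": 0.95, "gpu_node": 0.40}   # assumed utilization per node type

def scaling_instructions(loads: dict, high: float = 0.85, low: float = 0.50) -> list:
    """Build scale-up/scale-down instructions based on observed workloads."""
    instructions = []
    for node_type, load in loads.items():
        if load > high:
            instructions.append({"node_type": node_type, "action": "add", "count": 1})
        elif load < low:
            instructions.append({"node_type": node_type, "action": "remove", "count": 1})
    return instructions

print(scaling_instructions(workloads))  # e.g., transmitted to the cluster for execution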
23. Regarding claims 11–12 and 15, they are the corresponding system claims reciting similar limitations of commensurate scope as the method of claims 3–4 and 7 respectively. Therefore, they are rejected on the same basis as claims 3–4 and 7 above.
24. Regarding claims 19–20 and 23, they are the corresponding computer program product claims reciting similar limitations of commensurate scope as the method of claims 3–4 and 7 respectively. Therefore, they are rejected on the same basis as claims 3–4 and 7 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN C WU whose telephone number is (571)270-5906. The examiner can normally be reached Monday through Friday, 8:30 A.M. to 5:00 P.M.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee J. Li can be reached on (571)272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BENJAMIN C WU/Primary Examiner, Art Unit 2195
November 1, 2025