DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the claims filed 06/17/2023.
Claims 1-20 are pending.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
The terms “biggest” and “top” in claims 6, 13, and 19 are relative terms which render the claims indefinite. The terms “biggest” and “top” are not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. For the purposes of examination, “biggest” and “top” in the context of the claims will be understood to be synonymous with “most”. The claims are then interpreted as policies that select a batch job with the most CPU utilization, select a set of batch jobs with the most CPU utilization, or select batch jobs at random.
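Solely to illustrate the interpretation above (and not as the applicant's disclosed implementation), the three interpreted policies could be sketched as follows; the function name, data structure, and parameter names are hypothetical:

```python
import random

def select_batch_jobs(jobs, policy, n=2):
    """Illustrative sketch of the three policies as interpreted above.
    `jobs` is a hypothetical list of (name, cpu_usage) pairs."""
    if policy == "biggest":
        # select one running batch job with the most CPU usage
        return [max(jobs, key=lambda j: j[1])]
    if policy == "top":
        # select a set of running batch jobs with the most CPU usage
        return sorted(jobs, key=lambda j: j[1], reverse=True)[:n]
    if policy == "random":
        # select running batch jobs at random
        return random.sample(jobs, n)
    raise ValueError(f"unknown policy: {policy}")
```

Under this reading, all three policies reduce to choosing among observed CPU-usage values, which is consistent with interpreting “biggest” and “top” as “most”.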
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention recites a judicial exception, an abstract idea, that has not been integrated into a practical application, and the claims do not recite significantly more than the judicial exception. Examiner has evaluated the claims under the framework provided in the 2019 Patent Eligibility Guidance published in the Federal Register 01/07/2019 and has provided such analysis below.
Step 1:
Claims 1-7 are directed to a method and fall within the statutory class of processes. Claims 8-14 are directed to a computer program product and fall within the statutory class of articles of manufacture. Claims 15-20 are directed to a computer system and fall within the statutory class of machines. Therefore, “Are the claims to a process, machine, manufacture or composition of matter?” Yes.
Step 2A Prong 1:
Claims 1, 8, and 15: The limitations “grouping, … a plurality of batch jobs based on workload resource requests and dependencies of each batch job resulting in a plurality of groups”, “scheduling, … the plurality of batch jobs based on the plurality of groups”, “identifying, … one or more scheduled transaction workloads will not be able to be completed in under a preset time threshold”, and “reducing, … a resource quota of one or more batch jobs of the plurality of batch jobs based on type of resource that is needed for the one or more scheduled transaction workloads”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind. Grouping batch jobs based on resource requests and dependencies, scheduling the batch jobs, identifying transaction workloads that will miss a deadline, and reducing a resource quota for batch jobs are considered to involve a mental process of observing and then forming a judgment. The recited actions are understood to be performed by a processor, but are also able to be entirely performed in the mind.
Therefore, Yes, claims 1, 8, and 15 recite a judicial exception. Step 2A Prong 2 will evaluate whether the claims are directed to the judicial exception.
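Solely to illustrate the breadth of the recited steps discussed above, and not as the applicant's actual implementation, the four limitations could be paraphrased in a minimal sketch; every name and data structure below is an assumption:

```python
from collections import defaultdict

def run_cycle(batch_jobs, workloads, threshold):
    """Hypothetical sketch of the grouping, scheduling, identifying,
    and reducing limitations; all field names are assumed."""
    # grouping: batch jobs keyed by (resource request, dependencies)
    groups = defaultdict(list)
    for job in batch_jobs:
        groups[(job["request"], tuple(job["deps"]))].append(job["name"])
    # scheduling: order the batch jobs group by group
    schedule = [name for key in sorted(groups) for name in groups[key]]
    # identifying: transaction workloads predicted to exceed the preset threshold
    late = [w for w in workloads if w["eta"] > threshold]
    # reducing: note the resource types needed by the late workloads,
    # whose batch-job quotas would then be reduced
    needed = {w["needs"] for w in late}
    return schedule, late, needed
```

Each step in the sketch is a sorting, comparison, or lookup over observed values, which is consistent with the characterization of the limitations as observation followed by judgment.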
Step 2A Prong 2:
Claims 1, 8, 15: The judicial exception is not integrated into a practical application. Claim 1 recites the following additional element – “monitoring, … workload resource usage of system for running the plurality of batch jobs and a plurality of transaction workloads”. This additional element is considered to be an insignificant extra-solution activity because it is mere data gathering (MPEP § 2106.05(g)). The data gathering does not integrate the judicial exception into a practical application. Additionally, each limitation of claim 1 is performed “by one or more processors”, which is considered another additional element. The additional elements only add that a processor is used to apply the judicial exception, so they are merely recitations of generic computing components and functions merely being used as a tool to apply the abstract idea (MPEP § 2106.05(f)) and they do not integrate the judicial exception into a practical application. Claim 8 also claims the following additional elements that are merely recitations of generic computing components and functions merely being used as a tool to apply the abstract idea (MPEP § 2106.05(f)) – “one or more computer readable storage media and program instructions collectively stored on the one or more computer readable storage media” and “program instructions” to perform the group, schedule, monitor, identify, and reduce steps. Claim 8 does not integrate the judicial exception into a practical application.
Claim 15 also claims the following additional elements that are merely recitations of generic computing components and functions merely being used as a tool to apply the abstract idea (MPEP § 2106.05(f)) – “one or more computer processors”, “one or more computer readable media”, and “program instructions collectively stored on the one or more computer readable storage media for execution by at least one of the one or more computer processors” and “program instructions” to perform the group, schedule, monitor, identify, and reduce steps. Claim 15 does not integrate the judicial exception into a practical application.
Therefore, “Do the claims recite additional elements that integrate the judicial exception into a practical application?” No, these additional elements do not integrate the abstract idea into a practical application and they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
After having evaluated the inquiries set forth in Steps 2A Prong 1 and 2, it has been concluded that claims 1, 8, and 15 not only recite a judicial exception but that the claims are directed to the judicial exception, as the judicial exception has not been integrated into a practical application.
Step 2B:
Claims 1, 8, 15: The claims do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements only amount to insignificant extra-solution activity and generic computing components being used as a tool to apply the abstract idea. When reevaluating the insignificant extra-solution activity for an inventive concept, the additional element does not add an inventive concept that is other than what is well understood, routine, and conventional in the field. MPEP § 2106.05(d)(II) lists that “Receiving or transmitting data over a network” is a well understood, routine, and conventional computer function. The data gathering step of receiving data over a network is one example. When reevaluating the other additional elements alone or in combination, the additional elements do not add an inventive concept.
Therefore, “Do the claims recite additional elements that amount to significantly more than the judicial exception?” No, these additional elements, alone or in combination, do not amount to significantly more than the judicial exception.
Having concluded the analysis within the provided framework, claims 1, 8, and 15 do not recite eligible subject matter under 35 U.S.C. § 101.
With regards to claims 2, 9, and 16, they recite “scheduling, … batch jobs of the plurality of batch jobs from different groups of the plurality of groups at a same time which lowers competition between resources”. Under its broadest reasonable interpretation, the limitation covers performance of the limitation in the mind because scheduling involves forming a judgment based on observations. Scheduling batch jobs from different groups is something that can be done completely in the mind. Therefore, claims 2, 9, and 16 recite a judicial exception and fail Step 2A Prong 1. Claims 2, 9, and 16 do not integrate the judicial exception into a practical application, so the claims fail Step 2A Prong 2. When reexamining the additional elements, alone or in combination, the additional elements do not amount to significantly more than the judicial exception, so the claims fail Step 2B. Therefore, claims 2, 9, and 16 do not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claims 3 and 10, they recite “wherein the workload resource usage includes database table usage, CPU usage, and memory usage”. This limitation further limits the insignificant extra-solution activity of data gathering (MPEP § 2106.05(g)). Claims 3 and 10 do not further integrate the judicial exception into a practical application, so claims 3 and 10 fail Step 2A Prong 2. When reevaluating the limitation for an inventive concept that is significantly more, claims 3 and 10 do not add an inventive concept that is other than what is well understood, routine, and conventional in the field. MPEP § 2106.05(d)(II) lists that “Receiving or transmitting data over a network” is a well understood, routine, and conventional computer function. Claims 3 and 10 only further specify the kind of data the workload resource usage includes. The limitation fails Step 2B. Therefore, claims 3 and 10 do not recite patent eligible subject matter under 35 U.S.C. § 101.
With regards to claims 4, 11, and 17, they recite “wherein identifying the one or more scheduled transaction workloads that will not be able to be completed in under the preset time threshold is based on a workload resource usage limit”. Under its broadest reasonable interpretation, the limitation covers performance in the mind. Identifying workloads that will not meet a deadline is an evaluation that can be entirely performed in the mind. Therefore, claims 4, 11, and 17 recite a mental process and fail Step 2A Prong 1. Claims 11 and 17 add the additional element recitations of “program instructions” to perform the process. As explained previously, this additional element does not integrate the judicial exception into a practical application, so the claims fail Step 2A Prong 2. Additionally, the additional elements do not amount to significantly more than the judicial exception, so the claims fail Step 2B. Therefore, claims 4, 11, and 17 do not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claims 5, 12, and 18, they recite “wherein the type of resource that is needed for the one or more scheduled transaction workloads is CPU”. This limitation further limits the reducing step in claims 1, 8, and 15 by specifying that the type of resource is CPU. Therefore, this limitation is also analyzed as a mental process. Claims 5, 12, and 18 also recite “reducing, … a CPU quota of one or more running batch jobs from a group that uses different tables than the one or more scheduled transaction workloads”. Under its broadest reasonable interpretation, this limitation covers performance in the mind. Moving resources between resource consumers is an evaluation that can be entirely performed in the mind. Both limitations are mental processes and fail Step 2A Prong 1. Additionally, claim 5 also recites “providing, … the reduced CPU quota to the one or more scheduled transaction workloads”. This limitation is considered as mere instructions to apply an exception (MPEP § 2106.05(f)). Specifically, the providing of the reduced CPU quota to the scheduled transaction workloads is the carrying out of the step of reducing the CPU quota for batch jobs. However, this application of a judicial exception does not achieve an improvement or integrate the judicial exception into a practical application, so it fails Step 2A Prong 2. Additionally, the limitation does not amount to significantly more than the judicial exception, so the limitation fails Step 2B. Therefore, claims 5, 12, and 18 do not recite patent eligible subject matter under 35 U.S.C. § 101.
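As an illustration of the scope of the quoted CPU limitations only (the names and structures below are assumptions, not the applicant's code), the reducing step could be sketched as:

```python
def reduce_cpu_quota(running_jobs, workload_tables, amount):
    """Sketch of reducing the CPU quota of running batch jobs from groups
    that use different tables than the scheduled transaction workloads;
    all field names are hypothetical."""
    freed = 0
    for job in running_jobs:
        # skip jobs whose group shares tables with the scheduled workloads
        if set(job["tables"]) & set(workload_tables):
            continue
        cut = min(amount - freed, job["cpu_quota"])
        job["cpu_quota"] -= cut
        freed += cut
        if freed >= amount:
            break
    return freed  # CPU quota that could be provided to the workloads
```

The sketch amounts to a table-overlap comparison followed by an arithmetic adjustment of quota values, consistent with the characterization above.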
With regard to claims 6, 13, and 19, they recite “selecting, … the one or more running batch jobs based on one or more policies, wherein the one or more policies comprise at least one of selecting one running batch job with a biggest CPU usage, selecting a set of running batch jobs with top CPU usage, and selecting running batch jobs at random”. Under its broadest reasonable interpretation, this limitation covers performance in the mind. Selecting batch jobs based on policy is a judgment based on observation. Therefore, claims 6, 13, and 19 recite a mental process and fail Step 2A Prong 1. Claims 13 and 19 also recite the additional element of “program instructions”. The listed additional elements do not integrate the judicial exception into a practical application, so claims 6, 13, and 19 fail Step 2A Prong 2. Additionally, the additional elements do not amount to significantly more than the judicial exception, so the claims fail Step 2B. Therefore, claims 6, 13, and 19 do not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claims 7, 14, and 20, they recite “wherein the type of resource that is needed for the one or more scheduled transaction workloads is memory”. This limitation further limits the reducing step in claims 1, 8, and 15 by specifying that the type of resource is memory. Therefore, this limitation is also analyzed as a mental process. Claims 7, 14, and 20 also recite “selecting, … one or more running batch jobs from different groups of the plurality of groups”, “choosing, … from the selected one or more running batch jobs, at least one batch job whose history peak memory usage is lower than a memory quota for the at least one batch job”, and “reducing, … the memory quota of the at least one batch job to the history peak memory usage leaving a reserve amount of memory”. Under its broadest reasonable interpretation, these limitations cover performance in the mind. These steps involve an observation and a judgment. Therefore, claims 7, 14, and 20 recite a mental process and fail Step 2A Prong 1. Additionally, claims 7, 14, and 20 recite “releasing, … the reserve amount of memory” and “providing, … the released reserve amount of memory to the one or more scheduled transaction workloads”. These limitations are considered as mere instructions to apply an exception (MPEP § 2106.05(f)). The releasing and providing steps are merely ways to apply the exception in the “reducing” step. The limitations do not achieve an improvement or integrate the judicial exception into a practical application, so claims 7, 14, and 20 fail Step 2A Prong 2. Additionally, the additional elements do not amount to significantly more than the judicial exception, so the claims fail Step 2B. Therefore, claims 7, 14, and 20 do not recite patent eligible subject matter under 35 U.S.C. § 101.
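Purely for illustration of the quoted memory limitations (the structures below are assumptions, not the applicant's implementation), the choosing, reducing, and releasing steps could be sketched as:

```python
def release_memory_reserve(batch_jobs):
    """Sketch of reducing each chosen job's memory quota to its history
    peak and releasing the reserve; field names are hypothetical."""
    released = 0
    for job in batch_jobs:
        # choose jobs whose history peak memory usage is below their quota
        if job["peak_mem"] < job["mem_quota"]:
            reserve = job["mem_quota"] - job["peak_mem"]
            job["mem_quota"] = job["peak_mem"]  # reduce quota to the peak
            released += reserve                 # release the reserve amount
    return released  # memory that could be provided to the workloads
```

The sketch reduces to a comparison of an observed historical peak against a quota followed by subtraction, consistent with the observation-and-judgment characterization above.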
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 8-10, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Dube et al. Pub. No. US 20160085587 A1 (hereafter Dube) in view of Di Balsamo et al. Pub. No. US 20150277987 A1 (hereafter Di Balsamo) and further in view of Antani et al. Pub. No. US 20120284721 A1 (hereafter Antani).
With regard to claim 1, Dube teaches a method comprising: grouping, by one or more processors, a plurality of batch jobs based on workload resource requests and dependencies of each batch job resulting in a plurality of groups (¶ [0050] states “Workload scheduling program 150 is stored in persistent storage 708 for execution by one or more of the respective computer processors 704 via one or more memories of memory 706”. ¶ [0021] states “A request for execution of a computing job received in step 205 includes at least a list of tasks to be executed as part of the computing job, as well as any dependencies required for execution of those tasks … Additionally, a dependency for a task can also include a data dependency which prevents the execution of a task unless a specific portion of data is available”. ¶ [0023] states “In order to determine if a task can be executed by a data processing element, workload scheduling program 150 determines if the type of computation required for the task can be performed by a given data processing element”. ¶ [0045] states “FIG. 6A depicts a first feasible execution mapping for executing computing job 120 on heterogeneous computing device 110, generally designated 600, in accordance with an embodiment of the present invention. Tasks listed inside of data processing elements represent tasks performed by those data processing elements, while data sets listed inside data storage elements represent data sets provided to data processing elements by those data storage elements”. Examiner’s Note: the workload scheduling program is executed on a processor. Tasks can depend on another task or a specific data source. Additionally, tasks can depend on a specific computing resource being available. FIG. 4 shows the types of computing resources and data sources available. FIG. 5 shows the tasks (the non-underlined numbers) grouped to each computing resource);
scheduling, by the one or more processors, the plurality of batch jobs based on the plurality of groups (¶ [0024] states “Using the information represented in the task and data graph, workload scheduling program 150 assigns the task, or set of tasks, which must be executed first to one or more data processing elements identified as capable of executing that task in the resource graph”. ¶ [0037] states “Workload scheduling program 150 selects the mapping which receives the highest total value for execution on heterogeneous computing device 110”. Examiner’s Note: the workload scheduling program first creates potential schedules called mappings. Then the workload scheduling program selects the best mapping and schedules the task);
Dube does not explicitly teach the monitoring, identifying, and reducing steps.
However, in an analogous art, Di Balsamo teaches monitoring, by the one or more processors, workload resource usage of system for running the plurality of batch jobs and a plurality of transaction workloads (¶ [0069] states “The resource manager 406 may monitor the resource pool and determine a resource pool parameter … For example, the resource pool parameter may include a CPU utilization level, the number of resources in the resource pool, or the ratio the number of jobs in the workload plan to the number of resources in the resource pool.” ¶ [0062] states “Jobs may include at least two types of workloads including batch jobs and transactional jobs. Batch jobs may be scheduled and stored in the workload plan 403 at time of creation of the workload plan 403 … Transactional jobs, however, may process data in real time and may not be known of prior to creation of the workload plan 403”. Examiner’s Note: the resources that resource manager monitors are for the execution of the jobs. Jobs can either be batch jobs or transaction workloads);
identifying, by the one or more processors, one or more scheduled transaction workloads will not be able to be completed in under a preset time threshold (¶ [0073] states “Resource allocation 514 may also include determining whether the job forecast exceeds the job deadline 516”);
and reducing, by the one or more processors, a resource quota of one or more batch jobs of the plurality of batch jobs based on type of resource that is needed for the one or more scheduled transaction workloads (¶ [0062] states “In an embodiment, the policy evaluator 400 may be configured to operate only in the case of critical jobs. This may allow the policy evaluator 400 to dynamically expand or reduce a resource pool 410 when the allocation schedule includes one or more important jobs”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the monitoring of computer resources used by either batch jobs or transactional jobs, the identifying of jobs that exceed a deadline, and the dynamic reduction of a resource pool of Di Balsamo with the grouping and scheduling of jobs based on workload and resource dependencies of Dube. A person having ordinary skill in the art would have been motivated to make this combination to “improve responses to changing components, changing workload, and changing environmental conditions, while minimizing the operating costs and reducing violations of the SLAs” (Di Balsamo ¶ [0042]).
Dube and Di Balsamo do not teach reducing a resource quota for a batch job based on the resources needed by a transaction workload.
However, in an analogous art, Antani teaches and reducing, by the one or more processors, a resource quota of one or more batch jobs of the plurality of batch jobs based on type of resource that is needed for the one or more scheduled transaction workloads (¶ [0004] states “there are Long Running Transactions ("LRTs") and OnLine Transactions (OLTs)”. ¶ [0056] states “Step 226 involves identifying the transaction(s) that is(are) using resources needed by the OLT. More particularly, the CPM slows down the LRT(s) identified/selected in step 226 by adjusting how many records are to be processed in each sub-transaction of the LRT(s) and/or increases/decreases the time period between commit operations of the LRT(s). The number of records is adjusted by changing the value of parameter "X" of the above-described ALGORITHM 2. Similarly, the time period between commit operations is adjusted by changing the values of the parameter "X" and/or "Y" of the above-described ALGORITHM 2”. Examiner’s Note: the LRTs and OLTs of Antani are analogous to the batch jobs and transactional jobs of Di Balsamo respectively. See ¶ [0004] for more detail. ¶ [0056] states that the OLT needs resources of the LRT. The resource in this case is the database row whose lock is currently held by the LRT. Antani teaches that by changing the resource values of “X” and “Y”, the OLT can acquire the resources it needs. In other words, because of the resources needed by the OLT, or transactional workload, the resources of the LRT, or batch job, are reduced. See ¶ [0030] – [0031] for a full description of “ALGORITHM 2”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the process of reducing resources of LRTs due to OLTs of Antani with the grouping, scheduling, monitoring, identifying, and reducing steps of Dube and Di Balsamo. A person having ordinary skill in the art would have been motivated to make this combination for the purpose of controlling checkpoint intervals at a fine-grained level for LRTs without outside influence and to make intelligent decisions about which LRT to throttle up or down (Antani ¶ [0019]). Additionally, one of ordinary skill in the art would recognize that the steps of the process described in Antani ¶ [0052] – [0057] are for the purpose of reducing SLA violations, which has clear benefits.
With regard to claim 2, Dube, Di Balsamo, and Antani teach the method of claim 1. To reestablish the teaching, Di Balsamo teaches batch jobs (¶ [0062] states “Jobs may include at least two types of workloads including batch jobs and transactional jobs”). Dube additionally teaches wherein scheduling the plurality of batch jobs based on the grouping further comprises: scheduling, by the one or more processors, batch jobs of the plurality of batch jobs from different groups of the plurality of groups at a same time which lowers competition between resources (¶ [0040] states “Having no prerequisite connection between tasks 302 and 303 indicates that both tasks can execute simultaneously once task 301 completes execution”. ¶ [0041] states “As task 302 has no dashed line connections to any data sets, task 302 has no data dependencies and does not require any data sets to be available in order for it to execute. Task 303 has a data dependency of both data set 311 and 312”. Examiner’s Note: tasks can be substituted by batch jobs. ¶ [0040] and [0041] are referring to FIG. 3. Task 302 and 303 belong to different groups because although they both depend on task 301, task 303 also depends on data set 311 and 312. Once dependency task 301 is completed and task 303 has access to data sets 311 and 312, task 302 and 303 can be executed simultaneously. In other words, they can be scheduled for the same time slot).
With regard to claim 3, Dube, Di Balsamo, and Antani teach the method of claim 1. Di Balsamo additionally teaches wherein the workload resource usage includes database table usage, CPU usage, and memory usage (¶ [0068] states “In an embodiment the resource range may have an upper resource limit set at eighty percent (80%) CPU utilization so that when the resource pool parameter is represented by a CPU utilization level, a greater than 80% CPU utilization violates the SLA policy” and “Other computing parameters may be used including, but not limited to, the quantity of free memory”. ¶ [0069] states “The resource manager 406 may monitor the resource pool and determine a resource pool parameter. The resource pool parameter may be a representation of computing resources in the resource pool”. Examiner’s Note: one of ordinary skill in the art would recognize that the quantity of free memory and used memory are interchangeable).
Antani also teaches wherein the workload resource usage includes database table usage, CPU usage, and memory usage (¶ [0008] states “WorkLoad Managers (WLMs) are typically found in TPSs”. ¶ [0048] states “The WLM continuously monitors transaction processing to determine when a transaction processing job is at risk of completion”. ¶ [0053] states “the LRT obtains an exclusive lock on transactional resources (e.g., a row in a table of a database 116 of FIG. 1)”. Examiner’s Note: the WLM monitors transaction processing which would include workload resource usage. ¶ [0053] states a database row is an example of a transactional resource. It would be obvious to one of ordinary skill in the art that the lock could control the row or the table).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the database table usage of Antani with the workload resource usage including CPU and memory usage of Di Balsamo. A person having ordinary skill in the art would have been motivated to make this combination for the purpose of “balancing how many records get locked during a transaction and for how long the records are locked. The balancing is done in the context of other transactional work in the TPS, the priorities of the transactional work and deadlines of the transactional work” (Antani ¶ [0028]). Balancing transactional resources with priorities of transactional work requires the monitoring of database tables along with the other resources. Additionally, the balancing of priorities aids in detecting if an SLA is being met and adjusting resource allocation (Antani ¶ [0008]), which has clear benefits.
With regard to claim 8, Dube teaches a computer program product comprising: one or more computer readable storage media and program instructions collectively stored on the one or more computer readable storage media, the stored program instructions comprising (¶ [0056] states “The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention”):
program instructions to group a plurality of batch jobs based on workload resource requests and dependencies of each batch job resulting in a plurality of groups (¶ [0050] states “Workload scheduling program 150 is stored in persistent storage 708 for execution by one or more of the respective computer processors 704 via one or more memories of memory 706”. ¶ [0021] states “A request for execution of a computing job received in step 205 includes at least a list of tasks to be executed as part of the computing job, as well as any dependencies required for execution of those tasks … Additionally, a dependency for a task can also include a data dependency which prevents the execution of a task unless a specific portion of data is available”. ¶ [0023] states “In order to determine if a task can be executed by a data processing element, workload scheduling program 150 determines if the type of computation required for the task can be performed by a given data processing element”. ¶ [0045] states “FIG. 6A depicts a first feasible execution mapping for executing computing job 120 on heterogeneous computing device 110, generally designated 600, in accordance with an embodiment of the present invention. Tasks listed inside of data processing elements represent tasks performed by those data processing elements, while data sets listed inside data storage elements represent data sets provided to data processing elements by those data storage elements”. Examiner’s Note: the workload scheduling program is executed on a processor. Tasks can depend on another task or a specific data source. Additionally, tasks can depend on a specific computing resource being available. FIG. 4 shows the types of computing resources and data sources available. FIG. 5 shows the tasks (the non-underlined numbers) grouped to each computing resource);
program instructions to schedule the plurality of batch jobs based on the plurality of groups (¶ [0024] states “Using the information represented in the task and data graph, workload scheduling program 150 assigns the task, or set of tasks, which must be executed first to one or more data processing elements identified as capable of executing that task in the resource graph”. ¶ [0037] states “Workload scheduling program 150 selects the mapping which receives the highest total value for execution on heterogeneous computing device 110”. Examiner’s Note: the workload scheduling program first creates potential schedules called mappings. Then the workload scheduling program selects the best mapping and schedules the task);
Dube does not explicitly teach the monitoring, identifying, and reducing steps.
However, in an analogous art, Di Balsamo teaches program instructions to monitor workload resource usage of system for running the plurality of batch jobs and a plurality of transaction workloads (¶ [0069] states “The resource manager 406 may monitor the resource pool and determine a resource pool parameter … For example, the resource pool parameter may include a CPU utilization level, the number of resources in the resource pool, or the ratio the number of jobs in the workload plan to the number of resources in the resource pool.” ¶ [0062] states “Jobs may include at least two types of workloads including batch jobs and transactional jobs. Batch jobs may be scheduled and stored in the workload plan 403 at time of creation of the workload plan 403 … Transactional jobs, however, may process data in real time and may not be known of prior to creation of the workload plan 403”. Examiner’s Note: the resources that resource manager monitors are for the execution of the jobs. Jobs can either be batch jobs or transaction workloads);
program instructions to identify one or more scheduled transaction workloads that will not be able to be completed in under a preset time threshold (¶ [0073] states “Resource allocation 514 may also include determining whether the job forecast exceeds the job deadline 516”);
and program instructions to reduce a resource quota of one or more batch jobs of the plurality of batch jobs based on type of resource that is needed for the one or more scheduled transaction workloads (¶ [0062] states “In an embodiment, the policy evaluator 400 may be configured to operate only in the case of critical jobs. This may allow the policy evaluator 400 to dynamically expand or reduce a resource pool 410 when the allocation schedule includes one or more important jobs”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the monitoring of computer resources used by either batch jobs or transactional jobs, the identifying of jobs that exceed a deadline, and the dynamic reduction of a resource pool of Di Balsamo with the grouping and scheduling of jobs based on workload and resource dependencies of Dube. A person having ordinary skill in the art would have been motivated to make this combination to “improve responses to changing components, changing workload, and changing environmental conditions, while minimizing the operating costs and reducing violations of the SLAs” (Di Balsamo ¶ [0042]).
Dube and Di Balsamo do not teach reducing a resource quota for a batch job based on the resources needed by a transaction workload.
However, in an analogous art, Antani teaches and program instructions to reduce a resource quota of one or more batch jobs of the plurality of batch jobs based on type of resource that is needed for the one or more scheduled transaction workloads (¶ [0004] states “there are Long Running Transactions ("LRTs") and OnLine Transactions (OLTs)”. ¶ [0056] states “Step 226 involves identifying the transaction(s) that is(are) using resources needed by the OLT. More particularly, the CPM slows down the LRT(s) identified/selected in step 226 by adjusting how many records are to be processed in each sub-transaction of the LRT(s) and/or increases/decreases the time period between commit operations of the LRT(s). The number of records is adjusted by changing the value of parameter "X" of the above-described ALGORITHM 2. Similarly, the time period between commit operations is adjusted by changing the values of the parameter "X" and/or "Y" of the above-described ALGORITHM 2”. Examiner’s Note: the LRTs and OLTs of Antani are analogous to the batch jobs and transactional jobs of Di Balsamo respectively. See ¶ [0004] for more detail. ¶ [0056] states that the OLT needs resources of the LRT. The resource in this case is the database row whose lock is currently held by the LRT. Antani teaches that by changing the resource values of “X” and “Y”, the OLT can acquire the resources it needs. In other words, because of the resources needed by the OLT, or transactional workload, the resources of the LRT, or batch job, are reduced. See ¶ [0030] – [0031] for a full description of “ALGORITHM 2”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the process of reducing resources of LRTs due to OLTs of Antani with the grouping, scheduling, monitoring, identifying, and reducing steps of Dube and Di Balsamo. A person having ordinary skill in the art would have been motivated to make this combination for the purpose of controlling checkpoint intervals at a fine-grained level for LRTs without outside influence and to make intelligent decisions about which LRT to throttle up or down (Antani ¶ [0019]). Additionally, one of ordinary skill in the art would recognize that the steps of the process described in Antani ¶ [0052] – [0057] are for the purpose of reducing SLA violations, which has clear benefits.
With regard to claim 9, Dube, Di Balsamo, and Antani teach the computer program product of claim 8. To reestablish the teaching, Di Balsamo teaches batch jobs (¶ [0062] states “Jobs may include at least two types of workloads including batch jobs and transactional jobs”). Dube additionally teaches wherein the program instructions to schedule the plurality of batch jobs based on the grouping further comprise: program instructions to schedule batch jobs of the plurality of batch jobs from different groups of the plurality of groups at a same time which lowers competition between resources (¶ [0040] states “Having no prerequisite connection between tasks 302 and 303 indicates that both tasks can execute simultaneously once task 301 completes execution”. ¶ [0041] states “As task 302 has no dashed line connections to any data sets, task 302 has no data dependencies and does not require any data sets to be available in order for it to execute. Task 303 has a data dependency of both data set 311 and 312”. Examiner’s Note: tasks can be substituted by batch jobs. ¶ [0040] and [0041] are referring to FIG. 3. Tasks 302 and 303 belong to different groups because although they both depend on task 301, task 303 also depends on data sets 311 and 312. Once dependency task 301 is completed and task 303 has access to data sets 311 and 312, tasks 302 and 303 can be executed simultaneously. In other words, they can be scheduled for the same time slot).
With regard to claim 10, Dube, Di Balsamo, and Antani teach the computer program product of claim 8. Di Balsamo additionally teaches wherein the workload resource usage includes database table usage, CPU usage, and memory usage (¶ [0068] states “In an embodiment the resource range may have an upper resource limit set at eighty percent (80%) CPU utilization so that when the resource pool parameter is represented by a CPU utilization level, a greater than 80% CPU utilization violates the SLA policy” and “Other computing parameters may be used including, but not limited to, the quantity of free memory”. ¶ [0069] states “The resource manager 406 may monitor the resource pool and determine a resource pool parameter. The resource pool parameter may be a representation of computing resources in the resource pool”. Examiner’s Note: one of ordinary skill in the art would recognize that the quantity of free memory and used memory are interchangeable).
Antani also teaches wherein the workload resource usage includes database table usage, CPU usage, and memory usage (¶ [0008] states “WorkLoad Managers (WLMs) are typically found in TPSs”. ¶ [0048] states “The WLM continuously monitors transaction processing to determine when a transaction processing job is at risk of completion”. ¶ [0053] states “the LRT obtains an exclusive lock on transactional resources (e.g., a row in a table of a database 116 of FIG. 1)”. Examiner’s Note: the WLM monitors transaction processing which would include workload resource usage. ¶ [0053] states a database row is an example of a transactional resource. It would be obvious to one of ordinary skill in the art that the lock could control the row or the table).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine database table usage of Antani with the workload resource usage including CPU and memory usage of Di Balsamo. A person having ordinary skill in the art would have been motivated to make this combination for the purpose of “balancing how many records get locked during a transaction and for how long the records are locked. The balancing is done in the context of other transactional work in the TPS, the priorities of the transactional work and deadlines of the transactional work” (Antani ¶ [0028]). Balancing transactional resources with priorities of transactional work requires the monitoring of database tables along with the other resources. Additionally, the balancing of priorities aids in detecting if an SLA is being met and adjusting resource allocation (Antani ¶ [0008]), which has clear benefits.
With regard to claim 15, Dube teaches a computer system comprising: one or more computer processors; one or more computer readable storage media; program instructions collectively stored on the one or more computer readable storage media for execution by at least one of the one or more computer processors, the stored program instructions comprising (¶ [0056] states “The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention”. ¶ [0061] states “These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks”):
program instructions to group a plurality of batch jobs based on workload resource requests and dependencies of each batch job resulting in a plurality of groups (¶ [0050] states “Workload scheduling program 150 is stored in persistent storage 708 for execution by one or more of the respective computer processors 704 via one or more memories of memory 706”. ¶ [0021] states “A request for execution of a computing job received in step 205 includes at least a list of tasks to be executed as part of the computing job, as well as any dependencies required for execution of those tasks … Additionally, a dependency for a task can also include a data dependency which prevents the execution of a task unless a specific portion of data is available”. ¶ [0023] states “In order to determine if a task can be executed by a data processing element, workload scheduling program 150 determines if the type of computation required for the task can be performed by a given data processing element”. ¶ [0045] states “FIG. 6A depicts a first feasible execution mapping for executing computing job 120 on heterogeneous computing device 110, generally designated 600, in accordance with an embodiment of the present invention. Tasks listed inside of data processing elements represent tasks performed by those data processing elements, while data sets listed inside data storage elements represent data sets provided to data processing elements by those data storage elements”. Examiner’s Note: the workload scheduling program is executed on a processor. Tasks can depend on another task or a specific data source. Additionally, tasks can depend on a specific computing resource being available. FIG. 4 shows the types of computing resources and data sources available. FIG. 5 shows the tasks (the non-underlined numbers) grouped to each computing resource);
program instructions to schedule the plurality of batch jobs based on the plurality of groups (¶ [0024] states “Using the information represented in the task and data graph, workload scheduling program 150 assigns the task, or set of tasks, which must be executed first to one or more data processing elements identified as capable of executing that task in the resource graph”. ¶ [0037] states “Workload scheduling program 150 selects the mapping which receives the highest total value for execution on heterogeneous computing device 110”. Examiner’s Note: the workload scheduling program first creates potential schedules called mappings. Then the workload scheduling program selects the best mapping and schedules the task);
Dube does not explicitly teach the monitoring, identifying, and reducing steps.
However, in an analogous art, Di Balsamo teaches program instructions to monitor workload resource usage of system for running the plurality of batch jobs and a plurality of transaction workloads (¶ [0069] states “The resource manager 406 may monitor the resource pool and determine a resource pool parameter … For example, the resource pool parameter may include a CPU utilization level, the number of resources in the resource pool, or the ratio the number of jobs in the workload plan to the number of resources in the resource pool.” ¶ [0062] states “Jobs may include at least two types of workloads including batch jobs and transactional jobs. Batch jobs may be scheduled and stored in the workload plan 403 at time of creation of the workload plan 403 … Transactional jobs, however, may process data in real time and may not be known of prior to creation of the workload plan 403”. Examiner’s Note: the resources that the resource manager monitors are for the execution of the jobs. Jobs can either be batch jobs or transaction workloads);
program instructions to identify one or more scheduled transaction workloads that will not be able to be completed in under a preset time threshold (¶ [0073] states “Resource allocation 514 may also include determining whether the job forecast exceeds the job deadline 516”);
and program instructions to reduce a resource quota of one or more batch jobs of the plurality of batch jobs based on type of resource that is needed for the one or more scheduled transaction workloads (¶ [0062] states “In an embodiment, the policy evaluator 400 may be configured to operate only in the case of critical jobs. This may allow the policy evaluator 400 to dynamically expand or reduce a resource pool 410 when the allocation schedule includes one or more important jobs”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the monitoring of computer resources used by either batch jobs or transactional jobs, the identifying of jobs that exceed a deadline, and the dynamic reduction of a resource pool of Di Balsamo with the grouping and scheduling of jobs based on workload and resource dependencies of Dube. A person having ordinary skill in the art would have been motivated to make this combination to “improve responses to changing components, changing workload, and changing environmental conditions, while minimizing the operating costs and reducing violations of the SLAs” (Di Balsamo ¶ [0042]).
Dube and Di Balsamo do not teach reducing a resource quota for a batch job based on the resources needed by a transaction workload.
However, in an analogous art, Antani teaches and program instructions to reduce a resource quota of one or more batch jobs of the plurality of batch jobs based on type of resource that is needed for the one or more scheduled transaction workloads (¶ [0004] states “there are Long Running Transactions ("LRTs") and OnLine Transactions (OLTs)”. ¶ [0056] states “Step 226 involves identifying the transaction(s) that is(are) using resources needed by the OLT. More particularly, the CPM slows down the LRT(s) identified/selected in step 226 by adjusting how many records are to be processed in each sub-transaction of the LRT(s) and/or increases/decreases the time period between commit operations of the LRT(s). The number of records is adjusted by changing the value of parameter "X" of the above-described ALGORITHM 2. Similarly, the time period between commit operations is adjusted by changing the values of the parameter "X" and/or "Y" of the above-described ALGORITHM 2”. Examiner’s Note: the LRTs and OLTs of Antani are analogous to the batch jobs and transactional jobs of Di Balsamo respectively. See ¶ [0004] for more detail. ¶ [0056] states that the OLT needs resources of the LRT. The resource in this case is the database row whose lock is currently held by the LRT. Antani teaches that by changing the resource values of “X” and “Y”, the OLT can acquire the resources it needs. In other words, because of the resources needed by the OLT, or transactional workload, the resources of the LRT, or batch job, are reduced. See ¶ [0030] – [0031] for a full description of “ALGORITHM 2”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the process of reducing resources of LRTs due to OLTs of Antani with the grouping, scheduling, monitoring, identifying, and reducing steps of Dube and Di Balsamo. A person having ordinary skill in the art would have been motivated to make this combination for the purpose of controlling checkpoint intervals at a fine-grained level for LRTs without outside influence and to make intelligent decisions about which LRT to throttle up or down (Antani ¶ [0019]). Additionally, one of ordinary skill in the art would recognize that the steps of the process described in Antani ¶ [0052] – [0057] are for the purpose of reducing SLA violations, which has clear benefits.
With regard to claim 16, Dube, Di Balsamo, and Antani teach the computer system of claim 15. To reestablish the teaching, Di Balsamo teaches batch jobs (¶ [0062] states “Jobs may include at least two types of workloads including batch jobs and transactional jobs”). Dube additionally teaches wherein the program instructions to schedule the plurality of batch jobs based on the grouping further comprise: program instructions to schedule batch jobs of the plurality of batch jobs from different groups of the plurality of groups at a same time which lowers competition between resources (¶ [0040] states “Having no prerequisite connection between tasks 302 and 303 indicates that both tasks can execute simultaneously once task 301 completes execution”. ¶ [0041] states “As task 302 has no dashed line connections to any data sets, task 302 has no data dependencies and does not require any data sets to be available in order for it to execute. Task 303 has a data dependency of both data set 311 and 312”. Examiner’s Note: tasks can be substituted by batch jobs. ¶ [0040] and [0041] are referring to FIG. 3. Tasks 302 and 303 belong to different groups because although they both depend on task 301, task 303 also depends on data sets 311 and 312. Once dependency task 301 is completed and task 303 has access to data sets 311 and 312, tasks 302 and 303 can be executed simultaneously. In other words, they can be scheduled for the same time slot).
Claims 4, 11, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dube in view of Di Balsamo and Antani, and further in view of Bird et al., Pub. No. US 20140201756 A1 (hereafter Bird).
With regard to claim 4, Dube, Di Balsamo, and Antani teach the method of claim 1. Di Balsamo also teaches wherein identifying the one or more scheduled transaction workloads that will not be able to be completed in under the preset time threshold is based on a workload resource usage limit (¶ [0073] states “Resource allocation 514 may also include determining whether the job forecast exceeds the job deadline 516”).
Although Di Balsamo teaches SLA upper and lower limits of resources (¶ [0072] – [0074]), Dube, Di Balsamo, and Antani do not explicitly teach a resource usage limit as in a maximum amount of resource usage that cannot be crossed.
However, in an analogous art, Bird teaches wherein identifying the one or more scheduled transaction workloads that will not be able to be completed in under the preset time threshold is based on a workload resource usage limit (¶ [0032] states “"Hard limits" provide the capability for the user to specify a strict processor usage consumption limit for a workload, specified as a percentage of the overall processor capacity available on the computer system”. ¶ [0033] states “workload manager 16 has the capability to apply "hard limits"”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the hard resource limit of Bird with the identifying of workloads that exceed a job deadline of Dube, Di Balsamo, and Antani. It would have been obvious that workloads may not be able to meet a deadline because of a resource limit. A person having ordinary skill in the art would have been motivated to make this combination so that “intensive workloads can be strictly limited in the presence of other work to ensure they do not impact the response times or expected performance of higher priority work” (Bird ¶ [0037]). One of ordinary skill in the art would recognize the benefits of controlling intensive workloads using resource usage limits.
With regard to claim 11, Dube, Di Balsamo, and Antani teach the computer program product of claim 8. Di Balsamo also teaches wherein the program instructions to identify the one or more scheduled transaction workloads that will not be able to be completed in under the preset time threshold is based on a workload resource usage limit (¶ [0073] states “Resource allocation 514 may also include determining whether the job forecast exceeds the job deadline 516”).
Although Di Balsamo teaches SLA upper and lower limits of resources (¶ [0072] – [0074]), Dube, Di Balsamo, and Antani do not explicitly teach a resource usage limit as in a maximum amount of resource usage that cannot be crossed.
However, in an analogous art, Bird teaches wherein the program instructions to identify the one or more scheduled transaction workloads that will not be able to be completed in under the preset time threshold is based on a workload resource usage limit (¶ [0032] states “"Hard limits" provide the capability for the user to specify a strict processor usage consumption limit for a workload, specified as a percentage of the overall processor capacity available on the computer system”. ¶ [0033] states “workload manager 16 has the capability to apply "hard limits"”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the hard resource limit of Bird with the identifying of workloads that exceed a job deadline of Dube, Di Balsamo, and Antani. It would have been obvious that workloads may not be able to meet a deadline because of a resource limit. A person having ordinary skill in the art would have been motivated to make this combination so that “intensive workloads can be strictly limited in the presence of other work to ensure they do not impact the response times or expected performance of higher priority work” (Bird ¶ [0037]). One of ordinary skill in the art would recognize the benefits of controlling intensive workloads using resource usage limits.
With regard to claim 17, Dube, Di Balsamo, and Antani teach the computer system of claim 15. Di Balsamo also teaches wherein the program instructions to identify the one or more scheduled transaction workloads that will not be able to be completed in under the preset time threshold is based on a workload resource usage limit (¶ [0073] states “Resource allocation 514 may also include determining whether the job forecast exceeds the job deadline 516”).
Although Di Balsamo teaches SLA upper and lower limits of resources (¶ [0072] – [0074]), Dube, Di Balsamo, and Antani do not explicitly teach a resource usage limit as in a maximum amount of resource usage that cannot be crossed.
However, in an analogous art, Bird teaches wherein the program instructions to identify the one or more scheduled transaction workloads that will not be able to be completed in under the preset time threshold is based on a workload resource usage limit (¶ [0032] states “"Hard limits" provide the capability for the user to specify a strict processor usage consumption limit for a workload, specified as a percentage of the overall processor capacity available on the computer system”. ¶ [0033] states “workload manager 16 has the capability to apply "hard limits"”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the hard resource limit of Bird with the identifying of workloads that exceed a job deadline of Dube, Di Balsamo, and Antani. It would have been obvious that workloads may not be able to meet a deadline because of a resource limit. A person having ordinary skill in the art would have been motivated to make this combination so that “intensive workloads can be strictly limited in the presence of other work to ensure they do not impact the response times or expected performance of higher priority work” (Bird ¶ [0037]). One of ordinary skill in the art would recognize the benefits of controlling intensive workloads using resource usage limits.
Claims 5, 12, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dube, in view of Di Balsamo and Antani, and further in view of Sampathkumar, Pub. No. US 20140089282 A1 (hereafter Sampathkumar).
With regard to claim 5, Dube, Di Balsamo, and Antani teach the method of claim 1. Di Balsamo additionally teaches further comprising: wherein the type of resource that is needed for the one or more scheduled transaction workloads is CPU (¶ [0062] states “Jobs may include at least two types of workloads including batch jobs and transactional jobs”. ¶ [0073] states “Resource allocation may include determining a job forecast for the one or more jobs 502, 504 in the time slot 506”. ¶ [0068] states “In an embodiment the resource range may have an upper resource limit set at eighty percent (80%) CPU utilization so that when the resource pool parameter is represented by a CPU utilization level, a greater than 80% CPU utilization violates the SLA policy”. Examiner’s Note: it would be obvious to one of ordinary skill in the art that the jobs scheduled in ¶ [0073] and shown in FIG. 5A could be transactional workloads. ¶ [0068] gives evidence that these jobs are dependent on CPU resources);
reducing, by the one or more processors, a CPU quota of one or more running batch jobs from a group that uses different tables than the one or more scheduled transaction workloads (¶ [0072] states “Resource allocation 514 may occur prior to each time slot 506, 507, 508, and 509”. ¶ [0069] states “The resource pool parameter may be a representation of computing resources in the resource pool. For example, the resource pool parameter may include a CPU utilization level”. ¶ [0062] states “In an embodiment, the policy evaluator 400 may be configured to operate only in the case of critical jobs. This may allow the policy evaluator 400 to dynamically expand or reduce a resource pool 410 when the allocation schedule includes one or more important jobs”. Examiner’s Note: the resource allocation that occurs before the time slot may be an initial CPU quota).
Antani additionally teaches and providing, by the one or more processors, the reduced CPU quota to the one or more scheduled transaction workloads (¶ [0056] states “Step 226 involves identifying the transaction(s) that is(are) using resources needed by the OLT. More particularly, the CPM slows down the LRT(s) identified/selected in step 226 by adjusting how many records are to be processed in each sub-transaction of the LRT(s) and/or increases/decreases the time period between commit operations of the LRT(s). The number of records is adjusted by changing the value of parameter "X" of the above-described ALGORITHM 2. Similarly, the time period between commit operations is adjusted by changing the values of the parameter "X" and/or "Y" of the above-described ALGORITHM 2”. Examiner’s Note: the CPM slows down the LRT because it is using resources that the OLT needs. It would be obvious to one of ordinary skill in the art that the processing resources freed up by slowing down the LRT would be used by the OLT, which is analogous to the transaction workload).
Although Antani teaches workloads can access the same transactional resources such as tables, Dube, Di Balsamo, and Antani do not explicitly teach workloads accessing different tables.
However, in an analogous art, Sampathkumar teaches reducing, by the one or more processors, a CPU quota of one or more running batch jobs from a group that uses different tables than the one or more scheduled transaction workloads (¶ [0026] states “Manager 150 causes the system to dequeue requests from different batches, instead of the default behavior of finishing an entire batch before beginning to pull requests from a subsequently received batch. Thus, the different requests executed by the processing managers come from different batches, which can reduce or eliminate contention for shared resources, such as rows of a table”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine different tasks accessing different tables of Sampathkumar with the reduction of batch job CPU quota of Dube, Di Balsamo, and Antani. A person having ordinary skill in the art would have been motivated to make this combination to “reduce or eliminate contention for shared resources” and because “the system can improve parallelism in processing the batches” (Sampathkumar ¶ [0026] and ¶ [0014]).
With regard to claim 12, Dube, Di Balsamo, and Antani teach the computer program product of claim 8. Di Balsamo additionally teaches further comprising: wherein the type of resource that is needed for the one or more scheduled transaction workloads is CPU (¶ [0062] states “Jobs may include at least two types of workloads including batch jobs and transactional jobs”. ¶ [0073] states “Resource allocation may include determining a job forecast for the one or more jobs 502, 504 in the time slot 506”. ¶ [0068] states “In an embodiment the resource range may have an upper resource limit set at eighty percent (80%) CPU utilization so that when the resource pool parameter is represented by a CPU utilization level, a greater than 80% CPU utilization violates the SLA policy”. Examiner’s Note: it would be obvious to one of ordinary skill in the art that the jobs scheduled in ¶ [0073] and shown in FIG. 5A could be transactional workloads. ¶ [0068] gives evidence that these jobs are dependent on CPU resources);
program instructions to reduce a CPU quota of one or more running batch jobs from a group that uses different tables than the one or more scheduled transaction workloads (¶ [0072] states “Resource allocation 514 may occur prior to each time slot 506, 507, 508, and 509”. ¶ [0069] states “The resource pool parameter may be a representation of computing resources in the resource pool. For example, the resource pool parameter may include a CPU utilization level”. ¶ [0062] states “In an embodiment, the policy evaluator 400 may be configured to operate only in the case of critical jobs. This may allow the policy evaluator 400 to dynamically expand or reduce a resource pool 410 when the allocation schedule includes one or more important jobs”. Examiner’s Note: the resource allocation that occurs before the time slot may be an initial CPU quota);
Antani additionally teaches and program instructions to provide the reduced CPU quota to the one or more scheduled transaction workloads (¶ [0056] states “Step 226 involves identifying the transaction(s) that is(are) using resources needed by the OLT. More particularly, the CPM slows down the LRT(s) identified/selected in step 226 by adjusting how many records are to be processed in each sub-transaction of the LRT(s) and/or increases/decreases the time period between commit operations of the LRT(s). The number of records is adjusted by changing the value of parameter "X" of the above-described ALGORITHM 2. Similarly, the time period between commit operations is adjusted by changing the values of the parameter "X" and/or "Y" of the above-described ALGORITHM 2”. Examiner’s Note: the CPM slows down the LRT because it is using resources that the OLT needs. It would be obvious to one of ordinary skill in the art that the processing resources freed up by slowing down the LRT would be used by the OLT, which is analogous to the transaction workload).
Although Antani teaches workloads can access the same transactional resources such as tables, Dube, Di Balsamo, and Antani do not explicitly teach workloads accessing different tables.
However, in an analogous art, Sampathkumar teaches program instructions to reduce a CPU quota of one or more running batch jobs from a group that uses different tables than the one or more scheduled transaction workloads (¶ [0026] states “Manager 150 causes the system to dequeue requests from different batches, instead of the default behavior of finishing an entire batch before beginning to pull requests from a subsequently received batch. Thus, the different requests executed by the processing managers come from different batches, which can reduce or eliminate contention for shared resources, such as rows of a table”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the different tasks accessing different tables of Sampathkumar with the reduction of batch job CPU quota of Dube, Di Balsamo, and Antani. A person having ordinary skill in the art would have been motivated to make this combination to “reduce or eliminate contention for shared resources” and because “the system can improve parallelism in processing the batches” (Sampathkumar ¶ [0026] and ¶ [0014]).
With regard to claim 18, Dube, Di Balsamo, and Antani teach the computer system of claim 15. Di Balsamo additionally teaches further comprising: wherein the type of resource that is needed for the one or more scheduled transaction workloads is CPU (¶ [0062] states “Jobs may include at least two types of workloads including batch jobs and transactional jobs”. ¶ [0073] states “Resource allocation may include determining a job forecast for the one or more jobs 502, 504 in the time slot 506”. ¶ [0068] states “In an embodiment the resource range may have an upper resource limit set at eighty percent (80%) CPU utilization so that when the resource pool parameter is represented by a CPU utilization level, a greater than 80% CPU utilization violates the SLA policy”. Examiner’s Note: it would be obvious to one of ordinary skill in the art that the jobs scheduled in ¶ [0073] and shown in FIG. 5A could be transactional workloads. ¶ [0068] gives evidence that these jobs are dependent on CPU resources);
program instructions to reduce a CPU quota of one or more running batch jobs from a group that uses different tables than the one or more scheduled transaction workloads (¶ [0072] states “Resource allocation 514 may occur prior to each time slot 506, 507, 508, and 509”. ¶ [0069] states “The resource pool parameter may be a representation of computing resources in the resource pool. For example, the resource pool parameter may include a CPU utilization level”. ¶ [0062] states “In an embodiment, the policy evaluator 400 may be configured to operate only in the case of critical jobs. This may allow the policy evaluator 400 to dynamically expand or reduce a resource pool 410 when the allocation schedule includes one or more important jobs”. Examiner’s Notes: the resource allocation that occurs before the time slot may be an initial CPU quota);
Antani additionally teaches and program instructions to provide the reduced CPU quota to the one or more scheduled transaction workloads (¶ [0056] states “Step 226 involves identifying the transaction(s) that is(are) using resources needed by the OLT. More particularly, the CPM slows down the LRT(s) identified/selected in step 226 by adjusting how many records are to be processed in each sub-transaction of the LRT(s) and/or increases/decreases the time period between commit operations of the LRT(s). The number of records is adjusted by changing the value of parameter "X" of the above-described ALGORITHM 2. Similarly, the time period between commit operations is adjusted by changing the values of the parameter "X" and/or "Y" of the above-described ALGORITHM 2”. Examiner’s Note: the CPM slows down the LRT because it is using resources that the OLT needs. It would be obvious to one of ordinary skill in the art that the processing resources freed up by slowing down the LRT would be used by the OLT, which is analogous to the transaction workload).
Although Antani teaches workloads can access the same transactional resources such as tables, Dube, Di Balsamo, and Antani do not explicitly teach workloads accessing different tables.
However, in an analogous art, Sampathkumar teaches program instructions to reduce a CPU quota of one or more running batch jobs from a group that uses different tables than the one or more scheduled transaction workloads (¶ [0026] states “Manager 150 causes the system to dequeue requests from different batches, instead of the default behavior of finishing an entire batch before beginning to pull requests from a subsequently received batch. Thus, the different requests executed by the processing managers come from different batches, which can reduce or eliminate contention for shared resources, such as rows of a table”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the different tasks accessing different tables of Sampathkumar with the reduction of batch job CPU quota of Dube, Di Balsamo, and Antani. A person having ordinary skill in the art would have been motivated to make this combination to “reduce or eliminate contention for shared resources” and because “the system can improve parallelism in processing the batches” (Sampathkumar ¶ [0026] and ¶ [0014]).
Claims 6, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Dube, in view of Di Balsamo, Antani, and Sampathkumar, and further in view of Boutnaru, Pub. No. US 20180349159 A1 (hereafter Boutnaru).
With regard to claim 6, Dube, Di Balsamo, Antani, and Sampathkumar teach the method of claim 5. To reestablish the teaching, Di Balsamo teaches batch jobs (¶ [0062] states “Jobs may include at least two types of workloads including batch jobs and transactional jobs”).
Dube, Di Balsamo, Antani, and Sampathkumar do not teach selecting a batch job based on policies.
However, in an analogous art, Boutnaru teaches further comprising: selecting, by the one or more processors, the one or more running batch jobs based on one or more policies, wherein the one or more policies comprise at least one of selecting one running batch job with a biggest CPU usage, selecting a set of running batch jobs with top CPU usage, and selecting running batch jobs at random (¶ [0039] states “In some examples, one or more runtime environments may be selected at random to add diversity. In some examples, one or more runtime environments may be selected based on extreme values, such as the fastest response time, least CPU usage, most CPU usage, and/or the like”. Examiner’s Note: in light of the 35 U.S.C. 112(b) issue raised, examiner interprets “biggest” and “top” to be synonymous with “most”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the selection of a runtime environment based on most CPU usage or at random of Boutnaru with the method of reducing CPU resources and reassigning CPU resources to a transaction workload of Dube, Di Balsamo, Antani, and Sampathkumar. A person having ordinary skill in the art would have been motivated to make this combination “to optimize the runtime environment for use, and the user may use the system to provide configurations for an optimized runtime environment for one or more applications” (Boutnaru ¶ [0028]). By using the policies to select one batch job with the most CPU usage, a set of batch jobs with the most CPU usage, or batch jobs at random, the efficiency and optimization of the system can be controlled.
With regard to claim 13, Dube, Di Balsamo, Antani, and Sampathkumar teach the computer program product of claim 12. To reestablish the teaching, Di Balsamo teaches batch jobs (¶ [0062] states “Jobs may include at least two types of workloads including batch jobs and transactional jobs”).
Dube, Di Balsamo, Antani, and Sampathkumar do not teach selecting a batch job based on policies.
However, in an analogous art, Boutnaru teaches further comprising: program instructions to select the one or more running batch jobs based on one or more policies, wherein the one or more policies comprise at least one of selecting one running batch job with a biggest CPU usage, selecting a set of running batch jobs with top CPU usage, and selecting running batch jobs at random (¶ [0039] states “In some examples, one or more runtime environments may be selected at random to add diversity. In some examples, one or more runtime environments may be selected based on extreme values, such as the fastest response time, least CPU usage, most CPU usage, and/or the like”. Examiner’s Note: in light of the 35 U.S.C. 112(b) issue raised, examiner interprets “biggest” and “top” to be synonymous with “most”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the selection of a runtime environment based on most CPU usage or at random of Boutnaru with the method of reducing CPU resources and reassigning CPU resources to a transaction workload of Dube, Di Balsamo, Antani, and Sampathkumar. A person having ordinary skill in the art would have been motivated to make this combination “to optimize the runtime environment for use, and the user may use the system to provide configurations for an optimized runtime environment for one or more applications” (Boutnaru ¶ [0028]). By using the policies to select one batch job with the most CPU usage, a set of batch jobs with the most CPU usage, or batch jobs at random, the efficiency and optimization of the system can be controlled.
With regard to claim 19, Dube, Di Balsamo, Antani, and Sampathkumar teach the computer system of claim 18. To reestablish the teaching, Di Balsamo teaches batch jobs (¶ [0062] states “Jobs may include at least two types of workloads including batch jobs and transactional jobs”).
Dube, Di Balsamo, Antani, and Sampathkumar do not teach selecting a batch job based on policies.
However, in an analogous art, Boutnaru teaches further comprising: program instructions to select the one or more running batch jobs based on one or more policies, wherein the one or more policies comprise at least one of selecting one running batch job with a biggest CPU usage, selecting a set of running batch jobs with top CPU usage, and selecting running batch jobs at random (¶ [0039] states “In some examples, one or more runtime environments may be selected at random to add diversity. In some examples, one or more runtime environments may be selected based on extreme values, such as the fastest response time, least CPU usage, most CPU usage, and/or the like”. Examiner’s Note: in light of the 35 U.S.C. 112(b) issue raised, examiner interprets “biggest” and “top” to be synonymous with “most”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the selection of a runtime environment based on most CPU usage or at random of Boutnaru with the method of reducing CPU resources and reassigning CPU resources to a transaction workload of Dube, Di Balsamo, Antani, and Sampathkumar. A person having ordinary skill in the art would have been motivated to make this combination “to optimize the runtime environment for use, and the user may use the system to provide configurations for an optimized runtime environment for one or more applications” (Boutnaru ¶ [0028]). By using the policies to select one batch job with the most CPU usage, a set of batch jobs with the most CPU usage, or batch jobs at random, the efficiency and optimization of the system can be controlled.
Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Dube, in view of Di Balsamo and Antani, and further in view of Poothia et al., Pub. No. US 20200042338 A1 (hereafter Poothia).
With regard to claim 7, Dube, Di Balsamo, and Antani teach the method of claim 1. To reestablish the environment involving batch jobs and transaction workloads, Di Balsamo teaches scheduled transaction workloads and batch jobs (¶ [0062] states “Jobs may include at least two types of workloads including batch jobs and transactional jobs”).
Di Balsamo additionally teaches further comprising: wherein the type of resource that is needed for the one or more scheduled transaction workloads is memory (¶ [0068] states “the SLA may be a policy represented as a resource range having an upper and lower resource limit of computing parameters for the resource pool 410 … Other computing parameters may be used including, but not limited to, the quantity of free memory”. Examiner’s Note: it would be obvious to one of ordinary skill in the art that tracking the quantity of free memory could also include tracking the quantity of used memory);
Dube, Di Balsamo, and Antani do not explicitly teach selecting, choosing based on history peak memory usage, reducing memory to history peak memory usage, releasing memory, and assigning released memory to scheduled transaction workloads.
However, in an analogous art, Poothia teaches selecting, by the one or more processors, one or more running batch jobs from different groups of the plurality of groups (¶ [0095] states “If the recommendation engine 305 finds a virtual machine (e.g., the virtual machines 320) to be constrained, at operation 430, the recommendation engine determines how much additional memory that particular virtual machine needs to become not constrained. Similarly, if a virtual machine is not constrained (e.g., has additional memory that is not used), the recommendation engine 305 may determine how much extra memory that particular virtual machine has, which may then be moved to constrained virtual machines”. Examiner’s Note: the recommendation engine is selecting a virtual machine. It is interpreted that virtual machines could be substituted with the batch jobs teaching of Di Balsamo as explained earlier);
choosing, by the one or more processors, from the selected one or more running batch jobs, at least one batch job whose history peak memory usage is lower than a memory quota for the at least one batch job (¶ [0104] states “The “upper baseline” identifies one or more peak or highest actual memory usage values of a particular virtual machine within the second predetermined time period (e.g., one day)”. ¶ [0111] states “if the memory resizing recommendation system 340 determines at the operation 520 that the upper baseline is within the predetermined threshold of the operation 520 (and/or does not have a high number of page faults), it means that the virtual machine is not consuming all or substantially all of its initial memory allocation”. ¶ [0109] states “In some embodiments, the predetermined threshold may be based on a specific percentage of the initial memory allocation”. ¶ [0114] states “Upon designating a virtual machine (e.g., one of the virtual machines 320) as constrained for memory, the memory resizing recommendation system 340 determines a revised or optimal memory allocation for that particular virtual machine via process 600 of FIG. 6”. ¶ [0125] states “The analysis for the not constrained case is same as outlined above in the process 600”. Examiner’s Note: the “upper baseline” represents the history peak memory usage. The initial memory allocation is the memory quota. Process 600 could be applied to when the virtual machine has extra memory, so process 600 involves choosing the virtual machine that has a previous peak that is lower than its initial allocation);
reducing, by the one or more processors, the memory quota of the at least one batch job to the history peak memory usage leaving a reserve amount of memory (¶ [0104] states “The “upper baseline” identifies one or more peak or highest actual memory usage values of a particular virtual machine within the second predetermined time period (e.g., one day)”. ¶ [0125] states “When the process 600 is performed for a not constrained memory, the virtual machine being analyzed has more memory than it uses, and the process 600 determines how much extra memory the virtual machine has that may be taken away from that virtual machine … The analysis for the not constrained case is same as outlined above in the process 600”. ¶ [0115] states “the memory resizing recommendation system 340 may add a predetermined fraction to the upper baseline computed for active memory usage from the current memory usage profile. For example, if the predetermined fraction is 20% and the upper baseline is 100% of initial memory allocation, the memory resizing recommendation system 340 may compute the initial revised memory allocation as the sum of 100% upper baseline plus 20% such that the initial revised memory allocation is 120% of the initial memory allocation”. Examiner’s Note: in ¶ [0115], the memory resizing recommendation system resizes the memory in order to give the virtual machine additional memory. It does so by adding a predetermined fraction. When process 600 is applied to a virtual machine with extra memory, it is interpreted that the predetermined fraction would be subtracted from the upper baseline. In light of the upper baseline representing a previous peak memory usage, it would be obvious to one of ordinary skill in the art that as the predetermined fraction approaches and becomes zero, the new determined memory size would be equal to the upper baseline.
In the case of shrinking memory allocation, setting the memory allocation to the upper baseline would mean setting the memory allocation to a previous peak memory usage);
releasing, by the one or more processors, the reserve amount of memory (¶ [0123] states “at the operation 625, the memory resizing recommendation system 340 adjusts the initial revised memory allocation based upon the historical memory usage and computes the final revised memory allocation for a future period of time. Upon determining the final revised memory usage values, the memory resizing recommendation system 340 outputs those values at operation 630. In some embodiments, the output may be sent to the management system 315”. ¶ [0080] states “The memory resizing recommendation system 340 may, in some embodiments, convey the revised memory allocation determinations to the management system 315, which can then adjust the memory allocated to a particular virtual machine”. Examiner’s Note: when process 600 is applied to a virtual machine with extra memory, the management system would lower its memory to the previous peak as explained previously. The difference between the initial allocation and the new allocation at the previous peak memory usage is the reserve amount of memory. Since it is no longer allocated to the original virtual machine, it is released);
and providing, by the one or more processors, the released reserve amount of memory to the one or more scheduled transaction workloads (¶ [0124] states “the memory resizing recommendation system 340 may be configured to run the processes 500 and 600 on one virtual machine at a time, on all of the virtual machines simultaneously, or on a subset of virtual machines at a time”. ¶ [0125] states “when the process 600 is performed for a constrained memory situation, the virtual machine being analyzed uses more memory than allocated, so the process 600 determines the additional amount of memory to allocate to that virtual machine” and “When the process 600 is performed for a not constrained memory, the virtual machine being analyzed has more memory than it uses, and the process 600 determines how much extra memory the virtual machine has that may be taken away from that virtual machine”. Examiner’s Note: the process 600 can either increase the memory allocation or decrease the memory allocation. ¶ [0124] explains that the processes 500 and 600 can happen on multiple virtual machines simultaneously. Therefore, it would be obvious to one of ordinary skill in the art that the process 600 could happen on two virtual machines on the same host, one that needs extra memory and one that has extra memory. The process 600 would determine to reallocate memory from the virtual machine with extra memory to the virtual machine that needs memory).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the virtual machine memory reallocation process of Poothia with the method of grouping and scheduling of batch jobs, the monitoring of workload resource usage, the identifying of a job that will not meet a deadline, and the resource quota reduction of Dube, Di Balsamo, and Antani. A person having ordinary skill in the art would have been motivated to make this combination because when “memory is optimally used, workloads are run faster and with less disruption, and ultimately operation and performance of the virtual machines is improved” (Poothia ¶ [0027]).
With regard to claim 14, Dube, Di Balsamo, and Antani teach the computer program product of claim 8. To reestablish the environment involving batch jobs and transaction workloads, Di Balsamo teaches scheduled transaction workloads and batch jobs (¶ [0062] states “Jobs may include at least two types of workloads including batch jobs and transactional jobs”).
Di Balsamo additionally teaches further comprising: wherein the type of resource that is needed for the one or more scheduled transaction workloads is memory (¶ [0068] states “the SLA may be a policy represented as a resource range having an upper and lower resource limit of computing parameters for the resource pool 410 … Other computing parameters may be used including, but not limited to, the quantity of free memory”. Examiner’s Note: it would be obvious to one of ordinary skill in the art that tracking the quantity of free memory could also include tracking the quantity of used memory);
Dube, Di Balsamo, and Antani do not explicitly teach selecting, choosing based on history peak memory usage, reducing memory to history peak memory usage, releasing memory, and assigning released memory to scheduled transaction workloads.
However, in an analogous art, Poothia teaches program instructions to select one or more running batch jobs from different groups of the plurality of groups (¶ [0095] states “If the recommendation engine 305 finds a virtual machine (e.g., the virtual machines 320) to be constrained, at operation 430, the recommendation engine determines how much additional memory that particular virtual machine needs to become not constrained. Similarly, if a virtual machine is not constrained (e.g., has additional memory that is not used), the recommendation engine 305 may determine how much extra memory that particular virtual machine has, which may then be moved to constrained virtual machines”. Examiner’s Note: the recommendation engine is selecting a virtual machine. It is interpreted that virtual machines could be substituted with the batch jobs teaching of Di Balsamo as explained earlier);
program instructions to choose, from the selected one or more running batch jobs, at least one batch job whose history peak memory usage is lower than a memory quota for the at least one batch job (¶ [0104] states “The “upper baseline” identifies one or more peak or highest actual memory usage values of a particular virtual machine within the second predetermined time period (e.g., one day)”. ¶ [0111] states “if the memory resizing recommendation system 340 determines at the operation 520 that the upper baseline is within the predetermined threshold of the operation 520 (and/or does not have a high number of page faults), it means that the virtual machine is not consuming all or substantially all of its initial memory allocation”. ¶ [0109] states “In some embodiments, the predetermined threshold may be based on a specific percentage of the initial memory allocation”. ¶ [0114] states “Upon designating a virtual machine (e.g., one of the virtual machines 320) as constrained for memory, the memory resizing recommendation system 340 determines a revised or optimal memory allocation for that particular virtual machine via process 600 of FIG. 6”. ¶ [0125] states “The analysis for the not constrained case is same as outlined above in the process 600”. Examiner’s Note: the “upper baseline” represents the history peak memory usage. The initial memory allocation is the memory quota. Process 600 could be applied to when the virtual machine has extra memory, so process 600 involves choosing the virtual machine that has a previous peak that is lower than its initial allocation);
program instructions to reduce the memory quota of the at least one batch job to the history peak memory usage leaving a reserve amount of memory (¶ [0104] states “The “upper baseline” identifies one or more peak or highest actual memory usage values of a particular virtual machine within the second predetermined time period (e.g., one day)”. ¶ [0125] states “When the process 600 is performed for a not constrained memory, the virtual machine being analyzed has more memory than it uses, and the process 600 determines how much extra memory the virtual machine has that may be taken away from that virtual machine … The analysis for the not constrained case is same as outlined above in the process 600”. ¶ [0115] states “the memory resizing recommendation system 340 may add a predetermined fraction to the upper baseline computed for active memory usage from the current memory usage profile. For example, if the predetermined fraction is 20% and the upper baseline is 100% of initial memory allocation, the memory resizing recommendation system 340 may compute the initial revised memory allocation as the sum of 100% upper baseline plus 20% such that the initial revised memory allocation is 120% of the initial memory allocation”. Examiner’s Note: in ¶ [0115], the memory resizing recommendation system resizes the memory in order to give the virtual machine additional memory. It does so by adding a predetermined fraction. When process 600 is applied to a virtual machine with extra memory, it is interpreted that the predetermined fraction would be subtracted from the upper baseline. In light of the upper baseline representing a previous peak memory usage, it would be obvious to one of ordinary skill in the art that as the predetermined fraction approaches and becomes zero, the new determined memory size would be equal to the upper baseline.
In the case of shrinking memory allocation, setting the memory allocation to the upper baseline would mean setting the memory allocation to a previous peak memory usage);
program instructions to release the reserve amount of memory (¶ [0123] states “at the operation 625, the memory resizing recommendation system 340 adjusts the initial revised memory allocation based upon the historical memory usage and computes the final revised memory allocation for a future period of time. Upon determining the final revised memory usage values, the memory resizing recommendation system 340 outputs those values at operation 630. In some embodiments, the output may be sent to the management system 315”. ¶ [0080] states “The memory resizing recommendation system 340 may, in some embodiments, convey the revised memory allocation determinations to the management system 315, which can then adjust the memory allocated to a particular virtual machine”. Examiner’s Note: when process 600 is applied to a virtual machine with extra memory, the management system would lower its memory to the previous peak as explained previously. The difference between the initial allocation and the new allocation at the previous peak memory usage is the reserve amount of memory. Since it is no longer allocated to the original virtual machine, it is released);
and program instructions to provide the released reserve amount of memory to the one or more scheduled transaction workloads (¶ [0124] states “the memory resizing recommendation system 340 may be configured to run the processes 500 and 600 on one virtual machine at a time, on all of the virtual machines simultaneously, or on a subset of virtual machines at a time”. ¶ [0125] states “when the process 600 is performed for a constrained memory situation, the virtual machine being analyzed uses more memory than allocated, so the process 600 determines the additional amount of memory to allocate to that virtual machine” and “When the process 600 is performed for a not constrained memory, the virtual machine being analyzed has more memory than it uses, and the process 600 determines how much extra memory the virtual machine has that may be taken away from that virtual machine”. Examiner’s Note: the process 600 can either increase the memory allocation or decrease the memory allocation. ¶ [0124] explains that the processes 500 and 600 can happen on multiple virtual machines simultaneously. Therefore, it would be obvious to one of ordinary skill in the art that the process 600 could happen on two virtual machines on the same host, one that needs extra memory and one that has extra memory. The process 600 would determine to reallocate memory from the virtual machine with extra memory to the virtual machine that needs memory).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the virtual machine memory reallocation process of Poothia with the method of grouping and scheduling of batch jobs, the monitoring of workload resource usage, the identifying of a job that will not meet a deadline, and the resource quota reduction of Dube, Di Balsamo, and Antani. A person having ordinary skill in the art would have been motivated to make this combination because when “memory is optimally used, workloads are run faster and with less disruption, and ultimately operation and performance of the virtual machines is improved” (Poothia ¶ [0027]).
With regard to claim 20, Dube, Di Balsamo, and Antani teach the computer system of claim 15. To reestablish the environment involving batch jobs and transaction workloads, Di Balsamo teaches scheduled transaction workloads and batch jobs (¶ [0062] states “Jobs may include at least two types of workloads including batch jobs and transactional jobs”).
Di Balsamo additionally teaches further comprising: wherein the type of resource that is needed for the one or more scheduled transaction workloads is memory (¶ [0068] states “the SLA may be a policy represented as a resource range having an upper and lower resource limit of computing parameters for the resource pool 410 … Other computing parameters may be used including, but not limited to, the quantity of free memory”. Examiner’s Note: it would be obvious to one of ordinary skill in the art that tracking the quantity of free memory could also include tracking the quantity of used memory);
Dube, Di Balsamo, and Antani do not explicitly teach selecting, choosing based on history peak memory usage, reducing memory to history peak memory usage, releasing memory, and assigning released memory to scheduled transaction workloads.
However, in an analogous art, Poothia teaches program instructions to select one or more running batch jobs from different groups of the plurality of groups (¶ [0095] states “If the recommendation engine 305 finds a virtual machine (e.g., the virtual machines 320) to be constrained, at operation 430, the recommendation engine determines how much additional memory that particular virtual machine needs to become not constrained. Similarly, if a virtual machine is not constrained (e.g., has additional memory that is not used), the recommendation engine 305 may determine how much extra memory that particular virtual machine has, which may then be moved to constrained virtual machines”. Examiner’s Note: the recommendation engine is selecting a virtual machine. It is interpreted that virtual machines could be substituted with the batch jobs teaching of Di Balsamo as explained earlier);
program instructions to choose, from the selected one or more running batch jobs, at least one batch job whose history peak memory usage is lower than a memory quota for the at least one batch job (¶ [0104] states “The “upper baseline” identifies one or more peak or highest actual memory usage values of a particular virtual machine within the second predetermined time period (e.g., one day)”. ¶ [0111] states “if the memory resizing recommendation system 340 determines at the operation 520 that the upper baseline is within the predetermined threshold of the operation 520 (and/or does not have a high number of page faults), it means that the virtual machine is not consuming all or substantially all of its initial memory allocation”. ¶ [0109] states “In some embodiments, the predetermined threshold may be based on a specific percentage of the initial memory allocation”. ¶ [0114] states “Upon designating a virtual machine (e.g., one of the virtual machines 320) as constrained for memory, the memory resizing recommendation system 340 determines a revised or optimal memory allocation for that particular virtual machine via process 600 of FIG. 6”. ¶ [0125] states “The analysis for the not constrained case is same as outlined above in the process 600”. Examiner’s Note: the “upper baseline” represents the history peak memory usage. The initial memory allocation is the memory quota. Process 600 could be applied to when the virtual machine has extra memory, so process 600 involves choosing the virtual machine that has a previous peak that is lower than its initial allocation);
program instructions to reduce the memory quota of the at least one batch job to the history peak memory usage leaving a reserve amount of memory (¶ [0104] states “The “upper baseline” identifies one or more peak or highest actual memory usage values of a particular virtual machine within the second predetermined time period (e.g., one day)”. ¶ [0125] states “When the process 600 is performed for a not constrained memory, the virtual machine being analyzed has more memory than it uses, and the process 600 determines how much extra memory the virtual machine has that may be taken away from that virtual machine … The analysis for the not constrained case is same as outlined above in the process 600”. ¶ [0115] states “the memory resizing recommendation system 340 may add a predetermined fraction to the upper baseline computed for active memory usage from the current memory usage profile. For example, if the predetermined fraction is 20% and the upper baseline is 100% of initial memory allocation, the memory resizing recommendation system 340 may compute the initial revised memory allocation as the sum of 100% upper baseline plus 20% such that the initial revised memory allocation is 120% of the initial memory allocation”. Examiner’s Note: in ¶ [0115], the memory resizing recommendation system resizes the memory in order to give the virtual machine additional memory. It does so by adding a predetermined fraction. When process 600 is applied to a virtual machine with extra memory, it is interpreted that the predetermined fraction would be subtracted from the upper baseline. In light of the upper baseline representing a previous peak memory usage, it would be obvious to one of ordinary skill in the art that as the predetermined fraction approaches zero, the newly determined memory size would be equal to the upper baseline.
In the case of shrinking memory allocation, setting the memory allocation to the upper baseline would mean setting the memory allocation to a previous peak memory usage);
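For illustration only (not part of the claim mapping), the computation described in ¶ [0115] and the interpretation applied in the note above may be sketched as follows; the function and parameter names are hypothetical and do not appear in Poothia:

```python
def revised_allocation(initial_allocation, upper_baseline_pct, fraction_pct):
    """Compute a revised memory allocation per the reading of Poothia (¶ [0115]):
    the upper baseline (peak usage, expressed as a percentage of the initial
    allocation) plus a predetermined fraction. All names are illustrative."""
    revised_pct = upper_baseline_pct + fraction_pct
    return initial_allocation * revised_pct / 100

# Poothia's worked example: 100% upper baseline + 20% fraction
# yields 120% of the initial allocation.
print(revised_allocation(1000, 100, 20))  # 1200.0

# As the predetermined fraction approaches zero, the revised allocation
# approaches the upper baseline itself (the previous peak memory usage).
print(revised_allocation(1000, 80, 0))    # 800.0
```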
program instructions to release the reserve amount of memory (¶ [0123] states “at the operation 625, the memory resizing recommendation system 340 adjusts the initial revised memory allocation based upon the historical memory usage and computes the final revised memory allocation for a future period of time. Upon determining the final revised memory usage values, the memory resizing recommendation system 340 outputs those values at operation 630. In some embodiments, the output may be sent to the management system 315”. ¶ [0080] states “The memory resizing recommendation system 340 may, in some embodiments, convey the revised memory allocation determinations to the management system 315, which can then adjust the memory allocated to a particular virtual machine”. Examiner’s Note: when process 600 is applied to a virtual machine with extra memory, the management system would lower its memory to the previous peak as explained previously. The difference between the initial allocation and the new allocation at the previous peak memory usage is the reserve amount of memory. Since it is no longer allocated to the original virtual machine, it is released);
and program instructions to provide the released reserve amount of memory to the one or more scheduled transaction workloads (¶ [0124] states “the memory resizing recommendation system 340 may be configured to run the processes 500 and 600 on one virtual machine at a time, on all of the virtual machines simultaneously, or on a subset of virtual machines at a time”. ¶ [0125] states “when the process 600 is performed for a constrained memory situation, the virtual machine being analyzed uses more memory than allocated, so the process 600 determines the additional amount of memory to allocate to that virtual machine” and “When the process 600 is performed for a not constrained memory, the virtual machine being analyzed has more memory than it uses, and the process 600 determines how much extra memory the virtual machine has that may be taken away from that virtual machine”. Examiner’s Note: the process 600 can either increase the memory allocation or decrease the memory allocation. ¶ [0124] explains that the processes 500 and 600 can happen on multiple virtual machines simultaneously. Therefore, it would be obvious to one of ordinary skill in the art that the process 600 could happen on two virtual machines on the same host, one that needs extra memory and one that has extra memory. The process 600 would determine to reallocate memory from the virtual machine with extra memory to the virtual machine that needs memory).
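For illustration only (not part of the claim mapping), the two-virtual-machine reallocation reasoning in the note above may be sketched as follows, under the stated interpretation of processes 500 and 600; the function name, dictionary layout, and virtual machine names are hypothetical:

```python
def reallocate(vms):
    """Move surplus memory (allocation above peak usage) from not-constrained
    virtual machines to constrained ones, per the interpretation of Poothia's
    processes 500/600. `vms` maps a name to (allocated, peak_usage)."""
    surplus = {n: a - p for n, (a, p) in vms.items() if a > p}  # not constrained
    deficit = {n: p - a for n, (a, p) in vms.items() if p > a}  # constrained
    new_alloc = {n: a for n, (a, _) in vms.items()}
    # Release the reserve amount from each not-constrained virtual machine.
    for n, s in surplus.items():
        new_alloc[n] -= s
    # Grant the released memory to the constrained virtual machines.
    pool = sum(surplus.values())
    for n, d in deficit.items():
        grant = min(d, pool)
        new_alloc[n] += grant
        pool -= grant
    return new_alloc

# VM "A" has 400 units of surplus; VM "B" is 200 units short.
print(reallocate({"A": (1200, 800), "B": (800, 1000)}))  # {'A': 800, 'B': 1000}
```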
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the virtual machine reallocation process of Poothia with the method of grouping and scheduling of batch jobs, the monitoring of workload resource usage, the identifying of a job that will not meet a deadline, and the resource quota reduction of Dube, Di Balsamo, and Antani. A person having ordinary skill in the art would have been motivated to make this combination because when “memory is optimally used, workloads are run faster and with less disruption, and ultimately operation and performance of the virtual machines is improved” (Poothia ¶ [0027]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER L YUAN whose telephone number is (571)272-5737. The examiner can normally be reached Mon-Fri 7:30am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PETER LI YUAN/Examiner, Art Unit 2197
/BRADLEY A TEETS/Supervisory Patent Examiner, Art Unit 2197