DETAILED ACTION
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to the 35 U.S.C. 101 rejections (Remarks pp. 8-9) have been fully considered and are persuasive. The 35 U.S.C. 101 rejections have been withdrawn.
Applicant's arguments with respect to the 35 U.S.C. 102/103 rejections (Remarks pp. 9-18) are moot in view of the Examiner's new grounds of rejection, which are based on prior-art references newly added to address Applicant's amendments.
Claim Interpretation
Claim 18 recites a computer-readable storage medium. The examiner is interpreting this to be non-transitory in light of the specification, which states that “As used herein, the terms ‘computer program medium,’ ‘computer-readable medium,’ and ‘computer-readable storage medium’ are used to refer to physical hardware media such as the hard disk associated with hard disk drive 614, removable magnetic disk 618, removable optical disk 622, other physical hardware media such as RAMs, ROMs, flash memory cards, digital video disks, zip disks, MEMs, nanotechnology-based storage devices, and further types of physical/tangible hardware storage media. Such computer-readable storage media are distinguished from and non-overlapping with communication media (do not include communication media). Communication media embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave. The term ‘modulated data signal’ means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal,” ¶ 0154.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Zlatanchev (US 20180081720 A1) in view of Liu (US 20180227240 A1).
Regarding Claim 1, Zlatanchev teaches a computing system, comprising:
a processor (
Zlatanchev discloses, “When a plurality of tasks execute on the same execution unit, concurrent execution means that the plurality of tasks use the execution unit in a time sharing manner.… Time sharing is controlled by a scheduler. The scheduler may be a hardware or software scheduler or a combination thereof. Preferably the scheduler is controlled by the operating system,” ¶ 0010, and “An execution unit is typically a processor core,” ¶ 0013);
and a memory device storing program code to cause the processor to (
Zlatanchev discloses, “The architecture of such processors is commonly such that the cores share common resources on the processor, for example a cache, a communication bus, and a memory interface. This type of sharing due to the architecture of a multicore-processor has the effect that the execution of a first task on a first core may affect the timing of an execution of a second task on a second execution core,” ¶ 0013.):
predict a first runtime probability distribution for a proposed computing job (
Zlatanchev discloses, “…determine an estimated execution time for each micro task based on a probability distribution of the execution duration of the micro task,” ¶ 0113, and “determine a statistical distribution for the execution duration of each micro task,” ¶ 0114.
The claimed “runtime probability distribution” is mapped to the disclosed “probability distribution of the execution duration”. This mapping is consistent with the specification of the present application, which states “A job's median runtime may provide useful correlation with individual job runtimes, providing useful insight into variations across repeated runs and how long the next run of the job may take,” ¶ 0070.
The claimed “proposed computing job” is mapped to the disclosed “micro task”, which has a disclosed “estimated execution time” determined for it, indicating that the “micro task” is proposed because it has not been executed yet.).
Zlatanchev does not teach to determine the first runtime probability distribution comprises an outlier runtime, modify the proposed computing job, resulting in a modified proposed computing job, determine a second runtime probability distribution for the modified proposed computing job does not include the outlier runtime, the second runtime probability distribution specifying utilization of a runtime server in execution of the modified proposed computing job, and cause utilization of the runtime server in execution of the modified proposed computing job.
However, Liu teaches to determine the first runtime probability distribution comprises an outlier runtime (
Liu discloses, “For example, the working time estimation module 520 computes the working time estimate for a task. For example, the working time estimation module 520 of server.sub.i from the set computes the estimate as WTE.sub.i=T.sub.job/T.sub.i−max, where T.sub.job is a number of tasks in the job request, and T.sub.i−max is the maximum number of simultaneous tasks that the server.sub.i can handle at this time, based on current load of the server.sub.i. For example, if the job request includes 50 tasks (T.sub.job=50) and if a first server, server.sub.1, can currently execute 4 tasks (T.sub.1−max=4) in parallel, WTE.sub.1=50/4=12.5. … In one or more examples, the work time estimation module 520 of the server.sub.i generates the estimate based on information associated with the server.sub.i, like data block distribution and past performance of server.sub.i, once data import from client 410 is complete,” ¶ 0079, and
“Hence, the technical features herein facilitate the system to determine whether the performance requirement can be met by the current estimated work time, and in such a case default data and task distribution strategy will be used. If the performance requirement is high, and if the estimate-time of the job cannot meet the performance requirement, the system selects part of data and migrates to a new node so that tasks from the job are distributed with the new node,” ¶ 0101.
Here, Liu teaches determining a distribution of estimated running times across the tasks of a job in order to predict how long the tasks will take to run.
An outlier runtime occurs when the estimated time of the job exceeds the maximum time limit that is part of the performance requirement.),
modify the proposed computing job, resulting in a modified proposed computing job (
Liu discloses, “the system selects part of data and migrates to a new node so that tasks from the job are distributed with the new node,” ¶ 0101.
The claimed “modified proposed computing job” is mapped to the disclosed execution of the tasks/jobs on a different node from the original.),
determine a second runtime probability distribution for the modified proposed computing job does not include the outlier runtime (
Liu discloses, “In one or more examples, the work time estimation module 520 of the server.sub.i generates the estimate based on information associated with the server.sub.i, like data block distribution and past performance of server.sub.i, once data import from client 410 is complete,” ¶ 0079.
Here, the runtime probability distribution can be calculated for the new node.),
the second runtime probability distribution specifying utilization of a runtime server in execution of the modified proposed computing job (
Liu discloses, “the system selects part of data and migrates to a new node so that tasks from the job are distributed with the new node,” ¶ 0101.
The claimed “runtime server” is mapped to the disclosed “new node”, which takes in the tasks/jobs for execution.),
and cause utilization of the runtime server in execution of the modified proposed computing job (
Liu discloses, “the system selects part of data and migrates to a new node so that tasks from the job are distributed with the new node,” ¶ 0101.).
Zlatanchev and Liu are both considered to be analogous to the claimed invention because they are in the same field of computer systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev to incorporate the teachings of Liu and provide to determine the first runtime probability distribution comprises an outlier runtime, modify the proposed computing job, resulting in a modified proposed computing job, determine a second runtime probability distribution for the modified proposed computing job does not include the outlier runtime, the second runtime probability distribution specifying utilization of a runtime server in execution of the modified proposed computing job, and cause utilization of the runtime server in execution of the modified proposed computing job. Doing so would improve efficiency of the computing job (Liu discloses, “The technical solutions further take into consideration a cost for a supplier of the cloud-computing platform so that the suppliers' cost is optimized while satisfying the performance requirements,” ¶ 0017.).
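Illustrative note: Liu’s quoted working-time estimate reduces to simple arithmetic (WTE.sub.i = T.sub.job/T.sub.i−max), with an outlier indicated when the estimate exceeds the time limit of the performance requirement. A minimal sketch of that computation follows, using Liu’s worked values; the maximum-time limit is a hypothetical value for illustration only, as Liu does not quantify the performance requirement.

```python
# Sketch of Liu's working-time estimate (WTE.sub.i = T.sub.job / T.sub.i-max)
# and the outlier check discussed above. The max_time limit is hypothetical.

def working_time_estimate(num_tasks: int, max_parallel: int) -> float:
    """Estimated working time: number of tasks over the server's parallel capacity."""
    return num_tasks / max_parallel

def is_outlier(estimate: float, max_time: float) -> bool:
    """Outlier runtime: the estimate exceeds the performance-requirement time limit."""
    return estimate > max_time

# Liu's worked example: a 50-task job on a server that runs 4 tasks in parallel.
wte = working_time_estimate(50, 4)     # 50 / 4 = 12.5
print(wte)                             # 12.5
print(is_outlier(wte, max_time=10.0))  # True: migrate part of the job to a new node
```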
Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Zlatanchev (US 20180081720 A1) in view of Liu (US 20180227240 A1) and Morinaka (US 20140257816 A1).
Regarding Claim 2, Zlatanchev in view of Liu teaches the computing system of claim 1. Zlatanchev in view of Liu does not teach wherein the runtime probability distribution comprises a runtime probability distribution shape and parameters for the shape.
However, Morinaka teaches wherein the runtime probability distribution comprises a runtime probability distribution shape and parameters for the shape (
Morinaka discloses, “For example, when the duration of a phoneme or the duration of a state generated by using a distribution is too long (or too short), the user modifies the distribution regarding the state duration by changing the mean value of the distribution to a desired duration. Similarly, the user performs modification so that the variance values of the distribution are changed to desired values,” ¶ 0051.).
Zlatanchev in view of Liu, and Morinaka are both considered to be analogous to the claimed invention because they are in the same field of computer systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu to incorporate the teachings of Morinaka and provide wherein the runtime probability distribution comprises a runtime probability distribution shape and parameters for the shape. Doing so would help provide mechanisms for setting the shape and parameters of a statistical distribution to more accurately model after the sample data, which may allow more accurate predictions (Morinaka discloses, “In this process, the user refers to an image displayed by the display unit 14, for example, and changes the mean and variance values of the distribution to desired values,” ¶ 0051.).
Regarding Claim 3, Zlatanchev in view of Liu and Morinaka teaches the computing system of claim 2, wherein the runtime probability distribution shape comprises a flexible distribution shape with tunable parameters for customized runtime probability distribution shapes (
Morinaka discloses, “For example, when the duration of a phoneme or the duration of a state generated by using a distribution is too long (or too short), the user modifies the distribution regarding the state duration by changing the mean value of the distribution to a desired duration. Similarly, the user performs modification so that the variance values of the distribution are changed to desired values,” ¶ 0051.).
Zlatanchev in view of Liu, and Morinaka are both considered to be analogous to the claimed invention because they are in the same field of computer systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu to incorporate the teachings of Morinaka and provide wherein the runtime probability distribution shape comprises a flexible distribution shape with tunable parameters for customized runtime probability distribution shapes. Doing so would help provide mechanisms for setting the shape and parameters of a statistical distribution to more accurately model after the sample data, which may allow more accurate predictions (Morinaka discloses, “In this process, the user refers to an image displayed by the display unit 14, for example, and changes the mean and variance values of the distribution to desired values,” ¶ 0051.).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Zlatanchev (US 20180081720 A1) in view of Liu (US 20180227240 A1) and Selim (US 20240314200 A1).
Regarding Claim 4, Zlatanchev in view of Liu teaches the computing system of claim 1. Zlatanchev in view of Liu does not teach wherein the program code is further structured to cause the processor to classify the proposed computing job as the first runtime probability distribution from a plurality of runtime probability distributions representing a plurality of clusters of runtime probability distributions for a plurality of executed recurring computing job groups.
However, Selim teaches wherein the program code is further structured to cause the processor to classify the proposed computing job as the first runtime probability distribution from a plurality of runtime probability distributions representing a plurality of clusters of runtime probability distributions for a plurality of executed recurring computing job groups (
Selim discloses, “The cloud creates two distributions for the expected finish time of the task before the synchronization point for each cluster in the system using the previous reported runs and progress of edge servers in the clusters. Each cluster is represented by a mixture of two Gaussian distributions. The first distribution, N.sub.e(μ.sub.e, σ.sub.e.sup.2) represents the early execution times distribution of a cluster while the second distribution, N.sub.l(μ.sub.l, σ.sub.l.sup.2) represents the late execution times distribution of a cluster. It is assumed that the distribution of the execution times of local tasks on edge servers in both clusters is learned,” ¶ 0073.
After the combination of Zlatanchev in view of Liu, with Selim, the prediction of the runtime probability distribution from Zlatanchev in view of Liu is based on classifying said distribution based on a set of two Gaussian distributions as described in Selim.).
Zlatanchev in view of Liu, and Selim are both considered to be analogous to the claimed invention because they are in the same field of computer systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu to incorporate the teachings of Selim and provide wherein the program code is further structured to cause the processor to classify the proposed computing job as the first runtime probability distribution from a plurality of runtime probability distributions representing a plurality of clusters of runtime probability distributions for a plurality of executed recurring computing job groups. Doing so would help provide mechanisms for setting the shape and parameters of a statistical distribution to more accurately model after the sample data, which may allow more accurate predictions, and allow for tracking the status and progress of the recurring computing job groups (Selim discloses, “The cloud tracks the progress of edge servers using checkpoints defined in the application. The cloud creates two distributions for the expected finish time of the task before the synchronization point for each cluster in the system using the previous reported runs and progress of edge servers in the clusters,” ¶ 0073.).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Zlatanchev (US 20180081720 A1) in view of Liu (US 20180227240 A1), Selim (US 20240314200 A1) and Biggin (US 20220114901 A1).
Regarding Claim 5, Zlatanchev in view of Liu and Selim teaches the computing system of Claim 4. Zlatanchev in view of Liu and Selim does not teach wherein to predict the first runtime probability distribution, the program code is further structured to cause the processor to: predict a delta-normalized runtime probability distribution for the proposed computing job from a plurality of delta-normalized runtime probability distributions representing a first plurality of clusters for delta-normalized runtime probability distributions for the executed recurring computing job groups; and predict a ratio-normalized runtime probability distribution for the proposed computing job from a plurality of ratio-normalized runtime probability distributions representing a second plurality of clusters for ratio-normalized runtime probability distributions for the executed recurring computing job groups.
However, Biggin teaches wherein to predict the first runtime probability distribution, the program code is further structured to cause the processor to: predict a delta-normalized runtime probability distribution [Examiner’s Note: Paragraph 80 of the present application’s specification states “Delta-normalization may be defined as the difference between job runtime and job historic median (e.g., job runtime - median runtime).”] for the proposed computing job from a plurality of delta-normalized runtime probability distributions representing a first plurality of clusters for delta-normalized runtime probability distributions for the executed recurring computing job groups (
Biggin discloses, “First, a student specific bias can be mitigated by subtraction the median IS of the distribution of ISs for each student from the Max IS, resulting in an Identity Metric (IM) for each student.” ¶ 0149.
The claimed “delta-normalized runtime probability distribution” is mapped to the disclosed “subtraction the median IS of the distribution of ISs for each student from the Max IS”. This mapping is consistent with Paragraph 80 of the present application’s specification.
After the combination of Zlatanchev in view of Liu and Selim, with Biggin, the disclosed subtraction of a median value from the overall distribution from Biggin is used to normalize the runtime probability distribution from Zlatanchev in view of Liu and Selim.);
and predict a ratio-normalized runtime probability distribution [Examiner’s Note: Paragraph 80 of the present application’s specification states “Ratio-normalization may be defined as the ratio of job runtime to job historic median (e.g., job runtime/median runtime).”] for the proposed computing job from a plurality of ratio-normalized runtime probability distributions representing a second plurality of clusters for ratio-normalized runtime probability distributions for the executed recurring computing job groups (
Biggin discloses, “The CS for each student i is the ratio of IM divided by the local median IM,” ¶ 0150.
The claimed “ratio-normalized runtime probability distribution” is mapped to the disclosed “ratio of IM divided by the local median IM”. This mapping is consistent with Paragraph 80 of the present application’s specification, which defines ratio-normalization as the ratio of job runtime to job historic median (e.g., job runtime/median runtime).
After the combination of Zlatanchev in view of Liu and Selim with Biggin, the disclosed division of an overall distribution by a median value from Biggin is used to normalize the runtime probability distribution from Zlatanchev in view of Liu and Selim.).
Zlatanchev in view of Liu and Selim, and Biggin are both considered to be analogous to the claimed invention because they are in the same field of probability distributions. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu and Selim to incorporate the teachings of Biggin and provide wherein to predict the first runtime probability distribution, the program code is further structured to cause the processor to: predict a delta-normalized runtime probability distribution for the proposed computing job from a plurality of delta-normalized runtime probability distributions representing a first plurality of clusters for delta-normalized runtime probability distributions for the executed recurring computing job groups; and predict a ratio-normalized runtime probability distribution for the proposed computing job from a plurality of ratio-normalized runtime probability distributions representing a second plurality of clusters for ratio-normalized runtime probability distributions for the executed recurring computing job groups. Doing so would help provide ways to mitigate bias for the distributions (Biggin discloses, “The Max IS can then be normalized to mitigate systemic bias to yield a single Collusion Score (CS) per student. First, a student specific bias can be mitigated by subtraction the median IS of the distribution of ISs for each student from the Max IS, resulting in an Identity Metric (IM) for each student,” ¶ 0149, and “Second, the tendency of students with similar Test Scores to have more similar IMs than students with quite different Test Scores can be mitigated as follows… As an example, for a student Test Score rank 50 in a class of 100, the median of the IMs for students ranks 35 to 65 is calculated. The CS for each student i is the ratio of IM divided by the local median IM…” ¶ 0150).
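Illustrative note: the delta- and ratio-normalizations defined in ¶ 0080 of the present application’s specification are simple transforms of job runtimes against the job’s historic median. A minimal sketch with hypothetical runtime values:

```python
import statistics

# Sketch of the normalizations defined in paragraph 0080 of the present
# specification: delta-normalization (runtime - median) and ratio-normalization
# (runtime / median). Runtime values are hypothetical, for illustration only.

def delta_normalize(runtimes: list[float]) -> list[float]:
    """Delta-normalization: each job runtime minus the job's historic median."""
    median = statistics.median(runtimes)
    return [r - median for r in runtimes]

def ratio_normalize(runtimes: list[float]) -> list[float]:
    """Ratio-normalization: each job runtime divided by the job's historic median."""
    median = statistics.median(runtimes)
    return [r / median for r in runtimes]

runs = [8.0, 10.0, 12.0]      # hypothetical historic runtimes; median = 10.0
print(delta_normalize(runs))  # [-2.0, 0.0, 2.0]
print(ratio_normalize(runs))  # [0.8, 1.0, 1.2]
```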
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Zlatanchev (US 20180081720 A1) in view of Liu (US 20180227240 A1) and Sturlaugson (US 20210272072 A1).
Regarding Claim 6, Zlatanchev in view of Liu teaches the computing system of Claim 1. Zlatanchev in view of Liu does not teach wherein the program code is further structured to cause the processor to classify the proposed computing job as the first runtime probability distribution from a plurality of runtime probability distributions having at least one multi-mode runtime probability distribution.
However, Sturlaugson teaches wherein the program code is further structured to cause the processor to classify the proposed computing job as the first runtime probability distribution from a plurality of runtime probability distributions having at least one multi-mode runtime probability distribution (
Sturlaugson discloses, “Method 600 analyzes the distribution of lifetimes for the maintenance task in the scheduled maintenance data and unscheduled in-service maintenance data for high variance or multiple modes (operation 604).” ¶ 0087.
After the combination of Zlatanchev in view of Liu, with Sturlaugson, at least one distribution with multiple modes from Sturlaugson is included in the plurality of distributions from Zlatanchev in view of Liu.).
Zlatanchev in view of Liu, and Sturlaugson are both considered to be analogous to the claimed invention because they are in the same field of probability distributions. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu to incorporate the teachings of Sturlaugson and provide wherein the program code is further structured to cause the processor to classify the proposed computing job as the first runtime probability distribution from a plurality of runtime probability distributions having at least one multi-mode runtime probability distribution. Doing so would help allow modeling after more types of data, improving flexibility and accuracy (Sturlaugson discloses, “The illustrative examples use the statistical distributions of populations of lifetimes to make maintenance interval recommendations. The illustrative examples expand the CMP process by considering multiple groups with multiple, potentially distinct distributions. The illustrative examples provide an apparatus and methods for determining which distribution applies to each aircraft based on its actual observable conditions.,” ¶ 0024).
Claims 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Zlatanchev (US 20180081720 A1) in view of Liu (US 20180227240 A1) and Stahley (US 20130103174 A1).
Regarding Claim 7, Zlatanchev in view of Liu teaches the computing system of Claim 1. Zlatanchev in view of Liu does not teach wherein the program code is further structured to cause the processor to identify at least one source of runtime variation for the proposed computing job.
However, Stahley teaches wherein the program code is further structured to cause the processor to identify at least one source of runtime variation for the proposed computing job (
Stahley discloses, “…identify at least one source of measurement variation and to quantify each identified source of measurement variation,” ¶ 0006.).
Zlatanchev in view of Liu, and Stahley are both considered to be analogous to the claimed invention because they are in the same field of computing systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu to incorporate the teachings of Stahley and provide wherein the program code is further structured to cause the processor to identify at least one source of runtime variation for the proposed computing job. Doing so would help allow for making adjustments to reduce runtime variation as needed (Stahley discloses, “In the case of complex measurement systems, modeling the measurement system can greatly assist in identifying the significant sources of measurement variation and can assist in the design or redesign of the MSA,” ¶ 0033.).
Regarding Claim 8, Zlatanchev in view of Liu and Stahley teaches the computing system of Claim 7, wherein the at least one source of runtime variation comprises a plurality of sources of runtime variation and a quantitative contribution for each of the plurality of sources of runtime variation to the first runtime probability distribution (
Stahley discloses, “…identify at least one source of measurement variation and to quantify each identified source of measurement variation,” ¶ 0006, and “Type B MSA methods 147 are distribution types that best describes the variation and provides a statistical estimate of the measurement variation,” ¶ 0042).
Zlatanchev in view of Liu, and Stahley are both considered to be analogous to the claimed invention because they are in the same field of computing systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu to incorporate the teachings of Stahley and provide wherein the at least one source of runtime variation comprises a plurality of sources of runtime variation and a quantitative contribution for each of the plurality of sources of runtime variation to the first runtime probability distribution. Doing so would help allow for making adjustments to reduce runtime variation as needed based on each of the sources (Stahley discloses, “In the case of complex measurement systems, modeling the measurement system can greatly assist in identifying the significant sources of measurement variation and can assist in the design or redesign of the MSA,” ¶ 0033.).
Regarding Claim 9, Zlatanchev in view of Liu and Stahley teaches the computing system of Claim 1, wherein to modify the proposed computing job, the program code is further structured to identify a modification to the proposed computing job that reduces runtime variation for the proposed computing job (
Stahley discloses, “If the measurement system is determined incapable, then one or both of the following can be performed: (1) the conformity assessment MSA is performed again via an adjusted road map 132, considering changes in the measurement system's application, design etc. that may have introduced excessive measurement variation; and (2) the feature's specification is reviewed (e.g., reviewed with the customer) to determine whether adjustments can be applied to assure the measurement system is capable or marginally capable of determining compliance to specification. Any changes to feature specification are preferably agreed upon by the customer and records of the customer's acceptance maintained,” ¶ 0081.
Such changes could reduce the measurement variation if it is too high.).
Zlatanchev in view of Liu, and Stahley are both considered to be analogous to the claimed invention because they are in the same field of computing systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu to incorporate the teachings of Stahley and provide wherein to modify the proposed computing job, the program code is further structured to identify a modification to the proposed computing job that reduces runtime variation for the proposed computing job. Doing so would help allow for making adjustments to reduce runtime variation as needed (Stahley discloses, “In the case of complex measurement systems, modeling the measurement system can greatly assist in identifying the significant sources of measurement variation and can assist in the design or redesign of the MSA,” ¶ 0033.).
Regarding Claim 10, Zlatanchev in view of Liu and Stahley teaches the computing system of Claim 9, wherein the program code is further structured to identify, based on the identified modification to the proposed computing job, a modification to the first runtime probability distribution or a different predicted runtime probability distribution (
Stahley discloses, “If the measurement system is determined incapable, then one or both of the following can be performed: (1) the conformity assessment MSA is performed again via an adjusted road map 132, considering changes in the measurement system's application, design etc. that may have introduced excessive measurement variation; and (2) the feature's specification is reviewed (e.g., reviewed with the customer) to determine whether adjustments can be applied to assure the measurement system is capable or marginally capable of determining compliance to specification. Any changes to feature specification are preferably agreed upon by the customer and records of the customer's acceptance maintained,” ¶ 0081, and “Type B MSA methods 147 are distribution types that best describes the variation and provides a statistical estimate of the measurement variation,” ¶ 0042.
The adjustments will affect the distributions of the variation.).
Zlatanchev in view of Liu, and Stahley are both considered to be analogous to the claimed invention because they are in the same field of computing systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu to incorporate the teachings of Stahley and provide wherein the program code is further structured to identify, based on the identified modification to the proposed computing job, a modification to the first runtime probability distribution or a different predicted runtime probability distribution. Doing so would help allow for making adjustments to reduce runtime variation as needed (Stahley discloses, “In the case of complex measurement systems, modeling the measurement system can greatly assist in identifying the significant sources of measurement variation and can assist in the design or redesign of the MSA,” ¶ 0033.).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Zlatanchev (US 20180081720 A1) in view of Liu (US 20180227240 A1), Stahley (US 20150188623 A1) and Bhamidipaty (US 20110295634 A1).
Regarding Claim 11, Zlatanchev in view of Liu and Stahley teaches the computing system of claim 9. Zlatanchev in view of Liu and Stahley does not teach wherein the proposed computing job indicates an execution plan and computing resources to execute the execution plan, and wherein the modification to the proposed computing job comprises a modification to at least one of the proposed execution plans or the computing resources.
However, Bhamidipaty teaches wherein the proposed computing job indicates an execution plan and computing resources to execute the execution plan (
Bhamidipaty discloses, “Thereafter, new resource information is dynamically acquired (708), and resources are assigned to tasks based on the assimilated historical information and the dynamically acquired new information (710). Then, in outputting a plan of resource assignment to tasks (712) …” ¶ 0048.
The claimed “execution plan” is mapped to the planned arrangement of the disclosed “tasks”.
The claimed “computing resources” is mapped to the disclosed “resources” that are assigned to tasks.),
and wherein the modification to the proposed computing job comprises a modification to at least one of the proposed execution plans or the computing resources (
Bhamidipaty discloses, “Then, in outputting a plan of resource assignment to tasks (712), a first plan is developed (714) that is unrelated to the dynamically acquired new information while a second plan is developed (716) that is related to the dynamically acquired new information,” ¶ 0048.).
Zlatanchev in view of Liu and Stahley, and Bhamidipaty are both considered to be analogous to the claimed invention because they are in the same field of computer systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu and Stahley to incorporate the teachings of Bhamidipaty and provide wherein the proposed computing job indicates an execution plan and computing resources to execute the execution plan, and wherein the modification to the proposed computing job comprises a modification to at least one of the proposed execution plans or the computing resources. Doing so would help improve efficiency of the execution for the proposed computing job. (Bhamidipaty discloses, “As a result of employing a planning engine in accordance with embodiments of the invention, it is possible to complete more tasks on time and avoid penalties, which can lead to being able to accept more tasks and increase revenue, reduce the idle time of resources and promote more collaboration across team members to thereby improve future organizational performance,” ¶ 0049.).
Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Zlatanchev (US 20180081720 A1) in view of Liu (US 20180227240 A1), Chirkin (“Execution Time Estimation for Workflow Scheduling”), and Bhamidipaty (US 20110295634 A1).
Regarding Claim 12, Zlatanchev teaches a method, comprising: receiving a proposed computing job (
Zlatanchev discloses, “…determine an estimated execution time for each micro task based on a probability distribution of the execution duration of the micro task,” ¶ 0113, and “determine a statistical distribution for the execution duration of each micro task,” ¶ 0114.
The claimed “runtime probability distribution” is mapped to the disclosed “probability distribution of the execution duration”. This mapping is consistent with the specification of the present application, which states “A job's median runtime may provide useful correlation with individual job runtimes, providing useful insight into variations across repeated runs and how long the next run of the job may take,” ¶ 0070.
The claimed “proposed computing job” is mapped to the disclosed “micro task”, which has a disclosed “estimated execution time” determined for it, indicating that the “micro task” is proposed because it has not been executed yet.).
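For illustration, the mapped concept of estimating an execution time from a probability distribution of observed durations (Zlatanchev ¶¶ 0113-0114, and the present specification's note on median runtimes at ¶ 0070) can be sketched as follows; the function name, sample values, and choice of summary statistics are illustrative assumptions, not Zlatanchev's disclosure:

```python
# Sketch: estimate a micro task's execution time from a distribution of
# past execution durations, using the median as the point estimate and
# quartiles as a simple summary of the distribution's spread.
from statistics import median, quantiles

def estimate_execution_time(observed_runtimes):
    """Return (median estimate, (Q1, Q2, Q3)) for a task's runtimes."""
    est = median(observed_runtimes)
    q1, q2, q3 = quantiles(observed_runtimes, n=4)
    return est, (q1, q2, q3)

# Hypothetical past runs of one micro task, in seconds.
runs = [10.2, 11.0, 10.8, 42.5, 10.5, 11.1]
estimate, quartile_summary = estimate_execution_time(runs)
```

The median is robust to a single long run (42.5 s above), which is consistent with the specification's observation that a job's median runtime correlates with individual runtimes across repeated runs.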
Zlatanchev does not teach determining the first runtime probability distribution comprises an outlier runtime; modifying the proposed computing job, resulting in a modified proposed computing job; determining a second runtime probability distribution for the modified proposed computing job does not include the outlier runtime, the second runtime probability distribution specifying utilization of a runtime server in execution of the modified proposed computing job; and causing utilization of the runtime server in execution of the modified proposed computing job;
receiving a proposed computing job comprising a proposed execution plan and proposed computing resources to execute the proposed computing plan; and predicting a first runtime probability distribution for the proposed computing job based on the proposed execution plan and the proposed computing resources to execute the proposed computing plan.
However, Liu teaches determining the first runtime probability distribution comprises an outlier runtime (
Liu discloses, “For example, the working time estimation module 520 computes the working time estimate for a task. For example, the working time estimation module 520 of server, from the set computes the estimate as WTE.sub.i=T.sub.job/T.sub.i−max, where T.sub.iob is a number of tasks in the job request, and T.sub.i−max is the maximum number of simultaneous tasks that the server, can handle at this time, based on current load of the server.sub.i. For example, if the job request includes 50 tasks (T.sub.job=50) and if a first server, server, can currently execute 4 tasks (T.sub.1−max=4) in parallel, WTE.sub.1=50/4=12.5. … In one or more examples, the work time estimation module 520 of the server, generates the estimate based on information associated with the server.sub.i, like data block distribution and past performance of server, once data import from client 410 is complete. The task scheduler module 540 determines a maximum WTE from among the work time estimations from each of the servers in the set,” ¶ 0079, and
“Hence, the technical features herein facilitate the system to determine whether the performance requirement can be met by the current estimated work time, and in such a case default data and task distribution strategy will be used. If the performance requirement is high, and if the estimate-time of the job cannot meet the performance requirement, the system selects part of data and migrates to a new node so that tasks from the job are distributed with the new node.,” ¶ 0101.
Here, Liu teaches determining a running time distribution based on tasks, in order to predict or estimate the time it takes to run tasks.
An outlier runtime happens when the estimated time of the job exceeds the maximum time limit that is part of the performance requirement.),
modifying the proposed computing job, resulting in a modified proposed computing job (
Liu discloses, “the system selects part of data and migrates to a new node so that tasks from the job are distributed with the new node,” ¶ 0101.
The claimed “modified proposed computing job” is mapped to the disclosed execution of the tasks/jobs on a different node from the original.),
determining a second runtime probability distribution for the modified proposed computing job does not include the outlier runtime (
Liu discloses, “In one or more examples, the work time estimation module 520 of the server, generates the estimate based on information associated with the server.sub.i, like data block distribution and past performance of server, once data import from client 410 is complete.,” ¶ 0079.
Here, the runtime probability distribution can be calculated for the new node.),
the second runtime probability distribution specifying utilization of a runtime server in execution of the modified proposed computing job (
Liu discloses, “the system selects part of data and migrates to a new node so that tasks from the job are distributed with the new node,” ¶ 0101.
The claimed “runtime server” is mapped to the disclosed “new node”, which takes in the tasks/jobs for execution.),
and causing utilization of the runtime server in execution of the modified proposed computing job (
Liu discloses, “the system selects part of data and migrates to a new node so that tasks from the job are distributed with the new node,” ¶ 0101.).
Zlatanchev and Liu are both considered to be analogous to the claimed invention because they are in the same field of computer systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev to incorporate the teachings of Liu and provide determining the first runtime probability distribution comprises an outlier runtime, modifying the proposed computing job, resulting in a modified proposed computing job, determining a second runtime probability distribution for the modified proposed computing job does not include the outlier runtime, the second runtime probability distribution specifying utilization of a runtime server in execution of the modified proposed computing job, and causing utilization of the runtime server in execution of the modified proposed computing job. Doing so would improve efficiency of the computing job (Liu discloses, “The technical solutions further take into consideration a cost for a supplier of the cloud-computing platform so that the suppliers' cost is optimized while satisfying the performance requirements,” ¶ 0017.).
Zlatanchev in view of Liu does not teach receiving a proposed computing job comprising a proposed execution plan and proposed computing resources to execute the proposed computing plan; and predicting a first runtime probability distribution for the proposed computing job based on the proposed execution plan and the proposed computing resources to execute the proposed computing plan.
However, Chirkin teaches a proposed computing job comprising a proposed execution plan and proposed computing resources to execute the proposed computing plan; and predicting a first runtime probability distribution for the proposed computing job based on the proposed execution plan and the proposed computing resources to execute the proposed computing plan (
Chirkin discloses, “The execution time (makespan, runtime) of a workflow is the time required to execute all paths of the workflow in parallel: T_wf = max_{j ∈ 1..r} (T_path_j) = max_{j ∈ 1..r} (Σ_{i=1}^{s_j} T_{k_i^j}),” Page 4, and “Input of the algorithm is a workflow graph produced by the scheduler and the normalized samples (that are used to compute the runtime distribution of the workflow’s tasks),” Page 10.
After Zlatanchev in view of Liu, and Chirkin are combined, the information about duration of the tasks from Zlatanchev in view of Liu could be used according to Chirkin to generate a runtime distribution for a workflow of tasks.).
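Chirkin's makespan formula (Page 4), the maximum over all paths of the summed runtimes of the tasks on each path, can be sketched as follows; the task runtimes are hypothetical point values, whereas Chirkin operates on runtime distributions:

```python
# Sketch of T_wf = max over paths j of sum_{i=1..s_j} T_{k_i^j}:
# the workflow finishes when its longest path of tasks finishes.

def workflow_makespan(paths):
    """paths: list of paths, each a list of task runtimes on that path."""
    return max(sum(task_times) for task_times in paths)

# Two parallel paths through a hypothetical workflow graph.
paths = [[3.0, 4.0, 2.0],  # path 1 totals 9.0
         [5.0, 6.0]]       # path 2 totals 11.0
t_wf = workflow_makespan(paths)  # 11.0
```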
Zlatanchev in view of Liu, and Chirkin are both considered to be analogous to the claimed invention because they are in the same field of task scheduling. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu to incorporate the teachings of Chirkin and provide a proposed computing job comprising a proposed execution plan and proposed computing resources to execute the proposed computing plan; and predicting a first runtime probability distribution for the proposed computing job based on the proposed execution plan and the proposed computing resources to execute the proposed computing plan. Doing so would help estimate the required time to complete every part of the job. (Chirkin discloses, “The execution time (makespan, runtime) of a workflow is the time required to execute all paths of the workflow in parallel,” Page 4.).
Zlatanchev in view of Liu and Chirkin does not explicitly teach proposed computing resources to execute the proposed computing plan; and predicting a runtime probability distribution for the proposed computing job based on the proposed execution plan and the proposed computing resources to execute the proposed computing plan.
However, Bhamidipaty teaches proposed computing resources to execute the proposed computing plan; and predicting a runtime probability distribution for the proposed computing job based on the proposed execution plan and the proposed computing resources to execute the proposed computing plan (
Bhamidipaty discloses, “Thereafter, new resource information is dynamically acquired (708), and resources are assigned to tasks based on the assimilated historical information and the dynamically acquired new information (710). Then, in outputting a plan of resource assignment to tasks (712) …” ¶ 0048.
The claimed “computing resources” is mapped to the disclosed “resources” that are assigned to tasks.
After the combination of Zlatanchev in view of Liu and Chirkin, with Bhamidipaty, the runtime distribution for a workflow of tasks from Zlatanchev in view of Liu and Chirkin is now estimated based in part on the resources that are assigned to each task according to Bhamidipaty.).
Zlatanchev in view of Liu and Chirkin, and Bhamidipaty are both considered to be analogous to the claimed invention because they are in the same field of task scheduling. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu and Chirkin to incorporate the teachings of Bhamidipaty and provide proposed computing resources to execute a proposed computing plan. Doing so would help improve efficiency of the execution for the proposed computing job. (Bhamidipaty discloses, “As a result of employing a planning engine in accordance with embodiments of the invention, it is possible to complete more tasks on time and avoid penalties, which can lead to being able to accept more tasks and increase revenue, reduce the idle time of resources and promote more collaboration across team members to thereby improve future organizational performance,” ¶ 0049.).
Regarding Claim 13, Zlatanchev in view of Liu, Chirkin, and Bhamidipaty teaches the method of claim 12, further comprising: determining a status of computing resources; and wherein the predicting comprises predicting the first runtime probability distribution for the proposed computing job based on the proposed execution plan, the proposed computing resources to execute the proposed computing plan, and the status of the computing resources (
Bhamidipaty discloses, “As shown, process modelling tool 202 in an example embodiment receives input from several sources, including (but not necessarily limited to): a database or other data source 204 containing information about resource availability, qualifications, cost and/or other resource attributes…,” ¶ 0031, and “Process modeling tool 202 includes, in an example embodiment, an intelligent constraint mapper 212 that serves to collect and consolidate information from the aforementioned sources 204/206/208/210. As such, database 204 provides resource constraints to mapper 212, that is, information governing the extent to which a resource may be employed in a process and their qualifications therefor,” ¶ 0032.
After the combination of Zlatanchev in view of Liu and Chirkin, with Bhamidipaty, the runtime distribution for a workflow of tasks from Zlatanchev in view of Liu and Chirkin is predicted using information about resource availability, and/or other resource attributes.).
Zlatanchev in view of Liu and Chirkin, and Bhamidipaty are both considered to be analogous to the claimed invention because they are in the same field of task scheduling. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu and Chirkin to incorporate the teachings of Bhamidipaty and provide further comprising: determining a status of computing resources; and wherein the predicting comprises predicting the first runtime probability distribution for the proposed computing job based on the proposed execution plan, the proposed computing resources to execute the proposed computing plan, and the status of the computing resources. Doing so would help improve efficiency of the execution for the proposed computing job. (Bhamidipaty discloses, “As a result of employing a planning engine in accordance with embodiments of the invention, it is possible to complete more tasks on time and avoid penalties, which can lead to being able to accept more tasks and increase revenue, reduce the idle time of resources and promote more collaboration across team members to thereby improve future organizational performance,” ¶ 0049.).
Claims 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Zlatanchev (US 20180081720 A1) in view of Liu (US 20180227240 A1), Chirkin (“Execution Time Estimation for Workflow Scheduling”), Bhamidipaty (US 20110295634 A1) and Stahley (US 20130103174 A1).
Regarding Claim 14, Zlatanchev in view of Liu, Chirkin, and Bhamidipaty teaches the method of claim 12. Zlatanchev in view of Liu, Chirkin, and Bhamidipaty does not teach further comprising: identifying at least one source of runtime variation for the proposed computing job.
However, Stahley teaches further comprising: identifying at least one source of runtime variation for the proposed computing job (
Stahley discloses, “…identify at least one source of measurement variation and to quantify each identified source of measurement variation,” ¶ 0006.).
Zlatanchev in view of Liu, Chirkin, and Bhamidipaty, and Stahley are both considered to be analogous to the claimed invention because they are in the same field of computing systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu, Chirkin, and Bhamidipaty, to incorporate the teachings of Stahley and provide further comprising: identifying at least one source of runtime variation for the proposed computing job. Doing so would allow adjustments to be made to reduce runtime variation as needed (Stahley discloses, “In the case of complex measurement systems, modeling the measurement system can greatly assist in identifying the significant sources of measurement variation and can assist in the design or redesign of the MSA,” ¶ 0033.).
Regarding Claim 15, Zlatanchev in view of Liu, Chirkin, Bhamidipaty, and Stahley teaches the method of claim 12, further comprising: identifying at least one modification to the proposed computing job that reduces runtime variation for the proposed computing job (
Liu discloses, “If the performance requirement is high, and if the estimate-time of the job cannot meet the performance requirement, the system selects part of data and migrates to a new node so that tasks from the job are distributed with the new node,” ¶ 0101.
This will reduce the runtime variation for the tasks/jobs by moving them to a new node to increase optimal performance.).
Zlatanchev and Liu are both considered to be analogous to the claimed invention because they are in the same field of computing systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev to incorporate the teachings of Liu and provide further comprising: identifying at least one modification to the proposed computing job that reduces runtime variation for the proposed computing job. Doing so would improve efficiency of the computing job (Liu discloses, “The technical solutions further take into consideration a cost for a supplier of the cloud-computing platform so that the suppliers' cost is optimized while satisfying the performance requirements,” ¶ 0017.).
Regarding Claim 16, Zlatanchev in view of Liu, Chirkin, Bhamidipaty, and Stahley teaches the method of claim 14, wherein said modifying the proposed computing job comprises at least one: modifying the proposed execution plan, or modifying the proposed computing resources to execute the modified proposed computing plan; and wherein the method further comprises predicting the second runtime probability distribution for the modified proposed computing job (
Liu discloses, “In one or more examples, the work time estimation module 520 of the server, generates the estimate based on information associated with the server.sub.i, like data block distribution and past performance of server, once data import from client 410 is complete.,” ¶ 0079.
Here, the runtime probability distribution can be calculated for the new node.).
Zlatanchev and Liu are both considered to be analogous to the claimed invention because they are in the same field of computing systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev to incorporate the teachings of Liu and provide wherein said modifying the proposed computing job comprises at least one: modifying the proposed execution plan, or modifying the proposed computing resources to execute the modified proposed computing plan; and wherein the method further comprises predicting the second runtime probability distribution for the modified proposed computing job. Doing so would improve efficiency of the computing job (Liu discloses, “The technical solutions further take into consideration a cost for a supplier of the cloud-computing platform so that the suppliers' cost is optimized while satisfying the performance requirements,” ¶ 0017.).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Zlatanchev (US 20180081720 A1) in view of Liu (US 20180227240 A1), Chirkin (“Execution Time Estimation for Workflow Scheduling”), Bhamidipaty (US 20110295634 A1) and Selim (US 20240314200 A1).
Regarding Claim 17, Zlatanchev in view of Liu, Chirkin, and Bhamidipaty teaches the method of claim 12. Zlatanchev in view of Liu, Chirkin, and Bhamidipaty does not teach wherein the predicting classifies the proposed computing job as the first runtime probability distribution from a plurality of runtime probability distributions representing a plurality of clusters of runtime probability distributions for a plurality of executed recurring computing job groups.
However, Selim teaches wherein the predicting classifies the proposed computing job as the first runtime probability distribution from a plurality of runtime probability distributions representing a plurality of clusters of runtime probability distributions for a plurality of executed recurring computing job groups (
Selim discloses, “The cloud creates two distributions for the expected finish time of the task before the synchronization point for each cluster in the system using the previous reported runs and progress of edge servers in the clusters. Each cluster is represented by a mixture of two Gaussian distributions. The first distribution, custom-character.sub.e(μ.sub.e, σ.sub.e.sup.2) represents the early execution times distribution of a cluster while the second distribution, custom-character.sub.l(μ.sub.l, σ.sub.l.sup.2) represents the late execution times distribution of a cluster. It is assumed that the distribution of the execution times of local tasks on edge servers in both clusters is learned,” ¶ 0073.
After the combination of Zlatanchev in view of Liu, Chirkin, and Bhamidipaty, with Selim, the prediction of the runtime probability distribution from Zlatanchev in view of Liu, Chirkin, and Bhamidipaty is based on classifying said distribution based on a set of two Gaussian distributions as described in Selim.).
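Selim's per-cluster model (¶ 0073), a mixture of two Gaussians with an early execution-time component N_e(μ_e, σ_e²) and a late component N_l(μ_l, σ_l²), can be sketched as follows; the parameter values and mixture weight are illustrative assumptions, not Selim's:

```python
# Sketch of a two-component Gaussian mixture over execution times,
# one component for early finishes and one for late finishes.
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, early, late, w_early=0.5):
    """Density of the mixture; early and late are (mu, sigma) pairs
    for the early and late execution-time distributions."""
    mu_e, s_e = early
    mu_l, s_l = late
    return w_early * gaussian_pdf(x, mu_e, s_e) + (1 - w_early) * gaussian_pdf(x, mu_l, s_l)

# Hypothetical cluster: early finishes around 8 s, late around 15 s.
density_at_10 = mixture_pdf(10.0, early=(8.0, 1.0), late=(15.0, 2.0))
```

Fitting such parameters per cluster from previously reported runs is what lets the cloud classify where a new run's finish time is likely to fall.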
Zlatanchev in view of Liu, Chirkin, and Bhamidipaty, and Selim are both considered to be analogous to the claimed invention because they are in the same field of computer systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu, Chirkin, and Bhamidipaty to incorporate the teachings of Selim and provide wherein the predicting classifies the proposed computing job as the first runtime probability distribution from a plurality of runtime probability distributions representing a plurality of clusters of runtime probability distributions for a plurality of executed recurring computing job groups. Doing so would help provide mechanisms for setting the shape and parameters of a statistical distribution to more accurately model the sample data, which may allow more accurate predictions, and allow for tracking the status and progress of the recurring computing job groups (Selim discloses, “The cloud tracks the progress of edge servers using checkpoints defined in the application. The cloud creates two distributions for the expected finish time of the task before the synchronization point for each cluster in the system using the previous reported runs and progress of edge servers in the clusters,” ¶ 0073.).
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Zlatanchev (US 20180081720 A1) in view of Liu (US 20180227240 A1), Chirkin (“Execution Time Estimation for Workflow Scheduling”) and Bhamidipaty (US 20110295634 A1).
Regarding Claim 18, Zlatanchev teaches a computer-readable storage medium having program instructions recorded thereon that, when executed by a processing circuit, perform a method comprising (
Zlatanchev discloses, “FIG. 2 further shows elements of a modern OS kernel and target hardware. For example a pager included in the OS is responsible for managing the sharing of memory available in the hardware among the applications. Furthermore, the target hardware includes in the illustrated example a multicore CPU, including several cores and a hierarchy of caches, including a shared L2 cache and L1 caches, the latter being associated with each core. Furthermore, the example hardware illustrated in FIG. 2 includes a high performance interconnect which connects the caches to a memory interface, special purpose computation units, such as a vector unit, and further interfaces, such as a logic which allows communication through a PCIe bus or similar. Furthermore the target hardware includes main memory,” ¶ 0080):
receiving a proposed computing job;
predicting a first runtime probability distribution for the proposed computing job (
Zlatanchev discloses, “…determine an estimated execution time for each micro task based on a probability distribution of the execution duration of the micro task,” ¶ 0113, and “determine a statistical distribution for the execution duration of each micro task,” ¶ 0114.
The claimed “runtime probability distribution” is mapped to the disclosed “probability distribution of the execution duration”. This mapping is consistent with the specification of the present application, which states “A job's median runtime may provide useful correlation with individual job runtimes, providing useful insight into variations across repeated runs and how long the next run of the job may take,” ¶ 0070.
The claimed “proposed computing job” is mapped to the disclosed “micro task”, which has a disclosed “estimated execution time” determined for it, indicating that the “micro task” is proposed because it has not been executed yet.).
Zlatanchev does not teach determining the first runtime probability distribution comprises an outlier runtime; modifying the proposed computing job, resulting in a modified proposed computing job; determining a second runtime probability distribution for the modified proposed computing job does not include the outlier runtime, the second runtime probability distribution specifying utilization of a runtime server in execution of the modified proposed computing job; and causing utilization of the runtime server in execution of the modified proposed computing job;
receiving a proposed computing job comprising a proposed execution plan and proposed computing resources to execute the proposed computing plan; and predicting a first runtime probability distribution for the proposed computing job based on the proposed execution plan and the proposed computing resources to execute the proposed computing plan.
However, Liu teaches determining the first runtime probability distribution comprises an outlier runtime (
Liu discloses, “For example, the working time estimation module 520 computes the working time estimate for a task. For example, the working time estimation module 520 of server, from the set computes the estimate as WTE.sub.i=T.sub.job/T.sub.i−max, where T.sub.iob is a number of tasks in the job request, and T.sub.i−max is the maximum number of simultaneous tasks that the server, can handle at this time, based on current load of the server.sub.i. For example, if the job request includes 50 tasks (T.sub.job=50) and if a first server, server, can currently execute 4 tasks (T.sub.1−max=4) in parallel, WTE.sub.1=50/4=12.5. … In one or more examples, the work time estimation module 520 of the server, generates the estimate based on information associated with the server.sub.i, like data block distribution and past performance of server, once data import from client 410 is complete. The task scheduler module 540 determines a maximum WTE from among the work time estimations from each of the servers in the set,” ¶ 0079, and
“Hence, the technical features herein facilitate the system to determine whether the performance requirement can be met by the current estimated work time, and in such a case default data and task distribution strategy will be used. If the performance requirement is high, and if the estimate-time of the job cannot meet the performance requirement, the system selects part of data and migrates to a new node so that tasks from the job are distributed with the new node.,” ¶ 0101.
Here, Liu teaches determining a running time distribution based on tasks, in order to predict or estimate the time it takes to run tasks.
An outlier runtime happens when the estimated time of the job exceeds the maximum time limit that is part of the performance requirement.),
modifying the proposed computing job, resulting in a modified proposed computing job (
Liu discloses, “the system selects part of data and migrates to a new node so that tasks from the job are distributed with the new node,” ¶ 0101.
The claimed “modified proposed computing job” is mapped to the disclosed execution of the tasks/jobs on a different node from the original.),
determining a second runtime probability distribution for the modified proposed computing job does not include the outlier runtime (
Liu discloses, “In one or more examples, the work time estimation module 520 of the server, generates the estimate based on information associated with the server.sub.i, like data block distribution and past performance of server, once data import from client 410 is complete.,” ¶ 0079.
Here, the runtime probability distribution can be calculated for the new node.),
the second runtime probability distribution specifying utilization of a runtime server in execution of the modified proposed computing job (
Liu discloses, “the system selects part of data and migrates to a new node so that tasks from the job are distributed with the new node,” ¶ 0101.
The claimed “runtime server” is mapped to the disclosed “new node”, which takes in the tasks/jobs for execution.),
and causing utilization of the runtime server in execution of the modified proposed computing job (
Liu discloses, “the system selects part of data and migrates to a new node so that tasks from the job are distributed with the new node,” ¶ 0101.).
Zlatanchev and Liu are both considered to be analogous to the claimed invention because they are in the same field of computer systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev to incorporate the teachings of Liu and provide determining the first runtime probability distribution comprises an outlier runtime, modifying the proposed computing job, resulting in a modified proposed computing job, determining a second runtime probability distribution for the modified proposed computing job does not include the outlier runtime, the second runtime probability distribution specifying utilization of a runtime server in execution of the modified proposed computing job, and causing utilization of the runtime server in execution of the modified proposed computing job. Doing so would improve efficiency of the computing job (Liu discloses, “The technical solutions further take into consideration a cost for a supplier of the cloud-computing platform so that the suppliers' cost is optimized while satisfying the performance requirements,” ¶ 0017.).
Zlatanchev in view of Liu does not teach receiving a proposed computing job comprising a proposed execution plan and proposed computing resources to execute the proposed computing plan; determining a status of computing resources; and predicting a runtime probability distribution for the proposed computing job based on the proposed execution plan, the proposed computing resources to execute the proposed computing plan, and the status of the computing resources.
However, Chirkin teaches a proposed computing job comprising a proposed execution plan and proposed computing resources to execute the proposed computing plan; determining a status of computing resources; and predicting a runtime probability distribution for the proposed computing job based on the proposed execution plan, the proposed computing resources to execute the proposed computing plan, and the status of the computing resources (
Chirkin discloses, “The execution time (makespan, runtime) of a workflow is the time required to execute all paths of the workflow in parallel: T_wf = max_{j ∈ 1..r} (T_path_j) = max_{j ∈ 1..r} (Σ_{i=1}^{s_j} T_{k_i^j}),” Page 4, and “Input of the algorithm is a workflow graph produced by the scheduler and the normalized samples (that are used to compute the runtime distribution of the workflow’s tasks),” Page 10.
After Zlatanchev in view of Liu and Chirkin are combined, the information about the duration of the tasks from Zlatanchev in view of Liu could be used, according to Chirkin, to generate a runtime distribution for a workflow of tasks.).
Zlatanchev in view of Liu, and Chirkin are considered to be analogous to the claimed invention because they are in the same field of task scheduling. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu to incorporate the teachings of Chirkin and provide a proposed computing job comprising a proposed execution plan … to execute the proposed computing plan. Doing so would help estimate the required time to complete every part of the job (Chirkin discloses, “The execution time (makespan, runtime) of a workflow is the time required to execute all paths of the workflow in parallel,” Page 4.).
Zlatanchev in view of Liu and Chirkin does not explicitly teach receiving a proposed computing job comprising a proposed execution plan and proposed computing resources to execute the proposed computing plan; determining a status of computing resources; and predicting a runtime probability distribution for the proposed computing job based on the proposed execution plan, the proposed computing resources to execute the proposed computing plan, and the status of the computing resources.
However, Bhamidipaty teaches receiving a proposed computing job comprising a proposed execution plan and proposed computing resources to execute the proposed computing plan; determining a status of computing resources; and predicting a runtime probability distribution for the proposed computing job based on the proposed execution plan, the proposed computing resources to execute the proposed computing plan, and the status of the computing resources (
Bhamidipaty discloses, “As shown, process modelling tool 202 in an example embodiment receives input from several sources, including (but not necessarily limited to): a database or other data source 204 containing information about resource availability, qualifications, cost and/or other resource attributes…,” ¶ 0031, and “Process modeling tool 202 includes, in an example embodiment, an intelligent constraint mapper 212 that serves to collect and consolidate information from the aforementioned sources 204/206/208/210. As such, database 204 provides resource constraints to mapper 212, that is, information governing the extent to which a resource may be employed in a process and their qualifications therefor,” ¶ 0032, and “Thereafter, new resource information is dynamically acquired (708), and resources are assigned to tasks based on the assimilated historical information and the dynamically acquired new information (710). Then, in outputting a plan of resource assignment to tasks (712) …” ¶ 0048.
The claimed “computing resources” is mapped to the disclosed “resources” that are assigned to tasks.
The claimed “determining a status of computing resources” is mapped to the disclosed “collect and consolidate information from the aforementioned sources 204/206/208/210”.
After the combination of Zlatanchev in view of Liu and Chirkin, with Bhamidipaty, the runtime distribution for a workflow of tasks from Zlatanchev in view of Liu and Chirkin is now estimated based in part on the resources that are assigned to each task in Bhamidipaty.).
Zlatanchev in view of Liu and Chirkin, and Bhamidipaty are considered to be analogous to the claimed invention because they are in the same field of task scheduling. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu and Chirkin to incorporate the teachings of Bhamidipaty and provide receiving a proposed computing job comprising a proposed execution plan and proposed computing resources to execute the proposed computing plan; determining a status of computing resources; and predicting a runtime probability distribution for the proposed computing job based on the proposed execution plan, the proposed computing resources to execute the proposed computing plan, and the status of the computing resources. Doing so would help improve efficiency of the execution for the proposed computing job (Bhamidipaty discloses, “As a result of employing a planning engine in accordance with embodiments of the invention, it is possible to complete more tasks on time and avoid penalties, which can lead to being able to accept more tasks and increase revenue, reduce the idle time of resources and promote more collaboration across team members to thereby improve future organizational performance,” ¶ 0049.).
Claims 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zlatanchev (US 20180081720 A1) in view of Liu (US 20180227240 A1), Chirkin (“Execution Time Estimation for Workflow Scheduling”), Bhamidipaty (US 20110295634 A1) and Stahley (US 20130103174 A1).
Regarding Claim 19, Zlatanchev in view of Liu, Chirkin, and Bhamidipaty teaches the computer-readable storage medium of claim 18. Zlatanchev in view of Liu, Chirkin, and Bhamidipaty does not teach further comprising: identifying at least one source of runtime variation for the proposed computing job.
However, Stahley teaches further comprising: identifying at least one source of runtime variation for the proposed computing job (
Stahley discloses, “…identify at least one source of measurement variation and to quantify each identified source of measurement variation,” ¶ 0006.).
Zlatanchev in view of Liu, Chirkin, and Bhamidipaty, and Stahley are considered to be analogous to the claimed invention because they are in the same field of computing systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu, Chirkin, and Bhamidipaty to incorporate the teachings of Stahley and provide further comprising: identifying at least one source of runtime variation for the proposed computing job. Doing so would help allow for making adjustments to reduce runtime variation as needed (Stahley discloses, “In the case of complex measurement systems, modeling the measurement system can greatly assist in identifying the significant sources of measurement variation and can assist in the design or redesign of the MSA,” ¶ 0033.).
Regarding Claim 20, Zlatanchev in view of Liu, Chirkin, Bhamidipaty and Stahley teaches the computer-readable storage medium of claim 19, wherein said modifying the proposed computing job further comprises: identifying at least one modification to the proposed computing job that reduces runtime variation for the proposed computing job (
Stahley discloses, “If the measurement system is determined incapable, then one or both of the following can be performed: (1) the conformity assessment MSA is performed again via an adjusted road map 132, considering changes in the measurement system's application, design etc. that may have introduced excessive measurement variation; and (2) the feature's specification is reviewed (e.g., reviewed with the customer) to determine whether adjustments can be applied to assure the measurement system is capable or marginally capable of determining compliance to specification. Any changes to feature specification are preferably agreed upon by the customer and records of the customer's acceptance maintained.,” ¶ 0081.
Such changes could reduce the measurement variation if it is too high.).
Zlatanchev in view of Liu, Chirkin, and Bhamidipaty, and Stahley are considered to be analogous to the claimed invention because they are in the same field of computing systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zlatanchev in view of Liu, Chirkin, and Bhamidipaty to incorporate the teachings of Stahley and provide wherein said modifying the proposed computing job further comprises: identifying at least one modification to the proposed computing job that reduces runtime variation for the proposed computing job. Doing so would help allow for making adjustments to reduce runtime variation as needed (Stahley discloses, “In the case of complex measurement systems, modeling the measurement system can greatly assist in identifying the significant sources of measurement variation and can assist in the design or redesign of the MSA,” ¶ 0033.).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Sharma et al. (US 20230068418 A1): Machine Learning Model Classifying Data Set Distribution Type From Minimum Number of Samples
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW SUN whose telephone number is (571)272-6735. The examiner can normally be reached Monday-Friday 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW NMN SUN/Examiner, Art Unit 2195
/Aimee Li/Supervisory Patent Examiner, Art Unit 2195