DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-25 have been examined.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. The instant application claims priority to U.S. Provisional Application 63/425,857, filed November 16, 2022.
Information Disclosure Statement
The Applicant's submission of the Information Disclosure Statements dated June 1, 2023 (two statements) and August 28, 2024 is acknowledged by the Examiner, and the cited references have been considered in the examination of the claims now pending, except as otherwise indicated. Copies of the PTOL-1449s initialed and dated by the Examiner are attached to the instant Office action. One NPL reference from the IDS dated June 1, 2023 was not considered because the reference was not provided.
Claim Objections
Claims 12-20 are objected to because of the following informalities.
Claim 12 recites "identifying," "determining," and "applying." These should read "identify," "determine," and "apply."
Claims 13-20 are objected to as depending from an objected-to base claim and failing to remedy the deficiencies of that claim. Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-7, 10-18, 21, and 22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by the non-patent literature "Reining in the Outliers in Map-Reduce Clusters using Mantri," by Ananthanarayanan et al. (as cited by Applicant and hereinafter referred to as "Mantri").
Regarding claims 1 and 12, taking claim 1 as representative, Mantri discloses:
a method for task management of a workload in a distributed computing environment, comprising (Mantri discloses, at § 1, parallel execution of job tasks in clusters, which discloses a method for task management of a workload in a distributed computing environment.):
identifying multiple tasks of a computing workload, wherein the workload includes processing dependencies among the tasks, and wherein two or more of the tasks are executed concurrently (Mantri discloses, at § 1, parallel execution of tasks with tasks in one phase depending on the output of tasks in a previous phase.);
monitoring an execution time for each of the tasks, relative to a respective execution time threshold applicable for each of the tasks (Mantri discloses, at § 1, determining that one task takes longer than others and at § 4, calculating outliers relative to the median execution time, which discloses monitoring an execution time for each of the tasks, relative to a respective execution time threshold applicable for each of the tasks.);
identifying the execution time of a particular task as exceeding an execution time threshold for the particular task (Mantri discloses, at § 4, high runtime tasks, or outliers, exceed the median task execution duration, i.e., threshold, by significant amounts.);
determining a remediation based on the particular task and the identified execution time, the remediation including use of other compute resources in the distributed computing environment (Mantri discloses, at § 1, restarting outlier tasks on different, e.g., less congested, machines. See also § 3.1.); and
applying the remediation to increase speed of execution of the workload (Mantri discloses, at § 1, restarting outlier tasks on different, e.g., less congested, machines. See also § 3.1.).
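By way of illustration only, the monitor-and-remediate behavior attributed to Mantri above (flagging tasks that run well past the median duration and restarting them on a less congested machine) can be sketched as follows. The function names, the `factor` knob, and the machine-load representation are hypothetical and are not drawn from the reference or the claims:

```python
import statistics

def find_outliers(task_durations, factor=1.5):
    """Flag tasks whose runtime exceeds the median by a slack factor.

    task_durations: dict mapping task id -> elapsed seconds.
    The median-based threshold mirrors Mantri's use of the median
    task duration as the baseline; 'factor' is a hypothetical knob.
    """
    median = statistics.median(task_durations.values())
    threshold = factor * median
    return [t for t, d in task_durations.items() if d > threshold]

def remediate(task, machines):
    """Restart an outlier task on the least-congested machine."""
    target = min(machines, key=lambda m: m["load"])
    return {"task": task, "action": "restart", "machine": target["name"]}

durations = {"t1": 10.0, "t2": 11.0, "t3": 40.0}
machines = [{"name": "m1", "load": 0.9}, {"name": "m2", "load": 0.2}]
plan = [remediate(t, machines) for t in find_outliers(durations)]
```

Here `t3` exceeds 1.5 times the median runtime and is scheduled for restart on the lightly loaded machine.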
Regarding claims 2 and 13, taking claim 2 as representative, Mantri discloses the elements of claim 1, as discussed above. Mantri also discloses:
the particular task provides an input to a dependent task, and wherein the dependent task is a join point of the workload that receives a control input or data input from the particular task and at least one previous task of the workload (Mantri discloses, at § 3.2, "At barriers in the workflow, … none of the tasks in successive phase(s) can begin until all of the tasks in the preceding phase(s) finish." See also § 2, which discloses a directed acyclic graph of dependent nodes joined to producer nodes.).
Regarding claims 3 and 14, taking claim 3 as representative, Mantri discloses the elements of claim 2, as discussed above. Mantri also discloses:
the remediation is applied in response to determining that the dependent task is a join point of the workload (Mantri discloses, at § 3.2, outliers at barriers, i.e., join points, can prevent progress, and are therefore to be culled.).
Regarding claims 4 and 15, taking claim 4 as representative, Mantri discloses the elements of claim 2, as discussed above. Mantri also discloses:
calculating the execution time threshold for the particular task, wherein the execution time threshold is weighted by an amount of waiting time elapsed for at least one completed task to reach the join point and wait for the particular task (Mantri discloses, at § 4, calculating the median execution time for a group of tasks and then how much longer, i.e., wait time weighting, an outlier requires.).
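For illustration, a threshold weighted by the time that completed peers have waited at a join point, consistent with the claim language mapped above, could be sketched as follows. The weighting formula and all names here are hypothetical; the reference itself only computes outliers relative to the median:

```python
import statistics

def weighted_threshold(completed_durations, wait_times, base_factor=1.5):
    """Execution-time threshold that tightens as peers wait at a join point.

    completed_durations: runtimes of tasks that have already finished.
    wait_times: seconds each finished task has waited at the barrier.
    The longer peers wait, the smaller the allowed overrun -- a
    hypothetical weighting, not a formula from the reference.
    """
    median = statistics.median(completed_durations)
    avg_wait = sum(wait_times) / len(wait_times)
    # Shrink the slack factor as accumulated waiting grows.
    factor = max(1.0, base_factor - avg_wait / (avg_wait + median))
    return factor * median
```

With completed runtimes of 10, 12, and 14 seconds and waits of 5 and 3 seconds, the slack factor drops from 1.5 to 1.25, yielding a 15-second threshold.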
Regarding claims 5 and 16, taking claim 5 as representative, Mantri discloses the elements of claim 1, as discussed above. Mantri also discloses:
identifying the multiple tasks of the workload comprises splitting the workload into the multiple tasks, and wherein the method further comprises distributing the multiple tasks among multiple compute locations of the distributed computing environment (Mantri discloses, at § 1, parallel execution of job tasks in clusters, which discloses identifying the multiple tasks of the workload comprises splitting the workload into the multiple tasks, and wherein the method further comprises distributing the multiple tasks among multiple compute locations of the distributed computing environment.).
Regarding claims 6 and 17, taking claim 6 as representative, Mantri discloses the elements of claim 1, as discussed above. Mantri also discloses:
the remediation includes use of fallback compute infrastructure to perform at least a portion of the workload for at least a defined period of time (Mantri discloses, at § 1, restarting outlier tasks on different, e.g., less congested, machines. See also § 3.1 and § 4.3, which discloses idle slots.).
Regarding claims 7 and 18, taking claim 7 as representative, Mantri discloses the elements of claim 6, as discussed above. Mantri also discloses:
the use of the fallback compute infrastructure includes use of hardware-assisted resumption, to migrate the particular task from a first compute location to a second compute location in the distributed computing environment (Mantri discloses, at § 1, restarting outlier tasks on different, e.g., less congested, machines, which discloses use of hardware-assisted resumption, to migrate the particular task from a first compute location to a second compute location in the distributed computing environment. See also § 3.1.).
Regarding claims 10 and 21, taking claim 10 as representative, Mantri discloses the elements of claim 1, as discussed above. Mantri also discloses:
the method is performed by a first networked processing unit operating as an orchestrator or scheduler of the workload, and wherein the remediation for the particular task is implemented with use of a second networked processing unit (Mantri discloses, at § 1, restarting outlier tasks on different, e.g., less congested, machines, which discloses a first networked processing unit acting as an orchestrator and using a second networked processing unit, e.g., the nodes in the cluster.).
Regarding claims 11 and 22, taking claim 11 as representative, Mantri discloses the elements of claim 10, as discussed above. Mantri also discloses:
the particular task is executed by a first set of compute resources, and wherein the remediation includes use of a second set of compute resources associated with the second networked processing unit (Mantri discloses, at § 1, restarting outlier tasks on different, e.g., less congested, machines, which discloses the particular task is executed by a first set of compute resources, and wherein the remediation includes use of a second set of compute resources associated with the second networked processing unit.).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 8, 19, and 23-25 are rejected under 35 U.S.C. 103 as being unpatentable over Mantri in view of US Publication No. 2020/0065098 by Parandeh et al. (hereinafter referred to as “Parandeh”).
Regarding claims 8 and 19, taking claim 8 as representative, Mantri discloses the elements of claim 6, as discussed above. Mantri also discloses:
the use of the fallback compute infrastructure includes …underutilization of the fallback compute infrastructure (Mantri discloses, at § 1, restarting outlier tasks on different, e.g., less congested, machines. See also § 3.1 and § 4.3, which discloses idle slots, i.e., underutilization.).
Mantri does not explicitly disclose use of a deferred execution arrangement for at least one task in the workload that does not have dependencies, and wherein the use of the deferred execution arrangement is coordinated.
However, in the same field of endeavor (e.g., execution) Parandeh discloses:
deferring execution of divergent iterations that exceed a time threshold (Parandeh discloses, at Figures 4A and 4B and related description, deferring execution when a particular divergent iteration exceeds a time threshold.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Mantri to include deferred execution for divergent instructions, as disclosed by Parandeh, in order to improve performance by providing an additional mechanism to prevent bottlenecks in parallel execution. See Parandeh, ¶ [0005].
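For illustration, the deferred-execution behavior attributed to Parandeh above (setting aside iterations that overrun a time limit so they do not stall the main pass) can be sketched as follows. The function names, the toy work function, and the two-pass structure are hypothetical and do not reproduce the reference's actual mechanism:

```python
import time

def run_with_deferral(iterations, work_fn, time_limit):
    """Run iterations, deferring any that miss a per-iteration deadline.

    Hypothetical sketch of a deferred-execution arrangement: an
    iteration that cannot finish by its deadline is queued and
    re-run in a later pass with no limit, so one divergent
    iteration does not stall the rest.
    """
    deferred = []
    for it in iterations:
        start = time.monotonic()
        done = work_fn(it, deadline=start + time_limit)
        if not done:
            deferred.append(it)          # re-run in a later pass
    for it in deferred:
        work_fn(it, deadline=None)       # no limit on the deferred pass
    return deferred

def flaky(it, deadline):
    # Toy work function: the "slow" iteration misses any finite deadline.
    return it != "slow" or deadline is None

leftover = run_with_deferral(["a", "slow", "b"], flaky, time_limit=0.01)
```

In this sketch only the divergent "slow" iteration is deferred; the others complete in the first pass.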
Regarding claim 23, Mantri discloses the elements of claim 1, as discussed above. Mantri does not explicitly disclose a non-transitory machine-readable storage medium comprising information representative of instructions, wherein the instructions, when executed by processing circuitry, cause the processing circuitry to perform the method.
However, in the same field of endeavor (e.g., execution) Parandeh discloses:
a medium storing executable instructions (Parandeh discloses, at ¶ [0038], implementing the invention in computer readable media.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Mantri to include computer readable media, as disclosed by Parandeh, in order to improve performance by providing flexible implementation mechanisms.
Regarding claim 24, Mantri, as modified, discloses the elements of claim 23, as discussed above. Mantri also discloses the elements of claim 3, which correspond to those of claim 24, as discussed above.
Regarding claim 25, Mantri, as modified, discloses the elements of claim 23, as discussed above. Mantri also discloses the elements of claim 11, which correspond to those of claim 25, as discussed above.
Claims 9 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mantri in view of US Publication No. 2020/0065098 by Parandeh et al. (hereinafter referred to as “Parandeh”).
Regarding claims 9 and 20, taking claim 9 as representative, Mantri discloses the elements of claim 6, as discussed above. Mantri also discloses:
the use of the fallback compute infrastructure is based on a classification of the remediation, the classification provided from among a plurality of …categories according to the particular task (Mantri discloses, at § 5, selecting between different types, i.e., categories, of remedial actions.).
Mantri does not explicitly disclose the aforementioned categories are priority categories.
However, in the same field of endeavor (e.g., execution) Doshi discloses:
priority categories (Doshi discloses, at ¶ [0033] et seq., priority categories.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Mantri to include priority categories, as disclosed by Doshi, in order to improve performance by increasing control over execution.
Conclusion
The following prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure.
US 20160004571 by Smith discloses dynamic migration to avoid hot spots that have exceptional wait times.
US 20200285523 by Bernat (cited by Applicant; Intel reference) discloses resource augmentation.
US 20210109785 by Prabhakaran discloses a waiting threshold and percentage.
US 20110239220 by Gibson discloses monitoring tasks and adjusting resources based on performance.
US 20210200592 by Guim discloses monitoring resource usage and modifying allocation based on QOS rules.
US 11924060 by Smith discloses translating a workload into functions.
US 20160350157 by Necas discloses monitoring execution time and reassigning to a long-running task pool.
US 20140143781 by Yao discloses migrating threads that exceed an execution time threshold.
US 20230199440 by Lee discloses determining whether execution time exceeds a latency threshold.
US 20180285766 by Shen discloses dividing workloads into tasks and diagnosing straggler tasks.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAWN DOMAN whose telephone number is (571)270-5677. The examiner can normally be reached on Monday through Friday 8:30am-6pm Eastern Time.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jyoti Mehta can be reached on 571-270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHAWN DOMAN/Primary Examiner, Art Unit 2183