DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed on 4/2/2025 has been entered. Claims 1-20 remain pending in this application. Applicant’s amendment to claim 2 has overcome the objection previously set forth in the Non-Final Office Action mailed on 2/5/2025.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2 and 4-7 are rejected under 35 U.S.C. 103 as being unpatentable over Aoyama (US Patent No. 10,162,675 B2) in view of Gunter et al. (US Pub. No. 2022/0326988 A1 hereinafter Gunter), and further in view of Paltashev et al. (US Pub. No. 2010/0110083 A1 hereinafter Paltashev).
As per claim 1, Aoyama teaches a method comprising: executing a first execution schedule using a plurality of compute engines (Col. 4, lines 28-37, “Process schedulers which manage processes executed by the main processor 11 and a coprocessor 21 to be described later operate on the operating system of the main processor node 1…Therefore, process schedulers based on the number of the main processor 11 and the coprocessor 21 operate on the operating system.” See also Col. 5, lines 8-19), by the plurality of compute engines, corresponding to a first computing application (Col. 5, lines 20-22, “The main processor 11 includes one or more processor cores 111. The processor core 111 included by the main processor 11 executes a process under control of the main processor process scheduler 12.” See also Col. 4, lines 28-37, “Process schedulers which manage processes executed by the main processor 11 and a coprocessor 21 to be described later operate on the operating system of the main processor node 1…Therefore, process schedulers based on the number of the main processor 11 and the coprocessor 21 operate on the operating system.”); prior to completion of execution of the first execution schedule, switching to executing a second execution schedule, by the plurality of compute engines, corresponding to a second computing application (Col. 5, lines 23-32, “For example, the main processor process scheduler 12 executes a context switch on the basis of a scheduling policy 122 to be described later. That is to say, the main processor process scheduler 12 switches a process executed by the processor core 111.” Col. 5 & 6, lines 48-67 & 1-4, “To be specific, for example, the main processor process scheduler 12 receives status notification information representing that the process has transitioned to the standby state from the coprocessor process scheduler 14…Meanwhile, for example, the main processor process scheduler 12 receives status notification information representing that the process has been dispatched from the coprocessor process scheduler 14.”).
Aoyama fails to explicitly teach the execution schedules being determined prior to the execution of the first execution schedule.
However, Gunter teaches the first execution schedule of a first plurality of runnables (operations) being determined prior to execution of the first computing application and remaining unchanged during execution of the first computing application (¶ [0052], “Each hardware block 12 operates according to its own individualized operations schedule 18. The operations schedules 18 each represent a portion of a program to be executed by the chip 10 as a whole, and each operation schedule 18 individual represents that portion of the program that to be executed by an individual hardware block 12. The operations schedule 18 includes a set of operations to be executed by the hardware block at predetermined counter values.” ¶ [0054]-[0055], “The individualized operation schedules 18 may be particularly useful for applications that are computationally intense, highly repetitive, or both, such as neural network and graphic processing computations. For example, the use of explicitly defined schedules for individual hardware blocks 12 on a chip 10 can be conducive to deterministic operations in which scheduled operations are each executed in a predefined number of clock cycles…The operation schedules 18 for hardware blocks 12 of a chip 10 can be generated by a program compiler. For example, a compiler can process an overall program for the chip and identify the hardware functions will occur at each time increment on chip 10 in order to execute the program. The compiler can parse the functions into operation schedules 18 for each hardware block 12 on the chip 10. The operation schedules 18 are then loaded onto the chip 10 and can be stored in a common chip 10 memory or operational schedules 10 can be distributed to local memories associated with respective hardware blocks 12.” See also Fig. 2A.) 
and the second execution schedule of a second plurality of runnables (operations) being determined prior to execution of the first execution schedule (¶ [0055], “The operation schedules 18 for hardware blocks 12 of a chip 10 can be generated by a program compiler. For example, a compiler can process an overall program for the chip and identify the hardware functions will occur at each time increment on chip 10 in order to execute the program. The compiler can parse the functions into operation schedules 18 for each hardware block 12 on the chip 10. The operation schedules 18 are then loaded onto the chip 10 and can be stored in a common chip 10 memory or operational schedules 10 can be distributed to local memories associated with respective hardware blocks 12.”; see also abstract; ¶ [0003], [0045]-[0055]).
Aoyama and Gunter are considered to be analogous to the claimed invention because they are in the same field of task scheduling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the scheduling system of Aoyama with the schedule generation functionality of Gunter to arrive at the claimed invention. The motivation to modify Aoyama with the teachings of Gunter is that having knowledge of the exact execution schedule prior to processing the execution schedule allows the system to optimize how it handles the processing of the execution schedule.
Aoyama and Gunter fail to teach the schedules including commands that dictate timing and order of execution.
However, Paltashev teaches the first execution schedule including a first set of commands dictating timing and order of execution and the second schedule including a second set of commands dictating timing and order of execution (¶ [0110]-[0122], “…Semaphore P and semaphore V are metacommands that can be configured to provide a capability to manage context execution on software events versus astronomical time in case of time slice counter based management. Both the CPU and the GPU can send these metacommands to the context and manage execution and/or suspension of the context….”; EN: There are a number of metacommands that control the order and timing of execution of the processes during any scheduler’s processing.).
Aoyama, Gunter, and Paltashev are considered to be analogous to the claimed invention because they are in the same field of task scheduling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Aoyama and Gunter’s scheduling method with the timing and order commands of Paltashev to arrive at the claimed invention. The motivation to modify Aoyama and Gunter with the teachings of Paltashev is that it becomes easier to control and/or manipulate execution schedules through the use of timing and order commands taught by Paltashev.
As per claim 2, Aoyama, Gunter, and Paltashev teach the method of claim 1. Aoyama also teaches wherein at least one runnable of the first plurality of runnables is included in the second plurality of runnables (Col. 4, lines 46-63, “That is to say, the main processor node 1 and the coprocessor node 2 execute predetermined processes associated with each other, such as processes which are dependent on each other. To be specific, for example, as shown in FIG. 2, a process 1 is executed by the main processor 11, and a process 2 which is dependent on an arithmetic result of the process 1 is executed by the coprocessor 21.”). See also Gunter (¶ [0045]-[0055]).
As per claim 4, Aoyama, Gunter, and Paltashev teach the method of claim 1. Aoyama also teaches wherein the execution of the first execution schedule is completely terminated prior to completion of the first execution schedule and prior to the execution of the second execution schedule beginning (Col. 9, lines 27-50, “Meanwhile, the main processor process scheduler 12 may simultaneously receive, from the coprocessor process scheduler 14, status notification information showing that a process has transitioned to the standby state and status notification information showing that a process has been dispatched…Thus, the main processor process scheduler 12 checks whether or not a process on the main processor node 1 associated with the process switched by the coprocessor process scheduler 14 is in the waiting state (step S202). In a case where the associated process is not in the waiting state (step S202, no), the main processor process scheduler 12 ends the processing.”).
As per claim 5, Aoyama, Gunter, and Paltashev teach the method of claim 1. Aoyama also teaches wherein an initialization process corresponding to the second execution schedule begins in response to an instruction, corresponding to the switching, for termination of at least a portion of the first execution schedule (Col. 8, lines 16-30, “After that, by using the status notification means 13, the main processor process scheduler 12 notifies status notification information showing that the process has transitioned to the standby state and status notification information showing that a process has been dispatched, to the other process scheduler 14 (the coprocessor process scheduler 14) (step S103).”).
As per claim 6, Aoyama, Gunter, and Paltashev teach the method of claim 1. Aoyama also teaches wherein an initialization process corresponding to the second execution schedule is performed prior to execution of the first execution schedule (Col. 4, lines 46-63, “That is to say, the main processor node 1 and the coprocessor node 2 execute predetermined processes…As described above, a process executed by the main processor node 1 and a process executed by the coprocessor node 2 exclusively run, respectively. Therefore, while the main processor 11 is executing the process 1, the coprocessor node 2 is in the waiting state.”).
As per claim 7, Aoyama, Gunter, and Paltashev teach the method of claim 1. Aoyama also teaches wherein a third execution schedule that is executed using at least one compute engine of the plurality of compute engines during the execution of the first execution schedule continues after the switching to the execution of the second execution schedule (Col. 10, lines 9-34, “Next, with reference to FIG. 7, an operation when the coprocessor process scheduler 14 has received status notification information from the other process scheduler (the main processor process scheduler 12 and the other coprocessor process scheduler 14) will be described…With reference to FIG. 7, by using the status notification means 13, the coprocessor process scheduler 14 receives status notification information showing switching of a process from the other process scheduler (the main processor process scheduler 12 or the other coprocessor process scheduler 14) (step S301).” The aforementioned citation indicates the existence of a third scheduler involved in executing a third process aside from the aforementioned first and second schedulers and processes.).
Claim(s) 3 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Aoyama, Gunter, and Paltashev as applied to claim 1 above, and further in view of Herbert et al. (US Patent No. 10,235,207 B2 hereinafter Herbert).
As per claim 3, Aoyama, Gunter, and Paltashev teach the method of claim 1. Aoyama also teaches the first execution schedule and switching to execution of the second execution schedule (Col. 5, lines 23-32, “For example, the main processor process scheduler 12 executes a context switch on the basis of a scheduling policy 122 to be described later. That is to say, the main processor process scheduler 12 switches a process executed by the processor core 111.” Col. 5 & 6, lines 48-67 & 1-4, “To be specific, for example, the main processor process scheduler 12 receives status notification information representing that the process has transitioned to the standby state from the coprocessor process scheduler 14…Meanwhile, for example, the main processor process scheduler 12 receives status notification information representing that the process has been dispatched from the coprocessor process scheduler 14.”).
Aoyama, Gunter, and Paltashev fail to teach that a portion of the first execution schedule continues to be executed after switching to the second execution schedule.
However, Herbert teaches wherein at least a portion of the first execution schedule continues to be executed after the switching to execution of the second execution schedule (Col. 8, lines 19-56, “Using the resource availability thus determined, compute job management process 400 can attempt to schedule execution of the compute job(s) (and/or sub-tasks) (430)…As will be appreciated in light of the present disclosure, such an attempt can include not only efforts to schedule execution of the given unit(s) of execution by the available compute nodes/components, but also comprehends efforts made to make such compute nodes/components available by way of migrating one or more units of execution to other (available and appropriate) compute nodes/components…In performing such migration, the compute nodes/components to which the unit(s) of execution is (are) migrated would typically need to support some minimal set of functionalities.”).
Aoyama, Gunter, Paltashev, and Herbert are all considered to be analogous to the claimed invention because they are all in the same field of task scheduling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the scheduling method of Aoyama, Gunter, and Paltashev which demonstrates parallel schedulers operating on the multiple nodes with the algorithm of Herbert to arrive at the claimed invention. The motivation to modify Aoyama, Gunter, and Paltashev with the teachings of Herbert is that executing at least a portion of the first execution schedule after switching allows for critical portions of the first execution schedule to continue to execute without interfering with the second execution schedule.
As per claim 8, Aoyama, Gunter, and Paltashev teach the method of claim 1. Aoyama also teaches the first execution schedule and switching to execution of the second execution schedule (Col. 5, lines 23-32, “For example, the main processor process scheduler 12 executes a context switch on the basis of a scheduling policy 122 to be described later. That is to say, the main processor process scheduler 12 switches a process executed by the processor core 111.” Col. 5 & 6, lines 48-67 & 1-4, “To be specific, for example, the main processor process scheduler 12 receives status notification information representing that the process has transitioned to the standby state from the coprocessor process scheduler 14…Meanwhile, for example, the main processor process scheduler 12 receives status notification information representing that the process has been dispatched from the coprocessor process scheduler 14.”).
Aoyama, Gunter, and Paltashev fail to teach completing the currently executing frame before switching to the second schedule.
However, Herbert teaches wherein a currently executing frame of the first execution schedule is completed prior to the switching to the execution of the second execution schedule (Col. 14, lines 3-22, “In response to the receipt of this interrupt request, the preemptible resource ceases execution of the currently executing compute job sub-task (970). Execution of the currently executing compute job sub-task having been preempted (either gracefully or immediately)…” One of ordinary skill in the art will understand that graceful preempting allows the currently executing task to complete before switching.).
Aoyama, Gunter, Paltashev, and Herbert are all considered to be analogous to the claimed invention because they are all in the same field of task scheduling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the scheduling method of Aoyama, Gunter, and Paltashev with the graceful preemption functionality of Herbert to arrive at the claimed invention. The motivation to modify Aoyama, Gunter, and Paltashev with the teachings of Herbert is that allowing the currently executing frame to complete before switching execution schedules avoids any critical portions of the first execution schedule from being prematurely terminated leading to unwanted problems.
Claim(s) 9 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Aoyama in view of Gunter.
As per claim 9, Aoyama teaches a system comprising: one or more processors (Col. 4, lines 15-21, “With reference to FIG. 1, the parallel computer 4 in this exemplary embodiment includes one main processor node 1 (a first node) and one or more coprocessor nodes 2 (a second node).”) to perform operations comprising: directing execution of a first execution schedule by a plurality of compute engines (Col. 4, lines 28-37, “Process schedulers which manage processes executed by the main processor 11 and a coprocessor 21 to be described later operate on the operating system of the main processor node 1…Therefore, process schedulers based on the number of the main processor 11 and the coprocessor 21 operate on the operating system.”; see also Col. 5, lines 8-19); and prior to completion of execution of the first execution schedule and in response to a switching instruction (Col. 6, lines 5-13, “Thus, the main processor process scheduler 12 is configured to detect the processing status of a process executed by the processor core 111 (detect the switching of a process) and transmit status notification information to the other process scheduler.”), directing that a switch be made to execution of a second execution schedule by the plurality of compute engines (Col. 5, lines 23-32, “For example, the main processor process scheduler 12 executes a context switch on the basis of a scheduling policy 122 to be described later. That is to say, the main processor process scheduler 12 switches a process executed by the processor core 111.” Col. 5 & 6, lines 48-67 & 1-4, “To be specific, for example, the main processor process scheduler 12 receives status notification information representing that the process has transitioned to the standby state from the coprocessor process scheduler 14…Meanwhile, for example, the main processor process scheduler 12 receives status notification information representing that the process has been dispatched from the coprocessor process scheduler 14.”).
Aoyama fails to teach the execution schedules being deterministic.
However, Gunter teaches wherein the first execution schedule and the second execution schedule are deterministic (¶ [0054], “The individualized operation schedules 18 may be particularly useful for applications that are computationally intense, highly repetitive, or both, such as neural network and graphic processing computations. For example, the use of explicitly defined schedules for individual hardware blocks 12 on a chip 10 can be conducive to deterministic operations in which scheduled operations are each executed in a predefined number of clock cycles.”).
Aoyama and Gunter are considered to be analogous to the claimed invention because they are in the same field of task scheduling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the scheduling system of Aoyama with the schedule generation functionality of Gunter to arrive at the claimed invention. The motivation to modify Aoyama with the teachings of Gunter is that having knowledge of the exact execution schedule prior to processing the execution schedule allows the system to optimize how it handles the processing of the execution schedule.
As per claim 11, Aoyama and Gunter teach the system of claim 9. Aoyama also teaches wherein the operations further comprise directing that the execution of the first execution schedule be completely terminated prior to completion of the first execution schedule and prior to the execution of the second execution schedule beginning (Col. 9, lines 27-50, “Meanwhile, the main processor process scheduler 12 may simultaneously receive, from the coprocessor process scheduler 14, status notification information showing that a process has transitioned to the standby state and status notification information showing that a process has been dispatched…Thus, the main processor process scheduler 12 checks whether or not a process on the main processor node 1 associated with the process switched by the coprocessor process scheduler 14 is in the waiting state (step S202). In a case where the associated process is not in the waiting state (step S202, no), the main processor process scheduler 12 ends the processing.”).
As per claim 12, Aoyama and Gunter teach the system of claim 9. Aoyama also teaches wherein the operations further comprise reporting to a schedule monitoring system that the switching to execution of the second execution schedule has occurred (Col. 8, lines 24-51, “After that, by using the status notification means 13, the main processor process scheduler 12 notifies status notification information showing that the process has transitioned to the standby state and status notification information showing that a process has been dispatched, to the other process scheduler 14 (the coprocessor process scheduler 14) (step S103)…For example, the main processor process scheduler 12 can transmit, at different timings, status notification information showing that a process has transitioned to the standby state and status notification information showing that a process has been dispatched.”).
As per claim 13, Aoyama and Gunter teach the system of claim 9. Aoyama also teaches wherein the operations further comprise reporting to a schedule monitoring system that execution of at least a portion of the first execution schedule has been terminated prior to directing switching to execution of the second execution schedule (Col. 8, lines 24-51, “For example, after (or at the same time as) saving the context of the target processor core 111 as the process context 124, the main processor process scheduler 12 transmits status notification information showing that the process has transitioned to the standby state. Then, after (or at the same time as) restoring the process context 124 of a dispatch target to the processor core 111 of the context switch target, the main processor process scheduler 12 transmits status notification information showing that a process has been dispatched.”).
Claim(s) 10 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Aoyama and Gunter as applied to claim 9 above, and further in view of Herbert.
As per claim 10, Aoyama and Gunter teach the system of claim 9. Aoyama also teaches the first execution schedule and switching to execution of the second execution schedule (Col. 5, lines 23-32, “For example, the main processor process scheduler 12 executes a context switch on the basis of a scheduling policy 122 to be described later. That is to say, the main processor process scheduler 12 switches a process executed by the processor core 111.” Col. 5 & 6, lines 48-67 & 1-4, “To be specific, for example, the main processor process scheduler 12 receives status notification information representing that the process has transitioned to the standby state from the coprocessor process scheduler 14…Meanwhile, for example, the main processor process scheduler 12 receives status notification information representing that the process has been dispatched from the coprocessor process scheduler 14.”).
Aoyama and Gunter fail to teach that a portion of the first execution schedule continues to be executed after switching to the second execution schedule.
However, Herbert teaches wherein at least a portion of the first execution schedule continues to be executed after the switching to execution of the second execution schedule (Col. 8, lines 19-56, “Using the resource availability thus determined, compute job management process 400 can attempt to schedule execution of the compute job(s) (and/or sub-tasks) (430)…As will be appreciated in light of the present disclosure, such an attempt can include not only efforts to schedule execution of the given unit(s) of execution by the available compute nodes/components, but also comprehends efforts made to make such compute nodes/components available by way of migrating one or more units of execution to other (available and appropriate) compute nodes/components…In performing such migration, the compute nodes/components to which the unit(s) of execution is (are) migrated would typically need to support some minimal set of functionalities.”).
Aoyama, Gunter, and Herbert are considered to be analogous to the claimed invention because they are in the same field of task scheduling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the scheduling system of Aoyama and Gunter with the switching/migration functionality of Herbert to arrive at the claimed invention. The motivation to modify Aoyama and Gunter with the teachings of Herbert is that executing at least a portion of the first execution schedule after switching allows for critical portions of the first execution schedule to continue to execute without interfering with the second execution schedule.
As per claim 14, Aoyama and Gunter teach the system of claim 9. Aoyama also teaches the first execution schedule and switching to execution of the second execution schedule (Col. 5, lines 23-32, “For example, the main processor process scheduler 12 executes a context switch on the basis of a scheduling policy 122 to be described later. That is to say, the main processor process scheduler 12 switches a process executed by the processor core 111.” Col. 5 & 6, lines 48-67 & 1-4, “To be specific, for example, the main processor process scheduler 12 receives status notification information representing that the process has transitioned to the standby state from the coprocessor process scheduler 14…Meanwhile, for example, the main processor process scheduler 12 receives status notification information representing that the process has been dispatched from the coprocessor process scheduler 14.”).
Aoyama and Gunter fail to teach reconfiguring some of the processing clients upon switching from the first to the second execution schedule.
However, Herbert teaches wherein one or more processing clients corresponding to execution of the first execution schedule and the second execution schedule are reconfigured for the switching to execution of the second execution schedule (Col. 12, lines 1-25, “Alternatively, if one or more of the resources needed for the execution of the given compute job or a subtask thereof cannot be assigned (e.g., due to failure, use by another unit of execution of either the same compute job or that of another compute job, or the like) (650), a determination is made as to whether the one or more resources are preemptible (e.g., the one or more resources are now in use by another compute job, but that use can be migrated elsewhere or otherwise preempted) (660).”).
Aoyama, Gunter, and Herbert are considered to be analogous to the claimed invention because they are in the same field of task scheduling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the scheduling system of Aoyama and Gunter with the processing resource allocation functionality of Herbert to arrive at the claimed invention. The motivation to modify Aoyama and Gunter with the teachings of Herbert is that the ability to reconfigure processing clients gives the system the ability to execute any execution schedule regardless of processing needs.
As per claim 15, Aoyama and Gunter teach the system of claim 9.
Aoyama and Gunter fail to explicitly teach the system being used for: implementing a robot, performing conversational AI operations, generating synthetic data, a data center, or cloud computing.
However, Herbert teaches wherein the system is comprised in at least one of:
a system implemented using a robot;
a system for performing conversational AI operations;
a system for generating synthetic data;
a system implemented at least partially in a data center; or
a system implemented at least partially using cloud computing resources (Col. 3, lines 45-67, “FIG. 1 is a block diagram illustrating an example of a cloud computing architecture, according to methods and systems such as those disclosed herein. FIG. 1 thus illustrates a cloud computing architecture 100.”).
Aoyama, Gunter, and Herbert are considered to be analogous to the claimed invention because they are in the same field of task scheduling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the scheduling system of Aoyama and Gunter in the cloud computing architecture of Herbert to arrive at the claimed invention.
Claim(s) 16 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Aoyama in view of Gunter, and further in view of Herr et al. (US Pub. No. 2019/0317740 hereinafter Herr).
As per claim 16, Aoyama teaches one or more processors comprising: processing circuitry (Col. 4, lines 15-21, “With reference to FIG. 1, the parallel computer 4 in this exemplary embodiment includes one main processor node 1 (a first node) and one or more coprocessor nodes 2 (a second node).”) to cause performance of operations comprising: prior to completion of execution of a first execution schedule using a plurality of compute engines, as managed by a task managing system, directing the task managing system to cause a switch to execution of a second execution schedule by the plurality of compute engines (Col. 6, lines 19-38, “FIG. 4 shows an example of a configuration used when the main processor process scheduler 12 executes a context switch. With reference to FIG. 4, the main processor process scheduler 12 includes the scheduling policy 122, a process switch means 123…The process switch means 123 is a means for switching a process executed by the processor core 111. For example, the process switch means 123 saves, as the process context 124, the context of the process core 111 on which a process as the target of switching in a context switch is running. Then, the process switch means 123 restores the process context 124 of a dispatch target to the target processor core 111.”).
Aoyama fails to teach the execution schedules being deterministic and unchanged during different execution iterations.
However, Gunter teaches the first execution schedule and the second execution schedule both being deterministic and static such that the first execution schedule and the second execution schedule are unchanged during different execution iterations (¶ [0054], “The individualized operation schedules 18 may be particularly useful for applications that are computationally intense, highly repetitive, or both, such as neural network and graphic processing computations. For example, the use of explicitly defined schedules for individual hardware blocks 12 on a chip 10 can be conducive to deterministic operations in which scheduled operations are each executed in a predefined number of clock cycles.” See also Fig. 2A.).
Aoyama and Gunter are considered to be analogous to the claimed invention because they are in the same field of task scheduling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the scheduling system of Aoyama with the schedule generation functionality of Gunter to arrive at the claimed invention. The motivation to modify Aoyama with the teachings of Gunter is that having knowledge of the exact execution schedule prior to processing the execution schedule allows the system to optimize how it handles the processing of the execution schedule.
Aoyama and Gunter fail to explicitly teach the execution schedules being processed by different types of compute engines.
However, Herr teaches in which two or more of the compute engines are of different types (¶ [0039], “FIG. 1 depicts a block diagram illustrating an example heterogeneous system 100. In the illustrated example of FIG. 1, the heterogeneous system 100 includes an example CPU 102, example storage 104, an example FPGA 106, an example VPU 108, and an example GPU 110.”).
Aoyama, Gunter, and Herr are all considered to be analogous to the claimed invention because they are all in the same field of task scheduling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the scheduling method of Aoyama and Gunter within the heterogeneous environment of Herr to arrive at the claimed invention. This implementation would yield predictable results under MPEP § 2143 because all three references deal with task scheduling and swapping execution of one schedule for another when necessary.
As per claim 18, Aoyama, Gunter, and Herr teach the one or more processors of claim 16. Aoyama also teaches wherein the operations further comprise directing the task managing system (Col. 4, lines 28-37, “Process schedulers which manage processes executed by the main processor 11 and a coprocessor 21 to be described later operate on the operating system of the main processor node 1.”) to cause the execution of the first execution schedule to be completely terminated prior to completion of the first execution schedule and prior to causing the execution of the second execution schedule to start (Col. 9, lines 27-50, “Meanwhile, the main processor process scheduler 12 may simultaneously receive, from the coprocessor process scheduler 14, status notification information showing that a process has transitioned to the standby state and status notification information showing that a process has been dispatched…Thus, the main processor process scheduler 12 checks whether or not a process on the main processor node 1 associated with the process switched by the coprocessor process scheduler 14 is in the waiting state (step S202). In a case where the associated process is not in the waiting state (step S202, no), the main processor process scheduler 12 ends the processing.”).
As per claim 19, Aoyama, Gunter, and Herr teach the one or more processors of claim 16. Aoyama also teaches wherein the directing to switch to execution of the second execution schedule is communicated to the task managing system in response to receiving an indication from the task managing system that execution of at least a portion of the first execution schedule has been terminated (Col. 8 & 9, lines 55-67 & 1-22, “That is to say, the coprocessor process scheduler 14 determines a process to switch in a context switch in accordance with a scheduling policy included by the coprocessor process scheduler 14 (step S101; see FIG. 5)…After that, by using the status notification means 13, the coprocessor process scheduler 14 notifies status notification information showing that the process has transitioned to the standby state and status notification information showing that a process has been dispatched, to the other process scheduler (the main processor process scheduler 12 and the other coprocessor process scheduler 14) (step S103; see FIG. 5).”).
Claim(s) 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Aoyama, Gunter, and Herr as applied to claim 16 above, and further in view of Herbert.
As per claim 17, Aoyama, Gunter, and Herr teach the one or more processors of claim 16. Aoyama also teaches the first execution schedule and switching to execution of the second execution schedule (Col. 5, lines 23-32, “For example, the main processor process scheduler 12 executes a context switch on the basis of a scheduling policy 122 to be described later. That is to say, the main processor process scheduler 12 switches a process executed by the processor core 111.” Col. 5 & 6, lines 48-67 & 1-4, “To be specific, for example, the main processor process scheduler 12 receives status notification information representing that the process has transitioned to the standby state from the coprocessor process scheduler 14…Meanwhile, for example, the main processor process scheduler 12 receives status notification information representing that the process has been dispatched from the coprocessor process scheduler 14.”).
Aoyama, Gunter, and Herr fail to teach that a portion of the first execution schedule continues to be executed after switching to the second execution schedule.
However, Herbert teaches wherein at least a portion of the first execution schedule continues to be executed after the switching to execution of the second execution schedule (Col. 8, lines 19-56, “Using the resource availability thus determined, compute job management process 400 can attempt to schedule execution of the compute job(s) (and/or sub-tasks) (430)…As will be appreciated in light of the present disclosure, such an attempt can include not only efforts to schedule execution of the given unit(s) of execution by the available compute nodes/components, but also comprehends efforts made to make such compute nodes/components available by way of migrating one or more units of execution to other (available and appropriate) compute nodes/components…In performing such migration, the compute nodes/components to which the unit(s) of execution is (are) migrated would typically need to support some minimal set of functionalities.”).
Aoyama, Gunter, Herr, and Herbert are considered to be analogous to the claimed invention because they are in the same field of task scheduling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the scheduling system of Aoyama, Gunter, and Herr with the switching/migration functionality of Herbert to arrive at the claimed invention. The motivation to modify Aoyama, Gunter, and Herr with the teachings of Herbert is that executing at least a portion of the first execution schedule after switching allows for critical portions of the first execution schedule to continue to execute without interfering with the second execution schedule.
As per claim 20, Aoyama, Gunter, and Herr teach the one or more processors of claim 16. Aoyama also teaches the first execution schedule and switching to execution of the second execution schedule (Col. 5, lines 23-32, “For example, the main processor process scheduler 12 executes a context switch on the basis of a scheduling policy 122 to be described later. That is to say, the main processor process scheduler 12 switches a process executed by the processor core 111.” Col. 5 & 6, lines 48-67 & 1-4, “To be specific, for example, the main processor process scheduler 12 receives status notification information representing that the process has transitioned to the standby state from the coprocessor process scheduler 14…Meanwhile, for example, the main processor process scheduler 12 receives status notification information representing that the process has been dispatched from the coprocessor process scheduler 14.”).
Aoyama, Gunter, and Herr fail to teach reconfiguring some of the processing clients upon switching from the first to the second execution schedule.
However, Herbert teaches wherein one or more processing clients corresponding to execution of the first execution schedule and the second execution schedule are reconfigured for the switching to execution of the second execution schedule (Col. 12, lines 1-25, “Alternatively, if one or more of the resources needed for the execution of the given compute job or a subtask thereof cannot be assigned (e.g., due to failure, use by another unit of execution of either the same compute job or that of another compute job, or the like) (650), a determination is made as to whether the one or more resources are preemptible (e.g., the one or more resources are now in use by another compute job, but that use can be migrated elsewhere or otherwise preempted) (660).”).
Aoyama, Gunter, Herr, and Herbert are considered to be analogous to the claimed invention because they are in the same field of task scheduling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the scheduling system of Aoyama, Gunter, and Herr with the processing resource allocation functionality of Herbert to arrive at the claimed invention. The motivation to modify Aoyama, Gunter, and Herr with the teachings of Herbert is that the ability to reconfigure processing clients gives the system the ability to execute any execution schedule regardless of processing needs.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant has amended the claims with new limitations that change the scope of the claimed invention. Therefore, the amended claims necessitate the new grounds of rejection, as addressed above. For the reasons indicated above, the amended claims are not allowable over the previously cited prior art in combination with the additional reference, which was necessitated by the amendment.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Oetsch et al. (US Pub. No. 2022/0358421 A1) teaches switching between execution schedules. Taira et al. (US Pub. No. 2016/0085591 A1) teaches generating a first and second execution schedule based on user input.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN ROBERT DAKITA EWALD whose telephone number is (703)756-1845. The examiner can normally be reached Monday-Friday: 9:00-5:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lewis Bullock can be reached at (571)272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.D.E./Examiner, Art Unit 2199
/LEWIS A BULLOCK JR/Supervisory Patent Examiner, Art Unit 2199