Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 1-3 and 10-12 are currently pending and have been examined.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Kodialam et al. (U.S. Pub. No. 20120284727 A1) in view of Suzuki et al. (U.S. Pub. No. 20150082314 A1), further in view of Ito et al. (U.S. Pub. No. 20150049627 A1), further in view of Applicant Admitted Prior Art (AAPA), and further in view of Ahmed et al. (U.S. Pub. No. 20160328272 A1).
Kodialam, Suzuki, AAPA and Ahmed were cited in a previous Office action.
As per claim 1, Kodialam teaches the invention as claimed including a method performed in at least one processor of a control node to split and distribute a processing task to multiple calculating nodes for synchronization of their data processing in a calculating system (par. 0002 MapReduce includes a map phase and a reduce phase. In the map phase, a dataset is partitioned into several smaller chunks that are assigned to individual nodes for partial computation of results; par. 0003 a method and system for scheduling tasks is provided. A plurality of lower bound completion times … is determined for each of a plurality of jobs, each of the plurality of jobs including a respective subset plurality of tasks. A task schedule is determined for each of a plurality of processors based on the lower bound completion times), wherein the calculating system comprises a primary system input interface … (par. 0061 computer 500 also includes one or more network interfaces; par. 0022 server 102 receiving at least one new job including tasks. This implies using an interface to receive), a control node and the multiple calculating nodes which are independent and perform data processing in parallel (par. 0002 MapReduce includes a map phase and a reduce phase. In the map phase, a dataset is partitioned into several smaller chunks that are assigned to individual nodes for partial computation of results. The results are computed at each node in the form of key-value pairs on the original data set; MapReduce jobs are often performed in parallel, scheduling problems often arise due to varying workloads and job size distributions) … the method comprising:
receiving a second processing task from the secondary system input interface (par. 0015 Server 102 may receive groups of jobs in bulk or receive individual jobs one at a time [implicitly receiving via an interface]; par. 0022 server 102 receiving at least one new job including tasks that need to be assigned to clients);
splitting the second processing task into a number of execution requests according to the number of the multiple calculating nodes (par. 0002 In the map phase, a dataset is partitioned into several smaller chunks that are assigned to individual nodes for partial computation of results);
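The partitioning step in Kodialam's map phase, as cited above, can be sketched as follows. This is a minimal illustration only; the function name and round-robin strategy are hypothetical and are not taken from the reference:

```python
def split_task(records, num_nodes):
    """Partition a dataset into one chunk (execution request) per calculating node."""
    chunks = [[] for _ in range(num_nodes)]
    for i, record in enumerate(records):
        chunks[i % num_nodes].append(record)  # round-robin assignment to nodes
    return chunks
```

Each resulting chunk would correspond to one execution request sent to one calculating node.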
calculating … [completion] time … for each execution request (par. 0026 The finish time of job j on a client or processor p.epsilon.M.sub.j, is denoted by f.sub.jp, the time at which the task of job j is completed on processor p. The completion time of a job, C.sub.j is defined as the time at which all tasks belonging to a job j are finished; par. 0033 the completion time of job j cannot be less than the sum of the arrival time of the job and the maximum processing time of the job on any processor);
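Kodialam's definitions (f_jp as the finish time of job j's task on processor p, and C_j as the time at which all tasks of job j are finished) reduce to a maximum over the per-processor finish times. A hypothetical sketch, with assumed names:

```python
def completion_time(finish_times):
    # finish_times maps each processor p to f_jp, the finish time of job j's
    # task on p. C_j is the time at which all tasks are finished, i.e. the
    # maximum f_jp over the processors assigned to the job.
    return max(finish_times.values())
```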
sending each of the execution requests comprising the … [completion] time from the control node to respective multiple calculating nodes to control when the respective multiple calculating nodes start processing the execution requests (par. 0015 … all tasks associated with each job are assigned to clients 104).
Kodialam does not expressly teach: calculating an execute time relative to the time reference when the execution requests are to start processing by at least one processor of the multiple calculating nodes by combining, for each execution request, the time reference with the respective signaling transmission time determined for the respective calculating node such that the execute time is individually offset for each execution request based on its respective signaling transmission time.
However, Suzuki teaches: calculating an execute time relative to the time reference when the execution requests are to start processing by at least one processor of the multiple calculating nodes by combining, for each execution request, the time reference with the respective signaling transmission time determined for the respective calculating node such that the execute time is individually offset for each execution request based on its respective signaling transmission time (par. 0104 … the control section 24 may determine the execution start clock time [execute time] of the placement-target task by adding communication overhead [signaling transmission time] between cores to the task placement consideration clock time [time reference]).
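The mapped limitation — adding a per-node communication overhead to a common time reference to obtain an individually offset execute time — can be illustrated as follows. The function and dictionary names are hypothetical, not drawn from Suzuki:

```python
def execute_times(time_reference, transmission_times):
    # Offset each execution request's start time by that node's signaling
    # transmission time, so each node receives an individually adjusted
    # execute time relative to the shared time reference.
    return {node: time_reference + delay
            for node, delay in transmission_times.items()}
```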
It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine the technique of determining execution start clock time of tasks of Suzuki with the system and method of Kodialam, resulting in a system and method that provides for determining execution start time of jobs based on a reference clock time and communication overhead time as in Suzuki. One of ordinary skill in the art would have been motivated to make this combination for the purpose of dynamically controlling task scheduling, reducing the amount of core idle time, and improving the performance of execution of a targeted system (par. 0017).
Kodialam and Suzuki do not expressly describe: querying any one of the multiple calculating nodes for a time reference retrieved from a time source common to all of the multiple calculating nodes; for each execution request, determining a respective signaling transmission time corresponding to a respective one of the multiple calculating nodes as an amount of time required to transmit signals to that respective calculating node.
However, Ito teaches: querying any one of the multiple calculating nodes for a time reference retrieved from a time source common to all of the multiple calculating nodes; for each execution request (par. 0001 the client device 101 transmits a request packet which requests a reference time to the time synchronization server device 103 via the transfer device 102. Then, the client device 101 receives a response packet including information related to a reference time in response to the request packet from the time synchronization server device 103, and corrects the current time), determining a respective signaling transmission time corresponding to a respective one of the multiple calculating nodes as an amount of time required to transmit signals to that respective calculating node (par. 0035 In step S303, the communication delay calculation unit 206 calculates a communication delay. A communication delay may be calculated using time information included in the request packet and response packet, for example. More specifically, as time information, time point t1 at which a request packet is transmitted, time point t2 at which the request packet is received at the time synchronization server device 103, and time point t3 at which a response packet is transmitted by the time synchronization server device 103 are recorded, and time point t4 at which the response packet is received at the receive processing unit 205 is obtained).
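Ito's four recorded time points (t1 through t4) correspond to the standard round-trip measurement used in NTP-style time synchronization. Assuming a symmetric network path (an assumption not stated in the citation above), the one-way communication delay is commonly estimated as:

```python
def one_way_delay(t1, t2, t3, t4):
    # t1: request packet transmitted by the client
    # t2: request packet received at the time synchronization server
    # t3: response packet transmitted by the server
    # t4: response packet received back at the client
    # Round-trip time net of server processing, halved under the
    # symmetric-path assumption.
    return ((t4 - t1) - (t3 - t2)) / 2
```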
It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine the technique of calculating a communication delay of Ito with the system and method of Kodialam and Suzuki, resulting in a system and method which provides for obtaining a reference clock time and calculating a communication delay for each of a plurality of jobs as in Ito. One of ordinary skill in the art would have been motivated to make this combination for the purpose of improving accuracy in estimating a communication delay (par. 0042).
Kodialam, Suzuki and Ito do not expressly describe: synchronizing each of the execution requests to the time source while combining each of the execution requests into a single output.
However, AAPA teaches: synchronizing each of the execution requests to the time source while combining each of the execution requests into a single output (par. 0004 Synchronized parallelization is made possible by clearly defined data processing rules, specifying input data scheduled in time and resulting output data scheduled in time. In this way, data processing may be divided over parallel processors using the same processing and scheduling rules, as long as they are synchronized to the same time reference. FIG. 1 shows an example system implementing rules-based parallel data processing, where a block diagram and a graphic diagram are shown to illustrate how the system splits, processes and combines data [into a single output]).
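The split-process-combine pattern illustrated by AAPA's FIG. 1 might be sketched as follows. This is a serialized stand-in for the parallel phase, with hypothetical names, offered only to clarify the data flow:

```python
def process_in_parallel(data, num_nodes, process):
    # Split: one chunk per calculating node (strided partition).
    chunks = [data[i::num_nodes] for i in range(num_nodes)]
    # Process: each chunk handled independently (shown serially here).
    partial = [[process(x) for x in chunk] for chunk in chunks]
    # Combine: interleave the partial results back into a single output.
    combined = [None] * len(data)
    for i, chunk_result in enumerate(partial):
        combined[i::num_nodes] = chunk_result
    return combined
```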
It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to modify the teaching of Kodialam, Suzuki and Ito by incorporating the technique of synchronized parallelization of data processing tasks as set forth by AAPA because it would provide for synchronizing parallel execution of processing tasks by ensuring the data processing task interact and complete in a coordinated, predictable manner based on a same reference time.
Kodialam, Suzuki, Ito and AAPA do not expressly disclose: wherein the calculating system that processes data from the primary system input interface is extended with the secondary system input interface and that calculating system receives a first processing task at the primary system input interface.
However, Ahmed teaches: wherein the calculating system that processes data from the primary system input interface is extended with the secondary system input interface and that calculating system receives a first processing task at the primary system input interface (par. 0007 the rendering core includes a first application program interface configured to receive … a first set of tasks generated by a first set of the processing domains and to provide the first set of tasks to the scheduler. … a second application program interface configured to receive and manage a second set of tasks; For purposes of examination, the vehicle interface system is interpreted as being extended by a second application program interface to handle a second set of tasks).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of providing first and second application interfaces of Ahmed with the system and method of Kodialam, resulting in a system and method which provides for receiving first and second sets of tasks via the first and second interfaces as in Ahmed. One of ordinary skill in the art would have been motivated to make this combination for the purpose of integrating the operations of multiple different domains on a single platform while retaining a degree of separation between the domains to ensure security and safety (par. 0036).
As per claim 2, Suzuki further teaches: wherein calculating an execute time is performed by incrementing the time reference by a period of time greater than the longest signaling transmission time necessary for sending the execution requests in parallel to the respective multiple calculating nodes (par. 0091 retains a clock time at which placement of a placement-target task becomes ready for execution, as a task placement consideration clock time which is represented by making a task set execution start clock time a reference clock time. Here, the task set execution start clock time is a clock time at which execution of a relevant task set can be started; par. 0104 … the control section 24 may determine the execution start clock time of the placement-target task by adding communication overhead between cores to the task placement consideration clock time).
As per claim 3, Suzuki further teaches: wherein calculating an execute time is performed by incrementing the time reference by a period of time greater than a sum of the signaling transmission times for sending each execution request to each calculating node sequentially (par. 0104 … the control section 24 may determine the execution start clock time of the placement-target task by adding communication overhead between cores to the task placement consideration clock time).
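The two claimed variants of incrementing the time reference — by more than the longest single signaling transmission time when requests are sent in parallel (claim 2), or by more than the sum of the transmission times when requests are sent sequentially (claim 3) — can be illustrated side by side. The function names and margin parameter are hypothetical:

```python
def execute_time_parallel(time_reference, transmission_times, margin=0.001):
    # Requests are sent in parallel, so the slowest link bounds when the
    # last node receives its request (claim 2: greater than the longest
    # signaling transmission time).
    return time_reference + max(transmission_times) + margin

def execute_time_sequential(time_reference, transmission_times, margin=0.001):
    # Requests are sent one after another, so the per-node delays
    # accumulate (claim 3: greater than the sum of the signaling
    # transmission times).
    return time_reference + sum(transmission_times) + margin
```

Both variants guarantee the chosen execute time lies after every node has received its execution request.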
As per claim 10, it is a control node having similar limitations as claim 1. Thus, claim 10 is rejected for same rationale as claim 1. Suzuki further teaches: at least one processor; and at least one memory storing instructions executable by the at least one processor to perform operations (par. 0062 In FIG. 1, the task allocation device 1 is constituted of a computer device including a central processing unit (CPU) 1001, a random access memory (RAM) 1002, a read only memory (ROM) 1003 and a storage device 1004).
As per claim 11, it is a control node having similar limitations as claim 2. Thus, claim 11 is rejected for same rationale as claim 2.
As per claim 12, it is a control node having similar limitations as claim 3. Thus, claim 12 is rejected for same rationale as claim 3.
Response to Arguments
Applicant's arguments with respect to claims 1 and 10 have been considered but are moot in view of the new ground(s) of rejection.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. Patent No. 8595732 B2 teaches reducing the response time of flexible highly data parallel task by assigning task sets using dynamic combined longest processing time scheme.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Willy W. Huaracha whose telephone number is (571) 270-5510. The examiner can normally be reached Monday through Friday, 8:30 am to 5:00 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached on (571) 272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WH/
Examiner, Art Unit 2195
/BRADLEY A TEETS/ Supervisory Patent Examiner, Art Unit 2197