DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending for examination.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1, Statutory Category: Yes, claim 1 is a method for assigning a task that recites a series of steps and therefore falls within the statutory category of a process.
Step 2A, Prong 1: Judicial Exception Recited: Yes, the claim recites: “generating a testing task list based on a received testing task request, the testing task list comprising at least one testing task; and adopting, in response to calculating load information of the currently assigned task executed by all terminals, a different task assignment rule based on the load information to assign a testing task in the testing task list to each terminal, the load information comprising an estimated time consumption and an amount of generated data of the currently assigned task executed by respective terminals.” As drafted, the claim as a whole recites a method including steps that could be performed in the human mind but for the recitation of generic computing components. The human mind can easily determine/create/generate a testing task list based on the received request, and adopt/use/apply/select/implement a different task assignment rule based on the calculated load information to assign a testing task in the testing task list to each terminal. Therefore, but for the recitation of generic computing components, these steps fall within the mental processes grouping of abstract ideas, i.e., concepts that can be performed in the human mind (including an observation, evaluation, judgment, or opinion).
Therefore, yes, the claim recites a judicial exception.
Step 2A, Prong 2: Integrated into a Practical Application: No, this judicial exception is not integrated into a practical application. In particular, the claim recites the additional limitation of “acquiring a currently assigned task of at least one terminal,” which is insignificant pre-solution data gathering (see MPEP § 2106.05(g)). Accordingly, even in combination, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to the abstract idea.
Step 2B: Claim Provides an Inventive Concept: No. The additional element of “acquiring a currently assigned task of at least one terminal” is insignificant pre-solution data gathering (see MPEP § 2106.05(g)) and is well-understood, routine, conventional activity (see MPEP § 2106.05(d)). Courts have identified “receiving and transmitting data,” “storing and retrieving information,” et cetera as well-understood, routine, and conventional, and as mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)). These additional elements, individually and in combination, do not amount to significantly more than the exception itself or provide an inventive concept under Step 2B.
Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. Here, the “acquiring” step was considered extra-solution activity in Step 2A as insignificant data gathering, and it is well-understood, routine, conventional activity in the field. The “acquiring” step serves the purposes of “communication” and “transmitting the data,” activities the courts have recognized as conventional (receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016); see MPEP § 2106.05(d)(II)). Accordingly, the conclusion that “acquiring” is well-understood, routine, conventional activity is supported under Berkheimer option 2.
For these reasons, there is no inventive concept in the claim, and thus the claim is ineligible.
Independent claims 8 and 15 are rejected for the same reasons as claim 1 above. Claim 8 further recites “An electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory is configured to store a plurality of instructions executable by the at least one processor to enable the at least one processor to perform operations”. Claim 15 further recites “A non-transitory computer readable storage medium, storing a computer instruction, wherein the computer instruction, when executed by a processor, causes the processor to perform operations”. These additional elements are directed to generic computing components/functions (see MPEP § 2106.05(b)) and amount to merely applying the abstract idea (see MPEP § 2106.05(f)).
With respect to dependent claim 2, the claim recites assigning, in response to being unable to calculate the load information of the currently assigned task executed by all the terminals and the testing task in the testing task list being greater than a preset quantity, a testing task in the testing task list to each terminal in real time (the “assigning” step is treated as part of the abstract idea and is analogous to a mental process, i.e., a concept that can be performed in the human mind).
With respect to dependent claim 3, the claim recites wherein adopting the different task assignment rule based on the load information to assign the testing task in the testing task list to each terminal comprises: ascertaining a light-load terminal and a heavy-load terminal in the at least one terminal based on the load information; and adopting, based on proportions of the light-load terminal and the heavy-load terminal in all the terminals, the different task assignment rule to assign the testing task in the testing task list to each terminal (“ascertaining a light-load terminal and a heavy-load terminal” and “adopting, based on proportions of the light-load terminal and the heavy-load terminal in all the terminals, the different task assignment rule to assign” are treated as part of the abstract idea and are analogous to mental processes, i.e., concepts that can be performed in the human mind).
With respect to dependent claim 4, the claim recites wherein adopting, based on proportions of the light-load terminal and the heavy-load terminal in all the terminals, the different task assignment rule to assign the testing task in the testing task list to each terminal comprises: assigning, in response to the at least one terminal comprising a light-load terminal and a heavy-load terminal, a part of testing tasks in the test task list to the light-load terminal in sequence until load information of the light-load terminal is identical to load information of the heavy-load terminal; and assigning, in response to ascertaining that there is a remaining testing task in the testing task list, the remaining testing task to all the terminals in real time (“assigning, in response to the at least one terminal comprising a light-load terminal and a heavy-load terminal, a part of testing tasks” and “assigning, in response to ascertaining that there is a remaining testing task in the testing task list, the remaining testing task” are treated as part of the abstract idea and are analogous to mental processes, i.e., concepts that can be performed in the human mind).
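The allocation recited in claim 4 amounts to a greedy fill of the light-load terminal toward the heavy-load level, the kind of bookkeeping a person can perform mentally or with pen and paper. A minimal sketch for illustration only (the function and parameter names are hypothetical, not drawn from the application, and load is assumed to be a single comparable number that grows by a known per-task amount so that equality is reachable):

```python
def fill_light_load(task_list, light_load, heavy_load, task_load):
    """Illustrative sketch of claim 4 (hypothetical names): assign
    tasks in sequence to the light-load terminal until its load
    information equals that of the heavy-load terminal; any tasks
    still remaining would then be assigned to all terminals in
    real time per the final limitation."""
    assigned, remaining = [], list(task_list)
    while remaining and light_load < heavy_load:
        task = remaining.pop(0)       # take tasks in sequence
        assigned.append(task)
        light_load += task_load(task) # light terminal's load grows
    return assigned, remaining, light_load
```

For example, with an initial light load of 0, a heavy load of 3, and a unit load per task, the first three tasks go to the light-load terminal and the rest remain for real-time assignment.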
With respect to dependent claim 5, the claim recites wherein adopting, based on proportions of the light-load terminal and the heavy-load terminal in all the terminals, the different task assignment rule to assign the testing task in the testing task list to each terminal comprises: calculating, in response to ascertaining that all the terminals are light-load terminals, a task quantity of the testing tasks in the testing task list and a terminal quantity of all the terminals; and assigning, in response to the task quantity being greater than the terminal quantity and a remainder of the task quantity and the terminal quantity being not zero, M testing tasks in the testing task list to each terminal in preceding terminals of a value of the remainder in all the terminals, and assigning N testing tasks in the testing task list to each terminal except the preceding terminals of the value of the remainder in all the terminals, wherein M is a number obtained by adding 1 to a quotient of the task quantity and the terminal quantity, and N is the quotient of the task quantity and the terminal quantity (“calculating” and “assigning…” are treated as part of the abstract idea and are analogous to mental processes, i.e., concepts that can be performed in the human mind; further, the claim as a whole recites a mental process that can be performed in the human mind, including an observation, evaluation, judgment, or opinion).
With respect to dependent claim 6, the claim recites wherein adopting, based on proportions of the light-load terminal and the heavy-load terminal in all the terminals, the different task assignment rule to assign the testing task in the testing task list to each terminal further comprises: assigning, in response to the task quantity being less than or equal to the terminal quantity, one testing task in the testing task list to each terminal in preceding terminals of the task quantity in all the terminals (“assigning… one testing task in the testing task list to each terminal” is treated as part of the abstract idea and is analogous to a mental process, i.e., a concept that can be performed in the human mind; further, the claim as a whole recites a mental process that can be performed in the human mind, including an observation, evaluation, judgment, or opinion).
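The quantity-based rules recited in claims 5 and 6 reduce to ordinary quotient-and-remainder arithmetic. A minimal illustrative sketch (hypothetical names, not the applicant's implementation): when the task quantity exceeds the terminal quantity, the first remainder terminals each receive M = quotient + 1 tasks and the rest receive N = quotient; otherwise each of the first task-quantity terminals receives one task.

```python
def assign_by_quantity(tasks, terminals):
    """Illustrative sketch of the split recited in claims 5 and 6
    (hypothetical names). Claim 5: with task quantity > terminal
    quantity and a nonzero remainder, the preceding `remainder`
    terminals each get M = quotient + 1 tasks, the others get
    N = quotient. Claim 6: with task quantity <= terminal quantity,
    each of the preceding task-quantity terminals gets one task."""
    assignments = {t: [] for t in terminals}
    task_qty, term_qty = len(tasks), len(terminals)
    it = iter(tasks)
    if task_qty <= term_qty:
        # Claim 6: one testing task to each of the preceding terminals.
        for terminal in terminals[:task_qty]:
            assignments[terminal].append(next(it))
        return assignments
    quotient, remainder = divmod(task_qty, term_qty)
    for i, terminal in enumerate(terminals):
        # Claim 5: M = quotient + 1 for the first `remainder`
        # terminals, N = quotient for the remaining terminals.
        count = quotient + 1 if i < remainder else quotient
        for _ in range(count):
            assignments[terminal].append(next(it))
    return assignments
```

With 7 tasks and 3 terminals, divmod(7, 3) gives quotient 2 and remainder 1, so the first terminal receives M = 3 tasks and the other two receive N = 2 each, matching the recited M and N.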
With respect to dependent claim 7, the claim recites wherein adopting, based on proportions of the light-load terminal and the heavy-load terminal in all the terminals, the different task assignment rule to assign the testing task in the testing task list to each terminal comprises: assigning, in response to ascertaining that all the terminals are heavy-load terminals, a testing task in the testing task list to each terminal in real time (“assigning, in response to ascertaining that all the terminals are heavy-load terminals, a testing task in the testing task list to each terminal in real time” is treated as part of the abstract idea and is analogous to a mental process, i.e., a concept that can be performed in the human mind).
Dependent claims 9-14 recite the same features as applied to claims 2-7 respectively above, therefore they are also rejected under the same rationale.
Dependent claims 16-20 recite the same features as applied to claims 2-6 respectively above, therefore they are also rejected under the same rationale.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 8 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Anaya et al. (US Pub. 2013/0318221 A1) in view of Ahn et al. (US Pub. 2019/0294532 A1) and further in view of LEE et al. (US Pub. 2015/0199214 A1).
Ahn was cited in the IDS filed on 10/06/2022.
As per claim 1, Anaya teaches the invention substantially as claimed, including a method for assigning a task (Anaya, [0027] lines 1-4, units of work 106 are distributed to one or more of the sites through the one or more workload distribution modules), comprising:
receiving a task list based on a received task request, the task list comprising at least one task (Anaya, Fig. 5, 202 receive a unit of work; [0020] lines 3-6, A unit of work is one or more transactions and/or processes performed as a group to service one or more requests);
acquiring a currently assigned task of at least one terminal (Anaya, Fig. 2, 212 and 220 workloads, 210 site one and 218 site two (as the at least one terminal); [0029] lines 6-12, the workload distribution module 204 collects metrics from each of site one 210 and site two 218. The metrics collected for each of the workloads include, but are not limited to, processor speed, pending transactions, transaction execution time, system availability, network bandwidth utilization and availability, replication latency, and any other performance-based metrics as is known in the art); and
adopting, in response to calculating load information of the currently assigned task executed by all terminals, a different task assignment rule based on the load information to assign a task in the task list (Anaya, [0029] lines 6-20, The workload distribution module 204 collects metrics from each of site one 210 and site two 218. The metrics collected for each of the workloads include, but are not limited to, processor speed, pending transactions, transaction execution time, system availability, network bandwidth utilization and availability, replication latency, and any other performance-based metrics as is known in the art. In an embodiment, the workload distribution module 204 uses the metrics in order to distribute one or more units of work 208 for one or more workloads to site one 210 and site two 218. Two or more workloads may each execute on a separate site and be replicated to other sites; [0041] lines 7-23, At block 504 a site that supports the workload is chosen to process the unit of work data. The site is selected based on one or more workload distribution rules for the workload associated with the unit of work. The workload distribution rules are set by users of the system and are based on which configuration has been chosen for executing the workloads as will be described in more detail below. At block 506, it is determined if the site is capable of processing the unit of work data. The determination is made based on one or more user configurable settings and information about the performance and service level agreements (SLA) of the target workload. 
At block 508, the unit of work data is then transmitted to the site based on the user configurable settings, performance and SLA data for the workload, and the specific workload distribution configuration as described; [0081] lines 15-16, the SLA objectives include one or more of the current transaction processing time at each of the sites, the available processor capacity, the replication latency, and the available network capacity; [0055] lines 1-4, The active/query configuration provides both a low RTO, and the ability to balance query workload transactions across two or more sites; (as adopting, in response to calculating load information of the currently assigned task executed by all terminals (i.e., utilization, a site that is capable of performing), a different task assignment rule (i.e., the site is selected based on one or more workload distribution rules for the workload associated with the unit of work) based on the load information to assign a task in the task list (i.e., unit of work/transactions), [0050] lines 1-12, a new workload distribution rule is determined. The workload distribution rule indicates where workloads should be transmitted. In an embodiment, once the workload distribution module 204 determines the site is down, the workload distribution module 204 prompts an operator for a new workload distribution rule before transmitting any workloads to a standby site. In an additional embodiment, the workload distribution module 204 is configured to automatically determine or generate the new workload distribution rule for each workload. In yet another embodiment, the workload distribution module 204 is configured to prompt an operator, and if no response is received during a configurable period of time, the workload distribution module 204 will automatically determine or generate a new workload distribution rule automatically);
the load information comprising an estimated time consumption of the currently assigned task executed by respective terminals (Anaya, [0029] lines 6-20, The workload distribution module 204 collects metrics from each of site one 210 and site two 218. The metrics collected for each of the workloads include, but are not limited to, processor speed, pending transactions, transaction execution time, system availability, network bandwidth utilization and availability, replication latency, and any other performance-based metrics as is known in the art. In an embodiment, the workload distribution module 204 uses the metrics in order to distribute one or more units of work 208 for one or more workloads to site one 210 and site two 218. Two or more workloads may each execute on a separate site and be replicated to other sites; [0081] lines 15-16, the SLA objectives include one or more of the current transaction processing time at each of the sites, the available processor capacity, the replication latency, and the available network capacity).
Anaya fails to specifically teach that the task list is a testing task list and the task is a testing task, i.e., generating a testing task list based on a received testing task request, the testing task list comprising at least one testing task, and assigning a testing task in the testing task list to each terminal.
However, Ahn teaches that the task list is a testing task list and the task is a testing task, i.e., generating a testing task list based on a received testing task request, the testing task list comprising at least one testing task, and assigning a testing task in the testing task list to each terminal (Ahn, [0061] lines 4-13, test automation system 20 may be realized as a call back function, etc. that are called according to user inputs and may automatically transmit signals to a server according to the user inputs. User inputs may be collected for respective groups corresponding to options included in the registered tests. The test automation system 20 receives the test implementation commands through the test definer 210 and registers first tests corresponding to the test implementation commands; [0062] lines 1-3, The test generator 220 plans and generates the first tests based on data obtained through the test definer 210; also see Fig. 7, tests performed on each of the first user terminal and second user terminal).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Anaya with Ahn because Ahn’s teaching of generating a testing task list including at least one testing task that is sent to each user terminal for testing would have provided Anaya’s system with the advantage and capability of easily testing each of the user terminals based on the generated testing list corresponding to different user interfaces, thereby improving system performance and efficiency (see Ahn, Fig. 7).
Anaya and Ahn fail to specifically teach the load information comprising an amount of generated data of the currently assigned task.
However, LEE teaches the load information comprising an amount of generated data of the currently assigned task (LEE, Fig. 2, 304 and 305 output data; [0047] The node 1 310 and the node 3 330 output output stream data 304 and 305, which are operation performing results; [0060] lines 1-10, resource monitoring unit 120 collects the input load amount, the output load amount, and the data processing performance information for each of the tasks 421, 422, and 423, information on a resource use state/resource use state information for each node, the types and the number of the installed performance accelerators, and the resource use state information of each performance accelerator, at a predetermined cycle through the task execution devices 200-1, 200-2, and 200-3 illustrated in FIG. 3, thereby constructing the task reassignment information of the service).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Anaya and Ahn with LEE because LEE’s teaching of determining the output load amount would have provided Anaya and Ahn’s system with the advantage and capability of determining the reassignment plan based on the output load amount, thereby improving resource utilization and system efficiency (see LEE, [0060] “thereby constructing the task reassignment information of the service”).
As per claim 8, it is the electronic device claim corresponding to claim 1 above and is therefore rejected for the same reasons as claim 1. In addition, Anaya further teaches at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory is configured to store a plurality of instructions executable by the at least one processor to enable the at least one processor to perform operations comprising (Anaya, claim 15, A computer program product for maintaining continuous availability, the computer program product comprising: a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising).
As per claim 15, it is the non-transitory computer readable storage medium claim corresponding to claim 1 above and is therefore rejected for the same reasons as claim 1.
Claims 2, 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Anaya, Ahn and LEE, as applied to claims 1, 8 and 15 respectively above, and further in view of Beeman (US Pub. 2014/0172183 A1) and Gross et al. (US Pub. 2013/0070285 A1).
As per claim 2, Anaya, Ahn and LEE teach the invention according to claim 1 above. Anaya further teaches assigning, in response to being unable to calculate affinity, a task in the task list in real time (Anaya, Claim 7, inspect the unit of work data to determine a value of one or more data items in the unit of work data; determine, based on the value of the one or more data items in the unit of work data, if the unit of work data has affinity to one or more previously received units of work data; and route the unit of work data to a same workload that the one or more previously received units of work data were transmitted to, responsive to determining that the unit of work data has affinity to one or more previously received units of work data; [0029] the workload distribution module 204 collects metrics from each of site one 210 and site two 218. The metrics collected for each of the workloads include, but are not limited to, processor speed, pending transactions, transaction execution time, system availability, network bandwidth utilization and availability, replication latency, and any other performance-based metrics as is known in the art; [0082] lines 2-10, monitors SLA metrics of the various workloads, hardware and software at all of the sites and transmits that data to the workload distribution module 204. The workload distribution module 204 uses these metrics to determine the current SLA metrics at each site. Returning to FIG. 9, at block 912, the SLA metrics received by the workload distribution module 204 are used to route workload transactions (as assigning in real time in response to being unable to calculate affinity, i.e., based on the collected metrics data in real time)).
In addition, Ahn teaches assigning a testing task in the testing task list to each terminal (Ahn, [0062] lines 1-3, The test generator 220 plans and generates the first tests based on data obtained through the test definer 210; also see Fig. 7, tests performed on each of the first user terminal and second user terminal).
Further, LEE teaches calculating the load information of the currently assigned task executed by all the terminals (LEE, Fig. 2, 304 and 305 output data; [0047] The node 1 310 and the node 3 330 output output stream data 304 and 305, which are operation performing results; [0060] lines 1-10, resource monitoring unit 120 collects the input load amount, the output load amount, and the data processing performance information for each of the tasks 421, 422, and 423, information on a resource use state/resource use state information for each node, the types and the number of the installed performance accelerators, and the resource use state information of each performance accelerator, at a predetermined cycle through the task execution devices 200-1, 200-2, and 200-3 illustrated in FIG. 3, thereby constructing the task reassignment information of the service).
Anaya, Ahn and LEE fail to specifically teach that the assigning in real time is performed in response to being unable to calculate the load information and in response to the testing tasks in the testing task list being greater in number than a preset quantity.
However, Beeman teaches assigning in real time in response to being unable to calculate the load information (Beeman, [0066] lines 1-6, determines whether sufficient historical data is available to forecast the electricity consumption of electrical load 132. Hereinafter, forecasts of electricity consumption of electrical load 132 will be referred to as the load forecast; [0069] lines 1-6, sends a signal to communications application program 414 for notifying computer 150 that load forecast cannot be determined. In this case, the PA 100 runs a default schedule).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Anaya, Ahn and LEE with Beeman because Beeman’s teaching of applying a default schedule when the load information cannot be determined would have provided Anaya, Ahn and LEE’s system with the advantage and capability of applying task scheduling in real time when there is no prior historical data for forecasting the load, thereby improving system performance and efficiency.
Anaya, Ahn, LEE and Beeman fail to specifically teach assigning when the testing tasks in the testing task list are greater in number than a preset quantity.
However, Gross teaches assigning when the testing tasks in the testing task list are greater in number than a preset quantity (Gross, Fig. 3, 315; [0031] lines 3-20, if the plurality of jobs are to be assigned to four machines, three job threshold parameters, such as 305, 310 and 315, may be determined. In general, the number of job threshold parameters may be one less than the number of machines to which the plurality of jobs may be assigned. Machines may be assigned to one of a plurality of job sections. For example, in the embodiment shown in FIG. 3, a first machine may be assigned to a portion of the plurality of jobs having ordered job numbers less than or equal to a first job threshold parameter 305; a second machine may be assigned to a portion of the plurality of jobs having ordered job numbers greater than the first job threshold parameter 305 and less than or equal to a second job threshold parameter 310; a third machine may be assigned to a portion of the plurality of jobs having ordered job numbers greater than the second job threshold parameter 310 and less than or equal to a third job threshold parameter 315; and a fourth machine may be assigned to a portion of the plurality of jobs having ordered job numbers greater than the third job threshold parameter 315 (as assigning when the testing tasks in the testing task list are greater than a preset quantity, i.e., tasks are assigned to one of four machines based on job numbers relative to thresholds such as 315)).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Anaya, Ahn, LEE and Beeman with Gross because Gross’s teaching of assigning tasks to each machine based on the task number being greater than a threshold would have provided Anaya, Ahn, LEE and Beeman’s system with the advantage and capability of easily determining, based on the threshold, when the tasks should be assigned to all of the terminals, thereby improving resource utilization and system performance.
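The Gross threshold scheme quoted above partitions jobs by comparing ordered job numbers against n−1 ordered thresholds for n machines. A brief illustrative sketch (the function name is hypothetical, not from Gross):

```python
import bisect

def partition_by_thresholds(job_numbers, thresholds):
    """Sketch of the threshold scheme quoted from Gross: with four
    machines there are three ordered thresholds (e.g., 305, 310, 315).
    A job with ordered number <= the first threshold goes to the first
    machine, one between the first and second thresholds goes to the
    second machine, and so on; numbers above the last threshold go to
    the final machine."""
    machines = [[] for _ in range(len(thresholds) + 1)]
    for job in job_numbers:
        # bisect_left yields the index of the first threshold >= job,
        # i.e., the machine whose job section the number falls into.
        machines[bisect.bisect_left(thresholds, job)].append(job)
    return machines
```

For example, with thresholds (305, 310, 315), a job numbered 306 falls in the second machine’s section and a job numbered 400 in the fourth machine’s section.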
As per claim 9, it is the electronic device claim corresponding to claim 2 above and is therefore rejected for the same reasons as claim 2.
As per claim 16, it is the non-transitory computer readable storage medium claim corresponding to claim 2 above and is therefore rejected for the same reasons as claim 2.
Claims 3, 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Anaya, Ahn and LEE, as applied to claims 1, 8 and 15 respectively above, and further in view of Liu et al. (US Pub. 2014/0165119 A1).
As per claim 3, Anaya, Ahn and LEE teach the invention according to claim 1 above. Anaya teaches wherein adopting the different task assignment rule based on the load information to assign the task in the task list ([0029] lines 6-20, The workload distribution module 204 collects metrics from each of site one 210 and site two 218. The metrics collected for each of the workloads include, but are not limited to, processor speed, pending transactions, transaction execution time, system availability, network bandwidth utilization and availability, replication latency, and any other performance-based metrics as is known in the art. In an embodiment, the workload distribution module 204 uses the metrics in order to distribute one or more units of work 208 for one or more workloads to site one 210 and site two 218. Two or more workloads may each execute on a separate site and be replicated to other sites; [0041] lines 7-23, At block 504 a site that supports the workload is chosen to process the unit of work data. The site is selected based on one or more workload distribution rules for the workload associated with the unit of work. The workload distribution rules are set by users of the system and are based on which configuration has been chosen for executing the workloads as will be described in more detail below. At block 506, it is determined if the site is capable of processing the unit of work data. The determination is made based on one or more user configurable settings and information about the performance and service level agreements (SLA) of the target workload. 
At block 508, the unit of work data is then transmitted to the site based on the user configurable settings, performance and SLA data for the workload, and the specific workload distribution configuration as described; [0081] lines 15-16, the SLA objectives include one or more of the current transaction processing time at each of the sites, the available processor capacity, the replication latency, and the available network capacity; [0055] lines 1-4, The active/query configuration provides both a low RTO, and the ability to balance query workload transactions across two or more sites).
In addition, Ahn teaches that the task list is a testing task list, the task is a testing task, and assigning the testing task in the testing task list to each terminal (Ahn, [0061] lines 4-13, test automation system 20 may be realized as a call back function, etc. that are called according to user inputs and may automatically transmit signals to a server according to the user inputs. User inputs may be collected for respective groups corresponding to options included in the registered tests. The test automation system 20 receives the test implementation commands through the test definer 210 and registers first tests corresponding to the test implementation commands; [0062] lines 1-3, The test generator 220 plans and generates the first tests based on data obtained through the test definer 210; also see Fig. 7, tests performed on each of the first user terminal and second user terminal).
Anaya, Ahn and LEE fail to specifically teach ascertaining a light-load terminal and a heavy-load terminal in the at least one terminal based on the load information; and adopting, based on proportions of the light-load terminal and the heavy-load terminal in all the terminals, the different task assignment rule to assign the testing task in the testing task list to each terminal.
However, Liu teaches ascertaining a light-load terminal and a heavy-load terminal in the at least one terminal based on the load information; and adopting, based on proportions of the light-load terminal and the heavy-load terminal in all the terminals, the different task assignment rule to assign the testing task in the testing task list to each terminal (Liu, [0209] The offline download system 100 includes a distribution server 120, each download server 112 feeds back its load information to the distribution server 120, and the distribution server 120 generates a scheduling rule according to actual load information of the download server 112 and allocates the offline task according to the scheduling rule. To a heavy-loaded download server 112, few or no task is distributed; to a light-loaded download server 112, more tasks are distributed. Therefore, the tasks distributed to a download server 112 depend on the extent of its real-time load, thereby improving utilization of the download servers 112 and making full use of disk spaces; please note: assigning a testing task to each terminal is taught by Ahn).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Anaya, Ahn and LEE with Liu because Liu's teaching of ascertaining the light-load and heavy-load terminals, and assigning proportions of the tasks to each of the servers accordingly, would have provided Anaya, Ahn and LEE's system with the advantage and capability of ensuring load balancing between the servers/terminals, thereby improving resource utilization and system performance.
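For illustration only (this sketch forms no part of the record, and all function and variable names are assumed rather than drawn from Liu), the load-based distribution described above, in which light-loaded servers receive more tasks and heavy-loaded servers receive few or none, could be sketched as:

```python
def distribute_tasks(tasks, loads, threshold=0.7):
    """Hypothetical sketch of Liu-style distribution ([0209]):
    servers whose load is below `threshold` are treated as
    light-loaded and receive tasks in proportion to their spare
    capacity; heavy-loaded servers receive none."""
    light = [s for s, load in loads.items() if load < threshold]
    assignment = {s: [] for s in loads}
    if not light:
        return assignment  # all servers heavy-loaded: distribute nothing
    spare = {s: 1.0 - loads[s] for s in light}  # spare-capacity weights
    total = sum(spare.values())
    i = 0
    for s in light:
        share = round(len(tasks) * spare[s] / total)
        assignment[s] = tasks[i:i + share]
        i += share
    if i < len(tasks):  # rounding leftovers go to the lightest server
        lightest = min(light, key=loads.get)
        assignment[lightest].extend(tasks[i:])
    return assignment
```

Under this sketch, a server reporting 90% load receives no tasks, while the full task list is split among the remaining servers in proportion to their spare capacity.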
As per claim 10, it is an electronic device claim of claim 3 above. Therefore, it is rejected for the same reason as claim 3 above.
As per claim 17, it is a non-transitory computer readable storage medium claim of claim 3 above. Therefore, it is rejected for the same reason as claim 3 above.
Claims 4, 7, 11, 14 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Anaya, Ahn, LEE and Liu, as applied to claims 3, 10 and 17 respectively above, and further in view of Corum et al. (US Pub. 2009/0172693 A1).
As per claim 4, Anaya, Ahn, LEE and Liu teach the invention according to claim 3 above. Anaya, Ahn, LEE and Liu fail to specifically teach assigning, in response to the at least one terminal comprising a light-load terminal and a heavy-load terminal, a part of testing tasks in the test task list to the light-load terminal in sequence until load information of the light-load terminal is identical to load information of the heavy-load terminal; and assigning, in response to ascertaining that there is a remaining testing task in the testing task list, the remaining testing task to all the terminals in real time.
However, Corum teaches assigning, in response to the at least one terminal comprising a light-load terminal and a heavy-load terminal, a part of testing tasks in the test task list to the light-load terminal in sequence until load information of the light-load terminal is identical to load information of the heavy-load terminal (Corum, Fig. 4, 402, 404, 406 load information; [0030] The logarithmic load levels (either communicated by the processing entities or calculated by the controller) are stored (at 408) in the storage (216 in FIG. 2) of the controller. In response to a service request received (at 410), the controller determines whether a "transmission window" of any processing entity is full. In some embodiments, the controller uses a window flow control procedure to set a limit on the number of service requests (for assigning units of work) that can be sent to any particular processing entity. The window flow control procedure specifies a transmission window, which contains a number of service requests that have been submitted to and is currently pending at the corresponding processing entity. A service request is added to the transmission window, which becomes full when the maximum number of service requests has been reached. If the transmission window is not full, then the controller is able to send another service request to assign a unit of work to the corresponding processing entity. A "unit of work" refers to some amount of work that can be assigned to a processing entity; [0031] Processing entity(ies) with full transmission windows are removed from consideration (at 412) in selecting a target set of one or more processing entities. Next, the controller determines (at 414) whether the number of remaining processing entities (with non-full transmission windows) is greater than zero. 
If not, which means that the transmission windows of all processing entities are full; [0033] lines 1-6, A round robin selection algorithm selects processing entities in sequential order for work assignment, with the selection re-starting from the beginning once an end of the target set of processing entities has been reached (as assigning, in response to the at least one terminal comprising a light-load terminal and a heavy-load terminal, a part of testing tasks in the test task list to the light-load terminal in sequence until load information of the light-load terminal is identical (i.e., full, all entities are full) to load information of the heavy-load terminal); and
assigning, in response to ascertaining that there is a remaining testing task in the testing task list, the remaining testing task to all the terminals in real time (Corum, [0031] lines 3-13, Next, the controller determines (at 414) whether the number of remaining processing entities (with non-full transmission windows) is greater than zero. If not, which means that the transmission windows of all processing entities are full, the new service request received at 410 is stored (at 416) into a buffer of the controller if the buffer is not full (as remaining tasks); [0006] lines 1-10, performing load balancing across plural processing entities includes receiving load level indications from the plural processing entities, where the load level indications are representations based on applying a concave function on loadings of the plural processing entities. A processing entity is selected from among the plural processing entities to assign work according to the load level indications (as assigning, in response to ascertaining that there is a remaining testing task in the testing task list (remaining task stored in the buffer), the remaining testing task to all the terminals in real time (i.e., based on the load, Fig. 4); please note: assigning the testing task in the testing task list to all the terminals was taught by Ahn).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Anaya, Ahn, LEE and Liu with Corum because Corum's teaching of assigning tasks to a light-load processing entity until its load equals that of a heavy-load entity (i.e., full load) would have provided Anaya, Ahn, LEE and Liu's system with the advantage and capability of efficiently utilizing all the processing entities for processing the tasks, thereby improving system performance and resource utilization.
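For illustration only (this sketch forms no part of the record; the class and method names are assumed), the window flow control with round-robin selection quoted from Corum above, in which each processing entity has a transmission window with a maximum number of pending service requests and a request is buffered when every window is full, could be sketched as:

```python
class WindowScheduler:
    """Hypothetical sketch of Corum-style scheduling ([0030]-[0033]):
    assignment proceeds round-robin over processing entities whose
    transmission windows are not full; when all windows are full,
    the new service request is stored in the controller's buffer."""

    def __init__(self, entities, window_limit):
        self.entities = list(entities)
        self.limit = window_limit
        self.windows = {e: [] for e in entities}  # pending requests per entity
        self.buffer = []       # holds requests when all windows are full
        self._cursor = 0       # round-robin position

    def assign(self, unit):
        candidates = [e for e in self.entities
                      if len(self.windows[e]) < self.limit]
        if not candidates:             # all transmission windows full
            self.buffer.append(unit)   # store request in controller buffer
            return None
        # Round-robin: scan from the cursor, wrapping at the end.
        for i in range(len(self.entities)):
            e = self.entities[(self._cursor + i) % len(self.entities)]
            if e in candidates:
                self._cursor = (self.entities.index(e) + 1) % len(self.entities)
                self.windows[e].append(unit)
                return e
```

With a window limit of one, two entities absorb the first two units in sequential order, and a third unit lands in the buffer, mirroring the "all entities full" branch at 414/416 of Corum's Fig. 4.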
As per claim 7, Anaya, Ahn, LEE and Liu teach the invention according to claim 3 above. Ahn teaches assigning a testing task in the testing task list to each terminal in real time (Ahn, [0061] lines 4-13, test automation system 20 may be realized as a call back function, etc. that are called according to user inputs and may automatically transmit signals to a server according to the user inputs. User inputs may be collected for respective groups corresponding to options included in the registered tests. The test automation system 20 receives the test implementation commands through the test definer 210 and registers first tests corresponding to the test implementation commands; [0062] lines 1-3, The test generator 220 plans and generates the first tests based on data obtained through the test definer 210; also see Fig. 7, tests performed on each of the first user terminal and second user terminal).
Anaya, Ahn, LEE and Liu fail to specifically teach that the assigning is performed in response to ascertaining that all the terminals are heavy-load terminals.
However, Corum teaches that the assigning is performed in response to ascertaining that all the terminals are heavy-load terminals (Corum, [0031] Processing entity(ies) with full transmission windows are removed from consideration (at 412) in selecting a target set of one or more processing entities. Next, the controller determines (at 414) whether the number of remaining processing entities (with non-full transmission windows) is greater than zero. If not, which means that the transmission windows of all processing entities are full, the new service request received at 410 is stored (at 416) into a buffer of the controller if the buffer is not full. If the buffer is full, then the new service request is discarded by the controller; [0035] the loading of a processing entity can be any one of the following: the number of sessions being handled by the processing entity; the amount of total workload handled by the processing entity; the percentage of total capacity of the processing entity consumed; the speed at which data is being processed by the processing entity; and so forth. In the ensuing discussion, it is assumed that processing entity loading is a number of sessions; [0034] the procedure of FIG. 4 is repeated for assigning additional units of work to the processing entities. Tasks 402, 404, and 406 are continually (repeatedly or intermittently) performed to update the load information of the processing entities PE1, PE2, . . . , Pem (as assigning, in response to ascertaining that all the terminals are heavy-load terminals (i.e., all are full), then assigning in real time (i.e., based on real-time load information)).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Anaya, Ahn, LEE and Liu with Corum because Corum's teaching of assigning tasks in real time based on real-time load information would have provided Anaya, Ahn, LEE and Liu's system with the advantage and capability of continually monitoring the load in real time, thereby improving system performance and resource utilization.
As per claims 11 and 14, they are electronic device claims of claims 4 and 7 above. Therefore, they are rejected for the same reasons as claims 4 and 7 above.
As per claim 18, it is a non-transitory computer readable storage medium claim of claim 4 above. Therefore, it is rejected for the same reason as claim 4 above.
Allowable Subject Matter
Claims 5-6, 12-13 and 19-20 are objected to as being dependent upon a rejected base claim, but would be allowable if the claims overcome the rejections under 35 U.S.C. 101 and are rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Reasons for Allowable Subject Matter:
The closest prior art of record, Anaya et al. (US Pub. 2013/0318221 A1), teaches a task scheduling system that determines different scheduling rules based on load information for purposes of load balancing and target performance (see Anaya, [0029] lines 6-20, The workload distribution module 204 collects metrics from each of site one 210 and site two 218. The metrics collected for each of the workloads include, but are not limited to, processor speed, pending transactions, transaction execution time, system availability, network bandwidth utilization and availability, replication latency, and any other performance-based metrics as is known in the art. In an embodiment, the workload distribution module 204 uses the metrics in order to distribute one or more units of work 208 for one or more workloads to site one 210 and site two 218. Two or more workloads may each execute on a separate site and be replicated to other sites; [0041] lines 7-23; [0081] and [0055]).
Ahn et al. (US Pub. 2019/0294532 A1) teaches a testing task system that distributes the testing tasks to each user terminal (see Ahn, [0061] lines 4-13; [0062] lines 1-3, The test generator 220 plans and generates the first tests based on data obtained through the test definer 210; also see Fig. 7, tests performed on each of the first user terminal and second user terminal).
LEE et al. (US Pub. 2015/0199214 A1) teaches a load determination mechanism that determines the output load amount and re-schedules the tasks based on the output load amount (see LEE, Fig. 2, 304 and 305 output data; [0047]; [0060] lines 1-10, resource monitoring unit 120 collects the input load amount, the output load amount, and the data processing performance information for each of the tasks 421, 422, and 423, information on a resource use state/resource use state information for each node, the types and the number of the installed performance accelerators, and the resource use state information of each performance accelerator, at a predetermined cycle through the task execution devices 200-1, 200-2, and 200-3 illustrated in FIG. 3, thereby constructing the task reassignment information of the service).
Corum et al. (US Pub. 2009/0172693 A1) teaches assigning tasks to light processing entities until the light processing entities are full (see Corum, [0031] lines 3-13, Next, the controller determines (at 414) whether the number of remaining processing entities (with non-full transmission windows) is greater than zero. If not, which means that the transmission windows of all processing entities are full, the new service request received at 410 is stored (at 416) into a buffer of the controller if the buffer is not full (as remaining tasks); [0006] lines 1-10).
Heizer (US Patent 5,249,290) teaches a process assignment mechanism that determines the maximum number of server processes (M) that can be started by dividing N by HW and rounding up to the next higher integer (see Heizer, Col. 4, lines 14-30; claim 8; claim 16, responsive to a server apparatus determined total number of client service requests, for accessing said table means to select in which range said total number of client service requests lies and thus determines, for the selected range, the number of client service requests or workload that can be assigned to each server process).
The feature “wherein adopting, based on proportions of the light-load terminal and the heavy-load terminal in all the terminals, the different task assignment rule to assign the testing task in the testing task list to each terminal comprises: calculating, in response to ascertaining that all the terminals are light-load terminals, a task quantity of the testing tasks in the testing task list and a terminal quantity of all the terminals; and assigning, in response to the task quantity being greater than the terminal quantity and a remainder of the task quantity and the terminal quantity being not zero, M testing tasks in the testing task list to each terminal in preceding terminals of a value of the remainder in all the terminals, and assigning N testing tasks in the testing task list to each terminal except the preceding terminals of the value of the remainder in all the terminals, wherein M is a number obtained by adding 1 to a quotient of the task quantity and the terminal quantity, and N is the quotient of the task quantity and the terminal quantity; wherein adopting, based on proportions of the light-load terminal and the heavy-load terminal in all the terminals, the different task assignment rule to assign the testing task in the testing task list to each terminal further comprises: assigning, in response to the task quantity being less than or equal to the terminal quantity, one testing task in the testing task list to each terminal in preceding terminals of the task quantity in all the terminals”, when taken in the context of the claims as a whole, was not found in the prior art teachings.
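For illustration only (this sketch forms no part of the claims or the record; all names are assumed), the quotient-and-remainder distribution arithmetic recited in the feature above can be expressed as:

```python
def assign_light_load(task_count, terminal_count):
    """Illustrative sketch of the recited distribution when all
    terminals are light-load terminals: returns, in terminal order,
    the number of testing tasks assigned to each terminal."""
    if task_count <= terminal_count:
        # One task to each of the preceding `task_count` terminals.
        return [1] * task_count + [0] * (terminal_count - task_count)
    quotient, remainder = divmod(task_count, terminal_count)
    if remainder != 0:
        # Preceding `remainder` terminals each receive M = quotient + 1
        # tasks; the remaining terminals each receive N = quotient tasks.
        return ([quotient + 1] * remainder
                + [quotient] * (terminal_count - remainder))
    return [quotient] * terminal_count
```

For example, seven testing tasks across three terminals yield a quotient of 2 and a remainder of 1, so the first terminal receives M = 3 tasks and the other two receive N = 2 tasks each.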
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZUJIA XU whose telephone number is (571)272-0954. The examiner can normally be reached M-F 9:30-5:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee J Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZUJIA XU/Examiner, Art Unit 2195