Prosecution Insights
Last updated: April 19, 2026
Application No. 18/227,888

TASK ALLOCATION METHOD, APPARATUS, STORAGE MEDIUM, AND ELECTRONIC DEVICE

Office Action: Non-Final (§103, §112)
Filed: Jul 28, 2023
Examiner: CAO, DIEM K
Art Unit: 2196
Tech Center: 2100 — Computer Architecture & Software
Assignee: Zhejiang Dahua Technology Co. Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 7m
Grant Probability with Interview: 99%
Examiner Intelligence

Career Allow Rate: 80% (above average); 531 granted / 663 resolved; +25.1% vs TC avg
Interview Lift: +19.4% (strong), measured on resolved cases with an interview
Typical Timeline: 3y 7m average prosecution; 29 applications currently pending
Career History: 692 total applications across all art units
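The headline examiner figures above follow directly from the raw counts; a quick arithmetic check (not part of the report):

```python
# Reproduce the examiner statistics quoted above from the raw counts.
granted, resolved = 531, 663
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ≈ 80.1%

# The "+25.1% vs TC avg" figure implies a TC 2100 average of roughly
# 80.1% - 25.1% = 55.0% (assuming a simple percentage-point delta).
tc_avg = allow_rate - 0.251
print(f"Implied TC average: {tc_avg:.1%}")
```

The 80% headline figure is the rounded form of 531/663 ≈ 80.1%.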

Statute-Specific Performance

  §101: 10.6%  (-29.4% vs TC avg)
  §103: 46.7%  (+6.7% vs TC avg)
  §102: 14.5%  (-25.5% vs TC avg)
  §112: 20.5%  (-19.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 663 resolved cases.
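Assuming the "vs TC avg" figures are simple percentage-point deltas (an assumption; the report does not state how they are computed), the implied Tech Center averages can be backed out:

```python
# Back out the implied Tech Center average per statute from the deltas
# quoted above. Assumes deltas are percentage-point differences.
examiner = {"101": 10.6, "103": 46.7, "102": 14.5, "112": 20.5}
delta    = {"101": -29.4, "103": 6.7, "102": -25.5, "112": -19.5}
for statute in examiner:
    tc_avg = examiner[statute] - delta[statute]
    print(f"§{statute}: examiner {examiner[statute]}% vs TC avg ~{tc_avg:.1f}%")
```

Notably, under this reading every statute's implied TC average comes out to the same 40.0%, suggesting a single per-TC baseline rather than per-statute averages.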

Office Action (§103, §112)
DETAILED ACTION

Claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Objections

Claims 7-10 and 16-17 are objected to because of the following informalities: in claim 7, at line 9, the clause after "element matrix" should end with ";", not ".;". Claim 16 suffers the same problem as claim 7. Claims 8-10 and 17 are also objected to for depending on objected claims. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 7-10 and 16-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 7 recites the term "screening"; it is unclear what activity is specifically performed by "screening" and how it relates to the other features and steps of claim 7.
In the specification, paragraphs [0064] and [0116] disclose "At block S506: performing screening according to the resource information and the attribute information, and generating the plurality of scheduling paths according to the affinity parameter and the current information element matrix" and "a first generating unit, configured to perform screening according to the resource information and the attribute information, and generate the plurality of scheduling paths according to the affinity parameter and the current information element matrix;". These passages describe the term; however, they do not provide a sufficient basis for a clear interpretation of it.

Claim 7 also recites the terms "an affinity parameter" and "a current information element matrix", which are likewise unclear as to what elements and/or relationships they refer to. In the specification, paragraph [0052] discloses "In some embodiments, each task element corresponds to an information element matrix, and each task element and corresponding node device corresponds to an information element matrix. The number of information element values included in the information element matrix may be increased with the update, and the information element values obtained during each update are written into the information element matrix. The information element matrix contains the information element values of previous updates corresponding to the task element and the corresponding node device.", and paragraph [0068] discloses "In some embodiments, the affinity parameter includes a node device affinity, a task element affinity, and a task element anti-affinity. The affinity parameter may be a parameter determined according to an affinity tag. It is determined whether the task element has affinity with the node device according to the affinity tag included in the node device.". The above passages can be used to clarify the terms' definitions. Therefore, claim 7 is indefinite.
Claim 16 suffers the same problems as claim 7 above and is therefore also indefinite. Claims 8-10 and 17 depend on claims 7 and 16 but fail to cure their deficiencies, and therefore are also indefinite.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (CN 111858029 A, cited in the IDS) in view of Yin (CN 108654089 A).
As to claim 1, Zhang teaches a task allocation method, comprising (claim 1: "Storm cluster load balancing based on discrete particle swarm is characterized in that it includes the following steps"):

initializing allocation parameters, wherein the allocation parameters comprise the number of node devices in a target device cluster and the number of to-be-allocated task elements ("Get the number of working nodes s of the Storm cluster in the running state and the number of tasks t to be allocated"; paragraph [0007] and claim 1);

performing simulated allocation on the to-be-allocated task elements to each node device for processing to generate a candidate scheduling matrix, wherein the candidate scheduling matrix is configured to indicate a simulated allocation result ("initialize the particle swarm to obtain multiple different task allocation methods for each task assigned to the location of the working node"; initializing a particle swarm corresponds to generating a candidate scheduling matrix; paragraph [0008] and claim 1; and "initializing particle velocity and particle position [..] initializing particle position is: using a matrix random generation method to generate m*t matrix rand, and the range of each element in the matrix is 1, 2, 3, ..., s, where m is the population size of the particle swarm, and each row represents a task allocation method"; claim 2; "allocation result" corresponds to "simulated allocation result");

obtaining a load balancing parameter corresponding to the candidate scheduling matrix ("Use each task allocation method in the particle swarm as its own historical best task allocation method Pbest, calculate the fitness value of each task allocation method, and select the task allocation method with the smallest fitness value from the particle swarm as the global historical best task allocation method Gbest"; paragraph [0009] and claim 1; "load balancing parameter" corresponds to "fitness value", which is determined for candidate matrices in each iteration), and determining a target scheduling matrix according to the load balancing parameter ("Update each task allocation method according to the preset iterative formula; calculate the fitness value of each task allocation method, and select the task allocation method with the smallest fitness value from the particle swarm as the global historical best task allocation method Gbest" and "Run the Storm cluster according to the global historical best task allocation method Gbest"; claim 1 and paragraphs [0010]-[0013]); and

in response to the target scheduling matrix satisfying an allocation condition, allocating the to-be-allocated task elements to the node devices according to the target scheduling matrix ("Repeat steps (14) to (15) until the number of iterations in the iteration formula reaches the preset maximum number of iterations to obtain the global historical best task allocation method Gbest" and "Run the Storm cluster according to the global historical best task allocation method Gbest"; claim 1 and paragraphs [0012]-[0013]).

Zhang does not teach allocating tasks to the idle nodes.
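The particle-position initialization quoted from Zhang amounts to generating an m×t matrix of random node indices, each row being one candidate task allocation. A minimal Python sketch of that step (function and variable names are illustrative, not from the reference):

```python
import random

def init_particle_positions(m, t, s, seed=None):
    """Zhang's quoted 'matrix rand' step: generate m candidate task
    allocation methods, each row assigning t tasks to nodes 1..s."""
    rng = random.Random(seed)
    return [[rng.randint(1, s) for _ in range(t)] for _ in range(m)]

# Example: a swarm of 4 particles allocating 6 tasks over 3 worker nodes.
swarm = init_particle_positions(m=4, t=6, s=3, seed=42)
for row in swarm:
    print(row)  # each row is one simulated allocation
```

Each row of `swarm` corresponds to what the claim mapping calls a candidate scheduling matrix entry: task j of row i runs on node `swarm[i][j]`.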
Zhang teaches the optimization method to achieve load balancing and optimal utilization by taking all the nodes into account for the allocation, based on the given optimization goal, i.e., the fitness function. However, Yin teaches allocating tasks to idle nodes/servers ("the multiple single task allocated to a plurality of proxy server is in an idle state"; abstract and page 3). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Yin to the system of Zhang because Yin teaches a testing method that improves the efficiency of the difficulty level evaluation and improves system stability (abstract), which, when applied to the system of Zhang, would achieve load balancing of tasks among the servers/devices and improve system stability.

As to claim 2, Zhang as modified by Yin teaches the method according to claim 1, wherein the determining the target scheduling matrix according to the load balancing parameter comprises: in response to the number of current update times of the candidate scheduling matrix being less than a preset maximum number of update times, updating the candidate scheduling matrix according to the load balancing parameter; and in response to the number of current update times of the candidate scheduling matrix being equal to the preset maximum number of update times, taking the candidate scheduling matrix as the target scheduling matrix (see Zhang: wherein in steps (13) and (15), the fitness value is calculated by: (21) running a Storm cluster according to a task assignment method, obtaining a plurality of performance indicators for each worker node of the Storm cluster; (22) measuring the degree of load balancing with the degree of load dispersion F of all worker nodes of the Storm cluster; (23) taking the degree of dispersion F as the fitness function of the discrete particle population.
Repeating steps (14) to (15) until the number of iterations in the iteration formula reaches a preset maximum number of iterations obtains a global historical optimal task assignment method Gbest. After obtaining each particle's optimal solution Pbest and the global optimal value Gbest in the iteration loop, it is determined whether the maximum number of iterations has been reached; if so, the iterations are stopped, otherwise the velocity and position update iteration process is continued; an optimal method for assigning the t tasks to the s worker nodes is obtained based on the global optimum Gbest after the iteration stops (equivalent to updating the candidate scheduling matrix according to the load balancing parameters while the candidate scheduling matrix has not reached the number of updates, and taking the candidate scheduling matrix as the target scheduling matrix once it has been updated the set number of times)).

As to claim 3, Zhang as modified by Yin teaches the method according to claim 2, wherein the updating the candidate scheduling matrix according to the load balancing parameter comprises: obtaining a t-th load balancing parameter and a t-th information element matrix corresponding to the candidate scheduling matrix generated by a t-th round of allocation processing, wherein an information element matrix is configured to indicate an allocation strategy, t is an integer greater than or equal to 1 and less than or equal to T, and T is the number of update times; determining a (t+1)-th information element matrix according to the t-th load balancing parameter and the t-th information element matrix; and performing a (t+1)-th round of allocation processing according to the (t+1)-th information element matrix to update the candidate scheduling matrix (see Zhang: step (14) comprises calculating the speed update formula for each task assignment method:

V(t+1) = V(t) + c1*r1*(Pbest(m) - x(t)) + c2*r2*(Gbest - x(t))

where V(t+1) denotes the particle velocity for the (t+1)-th generation, V(t) denotes the velocity of the t-th generation particle, c1 and c2 are learning factors, r1 and r2 are random numbers, and Pbest(m) denotes the Pbest value for the m-th task assignment method. Updating the task assignment method obtains the next-generation location based on the speed update formula:

x(t+1) = x(t) + V(t+1)

with rounding and bounds correction applied to the updated x(t+1), so that each element of x(t+1) takes an integer in the range [1, s] (equivalent to the t-th load balancing parameters and t-th pheromone matrix corresponding to the candidate scheduling matrix generated by the t-th round of assignment processing, wherein the pheromone matrix is used to indicate an assignment strategy, and t is an integer greater than or equal to 1 and less than or equal to T, T being the number of updates; determining a (t+1)-th pheromone matrix from the t-th load balancing parameters and the t-th pheromone matrix; and performing a (t+1)-th round of allocation processing in accordance with the (t+1)-th pheromone matrix to update the candidate scheduling matrices). As a further optimization of the above scheme, the performance metrics in step (21) include CPU occupancy of worker nodes, memory occupancy of worker nodes, network bandwidth occupancy, and load performance aware ratio).
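The quoted velocity and position updates can be sketched as follows. This is an illustrative reading of Zhang's formulas as reproduced in the Office Action (c1, c2, r1, r2 follow the quotation; the clamp implements the "rounding and bounds correction" so positions remain valid node indices):

```python
import random

def pso_step(x, v, pbest, gbest, s, c1=2.0, c2=2.0, rng=random.random):
    """One discrete-PSO update per the quoted formulas:
    V(t+1) = V(t) + c1*r1*(Pbest - x(t)) + c2*r2*(Gbest - x(t))
    x(t+1) = x(t) + V(t+1), then rounded and clamped to [1, s]."""
    r1, r2 = rng(), rng()
    v_next = [vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
              for xi, vi, pi, gi in zip(x, v, pbest, gbest)]
    # Rounding and bounds correction: each element stays an integer in [1, s].
    x_next = [min(max(round(xi + vn), 1), s) for xi, vn in zip(x, v_next)]
    return x_next, v_next

# One particle assigning 4 tasks over s=3 worker nodes.
x, v = [1, 3, 2, 1], [0.0, 0.0, 0.0, 0.0]
x2, _ = pso_step(x, v, pbest=[2, 3, 1, 1], gbest=[2, 2, 1, 3], s=3)
print(x2)  # each entry remains an integer node index in 1..3
```

The c1 = c2 = 2.0 defaults are a common PSO convention, not values taken from Zhang.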
As to claim 4, Zhang as modified by Yin teaches the method according to claim 1, wherein the allocating the to-be-allocated task elements to the idle node devices according to the target scheduling matrix comprises: performing simulated allocation on the to-be-allocated task elements to each idle node device for processing according to a plurality of scheduling paths to obtain a plurality of simulated scheduling matrices; and determining the candidate scheduling matrix from the plurality of simulated scheduling matrices (see Zhang: a Pbest and Gbest update module, for computing an updated fitness value for each task assignment method, comparing the fitness values before and after each task assignment method update, and assigning a task assignment method with a small fitness value to Pbest (equivalent to performing simulated allocation of the task elements to each of the node devices according to a plurality of scheduling paths, resulting in a plurality of simulated scheduling matrices); the task assignment method with the smallest fitness value is selected from the assigned plurality of Pbest to assign to Gbest (equivalent to determining the candidate scheduling matrix from the plurality of simulated scheduling matrices); paragraph [0008]).

As to claim 5, Zhang as modified by Yin teaches the method according to claim 4, wherein each scheduling path corresponds to an allocation method, and a number of the plurality of scheduling paths is set in initial allocation parameters (see Zhang: this follows from the particle swarm matrix, iteratively updated to arrive at an optimal solution; see the rejection of claim 2 above).
As to claim 6, Zhang as modified by Yin teaches the method according to claim 4, wherein a scheduling degree of each scheduling path is the same, and the scheduling degree is configured to indicate a number of the to-be-allocated task elements performed with the simulated allocation (see Zhang: this follows from the particle swarm matrix, iteratively updated to arrive at an optimal solution; see the rejection of claim 2 above).

As to claim 7, Zhang as modified by Yin teaches the method according to claim 4, wherein the performing simulated allocation on the to-be-allocated task elements to each idle node device for processing according to the plurality of scheduling paths to obtain the plurality of simulated scheduling matrices comprises: determining resource information included in the to-be-allocated task elements and attribute information of the idle node devices; obtaining an affinity parameter and a current information element matrix; performing screening according to the resource information and the attribute information, and generating the plurality of scheduling paths according to the affinity parameter and the current information element matrix; and generating the plurality of simulated scheduling matrices according to the plurality of scheduling paths (see Zhang: a fitness value is calculated by: (21) running a Storm cluster according to a task assignment method, obtaining a plurality of performance indicators for each worker node of the Storm cluster; the degree of load balancing is measured with the degree of load dispersion F of all worker nodes of the Storm cluster; the performance metrics in step (21) include CPU occupancy of the worker node, memory occupancy of the worker node, network bandwidth occupancy, and load performance aware ratio; and a Pbest and Gbest update module, for computing an updated fitness value for each task assignment method, comparing the fitness values before and after each task assignment method update, assigning the task assignment method with a small fitness value to Pbest, and selecting the task assignment method with the smallest fitness value from the assigned plurality of Pbest to assign to Gbest (equivalent to sequentially obtaining the load parameters of the simulated scheduling matrices and determining the simulated scheduling matrix with the smallest load parameter value as the candidate scheduling matrix); paragraphs [0008]-[0023]).

As to claim 8, Zhang as modified by Yin teaches the method according to claim 7, wherein the resource information comprises resource catalog information, CPU information, and memory information (see Zhang: the performance metrics in step (21) include CPU occupancy of worker nodes, memory occupancy of worker nodes, and network bandwidth occupancy; paragraph [0060] and claim 4; expanding these metrics to include resource catalog information is regarded as a straightforward possibility to one of ordinary skill in the art).

As to claim 9, Zhang as modified by Yin teaches the method according to claim 7, wherein the attribute information comprises information and label information configured to indicate a current resource utilization rate of each idle node device (see Zhang: bandwidth occupancy rates; see also the working status vector and load performance perception ratio in [0060], and the load dispersion degree in [0066], which are considered in evaluating the fitness function and determining iterative updates of the allocation schedule).
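Zhang's exact load-dispersion fitness F is not reproduced in the Office Action; a common stand-in with the same property the examiner relies on (smaller value means better balance) is the population standard deviation of per-node load. A sketch under that assumption:

```python
from statistics import pstdev

def load_dispersion(node_loads):
    """Illustrative fitness: dispersion (population std dev) of per-node
    load. Stands in for Zhang's F, whose exact formula is not quoted;
    smaller values indicate a more balanced allocation either way."""
    return pstdev(node_loads)

balanced   = [0.50, 0.52, 0.48, 0.50]   # loads of 4 worker nodes
unbalanced = [0.95, 0.10, 0.80, 0.15]
print(load_dispersion(balanced))    # small -> well balanced
print(load_dispersion(unbalanced))  # large -> poorly balanced
```

Selecting the candidate with the smallest dispersion mirrors the Pbest/Gbest update: the allocation with the minimum fitness value becomes the global best.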
As to claim 10, Zhang as modified by Yin teaches the method according to claim 7, wherein the generating the plurality of simulated scheduling matrices according to the plurality of scheduling paths comprises, for each scheduling path: in condition of a scheduling path indicating that a to-be-allocated task element is to be performed with simulated allocation to an idle node device, adjusting an allocation parameter value corresponding to the to-be-allocated task element and the idle node device in an initial scheduling matrix to a target parameter value; and, after completing the adjustment of the allocation parameter value indicated by the scheduling path, updating the initial scheduling matrix to a simulated scheduling matrix (see Zhang: running the Storm cluster according to the task allocation methods in the initialized particle swarm, obtaining various performance indicators of each worker node of the Storm cluster, updating each task allocation method, updating the Pbest of each particle and the Gbest of the particle swarm after iterative updating, and obtaining a global historical optimal task allocation method Gbest once the number of iterations reaches a preset maximum number of iterations; claim 1).

As to claim 11, Zhang as modified by Yin teaches the method according to claim 4, wherein the determining the candidate scheduling matrix from the plurality of simulated scheduling matrices comprises: obtaining a load parameter of each simulated scheduling matrix in sequence; and determining a simulated scheduling matrix corresponding to a smallest value of the load parameter as the candidate scheduling matrix (see Zhang: applying the load-dependent fitness function and taking the smallest value of the load parameter; "the load dispersion degree F of all working nodes [..] measures the degree of load balancing ... the smaller the F value, the greater the degree of load balancing ... to obtain F is the minimum value"; paragraphs [0057]-[0058]; see also claim 11, (22) and (23)).

As to claim 12, Zhang teaches executing a method to perform ("Storm cluster load balancing based on discrete particle swarm is characterized in that it includes the following steps"; claim 1): initializing allocation parameters, wherein the allocation parameters comprise the number of idle node devices in a target device cluster and the number of to-be-allocated task elements ("Get the number of working nodes s of the Storm cluster in the running state and the number of tasks t to be allocated"; paragraph [0007] and claim 1); performing simulated allocation on the to-be-allocated task elements to each idle node device for processing to generate a candidate scheduling matrix, wherein the candidate scheduling matrix is configured to indicate a simulated allocation result ("initialize the particle swarm to obtain multiple different task allocation methods for each task assigned to the location of the working node"; initializing a particle swarm corresponds to generating a candidate scheduling matrix; paragraph [0008] and claim 1; and "initializing particle velocity and particle position [..] initializing particle position is: using a matrix random generation method to generate m*t matrix rand, and the range of each element in the matrix is 1, 2, 3, ..., s, where m is the population size of the particle swarm, and each row represents a task allocation method"; claim 2; "allocation result" corresponds to "simulated allocation result"); obtaining a load balancing parameter corresponding to the candidate scheduling matrix, and determining a target scheduling matrix according to the load balancing parameter ("Use each task allocation method in the particle swarm as its own historical best task allocation method Pbest, calculate the fitness value of each task allocation method, and select the task allocation method with the smallest fitness value from the particle swarm as the global historical best task allocation method Gbest"; paragraph [0009] and claim 1; "load balancing parameter" corresponds to "fitness value", which is determined for candidate matrices in each iteration); and in response to the target scheduling matrix satisfying an allocation condition, allocating the to-be-allocated task elements to the idle node devices according to the target scheduling matrix ("Repeat steps (14) to (15) until the number of iterations in the iteration formula reaches the preset maximum number of iterations to obtain the global historical best task allocation method Gbest" and "Run the Storm cluster according to the global historical best task allocation method Gbest"; claim 1 and paragraphs [0012]-[0013]).

Zhang does not teach a non-transitory computer-readable storage medium storing a program that performs the method when executed by a processor, or allocating tasks to the idle nodes. Zhang teaches the optimization method to achieve load balancing and optimal utilization by taking all the nodes into account for the allocation, based on the given optimization goal, i.e., the fitness function.
Zhang further discloses "The present invention is not limited to the above-mentioned specific implementation manners, and various transformations made by those skilled in the art starting from the above-mentioned ideas without creative work all fall within the scope of protection of the present invention." (paragraph [0104]). However, Yin teaches a non-transitory computer-readable storage medium storing a program that is executed by a processor ("the present invention also provides a computer readable storage medium, the computer-readable storage medium storing a computer program, the computer program executable by a processor"; page 4), and allocating tasks to idle nodes/servers ("the multiple single task allocated to a plurality of proxy server is in an idle state"; abstract and page 3). Given Zhang's teaching that the invention can be implemented in different manners, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Yin to the system of Zhang because Yin teaches a testing method that improves the efficiency of the difficulty level evaluation and improves system stability (abstract), which, when applied to the system of Zhang, would achieve load balancing of tasks among the servers/devices and improve system stability.

As to claims 13-18, see the rejections of claims 2-4, 7, 10, and 11 above, respectively.
As to claim 19, Zhang teaches executing a method to perform ("Storm cluster load balancing based on discrete particle swarm is characterized in that it includes the following steps"; claim 1): initializing allocation parameters, wherein the allocation parameters comprise the number of idle node devices in a target device cluster and the number of to-be-allocated task elements ("Get the number of working nodes s of the Storm cluster in the running state and the number of tasks t to be allocated"; paragraph [0007] and claim 1); performing simulated allocation on the to-be-allocated task elements to each idle node device for processing to generate a candidate scheduling matrix, wherein the candidate scheduling matrix is configured to indicate a simulated allocation result ("initialize the particle swarm to obtain multiple different task allocation methods for each task assigned to the location of the working node"; initializing a particle swarm corresponds to generating a candidate scheduling matrix; paragraph [0008] and claim 1; and "initializing particle velocity and particle position [..] initializing particle position is: using a matrix random generation method to generate m*t matrix rand, and the range of each element in the matrix is 1, 2, 3, ..., s, where m is the population size of the particle swarm, and each row represents a task allocation method"; claim 2; "allocation result" corresponds to "simulated allocation result"); obtaining a load balancing parameter corresponding to the candidate scheduling matrix, and determining a target scheduling matrix according to the load balancing parameter ("Use each task allocation method in the particle swarm as its own historical best task allocation method Pbest, calculate the fitness value of each task allocation method, and select the task allocation method with the smallest fitness value from the particle swarm as the global historical best task allocation method Gbest"; paragraph [0009] and claim 1; "load balancing parameter" corresponds to "fitness value", which is determined for candidate matrices in each iteration); and in response to the target scheduling matrix satisfying an allocation condition, allocating the to-be-allocated task elements to the idle node devices according to the target scheduling matrix ("Repeat steps (14) to (15) until the number of iterations in the iteration formula reaches the preset maximum number of iterations to obtain the global historical best task allocation method Gbest" and "Run the Storm cluster according to the global historical best task allocation method Gbest"; claim 1 and paragraphs [0012]-[0013]).

Zhang does not teach an electronic device comprising a memory and a processor, wherein the memory stores a computer program that, when executed by the processor, causes the processor to perform the method; nor does Zhang teach allocating tasks to the idle nodes. However, Zhang teaches the optimization method to achieve load balancing and optimal utilization by taking all the nodes into account for the allocation, based on the given optimization goal, i.e., the fitness function.
Zhang further discloses "The present invention is not limited to the above-mentioned specific implementation manners, and various transformations made by those skilled in the art starting from the above-mentioned ideas without creative work all fall within the scope of protection of the present invention." (paragraph [0104]). Yin teaches an electronic device comprising a memory and a processor, wherein the memory stores a computer program that, when executed by the processor, causes the processor to perform ("the device comprising: a processor; for storing processor-executable instructions; wherein the processor is configured to execute" and "the present invention also provides a computer readable storage medium, the computer-readable storage medium storing a computer program"; pages 3-4), and allocating tasks to idle nodes/servers ("the multiple single task allocated to a plurality of proxy server is in an idle state"; abstract and page 3). Given Zhang's teaching that the invention can be implemented in different manners, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Yin to the system of Zhang because Yin teaches a testing method that improves the efficiency of the difficulty level evaluation and improves system stability (abstract), which, when applied to the system of Zhang, would achieve load balancing of tasks among the servers/devices and improve system stability.

As to claim 20, see the rejection of claim 2 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Zeng et al. (EP 3 553 656 A1) teaches a resource scheduling method and apparatus, so as to resolve a prior-art problem of low resource usage.
The method includes: determining, by a resource scheduler, a dominant share of an i-th user, and determining a dominant idle resource of an a-th node; then selecting, from N users, a first user with a minimum dominant share, and selecting a first task from tasks of the first user to which no resources are allocated; selecting a first node from M nodes according to dominant idle resources of the M nodes, where a dominant idle resource of the selected first node is the same as a resource corresponding to a dominant share of the selected first user; and finally, scheduling, by the resource scheduler, a resource of the selected first node to the selected first user, so that the first user executes the selected first task by using the scheduled resource.

Maheswaran et al. (Dynamic Matching and Scheduling of a Class of Independent Tasks onto Heterogeneous Computing Systems) studies dynamic mapping (matching and scheduling) heuristics for a class of independent tasks using heterogeneous distributed computing systems. Two types of mapping heuristics are considered: on-line and batch-mode heuristics. Three new heuristics, one for batch and two for on-line mode, are introduced as part of this research. Simulation studies are performed to compare these heuristics with some existing ones. In total, five on-line heuristics and three batch heuristics are examined. The on-line heuristics consider, to varying degrees and in different ways, task affinity for different machines and machine ready times. The batch heuristics consider these factors, as well as aging of tasks waiting to execute. The simulation results reveal that the choice of mapping heuristic depends on parameters such as: (a) the structure of the heterogeneity among tasks and machines, (b) the optimization requirements, and (c) the arrival rate of the tasks.

Sukhoroslov et al. (An experimental study of scheduling algorithms for many-task applications) studies the performance of algorithms for scheduling many-task applications in distributed computing systems. Two important classes of such applications are considered: bags-of-tasks and workflows. The comparison of algorithms is performed on the basis of discrete-event simulation for various application cases and system configurations. The developed simulation framework, based on the SimGrid toolkit, provides the necessary tools for implementation of scheduling algorithms, generation of synthetic systems and applications, execution of simulation experiments, and analysis of results. This allowed a large number of experiments to be performed in a reasonable amount of time and ensured reproducible results. The conducted experiments demonstrate the dependence of the performance of the studied algorithms on various application and system characteristics. While confirming the performance advantage of advanced static algorithms, the presented results reveal some interesting insights. In particular, the accuracy of the used network model helped demonstrate the limitations of simple analytical models for scheduling data-intensive parallel applications with static algorithms.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DIEM K CAO, whose telephone number is (571) 272-3760. The examiner can normally be reached Monday-Friday, 8:00am-4:00pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, April Blair, can be reached at 571-270-1014.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DIEM K CAO/
Primary Examiner, Art Unit 2196
DC
January 15, 2026
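The dominant-share selection loop that the conclusion attributes to Zeng et al. can be sketched in outline. This is a minimal illustration under stated assumptions only: the dict-based data shapes and every name here (schedule_once, dominant_share, the used/idle/pending fields) are invented for exposition and are not drawn from EP 3 553 656 A1 itself.

```python
# Hypothetical sketch of a dominant-share scheduling round, loosely
# following the steps quoted above for Zeng et al. All names and data
# shapes are illustrative assumptions, not taken from the reference.

def dominant_share(used, capacity):
    """Share of the user's most-demanded ("dominant") resource."""
    return max(used[r] / capacity[r] for r in capacity)

def schedule_once(users, nodes, capacity):
    """One allocation round: pick the user with the minimum dominant
    share, take one of its pending tasks, and place it on a node whose
    dominant idle resource matches that user's dominant resource."""
    # 1. Among users with pending tasks, select the minimum dominant share.
    candidates = [u for u in users if u["pending"]]
    if not candidates:
        return None
    user = min(candidates, key=lambda u: dominant_share(u["used"], capacity))
    task = user["pending"].pop(0)

    # 2. The selected user's dominant resource (maximizes its usage share).
    dom = max(capacity, key=lambda r: user["used"][r] / capacity[r])

    # 3. A node whose most-idle resource is that same resource type.
    def dominant_idle(node):
        return max(node["idle"], key=lambda r: node["idle"][r] / capacity[r])
    node = next((n for n in nodes if dominant_idle(n) == dom), nodes[0])

    # 4. Allocate: charge the user's usage, consume the node's idle capacity.
    for r in capacity:
        user["used"][r] += task["demand"][r]
        node["idle"][r] -= task["demand"][r]
    return user["name"], task["id"], node["name"]
```

With two users and two nodes, the user with the smaller dominant share is served first, and the task lands on the node whose idle capacity is dominated by that user's dominant resource.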
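The Maheswaran et al. citation above does not name the individual heuristics, but Min-min is a classic batch-mode heuristic of the kind that line of work examines: for each unmapped task, find the machine giving its minimum completion time, then commit the task whose minimum completion time is smallest, update that machine's ready time, and repeat. A minimal sketch (function and variable names are illustrative):

```python
def min_min(etc, ready):
    """Min-min batch mapping. etc[t][m] is the estimated time to
    compute task t on machine m; ready[m] is machine m's ready time
    (updated in place). Returns a list of (task, machine) pairs in
    the order tasks are mapped."""
    unmapped = set(range(len(etc)))
    mapping = []
    while unmapped:
        # For each unmapped task: the machine with minimum completion time.
        best = {}
        for t in unmapped:
            m = min(range(len(ready)), key=lambda m: ready[m] + etc[t][m])
            best[t] = (m, ready[m] + etc[t][m])
        # Commit the task whose minimum completion time is smallest.
        t = min(unmapped, key=lambda t: best[t][1])
        m, completion = best[t]
        mapping.append((t, m))
        ready[m] = completion  # machine is busy until this task finishes
        unmapped.remove(t)
    return mapping
```

Note how the ready-time update is what makes the heuristic dynamic: each committed task changes the completion-time landscape for the tasks still waiting, which is exactly the machine-ready-time factor the cited study says these heuristics weigh.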

Prosecution Timeline

Jul 28, 2023
Application Filed
Nov 13, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596576
TECHNIQUES TO EXPOSE APPLICATION TELEMETRY IN A VIRTUALIZED EXECUTION ENVIRONMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12596585
DATA PROCESSING AND MANAGEMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12561178
SYSTEM AND METHOD FOR MANAGING DATA RETENTION IN DISTRIBUTED SYSTEMS
2y 5m to grant Granted Feb 24, 2026
Patent 12547445
AUTO TIME OPTIMIZATION FOR MIGRATION OF APPLICATIONS
2y 5m to grant Granted Feb 10, 2026
Patent 12541396
RESOURCE ALLOCATION METHOD AND SYSTEM AFTER SYSTEM RESTART AND RELATED COMPONENT
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
99%
With Interview (+19.4%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 663 resolved cases by this examiner. Grant probability derived from career allow rate.
