Prosecution Insights
Last updated: April 19, 2026
Application No. 18/379,196

METHOD AND SYSTEM FOR DYNAMICALLY SCHEDULING EXECUTION OF TASKS

Status: Non-Final Office Action (§103), Round 1
Filed: Oct 12, 2023
Examiner: CAO, DIEM K
Art Unit: 2196
Tech Center: 2100 — Computer Architecture & Software
Assignee: HCL Technologies Italy S.p.A.

Grant Probability: 80% (Favorable), rising to 99% with an interview
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 7m

Examiner Intelligence

Career Allow Rate: 80% (531 granted / 663 resolved), above average at +25.1% vs the Tech Center average
Interview Lift: +19.4% among resolved cases with an interview (a strong lift)
Typical Timeline: 3y 7m average prosecution; 29 applications currently pending
Career History: 692 total applications across all art units

Statute-Specific Performance

§101: 10.6% (-29.4% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§103: 46.7% (+6.7% vs TC avg)
§112: 20.5% (-19.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 663 resolved cases.

Office Action: Non-Final, §103 (mailed Feb 20, 2026)
DETAILED ACTION

Claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4-7, 9, 10, 12-15, 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Zeng et al. (CN 112395053 A; English translation provided by IP.com) in view of Yuan (CN 113806017 A; English translation provided by IP.com).

As to claim 1, Zeng teaches a method of dynamically scheduling execution of tasks on an Operating System (OS) (a process of generating, by the control unit or the front-end server, the adjustment information according to the average response time of the back-end server to the received data processing tasks; page 8, 7th paragraph), the method comprising:

obtaining, by a scheduling device, a plurality of data corresponding to a set of tasks to be executed on the OS (the control unit or the front-end server subscribes to a log file of the back-end server from a log center (the log center uniformly collects log files of the back-end server), analyzes from the log file the number of data processing tasks received by the back-end server every second, and calculates the total response time and the total decoding time of the data processing tasks; page 8, 9th-11th paragraphs);

computing, by the scheduling device, a combined normalized weighted value corresponding to the plurality of obtained data (Step B: based on the stored data, the control unit or front-end server calculates, second by second, the average response time (denoted T_avg_latency) and the average decoding time (denoted T_avg_decode_time) over the last X seconds, where X is greater than or equal to 1 and less than or equal to N; page 8, 12th paragraph);

determining, by the scheduling device, a deviation of the combined normalized weighted value from a predefined threshold value, wherein the deviation is indicative of processing load of the OS ("when the average response time is less than or equal to a first threshold (i.e., T_avg_latency ≤ 1.2 × T_avg_decode_time)"; page 10, 1st paragraph; "when the average response time is greater than the second threshold (i.e., T_avg_latency > 1.5 × T_avg_decode_time)"; page 10, 3rd paragraph; and "when the average response time is greater than the first threshold value and less than or equal to the second threshold value (i.e., 1.2 × T_avg_decode_time ≤ T_avg_latency ≤ 1.5 × T_avg_decode_time)"; page 10, 5th paragraph); and

regulating, by the scheduling device, a throughput rate of execution of the set of tasks based on the deviation ("The adjustment information indicates that the first set adjustment value is to be added to the current processing rate. Therefore, the target processing rate determined by the front-end server from the current processing rate and the adjustment information can be increased, so that more data processing tasks can be sent and the amount of data processing tasks acquired and processed by the back-end server can be increased"; page 10, 1st paragraph; "The adjustment information instructs that the second set adjustment value be subtracted from the current processing rate, because the waiting time of data processing tasks in the back-end server is longer and the amount of data processing tasks in the back-end server is larger. Thus, the target processing rate that the front-end server determines from the current processing rate and the adjustment information is reduced, so that fewer data processing tasks are sent and the load of the back-end server is reduced"; page 10, 3rd paragraph; and "In this case, the amount of data processing tasks in the back-end server is moderate, and therefore the adjustment information indicates that the current processing rate is to be maintained; in other words, the third set adjustment value indicated in the adjustment information is 0"; page 10, 5th paragraph).

Zeng does not teach a plurality of weighted matrices. However, Yuan teaches a plurality of weighted matrices corresponding to a set of tasks to be executed on the system/machine (setting the weight vector according to the importance of CPU, memory, network and storage on the physical host comprises: constructing a comparison matrix according to the importance degrees of the CPU, the memory, the network and the storage; taking the geometric mean of each row of the comparison matrix; and normalizing the resulting row values to obtain a weight vector; page 8, 2nd paragraph). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Yuan to the system of Zeng, because Yuan teaches a method that achieves balanced utilization and a maximized utilization rate of the basic resources in a cloud data center, thereby reducing the number of physical servers and the energy consumption while also reducing competition for the various basic resources (abstract).

As to claim 2, Zeng as modified by Yuan teaches the method of claim 1, wherein each of the plurality of weighted matrices comprises one or more matrix elements corresponding to a plurality of predefined parameters associated with the OS executing the set of tasks (see Zeng: decoding time is the time a data processing task spends actually being decoded in the back-end server, and response time is the sum of the time the task waits in the back-end server for decoding to be executed; page 8, 10th and 11th paragraphs; and see Yuan: CPU, memory; page 8, 2nd paragraph).
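To make the mapped flow concrete, the following is a minimal Python sketch of claim 1 as the rejection reads it onto Zeng: normalize the obtained per-task metrics, combine them into a single weighted value, measure its deviation from a threshold, and adjust the dispatch rate in three bands (the 1.2x and 1.5x multipliers echo Zeng's translated thresholds). All names, weights, and numbers are illustrative assumptions, not code from the application or the references.

```python
# Hypothetical sketch of the claim 1 flow as the Office Action maps it onto
# Zeng. Names, weights, and the 1.2x/1.5x band multipliers (taken from Zeng's
# translated thresholds) are illustrative assumptions only.

def combined_normalized_weighted_value(samples, weights):
    """Normalize each metric column to [0, 1], then take a weighted mean."""
    normalized = {}
    for k in weights:
        col = [s[k] for s in samples]
        lo, hi = min(col), max(col)
        normalized[k] = [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in col]
    n = len(samples)
    return sum(weights[k] * sum(normalized[k]) / n for k in weights)

def regulate_rate(current_rate, value, threshold, step=5.0,
                  low_band=1.2, high_band=1.5):
    """Zeng-style three-band controller: raise, hold, or lower the rate."""
    if value <= low_band * threshold:   # lightly loaded: dispatch more tasks
        return current_rate + step
    if value > high_band * threshold:   # overloaded: back off
        return max(current_rate - step, 0.0)
    return current_rate                 # moderate load: hold steady

# Example: per-second task metrics gathered over the last X seconds.
samples = [
    {"response_time": 120.0, "decode_time": 80.0},
    {"response_time": 150.0, "decode_time": 90.0},
    {"response_time": 180.0, "decode_time": 95.0},
]
weights = {"response_time": 0.6, "decode_time": 0.4}
value = combined_normalized_weighted_value(samples, weights)
new_rate = regulate_rate(current_rate=100.0, value=value, threshold=0.5)
```

The middle "hold" band is the notable design point: with a single threshold the controller would oscillate around it, whereas Zeng's two-threshold scheme leaves the rate untouched while load is moderate.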
As to claim 4, Zeng as modified by Yuan teaches the method of claim 1, wherein determining the deviation comprises comparing the combined normalized weighted value with the predefined threshold value (see Zeng: the threshold comparisons quoted for claim 1 above; page 10, 1st, 3rd and 5th paragraphs).

As to claim 5, Zeng as modified by Yuan teaches the method of claim 1, comprising generating at least one indicator of a plurality of indicators based on the deviation to regulate the throughput rate (see Zeng: the adjustment information can indicate an increase, a reduction, or no change (an adjustment value of 0); page 10, 1st, 3rd and 5th paragraphs).

As to claim 6, Zeng as modified by Yuan teaches the method of claim 1, wherein the deviation is at least one of a positive deviation or a negative deviation (see Zeng: the rate is increased or reduced; page 10, 1st and 3rd paragraphs).

As to claim 7, Zeng as modified by Yuan teaches the method of claim 1, further comprising: monitoring, in real-time, characteristics of the OS during execution of the set of tasks; and storing the characteristics of the OS as historical data in an associated database (see Zeng: the control unit or the front-end server subscribes to a log file of the back-end server from a log center (the log center uniformly collects log files of the back-end server), analyzes from the log file the number of data processing tasks received by the back-end server every second, and calculates the total response time and the total decoding time of the data processing tasks; page 8, 9th-11th paragraphs).
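Claim 7's monitor-and-store limitation is, in effect, a metrics logger feeding the historical database that claim 8 later queries. A minimal sketch, assuming sqlite3 stands in for the "associated database" and with made-up metric fields (nothing here comes from Zeng's actual implementation):

```python
# Hypothetical sketch of claim 7: sample task/OS characteristics in real
# time and persist them as historical records. The sampled fields and the
# use of sqlite3 are illustrative assumptions.
import sqlite3
import time

def monitor_and_store(db_path, get_metrics, cycles=3, interval_s=1.0):
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS history ("
        "ts REAL, response_time REAL, decode_time REAL, tasks_per_sec INTEGER)"
    )
    for _ in range(cycles):
        m = get_metrics()  # one real-time sample of OS characteristics
        conn.execute(
            "INSERT INTO history VALUES (?, ?, ?, ?)",
            (time.time(), m["response_time"], m["decode_time"], m["tasks_per_sec"]),
        )
        conn.commit()
        time.sleep(interval_s)
    conn.close()

# Example with a stubbed metrics source:
monitor_and_store(
    ":memory:",
    get_metrics=lambda: {"response_time": 140.0, "decode_time": 85.0,
                         "tasks_per_sec": 52},
    cycles=2, interval_s=0.01,
)
```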
As to claim 9, Zeng teaches a system for dynamically scheduling execution of tasks on an Operating System (OS) (a data processing apparatus … control the data processing task to be sent through the control information; page 5, 10th paragraph), the system comprising: a processor (a processor; page 5, 10th paragraph); and a memory communicatively coupled to the processor (a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus; page 5, 10th paragraph), wherein the memory stores processor-executable instructions which, when executed by the processor, cause the processor to (the memory stores at least one executable instruction, and the executable instruction causes the processor to execute the corresponding operation of the data processing method according to the first aspect; page 5, 10th paragraph): obtain a plurality of data corresponding to a set of tasks to be executed on the OS; compute a combined normalized weighted value corresponding to the plurality of obtained data; determine a deviation of the combined normalized weighted value from a predefined threshold value, wherein the deviation is indicative of processing load of the OS; and regulate a throughput rate of execution of the set of tasks based on the deviation (for each of these limitations, see the corresponding evidence quoted in the rejection of claim 1 above; Zeng, page 8, 9th-12th paragraphs and page 10, 1st, 3rd and 5th paragraphs).

Zeng does not teach a plurality of weighted matrices. However, Yuan teaches a plurality of weighted matrices corresponding to a set of tasks to be executed on the system/machine, and the same motivation to combine applies (see the rejection of claim 1 above; Yuan, page 8, 2nd paragraph and abstract).

As to claim 10, see the rejection of claim 2 above. As to claim 12, see the rejection of claim 4 above. As to claim 13, see the rejection of claim 5 above. As to claim 14, see the rejection of claim 6 above. As to claim 15, see the rejection of claim 7 above.
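The weight-vector step Yuan is cited for (comparison matrix, row geometric means, normalization) is the standard analytic-hierarchy-process weighting. A minimal sketch, with an assumed, internally consistent 4x4 importance matrix over CPU, memory, network, and storage; the matrix entries are made up, not values from Yuan:

```python
# Hypothetical sketch of Yuan's weight-vector step: build a pairwise
# comparison matrix from resource importance, take the geometric mean of
# each row, and normalize the means into a weight vector.
import math

resources = ["cpu", "memory", "network", "storage"]

# comparison[i][j] = how much more important resource i is than resource j
comparison = [
    [1.0, 2.0, 4.0, 4.0],   # cpu
    [0.5, 1.0, 2.0, 2.0],   # memory
    [0.25, 0.5, 1.0, 1.0],  # network
    [0.25, 0.5, 1.0, 1.0],  # storage
]

# Geometric mean of each row.
row_means = [math.prod(row) ** (1.0 / len(row)) for row in comparison]

# Normalize so the weights sum to 1.
total = sum(row_means)
weights = {name: m / total for name, m in zip(resources, row_means)}

print(weights)  # {'cpu': 0.5, 'memory': 0.25, 'network': 0.125, 'storage': 0.125}
```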
As to claim 17, Zeng teaches a non-transitory computer-readable medium storing computer-executable instructions for dynamically scheduling execution of tasks on an Operating System (OS) (the memory stores at least one executable instruction, and the executable instruction causes the processor to execute the corresponding operation of the data processing method according to the first aspect; page 5, 10th paragraph), the computer-executable instructions configured for: obtaining a plurality of data corresponding to a set of tasks to be executed on the OS; computing a combined normalized weighted value corresponding to the plurality of obtained data; determining a deviation of the combined normalized weighted value from a predefined threshold value, wherein the deviation is indicative of processing load of the OS; and regulating a throughput rate of execution of the set of tasks based on the deviation (for each of these limitations, see the corresponding evidence quoted in the rejection of claim 1 above; Zeng, page 8, 9th-12th paragraphs and page 10, 1st, 3rd and 5th paragraphs). Zeng does not teach a plurality of weighted matrices; however, Yuan does, and the same motivation to combine applies (see the rejection of claim 1 above; Yuan, page 8, 2nd paragraph and abstract).

As to claim 19, see the rejection of claim 7 above.

Claims 3, 11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zeng et al. (CN 112395053 A; English translation provided by IP.com) in view of Yuan (CN 113806017 A; English translation provided by IP.com), further in view of Lu (CN 105260253 A; English translation provided by IP.com).

As to claim 3, Zeng as modified by Yuan teaches the method of claim 2, wherein the plurality of predefined parameters comprises a response time (response time; page 8, 9th and 11th paragraphs). Zeng as modified by Yuan does not teach that the plurality of predefined parameters comprises a capacity of a Central Processing Unit (CPU), Input-Output (I/O) rates, a disk response time, and a network speed. However, Lu teaches these parameters (the Key Performance Indicators preset on the server comprise response time, CPU busy percentage, memory usage, disk I/O occupation rate, and network rate, with first through fifth weighted values set correspondingly for each; page 3, 6th paragraph). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Lu to the system of Zeng as modified by Yuan, because Lu teaches a method of determining the server's or machine's current condition, which can be used to decide whether to regulate the task assignment rate.

As to claim 11, see the rejection of claim 3 above. As to claim 18, see the rejections of claims 2 and 3 above.
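Lu's contribution is a per-parameter weighted sum over the five KPIs. A minimal sketch, with illustrative weights and normalized readings (the quoted excerpt gives no actual values):

```python
# Hypothetical sketch of Lu's weighted KPI score: each predefined parameter
# gets its own weight, and the load figure is the weighted sum of the
# normalized readings. Weights and readings below are illustrative only.

# Readings normalized to [0, 1] (fractions of capacity or of a budget).
readings = {
    "response_time": 0.35,       # fraction of the response-time budget used
    "cpu_busy": 0.62,            # CPU busy percentage / 100
    "memory_usage": 0.48,
    "disk_io_occupation": 0.30,
    "network_rate": 0.55,        # fraction of link capacity in use
}

# "First weighted value" through "fifth weighted value", one per parameter.
kpi_weights = {
    "response_time": 0.30,
    "cpu_busy": 0.25,
    "memory_usage": 0.20,
    "disk_io_occupation": 0.15,
    "network_rate": 0.10,
}

load_score = sum(kpi_weights[k] * readings[k] for k in kpi_weights)
print(f"combined load score: {load_score:.3f}")  # 0.0 (idle) to 1.0 (saturated)
```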
Claims 8, 16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zeng et al. (CN 112395053 A; English translation provided by IP.com) in view of Yuan (CN 113806017 A; English translation provided by IP.com), further in view of Kuo et al. (US 2019/0138354 A1).

As to claim 8, Zeng as modified by Yuan does not teach, for a current cycle of task execution scheduling, identifying a similar pattern from the historical data, and regulating a current throughput rate of execution of the set of tasks based on a historical throughput rate for the similar pattern. However, Kuo teaches these limitations ("According to the rendering history, the time pattern of idle resources may be deduced. Once the idle resource is short, the job requiring short execution time is allocated. If the idle resource is longer, the job requiring longer execution time may be arranged"; paragraph [0010]; and "According to the idle time of the idle computation resources, the idle computation resources are allocated to computation tasks requiring different time to complete"; paragraph [0013]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Kuo to the system of Zeng as modified by Yuan, because Kuo teaches that using historical data on resource-usage time patterns to adjust and allocate tasks may improve the job completion rate.

As to claim 16, see the rejection of claim 8 above. As to claim 20, see the rejection of claim 8 above.
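Claim 8's limitation can be pictured as a nearest-neighbor lookup over the stored history: find the past cycle whose load pattern best matches the current one and reuse its throughput rate. A minimal sketch under assumed record shapes; this is neither Kuo's nor the applicant's actual matching logic:

```python
# Hypothetical sketch of claim 8: find the historical cycle whose load
# pattern is most similar to the current cycle and reuse its throughput
# rate as the starting rate. Euclidean distance and the sample records
# are illustrative assumptions.
import math

history = [
    # (per-second load pattern for a past cycle, throughput rate that worked)
    ([0.2, 0.3, 0.2, 0.4], 120.0),
    ([0.7, 0.8, 0.9, 0.8], 60.0),
    ([0.5, 0.5, 0.6, 0.5], 90.0),
]

def similar_pattern_rate(current, history):
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, rate = min(history, key=lambda rec: distance(rec[0], current))
    return rate

current_cycle = [0.65, 0.75, 0.85, 0.8]
print(similar_pattern_rate(current_cycle, history))  # 60.0: heavy-load pattern
```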
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Dube et al. (US 2017/0031712 A1) teaches a method for scheduling the execution of a workload in a computing environment. The computer identifies information related to the computing environment, including the processors available to execute each computing task and the storage devices' proximity to those processors; determines an execution configuration for the computing job based at least in part on the received request, the environment information, and the current utilization of processor resources; and schedules execution of that configuration.

Chen et al. (US 12,204,934 B2) teaches a method, device, and program product for managing multiple computing tasks on a batch basis: identifying the task type of the computing tasks in response to a request to perform them on a computing unit in a computing system; acquiring the scheduling time overhead incurred in scheduling them for execution by the computing unit; determining, based on the task type and the scheduling overhead, a batch size for dividing the tasks; and dividing the tasks into at least one batch based on that size.

Wang et al. (CN 111078404 A; English translation provided by IP.com) teaches a method comprising: constructing a current batch task set from tasks with the same target resource usage; acquiring the expected resource usage of the tasks in the current batch; assigning nodes to those tasks based on the expected usage; sending a first task-execution instruction to the corresponding node; acquiring the actual resource usage of each node's executed task and deriving the per-task resource usage from the expected and actual figures; obtaining the resource utilization rate of the current batch from the per-task figures; and determining the expected resource usage for the next batch by comparing the current batch's usage rate with a usage-rate threshold.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DIEM K CAO, whose telephone number is (571) 272-3760. The examiner can normally be reached Monday-Friday, 8:00 am-4:00 pm. Examiner interviews are available via telephone, in person, and by video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, April Blair, can be reached at 571-270-1014. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). For assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DIEM K CAO/
Primary Examiner, Art Unit 2196
February 20, 2026

Prosecution Timeline

Oct 12, 2023: Application Filed
Feb 20, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596576: TECHNIQUES TO EXPOSE APPLICATION TELEMETRY IN A VIRTUALIZED EXECUTION ENVIRONMENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596585: DATA PROCESSING AND MANAGEMENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12561178: SYSTEM AND METHOD FOR MANAGING DATA RETENTION IN DISTRIBUTED SYSTEMS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12547445: AUTO TIME OPTIMIZATION FOR MIGRATION OF APPLICATIONS (granted Feb 10, 2026; 2y 5m to grant)
Patent 12541396: RESOURCE ALLOCATION METHOD AND SYSTEM AFTER SYSTEM RESTART AND RELATED COMPONENT (granted Feb 03, 2026; 2y 5m to grant)

Study what changed in these cases to get past this examiner; based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80% (99% with interview, a +19.4% lift)
Median Time to Grant: 3y 7m
PTA Risk: Low

Based on 663 resolved cases by this examiner. Grant probability is derived from the career allow rate.
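The displayed figures appear to combine as follows; the additive interview adjustment is an inference from the numbers shown, not a documented formula:

```python
# Hypothetical reconstruction of the dashboard arithmetic; the additive
# interview adjustment and the 99% cap are assumptions inferred from the
# displayed values.
granted, resolved = 531, 663
career_allow_rate = granted / resolved          # 0.8009... shown as 80%
interview_lift = 0.194                          # +19.4 points with interview
with_interview = min(career_allow_rate + interview_lift, 0.99)
print(f"{career_allow_rate:.0%}, {with_interview:.0%}")  # 80%, 99%
```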
