Prosecution Insights
Last updated: April 19, 2026
Application No. 18/447,729

JOB SCHEDULE QUALITY PREDICTION AND JOB SCHEDULING

Status: Non-Final OA (§101, §103)
Filed: Aug 10, 2023
Examiner: TRUONG, LECHI
Art Unit: 2194
Tech Center: 2100 — Computer Architecture & Software
Assignee: SAP SE
OA Round: 1 (Non-Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 87%, above average (766 granted / 879 resolved; +32.1% vs TC avg)
Interview Lift: +37.1% (allowance among resolved cases with an interview vs. without)
Typical Timeline: 3y 1m average prosecution; 32 applications currently pending
Career History: 911 total applications across all art units

Statute-Specific Performance

§101: 16.9% (-23.1% vs TC avg)
§103: 55.8% (+15.8% vs TC avg)
§102: 3.1% (-36.9% vs TC avg)
§112: 7.6% (-32.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 879 resolved cases.
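The headline examiner figures above follow from simple ratios over resolved cases. A minimal sketch reproducing the career allow rate and the vs-TC-average delta from the counts shown; the Tech Center average used here is the value implied by the +32.1% delta, not sourced data:

```python
# Recompute the headline examiner statistics from the counts shown above.
# The Tech Center average is an assumed, illustrative value.

granted = 766           # career grants (from the card above)
resolved = 879          # career resolved cases

allow_rate = granted / resolved
tc_avg_allow = 0.550    # assumed TC-average allow rate implied by the card

print(f"Career allow rate: {allow_rate:.1%}")          # -> 87.1%
print(f"vs TC avg: {allow_rate - tc_avg_allow:+.1%}")  # -> +32.1%
```

The same arithmetic applies to the statute-specific rates below, each computed over the examiner's 879 resolved cases.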

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. Claims 1-25 are presented for examination.

Claim Rejections - 35 USC § 101

2. 35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claims 1, 8, and 15 are rejected under 35 U.S.C. 101 as directed to an abstract idea without significantly more. Under Step 2A, Prong 1, the limitations "defining a batch job for execution on the computer system, and defining batch job parameters defining limits on the scheduling of the corresponding batch job," "predicting a job schedule quality for candidate job schedule," and "selecting a job schedule from the multiple candidate job schedules according to the predicted job schedule qualities" recite a mental process, since "define," "predict," and "select" are functions that can reasonably be performed in the human mind with the aid of pen and paper, through observation, evaluation, judgment, and opinion.
Under Step 2A, Prong 2, the additional elements "receiving multiple batch job registrations from multiple client systems," "computing a job schedule for executing the multiple batch jobs, computing the job schedule comprising: multiple candidate job schedules, a job schedule satisfying the defined limits," and "executing a job of the multiple batch jobs according to the selected candidate job schedule" are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a generic computer or generic computer components. Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claims are therefore directed to the judicial exception. See MPEP 2106.05(f).

Under Step 2B, the additional elements "receiving multiple batch job registrations from multiple client systems" and "computing a job schedule for executing the multiple batch jobs comprising: multiple candidate job schedules, a job schedule satisfying the defined limits" remain part of the mental process (although the multiple client systems and the job schedule could be generic computer components, as the specification describes them as actual computer hardware), and "executing a job of the multiple batch jobs according to the selected candidate job schedule" is mere instructions to apply the mental process under MPEP 2106.05(f). This amounts to merely generally linking the use of the judicial exception to a particular technological environment or field of use, and to merely applying the judicial exception; it therefore does not amount to significantly more and cannot provide an inventive concept.

Claims 7 and 9 are rejected under 35 U.S.C. 101 as directed to an abstract idea without significantly more.
Under Step 2A, Prong 1, the limitations "classify batch jobs into one of multiple categories with a different expected load" and "defining a start time range and/or end time range" recite a mental process, since "classify" and "define" are functions that can reasonably be performed in the human mind with the aid of pen and paper, through observation, evaluation, judgment, and opinion.

Under Step 2A, Prong 2, the additional elements "receiving multiple batch job registrations from multiple client systems," "computing a job schedule for executing the multiple batch jobs, computing the job schedule comprising: multiple candidate job schedules, a job schedule satisfying the defined limits," and "executing a job of the multiple batch jobs according to the selected candidate job schedule" are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a generic computer or generic computer components. Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claims are therefore directed to the judicial exception. See MPEP 2106.05(f).
Under Step 2B, the additional elements "receiving multiple batch job registrations from multiple client systems" and "computing a job schedule for executing the multiple batch jobs comprising: multiple candidate job schedules, a job schedule satisfying the defined limits" remain part of the mental process (although the multiple client systems and the job schedule could be generic computer components, as the specification describes them as actual computer hardware), and "executing a job of the multiple batch jobs according to the selected candidate job schedule" is mere instructions to apply the mental process under MPEP 2106.05(f). This amounts to merely generally linking the use of the judicial exception to a particular technological environment or field of use, and to merely applying the judicial exception; it therefore does not amount to significantly more and cannot provide an inventive concept.

Claim 10 is rejected under 35 U.S.C. 101 as directed to an abstract idea without significantly more. Under Step 2A, Prong 1, the limitation "detecting a failed batch job execution" recites a mental process, since "detect" is a function that can reasonably be performed in the human mind with the aid of pen and paper, through observation, evaluation, judgment, and opinion.

Under Step 2A, Prong 2, the additional elements "receiving multiple batch job registrations from multiple client systems, computing a job schedule for executing the multiple batch jobs, computing the job schedule comprising: multiple candidate job schedules, a job schedule satisfying the defined limits, executing a job of the multiple batch jobs according to the selected candidate job schedule, rescheduling the batch job at a point in time with improved predicted availability and/or error rates compared to a threshold" are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a generic computer or generic computer components. Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claim is therefore directed to the judicial exception. See MPEP 2106.05(f).

Under Step 2B, the additional elements "receiving multiple batch job registrations from multiple client systems" and "computing a job schedule for executing the multiple batch jobs comprising: multiple candidate job schedules, a job schedule satisfying the defined limits" remain part of the mental process (although the multiple client systems and the job schedule could be generic computer components, as the specification describes them as actual computer hardware), and "executing a job of the multiple batch jobs according to the selected candidate job schedule, rescheduling the batch job at a point in time with improved predicted availability and/or error rates compared to a threshold" is mere instructions to apply the mental process under MPEP 2106.05(f). This amounts to merely generally linking the use of the judicial exception to a particular technological environment or field of use, and to merely applying the judicial exception; it therefore does not amount to significantly more and cannot provide an inventive concept.

Claims 14, 15, 19, 20, 24, and 25 are rejected under 35 U.S.C. 101 as directed to an abstract idea without significantly more. Under Step 2A, Prong 1, the limitations "predict an individual induced load for the multiple batch jobs" and "predicts an individual induced load for the multiple batch jobs" recite a mental process, since "predict" is a function that can reasonably be performed in the human mind with the aid of pen and paper, through observation, evaluation, judgment, and opinion.
Under Step 2A, Prong 2, the additional elements "receiving multiple batch job registrations from multiple client systems, computing a job schedule for executing the multiple batch jobs, computing the job schedule comprising: multiple candidate job schedules, a job schedule satisfying the defined limits, executing a job of the multiple batch jobs according to the selected candidate job schedule, adding individual loads for a particular time induced by batch jobs scheduled to run at the particular time, a job scheduler then approximating the job schedule" are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a generic computer or generic computer components. Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claims are therefore directed to the judicial exception. See MPEP 2106.05(f).

Under Step 2B, the additional elements "receiving multiple batch job registrations from multiple client systems," "computing a job schedule for executing the multiple batch jobs comprising: multiple candidate job schedules, a job schedule satisfying the defined limits," and "a job scheduler then approximating the job schedule" remain part of the mental process (although the multiple client systems and the job schedule could be generic computer components, as the specification describes them as actual computer hardware), and "executing a job of the multiple batch jobs according to the selected candidate job schedule, adding individual loads for a particular time induced by batch jobs scheduled to run at the particular time" is mere instructions to apply the mental process under MPEP 2106.05(f). This amounts to merely generally linking the use of the judicial exception to a particular technological environment or field of use, and to merely applying the judicial exception; it therefore does not amount to significantly more and cannot provide an inventive concept.

Claims 17 and 22 are rejected under 35 U.S.C. 101 as directed to an abstract idea without significantly more. Under Step 2A, Prong 1, the limitation "defining a minimum number of executions of the batch job for a set time period" recites a mental process, since "define" is a function that can reasonably be performed in the human mind with the aid of pen and paper, through observation, evaluation, judgment, and opinion.

Under Step 2A, Prong 2, the additional elements "receiving multiple batch job registrations from multiple client systems, computing a job schedule for executing the multiple batch jobs, computing the job schedule comprising: multiple candidate job schedules, a job schedule satisfying the defined limits, executing a job of the multiple batch jobs according to the selected candidate job schedule, increasing said minimum number of executions for the batch job thus obtaining additional individual measurements" are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a generic computer or generic computer components. Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claims are therefore directed to the judicial exception. See MPEP 2106.05(f).
Under Step 2B, the additional elements "receiving multiple batch job registrations from multiple client systems" and "computing a job schedule for executing the multiple batch jobs comprising: multiple candidate job schedules, a job schedule satisfying the defined limits" remain part of the mental process (although the multiple client systems and the job schedule could be generic computer components, as the specification describes them as actual computer hardware), and "executing a job of the multiple batch jobs according to the selected candidate job schedule, increasing said minimum number of executions for the batch job thus obtaining additional individual measurements" is mere instructions to apply the mental process under MPEP 2106.05(f). This amounts to merely generally linking the use of the judicial exception to a particular technological environment or field of use, and to merely applying the judicial exception; it therefore does not amount to significantly more and cannot provide an inventive concept.

8. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, as discussed above with respect to integration of the abstract idea into a practical application. See MPEP 2106.05(d). Thus, the claims are not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."
Claims 1, 3, 4, 5, 6, 9, 16, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Xu (US 12056524 B2) in view of Bahl (US 20200117504 A1), and further in view of SANKARAN (US 20190347125 A1).

As to claim 1, Xu teaches a computer-implemented method for scheduling batch jobs on a computer system ("Batch processing is the running of a group of jobs, which can run automatically on a computer without user interaction or can be scheduled as computer resources permit. A batch job is a scheduled program or set of programs that is assigned to run on the computer," col 1, ln 15-22); a batch job registration defining a batch job for execution on the computer system ("Batch jobs are often queued during working hours and then executed during the evening or weekend when sufficient computer resources are available. Once a batch job is submitted, the batch job enters a queue [a batch job registration] where the batch job waits until the computer is ready to process the batch job. If the batch job queue contains a multitude of batch jobs waiting to be processed, then the computer may, for example, process batch jobs in chronological order, by priority, or a combination of both," col 1, ln 20-30); and predicting a job schedule quality for a candidate job schedule using a predictive model previously trained on job schedule performance data ("the batch job manager of illustrative embodiments utilizes a set of machine learning models of an artificial intelligence component to generate predictive models 504 based on training data collected from previously run batches of jobs," col 12, ln 47-55; "Batch job manager 218 may utilize the artificial intelligence component to generate batch end time predictive models that predict the batch end time of the batch of jobs during running of the batch of jobs," col 7, ln 23-39).

Bahl teaches defining batch job parameters defining limits on the scheduling of the corresponding batch job ("A schedule for executing batch jobs may simply assign one or more batch jobs for execution by one or more dedicated servers," para [0005], ln 1-3; "changing periodicity constraints [parameters] associated with one or more computation jobs, and other modifications that cause changes to utilized computing resources and/or execution durations," para [0043], ln 9-15; "The new periodicity constraints specify that job A is executed two times a week, job B is executed seven times a week, job C is executed twice a week, job D is executed four times a week, job E is executed four times a week, job F is executed once a week, and job G is executed seven times a week," para [0068], ln 7-16); and computing a job schedule for executing the multiple batch jobs, the computing comprising multiple candidate job schedules and a job schedule satisfying the defined limits ("Scheduler 114 is configured to access data in memory, such as data store 106, and to generate potential job execution schedules using such data as the periodicity constraints, categorization data, and organization data. Scheduler 114 according to one embodiment is also configured to evaluate potential schedules and select one or more schedules to formalize and implement to execute computation jobs," para [0036]; "During time segment 8 of schedule 300, scheduler 114 receives an instruction to modify the job schedule based on new periodicity constraints for job types A-G. In response to receiving the instruction, scheduler accesses data including present schedule 300, past interval 400 of FIG. 4, and filtered interval 500 of FIG. 5. The request was received during time segment 8, and job executor 112 continues to cause execution of the computation jobs according to the next time segment 9 of present schedule 300. During the time segment 9, scheduler 114 iteratively populates resulting ordered arrangements (blocks 214-218), generates new schedules (block 220), and evaluates the schedules to identify and select a particular schedule for implementation (block 222)," para [0068]; "Using this fitness function, a schedule with a lower score or value implies a lesser usage of computing resources with fewer periodicity constraint violations. Accordingly, as scheduler 114 uses the fitness function to calculate fitness scores for different schedules, the scheduler also ranks the schedules by the fitness scores and selects a schedule for implementation that has the best fitness score, or at least an acceptable fitness score that meets some predetermined threshold requirement," para [0078], ln 14-23).

Bahl further teaches selecting a job schedule from the multiple candidate job schedules according to the predicted job schedule qualities ("Illustratively, a schedule maps computation jobs to computing resources for execution over the course of a week, and the weekly timeframe is divided into time segments, such as daily time segments, quarter-day time segments (6 hours), or hourly time segments. The schedule assigns each computation job to be executed by a computing resource during one or more time segments. At block 212, scheduler 114 determines a finite timing to implement the new or modified schedule, which for instance may amount to a next full time segment. In such an example, scheduler 114 receives an instruction to generate a schedule during a first time segment. The instruction may be triggered by a requested change to one or more of the computation jobs. In response, scheduler 114 identifies a second time segment immediately following the first time segment. During the identified second time segment, scheduler 114 performs functions such as the operations of blocks 214-222 to generate potential schedules and selects a particular schedule for implementation, such that the selected schedule can be implemented as represented by block 224 at the conclusion of the second time segment. If additional time is allowed to perform the functions of blocks 214-222, then scheduler 114 can perform additional iterations of blocks 214-222 to help select an even more optimal schedule for execution," para [0052], ln 7-15 to para [0053], Fig. 2); and executing a job of the multiple batch jobs according to the selected candidate job schedule ("to evaluate potential schedules and select one or more schedules to formalize and implement to execute computation jobs," para [0036], ln 5-9; "selects a particular schedule for implementation, such that the selected schedule can be implemented as represented by block 224 at the conclusion of the second time segment. If additional time is allowed to perform the functions of blocks 214-222, then scheduler 114 can perform additional iterations of blocks 214-222 to help select an even more optimal schedule for execution," para [0053], ln 13-19, Fig. 2).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teaching of Xu with Bahl to incorporate the above features, because of the desire to generate effective schedules within a finite time while helping to ensure that necessary computer resources are reserved for each computation job (avoiding starvation) and maintaining the periodicity of computation jobs.

SANKARAN teaches receiving multiple batch job registrations from multiple client systems for defining a batch job ("In one implementation, in shared mode, a DSA client uses the ENQCMD or ENQCMDS instructions to submit descriptors to the work queue," para [0485]; "Batch processing: some implementations support submitting descriptors in a 'batch.' A batch descriptor points to a set of virtually contiguous work descriptors (i.e., descriptors containing actual data operations). When processing a batch descriptor, DSA fetches the work descriptors from the specified memory and processes them," para [0097]; "FIG. 35 illustrates one implementation of a data streaming accelerator (DSA) device comprising multiple work queues 3511-3512 which receive descriptors submitted over an I/O fabric interface 3501 (e.g., such as the multi-protocol link 2800 described above)," para [0473], ln 1-6; "some of the descriptors queued in the work queues 3511-3512 are batch descriptors 3515 which contain/identify a batch of work descriptors. The arbiter 3513 forwards batch descriptors to a batch processing unit 3516 which processes batch descriptors by reading the array of descriptors 3518 from memory, using addresses translated through translation cache 3520," para [0474], ln 1-9; "For example, the arbiter 3513 may be configured to implement various QoS and/or fairness policies for dispatching descriptors from each of the work queues 3511-3512 to each of the engines 3550," para [0473], ln 15-20; "the DSA supports submitting multiple descriptors at once. A batch descriptor contains the address of an array of work descriptors in host memory and the number of elements in the array. The array of work descriptors is called the 'batch.' Use of Batch descriptors allows DSA clients to submit multiple work descriptors using a single ENQCMD, ENQCMDS, or MOVDIR64B instruction and can potentially improve overall throughput," para [0545], ln 1-8).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teaching of Xu and Bahl with SANKARAN to incorporate the above feature, because it allows a quality of service (QoS) level to be specified for each work queue and allows different work queues to be assigned to different applications, so that the work from different applications can be dispatched from the work queues with different priorities.
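The Bahl passage cited above (para [0078]) describes ranking candidate schedules by a fitness score, where a lower score means less resource usage and fewer periodicity-constraint violations, and selecting the best-scoring schedule. A minimal sketch of that selection step; the scoring weight and the candidate data are hypothetical, not taken from the reference:

```python
# Fitness-based schedule selection, per Bahl para [0078]: lower is better.
# The violation penalty weight (10.0) and candidates are illustrative.

def fitness(schedule):
    """Score a candidate: resource usage plus a penalty per violation."""
    return schedule["resource_usage"] + 10.0 * schedule["violations"]

candidates = [
    {"name": "A", "resource_usage": 42.0, "violations": 2},   # score 62.0
    {"name": "B", "resource_usage": 55.0, "violations": 0},   # score 55.0
    {"name": "C", "resource_usage": 48.0, "violations": 1},   # score 58.0
]

best = min(candidates, key=fitness)  # lowest fitness score wins
print(best["name"])  # -> B
```

Ranking by `sorted(candidates, key=fitness)` would likewise reproduce the reference's described ordering of schedules by fitness score.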
As to claim 3, Bahl teaches that a job schedule quality comprises a predicted load for an involved system resource of the computer system, induced by the candidate job schedule, and/or a predicted availability of a system involved in the processing of the candidate job schedule (para [0078], ln 14-23; para [0053]; para [0036], ln 13-19, Fig. 2; para [0068]), for the same reasons as for claim 1 above.

As to claim 4, Xu teaches that the predictive model is configured to predict an individual induced load for the multiple batch jobs (col 14, ln 5-34), and Bahl teaches that the job schedule quality comprises adding individual loads for a particular time induced by batch jobs scheduled to run at the particular time (para [0043], ln 9-15; para [0068], ln 7-16), for the same reasons as for claim 1 above.

As to claim 5, Xu teaches that the predictive model first predicts an individual induced load for the multiple batch jobs (col 7, ln 23-40), and Bahl teaches a job scheduler then approximating the job schedule (para [0036]), for the same reasons as for claim 1 above.

As to claim 6, Bahl teaches a job scheduler that generates multiple job schedules, for which a quality is then predicted (para [0036]; para [0053], ln 1-9), for the same reasons as for claim 1 above.

As to claim 9, Bahl teaches that the job parameters comprise one or more of a time horizon defining a start time range and/or end time range, and an execution guarantee indicating a minimal and/or maximum number of executions for a time period (para [0047]; para [0072], ln 1-24; para [0006], ln 16-22; para [0078], ln 14-23), for the same reasons as for claim 1 above.

As to claims 16 and 21, they are rejected for the same reasons as claim 1 above.
In additional, Xu teaches processor( a processor to carry out aspects of the present invention., col 2, ln 18-20), non-transitory computer-readable medium( computer-readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire, col 2, ln 28-45). Claim(s) 2, 11, 12, 13, 14, 15, 17, 22, 18, 19, 20, 23-25 are rejected under 35 U.S.C. 103 as being unpatentable over Xu( US 12056524 B2) in view of Bahl( US 20200117504 A1) in SANKARAN ( US 20190347125 A1 ) and further in view of Seeger( US 10748072 B1). As to claim 2, Xu teaches the computer system is a cloud system( col 4, ln 35-39) , SANKARAN teaches the multiple batch jobs sharing the same computer hardware( para[0437], ln 11-16/ Fig. 34), Seeger teaches the client systems connect to the computer system to register one or more batch jobs( The administrative or control plane portion of the MLS may include a request handler 880, which accepts client requests 811, and takes different actions depending on the nature of the analysis requested. 
For at least some types of requests, the request handler may insert corresponding job objects into batch job queue 842, as indicated by arrow 812., col 16, ln 13-20/ ( In at least some implementations, job queue 842 may be managed as a first-in-first-out (FIFO) queue, with the further constraint that the dependency requirements of a given job must have been met in order for that job to be removed from the queue. In some embodiments, jobs created on behalf of several different clients[multiple client systems] may be placed in a single queue, while in other embodiments multiple queues may be maintained (e.g., one queue in each data center of the provider network being used, or one queue per MLS customer). Asynchronously with respect to the submission of the requests 811, the next job whose dependency requirements have been met may be removed from job queue 842 in the depicted embodiment, as indicated by arrow 813, and a processing plan comprising a workload distribution strategy may be identified for it. With respect to the forecasting iterations discussed in the context of FIG. 7, respective sets of forecasting jobs may be created and queued for each iteration in some embodiments. The workload distribution strategy layer 875, which may also be a component of the MLS control plane as mentioned earlier, may determine the manner in which the lower level operations of the job are to be distributed among one or more compute servers (e.g., servers selected from pool 885), and/or the manner in which the data analyzed or manipulated for the job is to be distributed among one or more storage devices or servers. As indicated by arrow 814, the workload distribution strategy layer 875 may also be utilized by forecasting coordinator 881 in some embodiments, e.g., to help identify the set of servers to be used for the forecasting. For example, as discussed in the context of FIG. 
7, in at least one embodiment forecasting for respective partitions of a large inventory may be implemented in a parallelized manner. After the processing plan has been generated and the appropriate set of resources to be utilized for the batch job has been identified, operations may be scheduled on the identified resources. Results of some batch jobs or real-time analyses may be stored as MLS artifacts within repository 820 in some embodiments, as indicated by arrow 847, col 16, ln 47-67 to col 17, ln 1-18). It would have been obvious to one of the ordinary skill in the art before the effective filling date of claimed invention was made to modify the teaching of Xu, Bahl and SANKARAN with Seeger to incorporate the above feature because this provides or registers their own modules (which may be defined as user-defined functions) for input record handling, feature processing, or for implementing additional machine learning algorithms than are supported natively by the MLS. As to claim 11, it is rejected for the same reason as to claim 2 above. In additional, Seeger teaches performing multiple iterations of: receiving multiple batch job registrations from multiple client systems(The administrative or control plane portion of the MLS may include a request handler 880, which accepts client requests 811, and takes different actions depending on the nature of the analysis requested. For at least some types of requests, the request handler may insert corresponding job objects into batch job queue 842, as indicated by arrow 812., col 16, ln 13-20/ ( In at least some implementations, job queue 842 may be managed as a first-in-first-out (FIFO) queue, with the further constraint that the dependency requirements of a given job must have been met in order for that job to be removed from the queue. 
In some embodiments, jobs created on behalf of several different clients[multiple client systems] may be placed in a single queue, while in other embodiments multiple queues may be maintained (e.g., one queue in each data center of the provider network being used, or one queue per MLS customer). Asynchronously with respect to the submission of the requests 811, the next job whose dependency requirements have been met may be removed from job queue 842 in the depicted embodiment, as indicated by arrow 813, and a processing plan comprising a workload distribution strategy may be identified for it. With respect to the forecasting iterations discussed in the context of FIG. 7, respective sets of forecasting jobs may be created and queued for each iteration in some embodiments. The workload distribution strategy layer 875, which may also be a component of the MLS control plane as mentioned earlier, may determine the manner in which the lower level operations of the job are to be distributed among one or more compute servers (e.g., servers selected from pool 885), and/or the manner in which the data analyzed or manipulated for the job is to be distributed among one or more storage devices or servers. As indicated by arrow 814, the workload distribution strategy layer 875 may also be utilized by forecasting coordinator 881 in some embodiments, e.g., to help identify the set of servers to be used for the forecasting. For example, as discussed in the context of FIG. 7, in at least one embodiment forecasting for respective partitions of a large inventory may be implemented in a parallelized manner. After the processing plan has been generated and the appropriate set of resources to be utilized for the batch job has been identified, operations may be scheduled on the identified resources. 
Results of some batch jobs or real-time analyses may be stored as MLS artifacts within repository 820 in some embodiments, as indicated by arrow 847, col 16, ln 47-67 to col 17, ln 1-18), executing the job schedule while measuring the job schedule performance (to evaluate potential schedules and select one or more schedules to formalize and implement to execute computation jobs, para[0036], ln 5-9 / selects a particular schedule for implementation, such that the selected schedule can be implemented as represented by block 224 at the conclusion of the second time segment. If additional time is allowed to perform the functions of blocks 214-222, then scheduler 114 can perform additional iterations of blocks 214-222 to help select an even more optimal schedule for execution, para[0053], ln 13-19, Fig. 2) for the same reason as to claim 2 above. As to claim 12, Seeger teaches at least one of the batch job registrations has a parameter defining a minimum number of executions of the batch job for a set time period (col 16, ln 15-25 / col 16, ln 32-45), Bahl teaches measuring the job schedule performance comprises measuring the individual performance of the multiple batch jobs in the job schedule, defining a minimum number of executions of the batch job for a set time period (para[0029], ln 1-20 / para[0030] / para[0036] / para[0045]), increasing said minimum number of executions for the batch job thus obtaining additional individual measurements (para[0043] / para[0044], ln 7-16 / para[0070], ln 6-16) for the same reasons as to claims 1 and 2. As to claim 13, Seeger teaches a job schedule quality comprises a predicted load (col 16, ln 31-40), Bahl teaches an involved system resource of the computer system, induced by the candidate job schedule, and/or a predicted availability of a system involved in the processing of the candidate job schedule (para[0006], ln 1-21) for the same reasons as to claims 1 and 2 above.
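The dependency-gated FIFO queue that Seeger describes (job objects enter queue 842 and may be dequeued only once their dependency requirements have been met) can be sketched as follows. This is a minimal illustration of the cited mechanism; the class and method names are assumptions for illustration, not identifiers from the reference.

```python
from collections import deque


class DependencyFifoQueue:
    """FIFO job queue with the constraint that a job is dequeued
    only after all of its dependencies have completed."""

    def __init__(self):
        self._queue = deque()       # (job_id, remaining dependency ids)
        self._completed = set()     # ids of finished jobs

    def submit(self, job_id, depends_on=()):
        """Enqueue a job, optionally naming jobs it depends on."""
        self._queue.append((job_id, set(depends_on)))

    def mark_complete(self, job_id):
        """Record that a job finished, unblocking its dependents."""
        self._completed.add(job_id)

    def next_runnable(self):
        """Return the first job (in FIFO order) whose dependencies
        are all complete, removing it from the queue; None if no
        queued job is runnable yet."""
        for i, (job_id, deps) in enumerate(self._queue):
            if deps <= self._completed:
                del self._queue[i]
                return job_id
        return None
```

Note that plain FIFO order is preserved among runnable jobs, while blocked jobs are skipped rather than stalling the queue, matching the "dependency requirements must have been met" constraint in the cited passage.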
As to claim 14, Seeger teaches an individual induced load for the multiple batch jobs (col 17, ln 11-16), Xu teaches the predictive model is configured to predict an individual induced load for the multiple batch jobs (col 1, ln 47-55), Bahl teaches job schedule quality comprises adding individual loads for a particular time induced by batch jobs scheduled to run at the particular time (para[0068] / para[0070], ln 3-22) for the same reasons as to claims 1 and 2 above. As to claim 15, it is rejected for the same reason as to claim 5 above. As to claim 17, Seeger teaches at least one of the batch job registrations has a parameter defining a minimum number of executions of the batch job for a set time period (col 16, ln 15-25 / col 16, ln 32-45) and Bahl teaches defining a minimum number of executions of the batch job for a set time period (para[0047] / para[0072], ln 1-24 / para[0006], ln 16-22 / para[0078], ln 14-23), the method comprising increasing said minimum number of executions for the batch job thus obtaining additional individual measurements (para[0043] / para[0044], ln 7-16 / para[0070], ln 6-16) for the same reason as to claims 1 and 2 above. As to claim 22, it is rejected for the same reason as to claim 17 above. As to claims 18, 19, 20, 23-25, they are rejected for the same reasons as to claims 13, 14, 15 above. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Xu (US 12056524 B2) in view of Bahl (US 20200117504 A1), in view of SANKARAN (US 20190347125 A1), and further in view of Mahamuni (US 11656932 B2).
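The claim 14 mapping turns on adding the individual loads induced by jobs scheduled at the same time. A minimal sketch of that aggregation, with function and variable names that are illustrative assumptions rather than terms from the cited references:

```python
from collections import defaultdict


def peak_induced_load(schedule, predicted_load):
    """Sum the predicted per-job loads for each scheduled start time
    and return the peak total, i.e. the worst-case load any single
    time slot induces under a candidate schedule.

    schedule: iterable of (job_id, start_time) pairs
    predicted_load: mapping job_id -> predicted individual load
    """
    load_at = defaultdict(float)
    for job_id, start_time in schedule:
        load_at[start_time] += predicted_load[job_id]
    return max(load_at.values())
```

A schedule-quality score could then penalize candidates with a high peak, so the selector prefers schedules that spread load across time slots.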
As to claim 7, Mahamuni teaches the predictive model is configured to classify batch jobs into one of multiple categories with a different expected load (an AI engine 223, which may apply one or more machine learning techniques or cognitive computing models, either in conjunction with or as part of the knowledge base 243, to arrive at one or more predicted batch job failures, root cause conclusions for a batch job failure, and/or recommended actions for remediating existing or predicted batch job failures, col 19, ln 27-37 / the AI engine 223 may optimize the dynamic code patch process for learning process invocation by classifying the workloads of the batch jobs into different categories that exhibit different behavior over time. For example, where a workload of a batch job exhibits linear invocation of processes for each task based on the historical time series data, no prediction of the code paths may need to be performed by the AI engine 223; whereas workloads of the batch jobs exhibiting varying invocations of processes based on application parameters may receive predictions of dynamic invocation patterns, col 27, ln 60-67 to col 28, ln 1-5). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Xu, Bahl and SANKARAN with Mahamuni to incorporate the above feature because this predicts, prevents and remediates failures of batch jobs being executed and/or queued for processing at a future scheduled time. Claims 8 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Xu (US 12056524 B2) in view of Bahl (US 20200117504 A1), in view of SANKARAN (US 20190347125 A1), and further in view of LU (US 20230161620 A1).
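The distinction Mahamuni draws (workloads with linear per-task invocation need no code-path prediction; workloads with varying invocation get dynamic-pattern prediction) can be illustrated with a toy classifier. The threshold and category names below are illustrative assumptions, not values from the reference:

```python
def classify_workload(invocation_history):
    """Categorize a batch job's workload from its historical per-task
    invocation counts: a (near-)constant count is treated as 'linear'
    (no code-path prediction needed), a varying count as 'dynamic'
    (invocation-pattern prediction applies). The spread threshold of 1
    is an arbitrary illustrative choice.

    invocation_history: list of invocation counts per task execution
    """
    if not invocation_history:
        return "unknown"
    spread = max(invocation_history) - min(invocation_history)
    return "linear" if spread <= 1 else "dynamic"
```

In a scheduler, each category could carry a different expected load profile, which is the feature the rejection maps onto claim 7.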
As to claim 8, Lu teaches a batch job comprises data synchronization jobs across multiple client systems, the method further comprising predicting the likelihood of a change in the client system at a given time, the quality of a job schedule reducing for the data synchronization jobs scheduled with lower likelihood of change (The job scheduler is only responsible for task division, and no longer implements fine-grained real-time monitoring on computing nodes. For large-scale synchronous jobs represented by a BSP model, the Push mode is adopted to push tasks from the job scheduler to computing nodes, which ensures that large jobs can get sufficient resources smoothly. For high-throughput jobs which can be divided into fine-grained small tasks, computing nodes monitor their own resource usage, and actively request executable tasks from the job scheduler by adopting the Pull mode, so as to improve the resource utilization rate, para[0067], ln 47-57 / Firstly, part of the scheduling tasks are separated from the task scope of the traditional master node, which reduces the workload of the master node and the possibility that the master node becomes the bottleneck of the system, thus improving the scalability of the system. Secondly, after the master node of the system has pushed some jobs, the resource usage in the computing node has changed; and at this point, if the Pull mode is used, the computing node does not need to actively report the resource usage to the master node, thus reducing the communication overhead of the system caused by message transmission.
In addition, when the master node of the system pushes jobs, for some jobs, node resources do not meet the job requirements; at this point, the jobs cannot be pushed to the computing node, which will result in idleness of resources on the computing node; however, if the Pull mode is adopted, the computing node actively pulls small jobs which can be executed on it, thus making full use of the idle resources and improving the resource utilization rate of the system. Finally, the invention comprehensively considers large jobs and small jobs, and implements reasonable scheduling strategies for both; especially the design that the computing node actively pulls small jobs for execution will effectively reduce the average waiting time of jobs, thus improving the job throughput rate of the system, para[0048], ln 16-25). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Xu, Bahl and SANKARAN with Lu to incorporate the above feature because, given the heterogeneity of system resources, relying on a master node alone to complete all scheduling tasks would seriously affect the running efficiency and scalability of the whole system. As to claim 10, Bahl teaches detecting a failed batch job execution, and rescheduling the batch job at a point in time with improved predicted availability and/or error rates compared to a threshold (para[0006], ln 1-22) for the same reason as to claim 1 above. Conclusion: US 11347544 B1 teaches while waiting a scheduler may check if there are more work items coming into queues, which can all be batched together for processing. Deadline may be more of an absolute measure of when the work items need to be processed.
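Lu's hybrid dispatch (large BSP-style jobs pushed from the scheduler to nodes; fine-grained small tasks pooled and pulled by nodes monitoring their own capacity) can be sketched as below. The class name, size threshold, and method names are illustrative assumptions, not identifiers from the reference:

```python
class HybridScheduler:
    """Push/pull hybrid dispatch sketch: jobs at or above a size
    threshold are pushed to compute nodes immediately, so large jobs
    get resources up front; smaller tasks wait in a pool from which
    idle nodes pull work, improving resource utilization."""

    def __init__(self, push_threshold=100):
        self.push_threshold = push_threshold
        self.small_task_pool = []   # tasks awaiting a pull request
        self.pushed = []            # jobs assigned in Push mode

    def submit(self, job_id, size):
        """Route a job to Push or Pull mode based on its size."""
        if size >= self.push_threshold:
            # Push mode: scheduler assigns the large job directly.
            self.pushed.append(job_id)
        else:
            # Pull mode: the small task waits for an idle node.
            self.small_task_pool.append(job_id)

    def pull(self):
        """Called by a compute node with spare capacity; returns the
        oldest pooled small task, or None if the pool is empty."""
        if self.small_task_pool:
            return self.small_task_pool.pop(0)
        return None
```

Because nodes decide when to pull, the master need not track per-node resource usage, which is the communication-overhead point the cited passage makes.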
US 10748072 teaches As shown, forecaster 410 may comprise a model type selector 420, a model library 425, model training components 440, model testing/evaluation components 450, interface management components 460, and an execution platform pool 470 in the depicted embodiment. US 10824959 B1 teaches a classification model may be trained using the training set. At least in some embodiments, the classification algorithm may be selected (either by the client or by the service) based primarily on the expected quality of the predictions. US 20200117504 A1 teaches changing periodicity constraints associated with one or more computation jobs, and other modifications that cause changes to utilized computing resources and/or execution durations / The new periodicity constraints specify that job A is executed two times a week, job B is executed seven times a week, job C is executed twice a week, job D is executed four times a week, job E is executed four times a week, job F is executed once a week, and job G is executed seven times a week. US 10824959 B1 teaches jobs created on behalf of several different clients may be placed in a single queue, while in other embodiments multiple queues may be maintained (e.g., one queue in each data center of the provider network being used, or one queue per MLS customer). Asynchronously with respect to the submission of the requests 211, the next job whose dependency requirements have been met may be removed from job queue 242 in the depicted embodiment, as indicated by arrow 213. Any inquiry concerning this communication or earlier communications from the examiner should be directed to LECHI TRUONG whose telephone number is (571) 272-3767. The examiner can normally be reached 10 AM-8 PM.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kevin Young, can be reached at (571) 270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /LECHI TRUONG/ Primary Examiner, Art Unit 2194

Prosecution Timeline

Aug 10, 2023
Application Filed
Jan 04, 2026
Non-Final Rejection — §101, §103
Apr 03, 2026
Applicant Interview (Telephonic)
Apr 03, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602245
QUANTUM ISOLATION ZONES
2y 5m to grant Granted Apr 14, 2026
Patent 12602255
Transaction Method and Apparatus with Fixed Execution Order
2y 5m to grant Granted Apr 14, 2026
Patent 12596580
METHOD AND SYSTEM FOR OPTIMIZING GPU UTILIZATION
2y 5m to grant Granted Apr 07, 2026
Patent 12596952
QUANTUM RESOURCE ACCESS CONTROL THROUGH CONSENSUS
2y 5m to grant Granted Apr 07, 2026
Patent 12583106
AUTOMATION WINDOWS FOR ROBOTIC PROCESS AUTOMATION
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
87%
Grant Probability
99%
With Interview (+37.1%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 879 resolved cases by this examiner. Grant probability derived from career allow rate.
