Prosecution Insights
Last updated: April 19, 2026
Application No. 17/929,671

BUBBLE SORTING FOR SCHEDULING TASK EXECUTION IN COMPUTING SYSTEMS

Status: Non-Final Office Action (§103) — OA Round 3
Filed: Sep 02, 2022
Examiner: HU, SELINA ELISA
Art Unit: 2193
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation

Grant probability: 67% (favorable); 99% with interview
Projected OA rounds: 3-4
Projected time to grant: 3y 3m

Examiner Intelligence

Career allow rate: 67% (2 granted / 3 resolved) — above average, +11.7% vs Tech Center avg
Interview lift: +100.0% allowance-rate lift, measured on resolved cases with interview
Average prosecution time: 3y 3m
Currently pending: 32 applications
Total applications: 35 (across all art units)

Statute-Specific Performance

§101: 24.4% (-15.6% vs TC avg)
§103: 53.5% (+13.5% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 3 resolved cases.
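The headline figures above are simple ratios over the examiner's career counts. A minimal sketch of the arithmetic, assuming the Tech Center baseline is externally supplied (the function and variable names here are illustrative, not from the report):

```python
# Reproduce the headline examiner statistics from the raw counts reported above.
# The Tech Center baseline is treated as a given input; the report estimates it
# from data not included here.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a fraction of resolved applications."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Relative lift: 1.0 (i.e. +100%) means the with-interview allowance
    rate is double the without-interview rate."""
    return (rate_with - rate_without) / rate_without

career = allow_rate(2, 3)               # 2 granted / 3 resolved
print(f"{career:.0%}")                  # 67%

# The implied Tech Center average, given "+11.7% vs TC avg":
tc_avg = career - 0.117
print(f"{tc_avg:.1%}")                  # 55.0%
```

A "+100% interview lift" corresponds to `interview_lift` returning 1.0, i.e. resolved cases with an interview were allowed at twice the rate of those without.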

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This office action is in response to applicant’s amendment filed on 12/18/2025. Claims 1-20 are pending and examined.

Response to Arguments

Applicant’s arguments, filed 12/18/2025, with respect to 35 U.S.C. 102 and 103 have been fully considered but are not persuasive. Applicant argues that the cited references do not disclose every element of the claims as currently recited and that the claims are allowable. Examiner respectfully disagrees; see the §103 rejections below for a detailed analysis. Examiner interprets Hosmani to disclose most of the amended limitations, as new jobs being submitted to the job scheduler from a client device correlates to one or more submitter/submittee sets of runnables. The particular job being dependent on the completion of an earlier-submitted job before it is deemed runnable and moved to the execution schedule, where it is inserted into the existing order of jobs and can be executed without delay, corresponds to the moving of the one or more runnables being performed in view of one or more coupling constraints which control the movement of runnables to maintain relative execution relationships included in the first execution schedule. Although Hosmani in view of Gounares and Anderson may not explicitly teach every element of the claims as currently recited, such as the limitation specifying the modification occurring prior to the execution of the process, modifying first execution schedules prior to the execution of the process is a popular scheduling method as evidenced by Wood. Wood’s initial state of the work item queue correlates to the first execution schedule.
The workflow scheduler adding a batch of scheduled operations to the queue, which can include conditions on each work item and therefore correlate to the process, correlates to modifying the first execution schedule prior to the execution of the process. Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Hosmani with Wood because individually adding thousands or millions of operations to a work item queue as they are received can raise the possibility of starving other tenants for resources. Adding batches of scheduled operations can therefore reduce the possibility of resource starvation.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-9, and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hosmani et al. (U.S. Patent Application Publication No. US 2018/0276040 A1), hereinafter “Hosmani,” in view of Wood et al. (U.S. Patent Application Publication No. 
US 20220276893 A1), hereinafter “Wood.” With regards to Claim 1, Hosmani teaches: A system comprising: one or more processors to perform operations comprising: determining a first execution schedule for execution of a plurality of runnables of a process of a computing application using a heterogeneous runtime system (Paragraphs 15-16 and 20, “An event-driven job scheduler 100 may schedule jobs for execution using suitable computing resources 191, and some of the jobs may have dependency relationships… A job definition may describe one or more tasks to be performed by computing resources 191 in the provider network 190. The tasks within a job definition may include entirely different tasks (e.g., tasks having different program code) and/or tasks that run the same program code for different input data. For a particular task, a job definition may include or reference program instructions to be executed in processing the task... The computing resources 191 may include compute instances, storage instances, and so on. The computing resources 191 may include components or other functionalities implementing job execution 192 for jobs scheduled using the job scheduler 100. In one embodiment, computing resources having particular configurations may be selected, such as compute instances of particular instance types and/or software configurations with particular parameter values.” The plurality of jobs required to have a dependency relationship correlates to a plurality of runnables, and the dependency relationship between the jobs corresponds to the process. The job describing one or more tasks being performed by computing resources, which include particular instance or configuration types, correlates to the process being executed using a heterogenous runtime system. 
Therefore, the event-driven job scheduler scheduling jobs for execution correlates to determining an execution schedule for execution of a plurality of runnables); modifying the first execution schedule to generate a second execution schedule, the modifying including moving one or more submitter/submittee sets of runnables of the plurality of runnables to populate one or more gaps in the first execution schedule (Paragraphs 15, 17, 41 and 45, “One or more workloads of jobs may be received from a particular client device 110 in one batch or in multiple batches over a period of time… Newly submitted jobs that lack dependencies may be added to the execution schedule 140 without adding corresponding nodes to the graph 130… if the evaluation of the node determines that job 112B has no more unmet DependsOn relationships, then the scheduler 100 may move the job into the execution schedule 140. The job 112B may be inserted into the existing order of jobs in the schedule 140 in any appropriate location, e.g., as based (at least in part) on the order of job submission to the scheduler 100. For example, based on the respective timestamps for jobs 112A, 112B, and 112C, the job 112B may be inserted in the schedule 140 between the earlier-submitted job 112A and the later-submitted job 112C... As shown in the example of FIG. 5, a new job 112H may be submitted to the job scheduler 100 after evaluation of the execution event 195 discussed above. The submission of the new job 112H may represent a job submission event 196.” New jobs being submitted to the job scheduler from a client device correlates to one or more submitter/submittee sets of runnables. The insertion of job 112B into the existing order of jobs in the schedule between earlier-submitted job 112A and later-submitted job 112C correlates to moving one or more runnables of the plurality of runnables to populate one or more gaps in the first execution schedule. 
Moving the job into the execution schedule and therefore modifying the execution schedule correlates to modifying the first execution schedule to generate a second execution schedule), the moving of the one or more submitter/submittee sets of runnables being performed in view of one or more coupling constraints related to the one or more submitter/submittee sets in which the one or more coupling constraints control movement of runnables as part of populating the one or more gaps in the generation of the second execution schedule to maintain, in the second execution schedule, relative execution relationships included in the first execution schedule of respective runnables included in the submitter/submittee sets (Paragraphs 15, 38 and 41, “One or more workloads of jobs may be received from a particular client device 110 in one batch or in multiple batches over a period of time… Runnable nodes and jobs may have no unmet DependsOn dependency relationships and may be executed without delay, e.g., without necessarily waiting for conditions associated with other jobs to be met. Satisfaction of a dependency may be determined based (at least in part) on an event (such as event 194) associated with a job that the dependency involves. For example, if a particular job was dependent on completion of an earlier-submitted job, and that earlier-submitted job has completed execution, then the particular job may be deemed runnable by evaluation of its corresponding node in the graph 130 in response to an execution event associated with the earlier-submitted job... if the evaluation of the node determines that job 112B has no more unmet DependsOn relationships, then the scheduler 100 may move the job into the execution schedule 140. 
The job 112B may be inserted into the existing order of jobs in the schedule 140 in any appropriate location, e.g., as based (at least in part) on the order of job submission to the scheduler 100.” New jobs being submitted to the job scheduler from a client device correlates to one or more submitter/submittee sets of runnables. The particular job being dependent on the completion of an earlier-submitted job before it is deemed runnable and moved to the execution schedule, where it is inserted into the existing order of jobs and can be executed without delay, corresponds to the moving of the one or more runnables being performed in view of one or more coupling constraints which control the movement of runnables to maintain relative execution relationships included in the first execution schedule); and causing the heterogenous runtime system to execute the process according to the second execution schedule as part of one or more navigation, control, or localization operations performed by a machine (Paragraph 18, 41, and 46, “The nodes in the graph 130 and the zero-order nodes in the execution schedule 140 may be maintained in memory at the job scheduler 100, where synchronization and locking techniques may be used for concurrency control… When the graph 130 is evaluated and the execution schedule 140 is modified, the job scheduler 100 may insert a newly runnable job into the predetermined order without necessarily having to resort the entire schedule… The cancellation or failure indication may represent a job cancellation event 197. The cancellation or failure indication may be generated based on user input (e.g., requesting cancellation of job 112P) or generated automatically based on a failed execution of the job 112P. The node corresponding to job 112P may be removed from the graph 130 in response to the event 197… Similarly, the automatic and programmatic evaluation and analysis may remove any nodes dependent on the job 112Q, and so on. 
In this manner, the graph 130 may be updated efficiently to remove a chain of nodes dependent on a canceled or failed job. The remainder of the graph 130 may be untouched in this particular process. In one embodiment, cancellation of dependent jobs such as the job 112Q may be performed based (at least in part) on a policy. Alternatively, the policy may dictate that the job 112Q (and any other dependent jobs) should remain in the graph in a pending state (subject to evaluation) in light of the job cancellation event 197, e.g., by treating the event 197 as completion of the earlier-submitted job 112P. Such a policy may be globally applicable to many clients, specific to one client, or specific to particular jobs.” The job scheduler inserting a newly runnable job into the execution schedule corresponds to the second execution schedule. The execution of job 112P generating a failure indicator which represents a job cancellation event, in combination with the automatic and programmatic evaluation and analysis removing a chain of nodes dependent on the failed job based on a policy, correlates to executing the process according to the second execution schedule as part of control operations performed by the machine) in which the second execution schedule maintains compliance of one or more safety parameters corresponding to the navigation, control, or localization operations (Paragraph 18 and 46, “The nodes in the graph 130 and the zero-order nodes in the execution schedule 140 may be maintained in memory at the job scheduler 100, where synchronization and locking techniques may be used for concurrency control… Similarly, the automatic and programmatic evaluation and analysis may remove any nodes dependent on the job 112Q, and so on. In this manner, the graph 130 may be updated efficiently to remove a chain of nodes dependent on a canceled or failed job. The remainder of the graph 130 may be untouched in this particular process. 
In one embodiment, cancellation of dependent jobs such as the job 112Q may be performed based (at least in part) on a policy. Alternatively, the policy may dictate that the job 112Q (and any other dependent jobs) should remain in the graph in a pending state (subject to evaluation) in light of the job cancellation event 197, e.g., by treating the event 197 as completion of the earlier-submitted job 112P. Such a policy may be globally applicable to many clients, specific to one client, or specific to particular jobs.” The automatic and programmatic evaluation and analysis removing a chain of nodes dependent on the failed job based on a policy and the graph and execution schedule being maintained by the scheduler using synchronization and locking techniques for concurrency control correlates to the second execution schedule maintaining compliance of one or more safety parameters corresponding to control operations). Hosmani does not explicitly teach that the modifying of the first execution schedule is done prior to execution of the process. However, modifying first execution schedules prior to the execution of the process is a popular scheduling method as evidenced by Wood (Paragraphs 60-62, “Using a technique described in conjunction with FIGS. 6-8, workflow scheduler 108 breaks operation requests from different tenants into batches, interleaving them into work item queue 112 for processing… In some cases, a requested operation may be implemented with multiple work items… Operations that are implemented with multiple work items may include conditions on each work item that the tenant scheduler will check before adding a work item to a batch… When executed by a compute node, workflow scheduler 108 may select a batch of scheduled operations 406B to be added to work item queue 112.” The initial state of the work item queue correlates to the first execution schedule. 
The workflow scheduler adding a batch of scheduled operations to the queue, which can include conditions on each work item and therefore correlate to the process, correlates to modifying the first execution schedule prior to the execution of the process). Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Hosmani with modifying, prior to execution of the process, the first execution schedule to generate a second execution schedule as taught by Wood because individually adding thousands or millions of operations to a work item queue as they are received can raise the possibility of starving other tenants for resources. Adding batches of scheduled operations can therefore reduce the possibility of resource starvation (Wood: paragraph 62). With regards to Claim 3, Hosmani in view of Wood teaches the system of Claim 1 above. Hosmani further teaches: The system of claim 1, wherein; the one or more submitter/submittee sets include at least one first runnable for execution on a first compute engine of the heterogenous system that triggers execution of at least one second runnable on a second compute engine of the heterogenous system (Paragraphs 38 and 41, “Runnable nodes and jobs may have no unmet DependsOn dependency relationships and may be executed without delay, e.g., without necessarily waiting for conditions associated with other jobs to be met. Satisfaction of a dependency may be determined based (at least in part) on an event (such as event 194) associated with a job that the dependency involves. For example, if a particular job was dependent on completion of an earlier-submitted job, and that earlier-submitted job has completed execution, then the particular job may be deemed runnable by evaluation of its corresponding node in the graph 130 in response to an execution event associated with the earlier-submitted job... 
if the evaluation of the node determines that job 112B has no more unmet DependsOn relationships, then the scheduler 100 may move the job into the execution schedule 140.” The particular job being dependent on the completion of an earlier-submitted job before it is deemed runnable and moved to the execution schedule, where it can be executed without delay, corresponds to the one or more submitter/submittee sets including at least one runnable for execution on a first compute engine that triggers execution of at least one second runnable on a second compute engine of the heterogenous system); and The one or more coupling constraints require that a first processing queue that includes the at least one first runnable and that is on the first compute engine matches a second processing queue that includes the at least one second runnable and that is on the second compute engine (Paragraph 25, “the job scheduler 100 may implement one or more job queues associated with particular queue identifier(s), e.g., as provided by a client and mapped to a particular compute environment. A job queue may include a set of related secondary queues…” The one or more job queues mapped to a particular compute environment correlate to a first processing queue that includes the first runnable and is on the first compute engine. The set of related secondary queues to the job queue which have a particular compute environment correlates to matching a second processing queue that includes the second runnable and is on the second compute engine). With regards to Claim 4, Hosmani in view of Wood teaches the system of Claim 1 above. 
Hosmani further teaches: The system of claim 1, wherein the moving of one or more runnables is constrained by one or more of: a dependency constraint that prevents child runnables from being scheduled to begin execution prior to corresponding parent runnables finishing execution (Paragraphs 38 and 41, “if a particular job was dependent on completion of an earlier-submitted job, and that earlier-submitted job has completed execution, then the particular job may be deemed runnable by evaluation of its corresponding node in the graph 130 in response to an execution event associated with the earlier-submitted job... if the evaluation of the node determines that job 112B has no more unmet DependsOn relationships, then the scheduler 100 may move the job into the execution schedule 140.” The particular job being dependent on the completion of an earlier-submitted job and unable to be scheduled until it has no more DependsOn relationships corresponds to a dependency constraint that prevents child runnables from being scheduled for execution prior to the parent runnables execution completion); or a level constraint that restrains movement of the one or more runnables based at least on hierarchal levels associated with the one or more runnables as indicated by a compute graph that includes the plurality of runnables. With regards to Claims 12 and 18, the machine of Claim 4 performs the same steps as the method and manufacture of Claims 12 and 18 respectively, and Claims 12 and 18 are therefore rejected using the same rationale set forth above in the rejection of Claim 4. With regards to Claim 5, Hosmani in view of Wood teaches the system of Claim 1 above. 
Hosmani further teaches: The system of claim 1, wherein the operations further comprise determining an initial sequence of runnables based on a critical path within a compute graph that includes the plurality of runnables (Paragraph 17, “the job scheduler 100 may build and maintain a directed acyclic graph (DAG) 130 representing jobs provided by the client 110 along with job-level dependencies… Upon evaluation of corresponding nodes, jobs with no unsatisfied dependencies may be deemed runnable and scheduled for execution based (at least in part) on an execution schedule 140. Newly submitted jobs that lack dependencies may be added to the execution schedule 140 without adding corresponding nodes to the graph 130… The order of jobs in the execution schedule 140 may be based on time of submission to the job scheduler 100 or any other suitable criteria.” The directed acyclic graph representing jobs and dependencies corresponds to a compute graph including a plurality of runnables. The initial state of the overall DAG which includes dependencies of all jobs and represents the initial sequence of runnables corresponds to the critical path within the compute graph), and further wherein the determining the first execution schedule is based at least on the initial sequence of runnables (Paragraph 17, “the job scheduler 100 may build and maintain a directed acyclic graph (DAG) 130 representing jobs provided by the client 110 along with job-level dependencies… Upon evaluation of corresponding nodes, jobs with no unsatisfied dependencies may be deemed runnable and scheduled for execution based (at least in part) on an execution schedule 140. 
Newly submitted jobs that lack dependencies may be added to the execution schedule 140 without adding corresponding nodes to the graph 130… The order of jobs in the execution schedule 140 may be based on time of submission to the job scheduler 100 or any other suitable criteria.” The directed acyclic graph representing jobs and dependencies corresponds to a compute graph including a plurality of runnables. The initial state of the overall DAG which includes dependencies of all jobs and represents the initial sequence of runnables corresponds to the critical path within the compute graph. Jobs with no unsatisfied dependencies added to the execution schedule based on the order of submission or other criteria corresponds to the first execution schedule based on the initial sequence of runnables). With regards to Claims 13 and 19, the machine of Claim 5 performs the same steps as the method and manufacture of Claims 13 and 19 respectively, and Claims 13 and 19 are therefore rejected using the same rationale set forth above in the rejection of Claim 5. With regards to Claim 6, Hosmani in view of Wood teaches the system of Claim 1 above. Hosmani further teaches: The system of claim 1, wherein the determining the first execution schedule is based at least on individual rankings of one or more of the plurality of runnables (Paragraph 50, “an execution schedule may be determined for the runnable job… The order of jobs in the execution schedule may be based on time of submission to the job scheduler or any other suitable criteria. Determining the execution schedule for the runnable job may include adding the job to an existing execution schedule that includes one or more other runnable jobs, e.g., by inserting the runnable job at the end of the list, at the beginning of the list, or between other jobs in the list. As shown in 750, execution of the runnable job may be initiated based (at least in part) on the execution schedule. 
For example, execution of the runnable job may be initiated when no other jobs outrank the runnable job in a queue of runnable jobs. The execution may be performed using one or more computing resources (e.g., virtual compute instances) of a provider network.” The execution of the runnable job being initiated based on the execution schedule when no other jobs outrank the runnable job corresponds to determining the first execution schedule based on individual rankings of one or more of the pluralities of runnables). With regards to Claims 14 and 20, the machine of Claim 6 performs the same steps as the method and manufacture of Claims 14 and 20 respectively, and Claims 14 and 20 are therefore rejected using the same rationale set forth above in the rejection of Claim 6. With regards to Claim 7, Hosmani in view of Wood teaches the system of Claim 1 above. Hosmani further teaches: The system of claim 6, wherein the individual rankings of the one or more of the plurality of runnables are based at least on respective relationships of the one or more of the plurality of runnables with respect to a critical path within a compute graph that includes the plurality of runnables (Paragraph 25, “when a job definition is first submitted, the job may be placed initially in a submitted queue. If a job in the submitted queue has no dependencies, the scheduler 100 may move that job to a runnable queue… A job having one or more dependencies may be moved from the submitted queue to a pending queue. Individual jobs may be associated with various states such as submitted, pending, runnable, running, succeeded, failed, and so on. A change from one state to another state may constitute an event that causes evaluation of a relevant portion of the graph 130.” The job definition being submitted and placed in a submitted, runnable, pending, etc. 
queue along with other jobs corresponds to the individual ranking of the one or more runnables based on respective relationships of the one or more runnables. The dependencies and current state of a submitted job being represented and updated in the DAG corresponds to the relationship with one or more runnables with respect to the critical path in the compute graph). With regards to Claim 15, the machine of Claim 7 performs the same steps as the method of Claim 15, and Claim 15 is therefore rejected using the same rationale set forth above in the rejection of Claim 7. With regards to Claim 8, Hosmani in view of Wood teaches the system of Claim 1 above. Hosmani further teaches: The system of claim 1, wherein the system is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system for presenting at least one of augmented reality content, virtual reality content, or mixed reality content; a system for hosting one or more real-time streaming applications; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (Paragraphs 23-24, “the computing resources 191 may be part of a compute environment that is managed on behalf of the client 110 by a compute environment management system that includes the job scheduler 100… the compute environment specification may include data associating a compute 
environment with a virtual private cloud (VPC) representing a virtual network, e.g., within the provider network 190.” The computing resources being a part of a compute environment which includes data associating with a virtual private cloud correlates to a system implemented at least partially using cloud computing resources). With regards to Claim 9, Hosmani teaches: A method comprising: determining a first execution schedule for execution of a plurality of runnables, the plurality of runnables corresponding to a process executed using a plurality of compute engines (Paragraphs 15-16 and 20, “An event-driven job scheduler 100 may schedule jobs for execution using suitable computing resources 191, and some of the jobs may have dependency relationships… A job definition may describe one or more tasks to be performed by computing resources 191 in the provider network 190. The tasks within a job definition may include entirely different tasks (e.g., tasks having different program code) and/or tasks that run the same program code for different input data. For a particular task, a job definition may include or reference program instructions to be executed in processing the task... The computing resources 191 may include compute instances, storage instances, and so on. The computing resources 191 may include components or other functionalities implementing job execution 192 for jobs scheduled using the job scheduler 100. In one embodiment, computing resources having particular configurations may be selected, such as compute instances of particular instance types and/or software configurations with particular parameter values.” The plurality of jobs required to have a dependency relationship correlates to a plurality of runnables, and the dependency relationship between the jobs corresponds to the process. The job describing one or more tasks being performed by computing resources correlates to the process being executed by a plurality of compute engines. 
Therefore, the event-driven job scheduler scheduling jobs for execution correlates to determining an execution schedule for execution of a plurality of runnables); modifying the first execution schedule to generate a second execution schedule, the modifying including moving one or more runnables of the plurality of runnables to populate one or more gaps in the first execution schedule (Paragraph 41, “if the evaluation of the node determines that job 112B has no more unmet DependsOn relationships, then the scheduler 100 may move the job into the execution schedule 140. The job 112B may be inserted into the existing order of jobs in the schedule 140 in any appropriate location, e.g., as based (at least in part) on the order of job submission to the scheduler 100. For example, based on the respective timestamps for jobs 112A, 112B, and 112C, the job 112B may be inserted in the schedule 140 between the earlier-submitted job 112A and the later-submitted job 112C.” The insertion of job 112B into the existing order of jobs in the schedule between earlier-submitted job 112A and later-submitted job 112C correlates to moving one or more runnables of the plurality of runnables to populate one or more gaps in the first execution schedule. Moving the job into the execution schedule and therefore modifying the execution schedule correlates to modifying the first execution schedule to generate a second execution schedule), the moving of the one or more runnables being such that movement of the runnables as part of the populating of the one or more gaps maintains, in the second execution schedule, execution relationships between submitter/submittee sets of runnables that are included in the first execution schedule (Paragraphs 38 and 41, “Runnable nodes and jobs may have no unmet DependsOn dependency relationships and may be executed without delay, e.g., without necessarily waiting for conditions associated with other jobs to be met. 
Satisfaction of a dependency may be determined based (at least in part) on an event (such as event 194) associated with a job that the dependency involves. For example, if a particular job was dependent on completion of an earlier-submitted job, and that earlier-submitted job has completed execution, then the particular job may be deemed runnable by evaluation of its corresponding node in the graph 130 in response to an execution event associated with the earlier-submitted job... if the evaluation of the node determines that job 112B has no more unmet DependsOn relationships, then the scheduler 100 may move the job into the execution schedule 140.” The particular job being dependent on the completion of an earlier-submitted job before it is deemed runnable and moved to the execution schedule, where it can be executed without delay, corresponds to the moving of the one or more runnables populating one or more gaps maintaining execution relationships between submitter/submittee sets of runnables in the first execution schedule); and causing the plurality of compute engines to execute the process according to the second execution schedule in which the second execution schedule maintains one or more safety parameters associated with execution of the process (Paragraphs 18 and 46, “The nodes in the graph 130 and the zero-order nodes in the execution schedule 140 may be maintained in memory at the job scheduler 100, where synchronization and locking techniques may be used for concurrency control… Similarly, the automatic and programmatic evaluation and analysis may remove any nodes dependent on the job 112Q, and so on. In this manner, the graph 130 may be updated efficiently to remove a chain of nodes dependent on a canceled or failed job. The remainder of the graph 130 may be untouched in this particular process. In one embodiment, cancellation of dependent jobs such as the job 112Q may be performed based (at least in part) on a policy. 
Alternatively, the policy may dictate that the job 112Q (and any other dependent jobs) should remain in the graph in a pending state (subject to evaluation) in light of the job cancellation event 197, e.g., by treating the event 197 as completion of the earlier-submitted job 112P. Such a policy may be globally applicable to many clients, specific to one client, or specific to particular jobs.” The automatic and programmatic evaluation and analysis removing a chain of nodes dependent on the failed job based on a policy and the graph and execution schedule being maintained by the scheduler using synchronization and locking techniques for concurrency control correlates to executing the process according to the second execution schedule which maintains one or more safety parameters associated with the execution of the process). Hosmani does not explicitly teach that the modifying of the first execution schedule is done prior to execution of the process. However, modifying first execution schedules prior to the execution of the process is a popular scheduling method as evidenced by Wood above (Paragraphs 60-62). Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Hosmani with modifying, prior to execution of the process, the first execution schedule to generate a second execution schedule as taught by Wood because individually adding thousands or millions of operations to a work item queue as they are received can raise the possibility of starving other tenants for resources. Adding batches of scheduled operations can therefore reduce the possibility of resource starvation (Wood: paragraph 62). With regards to Claim 11, Hosmani in view of Wood teaches the method of claim 9 above. 
Hosmani further teaches: wherein: a particular submitter/submittee set of runnables includes a first runnable for execution on a first compute engine of the plurality of compute engines that triggers execution of a second runnable on a second compute engine of the plurality of compute engines (Paragraphs 38 and 41, “Runnable nodes and jobs may have no unmet DependsOn dependency relationships and may be executed without delay, e.g., without necessarily waiting for conditions associated with other jobs to be met. Satisfaction of a dependency may be determined based (at least in part) on an event (such as event 194) associated with a job that the dependency involves. For example, if a particular job was dependent on completion of an earlier-submitted job, and that earlier-submitted job has completed execution, then the particular job may be deemed runnable by evaluation of its corresponding node in the graph 130 in response to an execution event associated with the earlier-submitted job... 
if the evaluation of the node determines that job 112B has no more unmet DependsOn relationships, then the scheduler 100 may move the job into the execution schedule 140.” The particular job being dependent on the completion of an earlier-submitted job before it is deemed runnable and moved to the execution schedule, where it can be executed without delay, corresponds to the one or more submitter/submittee sets including at least one runnable for execution on a first compute engine that triggers execution of at least one second runnable on a second compute engine of the plurality of compute engines); and a first processing queue that includes the first runnable and that is on the first compute engine matches a second processing queue that includes the second runnable and that is on the second compute engine (Paragraph 25, “the job scheduler 100 may implement one or more job queues associated with particular queue identifier(s), e.g., as provided by a client and mapped to a particular compute environment. A job queue may include a set of related secondary queues…” The one or more job queues mapped to a particular compute environment correlate to a first processing queue that includes the first runnable and is on the first compute engine. The set of related secondary queues to the job queue which have a particular compute environment correlates to matching a second processing queue that includes the second runnable and is on the second compute engine). 
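The event-driven behavior attributed to Hosmani above (a job becomes runnable once it has no remaining unmet DependsOn relationships, and is then inserted into the execution schedule in submission-timestamp order, e.g., job 112B landing between earlier 112A and later 112C) can be sketched roughly as follows. This is an illustrative simplification only; the `Scheduler` class, its method names, and its data structures do not appear in the reference.

```python
import bisect

class Scheduler:
    """Sketch of an event-driven, dependency-aware job scheduler.

    All names here are hypothetical, not from Hosmani. Jobs with no
    unmet dependencies are runnable and are inserted into the
    execution schedule ordered by submission timestamp.
    """

    def __init__(self):
        self.pending = {}      # job_id -> set of unmet dependencies
        self.timestamps = {}   # job_id -> submission timestamp
        self.schedule = []     # execution schedule, timestamp order

    def submit(self, job_id, timestamp, depends_on=()):
        self.timestamps[job_id] = timestamp
        unmet = set(depends_on)
        if unmet:
            self.pending[job_id] = unmet   # wait in the dependency graph
        else:
            self._insert(job_id)           # runnable immediately

    def on_job_completed(self, job_id):
        """Execution event: re-evaluate jobs that depended on job_id."""
        for other, unmet in list(self.pending.items()):
            unmet.discard(job_id)
            if not unmet:                  # no more unmet DependsOn links
                del self.pending[other]
                self._insert(other)

    def _insert(self, job_id):
        # Insert into the existing order by submission timestamp, so a
        # job submitted at t=2 lands between jobs submitted at t=1 and t=3.
        keys = [self.timestamps[j] for j in self.schedule]
        pos = bisect.bisect(keys, self.timestamps[job_id])
        self.schedule.insert(pos, job_id)

s = Scheduler()
s.submit("A", 1)
s.submit("C", 3)
s.submit("B", 2, depends_on={"A"})   # B waits on A
s.on_job_completed("A")              # B becomes runnable
print(s.schedule)                    # ['A', 'B', 'C']
```

The insertion position depends only on the submission timestamp, which mirrors the cited example: the dependent job is held back until its dependency's execution event arrives, yet still takes its timestamp-ordered slot in the schedule.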
With regards to Claim 16, Hosmani teaches: At least one processor comprising: determining a first execution schedule for execution of a plurality of runnables, the plurality of runnables corresponding to a process of a computing application executed using a plurality of compute engines of a runtime system, the first execution schedule dictating timing and execution order of the plurality of runnables across the plurality of compute engines to coordinate the plurality of compute engines operating together to execute the process (Paragraphs 15-16 and 20, “An event-driven job scheduler 100 may schedule jobs for execution using suitable computing resources 191, and some of the jobs may have dependency relationships… A job definition may describe one or more tasks to be performed by computing resources 191 in the provider network 190. The tasks within a job definition may include entirely different tasks (e.g., tasks having different program code) and/or tasks that run the same program code for different input data. For a particular task, a job definition may include or reference program instructions to be executed in processing the task... The computing resources 191 may include compute instances, storage instances, and so on. The computing resources 191 may include components or other functionalities implementing job execution 192 for jobs scheduled using the job scheduler 100. In one embodiment, computing resources having particular configurations may be selected, such as compute instances of particular instance types and/or software configurations with particular parameter values.” The plurality of jobs required to have a dependency relationship correlates to a plurality of runnables, and the dependency relationship between the jobs corresponds to the process. The job describing one or more tasks being performed by computing resources, which include particular instance or configuration types, correlates to the process being executed using a runtime system. 
Therefore, the event-driven job scheduler scheduling jobs for execution correlates to determining an execution schedule for execution of a plurality of runnables which dictates timing and execution order of the plurality of runnables across the plurality of compute engines operating together); and modifying the first execution schedule to generate a second execution schedule, the modifying including moving one or more runnables of the plurality of runnables to populate one or more gaps in the first execution schedule (Paragraph 41, “if the evaluation of the node determines that job 112B has no more unmet DependsOn relationships, then the scheduler 100 may move the job into the execution schedule 140. The job 112B may be inserted into the existing order of jobs in the schedule 140 in any appropriate location, e.g., as based (at least in part) on the order of job submission to the scheduler 100. For example, based on the respective timestamps for jobs 112A, 112B, and 112C, the job 112B may be inserted in the schedule 140 between the earlier-submitted job 112A and the later-submitted job 112C.” The insertion of job 112B into the existing order of jobs in the schedule between earlier-submitted job 112A and later-submitted job 112C correlates to moving one or more runnables of the plurality of runnables to populate one or more gaps in the first execution schedule. 
Moving the job into the execution schedule and therefore modifying the execution schedule correlates to modifying the first execution schedule to generate a second execution schedule), the moving of the one or more runnables being performed in view of a coupling constraint related to at least one first runnable for execution on a first compute engine of the plurality of compute engines that triggers execution of at least one second runnable on a second compute engine of the plurality of compute engines (Paragraphs 38 and 41, “Runnable nodes and jobs may have no unmet DependsOn dependency relationships and may be executed without delay, e.g., without necessarily waiting for conditions associated with other jobs to be met. Satisfaction of a dependency may be determined based (at least in part) on an event (such as event 194) associated with a job that the dependency involves. For example, if a particular job was dependent on completion of an earlier-submitted job, and that earlier-submitted job has completed execution, then the particular job may be deemed runnable by evaluation of its corresponding node in the graph 130 in response to an execution event associated with the earlier-submitted job... 
if the evaluation of the node determines that job 112B has no more unmet DependsOn relationships, then the scheduler 100 may move the job into the execution schedule 140.” The particular job being dependent on the completion of an earlier-submitted job before it is deemed runnable and moved to the execution schedule, where it can be executed without delay, corresponds to the moving of the one or more runnables being performed in view of a coupling constraint related to at least one first runnable for execution on a first compute engine of the plurality of compute engines that triggers execution of at least one second runnable on a second compute engine of the plurality of compute engines); and causing the runtime system to execute the process according to the second execution schedule in which the second execution schedule maintains one or more safety parameters associated with execution of the process (Paragraphs 18 and 46, “The nodes in the graph 130 and the zero-order nodes in the execution schedule 140 may be maintained in memory at the job scheduler 100, where synchronization and locking techniques may be used for concurrency control… Similarly, the automatic and programmatic evaluation and analysis may remove any nodes dependent on the job 112Q, and so on. In this manner, the graph 130 may be updated efficiently to remove a chain of nodes dependent on a canceled or failed job. The remainder of the graph 130 may be untouched in this particular process. In one embodiment, cancellation of dependent jobs such as the job 112Q may be performed based (at least in part) on a policy. Alternatively, the policy may dictate that the job 112Q (and any other dependent jobs) should remain in the graph in a pending state (subject to evaluation) in light of the job cancellation event 197, e.g., by treating the event 197 as completion of the earlier-submitted job 112P. 
Such a policy may be globally applicable to many clients, specific to one client, or specific to particular jobs.” The automatic and programmatic evaluation and analysis removing a chain of nodes dependent on the failed job based on a policy and the graph and execution schedule being maintained by the scheduler using synchronization and locking techniques for concurrency control correlates to the second execution schedule maintaining one or more safety parameters associated with the execution of the process). Hosmani does not explicitly teach that the modifying of the first execution schedule is done prior to execution of the process. However, modifying first execution schedules prior to the execution of the process is a popular scheduling method as evidenced by Wood above (Paragraphs 60-62). Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Hosmani with modifying, prior to execution of the process, the first execution schedule to generate a second execution schedule as taught by Wood because individually adding thousands or millions of operations to a work item queue as they are received can raise the possibility of starving other tenants for resources. Adding batches of scheduled operations can therefore reduce the possibility of resource starvation (Wood: paragraph 62). Claims 2 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Hosmani in view of Wood and Gounares et al. (U.S. Patent Application Publication No. US 20120222043 A1), hereinafter “Gounares.” With regards to Claim 2, Hosmani in view of Wood teaches the system of Claim 1 as referenced above. 
Hosmani further teaches: wherein: the one or more submitter/submittee sets include at least one first runnable for execution on a first compute engine of the heterogeneous system that triggers execution of at least one second runnable on a second compute engine of the heterogeneous system (Paragraphs 38 and 41, “Runnable nodes and jobs may have no unmet DependsOn dependency relationships and may be executed without delay, e.g., without necessarily waiting for conditions associated with other jobs to be met. Satisfaction of a dependency may be determined based (at least in part) on an event (such as event 194) associated with a job that the dependency involves. For example, if a particular job was dependent on completion of an earlier-submitted job, and that earlier-submitted job has completed execution, then the particular job may be deemed runnable by evaluation of its corresponding node in the graph 130 in response to an execution event associated with the earlier-submitted job... if the evaluation of the node determines that job 112B has no more unmet DependsOn relationships, then the scheduler 100 may move the job into the execution schedule 140.” The particular job being dependent on the completion of an earlier-submitted job before it is deemed runnable and moved to the execution schedule, where it can be executed without delay, corresponds to the one or more submitter/submittee sets including at least one runnable for execution on a first compute engine that triggers execution of at least one second runnable on a second compute engine of the heterogeneous system); and Hosmani in view of Wood does not explicitly teach: the one or more coupling constraints require that the at least one first runnable and the at least one second runnable are moved together. 
However, Gounares teaches: the one or more coupling constraints require that the at least one first runnable and the at least one second runnable are moved together (Paragraphs 72-73 and 76, “The idle queue 408 may receive elements in block 424 and store the executable elements. Each executable element in the idle queue 408 may be waiting a dependency, which may be the completion of another executable element, a message passed from another executable element, an input from a device, an interrupt or other alert, or some other dependency… When a dependency is received in block 426, the corresponding executable element may be retrieved from the idle queue in block 428 and moved to the runnable queue in block 430… The queue manager 402 may examine the elements in the idle queue in block 442 to identify any elements that may no longer have dependencies. For example, a first executable element may be processing and two different executable elements may be dependent on the first executable element, so both of the executable elements with the dependency may be added to the idle queue. When the first element finishes processing, one of the two other elements may be launched.” The two different executable elements that are added to the idle queue and subsequently launched through the runnable queue correspond to the first and second runnable being moved together). Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Hosmani with the one or more coupling constraints requiring that the at least one first runnable and the at least one second runnable be moved together as taught by Gounares because managing and grouping together executable elements that are likely to be executed in the near future increases the performance of the scheduler’s applications. 
The performance gain is especially noticeable where applications have a high number of executable elements such as millions of executable elements (Gounares: paragraph 10). With regards to Claim 10, Hosmani in view of Wood teaches the method of Claim 9 as referenced above. Hosmani further teaches: wherein: a particular submitter/submittee set of runnables includes a first runnable for execution on a first compute engine of the plurality of compute engines that triggers execution of a second runnable on a second compute engine of the plurality of compute engines (Paragraphs 38 and 41, “Runnable nodes and jobs may have no unmet DependsOn dependency relationships and may be executed without delay, e.g., without necessarily waiting for conditions associated with other jobs to be met. Satisfaction of a dependency may be determined based (at least in part) on an event (such as event 194) associated with a job that the dependency involves. For example, if a particular job was dependent on completion of an earlier-submitted job, and that earlier-submitted job has completed execution, then the particular job may be deemed runnable by evaluation of its corresponding node in the graph 130 in response to an execution event associated with the earlier-submitted job... 
if the evaluation of the node determines that job 112B has no more unmet DependsOn relationships, then the scheduler 100 may move the job into the execution schedule 140.” The particular job being dependent on the completion of an earlier-submitted job before it is deemed runnable and moved to the execution schedule, where it can be executed without delay, corresponds to the one or more submitter/submittee sets including at least one runnable for execution on a first compute engine that triggers execution of at least one second runnable on a second compute engine of the plurality of compute engines); and Hosmani in view of Wood does not explicitly teach: the first runnable and the second runnable are moved together to maintain a particular execution relationship between the first runnable and the second runnable. However, Gounares teaches: the first runnable and the second runnable are moved together to maintain a particular execution relationship between the first runnable and the second runnable (Paragraphs 72-73 and 76, “The idle queue 408 may receive elements in block 424 and store the executable elements. Each executable element in the idle queue 408 may be waiting a dependency, which may be the completion of another executable element, a message passed from another executable element, an input from a device, an interrupt or other alert, or some other dependency… When a dependency is received in block 426, the corresponding executable element may be retrieved from the idle queue in block 428 and moved to the runnable queue in block 430… The queue manager 402 may examine the elements in the idle queue in block 442 to identify any elements that may no longer have dependencies. For example, a first executable element may be processing and two different executable elements may be dependent on the first executable element, so both of the executable elements with the dependency may be added to the idle queue. 
When the first element finishes processing, one of the two other elements may be launched.” The two different executable elements that are added to the idle queue and subsequently launched through the runnable queue correspond to the first and second runnable being moved together to maintain a particular execution relationship). Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Hosmani with the first runnable and the second runnable being moved together to maintain a particular execution relationship between the first runnable and the second runnable as taught by Gounares because managing and grouping together executable elements that are likely to be executed in the near future increases the performance of the scheduler’s applications. The performance gain is especially noticeable where applications have a high number of executable elements such as millions of executable elements (Gounares: paragraph 10). Prior Art Made of Record The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Seo et al. (US Patent No. US 8806498 B2), teaching a method of scheduling a plurality of works to a plurality of processing cores through the use of a dependency, runnable, finish, and idle queue. The scheduler resolves dependencies in real time upon receipt of finished works and moves any works that have their dependencies resolved from the dependency queue to the runnable queue. Additionally, if any of the cores are idle, one of the runnable works in the runnable queue is transmitted to the idle core. If the runnable queue is empty, dependency resolving is used within the dependency queue based on the finish queue to move runnable works to the runnable queue. 
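The queue scheme described for Seo above (works wait in a dependency queue, are promoted to a runnable queue once their dependencies appear among finished works, and are then transmitted to idle cores) can be sketched roughly as follows. All identifiers are illustrative assumptions; this is a simplification for context, not Seo's actual method.

```python
from collections import deque

# Sketch of a dependency/runnable/finish/idle queue scheme, loosely
# modeled on the Seo summary above. Names are hypothetical.

def resolve(dependency_q, finish_q, runnable_q):
    """Move any work whose dependencies are all finished to runnable."""
    finished = set(finish_q)
    still_waiting = []
    for work, deps in dependency_q:
        if deps <= finished:           # every dependency has finished
            runnable_q.append(work)
        else:
            still_waiting.append((work, deps))
    dependency_q[:] = still_waiting    # unresolved works keep waiting

def dispatch(runnable_q, idle_cores, assignments):
    """Transmit runnable works to idle cores, one work per core."""
    while runnable_q and idle_cores:
        assignments[idle_cores.popleft()] = runnable_q.popleft()

dependency_q = [("w2", {"w1"}), ("w3", {"w1", "w2"})]
finish_q = ["w1"]                      # w1 has already finished
runnable_q = deque()
idle_cores = deque(["core0", "core1"])
assignments = {}

resolve(dependency_q, finish_q, runnable_q)    # w2 becomes runnable
dispatch(runnable_q, idle_cores, assignments)  # w2 goes to core0
print(assignments)                             # {'core0': 'w2'}
print(dependency_q)                            # w3 still waits on w2
```

In this sketch, dependency resolution is driven entirely by the contents of the finish queue, matching the summary's point that an empty runnable queue is replenished by resolving the dependency queue against finished works.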
Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to SELINA HU whose telephone number is (571)272-5428. The examiner can normally be reached Monday-Friday 8:30-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chat Do, can be reached at (571) 272-3721. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SELINA ELISA HU/Examiner, Art Unit 2193 /Chat C Do/Supervisory Patent Examiner, Art Unit 2193

Prosecution Timeline

Sep 02, 2022
Application Filed
May 29, 2025
Non-Final Rejection — §103
Aug 22, 2025
Interview Requested
Sep 04, 2025
Examiner Interview Summary
Sep 04, 2025
Applicant Interview (Telephonic)
Sep 08, 2025
Response Filed
Sep 18, 2025
Final Rejection — §103
Dec 18, 2025
Request for Continued Examination
Jan 07, 2026
Response after Non-Final Action
Jan 23, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585485
Warm migrations for virtual machines in a cloud computing environment
2y 5m to grant Granted Mar 24, 2026
Patent 12563114
CONTENT INITIALIZATION METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 2 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
67%
Grant Probability
99%
With Interview (+100.0%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
