Prosecution Insights
Last updated: April 18, 2026
Application No. 17/929,672

BRANCH AND BOUND SORTING FOR SCHEDULING TASK EXECUTION IN COMPUTING SYSTEMS

Non-Final OA (§102, §103)

Filed: Sep 02, 2022
Examiner: NGUYEN, AN-AN NGOC
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 3 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (above average; 5 granted / 6 resolved; +28.3% vs TC avg)
Interview Lift: +50.0% (strong), based on resolved cases with interview
Avg Prosecution: 3y 5m (typical timeline); 34 currently pending
Total Applications: 40 (career history, across all art units)

Statute-Specific Performance

§101: 20.6% (-19.4% vs TC avg)
§103: 57.9% (+17.9% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 10.3% (-29.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 6 resolved cases
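For reference, the headline examiner statistics above are simple ratios. The short Python sketch below reproduces them; it assumes the "vs TC avg" figure is a plain percentage-point difference, which is an interpretation, not something the source data states.

```python
# Sketch of how the examiner dashboard metrics above appear to be derived.
# Assumption: "vs TC avg" is a percentage-point difference, and the career
# allow rate is granted / resolved.

granted, resolved = 5, 6
allow_rate = granted / resolved   # career allow rate
tc_avg = allow_rate - 0.283       # implied Tech Center average

print(f"Career allow rate: {allow_rate:.1%}")  # prints 83.3%
print(f"Implied TC average: {tc_avg:.1%}")     # prints 55.0%
```

The 83% shown on the page is this 83.3% figure rounded down; the implied Tech Center average of roughly 55% is consistent with the "above average" characterization.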

Office Action

Grounds of rejection: §102, §103
DETAILED ACTION

1. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/07/2025 has been entered.

Response to Arguments

2. Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

3. Claims 1-3, 5, 7-11, 13, and 15-18 are rejected under 35 U.S.C. 102 as being anticipated by Tascione et al. (US 2019/0130056 A1).

4.
With regard to claim 1, Tascione teaches: A system comprising: one or more processing units to perform operations comprising: identifying, based at least on application data associated with a computing application that includes a set of runnables, a plurality of scheduling branches associated with scheduling execution of at least a subset of runnables of the set of runnables ([0024] In some implementations, the vehicle autonomy system can include one or more computing devices configured to implement a plurality of tasks within one or more modules of an operational autonomy stack to detect objects of interest within the sensor data and determine a motion plan for the autonomous vehicle relative to the objects of interest. The one or more modules of an operational autonomy stack can correspond, for example, to a perception system, a prediction system, a motion planning system, and a vehicle controller that cooperate to perceive the surrounding environment of the autonomous vehicle and determine a motion plan for controlling the motion of the autonomous vehicle accordingly; Examiner’s Note: Each module is a branch that implements tasks, which are runnables.); selecting a scheduling branch from the plurality of scheduling branches based at least on a coupling constraint that is applied to related runnables of at least the subset of runnables ([0029] More particularly, in some implementations, the event controller can be further configured to determine an event type associated with each event detected during implementation of the operational autonomy stack. In some implementations, the event type can be associated with a module of the one or more modules of the operational autonomy stack. For example, if an event happened because a task crashed within the prediction module of the operational autonomy stack, then the event type can be associated with the prediction system. 
When the event controller is configured to determine an event type for each detected event, the second memory can be further configured to store the event type in the autonomy data logs; Examiner’s Note: Select module based on event type, which is analogous with a coupling constraint.), wherein: the related runnables include a first runnable that is designated for execution on a first compute engine and that triggers execution of a second runnable on a second compute engine ([0107] At 804, one or more computing devices within a computing system can determine an autonomy scenario for testing associated with a simulated autonomy stack. In some implementations, the simulated autonomy stack can include a second plurality of tasks. In some implementations, the second plurality of tasks within the simulated autonomy stack can correspond to at least a subset of a first plurality of tasks implemented in an operational autonomy stack (e.g., the first plurality of tasks implemented at 704). In some implementations, when the data logs accessed at 802 include an indication of an event type associated with an event that triggered storage of the data logs, such event type can help determine an autonomy scenario for testing at 804; [0108] At 806, one or more computing devices within a computing system can determine a first task for execution within the simulated autonomy stack (e.g., second plurality of tasks). In some implementations, the first task for execution within the simulated autonomy stack (e.g., second plurality of tasks) can correspond to a task within the first plurality of tasks (e.g., tasks associated with an operational autonomy stack). 
In some implementations, the first task for execution within the second plurality of tasks is determined at 806 to reduce a total run time of the second plurality of tasks relative to a total run time of the first plurality of tasks; [0109] In some implementations, such as when the data logs accessed at 802 include an identification of an event type that triggered storage of the data logs, such event type can help determine the first task for execution at 806. For example, if an event type identifies an event corresponding to task failure in the prediction system of an operational autonomy stack, then the first task for execution within the second plurality of tasks might correspond to a task within the prediction system of the simulated autonomy stack as opposed to an earlier task (e.g., one within the perception system of the simulated autonomy stack); [0110] At 808, one or more computing devices within a computing system can schedule tasks within the simulated autonomy stack into a task order. In some implementations, scheduling the second plurality of tasks into a task order at 808 can be determined at least in part from the bookmarks stored in the data logs for the first plurality of tasks (e.g., the bookmarks determined at 712 and stored in the data logs at 714 as depicted in FIG. 7); Examiner’s Note: A first plurality of tasks implemented on the operational autonomy stack (first runnable executed on the first compute engine) might have a subset of a second plurality of tasks that are implemented on the simulated autonomy stack (second runnable executed on the second compute engine). The event type determines which module the task is associated with. “For example, if an event type identifies an event corresponding to task failure in the prediction system of an operational autonomy stack, then the first task for execution within the second plurality of tasks might correspond to a task within the prediction system of the simulated autonomy stack as opposed to an earlier task (e.g., one within the perception system of the simulated autonomy stack).” Task order of the second plurality of tasks is determined by the bookmarks for the first plurality of tasks. The bookmarks of the first plurality of tasks trigger the execution of the second plurality of tasks.), and the application of the coupling constraint to the related runnables ensures that the scheduling branch, as selected, maintains that the first runnable as executed on the first compute engine triggers execution of the second runnable on the second compute engine ([0107] At 804, one or more computing devices within a computing system can determine an autonomy scenario for testing associated with a simulated autonomy stack. In some implementations, the simulated autonomy stack can include a second plurality of tasks. In some implementations, the second plurality of tasks within the simulated autonomy stack can correspond to at least a subset of a first plurality of tasks implemented in an operational autonomy stack (e.g., the first plurality of tasks implemented at 704). In some implementations, when the data logs accessed at 802 include an indication of an event type associated with an event that triggered storage of the data logs, such event type can help determine an autonomy scenario for testing at 804; [0108] At 806, one or more computing devices within a computing system can determine a first task for execution within the simulated autonomy stack (e.g., second plurality of tasks). In some implementations, the first task for execution within the simulated autonomy stack (e.g., second plurality of tasks) can correspond to a task within the first plurality of tasks (e.g., tasks associated with an operational autonomy stack). In some implementations, the first task for execution within the second plurality of tasks is determined at 806 to reduce a total run time of the second plurality of tasks relative to a total run time of the first plurality of tasks; [0109] In some implementations, such as when the data logs accessed at 802 include an identification of an event type that triggered storage of the data logs, such event type can help determine the first task for execution at 806. For example, if an event type identifies an event corresponding to task failure in the prediction system of an operational autonomy stack, then the first task for execution within the second plurality of tasks might correspond to a task within the prediction system of the simulated autonomy stack as opposed to an earlier task (e.g., one within the perception system of the simulated autonomy stack); [0110] At 808, one or more computing devices within a computing system can schedule tasks within the simulated autonomy stack into a task order. In some implementations, scheduling the second plurality of tasks into a task order at 808 can be determined at least in part from the bookmarks stored in the data logs for the first plurality of tasks (e.g., the bookmarks determined at 712 and stored in the data logs at 714 as depicted in FIG. 7); Examiner’s Note: Event type is analogous to a coupling constraint. A first plurality of tasks implemented on the operational autonomy stack (first runnable executed on the first compute engine) might have a subset of a second plurality of tasks that are implemented on the simulated autonomy stack (second runnable executed on the second compute engine). The event type determines which module the task is associated with. “For example, if an event type identifies an event corresponding to task failure in the prediction system of an operational autonomy stack, then the first task for execution within the second plurality of tasks might correspond to a task within the prediction system of the simulated autonomy stack as opposed to an earlier task (e.g., one within the perception system of the simulated autonomy stack).” Task order of the second plurality of tasks is determined by the bookmarks for the first plurality of tasks. The bookmarks of the first plurality of tasks trigger the execution of the second plurality of tasks.); and determining a deterministic execution schedule of the set of runnables that is fixed across multiple different execution iterations of the computing application based at least on the scheduling branch as selected ([0031] Generation of the disclosed autonomy data logs by an autonomy bookkeeper system can be advantageous in order to provide real-life data for testing subsequent changes to autonomy system tasks or other system features to ensure that an event is not repeated in the future. For example, an entity associated with an autonomous vehicle (e.g., a vehicle owner, a service provider, a fleet manager, etc.) may like to test modifications to an autonomy stack that were created in response to one or more events detected by the event controller. In order to test the modifications to the autonomy stack, the disclosed autonomy data logs can help deterministically recreate a simulated autonomy stack having deterministically scheduled inputs and outputs from the data logs. As such, modifications to the autonomy stack can be characterized appropriately, and successful modifications can be used to replace tasks within an original autonomy stack with newly modified and verified tasks from a simulated autonomy stack; Examiner’s Note: The autonomy stack is an iteration of deterministically scheduled inputs and outputs (runnables). An autonomy stack is able to be saved and recreated, indicating that it is fixed across multiple different execution iterations.).

5. With regard to claim 2, Tascione further teaches: wherein the coupling constraint requires the application of the coupling constraint to the related runnables further ensures that the scheduling branch, as selected, maintains that a first processing queue that includes the first runnable and that is on the first compute engine matches a second processing queue that includes the second runnable and that is on the second compute engine ([0006] The operations also include scheduling the second plurality of tasks into a task order determined at least in part from the bookmarks stored in the data logs for the first plurality of tasks. The operations also include controlling the flow of inputs to and outputs from the second plurality of tasks based at least in part on the task order, wherein one or more inputs to the second plurality of tasks correspond to one or more inputs or outputs obtained from the data logs; [0107] At 804, one or more computing devices within a computing system can determine an autonomy scenario for testing associated with a simulated autonomy stack. In some implementations, the simulated autonomy stack can include a second plurality of tasks. In some implementations, the second plurality of tasks within the simulated autonomy stack can correspond to at least a subset of a first plurality of tasks implemented in an operational autonomy stack (e.g., the first plurality of tasks implemented at 704). In some implementations, when the data logs accessed at 802 include an indication of an event type associated with an event that triggered storage of the data logs, such event type can help determine an autonomy scenario for testing at 804; [0108] At 806, one or more computing devices within a computing system can determine a first task for execution within the simulated autonomy stack (e.g., second plurality of tasks).
In some implementations, the first task for execution within the simulated autonomy stack (e.g., second plurality of tasks) can correspond to a task within the first plurality of tasks (e.g., tasks associated with an operational autonomy stack). In some implementations, the first task for execution within the second plurality of tasks is determined at 806 to reduce a total run time of the second plurality of tasks relative to a total run time of the first plurality of tasks; [0109] In some implementations, such as when the data logs accessed at 802 include an identification of an event type that triggered storage of the data logs, such event type can help determine the first task for execution at 806. For example, if an event type identifies an event corresponding to task failure in the prediction system of an operational autonomy stack, then the first task for execution within the second plurality of tasks might correspond to a task within the prediction system of the simulated autonomy stack as opposed to an earlier task (e.g., one within the perception system of the simulated autonomy stack); [0110] At 808, one or more computing devices within a computing system can schedule tasks within the simulated autonomy stack into a task order. In some implementations, scheduling the second plurality of tasks into a task order at 808 can be determined at least in part from the bookmarks stored in the data logs for the first plurality of tasks (e.g., the bookmarks determined at 712 and stored in the data logs at 714 as depicted in FIG. 7); Examiner’s Note: A determined task order is similar to a queue. Event type is analogous to a coupling constraint. The first plurality of tasks implemented on the operational autonomy stack (first runnable executed on the first compute engine) might have a subset of a second plurality of tasks that are implemented on the simulated autonomy stack (second runnable executed on the second compute engine). 
The event type determines which module the task is associated with. “For example, if an event type identifies an event corresponding to task failure in the prediction system of an operational autonomy stack, then the first task for execution within the second plurality of tasks might correspond to a task within the prediction system of the simulated autonomy stack as opposed to an earlier task (e.g., one within the perception system of the simulated autonomy stack).” Task order of the second plurality of tasks is determined by the bookmarks for the first plurality of tasks. The bookmarks of the first plurality of tasks trigger the execution of the second plurality of tasks.).

6. With regard to claim 3, Tascione further teaches: wherein the selecting of the scheduling branch is further based at least on one or more of: a total time constraint related to respective execution times of the plurality of scheduling branches; a runtime constraint related to minimum execution times of runnables that may trigger a scheduling branch ([0038] More particularly, in some implementations, the simulation conductor system can determine a first task for execution within the second plurality of tasks based at least in part on an event type stored within the data logs, wherein the first task for execution within the second plurality of tasks corresponds to a corresponding task within the first plurality of tasks. For example, if an event type identifies an event corresponding to task failure in the prediction system of an operational autonomy stack, then the first task for execution within the second plurality of tasks might correspond to a task within the prediction system of the simulated autonomy stack as opposed to an earlier task (e.g., one within the perception system of the simulated autonomy stack). A substantial amount of time and processing resources can be saved by only running the portion of a simulated autonomy stack needed to test whether a fault is corrected as opposed to running a full simulation based on the entire simulated autonomy stack. As such, a total run time of the second plurality of tasks can be reduced relative to the first plurality of tasks; Examiner’s Note: A task corresponding to a task failure in the prediction system of the operational autonomy stack triggers execution; however, only a portion of the simulated autonomy stack is run in order to save time and resources.); a bubble avoidance constraint related to avoiding scheduling gaps; a dependency constraint related to avoiding runnable dependency violations; or scheduling prioritization with respect to a critical path within a compute graph that includes the set of runnables.

7. With regard to claim 5, Tascione further teaches: wherein the runtime constraint is reduced as compared to another runtime constraint that was used in determining a prior execution schedule ([0038] A substantial amount of time and processing resources can be saved by only running the portion of a simulated autonomy stack needed to test whether a fault is corrected as opposed to running a full simulation based on the entire simulated autonomy stack. As such, a total run time of the second plurality of tasks can be reduced relative to the first plurality of tasks.).

8. With regard to claim 7, Tascione further teaches: wherein the determining the execution time includes removing one or more execution constraints with respect to one or more runnables included in the scheduling branch ([0038] A substantial amount of time and processing resources can be saved by only running the portion of a simulated autonomy stack needed to test whether a fault is corrected as opposed to running a full simulation based on the entire simulated autonomy stack.
As such, a total run time of the second plurality of tasks can be reduced relative to the first plurality of tasks; [0094] An identification of Task 1 in a second plurality of tasks can be defined at least in part by a task configuration file 408 provided as an input to processing thread 402. A flow of task inputs 404 to processing thread 402 for the execution of Task 1 can be controlled in accordance with a task processing configuration 410. In some implementations, the task processing configuration 410 is defined by an original configuration file 412, which can include at least the bookmarks included within autonomy data logs. In some implementations, the task processing configuration 410 can be defined by a replacement configuration file 414 which can be substituted as a replacement for original configuration file 412. In some implementations, the task processing configuration 410 can be defined by an overlay configuration file 416 which can be merged with original configuration file 412. The option of an overlay configuration file 416 can be useful when it is desirable to change a small subset of configuration values but would otherwise like to use whatever configuration was used in the autonomy data logs. In some implementations, specific tasks within an original task processing configuration 412 can be disabled via a “disable task” command 418.). 9. With regard to claim 8, Tascione further teaches: wherein the system is comprised in at least one of: a control system for an autonomous or semi-autonomous machine ([0008] The autonomous vehicle further includes an event controller configured to detect one or more events associated with execution of at least a portion of the operational autonomy stack. The autonomous vehicle further includes a second memory configured to store data logs corresponding to the inputs and outputs for each task during a predetermined window of time corresponding to each event. 
The event controller is configured to transfer the inputs and outputs for each task during the predetermined window of time corresponding to each event from the first memory to the second memory upon detection of each event; [0024] The one or more modules of an operational autonomy stack can correspond, for example, to a perception system, a prediction system, a motion planning system, and a vehicle controller that cooperate to perceive the surrounding environment of the autonomous vehicle and determine a motion plan for controlling the motion of the autonomous vehicle accordingly.); a perception system for an autonomous or semi-autonomous machine ([0059] The perception system 110 can identify one or more objects that are proximate to the autonomous vehicle 100 based on sensor data received from the one or more sensors 104 and/or the map data 118. In particular, in some implementations, the perception system 110 can determine, for each object, state data that describes a current state of such object. As examples, the state data for each object can describe an estimate of the object's: current location (also referred to as position); current speed (also referred to as velocity); current acceleration; current heading; current orientation; size/footprint (e.g., as represented by a bounding shape such as a bounding polygon or polyhedron); class (e.g., vehicle versus pedestrian versus bicycle versus other); yaw rate; and/or other state information.); a system for performing simulation operations ([0013] FIG. 2 depicts an example simulation computing system according to example embodiments of the present disclosure; [0033] According to another aspect of the present disclosure, a simulation conductor system can generally provide an interface between the simulation system and a memory storing autonomy data logs (e.g., autonomy data logs generated by a bookkeeper system within a vehicle autonomy system). 
The simulation conductor system can manage playback of autonomy data logs based on the logged determinism bookmarks by providing an application programming interface (API) for serving data inputs/outputs to a second plurality of tasks in the correct order. The simulation conductor system can maintain a state machine based on the contents of the bookmarks to determine execution of one or more wait, read, and/or write calls for inputs/outputs from the data logs when implementing a next task in a determined task order. In some implementations, a simulation conductor system can include one or more of a scenario controller, a task controller, an amendment system, and a simulation characterization system.); a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system for presenting at least one of augmented reality content, virtual reality content, or mixed reality content; a system for hosting one or more real-time streaming applications; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources.

10. Regarding claim 9, it is rejected under the same reasoning as claim 1 above. Therefore, it is rejected under the same rationale.

11. Regarding claim 10, it is rejected under the same reasoning as claim 2 above. Therefore, it is rejected under the same rationale.

12. Regarding claim 11, it is rejected under the same reasoning as claim 3 above. Therefore, it is rejected under the same rationale.

13. Regarding claim 13, it is rejected under the same reasoning as claim 5 above.
Therefore, it is rejected under the same rationale. 14. Regarding claim 15, it is rejected under the same reasoning as claim 7 above. Therefore, it is rejected under the same rationale. 15. Regarding claim 16, it is rejected under the same reasoning as claim 1 above. Therefore, it is rejected under the same rationale. 16. Regarding claim 17, it is rejected under the same reasoning as claim 2 above. Therefore, it is rejected under the same rationale. 17. Regarding claim 18, it is rejected under the same reasoning as claim 3 above. Therefore, it is rejected under the same rationale. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 18. Claims 4, 6, 12, 14, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Tascione et al. US 20190130056 A1, as applied in claim 1, in view of Priyadarshi WO 2020139959 A1. 19. With regard to claim 4, Tascione teaches the system of claim 3 along with coupling constraints, specifically runtime constraints but fails to explicitly teach of a total time constraint, which is based at least on an execution time of a previously determined execution schedule. 
However, in analogous art, Priyadarshi teaches: wherein the total time constraint is based at least on an execution time of a previously determined execution schedule ([0090] At block 704, input may be loaded from the input data 610 to one or more channels. The input data 610 may include messages previously generated by sensors of the sensor array 121, messages generated to simulate output of the sensor array 121, some combination thereof, etc. The messages may be timestamped or sequentially arranged such that they can be processed by the graph 606 and published to channels in a particular sequence. In some embodiments, a program or nodelet may be implemented to load messages from the input data 610 into various channels according to the timestamps for the messages, to simulate the operation of the sensor array 121. The task scheduler 602 can determine the timestamp associated with the message loaded to the channel, and set the simulated clock 604 to the time corresponding to the timestamp. The simulated clock 604 can then maintain that time until the task scheduler 602 sets the simulated clock 604 to a different value; [0096] At block 714, a nodelet can execute an operation in response to the callback from the task scheduler 602. In the present example, nodelet 430 can execute to process a previously received or generated message, and/or to perform some other operation. The task scheduler 602 may increment the simulated clock 604 to T + 2x. In the present example, only nodelet 430 may be running during the frame beginning at time T + 2x. In some embodiments, the simulated clock 604 is not advanced until after all of the callbacks that are scheduled to occur on or before time T + x have completed processing, or until after the occurrence of some other event).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tascione with the teachings of Priyadarshi wherein the total time constraint is based at least on an execution time of a previously determined execution schedule. Tascione teaches a deterministic scheduling method for use in autonomous vehicles. Scheduling is done based on a coupling constraint. Similarly, Priyadarshi teaches of a deterministic simulation of distributed systems, such as vehicle-based processing systems (Abstract). Moreover, Priyadarshi teaches of a total time constraint based on an execution time of a previously determined execution schedule ([0096]). Together, Tascione and Priyadarshi teach of a total time constraint that is based on at least an execution time of a previously determined execution schedule. This ensures that when execution schedules are repeated, they are executed within a certain amount of time similar to a previous iteration.

20. With regard to claim 6, Tascione teaches the system of claim 1 but fails to explicitly teach wherein the operations further comprise determining an execution time of the scheduling branch, and further wherein the selecting of the scheduling branch is based at least on the execution time. However, in analogous art, Priyadarshi teaches: wherein the operations further comprise determining an execution time of the scheduling branch, and further wherein the selecting of the scheduling branch is based at least on the execution time ([0007] Another aspect includes systems, methods, and/or non-transitory computer-readable media that provide features for distributed system execution using a serial timeline. The features include receiving input data that simulates output of a vehicle-based sensor. A first nodelet, of a vehicle-based processing system comprising a plurality of executable nodelets, is to perform a first operation using the input data.
A second nodelet of the vehicle-based processing system is to perform a second operation using the input data, wherein the second nodelet is configured to operate independently of the first nodelet. The first nodelet is scheduled to perform the first operation during a first period of time, wherein no other nodelet of the plurality of executable nodelets is permitted to execute during the first period of time. The second nodelet is scheduled to perform the second operation during a second period of time following the first period of time, wherein no other nodelet of the plurality of executable nodelets is permitted to execute during the second period of time. The first nodelet is executed to perform the first operation during the first period of time, wherein the first operation generates output data to be processed by a third nodelet of the plurality of executable nodelet. The third nodelet is scheduled to perform a third operation during a third period of time following the second period of time. The second nodelet is executed to perform the second operation during the second period of time. In addition, the third nodelet is executed to perform the third operation during the third period of time; [0025] Additional aspects of the present disclosure relate to scheduling the operation of nodelets such that individual nodelets operate only within defined, serially-occurring timeframes (also referred to simply as “frames” for convenience); Examiner’s Note: Each nodelet is a node on a scheduling branch. Each nodelet is executed at a specific time. The execution/scheduling of the nodelets is determined by their operation time.). 
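The serial-timeline arrangement in the quoted [0007]/[0025] passages can be sketched as follows: each nodelet is assigned its own non-overlapping frame, and a nodelet that consumes another's output is placed in a later frame. The function name, the frame length, and the dependency encoding are illustrative assumptions, not details from the reference.

```python
# Minimal sketch, under assumed names, of scheduling nodelets into serially
# occurring frames so that no two nodelets execute in the same timeframe.
def schedule_frames(nodelets, deps, start=0, frame_len=1):
    """Assign each nodelet a non-overlapping [begin, end) frame such that
    every nodelet is scheduled after all nodelets it depends on."""
    ordered, placed = [], set()
    pending = list(nodelets)
    while pending:
        # Pick any nodelet whose dependencies have already been placed
        # (a simple topological ordering).
        for n in pending:
            if all(d in placed for d in deps.get(n, [])):
                ordered.append(n)
                placed.add(n)
                pending.remove(n)
                break
        else:
            raise ValueError("cyclic dependency between nodelets")
    # Consecutive, disjoint frames: only one nodelet may run per frame.
    return {
        n: (start + i * frame_len, start + (i + 1) * frame_len)
        for i, n in enumerate(ordered)
    }
```

For example, with a third nodelet that processes the first nodelet's output, `schedule_frames(["first", "second", "third"], {"third": ["first"]})` places the third nodelet in a frame that begins only after the first nodelet's frame has ended.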
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tascione with the teachings of Priyadarshi wherein the operations further comprise determining an execution time of the scheduling branch, and further wherein the selecting of the scheduling branch is based at least on the execution time. Tascione teaches a deterministic scheduling method for use in autonomous vehicles, in which scheduling is done based on a coupling constraint. Similarly, Priyadarshi teaches a deterministic simulation of distributed systems, such as vehicle-based processing systems (Abstract). Moreover, Priyadarshi teaches selecting a scheduling branch based on an execution time: the nodelets in Priyadarshi execute at specific times, and a task to be executed on a particular nodelet must be executed during that nodelet's time of operation ([0007]; [0025]). Together, Tascione and Priyadarshi teach selecting a scheduling branch based on execution time. This ensures that when tasks are to be executed at a particular time, they are executed on the appropriate branch currently operating.

21. Regarding claim 12, it is rejected under the same reasoning as claim 4 above. Therefore, it is rejected under the same rationale.

22. Regarding claim 14, it is rejected under the same reasoning as claim 6 above. Therefore, it is rejected under the same rationale.

23. Regarding claim 19, it is rejected under the same reasoning as claim 4 above. Therefore, it is rejected under the same rationale.

24. Regarding claim 20, it is rejected under the same reasoning as claim 6 above. Therefore, it is rejected under the same rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AN-AN N NGUYEN, whose telephone number is (571) 272-6147. The examiner can normally be reached Monday-Friday, 8:00-5:00 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, AIMEE LI, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AN-AN NGOC NGUYEN/
Examiner, Art Unit 2195

/Aimee Li/
Supervisory Patent Examiner, Art Unit 2195

Prosecution Timeline

Sep 02, 2022
Application Filed
Nov 15, 2022
Response after Non-Final Action
Apr 24, 2025
Non-Final Rejection — §102, §103
Jun 25, 2025
Interview Requested
Jul 11, 2025
Examiner Interview Summary
Jul 11, 2025
Applicant Interview (Telephonic)
Jul 14, 2025
Response Filed
Jul 30, 2025
Final Rejection — §102, §103
Oct 28, 2025
Interview Requested
Nov 06, 2025
Examiner Interview Summary
Nov 06, 2025
Applicant Interview (Telephonic)
Nov 07, 2025
Request for Continued Examination
Nov 16, 2025
Response after Non-Final Action
Jan 07, 2026
Non-Final Rejection — §102, §103
Apr 08, 2026
Applicant Interview (Telephonic)
Apr 08, 2026
Response Filed
Apr 08, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561130
MAINTENANCE MODE IN HCI ENVIRONMENT
2y 5m to grant Granted Feb 24, 2026
Patent 12511156
CREDIT-BASED SCHEDULING USING LOAD PREDICTION
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
83%
Grant Probability
99%
With Interview (+50.0%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
