Prosecution Insights
Last updated: April 19, 2026
Application No. 18/185,047

METHOD FOR A CONFIGURATION IN A NETWORK

Status: Non-Final OA (§103)
Filed: Mar 16, 2023
Examiner: MAHMUD, GOLAM
Art Unit: 2458
Tech Center: 2400 — Computer Networks
Assignee: Robert Bosch GmbH
OA Round: 5 (Non-Final)
Grant Probability: 61% (Moderate)
OA Rounds: 5-6
To Grant: 3y 3m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 61% — grants 61% of resolved cases (157 granted / 258 resolved; +2.9% vs TC avg)
Interview Lift: +30.7% on resolved cases with interview (strong lift)
Avg Prosecution: 3y 3m typical timeline (46 currently pending)
Total Applications: 304 across all art units (career history)
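The card figures above are internally consistent; a quick sketch shows how they relate. The Tech Center baseline is back-computed from the displayed "+2.9%" delta — an inference from the card, not a number published on the page:

```python
# Figures from the Examiner Intelligence cards above.
granted, resolved = 157, 258

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 60.9%, displayed rounded as 61%

# The card shows +2.9% vs the Tech Center average; the baseline below is
# back-computed from that delta (an inference, not a published figure).
tc_average = allow_rate - 0.029
print(f"Implied TC average: {tc_average:.1%}")
```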

Statute-Specific Performance

§101: 8.6% (-31.4% vs TC avg)
§103: 59.1% (+19.1% vs TC avg)
§102: 13.7% (-26.3% vs TC avg)
§112: 12.1% (-27.9% vs TC avg)
"vs TC avg" comparisons are against a Tech Center average estimate • Based on career data from 258 resolved cases
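One detail worth noting in the table: every rate-minus-delta pair implies the same Tech Center baseline, which suggests a single TC-average estimate underlies all four comparisons. A small illustrative check:

```python
# Rates and "vs TC avg" deltas from the table above, in percent.
rates  = {"§101": 8.6,   "§103": 59.1, "§102": 13.7,  "§112": 12.1}
deltas = {"§101": -31.4, "§103": 19.1, "§102": -26.3, "§112": -27.9}

# Baseline implied by each row: rate minus delta.
implied = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied)  # every statute implies the same 40.0% baseline
```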

Office Action

§103
DETAILED ACTION

This office action is a response to a communication made on 01/21/2026. Claims 10-13 are canceled. Claims 1 and 9 are currently amended. Claims 1-9 and 14-15 are pending for this application.

Request for Continued Examination (RCE) under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/21/2026 has been entered.

Response to Arguments

Applicant's arguments with respect to claim(s) 1, 9 and 14-15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant's arguments (see remarks on pages 6-8, filed 01/21/2026) with respect to the rejection(s) of claim(s) 1 and 9 under 103, regarding the amended feature of "wherein the slack is calculated based on a difference between the ascertained execution time and one of the local time allowances", have been considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Balasubramanian (US 6687257) in view of Lin et al. (US 2018/0107507) in view of Collin et al. (US 2014/0007004) in view of Muller (US 2004/0139431), and further in view of Binns et al. (US 2008/0028415 A1).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-9 and 14-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Balasubramanian (US 6687257) in view of Lin et al. (US 2018/0107507), hereinafter "Lin", in view of Collin et al. (US 2014/0007004), hereinafter "Collin", in view of Muller (US 2004/0139431), and further in view of Binns et al. (US 2008/0028415 A1), hereinafter "Binns".

With respect to claim 1, Balasubramanian discloses a method for a configuration in a network, at least one real-time application including multiple tasks in the network (Col-3, ll. 31-34, teaches the execution of tasks for real-time control over multiple, spatially separated control components; Col-4, ll. 56-58 and Col-5, ll. 38-40, teach FIG. 1 as allocated to a distributed real-time operating system and different application programs… FIG. 1, a distributed control system 10 includes multiple nodes 12a, 12b and 12c for executing a control program comprised of multiple applications; Col-12, ll. 43-50, teaches enrolling a task in the task list (i.e. task chain) not only determines the order of execution but allocates a particular amount of processor resources to that task. New tasks are received again by a scheduler 94 retaining a history of the execution of the task according to task identification (TID) in memory 98 and enrolling the tasks in one of the time slots of the task queue 119 to be forwarded to the processor 26 at the appropriate moment), the method comprising the following steps for the at least one real-time application:

receiving at least one message unit from a preceding task of the real-time application (Col-3, ll. 31-34, teaches the execution of tasks for real-time control over multiple, spatially separated control components; Col-10, ll. 28-31 and ll. 38-43, teach the scheduler 94 receives the messages 91 (i.e. message unit) and places them in the queue 90 and includes memory 98 holding a history of execution of messages identified to their tasks… Each message 91 associated with an application program for which a time constraint exists (guaranteed tasks) is to be transmitted by the communication card…; Col-11, ll. 59-61, teaches the messages with identical priorities are examined to determine which has the earliest (i.e. preceding) LATEST STARTING TIME; Col-12, ll. 43-50, teaches enrolling a task in the task list (i.e. task chain) not only determines the order of execution but allocates a particular amount of processor resources to that task. New tasks are received again by a scheduler 94 retaining a history of the execution of the task according to task identification (TID) in memory 98 and enrolling the tasks in one of the time slots of the task queue 119 to be forwarded to the processor 26 at the appropriate moment);

ascertaining an execution time of the preceding task based on the message unit received (Col-10, ll. 28-31 and ll. 55-58, teach the scheduler 94 receives the messages 91 and places them in the queue 90 and includes memory 98 holding a history of execution of messages identified to their tasks… the scheduling data 100 may also include an execution period (EP) (i.e. an execution time) indicating the length of time anticipated to be necessary to execute the message for transmission on the network 31; Col-11, ll. 59-61, teaches the messages with identical priorities are examined to determine which has the earliest (i.e. preceding task) LATEST STARTING TIME);

prioritizing the message unit based on the ascertained slack (Col-10, ll. 48-54, teaches a high priority for messages associated with time (i.e. slack) critical tasks. The priority 96 is taken from the priority of the application program 34 of which the message 91 forms a part and is determined prior to the application program based on the importance of its control task as determined by the user) and forwarding the message unit in the network as a function of the prioritization to a following task of the real-time application (Col-3, ll. 31-34 and 54-57, teaches the execution of tasks for real-time control over multiple, spatially separated control components… each new message may be associated with an execution time necessary to transmit the new message on the network and the scheduler may first locate an insertion point of the messages into the queue according to priority).

However, Balasubramanian remains silent on evaluating the ascertained execution time, the evaluating including at least one comparison of the ascertained execution time to at least one time allowance, to determine an instantaneous slack of the real-time application, wherein: the at least one time allowance includes local time allowances, wherein the slack is calculated based on a difference between the ascertained execution time and one of the local time allowances.
Lin discloses evaluating the ascertained execution time (¶0073, teaches Cτi is the execution time; ¶0099, teaches computes response time to determine whether it is smaller than or equal to the activation period), the evaluating including at least one comparison of the ascertained execution time to at least one time allowance, to determine an instantaneous slack of the real-time application (¶0089, teaches response time of a task (i.e. local time allowance) may be smaller than or equal (i.e. compare) to the activation period of the task; ¶0098-¶0099, teaches the mapping engine 130 may determine the slack (i.e. instantaneous slack) of a path using the method 700 of FIG. 7. In block 702, the mapping engine 130 may determine a deadline value (i.e. time allowance) associated with the path, in block 704, the mapping engine 130 may determine a latency associated with the tasks of the path, and in block 706, the mapping engine 130 may compute a difference between the deadline value and the latency… computes response time to determine whether it is smaller than or equal to the activation period); wherein: the at least one time allowance includes local time allowances (¶0098-¶0099, teaches determine a deadline value (i.e. time allowance) associated with the path… the response time (i.e. local time allowance) or deadline may initially be set to an activation period, which may be predefined; ¶0108, teaches the mapping engine 130 may allocate tasks based on response time (i.e. local time allowances)).

Therefore, it would be obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Balasubramanian's messages and execution period with evaluating the ascertained execution time, the evaluating including at least one comparison of the ascertained execution time to at least one time allowance, to determine an instantaneous slack of the real-time application, the at least one time allowance including local time allowances, of Lin, in order to help understand whether the application is running within its expected timeframe or whether there is a delay, and to maintain the real-time nature of the application by ensuring that essential tasks are completed promptly (Lin).

However, Balasubramanian in view of Lin remain silent on multiple tasks chained in the network according to a task chain, and a preceding task in the task chain.

Collin discloses multiple tasks chained in the network according to a task chain (¶0049, teaches the task chain management system 12 may be configured to chain, link and/or otherwise connect one or more tasks in a task chain. A task is added to the task chain as each task is performed and/or launched; ¶0050, teaches the task chain may comprise: tasks related to a topic, a quantity of tasks, such as the last twenty tasks performed, tasks performed over a particular time period or the like. Further, the task chain management system 12 may maintain a single task chain or may maintain a plurality of task chains); and a preceding task in the task chain (¶0053, teaches the task chain management system 12 may then be configured to determine which of the example first task or the second task to leave connected to the task chain containing the parent task. In some example embodiments, the older or prior task (e.g. the first task) may be removed from the task chain and the newer or more recent task (e.g. the second task) may remain connected to the task chain).

Therefore, it would be obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Balasubramanian's execution of tasks for real-time in view of Lin's system with a task chain, and a preceding task in the task chain, of Collin, in order to ensure execution is structured so each task is executed in the proper order and to enable the user to navigate through all of the tasks in the chain without concern for the software package that generated the task (Collin, ¶0027).

However, Balasubramanian in view of Lin, and further in view of Collin, remain silent on a structure of the task chain determining that at least some of the tasks in the task chain are weighted differently from each other so that the local time allowances are apportioned according to the weighting.

Muller discloses a structure of the task chain determines that at least some of the tasks in the task chain are weighted differently from each other so that the local time allowances are apportioned according to the weighting (¶0008, teaches each of the identified tasks can be weighted according to a likelihood that a newly scheduled task when combined with the weighted identified tasks would interfere with completing the weighted identified tasks; ¶0012, teaches weighting and computing steps can be performed for each of a set of invited participants in the new task, for each of a set of specified time frames (i.e. local time allowances); ¶0022, teaches the graduated availability process 200 can weight the scheduling of pre-existing tasks such as events 170A, 170B, to-dos 180A, 180B, and 190A, 190B (i.e. structure of task chain) according to pre-specified levels of importance relating to the impact of the types of tasks. More particularly, where some tasks are to be considered more likely to contribute to overloading than others, a greater weighting can be applied thereto).
Therefore, it would be obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Balasubramanian's execution of tasks for real-time, in view of Lin's response time (i.e. local time allowance) or deadline initially set to an activation period, and in view of Collin's task chain implemented differently as static or dynamic, with a structure of the task chain determining that at least some of the tasks in the task chain are weighted differently from each other so that the local time allowances are apportioned according to the weighting, of Muller, in order to optimize efficiency and success of the task chain and to ensure they are prioritized in scheduling and time allocation (Muller).

Balasubramanian, Col-3, ll. 54-55, teaches each new message may be associated with an execution time necessary to transmit the new message on the network; Lin, ¶0089, teaches response time of a task (i.e. local time allowance) may be smaller than or equal (i.e. compare) to the activation period of the task; ¶0095 teaches the path slack value of each path may reflect a difference between a deadline and a latency (i.e. response time as local time allowances) of the path, and may be computed or pre-determined. However, Balasubramanian in view of Lin in view of Collin, and further in view of Muller, remain silent on wherein the slack is calculated based on a difference between the ascertained execution time and one of the local time allowances.

Binns discloses wherein the slack is calculated based on a difference between the ascertained execution time and one of the local time allowances (¶0102, teaches a process has a worst-case execution time (i.e. local time allowance) equal to C. At t=0, the process begins executing and consumes C(t) execution time. Between t1 and t2, the process halts execution. The process resumes execution at t2 and halts execution at t=T, consuming C−C(t) execution time. Timeline slack is equal to the period T (i.e. execution time) minus the worst-case execution time C (i.e. upper bound of execution time as local time allowance)).

Therefore, it would be obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Balasubramanian's execution of tasks for real-time, in view of Lin's response time (i.e. local time allowance) or deadline initially set to an activation period, and in view of Collin's and Muller's systems, with wherein the slack is calculated based on a difference between the ascertained execution time and one of the local time allowances, of Binns, in order to measure how much of that allocated window remains unused (Binns).

For claim 9, it is a non-transitory computer-readable medium claim corresponding to the method of claim 1. Therefore claim 9 is rejected on the same ground as claim 1.

With respect to claim 2, Balasubramanian in view of Lin in view of Collin in view of Muller, and further in view of Binns discloses the method as recited in claim 1, wherein the real-time application is a first real-time application of at least two real-time applications distributed in the network (Balasubramanian, see Fig. 1 and Fig. 2, teaches a distributed control system 10 includes multiple nodes 12a, 12b and 12c for executing a control program comprised of multiple applications… the distributed real-time operating system 32 of the present invention may be used such as may be centrally located in one node 12 or, in keeping with the distributed nature of the control system, distributed among the nodes 12a, 12b and 12c), the at least two real-time applications being executed at least partially in parallel (Balasubramanian, see Fig. 2, applications are executing in parallel), the evaluating of the ascertained execution time further including: carrying out a further comparison of the ascertained slack of the first real-time application to an instantaneous slack of a second real-time application of the real-time applications, to determine a comparison result via the slacks of the real-time applications (Balasubramanian, see Fig. 2, teaches multiple real-time applications; Lin, ¶0108, teaches the path slack value used in block 604 is further computed (i.e. comparison) based on the response time, as discussed elsewhere herein. The method 600 may repeat the response time determination and use it for allocation for any other paths being processed, provided sufficient information for the response time computation is available. Otherwise, the method 600 may use activation time or other values to approximate the response time); wherein the prioritization is carried out as a function of the comparison result, to forward the message unit with higher priority in the network when the slack of the first real-time application is less than the slack of the second real-time application (Balasubramanian, Col-3, ll. 54-57, teaches each new message may be associated with an execution time necessary to transmit the new message on the network and the scheduler may first locate an insertion point of the messages into the queue according to priority; Lin, ¶0089, teaches response time of a task (i.e. local time allowance) may be smaller than or equal (i.e. compare) to the activation period of the task; ¶0095-¶0096, teaches the mapping engine 130 computes a path slack value for each path of a set of paths of a functional computing model… tasks of paths having the lowest slack values are given the highest priority and the tasks of paths having the highest slack values are given the lowest priority; ¶0107, teaches the mapping engine 130 determines a priority of each task of the set of tasks of each path based on the path slack value of the path).

With respect to claim 3, Balasubramanian in view of Lin in view of Collin in view of Muller, and further in view of Binns discloses the method as recited in claim 1, wherein the prioritization is also carried out depending on at least one of the following criteria: the structure of the chaining of the tasks (Lin, ¶0064, teaches the mapping engine 130 defines the path by an interleaving sequence of tasks and signals; Muller, ¶0022, teaches the graduated availability process 200 can weight the scheduling of pre-existing tasks such as events 170A, 170B, to-dos 180A, 180B, and 190A, 190B (i.e. structure of task chain) according to pre-specified levels of importance relating to the impact of the types of tasks), a function of the real-time applications (Balasubramanian, see Fig. 2, teaches the memory resources of each node of FIG. 1 as allocated to a distributed real-time operating system and different application programs), a relevance of the real-time-critical execution of the real-time applications (Balasubramanian, Col-10, ll. 17-19, teaches of particular importance, messages which require completion on a timely basis and which therefore have a high priority may nevertheless be queued behind lower-level messages without time criticality), a safety relevance of the real-time applications (Lin, ¶0048, teaches safety consideration; Balasubramanian, see Fig. 2, teaches the memory resources of each node of FIG. 1 as allocated to a distributed real-time operating system and different application programs), a static prioritization of the real-time applications (Balasubramanian, Col-9, ll. 45-53, teaches in the case of the static resources such as memory, the allocation may simply be a checking of the hardware resource list 44 to see if sufficient memory is available. In dynamic resources such as the processors and the network, the modeling may determine whether scheduling may be performed such as will allow the necessary completion-timing constraints t given the inter-arrival period t of the particular application and other applications; Collin, ¶0038, teaches the task chaining apparatus 10 may be implemented as a "native" executable running on the processor 20, along with one or more static or dynamic libraries).

With respect to claim 4, Balasubramanian in view of Lin in view of Collin in view of Muller, and further in view of Binns discloses the method as recited in claim 1, wherein the at least one of the local time allowances defines an upper limit for the execution time of an individual task (Lin, ¶0089, teaches response time of a task (i.e. local time allowance defines an upper limit for the execution time) may be smaller than or equal (i.e. compare) to the activation period of the task), the real-time application being a first real-time application of at least two real-time applications distributed in the network (Balasubramanian, see Fig. 1 and Fig. 2, teaches a distributed control system 10 includes multiple nodes 12a, 12b and 12c for executing a control program comprised of multiple applications… the distributed real-time operating system 32 of the present invention may be used such as may be centrally located in one node 12 or, in keeping with the distributed nature of the control system, distributed among the nodes 12a, 12b and 12c), wherein, for each respective real-time application of the real-time applications, a global time allowance is predefined which each defines an upper limit for an overall execution time of the respective real-time application (Balasubramanian, Col-13, ll. 47-50, teaches the window timer (i.e. global time allowance) is realized by the operating system 32 but, as will be understood in the art, may also be implemented by discrete circuitry such as an application specific integrated circuit (ASIC); see Col-14, ll. 2-10), wherein, prior to the evaluation, the following step is carried out: determining the local time allowance based on the global time allowance of the first real-time application (Balasubramanian, Col-13, ll. 47-50, teaches the window timer (i.e. global time allowance) is realized by the operating system 32; see Fig. 2, teaches multiple applications; Lin, ¶0089, teaches response time of a task (i.e. local time allowance) may be smaller than or equal (i.e. compare) to the activation period of the task) and based on the structure of the chained tasks of the first real-time application (Lin, ¶0064, teaches the mapping engine 130 defines the path by an interleaving sequence of tasks and signals), the prioritization being carried out in order to honor the global time allowance for each respective real-time application (Balasubramanian, see Fig. 2, teaches multiple real-time applications; Col-14, ll. 2-11 and 22-27, teaches the interrupt window timer 126 is checked to see if the amount of remaining interrupt window is sufficient to allow processing of the current interrupt based on its expected execution period. The execution periods may be entered by the control system programmer and keyed to the interrupt type and number. If sufficient time remains in the interrupt window, the execution period is subtracted from the interrupt window and, as determined by decision block 132, then the interrupt manager 122 proceeds to process block 134… the interrupt window is subtracted from the bandwidth of the processor 26 that may be allocated to user tasks and therefore the allocation of bandwidth for guaranteeing the execution of user tasks is done under the assumption that the full interrupt window will be used by interrupts taking the highest priority).

With respect to claim 5, Balasubramanian in view of Lin in view of Collin in view of Muller, and further in view of Binns discloses the method as recited in claim 1, wherein the steps of the method are carried out for at least two further real-time applications in the network (Balasubramanian, see Fig. 2), the respective message units being forwarded dynamically as a function of the prioritization based on the respective slacks (Balasubramanian, Col-3, ll. 48-57, teaches provide a scheduling of the communication circuit that is both dynamically responsive to timing constraints and which is responsive to statically imposed priorities, thus allowing both efficient use of resources and the ability to guarantee the execution of critical tasks… each new message may be associated with an execution time necessary to transmit the new message on the network and the scheduler may first locate an insertion point of the messages into the queue according to priority; Lin, ¶0041, teaches the set of tasks 206 includes groups of tasks 202 that have been respectively allocated to different ECUs (e.g., ECU 1, ECU 2, and ECU 3), and in FIG. 2B, the set of tasks 206 includes groups of tasks 202 that have been respectively allocated for processing to different OS instances (e.g., OS 1, OS 2, and OS 3)).

With respect to claim 6, Balasubramanian in view of Lin in view of Collin in view of Muller, and further in view of Binns discloses the method as recited in claim 5, wherein the chained tasks of the real-time applications are executed in different nodes of the network (Balasubramanian, Col-3, ll. 31-34, teaches the execution of tasks for real-time control over multiple, spatially separated control components; Col-5, ll. 64-67, teaches distributed control system application programs are allocated between memories 24a, 24b and 24c to be executed on the respective nodes 12a, 12b and 12c; Lin, ¶0041, teaches the set of tasks 206 includes groups of tasks 202 that have been respectively allocated to different ECUs (e.g., ECU 1, ECU 2, and ECU 3), and in FIG. 2B, the set of tasks 206 includes groups of tasks 202 that have been respectively allocated for processing to different OS instances (e.g., OS 1, OS 2, and OS 3); ¶0064, teaches the mapping engine 130 defines the path by an interleaving sequence of tasks and signals), and the steps of the method for the real-time applications are carried out at each of the nodes by a network configurator (Balasubramanian, Col-3, ll. 31-34, teaches the execution of tasks for real-time control over multiple, spatially separated control components; Col-6, ll. 3-7, teaches a program in memory 24a would monitor signals A and B and send a message indicating both were true, or in this example send a message indicating the state of signals A and B to node 12c via a path through communication cards 28a, 28b, 28b' and 28c, wherein the communication cards are the network configurator).
With respect to claim 7, Balasubramanian in view of Lin in view of Collin in view of Muller, and further in view of Binns discloses the method as recited in claim 6, wherein the network includes different routes between the nodes, the routes via which the message unit is forwarded in order to arrive at one of the nodes being decided by the prioritization (Lin, ¶0069, teaches tasks 302 are allocated to computing units 306, whether virtual or physical, and their priorities are assigned. If the source and destination tasks 302 of a signal 304 are allocated to different computing units 306, the signal 304 is mapped to a message 308 on the communication network 302, and its priority is assigned; ¶0107-¶0108, teaches the mapping engine 130 then allocates in block 506, based on the priority of each task, the tasks of each of the paths of the set to corresponding computing units from a set of computing units of an architectural platform, such as the computing device 126 of a vehicle… determine a computing unit to which to assign the first task based on the path slack value of one or more of the paths (i.e. different routes) of the set), each of the message units being in the form of a data packet and each of the nodes being in the form of a computing node for executing the tasks.

With respect to claim 8, Balasubramanian in view of Lin in view of Collin in view of Muller, and further in view of Binns discloses the method as recited in claim 2, wherein the real-time applications are parts of a middleware or of a vehicle operating system or of an autonomous driving function or of a programmable controller (Balasubramanian, Col-3, ll. 31-34, teaches the execution of tasks for real-time control over multiple, spatially separated control components; Lin, ¶0033, teaches the platform(s) 112 are capable of transporting from one point to another. Non-limiting examples of the platform(s) 112 include a vehicle, an automobile, a bus, a boat, a plane, a bionic implant, or any other platforms with non-transitory computer electronics (e.g., a processor, a memory or any combination of non-transitory computer electronics)), at least one of the tasks of a real-time application of the real-time applications being performed to acquire sensor values (Balasubramanian, Col-3, ll. 31-34, teaches the execution of tasks for real-time control over multiple, spatially separated control components; Lin, ¶0035-¶0036, teaches the vehicle data is collected from multiple sensors 114 coupled to different components of the platform(s) 112 for monitoring operating states of these components; the sensor(s) 114 may include any type of sensors suitable for the platform(s) 112. The sensor(s) 114 may be configured to collect any type of signal data suitable to determine characteristics of a platform 112 and/or its internal and external environments), at least another of the tasks of the real-time application being performed to process the acquired sensor values (Balasubramanian, Col-3, ll. 31-34, teaches the execution of tasks for real-time control over multiple, spatially separated control components; Lin, ¶0035-¶0036, teaches the vehicle data is collected from multiple sensors 114 coupled to different components of the platform(s) 112 for monitoring operating states of these components; the sensor(s) 114 may be configured to collect any type of signal data suitable to determine characteristics of a platform 112 and/or its internal and external environments) and at least one other of the tasks of the real-time application being performed to control a machine based on the processing (Balasubramanian, see Fig. 2; Lin, ¶0038, teaches the computing unit(s) 118 may include electronic control units (ECU) implemented in the platform 112 such as a vehicle).
With respect to claims 14 and 15, Balasubramanian in view of Lin in view of Collin in view of Muller, and further in view of Binns discloses the method and the non-transitory computer-readable medium as recited in claims 1 and 9, wherein the structure of the chained tasks includes information about a computing intensity or a maximum calculation duration of the chained tasks (Balasubramanian, Col-7, ll. 34-41, teaches the completion-timing constraint is a maximum period of time, as a maximum calculation duration of the chained tasks; Collin, ¶0049, teaches the task chain management system 12 may be configured to chain, link and/or otherwise connect one or more tasks in a task chain; Binns, ¶0245, teaches if the per-dispatch execution time of αp,j is cpu_time, which includes mutex execution time but not context swap time, and αp,j is executed a maximum of k times in any time interval of duration Tj; ¶0344, teaches a task execution timeline illustrating task execution without slack stealing. Partitions P1 and P2 are defined as critical. Partition P3 is non-essential. P1 uses 10 ms of time; P2 uses 10 ms; and P3 uses a minimum of 5 ms and a maximum of 20 ms.).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GOLAM MAHMUD, whose telephone number is (571) 270-0385. The examiner can normally be reached Mon-Fri 8:00 am-5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Umar Cheema, can be reached at (571) 270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/GOLAM MAHMUD/
Examiner, Art Unit 2458

Prosecution Timeline

Mar 16, 2023
Application Filed
Feb 22, 2024
Non-Final Rejection — §103
May 28, 2024
Response Filed
Sep 20, 2024
Final Rejection — §103
Dec 17, 2024
Request for Continued Examination
Jan 01, 2025
Response after Non-Final Action
Jan 23, 2025
Non-Final Rejection — §103
May 23, 2025
Examiner Interview Summary
May 23, 2025
Applicant Interview (Telephonic)
Jun 30, 2025
Response Filed
Jul 17, 2025
Final Rejection — §103
Dec 22, 2025
Interview Requested
Jan 05, 2026
Applicant Interview (Telephonic)
Jan 05, 2026
Examiner Interview Summary
Jan 21, 2026
Request for Continued Examination
Jan 28, 2026
Response after Non-Final Action
Mar 05, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587442
INFORMATION PROCESSING APPARATUS, METHOD OF REGISTERING DEVICE CONNECTED TO INFORMATION PROCESSING APPARATUS IN SERVER, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12563008
CAPTURING AND UTILIZING CONTEXT DATA IN CROSS-CHANNEL CONVERSATION SERVICE APPLICATION COMMUNICATION SESSIONS
2y 5m to grant Granted Feb 24, 2026
Patent 12556478
BGP Segment Routing optimization by packing multiple prefixes in an update
2y 5m to grant Granted Feb 17, 2026
Patent 12537741
TEMPLATE XSLT BASED NETCONF DATA COLLECTOR
2y 5m to grant Granted Jan 27, 2026
Patent 12531775
ROOT CAUSING NETWORK ISSUES USING CHAOS ENGINEERING
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
61%
Grant Probability
92%
With Interview (+30.7%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 258 resolved cases by this examiner. Grant probability derived from career allow rate.
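The projection figures above follow directly from the examiner-intelligence data shown earlier: the 61% grant probability is the career allow rate (157 granted of 258 resolved), and the 92% with-interview figure adds the +30.7% interview lift. A quick arithmetic check, assuming simple rounding of each figure:

```python
# Reproduce the headline projections from the examiner's career data shown
# above: 157 granted / 258 resolved cases, and a +30.7% interview lift.
granted, resolved = 157, 258

career_allow_rate = granted / resolved               # ~0.6085
grant_probability = round(career_allow_rate * 100)   # rounds to 61

interview_lift = 30.7
with_interview = round(grant_probability + interview_lift)  # rounds to 92
```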
