Prosecution Insights
Last updated: April 19, 2026
Application No. 18/211,902

Computer System and Method for Executing an Automotive Customer Function

Non-Final OA: §103, §112
Filed: Jun 20, 2023
Examiner: AYERS, MICHAEL W
Art Unit: 2195
Tech Center: 2100 (Computer Architecture & Software)
Assignee: TTTech Auto AG
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (200 granted / 287 resolved; +14.7% vs TC avg, above average)
Interview Lift: +56.2% across resolved cases with interview
Typical Timeline: 3y 4m average prosecution; 37 applications currently pending
Career History: 324 total applications across all art units

Statute-Specific Performance

§101: 14.8% (-25.2% vs TC avg)
§103: 47.3% (+7.3% vs TC avg)
§102: 2.9% (-37.1% vs TC avg)
§112: 25.6% (-14.4% vs TC avg)
Tech Center averages are estimates; based on career data from 287 resolved cases.
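The headline allow-rate figures can be reproduced from the raw counts shown above. A minimal sketch; the granted/resolved counts come from the panel, while the Tech Center baseline of 55% is an assumption back-derived from the stated +14.7% delta:

```python
# Reproduce the examiner's career statistics from the raw counts.
granted = 200      # granted applications (from the panel)
resolved = 287     # resolved applications (from the panel)
tc_avg = 0.55      # ASSUMED Tech Center 2100 baseline implied by the +14.7% delta

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")           # ~69.7%, shown as 70%
print(f"Delta vs TC avg:   {allow_rate - tc_avg:+.1%}")  # ~+14.7%
```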

Office Action

DETAILED ACTION

This office action is in response to claims filed 20 June 2023. Claims 1-36 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1-36 are objected to because of the following informalities: Throughout the claims, terms are abbreviated, but are subsequently and repeatedly referred to by their non-abbreviated name. For example, "A computer system (CS)" in line 1 is referred to as "the computer system (CS)" in lines 4, 17, 18, etc., when it should simply read "the CS". Appropriate correction is required.

Claims 1 and 19 are objected to because of the following informalities: In line 3, "comprising an automobile, is controlled" should read "comprising an automobile is controlled". Further, in lines 8-9, "an application" should read "an application of the applications". Further, in line 12, "a computation chain" should read "the computation chain." Further, in line 22, "one or more cores" should read "one or more processing cores of the processing cores." Further, in lines 22 and 25, "a container" should read "a container of the containers." Further, in line 23, "execution of the tasks of the application" should read "execution of the one or more tasks of the application." Further, in line 26, "said container" should read "said inactive container." Further, in line 27, "configured to executed" should read "configured to be executed." Further, in line 29, "his" should read "its". Further, in line 38, "configured to executed" should read "configured to be executed". Appropriate correction is required.

Claims 10 and 28 are objected to because of the following informalities: In lines 2 and 3, the claims recite the term "preferably", which should be removed. Appropriate correction is required.
Claims 17 and 35 are objected to because of the following informalities: In line 3, "WCET" should read "worst case execution time (WCET)". Appropriate correction is required.

Claims 18 and 36 are objected to because of the following informalities: In line 3, "it's" should read "its". Appropriate correction is required.

The examiner has made every attempt to identify as many minor informalities in the claims as possible, and requests that the applicant assist in identifying and correcting any additional informalities that were missed.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-36 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claims 1 and 19 (line numbers correspond to claim 1):

i. In line 5, it is not particularly pointed out or distinctly claimed what is meant by "one or more processing cores, 'core' (Core 1, Core 2, Core 3)." It is not clear whether the scope of the claim is a single core, one or more cores up to three cores, one or more cores, or exactly three cores. For examination purposes the examiner will interpret this as one or more cores.

ii. In line 6, it is not particularly pointed out or distinctly claimed what is meant by "applications (APP1, APP2, APP3)." It is not clear whether the scope of the claim is two or more applications up to three applications, two or more applications, or exactly three applications. For examination purposes the examiner will interpret this as two or more applications.

iii. In lines 7-8, it is not particularly pointed out or distinctly claimed what is meant by "a multitude of different tasks (T1.1 - T1.3, T2.1 - T2.9, T3.1 - T3.4)." It is not clear how many tasks are represented by the dashed line. Further, it is not clear whether the scope of the claim is two or more tasks up to six tasks, two or more tasks, exactly six tasks, or exactly nine tasks. For examination purposes the examiner will interpret this as two or more tasks.

iv. In line 35, it is not particularly pointed out or distinctly claimed what is meant by "the sequence of tasks to be executed." There is a lack of antecedent basis for this term. For examination purposes, the examiner will interpret this as a sequence of the one or more tasks to be executed.

v. In line 5, it is not particularly pointed out or distinctly claimed what is meant by "for each task, which has to be executed, the core or cores which the container provided, on which core or cores the task has to be executed." There is a lack of antecedent basis for the term "the core or cores which the container provided." Are these the same cores that are reserved for the execution of the tasks? If so, these cores are not provided by the container. Further, this limitation makes no sense. Do we "decide…the core or cores which the container provided", or do we "decide…which core or cores the task has to be executed [on]?" For examination purposes, the examiner will interpret this as deciding, for each task, which core or cores reserved for the containers that the task will be executed on.

Regarding claims 12 and 30 (line numbers correspond to claim 12):

i. In line 3, it is not particularly pointed out or distinctly claimed what is meant by "the container cycle." There is a lack of antecedent basis for this term. For examination purposes, the examiner will interpret this as a container cycle.

Regarding claims 13 and 31 (line numbers correspond to claim 13):

i. In line 6, it is not particularly pointed out or distinctly claimed what is meant by "the template." There is a lack of antecedent basis for this term. For examination purposes, the examiner will interpret this as a template.

Regarding claims 16 and 34 (line numbers correspond to claim 16):

i. In lines 5-6, it is not particularly pointed out or distinctly claimed what is meant by "the communication to happen between containers." There is a lack of antecedent basis for this term. For examination purposes, the examiner will interpret this as a communication to happen between containers.

Regarding claims 2-18 and 20-36: they are dependent upon rejected base claims, and fail to resolve the deficiencies thereof. They are therefore rejected for similar rationales.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 9-10, 12-16, 18-25, 27-28, 30-34, and 36 are rejected under 35 U.S.C. 103 as being unpatentable over ZHANG et al.
Patent No.: US 10,432,552 B2 (hereafter ZHANG), in view of FURUICHI et al. Pub. No.: US 2023/0091346 A1 (hereafter FURUICHI), in view of SARANGAM et al. Pub. No.: US 2019/0007280 A1 (hereafter SARANGAM). Regarding claim 1, ZHANG teaches the invention substantially as claimed, including: A computer system (CS) for executing a customer function (CUS)…wherein the customer function (CUS) generates customer function output data (OUT-DAT)…wherein the computer system (CS) comprises: one or more processing cores, "core" (Core 1, Core 2, Core 3) ([Column 2, Lines 20-21] A mapping of the service function to a respective physical node along a specified forwarding path. [Column 9, Lines 29-33] In some embodiments, each physical node may include network elements (e.g., OTN switch, router) and/or compute servers (i.e., processing cores”) and storage elements (e.g., datacenters) capable of invoking a subset of service functions selected from a catalog of service functions), wherein the customer function (CUS) comprises applications (APP1, APP2, APP3), wherein each application (APP1, APP2, APP3) of the customer function (CUS) comprises a multitude of different tasks (T1.1 - T1.3, T2.1 - T2.9, T3.1 - T3.4), wherein during the execution of an application (APP1, APP2, APP3) one or more tasks of said application are executed ([Column 9, Lines 33-41] Some examples of the service functions (i.e., “applications”) provided in these multi-domain networks include firewalls, deep packet inspection (DPI), network address translation (NAT), load balancers, and parental control functions. In one example, a service function chain (i.e., “customer function”) may include a firewall (i.e., a firewall service function implements a multitude of “tasks” including monitoring network traffic, making a determination that traffic meets or does not meet criteria, and allows/blocks traffic, amongst other tasks, as defined by “What is a Firewall? Types of Firewalls and How they Work”. 
Available at https://www.fortinet.com/resources/cyberglossary/firewall. Available on 31 October 2020 ), a deep packet inspection (DPI) service function (i.e., a deep packet service function implements a multitude of “tasks” including examination of data packet content, comparing packet content to rules, and determining how to handle any identified threats, amongst other tasks, as defined by “Deep Packet Inspection (DPI)-Meaning and More”. Available at https://www.fortinet.com/resources/cyberglossary/dpi-deep-packet-inspection. Available on 27 January 2021), a parental control service function, and an anti-virus service function, each of which may be provided by nodes in a different network domain), wherein said applications (APP1, APP2, APP3) are executed in form of a computation chain (CHA) one after the other in a defined sequence ([Column 2, Lines 17-21] The method may include obtaining, by a resource orchestrator in a network, a service function chain specifying, for each of two or more service functions (i.e., at least three “applications”, see “third service function in the service function chain”), a mapping of the service function to a respective physical node along a specified forwarding path (i.e., executing the service functions executes the tasks associated with those service functions)), wherein a computation chain (CHA) receives customer function input data (IN-DAT) at its start and generates customer function output data (OUT-DAT), which are provided at the end of the execution of the computation chain (CHA) ([Column 28, Lines 25-27] The method may include receiving a packet flow and performing the services in the service function chain on behalf of each packet in the packet flow (i.e., packets of the packet flow represent “customer function input data” as network packet data that a user or customer wishes to apply the service function chain on. Furthermore, Fig. 
5 shows a path that a packet takes from being input at the first service function in the service function chain to the third service function, representing “customer function output data”)), and wherein during execution of the customer function (CUS) said computation chain (CHA) is executed once or several times, wherein the computer system (CS) provides containers (CON1, CON2, CON3) ([Column 28, Lines 16-20] At 1206, the method may include the source orchestrator provisioning resources for the SFC request by setting up containers and/or virtual machines on the allocated nodes for execution during the calculated time durations and setting up the selected forwarding path for the SFC (i.e., the containers in the container chain are executed at least once)), wherein the computer system (CS) is configured to activate and de-activate said containers, so that a container is active or inactive, wherein all tasks of the applications are assigned to containers (CON1, CON2, CON3) ([Column 27, Lines 37-45] A just-enough-time approach may be used for configuring and provisioning resources for service function chains in which resources for implementing various service functions, such as virtual machines or containers, are instantiated just in time to be used (e.g., by reserving resources based on estimated arrival times and time durations for each of the service functions in the service function chain) and are subsequently relinquished (e.g., paused or deleted) after processing the last packet of the SFC packet flow (i.e., instantiation of resources just in time activates the container, and relinquishing the resources after completion deactivates the container)), and wherein all tasks of each specific application are assigned to exactly one specific container ([Column 41, Lines 10-17] At 1604, the method may include the source orchestrator instantiating a respective container for the first service function in the SFC on the first physical node allocated for performing the first 
service function at its starting time prior to arrival of the SFC packet flow at the first physical node. At 1606, method 1600 may include resources on the first physical node beginning to perform the first service function when the first SFC packet is received at the first node (i.e., each service function, representing applications and their respective tasks, are allocated to a respective single container)), wherein in a timeframe, during which a container is active, one or more cores of the computer system are exclusively reserved for the execution of the tasks of the application of said container, and wherein the computer system (CS) is configured such that when a container is inactive, the tasks of said container cannot be executed on the computer system ([Column 27, Lines 37-45] A just-enough-time approach may be used for configuring and provisioning resources for service function chains in which resources for implementing various service functions, such as virtual machines or containers, are instantiated just in time to be used (e.g., by reserving resources based on estimated arrival times and time durations for each of the service functions in the service function chain) and are subsequently relinquished (e.g., paused or deleted) after processing the last packet of the SFC packet flow (i.e., resources are “reserved” during a duration of time, representing a “timeframe” during which the container is used, or “active”, and the resources are relinquished outside of the duration of time where the container is “inactive”, thereby disabling the container’s ability to execute service functions and their associated tasks)), wherein the computer system is configured to executed the containers (CON1, CON2, CON3) according to the sequence of the applications (APP1, APP2, APP3), so that a container is activated before his immediately following container, and wherein a container and its immediately following container of a computation chain are not allowed to overlap in 
time, and wherein the computer system is configured to executed the tasks of each container ([Column 27, Lines 37-45] A just-enough-time approach may be used for configuring and provisioning resources for service function chains in which resources for implementing various service functions, such as virtual machines or containers, are instantiated just in time to be used (e.g., by reserving resources based on estimated arrival times and time durations for each of the service functions in the service function chain) and are subsequently relinquished (e.g., paused or deleted) after processing the last packet of the SFC packet flow (i.e., executing the SFC executes the containers in sequence and corresponding tasks of the service functions)). While ZHANG discloses executing containerized applications generally, ZHANG does not explicitly disclose: executing a customer function (CUS) comprising an automotive customer function, wherein the customer function (CUS) generates customer function output data (OUT-DAT), based on which a machine comprising an automobile, is controlled. However, in analogous art that similarly teaches executing containerized applications, FURUICHI teaches: executing a customer function (CUS) comprising an automotive customer function, wherein the customer function (CUS) generates customer function output data (OUT-DAT), based on which a machine comprising an automobile, is controlled ([0054] An operator or owner (i.e., “customer”) of a controlled area receiving unmanned vehicles from a third party vehicle provider to operate in the controlled area loads application containers to control the unmanned vehicle to perform user approved operations (i.e., “automotive customer functions” executed by the container system and used to control the vehicle)). 
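The claim 1 architecture being mapped above (applications executed as a computation chain, one container after another, with no temporal overlap between a container and its immediate successor) can be illustrated with a toy sketch. This is purely an editor's illustration of the claimed concept; none of the names or logic come from the application or the cited references:

```python
# Toy illustration of the claimed computation chain: containers activate
# strictly one after the other, each reserving cores for its tasks while
# active, with the output of one container feeding the next.
# All names here are hypothetical.

def run_computation_chain(containers, input_data):
    """Activate each container in the defined sequence; never overlap."""
    data = input_data                     # customer function input data (IN-DAT)
    for container in containers:
        container["active"] = True        # activation: cores exclusively reserved
        for task in container["tasks"]:   # tasks run only while container is active
            data = task(data)
        container["active"] = False       # deactivation before successor starts
    return data                           # customer function output data (OUT-DAT)

chain = [
    {"tasks": [lambda d: d + 1], "active": False},   # stands in for APP1's container
    {"tasks": [lambda d: d * 2], "active": False},   # stands in for APP2's container
]
print(run_computation_chain(chain, 3))  # (3 + 1) * 2 = 8
```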
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined FURUICHI's teaching of executing containerized applications to control vehicles, with ZHANG's teaching of executing containerized applications, to realize, with a reasonable expectation of success, a system that executes containerized applications, as in ZHANG, to control vehicle operations, as in FURUICHI. A person having ordinary skill would have been motivated to make this combination to enable autonomous vehicles to operate in controlled areas without exposing confidential information gathered while in the controlled area (FURUICHI [0054]).

While ZHANG discloses scheduling tasks of applications on containers, ZHANG and FURUICHI do not explicitly disclose: wherein for each container a task sequencer is provided, wherein said task sequencer is activated when its container is activated, and wherein the task sequencer of a container decides ("task-sequencer-decision"), -which of the tasks of the application of the container have to be executed, -the sequence of tasks to be executed, and -for each task, which has to be executed, the core or cores which the container provided, on which core or cores the task has to be executed, and wherein the computer system is configured to executed the tasks of each container according to said task-sequencer-decision of the task sequencer of each of the containers.
However, in analogous art that similarly schedules tasks on multiple containers, SARANGAM teaches: wherein for each container a task sequencer is provided wherein said task sequencer is activated when its container is activated ([Abstract] Work submission…queues (i.e., “task sequencers”) are implemented in software for each VM or container (i.e., each VM or container implements at least a submission queue)), and wherein the task sequencer of a container decides ("task-sequencer-decision"), -which of the tasks of the application of the container have to be executed ([0036] Generally, a given WE may correspond to a single packet or a sequence of packets, depending on the particular implementation, noting a given implementation may support WEs for both single packets and sequences of packets. The WEs are added to the work submission queue for the tenant VM. In FIG. 2a , this is depicted by a WE 254 that is added to work submission queue 212 (i.e., adding a work element to the submission queue for a tenant VM or container represents a determination that the work element is to be executed)), -the sequence of tasks to be executed ([0037] The work submission queue is implemented as a circular First-in, First-out (FIFO) queue (i.e., FIFO represents an order, or “sequence” of work elements to execute)), and -for each task, which has to be executed, the core or cores which the container provided, on which core or cores the task has to be executed, and wherein the computer system is configured to executed the tasks of each container according to said task-sequencer-decision of the task sequencer of each of the containers ([0050] After DMAing the data, the network controller will determine which queue resource to use, perform work operations such as checksum offload or TSO (if any) defined in the WE, and then send the packet outbound to the network via the network port (i.e., performing work operations by the container represents executing the “task” of the container). 
[0003] Under another virtualization approach, container-based OS virtualization is used that employs virtualized “containers” without use of a VMM or hypervisor. Instead of hosting separate instances of operating systems on respective VMs, container-based OS virtualization shares a single OS kernel across multiple containers, with separate instances of system and software libraries for each container. As with VMs, there are also virtual resources allocated to each container (i.e., each container represents underlying processing resources, or “cores” allocated to the container)). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined SARANGAM’s teaching of submission queues for containers that organize tasks to be executed by the container, with the combination of ZHANG and FURUICHI’s teaching of executing containerized applications, to realize, with a reasonable expectation of success, a system that executes containerized applications with corresponding tasks, as in ZHANG and FURUICHI, through use of queues that organize the tasks for execution, as in SARANGAM. A person having ordinary skill would have been motivated to make this combination so that the system can better manage network control plane operations by submitting work requests using queues that do not experience blocking (SARANGAM [0065]). Regarding claim 2, ZHANG further teaches: the computation chain is executed several times in parallel, wherein the computer system (CS) is configured such that the same containers of different computation chains (CHA) do not overlap in time ([Column 11, Lines 57-59] multiple parallel SFCs may be selected for execution, according to a user preference or an applicable SFC selection policy). 
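The per-container FIFO work submission queue that the Office Action reads as the claimed task sequencer can be sketched roughly as follows. This is a sketch of the cited concept only, with hypothetical names; it is not SARANGAM's implementation (which describes a circular FIFO, approximated here with a deque):

```python
from collections import deque

# Rough sketch of a per-container work submission queue: each container
# owns its own queue, submitting a work element (WE) marks that task for
# execution, and the queue discipline (here FIFO) fixes execution order.
class WorkSubmissionQueue:
    def __init__(self):
        self._q = deque()            # stand-in for SARANGAM's circular FIFO

    def submit(self, work_element):  # adding a WE = "this task will run"
        self._q.append(work_element)

    def drain(self):                 # execute in submission (FIFO) order
        while self._q:
            yield self._q.popleft()

q = WorkSubmissionQueue()
for we in ("pkt-1", "pkt-2", "pkt-3"):
    q.submit(we)
print(list(q.drain()))  # ['pkt-1', 'pkt-2', 'pkt-3']
```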
Regarding claim 3, ZHANG further teaches: the computer system is configured to activate each container and/or each computation chain according to a time-triggered schedule ([Column 27, Lines 37-45] A just-enough-time approach may be used for configuring and provisioning resources for service function chains in which resources for implementing various service functions, such as virtual machines or containers, are instantiated just in time to be used (e.g., by reserving resources based on estimated arrival times and time durations for each of the service functions in the service function chain) and are subsequently relinquished (e.g., paused or deleted) after processing the last packet of the SFC packet flow (i.e., estimated arrival times and time durations represent a “time-triggered schedule”)). Regarding claim 4, ZHANG further teaches: the computer system is configured to activate the containers and/or the computation chains with activation signals, wherein said activation signal is event-triggered ([Column 27, Lines 37-45] A just-enough-time approach may be used for configuring and provisioning resources for service function chains in which resources for implementing various service functions, such as virtual machines or containers, are instantiated just in time to be used (e.g., by reserving resources based on estimated arrival times and time durations for each of the service functions in the service function chain) and are subsequently relinquished (e.g., paused or deleted) after processing the last packet of the SFC packet flow (i.e., estimated arrival times and time durations represent “events” that govern when to activate and deactivate containers)). 
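The time-triggered activation of claim 3, as the Office Action reads ZHANG's just-enough-time provisioning, amounts to activation windows that are a pure function of time. A minimal sketch; all window values and names are hypothetical:

```python
# Sketch of a time-triggered container schedule: each container gets a
# fixed (start, duration) window, and which container is active depends
# only on the current time. Times below are invented for illustration.

schedule = [  # (container, start_ms, duration_ms)
    ("CON1", 0, 10),
    ("CON2", 12, 20),
    ("CON3", 34, 8),
]

def active_container(t_ms):
    """Return the container whose window covers t_ms, or None in a gap."""
    for name, start, dur in schedule:
        if start <= t_ms < start + dur:
            return name
    return None  # gaps between windows leave time for inter-container communication

print(active_container(5))    # 'CON1'
print(active_container(11))   # None (gap between CON1 and CON2)
print(active_container(15))   # 'CON2'
```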
Regarding claim 5, ZHANG further teaches: a priority is assigned to each container, so that if a container with a higher priority than the active container is activated, the active container is deactivated and the container with higher priority is activated ([Column 40, Lines 33-37] The scheduler may attempt to preempt (evict) lower priority containers to make scheduling of pending containers possible. These and other types of queuing delays for service function instantiation may not be suitable for mission-critical applications (i.e., evicting lower priority containers deactivates them to make resources available for higher priority containers)). Regarding claim 6, SARANGAM further teaches: each task sequencer makes its task- sequencer-decision based on a configuration of the task sequencer ([0037] Generally, a work submission queue may be implemented using various types of queue data structures, such as an array, linked list, etc. In one embodiment, the work submission queue is implemented as a circular First-in, First-out (FIFO) queue (i.e., queues may be “configured” to execute sequences of work elements in any order, including FIFO)). Regarding claim 7, SARANGAM further teaches: the configuration comprises priorities of the tasks of the container, wherein a priority is assigned to each task ([0023] The QoS traffic classification implemented by the traffic classifier policies generally may include corresponding QoS policies under which certain traffic classes are prioritized over other traffic classes. In one embodiment, a weighted round-robin scheduling algorithm is implemented to select when packets from the different egress queues are scheduled for outbound transfer onto the network. Under a weighted round-robin scheme, higher weighting is applied to queues for higher QoS (higher priority) traffic classes relative to queues for lower QoS (lower priority) traffic classes. 
In addition to weighted round-robin, other types of scheduling algorithms may be implemented (i.e., packets of a particular traffic class and corresponding work element are prioritized)). Regarding claim 9, ZHANG further teaches: tasks (T1.1 - T1.3, T2.1 - T2.9, T3.1 - T3.4) of a container (CON1, CON2, CON3) are executed in sequence and/or in parallel and/or at least partially overlapping in time ([Column 9, Lines 33-41] Some examples of the service functions (i.e., “applications”) provided in these multi-domain networks include firewalls, deep packet inspection (DPI), network address translation (NAT), load balancers, and parental control functions. In one example, a service function chain may include a firewall, a deep packet inspection (DPI) service function, a parental control service function, and an anti-virus service function, each of which may be provided by nodes in a different network domain (i.e., each of the multiple tasks that make up each service function are executed at least sequentially, such as the firewall service function which sequentially implements monitoring network traffic, making a determination that traffic meets or does not meet criteria, and allowing/blocking traffic)). Regarding claim 10, SARANGAM further teaches: for each application one or preferably more different arrangements for the execution of tasks, so-called "templates" (TEMP2.1, TEMP2.2, TEMP2.3, TEMP2.10), are provided, wherein preferably each template for an application guarantees a correct order of the execution of the task, and wherein a configuration comprises one or more templates or wherein a configuration is a template ([0036] Generally, a given WE may correspond to a single packet or a sequence of packets, depending on the particular implementation, noting a given implementation may support WEs for both single packets and sequences of packets. The WEs are added to the work submission queue for the tenant VM. In FIG. 
2a , this is depicted by a WE 254 that is added to work submission queue 212 (i.e., adding a work element to the submission queue for a tenant VM or container represents a determination that the work element is to be executed). [0037] Generally, a work submission queue may be implemented using various types of queue data structures, such as an array, linked list, etc. In one embodiment, the work submission queue is implemented as a circular First-in, First-out (FIFO) queue (i.e., various “configurations” of queues, representing “templates” indicating different “correct” orders of task execution are provided, including at least one configuration in FIFO order)) Regarding claim 12, SARANGAM further teaches: a task sequencer is configured to choose one of the templates provided for its container, at the start of the container or at the start of the container cycle, and/or to switch between different templates while the container is active ([0037] Generally, a work submission queue may be implemented using various types of queue data structures, such as an array, linked list, etc. In one embodiment, the work submission queue is implemented as a circular First-in, First-out (FIFO) queue (i.e., the organization of the data structure used for the submission queue is selected prior to the container executing)). 
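The template behavior attributed to claim 12 above (a task sequencer choosing one ordering "template" at container start, and optionally switching templates while the container is active) could be illustrated with a toy sequencer. Template names and tasks are hypothetical, not drawn from the application or SARANGAM:

```python
# Toy illustration of template selection and switching: a "template" is
# just a rule that turns a task set into an execution order, chosen at
# container start and swappable while the container remains active.

TEMPLATES = {
    "fifo":     lambda tasks: list(tasks),
    "reversed": lambda tasks: list(reversed(tasks)),
}

class TaskSequencer:
    def __init__(self, template="fifo"):
        self.template = template          # chosen at container start

    def switch(self, template):           # e.g. driven by a task-sequence-adaption step
        self.template = template

    def order(self, tasks):
        return TEMPLATES[self.template](tasks)

seq = TaskSequencer("fifo")
print(seq.order(["t1", "t2", "t3"]))      # ['t1', 't2', 't3']
seq.switch("reversed")
print(seq.order(["t1", "t2", "t3"]))      # ['t3', 't2', 't1']
```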
Regarding claim 13, SARANGAM further teaches: at least one task-sequence-adaption task may be provided for a container, which task-sequence-adaption task is executed while the container is active, wherein the task-sequence-adaption task is configured to receive information from and/or about the computer system, and/or to analyse data and/or the progress of time, and wherein the task-sequence-adaption task is configured to cause the task sequencer to change the template according to the information from and/or about the computer system and/or according to a result of the analysis of said data and/or the progress of time ([0045] Next, during a seventh operation performed in a block 310, the traffic classifier parses the WE metadata to determine the QoS Traffic Class (TC) for the packet flow to which the packet belongs using applicable traffic classification policy rules. [0046] In connection with processing the WE, during in eight operation performed in a block 312, the classified WE is placed in the egress queue corresponding to the QoS traffic class for the packet. [0047] During a ninth operation, as depicted by in a block 314, the VM scheduler uses a scheduling algorithm to select a “winning” egress queue and pulls the WE from the bottom of the queue (the current location pointed to by the head pointer) and adds it to the egress buffer queue (i.e., prior to processing the work element, the work elements are taken from the submission queue according to a first ordering (FIFO), representing a first “template”, meta data, representing “received information” is analyzed, and a new ordering based on classification policy rules and scheduling algorithms is applied to the work elements in the subsequent egress queue, and egress buffer queue thereby changing the “template”)). Regarding claim 14, ZHANG further teaches: the computer system comprises resources, wherein the resources comprise - memory, and/or - communication means, such as communication channels, e.g. 
between processors and/or between cores, and/or - software, such as an operating system, scheduler(s) for tasks, container, etc., and wherein at least some of said resources and/or at least parts of said resources or all of said resources are exclusively assigned to a specific container (CON1, CON2, CON3), when said container is active, so that when said specific container is active, only tasks of an application (APP1, APP2, APP3) of said container can use said exclusively assigned resources ([Column 2, Lines 20-21] A mapping of the service function to a respective physical node along a specified forwarding path. [Column 9, Lines 29-33] In some embodiments, each physical node may include network elements (e.g., OTN switch, router) (i.e., “communication means”) and/or compute servers and storage elements (e.g., datacenters) (i.e., memory) capable of invoking a subset of service functions selected from a catalog of service functions). Regarding claim 15, ZHANG further teaches: each container (CON1, CON2, CON3) receives its input data at its activation point in time and/or provides its output data to the computer system (CS) before the de-activation point in time ([Column 27, Lines 37-45] A just-enough-time approach may be used for configuring and provisioning resources for service function chains in which resources for implementing various service functions, such as virtual machines or containers, are instantiated just in time to be used (e.g., by reserving resources based on estimated arrival times and time durations for each of the service functions in the service function chain) and are subsequently relinquished (e.g., paused or deleted) after processing the last packet of the SFC packet flow (i.e., a container receives an input packet at the estimated packet arrival time, and finishes processing at the expiration of the time duration)). 
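ZHANG's just-enough-time provisioning, as mapped to claims 15 and 16 (a container receives input at its activation point, relinquishes resources at its de-activation point, and consecutive containers of a chain are separated by a temporal distance for communication), can be sketched as below. The class, chain, and timeline values are hypothetical, not from the reference.

```python
from dataclasses import dataclass


@dataclass
class ContainerSlot:
    """One container's reserved execution window (just-enough-time style)."""
    name: str
    activation: float    # estimated input-arrival time
    deactivation: float  # activation plus the reserved time duration

    def is_active(self, t: float) -> bool:
        # Exclusively assigned resources are held only within the window.
        return self.activation <= t < self.deactivation


# Hypothetical computation chain CON1 -> CON2 -> CON3. Each slot ends before
# the next begins, leaving a temporal distance for inter-container
# communication while the chain's overall latency budget is still met.
chain = [
    ContainerSlot("CON1", activation=0.0, deactivation=4.0),
    ContainerSlot("CON2", activation=5.0, deactivation=9.0),
    ContainerSlot("CON3", activation=10.0, deactivation=14.0),
]

for prev, nxt in zip(chain, chain[1:]):
    assert nxt.activation > prev.deactivation  # gap left for communication
```

The assertion at the end expresses the claim 16 constraint: the de-activation point of one container strictly precedes the activation point of the next container in the chain.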
Regarding claim 16, ZHANG further teaches: the de-activation point in time of a container (CON1, CON2) of a computational chain (CHA) and the activation point in time of the directly following container (CON2, CON3) of said computational chain (CHA) are arranged in a temporal distance which is sufficient to ensure all latency requirements of all computation chains while allowing at least sufficient time for the communication to happen between containers ([Column 27, Lines 37-45] A just-enough-time approach may be used for configuring and provisioning resources for service function chains in which resources for implementing various service functions, such as virtual machines or containers, are instantiated just in time to be used (e.g., by reserving resources based on estimated arrival times and time durations for each of the service functions in the service function chain) and are subsequently relinquished (e.g., paused or deleted) after processing the last packet of the SFC packet flow. [Column 26, Lines 16-22] In some embodiments, the systems and methods described herein for “just-enough-time” mapping and provisioning of service function chains may take advantage of this information to guarantee latency for mission-critical applications, reduce queuing delays for setup, and increase resource efficiency (i.e., time duration ensures packet processing between containers in a chain occurs while still guaranteeing latency requirements are met)). Regarding claim 18, ZHANG further teaches: each application and/or container communicates exclusively with the computer system, and only at the start and at the end of its execution ([Column 13, Lines 36-48] Upon the arrival of an SFC request, a source orchestrator may send the SFC request to all participating orchestrators and may coordinate all orchestrators to execute the compute( ) function in each superstep. 
During each superstep, these compute functions may be executed substantially in parallel on vertices (nodes) in different domains, but they may synchronize with each other at the end of each superstep. For example, before moving on to the next superstep, the resource orchestrators may ensure that message exchanges for the current superstep have ceased and that all of the vertices received the controller messages that they were supposed to receive from the other vertices over the control channels (i.e., the source orchestrator, representing the “computer system” communicates with containers once when it sends the SFC request to all participating orchestrators, at the “start of execution,” and then again to synchronize, at the end of the superstep, or “end of execution”)). Regarding claims 19-25, 27-28, 30-34, and 36, they comprise limitations similar to those of claims 1-7, 9-10, 12-16, and 18, and are therefore rejected for similar rationale. Claims 8 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over ZHANG, in view of FURUICHI, in view of SARANGAM, as applied to claims 1 and 19 above, and in further view of BEQUET et al. Pub. No.: US 2021/0141623 A1 (hereafter BEQUET). Regarding claim 8, while ZHANG, FURUICHI, and SARANGAM teach execution of application tasks using containers, ZHANG, FURUICHI, and SARANGAM do not explicitly teach: each task sequencer determines dependencies of tasks within its container and checks every time the execution of a task is finished, which task can be executed next, based on the configuration of the task sequencer. 
However, in analogous art that similarly executes tasks using containers, BEQUET teaches: each task sequencer determines dependencies of tasks within its container and checks every time the execution of a task is finished, which task can be executed next, based on the configuration of the task sequencer ([0011] Receive, at the at least one processor and from a requesting device via a network, a request to perform a job flow, wherein: the job flow is defined in a job flow definition that specifies a set of tasks to be performed via execution of a corresponding set of task routines during a performance of the job flow, and that specifies data dependencies among the set of tasks…The at least one processor is also caused to, within a first performance container, execute instructions of a first instance of a performance routine to cause the at least one processor to, in response to the storage of the job performance request message within the job queue, perform operations including: based on the data dependencies among the set of tasks, derive an order of performance of the set of tasks that specifies at least a first task of the set of tasks to be performed (i.e., the processor acts as a task sequencer that determines dependencies between tasks and an order of execution by a container)). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined BEQUET’s teaching of determining task execution sequence based on task dependencies, with the combination of ZHANG, FURUICHI, and SARANGAM’s teaching of executing tasks of applications using containers, to realize, with a reasonable expectation of success, a system that executes tasks of applications using containers, as in ZHANG, FURUICHI, and SARANGAM, according to an order of execution based on task dependency, as in BEQUET. 
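Deriving an order of performance from data dependencies, as BEQUET describes, corresponds to a standard topological scheduling pattern. A minimal sketch (Kahn's algorithm, with hypothetical task names) of a sequencer that re-checks which tasks are runnable each time one finishes:

```python
from collections import deque


def dependency_order(tasks, deps):
    """Illustrative task sequencer: after each task finishes, check which
    tasks now have all dependencies satisfied and may run next.
    `deps` maps a task to the set of tasks it depends on."""
    remaining = {t: set(deps.get(t, ())) for t in tasks}
    order = []
    ready = deque(t for t, d in remaining.items() if not d)
    while ready:
        t = ready.popleft()
        order.append(t)                      # "execute" the task
        for other, d in remaining.items():   # re-check runnability after finish
            if t in d:
                d.discard(t)
                if not d:
                    ready.append(other)
    if len(order) != len(tasks):
        raise ValueError("cyclic dependencies")
    return order


# T2 and T3 depend on T1; T4 depends on both T2 and T3.
print(dependency_order(["T1", "T2", "T3", "T4"],
                       {"T2": {"T1"}, "T3": {"T1"}, "T4": {"T2", "T3"}}))
# → ['T1', 'T2', 'T3', 'T4']
```

A task enters the ready queue only once, at the moment its last outstanding dependency completes, so the returned order always respects the dependency graph.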
A person having ordinary skill would have been motivated to make this combination to reduce job failures by scheduling jobs based on their dependencies (BEQUET [0009]). Regarding claim 26, it comprises limitations similar to claim 8, and is therefore rejected for similar rationale. Claims 11 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over ZHANG, in view of FURUICHI, in view of SARANGAM, as applied to claims 1 and 19 above, and in further view of MCCLYMONT Pub. No.: US 2021/0103477 A1 (hereafter MCCLYMONT). Regarding claim 11, while ZHANG, FURUICHI, and SARANGAM discuss execution of application tasks by containers, ZHANG, FURUICHI, and SARANGAM do not explicitly teach: an external component, "sequence auditor", is provided, which external component receives after each execution of a container the sequence in which the tasks were executed or information about said sequence and compares this sequence or information to the template, according to which the tasks have been executed, in order to detect incorrect execution orders. However, in analogous art that similarly teaches execution of tasks by containers, MCCLYMONT teaches: an external component, "sequence auditor", is provided, which external component receives after each execution of a container the sequence in which the tasks were executed or information about said sequence and compares this sequence or information to the template, according to which the tasks have been executed, in order to detect incorrect execution orders ([0099] Process 400 further comprises aggregating the execution results with other execution results from one or more other containers to form aggregated execution results, where at least one of the one or more actions being performed are based on the aggregated execution results. 
In some implementations, where the execution results were sorted based on one or more of whether the execution results resulted from executing a certain type of script, or respective types of the execution results, performing the one or more actions may include performing the one or more actions based on determining whether the set of rules are satisfied based on sorting the execution results (i.e., execution results are received in an “incorrect execution order” and are sorted into a correct order so as to satisfy the set of rules)). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined MCCLYMONT’s teaching of sorting execution output into correct order, with the combination of ZHANG, FURUICHI, and SARANGAM’s teaching of executing jobs of applications in containers, to realize, with a reasonable expectation of success, a system that executes jobs of applications in containers, as in ZHANG, FURUICHI, and SARANGAM, and reassembles the outputs in correct execution order, as in MCCLYMONT. A person having ordinary skill would have been motivated to make this combination to ensure an output correctly complies with a desired set of rules (MCCLYMONT [0011]). Regarding claim 29, it comprises limitations similar to claim 11, and is therefore rejected for similar rationale. Claims 17 and 35 are rejected under 35 U.S.C. 103 as being unpatentable over ZHANG, in view of FURUICHI, in view of SARANGAM, as applied to claims 1 and 19 above, and in further view of NIXON et al. Patent No.: US 11,635,980 B2 (hereafter NIXON). 
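The "sequence auditor" of claims 11 and 29 reduces to comparing the sequence in which a container's tasks were actually executed against the template that was in force. A minimal sketch, with hypothetical function and task names:

```python
def audit_sequence(executed, template):
    """Illustrative external 'sequence auditor': after a container run, compare
    the reported execution sequence against the template and return every
    position where the two orders diverge (empty list = correct order)."""
    mismatches = [
        (i, expected, actual)
        for i, (expected, actual) in enumerate(zip(template, executed))
        if expected != actual
    ]
    if len(executed) != len(template):
        # Length mismatch is itself an incorrect execution order.
        mismatches.append((min(len(executed), len(template)), None, None))
    return mismatches


assert audit_sequence(["T1", "T2", "T3"], ["T1", "T2", "T3"]) == []
assert audit_sequence(["T1", "T3", "T2"], ["T1", "T2", "T3"]) == [
    (1, "T2", "T3"),
    (2, "T3", "T2"),
]
```

Reporting positions rather than a single pass/fail flag makes it easy for the external component to log exactly where the container deviated from its template.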
Regarding claim 17, while ZHANG, FURUICHI, and SARANGAM discuss execution of application tasks within containers, ZHANG, FURUICHI, and SARANGAM do not explicitly teach: the timeframe of a container, which is a sum of the durations of the container time-slots of said container (CON1, CON2, CON3), corresponds to the WCET or at least to the WCET of the tasks of the application (APP1, APP2, APP3) which is executed in said container. However, in analogous art that similarly discusses execution of tasks within containers, NIXON teaches: the timeframe of a container, which is a sum of the durations of the container time-slots of said container (CON1, CON2, CON3), corresponds to the WCET or at least to the WCET of the tasks of the application (APP1, APP2, APP3) which is executed in said container ([Column 23, Lines 49-67] Custom calculation engines and containers such as engine 304 and container 302 may be validated by the control system manufacturer or another entity to ensure that the custom calculation engine and container do not negatively impact execution of function block diagrams or the associated control strategy. 
The validation analysis may include, for example, (1) analyzing computational complexity of the custom control algorithm; (2) analyzing dependencies of the container and the custom calculation engine (for static and/or dynamic dependencies); (3) analyzing worst-case execution time of a dynamic computational algorithm executed within the container; (4) analyzing information security to ensure that the container is not recording, redirecting, or consuming data other than what is defined within the shadow block interface with the container; (5) analyzing compile-time runtime analysis of the custom calculation engine and container instantiation to ensure that the deployed custom calculation engine container image instance is well-behaved during execution (i.e., determining whether the timeframe of execution for a given container is “well-behaved” means analyzing the worst-case execution time of an algorithm comprising plural tasks that the container executes)). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined NIXON’s teaching of establishing an execution timeframe for a container based on a worst case execution time of the container, with the combination of ZHANG, FURUICHI, and SARANGAM’s teaching of establishing an execution timeframe for a container, to realize, with a reasonable expectation of success, a system that establishes an execution timeframe for a container, as in ZHANG, FURUICHI, and SARANGAM, based on the worst case execution time of the tasks the container executes, as in NIXON. A person of ordinary skill would have been motivated to make this combination to ensure that containers are well-behaved when executing. Regarding claim 35, it comprises similar limitations to claim 17, and is therefore rejected for similar rationale. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL W AYERS whose telephone number is (571)272-6420. The examiner can normally be reached M-F 8:30-5 PM. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MICHAEL W AYERS/Primary Examiner, Art Unit 2195

Prosecution Timeline

Jun 20, 2023
Application Filed
Oct 30, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547446
Computing Device Control of a Job Execution Environment Based on Performance Regret of Thread Lifecycle Policies
2y 5m to grant Granted Feb 10, 2026
Patent 12498950
SIGNAL PROCESSING DEVICE AND DISPLAY APPARATUS FOR VEHICLE USING SHARED MEMORY TO TRANSMIT ETHERNET AND CONTROLLER AREA NETWORK DATA BETWEEN VIRTUAL MACHINES
2y 5m to grant Granted Dec 16, 2025
Patent 12493497
DETECTION AND HANDLING OF EXCESSIVE RESOURCE USAGE IN A DISTRIBUTED COMPUTING ENVIRONMENT
2y 5m to grant Granted Dec 09, 2025
Patent 12461768
CONFIGURING METRIC COLLECTION BASED ON APPLICATION INFORMATION
2y 5m to grant Granted Nov 04, 2025
Patent 12423149
LOCK-FREE WORK-STEALING THREAD SCHEDULER
2y 5m to grant Granted Sep 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
70%
Grant Probability
99%
With Interview (+56.2%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 287 resolved cases by this examiner. Grant probability derived from career allow rate.
