Prosecution Insights
Last updated: April 19, 2026
Application No. 18/082,873

METHODS AND SYSTEMS FOR EFFICIENT AND SECURE NETWORK FUNCTION EXECUTION

Final Rejection: §103, §112
Filed: Dec 16, 2022
Examiner: NGUYEN, STEVEN C
Art Unit: 2451
Tech Center: 2400 — Computer Networks
Assignee: UNIVERSITY OF SOUTHERN CALIFORNIA
OA Round: 2 (Final)

Grant Probability: 62% (Moderate)
OA Rounds: 3-4
To Grant: 3y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% (grants 62% of resolved cases; 254 granted / 413 resolved; +3.5% vs TC avg)
Interview Lift: +50.6% in resolved cases with interview (strong lift)
Avg Prosecution: 3y 8m typical timeline; 27 currently pending
Total Applications: 440 career history, across all art units

Statute-Specific Performance

§101: 13.8% (-26.2% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 6.7% (-33.3% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)

Deltas are vs. Tech Center average estimate • Based on career data from 413 resolved cases
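The headline figures above follow directly from the career counts. A minimal sketch, assuming a Tech Center average of 58.0% (back-computed from the stated "+3.5% vs TC avg" delta, not given directly in the report):

```python
# Sketch: deriving the dashboard's headline allow-rate figures from the
# career counts reported above. The 58.0% Tech Center average is an
# assumption inferred from the stated "+3.5% vs TC avg" delta.
granted = 254
resolved = 413
tc_avg = 0.580  # assumed, not stated directly in the report

allow_rate = granted / resolved               # 254 / 413 ≈ 0.615
print(f"Career allow rate: {allow_rate:.0%}")          # displayed as 62%
print(f"Delta vs TC avg: {allow_rate - tc_avg:+.1%}")  # displayed as +3.5%
```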

Office Action

§103 §112
DETAILED ACTION

1. This action is responsive to the communications filed on 08/22/2025.
2. Claims 1-11 and 13-21 are pending in this application.
3. Claims 1, 8-11, 13, and 17-20 have been amended.
4. Claim 12 has been cancelled.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-11 and 13-21 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 recites the limitation "a worker node including a plurality of processing cores each executing different network functions on different packets from the traffic flow independently of each other…". Applicant's specification does not describe "independently of each other." The specification discloses that the network functions share resources, such as packet memory, packet buffers, and network function state memory (Specification, Paragraph 61). The only time the specification mentions "independent" is in Paragraph 117, which does not deal with executing different network functions on different packets…independently of each other. Claims 13 and 20 recite similar claim language.

Claims 11 and 18 recite "exclusively". Applicant's specification states that each network function…share a memory region (Specification, Paragraph 55). Applicant states that only the first network function in the network function chain can access the NIC packet queue (Specification, Paragraph 91), which solves the issue of violating packet isolation (Specification, Paragraph 90). Applicant goes on to state that "network functions run in the order they appear in the chain, even if a downstream network function has access to shared memory, the downstream network function cannot access a batch that has not been processed by an upstream network function…" (Paragraph 92). However, this does not explicitly describe that "the first and the second network functions in the network function chain exclusively share the memory region…", as it does not disclose that only the first and second network functions access the shared memory, just that the second network function cannot access a batch that has not been processed by the first network function.

Response to Arguments

Applicant's arguments with respect to claims 1-11 and 13-21 have been considered but are moot in view of the new grounds of rejection.
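As background, the ordering mechanism from specification paragraph 92 that the discussion above turns on can be illustrated with a short sketch (all names here are hypothetical, not the applicant's code): network functions operate in place on one shared region with no packet copies, and a downstream function is only ever run on a batch the upstream function has finished.

```python
# Hypothetical sketch of the mechanism in specification paragraph 92:
# network functions in a chain operate in place on one shared memory
# region (no packet copies), and the scheduler runs the chain in order,
# so a downstream NF never sees a batch the upstream NF has not finished.
def firewall(batch):          # first network function (illustrative)
    batch["fw_done"] = True

def nat(batch):               # second network function (illustrative)
    # Ordering guarantee: the upstream NF already processed this batch.
    assert batch["fw_done"], "downstream NF saw an unprocessed batch"
    batch["nat_done"] = True

def run_chain(chain, shared_region):
    # Scheduler-enforced order: each NF finishes all batches before the
    # next NF in the chain is allowed to touch the shared region.
    for nf in chain:
        for batch in shared_region:
            nf(batch)         # in place; no copy between NFs

shared_region = [{"fw_done": False}, {"fw_done": False}]
run_chain([firewall, nat], shared_region)
```

Note that, as the rejection observes, ordered access of this kind does not by itself establish that *only* these two functions can reach the shared region.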
Although a new ground of rejection has been used to address additional limitations that have been added to the claims, a response is considered necessary for several of applicant's arguments, since the Kutch reference will continue to be used to meet several claimed limitations. In the remarks, applicant argued that:

a. Kutch does not disclose a worker node that includes multiple processing cores, a scheduler that schedules execution of the first network function, and an agent that creates a network function chain and assigns execution to one of the processing cores. In relation to now-canceled claim 12, the Office Action cites paragraphs 141 and 142 of Kutch. These paragraphs describe a work manager (QMD) copying received traffic to a queue element (QE). Kutch discloses multiple cores for load balancing, but does not disclose a worker node with the agent and scheduler that allow multiple different network functions to be executed by processing cores on different packets. The present amended claims allow multiple different network function chains running on cores, thus allowing support of different network functions. The load balancing in Kutch is balanced across an individual network function, which can be deployed on workload accelerators and the host CPU. This is in contrast to a configuration that adds an entire network function chain to an assigned processing core, rather than distributing work across individual network functions. As explained above, Kutch uses a QMD to distribute copied packets in QEs to cores. The QMD is not the same as an ingress module that does not copy the packets but separates packets for distribution to the plurality of cores for the appropriate different execution of network functions on the different packets. Thus, the amended independent claims are allowable over Kutch (Applicant's remarks, pages 7-8).

In response: The examiner respectfully disagrees.
The examiner has equated the claimed worker node to the QMD (also called work manager) of Kutch. Kutch shows that the QMD is connected to multiple cores through a ring interconnect. The ring interconnect connects the cores 1706-1712 with the QMD 1700 (Paragraph 145, Figure 17). Each of these cores is heterogeneous with different processing capabilities; for example, one core could provide cryptographic operations while another core provides ACL operations (Paragraphs 63, 86). Kutch shows that the QMD includes a scheduler 1716 (claimed scheduler) and a Host Interface Manager 1612 (claimed agent) (Figures 16-17). The Host Interface Manager is configured as a work manager agent and translates communications from/to the QMD (Paragraph 136). The scheduler chooses a buffer and selects one or more requests from the chosen buffer, and the selected requests are executed by either the enqueue engine or the dequeue engine (i.e., different network functions) (Paragraph 147). Although applicant argues that the QMD is not the same as the ingress module, the examiner did not equate them as such. The ingress module in the claims is equated to the Network Interface Manager 1810. The NIM receives traffic and manages queue descriptors and storage of buffer content in one or more memory devices. The packets are sent out of order to separate slow and fast path data. Therefore, the combination of Kutch and Paramasivam discloses the claims as filed.

b. Dependent claims 8 and 17 have been amended to recite that the storage device includes code for executing a second network function distinct from the first network function and the second network function is assigned to the network function chain. Amended dependent claims 8 and 17 are separately allowable over the cited references. The Office Action has cited paragraph 59 of Kutch as disclosing network function chains.
However, this paragraph discloses that a virtual network function (VNF) can dynamically program a flow for traffic based upon rules (i.e., network function chain), such as to drop packets, forward packets, decrypt packets, or assign priority to a flow based on a packet header (i.e., all network functions). This paragraph describes what a VNF can do, i.e., applying access control and priority on packets. However, it does not describe the concept of executing multiple network functions as a chain on a single processing core. Paragraph 59 of Kutch only discloses a single network function that performs basic packet processing based upon rules that are part of the single network function. There is no disclosure in Kutch of a second network function that may be assigned to a network function chain by the agent. The importance of this is to leverage use of closed-source (i.e., no code available) network functions from different vendors. There is no way to achieve so many different network functions within one VNF in Kutch (Applicant's remarks, pages 8-9).

In response: The examiner respectfully disagrees. Kutch disclosed having a core process cryptographic operations while another core processes ACL operations (Paragraph 86). This shows that there are a plurality of network functions that are performed. For assigning the network functions, please see the updated rejection with the Paramasivam reference applied.

c. Amended claim 9 and new claim 21 are separately allowable over the cited references. As explained in paragraph 66 of the printed publication, each worker node includes a network interface controller that uses single-root input/output virtualization to allow a PCIe-based NIC to appear as many physical virtual network interface cards. The agent manages the virtualized NIC to exclusively route packets to a network function chain, as explained in paragraphs 67 and 89-90 of the printed publication.
The virtualized NICs reduce packet steering overhead as explained in paragraph 66 of the printed publication. Neither Kutch nor Sharma discloses a NIC that executes a plurality of virtualized network interfaces as now recited in the amended claims. The Office Action cites paragraphs 52 and 55 of Kutch as disclosing a network interface card forming a virtualized network function in relation to claim 9. Paragraph 52 of Kutch only discloses a network interface card that directs traffic to the offload processor. Similarly, paragraph 58 discloses that a network interface manager may send traffic to and from the NIC, but does not disclose that the NIC executes a plurality of virtualized network interfaces. Kutch and Sharma also do not disclose exclusively dedicating a virtualized network interface to a network function chain as now recited in amended claim 9. Claims 9 and 21 are thus separately allowable over the references (Applicant's remarks, pages 9-10).

In response: The examiner respectfully disagrees. Kutch disclosed that virtual functions are also considered virtual interfaces (Paragraph 90) and that the VNFs are linked together to form a service chain (Paragraph 103). This is done through the NIC that provides the ingress traffic to a packet processing pipeline, where the NIC can either direct packets to the offload processor or host memory (Paragraphs 52, 55). As a note, Kutch also discloses the usage of single-root input/output virtualization (Paragraph 150).

d. Amended claims 10 and 17 are separately allowable over Kutch. These claims recite that the first network function is executed to process the packets and places the second network function in a work queue. The Office Action has cited paragraph 59 of Kutch that discloses the execution of rules. As explained above, the rules are not network functions. Even if such rules were construed as network functions, the rules are processed in sequence on a packet.
Kutch thus does not disclose executing all the packets and executing the second network function only after completion of the processing of all the packets by the first network function, as recited in amended claims 10 and 17 (Applicant's remarks, page 10).

In response: The examiner respectfully disagrees. Kutch disclosed that the buffers are chosen based on a scheduling policy, such as weighted round robin. This means that the scheduler serves each buffer sequentially based on their priority. As such, each higher priority buffer would be served first before a lower priority one would be started.

e. Claims 11 and 18 have been amended to recite that the first and second network functions exclusively share the memory region for storing the incoming packets and avoid copying the packets from the first network function to the second network function. These amendments are supported by at least paragraph 92 of the printed publication. Exclusive sharing of the memory region is required for the execution of multiple network functions in a network function chain in sequence. As explained in paragraph 92 of the printed publication, the scheduler enforces packet accessing order for network functions downstream in the network function chain to guarantee that the network functions are executed on the packets in sequence and the downstream network functions are executed after completion of a preceding network function. The Office Action has asserted that paragraphs 69 and 76 of Kutch disclose not copying all traffic to the host memory and storing some of the traffic in buffers and having the CPU and the WAT share system memory. Kutch does not disclose sharing a memory region exclusively between different network functions. The CPU and WAT sharing some memory does not disclose exclusively sharing between different network functions as recited in amended claims 11 and 18.
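For reference, the weighted-round-robin buffer service discussed in the response to argument (d) above can be sketched as follows (illustrative only, not Kutch's implementation): each buffer carries a weight tied to its priority, and the scheduler serves requests from the head of each buffer in proportion to that weight, so higher-priority buffers drain faster, though in classic WRR buffers are interleaved rather than fully drained one at a time.

```python
# Illustrative weighted-round-robin scheduler over prioritized request
# buffers. All names and weights here are hypothetical.
from collections import deque

def weighted_round_robin(buffers, weights):
    """Yield requests from buffers; higher weight means served more often."""
    while any(buffers):
        for buf, weight in zip(buffers, weights):
            # Serve up to `weight` requests from the head of this buffer.
            for _ in range(weight):
                if not buf:
                    break
                yield buf.popleft()

high = deque(["h1", "h2", "h3"])  # higher-priority buffer, weight 2
low = deque(["l1", "l2"])         # lower-priority buffer, weight 1
order = list(weighted_round_robin([high, low], [2, 1]))
```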
Kutch does not disclose that the CPU and WAT constitute network functions or that the memory is shared exclusively (Applicant's remarks, page 10).

In response: The examiner respectfully disagrees. Please see the examiner's rejection under 112(a) regarding claims 11 and 18. Kutch disclosed that not only do the CPU and WAT share system memory, but the cores share at least one cache and share use of at least one memory device (Paragraph 342). As the cores have the network functions in them, the network functions then also share cache and memory.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4.
Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-11 and 13-21 are rejected under 35 U.S.C. 103 as being unpatentable over Kutch et al. (US 2021/0117360) in view of Paramasivam (US 2017/0317932).

Regarding claim 1, Kutch disclosed: A network function virtualization platform (Paragraph 54, Network and Edge Acceleration Tile (NEXT)) providing network functions for traffic flow of a network (Paragraph 55, providing a NIC to provide ingress traffic to a packet processing pipeline), the platform comprising: a worker node (Figure 17, QMD 1700) including a plurality of processing cores (Figure 17, cores 1706-1712) each executing different network functions (Paragraph 145, enqueue or dequeue requests) on different packets from the traffic flow independently of each other, a scheduler (Figure 17, scheduler 1716), and an agent (Figure 16, Host Interface Manager (HIM) 1612) (Paragraph 63, cores can be heterogeneous devices with different processing capabilities. Paragraph 86, one core could request to provide cryptographic operations and another core could request to perform ACL operations. Paragraph 100, cores include a processing core or a GPU core. Therefore, as the cores can be heterogeneous, request different operations, and include different types of cores, they would process different packets independently. Paragraph 145, a high speed interconnect connects the CPU cores 1706-1712 with the Queue Management Device (QMD) 1700. QMD 1700 receives enqueue and dequeue requests sent out by CPU cores. The QMD sends acknowledgements back regarding enqueue and dequeue requests, with the QMD including buffers, a scheduler, an enqueue engine, a dequeue engine, a credit pool controller, and an internal storage unit. Paragraph 136, the Host Interface Manager (HIM) is configured as a work manager agent and translates or transfers communications from the QMD to NEXT 1610 or vice versa.
Paragraph 137, the HIM uses the work manager interface to communicate with work manager 1600 (which is QMD 1700, see Paragraph 145) (i.e., a worker node including an agent). Figure 16 shows work manager 1600 (QMD) connected to HIM 1612); a storage device (Figure 17, internal storage unit 1724) accessible by the worker node storing code for executing a first network function (Figure 10, VNF 1010) and a runtime (Paragraph 59, priority) (Paragraph 59, a virtual network function (VNF) dynamically programs a flow for traffic based upon rules (i.e., executing the network function), such as to drop packets, forward packets, decrypt packets, or assign priority to a flow based on a packet's header (i.e., runtimes). Paragraph 148, the internal storage unit inserts or retrieves data items based on the enqueue/dequeue requests); a processor (Figure 18, CPU 1830) executing an ingress module (Figure 18, Network Interface Manager (NIM) 1810) receiving network traffic flow (Paragraphs 55, 152, traffic) and separating (Paragraph 55, separation of slow and fast path data) packets for performance of the first network function (Paragraph 55, the NIM receives traffic from the NIC and manages queue descriptors and storage of buffer content in one or more memory devices before passing packets to the next stage of the packet processing pipeline. Packets are placed within NIM memory and sent out of order (i.e., separating) to achieve separation of the slow and fast data path); and a controller (Figure 18, Load Balancer 1812) coupled to the ingress module (Figure 18, NIM 1810) and the agent (Figure 12, showing the NIM having a connection to the HIM through flexible packet processor 1206. Paragraph 112, the NIM processes data and provides that data to the FXP 1206, which, in turn, sends it to the HIM. Paragraph 136, the HIM takes communications and transfers them to the work manager (QMD).
As such, the load balancer of the NIM is coupled to the NIM and the HIM, with a connection to the QMD (i.e., all coupled)), wherein the controller controls the ingress module to route the separated packets to the worker node (Paragraph 162, Figure 19, the NIM (ingress module) receives packets and stores them in a ring buffer (such as the ring 1702 in Figure 17) and determines high priority and low priority packets to pass to the next packet processing stage, in this case, the QMD 1700), wherein the scheduler schedules execution of the first network function on the packets (Paragraph 147, the scheduler chooses a buffer according to a scheduling policy and schedules each request for execution by the enqueue or the dequeue engine according to the request type).

While Kutch disclosed an agent (see above), Kutch did not explicitly disclose that the agent creates a network function chain, assigns the first network function to the network function chain, and assigns execution of the network function chain to one of the processing cores of the plurality of processing cores of the worker node. However, in an analogous art, Paramasivam disclosed the agent creates a network function chain (Paragraph 14, the controller (application delivery controller) (i.e., agent) uses rankings to select a highest ranking service chain of the plurality of service chains to generate a subset of service chains); assigns the first network function to the network function chain (Paragraph 14, selecting a predetermined number of highest ranking service chains to generate a subset of service chains, and Paragraph 15, then identifying one or more instances of a first service that are missing from the subset.
Then, modifying the subset to include one or more additional service chains (i.e., first network function)); assigns execution of the network function chain to one of the processing cores of the plurality of processing cores of the worker node (Paragraph 193, functions (of the service chain) are assigned to the cores. Paragraph 269, the controller determines the placement of instances (see Paragraph 8, instances are grouped to form a service chain)).

One of ordinary skill in the art would have been motivated to combine the teachings of Kutch with Paramasivam because the references involve service chaining and, as such, are within the same environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the creating and assigning of the network function chain of Paramasivam with the teachings of Kutch in order to efficiently provide services for instances that are distributed (Paramasivam, Paragraph 2).

Regarding claims 13 and 20, the claims are substantially similar to claim 1. Claim 20 recites a non-transitory computer readable medium…executed by a processor (Kutch, Paragraph 330, computer-readable medium. Paragraph 332, processor). Therefore, the claims are rejected under the same rationale.

Regarding claim 2, the limitations of claim 1 have been addressed. Kutch and Paramasivam disclosed: wherein the network is a cloud computing infrastructure to support packet processing (Kutch, Paragraph 106, performing packet processing utilizing cloud native network functions).

Regarding claims 3 and 14, the limitations of claims 2 and 13 have been addressed.
Kutch and Paramasivam disclosed: wherein the cloud computing infrastructure is based on Function as a Service (FaaS) architecture (Kutch, Paragraph 108, FaaS), a Linux kernel (Kutch, Paragraph 87, Linux kernel), network interface card (NIC) hardware (Kutch, Paragraph 55, NIC providing ingress traffic), and OpenFlow switches (Kutch, Paragraph 254, virtual switch).

Regarding claim 4, the limitations of claim 1 have been addressed. Kutch and Paramasivam disclosed: wherein the first network function is stored in one of a container (Kutch, Paragraphs 102-103, having a container of software package applications that include any applications that perform packet processing) or a virtual machine accessible by the worker node.

Regarding claims 5 and 15, the limitations of claims 1 and 13 have been addressed. Kutch and Paramasivam disclosed: further comprising: a plurality of worker nodes including the worker node (Kutch, Paragraph 105, SR-IOVs are used to make the appearance of several physical devices. Paragraph 150, the work manager (QMD) is used in an SR-IOV); and a router coupled to the plurality of worker nodes, the ingress module controlling the router to route packets to one of the plurality of worker nodes (Kutch, Paragraph 151, the NIM provides communication and interface between one or more NICs and NEXT, which include a router. Paragraph 162, Figure 19, the NIM (ingress module) receives packets and stores them in a ring buffer (such as the ring 1702 in Figure 17) and determines high priority and low priority packets to pass to the next packet processing stage).

Regarding claims 6 and 16, the limitations of claims 5 and 15 have been addressed.
Kutch and Paramasivam disclosed: wherein the controller collects network function performance statistics from agents of each of the plurality of worker nodes, and makes a load balancing decision based on the network function performance statistics as a basis to route packets to one of the plurality of worker nodes via the ingress module (Kutch, Paragraph 139, the work manager load balances performance of services such as packet processing using a stream of packet descriptors to determine which accelerators to distribute the packets to. Paragraph 264, network traffic shaping includes policing and shaping to identify and respond to traffic violations against a permitted rate and perform actions such as dropping or re-marking excess traffic).

Regarding claim 7, the limitations of claim 6 have been addressed. Kutch and Paramasivam disclosed: wherein the load balancing decision is based on a service level objective (SLO) specifying a target latency (Kutch, Paragraph 55, ingress packets placed within NIM memory are sent to the next stage out of order to achieve separation of the slow and fast data path, improving overall system performance and latency. Paragraph 93, performing arbitration among requests using service level agreements (SLAs). Paragraph 281, each class having separate priority and latency requirements).

Regarding claim 8, the limitations of claim 1 have been addressed. Kutch and Paramasivam disclosed: wherein the storage device includes code for executing a second network function distinct from the first network function (Kutch, Paragraph 86, one core could request to provide cryptographic operations and another core could request to perform ACL operations (i.e., distinct)), and wherein the agent assigns the second network function to the network function chain (Paramasivam, Paragraph 193, functions (of the service chain) are assigned to the cores.
Paragraph 269, the controller determines the placement of instances (see Paragraph 8, instances are grouped to form a service chain)). For motivation, please refer to claim 1.

Regarding claims 9 and 21, the limitations of claims 1 and 13 have been addressed. Kutch and Paramasivam disclosed: wherein the worker node further includes a network interface card (NIC) providing a plurality of virtual network interfaces, wherein the agent assigns one of the plurality of virtual network interfaces to the network function chain, and wherein the assigned one of the plurality of virtual network interfaces routes the packets on a packet queue and the processing core executes the first network function to access the packets in the packet queue (Kutch, Paragraphs 52, 55, a NIC provides ingress traffic to a packet processing pipeline (i.e., network function chain). The NIC can direct packets to the offload processor or host memory of the CPU. The NIM receives traffic from the NIC and manages queue descriptors and storage buffer content. Paragraph 58, sending traffic to and from a NIC through the VNF (i.e., assigns). Paragraph 90, virtual functions are virtual interfaces. Paragraph 103, VNFs (i.e., a plurality) are linked together to form a service chain. Paragraph 145, the queue management device receives enqueue and dequeue requests).

Regarding claim 10, the limitations of claim 8 have been addressed. Kutch and Paramasivam disclosed: wherein the scheduler sequences the first network function to be executed by the processing core to process the packets and places the second network function in a wait queue, wherein the scheduler places the second network function in a run queue to be executed by the processing core to process the packets only after completion of processing of all the packets by the first network function (Kutch, Paragraphs 146-147, priority levels are assigned to all buffers, where each enqueue and dequeue buffer pair may be assigned a different priority.
The scheduler chooses a buffer and selects one or more requests from the head of the buffer, the buffer being chosen according to a scheduling policy, such as preemptive priority or weighted round robin. The scheduler chooses and serves each buffer sequentially, based on their associated priority (i.e., processing all higher priority packets first). A subset of buffers stores enqueue requests while another subset of buffers stores dequeue requests).

Regarding claims 11 and 18, the limitations of claims 8 and 17 have been addressed. Kutch and Paramasivam disclosed: further comprising a memory region, wherein the first and the second network functions in the network function chain exclusively share the memory region, and wherein the memory region stores the incoming packets, and avoids copying the packets from the first network function to the second network function in the network function chain (Kutch, Paragraph 69, not copying all traffic to the host memory and storing some of the traffic in buffers. Paragraph 76, having the CPU and WAT share system memory. Paragraph 342, the cores share at least one cache or at least one memory device).

Regarding claim 12, the limitations of claim 1 have been addressed. Kutch and Paramasivam disclosed: wherein the worker node includes a plurality of cores including the core, and wherein the agent creates network function chains that each are executed by an assigned one of the plurality of cores (Kutch, Paragraphs 141-142, using the work manager (QMD) copying received traffic to a QE and the contents of the QEs load balanced across multiple cores. The work manager (QMD) manages distribution of contents among queues for cores).

Regarding claim 17, the limitations of claim 13 have been addressed. Kutch and Paramasivam disclosed: adding a second network function to the network function chain (Paramasivam, Paragraph 193, functions (of the service chain) are assigned to the cores.
Paragraph 269, the controller determines the placement of instances (see Paragraph 8, instances are grouped to form a service chain)), wherein the second network function is distinct from the first network function (Kutch, Paragraph 59, a virtual network function (VNF) dynamically programs a flow for traffic based upon rules (i.e., network function chain), such as to drop packets, forward packets, decrypt packets, or assign priority to a flow based on a packet's header (i.e., all network functions). Paragraph 86, one core could request to provide cryptographic operations and another core could request to perform ACL operations (i.e., distinct)); sequencing the first network function to be executed by the processing core to process the packets via a scheduler; placing the second network function in a wait queue; and placing the second network function in a run queue to be executed by the processing core to process the packets only after completion of processing of the packets by the first network function (Kutch, Paragraphs 146-147, priority levels are assigned to all buffers, where each enqueue and dequeue buffer pair may be assigned a different priority. The scheduler chooses a buffer and selects one or more requests from the head of the buffer, the buffer being chosen according to a scheduling policy, such as preemptive priority or weighted round robin. The scheduler chooses and serves each buffer sequentially, based on their associated priority. Therefore, the run queue would be the queue that is actively used, while the other queues are wait queues, as they are processed based on priority).

Regarding claim 19, the limitations of claim 17 have been addressed. Kutch and Paramasivam disclosed: further comprising: creating a plurality of network function chains including the first network function chain (Kutch, Paragraphs 141-142, using the work manager (QMD) copying received traffic to a QE and the contents of the QEs load balanced across multiple cores.
Work manager (QMD) manages distribution of contents among queues for cores; Paragraph 145, enqueue or dequeue requests); assigning execution of each of the network function chains to one of the plurality of processing cores of the worker node (Paramasivam, Paragraph 193, functions (of the service chain) are assigned to the cores; Paragraph 269, the controller determines the placement of instances (see Paragraph 8, instances are grouped to form a service chain)). For motivation, please refer to claim 1.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Steven C. Nguyen, whose telephone number is (571) 270-5663. The examiner can normally be reached M-F, 7 AM - 3 PM, and alternatively through e-mail at Steven.Nguyen2@USPTO.gov.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christopher Parry, can be reached at 571-272-8328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.C.N/
Examiner, Art Unit 2451

/Chris Parry/
Supervisory Patent Examiner, Art Unit 2451
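For context on the Kutch disclosure the rejection leans on, the buffer-scheduling behavior the examiner cites (Kutch, Paragraphs 146-147) can be sketched roughly as follows. This is an illustrative reconstruction under a strict-priority policy, not code from either reference; the function name and request data are hypothetical:

```python
from collections import deque

def schedule(buffers):
    """Serve each buffer sequentially by priority (lower number = higher
    priority), taking requests from the head of each buffer -- a strict
    preemptive-priority policy rather than weighted round robin."""
    served = []
    for _priority, buf in sorted(buffers, key=lambda b: b[0]):
        while buf:
            served.append(buf.popleft())  # requests leave from the head
    return served

# Hypothetical enqueue/dequeue buffer pair at a higher priority (0)
# than a second buffer (1).
high = deque(["enqueue-A", "dequeue-A"])
low = deque(["enqueue-B"])
print(schedule([(1, low), (0, high)]))  # ['enqueue-A', 'dequeue-A', 'enqueue-B']
```

Under the examiner's reading, the buffer currently being served plays the role of the claimed run queue, while lower-priority buffers are, in effect, wait queues until the scheduler reaches them.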

Prosecution Timeline

Dec 16, 2022
Application Filed
Feb 19, 2025
Non-Final Rejection — §103, §112
Aug 22, 2025
Response Filed
Nov 27, 2025
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592855
Network Intent Orchestration in Enterprise Fabrics
2y 5m to grant • Granted Mar 31, 2026
Patent 12580863
SYSTEMS AND METHODS FOR PROVIDING ANALYTICS FROM A NETWORK DATA ANALYTICS FUNCTION BASED ON NETWORK POLICIES
2y 5m to grant • Granted Mar 17, 2026
Patent 12580872
DYNAMIC QOS CHANGES
2y 5m to grant • Granted Mar 17, 2026
Patent 12537749
LEARNING-BASED NETWORK OPTIMIZATION SERVICE
2y 5m to grant • Granted Jan 27, 2026
Patent 12531931
SYSTEMS AND METHODS FOR CREATING A VIRTUAL KVM SESSION BETWEEN A CLIENT DEVICE AND A TARGET DEVICE
2y 5m to grant • Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
62%
Grant Probability
99%
With Interview (+50.6%)
3y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 413 resolved cases by this examiner. Grant probability derived from career allow rate.
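The headline figures in this panel can be checked against the counts shown in the Examiner Intelligence section; a minimal sketch of that arithmetic, using only numbers stated on the page (the interview-lift formula itself is not disclosed, so it is not reproduced here):

```python
# Counts stated in the Examiner Intelligence panel.
granted, resolved = 254, 413
total_apps, pending = 440, 27

# Career allow rate, which the page uses as the grant probability.
allow_rate = granted / resolved
print(f"Grant probability: {allow_rate:.0%}")  # Grant probability: 62%

# Resolved cases should equal total applications minus those still pending.
print(total_apps - pending == resolved)  # True
```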
