Prosecution Insights
Last updated: April 19, 2026
Application No. 17/879,695

METHOD AND SYSTEM FOR HYBRID PIPELINED-DATA FLOW PACKET PROCESSING

Status: Final Rejection (§103)
Filed: Aug 02, 2022
Examiner: KIM, ANDREW CHANUL
Art Unit: 2471
Tech Center: 2400 (Computer Networks)
Assignee: Nvidia Corporation
OA Round: 4 (Final)
Grant Probability: 32% (At Risk)
Expected OA Rounds: 5-6
Expected Time to Grant: 3y 1m
Grant Probability with Interview: 12%

Examiner Intelligence

Career Allow Rate: 32% (grants only 32% of cases; 8 granted / 25 resolved; -26.0% vs TC avg)
Interview Lift: -20.2% (minimal lift, measured across resolved cases with interview)
Avg Prosecution: 3y 1m typical timeline (67 applications currently pending)
Total Applications: 92 across all art units (career history)

Statute-Specific Performance

§101: 0.6% (-39.4% vs TC avg)
§103: 64.9% (+24.9% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§112: 7.6% (-32.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 25 resolved cases
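The headline examiner figures above can be reproduced from the raw career counts. The sketch below is an illustrative reconstruction of that arithmetic, assuming "interview lift" is simply the with-interview grant rate minus the career baseline (the report's -20.2% suggests the underlying rates are unrounded, so the rounded inputs here give -20.0):

```python
# Recompute the examiner's headline statistics from the raw counts in the report.
granted = 8
resolved = 25

# Career allow rate: share of resolved cases that granted (8 / 25 = 32%).
career_allow_rate = 100.0 * granted / resolved

# Implied Tech Center average, from the "-26.0% vs TC avg" delta shown above.
tc_average = career_allow_rate + 26.0

# Interview lift: with-interview grant probability minus the career baseline.
with_interview = 12.0
interview_lift = with_interview - career_allow_rate

print(f"Career allow rate: {career_allow_rate:.1f}%")   # 32.0%
print(f"Implied TC average: {tc_average:.1f}%")         # 58.0%
print(f"Interview lift: {interview_lift:+.1f} points")  # -20.0 points
```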

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This is in response to an amendment/response filed 2/10/2026. Claims 2, 4, 5, 17, 19, and 20 have been cancelled. No claims have been added. Claims 1, 3, 6-16, and 18 are now pending.

Response to Arguments

Applicant's arguments with respect to the independent claims (pages 6-7) in the reply filed 2/10/2026 have been considered but are moot because the arguments are directed to limitations newly changed by the amendment, and the current rejection applies new grounds of rejection using newly introduced references or newly introduced portions of an existing reference.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Halepovic et al.
US 20230336455 (hereinafter “Halepovic”) in view of Kim et al. US 20170093707 (hereinafter “Kim”) and in further view of Ibanez et al. US 20230127722 (hereinafter “Ibanez”).

As to claims 1 and 16 (claim 16 is the method claim for the dataflow system in claim 1): Halepovic discloses: A network switch (“In various embodiments, the communications network 125 can include wired, optical and/or wireless links and the network elements 150, 152, 154, 156, etc., can include service switching points, signal transfer points, service control points, network gateways, media distribution hubs, servers, firewalls, routers, edge devices, switches and other network nodes for routing and controlling communications traffic over wired, optical and wireless links as part of the Internet and other public networks as well as one or more private networks, for managing subscriber access, for billing and network management and for supporting other network functions.”, Halepovic [0033]), comprising: updatable registers to store state data (“The subject disclosure describes, among other things, illustrative embodiments for periodically sampling a network packet flow between a pair of devices, generating an activity record having symbols that indicate an active-idle status of each sample of the packet flow, and inferring a suitability of the packet flow for estimating a network throughput in a passive manner, without interpreting payloads of the flow of packets.”, Halepovic [0014]), policy data, (“Without limitation, the data-flow analyzer 216 may apply other rules, e.g., detecting consecutive number of sample intervals having the same value, e.g., an active or idle state, or some other predetermine pattern of 1's or 0's.”, Halepovic [0056]) scheduling data (“assigning sampled packets to a particular data flow or transaction and monitoring data transfer activity or inactivity”, Halepovic [0022]), and dataflow operation data, (“In at least some embodiments, the example network monitoring system
200 includes a storage device 214 or system. The example storage device is in communication with the network monitor 210 and adapted to store network monitoring records, e.g., data-flow activity records 212. The data-flow activity records may include an indicator, e.g., data flow identifier (ID) and the data-flow activity array or vector. The data flow ID may include the IP source and destination addresses. Alternatively or in addition, the data-flow activity records may include one or more other data-flow parameters, such as one or more of a data-flow start time, a data-flow end time, a data-flow duration, an associated amount of data, e.g., a data quantity or volume, associated with the entire data flow.”, Halepovic [0051]) wherein the state data is associated with an application (“Inspection of the packets may include interpreting data contained therein, e.g., to identify an application, an object, and any other detail as may be utilized to distinguish among different objects.”, Halepovic [0020]) being executed to generate output data that is routed by the network switch, the state data indicating whether the application is running or idle; (“The subject disclosure describes, among other things, illustrative embodiments for periodically sampling a network packet flow between a pair of devices, generating an activity record having symbols that indicate an active-idle status of each sample of the packet flow, and inferring a suitability of the packet flow for estimating a network throughput in a passive manner, without interpreting payloads of the flow of packets.”, Halepovic [0014]) (“The data-flow activity vector includes a number of symbols corresponding to the number of monitored results, the number of symbols including an active symbol value indicative of the presence of the exchange of data of the identified data flow and an idle symbol value indicative of the absence of the exchange of data of the identified data flow.”, Halepovic [0015]) (“Alternatively or 
in addition, monitoring identifies whether the data flow was inactive or idle, e.g., that no packet was observed within the sample period, or perhaps that a packet was observed having a payload below a predetermined payload threshold size. In at least some embodiments, the data-flow monitor 180 determined active and/or idle statuses of each sample period without accessing and/or otherwise interpreting contents of a packet payload. Thus, active and/or idle status may be obtained for packets in which the payload portion is encrypted, without requiring an encryption key and without requiring decryption of the payload portion.”, Halepovic [0035]) and electrical circuitry comprising one or more circuits to: ignore the data packets output from the command queue when the state data indicates the application is idle; (“Without limitation, the data-flow analyzer 216 may apply other rules, e.g., detecting consecutive number of sample intervals having the same value, e.g., an active or idle state, or some other predetermine pattern of 1's or 0's. 
In at least some embodiments, the data-flow analyzer 216 may ignore leading and/or trailing 0's.”, Halepovic [0056]) (“In various embodiments, the communications network 125 can include wired, optical and/or wireless links and the network elements 150, 152, 154, 156, etc., can include service switching points, signal transfer points, service control points, network gateways, media distribution hubs, servers, firewalls, routers, edge devices, switches and other network nodes for routing and controlling communications traffic over wired, optical and wireless links as part of the Internet and other public networks as well as one or more private networks, for managing subscriber access, for billing and network management and for supporting other network functions.”, Halepovic [0033]) (Examiner’s Note: network elements such as servers map to “command queue”) select, when the state data indicates the application is running, a packet processing flow based on the data packets output from the command queue and at least one of the state data, the policy data, the scheduling data, and the dataflow operation data; (“The data-flow analyzer 181 may be adapted to perform analyses in a batch mode, e.g., analyzing data-flow records obtained and stored for multiple data flows. The multiple data flows may include different data flows involving the same network endpoints, different data flow directions of the same network endpoints and/or different data flows associated with different network endpoints.”, Halepovic [0044]) (“The network monitor 210 may associate the source and destination addresses with a packet flow, or a data flow. Packet traffic observations may be repeated in a like manner in order to obtain a time sequence of packet traffic. Once a packet flow has been identified, the network monitor 210 may determine whether there is an associated active data transfer or whether the data transfer activity is idle for that sample.
The network monitor 210 may create a record of the activity samples, e.g., according to an array or vector. A binary vector of 1's and 0's may be used to identify periods of activity and/or inactivity for an identified data flow.”, Halepovic [0050]) (“Accordingly, the suitability of a data flow as a candidate for further network performance analysis, e.g., network throughput, may be obtained at least in part based upon training and deployment of the second AI/ML module 217.”, Halepovic [0060]) (“In various embodiments, the communications network 125 can include wired, optical and/or wireless links and the network elements 150, 152, 154, 156, etc., can include service switching points, signal transfer points, service control points, network gateways, media distribution hubs, servers, firewalls, routers, edge devices, switches and other network nodes for routing and controlling communications traffic over wired, optical and wireless links as part of the Internet and other public networks as well as one or more private networks, for managing subscriber access, for billing and network management and for supporting other network functions.”, Halepovic [0033]) (Examiner’s Note: network elements such as servers map to “command queue”) and output the selected packet processing flow such that the data packets output from the command queue are processed according to the selected packet processing flow. (“The uplink and downlink packet flows 208a, 208b may be routed through a communication network 206, according to one or more networking protocols. At least some network protocols, such as IP, generally define a packet structure, e.g., distinguish a packet header portion from a packet payload portion. The packet header portion may include, among other fields, IP addresses of sender and a recipient. Thus, packets of the downlink packet flow 208b would include a source IP address of the data source 202 and a destination IP address of the data recipient 204. 
The packet payloads may include chunks of data corresponding to the groups of objects. In at least some embodiments, the data source 202 provides sequential transactions for each object of the group of objects associated with the requested Web page. It is envisioned that there may be at least some minimal delay between packet flows of the individual objects that may be usable to distinguish individual transactions of the requested Web page from each other.”, Halepovic [0049]) (“In various embodiments, the communications network 125 can include wired, optical and/or wireless links and the network elements 150, 152, 154, 156, etc., can include service switching points, signal transfer points, service control points, network gateways, media distribution hubs, servers, firewalls, routers, edge devices, switches and other network nodes for routing and controlling communications traffic over wired, optical and wireless links as part of the Internet and other public networks as well as one or more private networks, for managing subscriber access, for billing and network management and for supporting other network functions.”, Halepovic [0033]) (Examiner’s Note: network elements such as servers map to “command queue”) Halepovic as described above does not explicitly teach: an arithmetic logic unit (ALU) to perform one or more ALU operations on the output data as part of an in-network compute operation performed by the network switch to yield data packets; a command queue to enable dynamic updates to the registers storing the state data, the policy data, the scheduling data, and the dataflow operation data by queuing the data packets output from the ALU based on the scheduling data; However, Kim further teaches ALU that perform ALU operations on data packets includes: an arithmetic logic unit (ALU) to perform one or more ALU operations on the output data as part of an in-network compute operation performed by the network switch to yield data packets; (FIG. 1 through FIG. 
12 show DPSUs and ALU that perform ALU operations on packets included in the output for a network switch, Kim) Halepovic and Kim are analogous because they pertain to packet processing. Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include ALU that perform ALU operations on data packets as described in Kim into Halepovic. By modifying the method to include ALU that perform ALU operations on data packets as taught by Kim, the benefits of improved packet processing methods (Halepovic [0024] and Kim [0008]) are achieved. The combination of Kim and Halepovic as described above does not explicitly teach: a command queue to enable dynamic updates to the registers storing the state data, the policy data, the scheduling data, and the dataflow operation data by queuing the data packets output from the ALU based on the scheduling data; However, Ibanez further teaches command queue includes: a command queue to enable dynamic updates to the registers storing the state data, the policy data, the scheduling data, and the dataflow operation data by queuing the data packets output from the ALU based on the scheduling data; (FIG. 4, Ibanez) (“FIG. 4 depicts an example configuration of a programmable transport architecture. In some examples, PTA can process 200 Mpps (million packets per second) in transmit and receive directions, or other packets per second. Programmable packet processing pipeline 402 of PTA can be configured by a packet processing program to perform stateful operations on connection state such as the protocol state used to implement reliable delivery (e.g., packet sequence numbers, packet transmission timestamps, acknowledgement (ACK) coalescing state, etc.) or congestion control (e.g., congestion window, round trip time estimates, etc.). 
CSPs can write a packet processing program to implement and deploy custom transport protocol.”, Ibanez [0060]) (“Configurable scheduling 408 can schedule packets for transmission from active queues and can generate scheduling events to be processed by programmable pipeline 402 to perform a configurable scheduling policy to arbitrate across queues that have been marked as active by programmable queue management 404. Scheduling 408 can generate scheduling events that indicate the selected connection and queue identifier (ID). Programmable queue management 404 can process the scheduling event and fetch the packet state from the corresponding connection and queue ID. Scheduling 408 can implement a configurable, hierarchical scheduling policy to schedule packet transmissions from amongst the active queues.”, Ibanez [0065]) (“Commands can be processed by programmable queue management to update the connection state and enforce the congestion control decisions. Programmable queue management can provide primitives to implement a wide range of queueing data structures including first in first out (FIFO) queues, go-back-N queues, or reorder queues.”, Ibanez [0135]) (Examiner’s Note: the combination of 404 “programmable queue management” and 412 “packet buffer” map to “command queue”) Halepovic, Ibanez, and Kim are analogous because they pertain to packet processing. Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include command queue as described in Ibanez into Halepovic as modified by Kim. By modifying the method to include command queue as taught by Ibanez, the benefits of improved packet processing methods (Halepovic [0024], Ibanez [0065], and Kim [0008]) are achieved. As to claim 6: Halepovic as described above does not explicitly teach: The network switch dataflow system of claim 1, wherein the state data indicates a pipeline stage. 
However, Kim further teaches state data that indicates a pipeline stage includes: The network switch dataflow system of claim 1, wherein the state data indicates a pipeline stage. (“FIG. 3 illustrates the use of packet variables and state variables by DSPUs for processing packets in the packet-processing pipeline.”, Kim [0020]) (“FIG. 4 illustrates a DSPU 410 that is performing read-modify-write operations for using and maintaining state variables. The DSPU is in one of the data processing stages of a packet-processing pipeline. As illustrated, the DSPU is maintaining three different sets of state variables for three different classes of packets (e.g., packets having different PHV hash values). The first set of state variables includes state variables X1 and Y1, the second set of state variables includes state variables X2 and Y2, and the third set of state variables includes state variables X3 and Y3.”, Kim [0066]) (“One state variable records end_of_interval time, while another state variable records numdropped. In some embodiments the two state variable are updated by Hi and Lo ALUs of a same DSPU. In some embodiments, the two state variables are maintained and updated by two different DSPUs at different pipeline stages.”, Kim [0178]) Halepovic and Kim are analogous because they pertain to packet processing. Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include state data that indicates a pipeline stage as described in Kim into Halepovic. By modifying the method to include state data that indicates a pipeline stage as taught by Kim, the benefits of improved packet processing methods (Halepovic [0024] and Kim [0008]) are achieved. Claim(s) 8-12 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Halepovic in view of Kim and Ibanez, as applied to claim 1 above, and further in view of Bryers et al. 
US 20130155861 (hereinafter “Bryers”) As to claim 8: The combination of Halepovic, Ibanez, and Kim as described above does not explicitly teach: The network switch of claim 1, wherein the policy data comprises a policy to improve bandwidth, latency, PPS (packets processed per second), or any combination thereof, and wherein However, Bryers further teaches selecting a packet processing flow based on a policy which includes: The network switch of claim 1, wherein the policy data comprises a policy to improve bandwidth, latency, PPS (packets processed per second), or any combination thereof, and wherein (“As shown in FIG. 6a, packets and flows can follow a "slow" or "fast" path through the processors. The identification process defines a "slow path" for the packet, wherein the processing sequence for the flow must be set up as well as the specific requirements for each process. This includes performing a policy review based on the particular subscriber to whom the flow belongs, and setting up the flow to access the particular service or series of services defined for that subscriber. A "fast path" is established once the flow is identified and additional packets in the flow are routed to the service processors immediately upon identification for processing by the compute elements.”, Bryers [0121]) (“Returning again to FIG. 5, the next level in the architecture is the service architecture 330. The service architecture 330 provides support for the flow control and conversation based identification of packets described below. The service architecture 330 is a flow-based architecture that is suitable for implementing content services such as firewall, NAT bandwidth management, and IP forwarding.”, Bryers [0118]) Bryers, Ibanez, Halepovic, and Kim are analogous because they pertain to packet processing. 
Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include selecting a packet processing flow based on a policy as described in Bryers into Halepovic as modified by Kim and Ibanez. By modifying the method to include selecting a packet processing flow based on a policy as taught by Bryers, the benefits of improved packet processing methods (Halepovic [0024], Bryers [0118], Ibanez [0065], and Kim [0008]) are achieved. As to claim 9: The combination of Halepovic, Ibanez, and Kim as described above does not explicitly teach: The network switch of claim 1, wherein the policy data comprises a policy to perform parallel operations, pipelined operations, or both, and wherein the packet processing flow is selected based on the policy. However, Bryers further teaches selecting a packet processing flow based on a policy which includes: The network switch of claim 1, wherein the policy data comprises a policy to perform parallel operations, pipelined operations, or both, and wherein the packet processing flow is selected based on the policy. (“The use of the flow tables allows for packet processing to be rapidly directed to appropriate processors performing application specific processing. Nevertheless, initially, the route of the packets through the processing pipelines must be determined. As shown in FIG. 6a, packets and flows can follow a "slow" or "fast" path through the processors. The identification process defines a "slow path" for the packet, wherein the processing sequence for the flow must be set up as well as the specific requirements for each process. This includes performing a policy review based on the particular subscriber to whom the flow belongs, and setting up the flow to access the particular service or series of services defined for that subscriber. 
A "fast path" is established once the flow is identified and additional packets in the flow are routed to the service processors immediately upon identification for processing by the compute elements.”, Bryers [0121]) Bryers, Halepovic, Ibanez, and Kim are analogous because they pertain to packet processing. Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include selecting a packet processing flow based on a policy as described in Bryers into Halepovic as modified by Kim and Ibanez. By modifying the method to include selecting a packet processing flow based on a policy as taught by Bryers, the benefits of improved packet processing methods (Halepovic [0024], Bryers [0118], Ibanez [0065], and Kim [0008]) are achieved. As to claim 10: The combination of Halepovic, Ibanez, and Kim as described above does not explicitly teach: The network switch of claim 1, wherein the policy data comprises a policy to improve security, and wherein the packet processing flow is configurable based on the policy to improve security. However, Bryers further teaches selecting a packet processing flow based on a policy which includes: The network switch of claim 1, wherein the policy data comprises a policy to improve security, and wherein the packet processing flow is configurable based on the policy to improve security. (“To bind conversations and a given IPSec security association to single compute elements, the flow stage employs various techniques. In one case, the stage can statically allocate subscribers to processing pipelines based on minimum and maximum bandwidth demands. (For example, all flows must satisfy some processing pipeline minimum and minimize variation on the sum of maximums across various processing pipelines). In an alternative mode, if a subscriber is restricted to a processing pipeline, new flows are allocated to the single pipe where the subscriber is mapped. 
Also, the route-tag is computed in the flow stage based on policies. The processing can later modify the route-tag, if needed.”, Bryers [0154]) (“It should be noted that each security association can consist of multiple flows and all packets belonging to a security association are generally directed to one compute element. The security policy database is accessible to all compute elements, allowing all compute elements to do lookups in the database.”, Bryers [0205]) Bryers, Halepovic, Ibanez, and Kim are analogous because they pertain to packet processing. Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include selecting a packet processing flow based on a policy as described in Bryers into Halepovic as modified by Kim and Ibanez. By modifying the method to include selecting a packet processing flow based on a policy as taught by Bryers, the benefits of improved packet processing methods (Halepovic [0024], Bryers [0118], Ibanez [0065], and Kim [0008]) are achieved. As to claim 11: The combination of Halepovic, Ibanez, and Kim as described above does not explicitly teach: The network switch of claim 1, wherein the scheduling data comprises priority data for a packet stream. However, Bryers further teaches scheduling based on prioritization which includes: The network switch of claim 1, wherein the scheduling data comprises priority data for a packet stream. (“the scheduler's programmable prioritization is described with reference to queue 2382. The same prioritization process is performed for queues 2384, 2386, and 2388. In one embodiment, priority is given to load requests. The scheduler in cache 2080 reviews the Opcode fields of the request descriptors in queue 2382 to identify all load operations. In an alternate embodiment, store operations are favored. 
The scheduler also identifies these operations by employing the Opcode field.”, Bryers [0308]) Bryers, Halepovic, Ibanez, and Kim are analogous because they pertain to packet processing. Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include scheduling based on prioritization as described in Bryers into Halepovic as modified by Kim and Ibanez. By modifying the method to include scheduling based on prioritization as taught by Bryers, the benefits of improved packet processing methods (Halepovic [0024], Bryers [0118], Ibanez [0065], and Kim [0008]) are achieved. As to claim 12: The combination of Halepovic, Ibanez, and Kim as described above does not explicitly teach: The network switch of claim 1, wherein the scheduling data comprises priority data for However, Bryers further teaches scheduling based on prioritization which includes: The network switch of claim 1, wherein the scheduling data comprises priority data for (“In the depiction shown in FIG. 3, each compute element may comprise one or more microprocessors, including any commercially available microprocessor. Alternatively, the compute elements may comprise one or more application-specific integrated circuit processors specifically designed to process packets in accordance with the network service which the content service aggregator is designed to provide.”, Bryers [0086]) (“the scheduler's programmable prioritization is described with reference to queue 2382. The same prioritization process is performed for queues 2384, 2386, and 2388. In one embodiment, priority is given to load requests. The scheduler in cache 2080 reviews the Opcode fields of the request descriptors in queue 2382 to identify all load operations. In an alternate embodiment, store operations are favored. 
The scheduler also identifies these operations by employing the Opcode field.”, Bryers [0308]) Bryers, Halepovic, Ibanez, and Kim are analogous because they pertain to packet processing. Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include scheduling based on prioritization as described in Bryers into Halepovic as modified by Kim and Ibanez. By modifying the method to include scheduling based on prioritization as taught by Bryers, the benefits of improved packet processing methods (Halepovic [0024], Bryers [0118], Ibanez [0065], and Kim [0008]) are achieved. As to claim 14: The combination of Halepovic, Ibanez, and Kim as described above does not explicitly teach: The network switch of claim 1, wherein the packet processing flow is selected based on the state data, the policy data, the scheduling data, and the dataflow operation data. However, Bryers further teaches selecting a packet processing flow based on various data which includes: The network switch of claim 1, wherein the packet processing flow is selected based on the state data (“the flow stage will broadcast a send/request query to determine which processing pipeline is able to handle the SSL processing flow. The Control Authority receiving the queues will verify load on all CPUs in the compute elements and determine whether the SSL flows exist for same IP pair, and then select a CPU to perform the SSL. An entry in the flow table is then made and a response to the Control Authority with a flow hint is made. The flow hint contains information about the flow state, the corresponding CPU's ID and index to the SSL Certificate Base.”, Bryers [0232]), the policy data (“As shown in FIG. 6a, packets and flows can follow a "slow" or "fast" path through the processors. 
The identification process defines a "slow path" for the packet, wherein the processing sequence for the flow must be set up as well as the specific requirements for each process. This includes performing a policy review based on the particular subscriber to whom the flow belongs, and setting up the flow to access the particular service or series of services defined for that subscriber.”, Bryers [0121]), the scheduling data (“If no match is found at the checking the hash flow table at step 912, then a policy walk is performed wherein the identity of the subscriber and the services to be offered are matched at step 944. If a subscriber is not allocated to multiple pipes, at step 946, each pipe is "queried" at step 950 (using the multi-cast support in the cross-bar switch) to determine which pipe has ownership of the conversation. If one of the pipelines does own the conversation, the pipeline that owns this conversation returns the ownership info at 950 and service specific set-up is initiated at 948. The service specific setup is also initiated if the flow is found to be submapped as determined by step 946. If no pipe owns the flow at step 950, that the flow is scheduled for a pipe at 952. Following service specific setup at 948, a database entry to the fast path processing is added at 953 and at step 954, route tag is added and the packet forwarded.”, Bryers [0156]) (“The QOS architecture of the present invention determines a set of distributed target bandwidths for each traffic class. This allows the content aggregator to provide bandwidth guarantees for the system as a whole. These targets are then used on a local basis by each flow compute element to enforce global QOS requirements over a period of time. After that period has elapsed, a new set of target bandwidths are calculated in order to accommodate the changes in traffic behavior that have occurred while the previous set of targets were in place. 
For each traffic class, a single target bandwidth must be chosen that: provides that class with its minimum guaranteed bandwidth (or a "fair" portion, in the case of contention for internal resources); does not allow that class to exceed its maximum allowed bandwidth; and awards a "fair" portion of any extra available bandwidth to that class.”, Bryers [0164]), and the dataflow operation data. (“The use of the flow tables allows for packet processing to be rapidly directed to appropriate processors performing application specific processing.”, Bryers [0121])

Bryers, Halepovic, Ibanez, and Kim are analogous because they pertain to packet processing. Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include selecting a packet processing flow based on various data as described in Bryers into Halepovic as modified by Kim and Ibanez. By modifying the method to include selecting a packet processing flow based on various data as taught by Bryers, the benefits of improved packet processing methods (Halepovic [0024], Bryers [0118], Ibanez [0065], and Kim [0008]) are achieved.

As to claim 15:

The combination of Halepovic, Ibanez, and Kim as described above does not explicitly teach: The network switch of claim 1, wherein the packet processing flow is selected based on at least two of: the state data, the policy data, the scheduling data, and the dataflow operation data.

However, Bryers further teaches selecting a packet processing flow based on various data which includes: The network switch of claim 1, wherein the packet processing flow is selected based on at least two of: the state data (“the flow stage will broadcast a send/request query to determine which processing pipeline is able to handle the SSL processing flow.
The Control Authority receiving the queues will verify load on all CPUs in the compute elements and determine whether the SSL flows exist for same IP pair, and then select a CPU to perform the SSL. An entry in the flow table is then made and a response to the Control Authority with a flow hint is made. The flow hint contains information about the flow state, the corresponding CPU's ID and index to the SSL Certificate Base.”, Bryers [0232]), the policy data (“As shown in FIG. 6a, packets and flows can follow a "slow" or "fast" path through the processors. The identification process defines a "slow path" for the packet, wherein the processing sequence for the flow must be set up as well as the specific requirements for each process. This includes performing a policy review based on the particular subscriber to whom the flow belongs, and setting up the flow to access the particular service or series of services defined for that subscriber.”, Bryers [0121]), the scheduling data (“If no match is found at the checking the hash flow table at step 912, then a policy walk is performed wherein the identity of the subscriber and the services to be offered are matched at step 944. If a subscriber is not allocated to multiple pipes, at step 946, each pipe is "queried" at step 950 (using the multi-cast support in the cross-bar switch) to determine which pipe has ownership of the conversation. If one of the pipelines does own the conversation, the pipeline that owns this conversation returns the ownership info at 950 and service specific set-up is initiated at 948. The service specific setup is also initiated if the flow is found to be submapped as determined by step 946. If no pipe owns the flow at step 950, that the flow is scheduled for a pipe at 952.
Following service specific setup at 948, a database entry to the fast path processing is added at 953 and at step 954, route tag is added and the packet forwarded.”, Bryers [0156]) (“The QOS architecture of the present invention determines a set of distributed target bandwidths for each traffic class. This allows the content aggregator to provide bandwidth guarantees for the system as a whole. These targets are then used on a local basis by each flow compute element to enforce global QOS requirements over a period of time. After that period has elapsed, a new set of target bandwidths are calculated in order to accommodate the changes in traffic behavior that have occurred while the previous set of targets were in place. For each traffic class, a single target bandwidth must be chosen that: provides that class with its minimum guaranteed bandwidth (or a "fair" portion, in the case of contention for internal resources); does not allow that class to exceed its maximum allowed bandwidth; and awards a "fair" portion of any extra available bandwidth to that class.”, Bryers [0164]), and the dataflow operation data. (“The use of the flow tables allows for packet processing to be rapidly directed to appropriate processors performing application specific processing.”, Bryers [0121])

Bryers, Halepovic, Ibanez, and Kim are analogous because they pertain to packet processing. Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include selecting a packet processing flow based on various data as described in Bryers into Halepovic as modified by Kim and Ibanez. By modifying the method to include selecting a packet processing flow based on various data as taught by Bryers, the benefits of improved packet processing methods (Halepovic [0024], Bryers [0118], Ibanez [0065], and Kim [0008]) are achieved.

Claim(s) 3 and 18 are rejected under 35 U.S.C.
103 as being unpatentable over Halepovic in view of Kim and Ibanez, as applied to claim 1 above, and further in view of Soni US 20240037429 (hereinafter “Soni”).

As to claims 3 and 18 (claim 18 is the method claim for the dataflow system in claim 3):

The combination of Halepovic, Ibanez, and Kim as described above does not explicitly teach: The network switch of claim 1, wherein the dataflow operation data is received from a dynamically programmable parser.

However, Soni further teaches receiving operation data from a programmable parser which includes: The network switch of claim 1, wherein the dataflow operation data is received from a dynamically programmable parser. (“Packet-processing hardware devices (e.g., Intel Tofino™) or software running on general purpose central processing units (CPU)s can perform three main operations to process packets in a dataplane or datapath: (1) parse protocol headers; (2) lookup the content of parsed headers in tables to identify actions and/or operations for execution; and (3) reassemble the parsed and/or new protocols headers before sending the packets out. These operations are the tenets of two domain-specific primitives: programmable parser-deparsers and reconfigurable match-action tables.”, Soni [0042])

Halepovic, Kim, Ibanez, and Soni are analogous because they pertain to packet processing. Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include receiving operation data from a programmable parser as described in Soni into Halepovic as modified by Kim and Ibanez. By modifying the method to include receiving operation data from a programmable parser as taught by Soni, the benefits of improved packet processing methods (Halepovic [0024], Ibanez [0065], and Kim [0008]) and improved throughput speed (Soni [0063]) are achieved.

Claim(s) 7 and 13 are rejected under 35 U.S.C.
103 as being unpatentable over Halepovic in view of Kim and Ibanez, as applied to claim 1 above, and further in view of Weiner et al. US 20240039849 (hereinafter “Weiner”).

As to claim 7:

The combination of Halepovic, Ibanez, and Kim as described above does not explicitly teach: The network switch of claim 1, wherein Artificial Intelligence determines and outputs the policy data, updated policies, or both.

However, Weiner further teaches using machine learning to update and output policy which includes: The network switch of claim 1, wherein Artificial Intelligence determines and outputs the policy data, updated policies, or both.

Kim, Halepovic, Ibanez, and Weiner are analogous because they pertain to packet processing. Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include using machine learning to update and output policy as described in Weiner into Halepovic as modified by Kim and Ibanez. By modifying the method to include using machine learning to update and output policy as taught by Weiner, the benefits of improved packet processing methods (Halepovic [0024], Ibanez [0065], and Kim [0008]) and applying machine learning to manage policies (Weiner [0047]) are achieved.

As to claim 13:

The combination of Halepovic, Ibanez, and Kim as described above does not explicitly teach: The network switch of claim 1, wherein the scheduling data comprises an amount of time assigned to a specific task.

However, Weiner further teaches workflow scheduling data which includes: The network switch of claim 1, wherein the scheduling data comprises an amount of time assigned to a specific task.
(“the generated cost/reward value may be a multi-dimensional value accounting for a balance of system properties such as power consumption, average workflow execution time, execution time of an individual workflow, prioritization of critical workflows, and/or other similar dynamic load balancing system 100 feedback. In some embodiments, the cost/reward value or set of values generated by the ML cost/reward model 304 may be transmitted to the system prediction ML model 306 for determining the workflow scheduling policy”, Weiner [0046])

Kim, Halepovic, Ibanez, and Weiner are analogous because they pertain to packet processing. Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include workflow scheduling data as described in Weiner into Halepovic as modified by Kim and Ibanez. By modifying the method to include workflow scheduling data as taught by Weiner, the benefits of improved packet processing methods (Halepovic [0024], Ibanez [0065], and Kim [0008]) and applying machine learning to manage policies (Weiner [0047]) are achieved.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).

Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW C KIM whose telephone number is (703)756-5607. The examiner can normally be reached M-F 9AM - 5PM (PST).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sujoy K Kundu, can be reached at (571) 272-8586. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.C.K./
Examiner, Art Unit 2471

/SUJOY K KUNDU/
Supervisory Patent Examiner, Art Unit 2471
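The three dataplane operations quoted from Soni [0042] above (parse protocol headers, look the parsed headers up in a match-action table, then reassemble the headers) can be sketched as a toy illustration. This is only a minimal sketch of the general parse/lookup/deparse pattern, not code from any cited reference; the 4-byte packet layout, the table keys, and every name here (`parse_headers`, `MATCH_ACTION_TABLE`, `deparse`, `process`) are hypothetical:

```python
def parse_headers(raw: bytes) -> dict:
    """(1) Parse: split a toy 4-byte packet [dst, src, proto, ttl] into header fields."""
    dst, src, proto, ttl = raw[:4]
    return {"dst": dst, "src": src, "proto": proto, "ttl": ttl}

# (2) Lookup: a reconfigurable match-action table — exact match on
# (dst, proto) yields an action name plus its parameters.
MATCH_ACTION_TABLE = {
    (1, 6): ("forward", {"port": 2}),   # e.g. TCP to host 1 -> egress port 2
    (1, 17): ("drop", {}),              # e.g. UDP to host 1 -> drop
}

def lookup(headers: dict):
    # Table miss falls through to a default action, as match-action
    # pipelines typically allow.
    return MATCH_ACTION_TABLE.get(
        (headers["dst"], headers["proto"]), ("forward", {"port": 0})
    )

def deparse(headers: dict) -> bytes:
    """(3) Deparse: reassemble the (possibly rewritten) headers into a packet."""
    return bytes([headers["dst"], headers["src"], headers["proto"], headers["ttl"]])

def process(raw: bytes):
    """Run one packet through parse -> match-action lookup -> deparse."""
    headers = parse_headers(raw)
    action, params = lookup(headers)
    if action == "drop":
        return None, None
    headers["ttl"] -= 1  # example per-hop header rewrite
    return deparse(headers), params["port"]

pkt, port = process(bytes([1, 9, 6, 64]))   # matched: forwarded out port 2, ttl decremented
```

Real programmable dataplanes express the same three stages in a domain-specific language (the parser as a state machine, the table as hardware TCAM/SRAM entries populated at runtime), which is what makes the parser "dynamically programmable" rather than fixed-function.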

Prosecution Timeline

Aug 02, 2022
Application Filed
Jan 13, 2025
Non-Final Rejection — §103
Apr 17, 2025
Response Filed
May 19, 2025
Final Rejection — §103
Jul 30, 2025
Interview Requested
Aug 14, 2025
Examiner Interview Summary
Aug 14, 2025
Applicant Interview (Telephonic)
Aug 22, 2025
Request for Continued Examination
Sep 03, 2025
Response after Non-Final Action
Nov 03, 2025
Non-Final Rejection — §103
Jan 26, 2026
Interview Requested
Feb 03, 2026
Applicant Interview (Telephonic)
Feb 03, 2026
Examiner Interview Summary
Feb 10, 2026
Response Filed
Mar 09, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12490157
TIMING CHANGE AND NEW RADIO MOBILITY PROCEDURE
2y 5m to grant Granted Dec 02, 2025
Patent 12464341
DEVICE, PROCESS, AND APPLICATION FOR DETERMINING WIRELESS DEVICE CARRIER COMPATIBILITY
2y 5m to grant Granted Nov 04, 2025
Patent 12439313
INTER-DONOR TOPOLOGY ADAPTATION IN INTEGRATED ACCESS AND BACKHAUL NETWORKS
2y 5m to grant Granted Oct 07, 2025
Patent 12418821
AWARENESS LAYERS FOR MANAGING ACCESS POINTS IN CENTRALIZED WIRELESS NETWORKS
2y 5m to grant Granted Sep 16, 2025
Patent 12414023
METHOD AND NETWORK APPARATUS FOR PROVISIONING MOBILITY MANAGEMENT DURING CONGESTION IN A WIRELESS NETWORK
2y 5m to grant Granted Sep 09, 2025
Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
32%
Grant Probability
12%
With Interview (-20.2%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
