Prosecution Insights
Last updated: April 19, 2026
Application No. 18/735,959

METHOD AND DEVICE FOR SPEEDING UP PACKET PROCESSING

Final Rejection §103
Filed: Jun 06, 2024
Examiner: BALLOWE, CALEB JAMES
Art Unit: 2419
Tech Center: 2400 — Computer Networks
Assignee: MediaTek Inc.
OA Round: 2 (Final)
Grant Probability: 14% (At Risk)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 61%

Examiner Intelligence

Career Allow Rate: 14% (2 granted / 14 resolved; -43.7% vs TC avg)
Interview Lift: +46.4% on resolved cases with interview
Avg Prosecution: 3y 1m
Total Applications: 69 across all art units (55 currently pending)

Statute-Specific Performance

§101: 4.8% (-35.2% vs TC avg)
§103: 62.0% (+22.0% vs TC avg)
§102: 11.3% (-28.7% vs TC avg)
§112: 21.9% (-18.1% vs TC avg)

Based on career data from 14 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant’s submission filed on 1-16 has been entered. Applicant’s submission overcomes the prior rejections of claims 6 and 14 under 35 U.S.C. § 112. Therefore, the corresponding rejections are withdrawn. Claims 1-16 are pending.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Aziz et al. (US 2019/0082040), hereinafter "Aziz", in view of Li et al. (US 2016/0204851), hereinafter "Li". Regarding claims 1, 9, Aziz teaches: A method for speeding up packet processing or a device for speeding up packet processing, comprising: a central processing unit (CPU) (see Aziz, Fig. 3, par. [0041]: The apparatus 100 also includes a PDCP SDU manager 104, a TX memory 105, a MAC PDU assembler 106, a control processor 112); and a hardware acceleration processor coupled to the CPU, wherein the hardware acceleration processor is operable (see Aziz, Fig. 3, par. [0046]: the control processor 112 includes a dedicated interface (or port, or link, or bus) to each of the hardware accelerators, e.g., PDCP SDU manager 104, header generators 114, MAC PDU assembler 106, MAC PDU manager 196, header decoders 116, and PDCP SDU fetcher 194. An advantage of the dedicated interfaces is that write or read operations by the control processor 112 to/from the hardware accelerators may complete immediately without contention with other agents) to: receiving, by a hardware acceleration circuitry of a computing device, a packet (see Aziz, Fig. 3, par. 
[0048]: When the PDCP SDU manager 104 detects the presence of a PDCP SDU in the FIFO 102, it reads the PDCP SDU 132 from the FIFO 102 and writes the PDCP SDU 134 to a location in the PDCP SDU buffer 122 and notifies the control processor 112. The control processor 112 receives the length of the PDCP SDU, either from the PDCP SDU manager 104 or from the L3 unit transport mechanism (e.g., 10 Gb Ethernet port or similar high speed data port) that writes the PDCP SDU into the FIFO 102. In one embodiment, the L3 unit transport mechanism determines the length of the PDCP SDU (e.g., if the PDCP SDU is an IP packet, the L3 unit knows the length of the IP packet, which may be determined from the IP packet header) and provides the PDCP SDU length to the control processor 112 and/or to the PDCP SDU manager 104. The PDCP SDU manager 104 reads words from the FIFO 102 (e.g., in 4-byte words) and provides addresses to the TX memory 105 along with the words read from the FIFO 102 to write the words to the TX memory 105); performing, by the hardware acceleration circuitry, related processing on the TCP/IP packet to obtain a processed TCP/IP packet after the related processing in response to determining that the packet is the TCP/IP packet (see Aziz, Fig. 3, par. [0048]: When the PDCP SDU manager 104 detects the presence of a PDCP SDU in the FIFO 102, it reads the PDCP SDU 132 from the FIFO 102 and writes the PDCP SDU 134 to a location in the PDCP SDU buffer 122 and notifies the control processor 112. The control processor 112 receives the length of the PDCP SDU, either from the PDCP SDU manager 104 or from the L3 unit transport mechanism (e.g., 10 Gb Ethernet port or similar high speed data port) that writes the PDCP SDU into the FIFO 102. 
In one embodiment, the L3 unit transport mechanism determines the length of the PDCP SDU (e.g., if the PDCP SDU is an IP packet, the L3 unit knows the length of the IP packet, which may be determined from the IP packet header) and provides the PDCP SDU length to the control processor 112 and/or to the PDCP SDU manager 104. The PDCP SDU manager 104 reads words from the FIFO 102 (e.g., in 4-byte words) and provides addresses to the TX memory 105 along with the words read from the FIFO 102 to write the words to the TX memory 105); and transmitting, by the hardware acceleration circuitry, the processed TCP/IP packet to a lower layer (see Aziz, Fig. 3, par. [0051]: The MAC PDU assembler 106 uses the pointers and lengths 148 to generate TX memory 105 read addresses to form MAC PDU 138 that the MAC PDU assembler 106 writes to the FIFO 108, preferably words (e.g., 4-byte words) at a time, and see par. [0052]: The presence of the MAC PDU in the FIFO 108 (e.g., non-empty indicator) may serve as an indication to the L1 unit that it may read the MAC PDU from the FIFO 108 for transmission on the wireless channel by the radios of the PHY and the antennas connected thereto. In one embodiment, the MAC PDU assembler 106 provides the length of the MAC PDU to the L1 unit). However, Aziz does not teach: receiving the packet from an application layer; determining, by the hardware acceleration circuitry, whether the packet is a transmission control protocol/Internet protocol (TCP/IP) packet; Li, in the same field of endeavor, teaches: receiving the packet from an application layer (see Li, Figs. 2 and 3, par. [0027]: FIG. 3, with reference to FIG. 2, illustrates exemplary operation of hardware accelerator 106a. We assume that modem 104a received a packet from a source device connected to the WAN and provided the received packet to reception data buffer 202 via receiver 208. 
TCP/IP packet decoder 212 fetches an IP version from the received packet while the packet is simultaneously routed to a destination connected to LAN 108 via application processing device 222 (act 302)); determining, by the hardware acceleration circuitry, whether the packet is a transmission control protocol/Internet protocol (TCP/IP) packet (see Li, Figs. 2 and 3, par. [0029]: If, during act 304, the received packet is determined not to pertain to IP version 4, then IP version 6 comparator 218 determine whether the fetched IP version indicates that received packet pertains to IP version 6 (act 316), and see par. [0029]: if the received packet is determined to pertain to IP version 6, TCP/IP packet decoder 212 may fetch protocol information from the received packet (act 318) and may provide the protocol information to TCP comparator 220 to determine whether the received packet pertaining to IP version 6 is a TCP data packet (act 320)); Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method or device of Aziz with the receiving a packet from an application layer and determining whether the packet is a TCP/IP packet of Li with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of reducing propagation delay (see Li, par. [0001]). Claims 2, 4, 10, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Aziz in view of Li, as applied to claims 1 and 9 above, and further in view of Sivaramakrishnan (US 9,641,435), hereinafter “Sivaramakrishnan”. Regarding claims 2, 10, the combination of Aziz in view of Li teaches the method or device. However, the combination of Aziz in view of Li does not teach: wherein the related processing is completely performed without memory access. 
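[Editorial aside: the packet-type determination Li is cited for above (fetch the IP version nibble, then check whether the packet carries TCP) can be sketched in C. This is an illustrative sketch of the general technique, not code from any cited reference; it assumes the buffer starts at the IP header and does not walk IPv6 extension headers.]

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define IPPROTO_TCP_NUM 6  /* IANA protocol number for TCP */

/* Illustrative classifier, loosely following Li's decode flow: fetch the
   IP version, then read the Protocol (IPv4) or Next Header (IPv6) field.
   Assumes pkt points at the IP header; extension headers are ignored. */
static bool is_tcp_ip_packet(const uint8_t *pkt, size_t len)
{
    if (len == 0)
        return false;
    uint8_t version = pkt[0] >> 4;  /* version nibble of the first byte */
    if (version == 4)               /* IPv4: Protocol field at byte offset 9 */
        return len > 9 && pkt[9] == IPPROTO_TCP_NUM;
    if (version == 6)               /* IPv6: Next Header at byte offset 6 */
        return len > 6 && pkt[6] == IPPROTO_TCP_NUM;
    return false;                   /* neither IPv4 nor IPv6: not TCP/IP */
}
```

[A hardware realization would check the same two fields with parallel comparators, in the style of Li's acts 304-320.]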
Sivaramakrishnan, in the same field of endeavor, teaches: wherein the related processing is completely performed without memory access (see Sivaramakrishnan, col. 14, lines 42-59: Virtual router forwarding plane 128 generates the tunnel header 152 to ensure that the tunnel header is identical for multiple TCP segments to be generated by segmentation offload 115. Because the checksum field 162 value varies according to the values of all fields of outer IP header 153, virtual router forwarding plane 128 ensures that (1) the length of the tunnel packet is identical such that the length field 159 value does not vary, (2) the identification field 160 of the outer IP header 153 is set to 0, and (3) the do not fragment field 161 is set to 1. Accordingly, the tunnel header 153 may be identical for each of the multiple TCP segments to be generated. Identification field 161 is in general used to support IP fragmentation and may be safely set to 0 for the tunnel header 152 for each tunnel packet because the do not fragment field 161 is also set to 1. Virtual router forwarding plane 128 also computes a checksum for outer IP header 153 and sets the value for checksum field 162 to the computed checksum). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the processing of the combination of Aziz in view of Li with the processing without memory access of Sivaramakrishnan with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of reducing resource usage by the computing resources (see Sivaramakrishnan, col. 17, lines 57-67). Regarding claims 4, 12, the combination of Aziz in view of Li teaches the method or device. 
However, the combination of Aziz in view of Li does not teach: wherein the related processing at least comprises: a checksum calculation for TCP/IP packet; an IP fragmentation; and a TCP segmentation offload (TSO). Sivaramakrishnan, in the same field of endeavor, teaches: wherein the related processing at least comprises: a checksum calculation for TCP/IP packet (see Sivaramakrishnan, col. 14, lines 42-59: Virtual router forwarding plane 128 generates the tunnel header 152 to ensure that the tunnel header is identical for multiple TCP segments to be generated by segmentation offload 115. Because the checksum field 162 value varies according to the values of all fields of outer IP header 153, virtual router forwarding plane 128 ensures that (1) the length of the tunnel packet is identical such that the length field 159 value does not vary, (2) the identification field 160 of the outer IP header 153 is set to 0, and (3) the do not fragment field 161 is set to 1. Accordingly, the tunnel header 153 may be identical for each of the multiple TCP segments to be generated. Identification field 161 is in general used to support IP fragmentation and may be safely set to 0 for the tunnel header 152 for each tunnel packet because the do not fragment field 161 is also set to 1. Virtual router forwarding plane 128 also computes a checksum for outer IP header 153 and sets the value for checksum field 162 to the computed checksum); an IP fragmentation (see Sivaramakrishnan, col. 14, lines 42-59: Virtual router forwarding plane 128 generates the tunnel header 152 to ensure that the tunnel header is identical for multiple TCP segments to be generated by segmentation offload 115. 
Because the checksum field 162 value varies according to the values of all fields of outer IP header 153, virtual router forwarding plane 128 ensures that (1) the length of the tunnel packet is identical such that the length field 159 value does not vary, (2) the identification field 160 of the outer IP header 153 is set to 0, and (3) the do not fragment field 161 is set to 1. Accordingly, the tunnel header 153 may be identical for each of the multiple TCP segments to be generated. Identification field 161 is in general used to support IP fragmentation and may be safely set to 0 for the tunnel header 152 for each tunnel packet because the do not fragment field 161 is also set to 1. Virtual router forwarding plane 128 also computes a checksum for outer IP header 153 and sets the value for checksum field 162 to the computed checksum); and a TCP segmentation offload (TSO) (see Sivaramakrishnan, col. 14, lines 42-59: Virtual router forwarding plane 128 generates the tunnel header 152 to ensure that the tunnel header is identical for multiple TCP segments to be generated by segmentation offload 115. Because the checksum field 162 value varies according to the values of all fields of outer IP header 153, virtual router forwarding plane 128 ensures that (1) the length of the tunnel packet is identical such that the length field 159 value does not vary, (2) the identification field 160 of the outer IP header 153 is set to 0, and (3) the do not fragment field 161 is set to 1. Accordingly, the tunnel header 153 may be identical for each of the multiple TCP segments to be generated. Identification field 161 is in general used to support IP fragmentation and may be safely set to 0 for the tunnel header 152 for each tunnel packet because the do not fragment field 161 is also set to 1. Virtual router forwarding plane 128 also computes a checksum for outer IP header 153 and sets the value for checksum field 162 to the computed checksum). 
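[Editorial aside, not part of the record: for IPv4, the "checksum calculation" named in this limitation is the standard Internet checksum of RFC 1071, a 16-bit ones'-complement sum over the header with the checksum field treated as zero. A minimal sketch:]

```c
#include <stddef.h>
#include <stdint.h>

/* RFC 1071 Internet checksum over an IPv4 header.
   hdr: header bytes (checksum field zeroed when computing a fresh value);
   len: header length in bytes, assumed even (IHL * 4). */
static uint16_t ipv4_header_checksum(const uint8_t *hdr, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)((hdr[i] << 8) | hdr[i + 1]); /* big-endian 16-bit words */
    while (sum >> 16)                                  /* fold carries back in */
        sum = (sum & 0xFFFFu) + (sum >> 16);
    return (uint16_t)~sum;                             /* ones' complement */
}
```

[Verifying a received header is the same computation: summing a header whose checksum field is left intact yields 0.]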
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the processing of the combination of Aziz in view of Li with the specific processing of Sivaramakrishnan with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of reducing resource usage by the computing resources (see Sivaramakrishnan, col. 17, lines 57-67). Claims 3 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Aziz in view of Li, as applied to claims 1 and 9 above, and further in view of Guo et al. (US 2015/0055481), hereinafter “Guo”. Regarding claims 3, 11, the combination of Aziz in view of Li teaches the method or device. Aziz does not teach, but Li teaches: further comprising: transferring, by the hardware acceleration circuitry, the packet in response to determining that the packet is not a TCP/IP packet (see Li, Figs. 2 and 3, par. [0029]: If, during act 304, the received packet is determined not to pertain to IP version 4, then IP version 6 comparator 218 determine whether the fetched IP version indicates that received packet pertains to IP version 6 (act 316). If the received packet does not pertain to IP version 6, then the received packet may be discarded (act 322). Otherwise, if the received packet is determined to pertain to IP version 6, TCP/IP packet decoder 212 may fetch protocol information from the received packet (act 318) and may provide the protocol information to TCP comparator 220 to determine whether the received packet pertaining to IP version 6 is a TCP data packet (act 320). If the received packet is determined not to be the TCP data packet, then the received packet may be discarded (act 322); in this case, based on the packet not being a TCP/IP packet, the packet is discarded (i.e. 
transferred) from the hardware acceleration device) Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method or device of Aziz with the transferring the packet of Li with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of reducing propagation delay (see Li, par. [0001]). However, the combination of Aziz in view of Li does not teach: transferring, by the hardware acceleration circuitry, the packet to a memory of the computing device and instructing a central processing unit (CPU) of the computing device to perform the related processing on the packet in the memory to obtain a processed non-TCP/IP packet after the related processing; and transmitting the processed non-TCP/IP packet to a lower layer by the CPU. Guo, in the same field of endeavor, teaches: transferring, by the hardware acceleration circuitry, the packet to a memory of the computing device and instructing a central processing unit (CPU) of the computing device to perform the related processing on the packet in the memory to obtain a processed non-TCP/IP packet after the related processing (see Guo, Fig. 2, par. [0054]: Once packets that contextually match with one or more conditions/rules/rule identifiers are identified, such packets can be forwarded to the GPP 206 for onward processing/transmission to the application software 212 through operating system 208 and low-level software 210 as already described above, and see par. [0058]: The general purpose processor and/or the acceleration hardware can be configured to capture, aggregate, annotate, store, and index network packet data in real time from one or more portions of the network and retrieve such data utilizing the storage and the indexing database. 
Thus, the storage may be operable as a packet capture repository and the indexing database may be operable as an index into the packet capture repository. The storage may include any kind of storage media, including, but not limited to one or more magnetic storage media, optical storage media, volatile memory, non-volatile memory, flash memory, and the like, and see par. [0060]: Pre-matching module 306 can be configured to receive re-assembled and re-ordered incoming network packets as a stream, and match the incoming packet stream with one or more conditions to identify packets meeting the one or more conditions. According to one embodiment, such one or more conditions can include packet field-level conditions/criterions/rules, protocol-level conditions/criterions/rules; in this case, Guo teaches using protocol-level conditions for determining when to forward packets to a general purpose processor (corresponding to the CPU) which stores packets in memory and further processes packets. Taken in combination with Aziz and Li, packets are transferred based on a protocol-level condition indicating the packet is not TCP/IP); and transmitting the processed non-TCP/IP packet to a lower layer by the CPU (see Guo, Fig. 2, par. [0054]: Once packets that contextually match with one or more conditions/rules/rule identifiers are identified, such packets can be forwarded to the GPP 206 for onward processing/transmission to the application software 212 through operating system 208 and low-level software 210 as already described above). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the transferring the packet of the combination of Aziz in view of Li with the transferring the packet to a memory of a CPU and transmitting the packet to a lower layer by the CPU of Guo with a reasonable expectation of success. 
One of ordinary skill in the art would have been motivated to make this modification for the benefit of improving accuracy, speed, and efficiency of context-aware pattern matching (see Guo, par. [0033]). Claims 5-7 and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Aziz in view of Li, as applied to claims 1 and 9 above, and further in view of Agrawal et al. (US 2020/0266955), hereinafter “Agrawal”. Regarding claims 5, 13, the combination of Aziz in view of Li teaches the method or device. However, the combination of Aziz in view of Li does not teach: further comprising: filtering out at least one redundant TCP acknowledgment (ACK). Agrawal, in the same field of endeavor, teaches: further comprising: filtering out at least one redundant TCP acknowledgment (ACK) (see Agrawal, Fig. 7, par. [0084]: a diagram 700 illustrates several examples of TCP ACK aggregation. Specifically, the diagram 700 shows three implementations 710, 720, and 730 of TCP ACK aggregation when the reduction factor N equals to 2. A reduction factor may be associated with a decrease in ACK transmitted when utilizing ACK aggregation. For example, if a receiving device sends 2000 ACKs for 2000 packets, a reduction factor of 4 would mean that the receiving device only sends 500 ACKs when utilizing ACK aggregation). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method or device of the combination of Aziz in view of Li with the filtering TCP ACKs of Agrawal with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of decreasing transmission overhead (see Agrawal, par. [0033]). Regarding claims 6, 14, the combination of Aziz in view of Li, and further in view of Agrawal, teaches the method or device. 
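[Editorial aside illustrating the ACK filtering discussed for claims 5 and 13 above: assuming cumulative TCP ACK semantics and ignoring 32-bit sequence wraparound, a filter over queued pure ACKs can keep only the latest acknowledgment number and drop the rest. The function name is illustrative, not from Agrawal.]

```c
#include <stddef.h>
#include <stdint.h>

/* Search a queue of pure-ACK acknowledgment numbers for the latest one,
   delete the earlier (redundant) entries, and return the new queue length.
   With cumulative ACKs every earlier entry is redundant; sequence-number
   wraparound is ignored for simplicity. */
static size_t filter_redundant_acks(uint32_t *queued, size_t n)
{
    if (n == 0)
        return 0;
    uint32_t latest = queued[0];
    for (size_t i = 1; i < n; i++)   /* search the queued list */
        if (queued[i] > latest)
            latest = queued[i];      /* track the latest sequence number */
    queued[0] = latest;              /* keep only the latest ACK */
    return 1;
}
```

[In Agrawal's reduction-factor framing this is the limiting case: whatever the queue depth within an interval, a single ACK survives.]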
The combination of Aziz in view of Li does not teach, but Agrawal teaches: wherein filtering out at least one redundant TCP ACK comprises: searching a list of queued TCP ACK whose destinations are the lower layer (see Agrawal, par. [0093]: the UE 110 may track the sequence numbers of the ACKs within a given time period, and see Agrawal, par. [0082]: a sending device 610, such as the UE 110 or the BS 105, may send one or more packets to a receiving device 620, such as a different UE 110 or BS 105, and see Agrawal, par. [0084]: there may be eight unique TCP ACKs (ACK k, ACK k+1 . . . ACK k+7) generated in response to the received TCP traffic within a fixed interval. Because the reduction factor N is 2, four unique TCP ACKs may be transmitted to the sending device to acknowledge the reception of the TCP traffic within the interval, and the other four unique TCP acknowledgments may be discarded; in this case, considering a number of TCP ACKs for transmission corresponds to searching a list of queued TCP ACK); determining a latest sequence number in the TCP ACKs (see Agrawal, Fig. 7, par. [0085]: the unique TCP ACKs with the highest sequence number(s) in the interval may be transmitted to the sending device, and the unique TCP ACKs with the lowest sequence number(s) may be discarded. The unique TCP ACKs with the highest sequence number(s) may be the unique TCP ACKs that are generated to acknowledge the latest transmitted packets. As shown in the first implementation 710, ACK k, ACK k+1, ACK k+2, and ACK k+3 may be discarded, while ACK k+4, ACK k+5, ACK k+6, and ACK k+7 may be transmitted to the sending device); and deleting one or more TCP ACKs whose sequence number is earlier than a latest TCP ACK (see Agrawal, Fig. 7, par. [0085]: the unique TCP ACKs with the highest sequence number(s) in the interval may be transmitted to the sending device, and the unique TCP ACKs with the lowest sequence number(s) may be discarded. 
The unique TCP ACKs with the highest sequence number(s) may be the unique TCP ACKs that are generated to acknowledge the latest transmitted packets. As shown in the first implementation 710, ACK k, ACK k+1, ACK k+2, and ACK k+3 may be discarded, while ACK k+4, ACK k+5, ACK k+6, and ACK k+7 may be transmitted to the sending device). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method or device of the combination of Aziz in view of Li with the filtering TCP ACKs of Agrawal with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of decreasing transmission overhead (see Agrawal, par. [0033]). Regarding claims 7, 15, the combination of Aziz in view of Li teaches the method or device. However, the combination of Aziz in view of Li does not teach: further comprising: transmitting a pure acknowledgment (ACK) packet in response to determining that the packet is the pure ACK packet; wherein a priority of the pure ACK packet is higher than a priority of a normal packet. Agrawal, in the same field of endeavor, teaches: further comprising: transmitting a pure acknowledgment (ACK) packet in response to determining that the packet is the pure ACK packet; wherein a priority of the pure ACK packet is higher than a priority of a normal packet (see Agrawal, Fig. 7, par. [0085]: the unique TCP ACKs with the highest sequence number(s) in the interval may be transmitted to the sending device, and the unique TCP ACKs with the lowest sequence number(s) may be discarded. The unique TCP ACKs with the highest sequence number(s) may be the unique TCP ACKs that are generated to acknowledge the latest transmitted packets. 
As shown in the first implementation 710, ACK k, ACK k+1, ACK k+2, and ACK k+3 may be discarded, while ACK k+4, ACK k+5, ACK k+6, and ACK k+7 may be transmitted to the sending device; in this case, a TCP ACK (corresponding to a pure ACK packet) with the highest sequence number may have priority and be transmitted). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method or device of the combination of Aziz in view of Li with the filtering TCP ACKs of Agrawal with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of decreasing transmission overhead (see Agrawal, par. [0033]). Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Aziz in view of Li, as applied to claims 1 and 9 above, and further in view of Jackowski et al. (US 2012/0039332), hereinafter “Jackowski”. Regarding claims 8, 16, the combination of Aziz in view of Li teaches the method or device. However, the combination of Aziz in view of Li does not teach: further comprising: transmitting a quality of service (QoS) packet in response to determining that the packet is the QoS packet; wherein a priority of the QoS packet is higher than a priority of a normal packet. Jackowski, in the same field of endeavor, teaches: further comprising: transmitting a quality of service (QoS) packet in response to determining that the packet is the QoS packet (see Jackowski, par. [0162]: the QoS engine 236 prioritizes, schedules and transmits network packets according to one or more policies as specified by the policy engine 295, 295', and see Jackowski, par. [0299]: a device performing QoS, priority queuing and other acceleration techniques may classify received packets as corresponding to an application, and then apply QoS and other policies associated with the application, and see Jackowski, par. 
[0203]: QoS plug-in 404 may comprise a service, process, subroutine, or other executable code for classifying packets and applying QoS policies); wherein a priority of the QoS packet is higher than a priority of a normal packet (see Jackowski, par. [0204]: QoS plug-in 404 may provide a low priority queue, a medium priority queue, and a high priority queue and place packets into the queues responsive to QoS priorities associated with the packets. QoS plug-in 404 may then process the queues in order of priority. For example, in one embodiment, QoS plug-in 404 may process a high priority queue at a faster rate, or more frequently, than the plug-in processes a low priority queue. In another embodiment, QoS plug-in 404 may move packets within a single queue. For example, QoS plug-in 404 may place high priority packets ahead of low priority packets within the queue). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the method or device of the combination of Aziz in view of Li with the QoS packet transmission of Jackowski with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of enhancing and optimizing network performance (see Jackowski, par. [0003]).

Response to Arguments

Applicant's arguments filed 12/14/2025 have been fully considered but they are not persuasive. Applicant argues “neither Aziz nor Li mention the feature ‘receiving a packet from an application layer’ as recited in claim 1”. Examiner respectfully disagrees and points to Li in Figs. 2 and 3, and par. [0027] which teaches “FIG. 3, with reference to FIG. 2, illustrates exemplary operation of hardware accelerator 106a. We assume that modem 104a received a packet from a source device connected to the WAN and provided the received packet to reception data buffer 202 via receiver 208.
TCP/IP packet decoder 212 fetches an IP version from the received packet while the packet is simultaneously routed to a destination connected to LAN 108 via application processing device 222 (act 302)”. These sections teach receiving a packet at an application layer which corresponds to the claim limitation. Applicant argues “neither Aziz nor Li mention the feature ‘performing related processing on the TCP/IP packet to obtain a processed TCP/IP packet after the related processing in response to determining that the packet is the TCP/IP packet’ as recited in claim 1”. Examiner respectfully disagrees and points to Aziz in Fig. 3 and par. [0048] which teaches “When the PDCP SDU manager 104 detects the presence of a PDCP SDU in the FIFO 102, it reads the PDCP SDU 132 from the FIFO 102 and writes the PDCP SDU 134 to a location in the PDCP SDU buffer 122 and notifies the control processor 112. The control processor 112 receives the length of the PDCP SDU, either from the PDCP SDU manager 104 or from the L3 unit transport mechanism (e.g., 10 Gb Ethernet port or similar high speed data port) that writes the PDCP SDU into the FIFO 102. In one embodiment, the L3 unit transport mechanism determines the length of the PDCP SDU (e.g., if the PDCP SDU is an IP packet, the L3 unit knows the length of the IP packet, which may be determined from the IP packet header) and provides the PDCP SDU length to the control processor 112 and/or to the PDCP SDU manager 104. The PDCP SDU manager 104 reads words from the FIFO 102 (e.g., in 4-byte words) and provides addresses to the TX memory 105 along with the words read from the FIFO 102 to write the words to the TX memory 105”. These sections teach performing processing on an IP packet using information regarding the length of the IP packet. Determining information of the IP packet can only be realized if the packet is an IP packet, so this step corresponds to determining that the packet is an IP packet. 
Processing is performed using the determined information on the packet, which corresponds to performing related processing under its broadest reasonable interpretation. Examiner also points to Li in Figs. 2 and 3 and par. [0029], which teaches “If, during act 304, the received packet is determined not to pertain to IP version 4, then IP version 6 comparator 218 determine whether the fetched IP version indicates that received packet pertains to IP version 6 (act 316)”, and see par. [0029]: “if the received packet is determined to pertain to IP version 6, TCP/IP packet decoder 212 may fetch protocol information from the received packet (act 318) and may provide the protocol information to TCP comparator 220 to determine whether the received packet pertaining to IP version 6 is a TCP data packet (act 320)”. These sections teach determining whether the packet is a TCP/IP packet. Taken in combination with Aziz, the references teach the limitation under its broadest reasonable interpretation.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Sarangam et al. (US 11,451,609) teaches technologies for accelerated HTTP message processing including a computing device having a network controller. The computing device may generate an HTTP message, frame the HTTP message to generate a transport protocol packet such as a TCP/IP packet or QUIC packet, and pass the transport protocol packet to the network controller.

Du et al. (WO 2024/093540) teaches an L2TP packet hardware acceleration method and apparatus, and a device and a storage medium.

Xu et al. (CN 111506541) teaches a method for accelerating network data packet processing in an embedded network device.

U. Langenbach et al. ("A 10 GbE TCP/IP hardware stack as part of a protocol acceleration platform") teaches a TCP/IP stack for a fully integrated and accelerated communication stack as part of an FPGA or ASIC design.

X. Baiquan ("TCP/IP Acceleration Stack Based on Multi-core Platform") teaches a TCP/IP acceleration protocol stack based on multi-core processors.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CALEB J BALLOWE whose telephone number is (571) 270-0410. The examiner can normally be reached MON-FRI 7:30-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nishant B. Divecha, can be reached at (571) 270-3125. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.J.B./
Examiner, Art Unit 2419

/Nishant Divecha/
Supervisory Patent Examiner, Art Unit 2419
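For readers less familiar with the QoS mechanism the rejection relies on, the Jackowski passage (par. [0204]) describes per-priority queues that are drained highest-priority-first. A minimal Python sketch of that scheme; the class name, packet labels, and API are invented for illustration and are not from the reference:

```python
from collections import deque

class QoSScheduler:
    """Illustrative per-priority queueing as described in Jackowski:
    separate low/medium/high queues, served in order of priority."""
    PRIORITIES = ("high", "medium", "low")  # drain order, highest first

    def __init__(self):
        self.queues = {p: deque() for p in self.PRIORITIES}

    def enqueue(self, packet, priority):
        """Place a packet into the queue matching its QoS priority."""
        self.queues[priority].append(packet)

    def dequeue(self):
        """Always serve the highest-priority non-empty queue first."""
        for p in self.PRIORITIES:
            if self.queues[p]:
                return self.queues[p].popleft()
        return None  # all queues empty

sched = QoSScheduler()
sched.enqueue("pkt-A", "low")
sched.enqueue("pkt-B", "high")
sched.enqueue("pkt-C", "medium")
order = [sched.dequeue(), sched.dequeue(), sched.dequeue()]
# order -> ['pkt-B', 'pkt-C', 'pkt-A']
```

This is why a "QoS packet" with a higher priority than a normal packet is transmitted ahead of it, the behavior the rejection maps onto the claimed priority limitation.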

Prosecution Timeline

Jun 06, 2024
Application Filed
Sep 16, 2025
Non-Final Rejection — §103
Dec 14, 2025
Response Filed
Feb 17, 2026
Final Rejection — §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
14%
Grant Probability
61%
With Interview (+46.4%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
