Prosecution Insights
Last updated: April 19, 2026
Application No. 18/963,735

NETWORK PACKET PROCESSING APPARATUS

Status: Non-Final OA (§102)
Filed: Nov 28, 2024
Examiner: BORROMEO, JUANITO C
Art Unit: 2184
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Airoha Technology (Suzhou) Limited
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 76% (460 granted / 608 resolved; +20.7% vs TC avg, above average)
Interview Lift: +13.0% (moderate), measured over resolved cases with interview
Avg Prosecution: 3y 1m typical timeline; 33 applications currently pending
Total Applications: 641 across all art units

Statute-Specific Performance

§101: 3.9% (-36.1% vs TC avg)
§103: 53.4% (+13.4% vs TC avg)
§102: 34.0% (-6.0% vs TC avg)
§112: 5.3% (-34.7% vs TC avg)
Deltas are relative to a Tech Center average estimate; based on career data from 608 resolved cases.

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1–15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Mattina et al. (US Pat. No. 10606750), hereinafter referred to as Mattina.

As to claim 1, Mattina discloses a network packet (packet distribution engine, 940, Fig. 56; packet processing pipeline, 1000–1006, Fig. 57) processing apparatus comprising: a packet buffer (incoming buffers and backing store, 1050, Fig. 60; memory interface and DMA buffers, 178, Fig. 14), arranged to store a network packet (incoming buffers and backing store for packets, 1050, Fig. 60); a ring buffer (software virtualized command queue and hardware command queue, 708, 710, 712, Figs. 40A–40B), arranged to store a packet descriptor (descriptor in command queue, 720–724, Fig. 41A) of the network packet (incoming buffers and backing store for packets, 1050, Fig. 60), wherein the packet descriptor comprises a first field (descriptor metadata fields associated with buffer management, 722, Fig. 41A), the first field is arranged to indirectly indicate a buffer address (buffer stack index / top-of-stack pointer, 1058–1060, Fig. 60) of the network packet in the packet buffer (descriptor metadata associated with buffer identification rather than direct physical address storage, 722, Fig. 41A; buffer stack indexing, 1058–1060, Fig. 60), and the packet descriptor does not directly record the buffer address (buffer identification via stack/index abstraction rather than direct address pointer, 1058, Fig. 60); and a network processing unit (packet distribution engine and classifier processing packets based on descriptor information, 940, 950, Fig. 56), arranged to read the packet descriptor from the ring buffer (wait for descriptor from application and enqueue into hardware queue, 718–726, Fig. 41A), and perform predetermined packet processing of the network packet according to the packet descriptor (process packet header and distribute to worker, 1004, 1038–1046, Fig. 57; Fig. 59).

As to claim 2, Mattina discloses the network packet processing apparatus of claim 1, further comprising: a direct memory access (DMA) controller (iDMA engine / eDMA engine, 110, 112, Fig. 10; ingress DMA and egress DMA engines, 178, Fig. 14), comprising: a control circuit (DMA engine control logic, 178, Fig. 14), arranged to write the network packet into the packet buffer (write memory (PA), Fig. 11B; incoming buffers, 1050, Fig. 60), convert the buffer address of the network packet in the packet buffer into an address identification code (buffer stack index / top-of-stack identifier abstraction, 1058–1060, Fig. 60), and store the address identification code into the first field (descriptor metadata stored in command queue entry, 722, Fig. 41A).

As to claim 3, Mattina discloses the network packet processing apparatus of claim 2, wherein the packet buffer comprises a plurality of storage blocks used for storing a plurality of network packets, respectively (plural incoming buffers and backing store entries, 1050, Fig. 60), and the control circuit is arranged to map the plurality of storage blocks to a plurality of address identification codes, respectively (buffer stack indexing and push/pop mapping of buffer entries, 1058–1060, Fig. 60).
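The mapping asserted for claims 1-3 hinges on one idea: the descriptor's first field carries a small address identification code (an index), and a fixed mapping resolves that code to the actual buffer address. A minimal sketch of that indirection, using my own illustrative names and constants (nothing here is from the application or Mattina):

```python
# Illustrative sketch of descriptor indirection: the descriptor stores an ID,
# never the raw buffer address. BLOCK_SIZE and BASE_ADDR are assumed values.

BLOCK_SIZE = 2048          # assumed size of one storage block in the packet buffer
BASE_ADDR = 0x8000_0000    # assumed base address of the packet buffer

def id_to_addr(block_id: int) -> int:
    """Resolve an address identification code to a buffer address."""
    return BASE_ADDR + block_id * BLOCK_SIZE

def make_descriptor(block_id: int, length: int) -> dict:
    """The first field stores only the ID; no field stores the raw address."""
    return {"buf_id": block_id, "length": length}

# DMA side: take a free block ID, then publish the descriptor on the ring.
free_ids = list(range(16))             # pool of available block IDs
desc = make_descriptor(free_ids.pop(), length=64)

# NPU side: read the descriptor and resolve the address only when needed.
addr = id_to_addr(desc["buf_id"])
```

The design point the claims turn on is that only the ID-to-address mapping, not the descriptor itself, knows physical locations, which is why the rejection maps the first field onto Mattina's buffer stack index rather than a pointer.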
As to claim 4, Mattina discloses the network packet processing apparatus of claim 2, wherein the packet descriptor further comprises a second field (descriptor metadata field associated with status/control, 722, Fig. 41A); when the address identification code is stored into the first field, the DMA controller is further arranged to store a control code into the second field to indicate that processing of the packet descriptor is handed over to the NPU (put descriptor in hardware queue to signal processing by packet engine, 724–726, Fig. 41A).

As to claim 5, Mattina discloses the network packet processing apparatus of claim 2, wherein the packet descriptor further comprises a second field (descriptor metadata and result handling fields, 722, Fig. 41A); after reading the packet descriptor from the ring buffer (retrieve descriptor and process packet, 734–738, Fig. 41B), the NPU is further arranged to store a control code into the second field to indicate that processing of the packet descriptor is handed over to the DMA controller (add result to application queue and release buffer via push, 740, Fig. 41B; return buffer push, 1070, Fig. 61).

As to claim 6, Mattina discloses the network packet processing apparatus of claim 2, wherein the DMA controller further comprises: a buffer address pool (incoming buffers stack / backing store pool, 1050, 1058, Fig. 60), arranged to store a plurality of available buffer addresses in the packet buffer (buffer stack entries storing available buffer identifiers, 1058, Fig. 60); the control circuit is further arranged to read an available buffer address from the buffer address pool (pop operation retrieving available buffer, 1060, Fig. 60), and determine the buffer address of the network packet in the packet buffer according to the available buffer address (write memory (PA) using retrieved buffer identifier, Fig. 11B).
As to claim 7, Mattina discloses the network packet processing apparatus of claim 6, further comprising: a buffer management circuit (fill/spill control managing buffer stack usage, 1074–1076, Fig. 61), arranged to manage usage of the packet buffer (monitor TOS empty/full, 1072–1078, Fig. 61); wherein the DMA controller further comprises: a buffer address filling circuit (fill buffers control logic, 1074, Fig. 61), wherein the plurality of available buffer addresses are obtained through the buffer address filling circuit that requests the plurality of available buffer addresses from the buffer management circuit (fill buffers operation supplying stack entries, 1074, Fig. 61).

As to claim 8, Mattina discloses the network packet processing apparatus of claim 7, wherein a capacity of the buffer address pool is equal to M (buffer stack capacity parameter, Fig. 60), and when the buffer address pool is being initialized (initial fill condition when TOS empty, 1072–1074, Fig. 61), the buffer address filling circuit is arranged to request M available buffer addresses from the buffer management circuit, and store the M available buffer addresses into the buffer address pool (fill buffers populating stack, 1074, Fig. 61).

As to claim 9, Mattina discloses the network packet processing apparatus of claim 7, wherein a capacity of the buffer address pool is equal to M (buffer stack capacity, Fig. 60), and the buffer address filling circuit is arranged to monitor usage of the buffer address pool (TOS full/empty monitoring, 1072–1078, Fig. 61); when a number of available buffer addresses in the buffer address pool that are not used by the control circuit yet reaches A (threshold condition when TOS empty or below level, 1072, Fig. 61), the buffer address filling circuit is arranged to request (M-A) available buffer addresses from the buffer management circuit, and store the (M-A) available buffer addresses into the buffer address pool (fill buffers replenishment, 1074, Fig. 61).
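Claims 8-9 recite a specific pool-replenishment protocol: a pool of capacity M is filled with M addresses at initialization, and when the count of unused addresses drops to a low-water mark A, another (M - A) addresses are requested from the buffer management circuit. A sketch of that behavior, with class and method names entirely of my own invention:

```python
# Hypothetical model of the claims 8-9 replenishment protocol. "Manager"
# stands in for the buffer management circuit; all names are illustrative.

class Manager:
    """Hands out fresh, never-before-issued buffer addresses on request."""
    def __init__(self):
        self.next = 0
    def request(self, n):
        out = list(range(self.next, self.next + n))
        self.next += n
        return out

class BufferAddressPool:
    def __init__(self, capacity_m: int, low_water_a: int, manager: Manager):
        self.m = capacity_m
        self.a = low_water_a
        self.manager = manager
        # Initialization: request M addresses and store them (claim 8).
        self.available = manager.request(self.m)

    def take(self) -> int:
        """Consume one address; replenish when the unused count reaches A (claim 9)."""
        addr = self.available.pop(0)
        if len(self.available) == self.a:
            self.available += self.manager.request(self.m - self.a)
        return addr

pool = BufferAddressPool(capacity_m=8, low_water_a=2, manager=Manager())
for _ in range(6):        # consume down to the low-water mark A = 2
    pool.take()
# the pool has been topped back up with M - A = 6 fresh addresses
```

This low-water-mark scheme amortizes requests to the management circuit instead of fetching one address per packet, which is presumably the motivation for the (M - A) batch size.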
As to claim 10, Mattina discloses a network packet processing apparatus comprising: a packet buffer (incoming buffers and backing store, 1050, Fig. 60), arranged to store a network packet; a ring buffer (software virtualized command queue and hardware command queue, 708–712, Fig. 40A), arranged to store a packet descriptor of the network packet (descriptor handling, 720–724, Fig. 41A), wherein the packet descriptor comprises a first field (descriptor metadata field, 722, Fig. 41A); and a direct memory access (DMA) controller (iDMA / eDMA engines, 110, 112, Fig. 10), comprising: a control circuit (DMA engine control logic, 178, Fig. 14), arranged to write the network packet into the packet buffer (write memory (PA), Fig. 11B), and store address-related information of a buffer address of the network packet in the packet buffer into the first field (buffer stack index stored in descriptor metadata, 722, Fig. 41A; 1058, Fig. 60).

As to claim 11, Mattina discloses the network packet processing apparatus of claim 10, further comprising: a network processing unit (packet distribution engine and classifier, 940, 950, Fig. 56), arranged to perform predetermined packet processing of the network packet according to the packet descriptor (process packet header and distribute to worker, 1004, Fig. 57); wherein the packet descriptor further comprises a second field (descriptor status/control metadata, 722, Fig. 41A); when the address-related information is stored into the first field, the DMA controller is further arranged to store a control code into the second field to indicate that processing of the packet descriptor is handed over to the NPU (put descriptor in hardware queue signaling processing, 724–726, Fig. 41A).

As to claim 12, Mattina discloses the network packet processing apparatus of claim 10, further comprising: a buffer address pool (incoming buffers stack, 1058, Fig. 60), arranged to store a plurality of available buffer addresses in the packet buffer; the control circuit is further arranged to read an available buffer address from the buffer address pool (pop operation, 1060, Fig. 60), and determine the buffer address of the network packet in the packet buffer according to the available buffer address (write memory using retrieved buffer identifier, Fig. 11B).

As to claim 13, Mattina discloses the network packet processing apparatus of claim 10, further comprising: a buffer management circuit (fill/spill control managing buffer usage, 1074–1076, Fig. 61), arranged to manage usage of the packet buffer; wherein the DMA controller further comprises: a buffer address filling circuit (fill buffers logic, 1074, Fig. 61), wherein the plurality of available buffer addresses are obtained through the buffer address filling circuit that requests the plurality of available buffer addresses from the buffer management circuit (fill buffers supplying stack entries, 1074, Fig. 61).

As to claim 14, Mattina discloses the network packet processing apparatus of claim 13, wherein a capacity of the buffer address pool is equal to M (buffer stack capacity, Fig. 60), and when the buffer address pool is being initialized (initial fill condition when empty, 1072–1074, Fig. 61), the buffer address filling circuit is arranged to request M available buffer addresses from the buffer management circuit, and store the M available buffer addresses into the buffer address pool (fill buffers populating stack, 1074, Fig. 61).

As to claim 15, Mattina discloses the network packet processing apparatus of claim 13, wherein a capacity of the buffer address pool is equal to M (buffer stack capacity, Fig. 60), and the buffer address filling circuit is arranged to monitor usage of the buffer address pool (TOS monitoring, 1072–1078, Fig. 61); when a number of available buffer addresses in the buffer address pool that are not used by the control circuit yet reaches A (threshold condition of reduced availability, 1072, Fig. 61), the buffer address filling circuit is further arranged to request (M-A) available buffer addresses from the buffer management circuit, and store the (M-A) available buffer addresses into the buffer address pool (fill buffers replenishment, 1074, Fig. 61).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Raumann et al. (US Pat. No. 11237880): roughly described, a system for data-parallel training of a neural network on multiple reconfigurable units configured by a host with dataflow pipelines to perform different steps in the training; CGRA units are configured to evaluate first and second sequential sections of neural network layers based on a respective subset of training data.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUANITO C BORROMEO, whose telephone number is (571) 270-1720. The examiner can normally be reached Monday - Friday, 9-5. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henry Tsai, can be reached at (571) 272-4176. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.C.B/
Assistant Examiner, Art Unit 2184

/HENRY TSAI/
Supervisory Patent Examiner, Art Unit 2184

Prosecution Timeline

Nov 28, 2024: Application Filed
Feb 16, 2026: Non-Final Rejection, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591534: APPARATUSES, COMPUTER-IMPLEMENTED METHODS, AND COMPUTER PROGRAM PRODUCTS FOR CONNECTED DEVICE CONTROL AND USE (2y 5m to grant; granted Mar 31, 2026)
Patent 12585613: DETECTION OF AN ERROR CONDITION ON A SERIAL DATA BUS (2y 5m to grant; granted Mar 24, 2026)
Patent 12579091: SECURE DUAL FUNCTION USB CONNECTOR (2y 5m to grant; granted Mar 17, 2026)
Patent 12572488: UNIVERSAL SERIAL BUS REPEATER WITH IMPROVED REMOTE WAKE CAPABILITY (2y 5m to grant; granted Mar 10, 2026)
Patent 12572481: INTELLIGENTLY MANAGING SPOOL DATA SETS (2y 5m to grant; granted Mar 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
89%
With Interview (+13.0%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 608 resolved cases by this examiner. Grant probability derived from career allow rate.
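The headline figures follow directly from the career counts shown above; a quick arithmetic check (standard rounding assumed):

```python
# Sanity-check of the displayed statistics (rounding convention assumed).
granted, resolved = 460, 608

grant_probability = round(granted / resolved * 100)   # career allow rate as a percent
interview_lift = 13.0                                 # reported lift, percentage points

print(grant_probability)                   # 76
print(grant_probability + interview_lift)  # 89.0
```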
