Prosecution Insights
Last updated: April 19, 2026
Application No. 18/924,994

NETWORK PROCESSOR USING FAKE PACKET GENERATION FOR IMPROVING PROCESSING EFFICIENCY OF LEARNING PACKETS AND ASSOCIATED PACKET PROCESSING METHOD

Non-Final OA §103
Filed: Oct 23, 2024
Examiner: LEE, BRYAN Y
Art Unit: 2445
Tech Center: 2400 — Computer Networks
Assignee: Airoha Technology (Suzhou) Limited
OA Round: 1 (Non-Final)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (216 granted / 324 resolved; +8.7% vs TC avg), above average
Interview Lift: +42.2% among resolved cases with an interview (strong)
Typical Timeline: 3y 7m average prosecution; 17 applications currently pending
Career History: 341 total applications across all art units

Statute-Specific Performance

§101: 9.4% (-30.6% vs TC avg)
§102: 19.6% (-20.4% vs TC avg)
§103: 54.8% (+14.8% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 324 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pre-Grant Publication US-2006/0168400-A1 to Ronciak et al. ("Ronciak") in view of the NPL "NVIDIA MESSAGE ACCELERATOR VMA Documentation rev 952" to NVIDIA ("NVIDIA") (see version 9.8.60 for citations and better-formatted documentation).

As to claim 1, Ronciak discloses a network processor comprising: a processor (Ronciak; Fig. 1; CPU) comprising: at least one processor core, arranged to load and execute program codes to deal with packet processing, wherein the program codes comprise: a network driver (Ronciak; Fig. 1; Network Controller); a network stack of an operating system kernel (Ronciak; Fig. 1; Protocol Layers); and a cache, arranged to cache at least a portion of instructions and data associated with processing of the fake packet that is performed by the network stack of the OS kernel (Ronciak; Fig. 1; Cache; store prefetch instructions [0005] and data [0029]). Ronciak does not expressly disclose a packet pre-learning module, arranged to generate a fake packet, and send the fake packet to the network stack of the OS kernel through the network driver. NVIDIA discloses such a packet pre-learning module (NVIDIA; dummy send command to send dummy packets to warm up the cache; p. 66).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the dummy packets of NVIDIA with the caching of Ronciak. One of ordinary skill in the art would have been motivated to combine the teachings because both are concerned with caching, and the dummy packets of NVIDIA would allow the cache to be warmed. Accordingly, the prior art references teach all of the claimed elements. Furthermore, it would have been obvious to combine the teachings because all the claimed elements were known in the prior art, one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art.

As to claim 2, Ronciak-NVIDIA discloses the network processor of claim 1. Ronciak does not expressly disclose that the packet pre-learning module is arranged to generate the fake packet periodically, but NVIDIA does (NVIDIA; DummysendcycleDuration; p. 66).

As to claim 3, Ronciak-NVIDIA discloses the network processor of claim 1, wherein the network stack of the OS kernel is arranged to send the fake packet to the network driver after processing the fake packet, and the network driver is arranged to drop the fake packet after receiving the fake packet from the network stack of the OS kernel (NVIDIA; dummy packets reach the hardware NIC and are then dropped; p. 66).

As to claim 4, Ronciak-NVIDIA discloses the network processor of claim 1, further comprising: a network interface, arranged to receive a packet from a network before the fake packet is generated (Ronciak; Fig. 1; Network controller; [0014] receive packets); wherein a configuration of the fake packet is based at least partly on the packet (Ronciak; Fig. 1; packet headers; [0028]).

As to claim 5, Ronciak-NVIDIA discloses the network processor of claim 4, wherein the network interface is a local area network interface (Ronciak; Fig. 1; network 18; LAN; [0018]).

As to claim 6, Ronciak-NVIDIA discloses the network processor of claim 4, wherein the network interface is a wide area network interface (Ronciak; Fig. 1; network 18; WAN; [0018]).

As to claim 7, Ronciak-NVIDIA discloses the network processor of claim 1, further comprising: a network interface, arranged to receive a packet from a network after the fake packet is generated, and send the packet to the network stack of the OS kernel through the network driver (Ronciak; Fig. 1; Network controller, device driver, protocol layers); wherein a sequence of instructions invoked by the network stack of the OS kernel for processing the packet received by the network interface is identical to a sequence of instructions invoked by the network stack of the OS kernel for processing the fake packet generated by the packet pre-learning module (Ronciak; Fig. 1; Cache; store prefetch instructions [0005] and data [0029]; a cache hit will have the same instructions).

As to claim 8, Ronciak-NVIDIA discloses the network processor of claim 7, wherein the network interface is a local area network interface (Ronciak; Fig. 1; network 18; LAN; [0018]).

As to claim 9, Ronciak-NVIDIA discloses the network processor of claim 7, wherein the network interface is a wide area network interface (Ronciak; Fig. 1; network 18; WAN; [0018]).

As to claim 10, Ronciak-NVIDIA discloses a packet processing method comprising: executing a network driver; executing a network stack of an operating system kernel; generating a fake packet, and sending the fake packet to the network stack of the OS kernel through the network driver; and caching at least a portion of instructions and data associated with processing of the fake packet that is performed by the network stack of the OS kernel. See the similar rejection of claim 1.

As to claim 11, Ronciak-NVIDIA discloses the packet processing method of claim 10, wherein generating the fake packet comprises generating the fake packet periodically. See the similar rejection of claim 2.

As to claim 12, Ronciak-NVIDIA discloses the packet processing method of claim 10, wherein executing the network stack of the OS kernel comprises sending the fake packet to the network driver after processing the fake packet, and executing the network driver comprises dropping the fake packet after receiving the fake packet from the network stack of the OS kernel. See the similar rejection of claim 3.

As to claim 13, Ronciak-NVIDIA discloses the packet processing method of claim 10, further comprising: receiving a packet from a network interface before the fake packet is generated; wherein a configuration of the fake packet is based at least partly on the packet. See the similar rejection of claim 4.

As to claim 14, Ronciak-NVIDIA discloses the packet processing method of claim 13, wherein the network interface is a local area network interface. See the similar rejection of claim 5.

As to claim 15, Ronciak-NVIDIA discloses the packet processing method of claim 13, wherein the network interface is a wide area network interface. See the similar rejection of claim 6.

As to claim 16, Ronciak-NVIDIA discloses the packet processing method of claim 10, further comprising: receiving a packet from the network interface after the fake packet is generated; and sending the packet to the network stack of the OS kernel through the network driver; wherein a sequence of instructions invoked by the network stack of the OS kernel for processing the packet received from the network interface is identical to a sequence of instructions invoked by the network stack of the OS kernel for processing the fake packet. See the similar rejection of claim 7.

As to claim 17, Ronciak-NVIDIA discloses the packet processing method of claim 16, wherein the network interface is a local area network interface. See the similar rejection of claim 8.

As to claim 18, Ronciak-NVIDIA discloses the packet processing method of claim 16, wherein the network interface is a wide area network interface. See the similar rejection of claim 9.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRYAN LEE, whose telephone number is (571) 270-5606. The examiner can normally be reached Mon-Fri, 9am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, OSCAR LOUIE, can be reached at (571) 270-1684. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/BRYAN Y LEE/
Primary Examiner, Art Unit 2445
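The technique at the heart of this rejection, a pre-learning module that periodically pushes dummy packets through the kernel network stack so the instructions and data on that code path stay warm in the CPU cache, can be illustrated with a small user-space sketch. This is a hypothetical illustration only: the `PacketPreLearner` class and its method names are invented for this example, and it uses ordinary loopback UDP sends rather than the claimed driver-level fake-packet path or VMA's actual dummy-send mechanism.

```python
import socket
import threading


class PacketPreLearner:
    """Hypothetical sketch of a packet pre-learning module: periodically
    emit a small dummy packet so the network stack's processing code
    stays resident in the cache between bursts of real traffic."""

    def __init__(self, interval_s: float = 0.001, payload: bytes = b"\x00" * 64):
        self.interval_s = interval_s       # period between dummy sends (claim 2)
        self.payload = payload             # dummy packet contents
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self._stop = threading.Event()

    def mimic(self, recent_packet: bytes) -> None:
        # Claim 4: base the fake packet's configuration on a recently
        # received real packet (here, just reuse its first 64 bytes).
        self.payload = recent_packet[:64]

    def warm_once(self) -> int:
        # Send the dummy packet toward the loopback "discard" port (9);
        # it traverses the UDP/IP stack and is then dropped, analogous
        # to the fake packet being discarded after processing (claim 3).
        return self.sock.sendto(self.payload, ("127.0.0.1", 9))

    def run(self) -> None:
        # Periodic generation loop; stop() from another thread ends it.
        while not self._stop.wait(self.interval_s):
            self.warm_once()

    def stop(self) -> None:
        self._stop.set()
```

A caller would construct the learner, optionally feed it a recently observed packet via `mimic()`, and run `run()` on a background thread; the point of the exercise is only that the same stack code executes for dummy and real packets, so subsequent real packets hit warm caches.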

Prosecution Timeline

Oct 23, 2024
Application Filed
Feb 25, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587575: IMS RECOVERY (granted Mar 24, 2026; 2y 5m to grant)
Patent 12549947: FIRST NODE, SECOND NODE, FOURTH NODE, FIFTH NODE AND METHODS PERFORMED THEREBY FOR HANDLING INDICATIONS (granted Feb 10, 2026; 2y 5m to grant)
Patent 12542775: SECURE FILE TRANSFER (granted Feb 03, 2026; 2y 5m to grant)
Patent 12543041: CONNECTION AUTHENTICATION SYSTEM AND METHOD (granted Feb 03, 2026; 2y 5m to grant)
Patent 12536037: TASK PROCESSING SYSTEM, METHOD, AND APPARATUS (granted Jan 27, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview: 99% (+42.2% lift)
Median Time to Grant: 3y 7m
PTA Risk: Low
Based on 324 resolved cases by this examiner. Grant probability derived from career allow rate.
