Prosecution Insights
Last updated: April 19, 2026
Application No. 18/141,595

SYSTEM AND METHOD FOR MESSAGE OR DATA AGGREGATION IN COMPUTER NETWORKS

Non-Final OA — §101, §103
Filed: May 01, 2023
Examiner: ONAT, UMUT
Art Unit: 2194
Tech Center: 2100 — Computer Architecture & Software
Assignee: Mellanox Technologies Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (above average) — 415 granted / 523 resolved; +24.3% vs TC avg
Interview Lift: +28.7% (strong) among resolved cases with interview
Typical Timeline: 3y 0m average prosecution; 35 applications currently pending
Career History: 558 total applications across all art units

Statute-Specific Performance

§101: 14.3% (-25.7% vs TC avg)
§103: 42.1% (+2.1% vs TC avg)
§102: 15.6% (-24.4% vs TC avg)
§112: 18.5% (-21.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 523 resolved cases.

Office Action

DETAILED ACTION

Claims 1-20 are pending in the application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Examiner’s Notes

The Examiner cites particular sections in the references as applied to the claims below for the convenience of the applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant(s) fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3 and 5-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claims 1-3 and 5-10 are directed to a system comprising a network module, one or more of a network library, an API, or an HPC application, and a plurality of buffers. As currently presented, neither the system nor the components of the system are disclosed as being limited to hardware embodiments.
Furthermore, the specification discloses the system and its components can be implemented as software (paragraphs [0013], [0039], [0098]). As such, the system recited in claims 1-3 and 5-10 encompasses software embodiments, which are non-statutory. See MPEP §2106.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Graham et al. (US 2021/0218808 A1; from IDS filed on 10/10/2024; hereinafter Graham) in view of Alanara et al. (US 2008/0146242 A1; hereinafter Alanara).

With respect to claim 1, Graham teaches: A system (see e.g. Graham, Fig. 1) for message or data aggregation in computer networks (see e.g. Graham, paragraph 28: “systems for aggregating egress messages”), comprising: a network module (see e.g. Graham, Fig. 1: “Network Adapter 112”) of a host machine (see e.g. Graham, Fig. 1: “Compute Node 102”) to receive communication comprising messages having data (see e.g. Graham, paragraph 39: “Network Adapter 112 that is configured to communicate messages over the communication network with peer compute nodes”), the network module further to determine a plurality of destination host machines for the messages (see e.g. Graham, paragraph 34: “network adapter sends messages to a plurality of destinations”; and paragraph 51: “messages to specified destinations”) and to aggregate a subset of the messages or the data to be transmitted to one of the plurality of destination host machines (see e.g. Graham, paragraph 28: “aggregate messages that share the same destination”; paragraph 34: “if a network adapter sends messages to a plurality of destinations, but a group of the messages is first sent to the same switch in the communication network, the network adapter may aggregate the group of messages and send the aggregated message to the switch”; and paragraph 50: “aggregate messages for given destinations”), wherein the aggregation is based at least in part on a bandwidth (see e.g. Graham, paragraph 31: “aggregation may stop when a minimum bandwidth specification is met”; and paragraph 61: “Aggregation Control circuit enters a Check-Bandwidth step 410”) and a buffer availability associated with the one of the plurality of destination host machines (see e.g. Graham, paragraph 31: “aggregation of egress messages to create an aggregated message may stop… when a buffer size has been reached”), and

Even though Graham discloses considering buffer availability for aggregation of messages (see e.g. Graham, paragraph 31), Graham does not explicitly disclose this availability being determined from a status communication.
However, Alanara teaches: wherein the buffer availability is determined from a status communication between the one of the plurality of destination host machines and the host machine (see e.g. Alanara, paragraph 6: “UE reports to the Node B an amount of data stored in the buffer in a buffer status report”; paragraph 35: “an uplink buffer status report, indicating the amount of data that is buffered in the logical channel queues in the UE MAC, is provided by the UE to the scheduler”; and Fig. 3).

Graham and Alanara are analogous art because they are in the same field of endeavor: managing network messages in view of buffer availability. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Graham with the teachings of Alanara. The motivation/suggestion would be to improve the accuracy of determining buffer availability, thus increasing the reliability of the network communications.

With respect to claim 2, Graham as modified teaches: The system of claim 1, wherein the network module is further to: remove received metadata from the subset of the messages based in part on the subset of the messages destined for the one of the plurality of destination host machines (see e.g. Graham, paragraph 43: “when the MAC aggregates multiple messages having the same destination, the MAC may strip-off the common destination fields of the messages (sending a single destination header instead), and possibly strip-off additional header fields”); aggregate the data from the subset of the messages to provide the subset of the data (see e.g. Graham, paragraph 43: “when the MAC aggregates multiple messages having the same destination”; paragraph 28: “aggregate messages that share the same destination”; and paragraph 34: “aggregate the group of messages and send the aggregated message to the switch”); and add new metadata to the subset of the data (see e.g. Graham, paragraph 43: “sending a single destination header instead”), the new metadata identifying at least an execution unit associated with the one of the plurality of destination host machines (see e.g. Graham, paragraph 30: “When aggregating messages destined to a common destination compute node, individual messages in the aggregated message may be addressed to different processors and/or processes in the common destination compute node. When aggregating messages destined to a common destination path, individual messages in the aggregated message may be addressed to different compute nodes, processors and/or processes reachable via the common destination path”; and paragraph 43: “when the MAC aggregates multiple messages having the same destination, the MAC may strip-off the common destination fields of the messages (sending a single destination header instead)”), wherein the subset of the data is to be received by a process or thread associated with the execution unit (see e.g. Graham, paragraph 30: “aggregated message may be addressed to different processors and/or processes in the common destination compute node… the aggregated message may be addressed to different compute nodes, processors and/or processes reachable via the common destination path”).

With respect to claim 3, Graham as modified teaches: The system of claim 1, further comprising: one or more of a network library (see e.g. Graham, paragraph 74: “programmed in software to carry out the functions described herein”; and paragraph 34: “sends messages to a plurality of destinations… aggregate the group of messages”), an application programming interface (API), or a High-Performance Computing (HPC) application on the host machine, as part of the network module (see e.g. Graham, paragraph 74: “elements of the Network Adapters… are programmed in software to carry out the functions described herein”; and paragraph 34: “network adapter sends messages to a plurality of destinations… the network adapter may aggregate the group of messages and send the aggregated message”), to initiate the transmission of the subset of the messages or the data between the host machine and the one of the plurality of destination host machines (see e.g. Graham, paragraph 34: “if a network adapter sends messages to a plurality of destinations, but a group of the messages is first sent to the same switch in the communication network, the network adapter may aggregate the group of messages and send the aggregated message”).

With respect to claim 4, Graham as modified teaches: The system of claim 1, further comprising: a Network Interface Card (NIC) (see e.g. Graham, paragraph 37: “network-connected devices such as Network-Interface Controllers (NICs)”), a SmartNIC, or a switch (see e.g. Graham, paragraph 37: “switches”) that comprises at least one processor, as part of the network module (see e.g. Graham, paragraph 37: “computers that run the shared task are typically connected to the network through a network adapter (NIC in Ethernet nomenclature, HCA in InfiniBand™ nomenclature, or similar for other communication networks”), wherein the at least one processor is to perform the transmission of the subset of the messages or the data (see e.g. Graham, paragraph 34: “if a network adapter sends messages to a plurality of destinations, but a group of the messages is first sent to the same switch in the communication network, the network adapter may aggregate the group of messages and send the aggregated message”) on behalf of a central processing unit (CPU) or a graphical processing unit (GPU) of the host machine (see e.g. Graham, paragraph 48: “The MAC may be implemented as a separate dedicated block on a device (e.g., a processor such as a CPU or GPU) or an FPGA) connected to a standard network adapter that does not include a MAC”).

With respect to claim 5, Graham as modified teaches: The system of claim 1, wherein the network module is further to aggregate the subset of the data in a buffer associated with the network module till a predetermined threshold of the buffer is reached (see e.g. Graham, paragraph 60: “in a Check-Size step 404, the Aggregation Control circuit checks the Aggregation Circuit (with destination ID equals to destination (i)) against a message size criterion. For example, the accumulated size of the aggregated message is compared to a predefined threshold”), wherein the network module is to perform the transmission of the subset of the messages or the data upon the predetermined threshold of the buffer being reached (see e.g. Graham, paragraph 60: “If the message size is greater than the threshold, the Aggregation Control circuit enters a Post-Message step 406, wherein the Aggregation Control circuit posts the aggregated message that is stored in the aggregation circuit in the Egress Queue”).

With respect to claim 6, Graham as modified teaches: The system of claim 1, wherein the network module is further to: perform a flush operation to cause the transmission of the subset of the messages or the data to the one of the plurality of destination host machines (see e.g. Graham, paragraph 34: “if a network adapter sends messages to a plurality of destinations…, the network adapter may aggregate the group of messages and send the aggregated message”; and paragraph 60: “deallocating and emptying Aggregation Circuits… Aggregation Control circuit enters a Post-Message step 406, wherein the Aggregation Control circuit posts the aggregated message that is stored in the aggregation circuit in the Egress Queue, and deallocates the aggregation circuit”).
With respect to claim 7, Graham as modified teaches: The system of claim 1, wherein the network module is further to: coalesce the messages or the data as part of the aggregation of the subset of the messages or the data based at least in part on at least one operation associated with the subset of the messages or the data (see e.g. Graham, paragraph 42: “checks the destination of the messages and may aggregate a plurality of messages that are destined to the same peer host (same destination compute node) into a single aggregated message”; paragraph 45: “aggregation of read operations is supported—both the source and destination network adapters comprise MDCs; read requests are aggregated into a single message”; and paragraph 47: “atomic read and writes may also be aggregated. In yet other embodiments, multiple transaction types may be combined to a single aggregated message”).

With respect to claim 8, Graham as modified teaches: The system of claim 1, further comprising: a plurality of buffers associated with the host machine and with the one of the plurality of destination host machines and registered with the network module of the host machine or with a destination network module of the one of the plurality of destination host machines (see e.g. Graham, paragraph 55: “MAC sends the aggregated messages directly to buffers in the Packet Processing circuit”; and Fig. 2), wherein the network module of the host machine is to receive information associated with the buffer availability (see e.g. Graham, paragraph 31: “aggregation of egress messages to create an aggregated message may stop… when a buffer size has been reached”).

Graham does not but Alanara teaches: from the destination network module of the one of the plurality of destination host machines (see e.g. Alanara, paragraph 6: “UE reports to the Node B an amount of data stored in the buffer in a buffer status report”; and paragraph 35: “an uplink buffer status report, indicating the amount of data that is buffered in the logical channel queues in the UE MAC, is provided by the UE to the scheduler”).

Graham and Alanara are analogous art because they are in the same field of endeavor: managing network messages in view of buffer availability. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Graham with the teachings of Alanara. The motivation/suggestion would be to improve the accuracy of determining buffer availability, thus increasing the reliability of the network communications.

With respect to claim 9, Graham as modified teaches: The system of claim 1, wherein the bandwidth is determined (see e.g. Graham, paragraph 61: “Aggregation Control circuit enters a Check-Bandwidth step 410”) Graham does not but Alanara teaches: based, at least in part, on the status communication (see e.g. Alanara, paragraph 35: “the uplink resource allocation is handled by a scheduler in the Node B… The UE may, however, request some uplink resource allocation for future use. In requesting the uplink resource for future use, an uplink buffer status report, indicating the amount of data that is buffered in the logical channel queues in the UE MAC, is provided by the UE to the scheduler”; and paragraph 37: “if it is determined that an uplink resource allocation is needed for future use, such as for transmitting small amounts of intermittent data, the UE prepares an uplink resource allocation request (120). The UE checks if there is an allocation available for transmitting the request (140). The request includes a buffer status report, and sending the buffer status report itself needs an allocation of the uplink resource.
If there is an allocation available, the UE sends the buffer status report (150), receives resource allocation information from the Node B (170)”).

Graham and Alanara are analogous art because they are in the same field of endeavor: managing network messages in view of buffer availability. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Graham with the teachings of Alanara. The motivation/suggestion would be to improve the accuracy of determining availability of network resources, thus increasing the overall efficiency of the network communications.

With respect to claim 10, Graham as modified teaches: The system of claim 1, wherein the communication is queued in a data structure (see e.g. Graham, paragraph 50: “an Egress Queue 208, which is configured to temporarily store aggregated messages until the messages are handled by Packet-Processing”), wherein the data structure is controlled by a single process that is associated with an operating system or a processor thread (see e.g. Graham, paragraph 48: “a single process runs on Host 108, and the MAC aggregates messages that the single process generates (and are destined to the same Remote Compute Node)”).

With respect to claims 11-17: Claims 11-17 are directed to a method corresponding to the active functions implemented by the system disclosed in claims 1-5, 7, and 8, respectively; please see the rejections directed to claims 1-5, 7, and 8 above, which also cover the limitations recited in claims 11-17.

With respect to claims 18-20: Claims 18-20 are directed to a system comprising one or more processing units to implement active functions corresponding to the active functions implemented by the system disclosed in claims 1, 2, and 8, respectively; please see the rejections directed to claims 1, 2, and 8 above, which also cover the limitations recited in claims 18-20.
Note that Graham also discloses the systems comprising processors to implement the active functions of the system disclosed in claims 1, 2, and 8 (see Graham, paragraph 74).

CONCLUSION

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Herz et al. (US 2003/0005074 A1) discloses a data stream aggregation process based on bandwidth and available buffer memory (see paragraphs 68-70).
Jalal et al. (US 2021/0126877 A1) discloses a process to combine messages into a single message based on buffer availability and based on bandwidth (see paragraphs 41-42).
Yang (US 2002/0114334 A1) discloses a network session aggregation process based on availability of system buffers and bandwidth (see paragraphs 53-57).

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Umut Onat, whose telephone number is (571) 270-1735. The examiner can normally be reached M-Th 9:00-7:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kevin L Young, can be reached at (571) 270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/UMUT ONAT/
Primary Examiner, Art Unit 2194

Prosecution Timeline

May 01, 2023 — Application Filed
Mar 19, 2026 — Non-Final Rejection, §101 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602271 — NON-BLOCKING RING EXCHANGE ALGORITHM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12572397 — REAL-TIME EVENT DATA REPORTING ON EDGE COMPUTING DEVICES (granted Mar 10, 2026; 2y 5m to grant)
Patent 12572645 — SYSTEMS AND METHODS FOR MANAGING SETTINGS BASED UPON USER PERSONA USING HETEROGENEOUS COMPUTING PLATFORMS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12566647 — System And Method for Implementing Micro-Application Environments (granted Mar 03, 2026; 2y 5m to grant)
Patent 12547481 — SYSTEMS, METHODS, AND DEVICES FOR ACCESSING A COMPUTATIONAL DEVICE KERNEL (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+28.7%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 523 resolved cases by this examiner. Grant probability derived from career allow rate.
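As the note above states, the 79% grant probability is simply the examiner's career allow rate restated from the figures quoted earlier (415 granted of 523 resolved). A quick arithmetic check of that derivation, as a minimal sketch (the variable names are illustrative, not part of any tool's API):

```python
# Career figures quoted in the examiner statistics above
granted = 415
resolved = 523

# Career allow rate = granted / resolved, reported as a whole percentage
allow_rate_pct = round(100 * granted / resolved)
print(allow_rate_pct)  # prints 79, matching the quoted grant probability
```

The "With Interview: 99%" figure is not a simple sum of 79% and the +28.7% lift (that would exceed 100%), so it presumably comes from the tool's own interview-conditioned subset of cases rather than this arithmetic.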
