Prosecution Insights
Last updated: April 19, 2026
Application No. 18/909,399

NETWORK MESSAGE PROCESSING METHOD AND DEVICE

Non-Final OA (§103, §112)
Filed: Oct 08, 2024
Examiner: CELANI, NICHOLAS P
Art Unit: 2449
Tech Center: 2400 — Computer Networks
Assignee: BEIJING VOLCANO ENGINE TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 46% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
Allow Rate With Interview: 88%

Examiner Intelligence

Career Allow Rate: 46% (grants 46% of resolved cases: 207 granted / 454 resolved; -12.4% vs TC avg)
Interview Lift: +42.2% (a strong lift, comparing allow rates with and without an interview among resolved cases)
Typical Timeline: 3y 2m average prosecution; 41 applications currently pending
Career History: 495 total applications across all art units
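
These headline figures are simple derived statistics. A minimal sketch of how they fall out of the raw counts shown above (the rounding and display conventions are assumptions about this page, not documented behavior):

    # Reproducing the headline examiner statistics from the raw counts.
    granted, resolved = 207, 454
    allow_rate = granted / resolved                # 0.4559... -> shown as "46%"

    # The stated +42.2% interview lift added to the career allow rate
    # reproduces the "88% With Interview" figure (after rounding).
    interview_lift = 0.422
    with_interview = allow_rate + interview_lift   # 0.878 -> shown as "88%"

    print(f"career allow rate: {allow_rate:.1%}")     # 45.6%
    print(f"with interview:    {with_interview:.1%}") # 87.8%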

Statute-Specific Performance

§101: 14.7% (-25.3% vs TC avg)
§102: 2.7% (-37.3% vs TC avg)
§103: 49.5% (+9.5% vs TC avg)
§112: 24.3% (-15.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 454 resolved cases.
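
Each "vs TC avg" delta is just the examiner's statute-specific rate minus the Tech Center average, so the implied TC averages can be recovered by subtraction (a quick check under that assumption):

    # Recovering the implied Tech Center average for each statute:
    # examiner rate minus the stated delta. Figures are from the list above.
    rates = {"101": (14.7, -25.3), "102": (2.7, -37.3),
             "103": (49.5, 9.5),   "112": (24.3, -15.7)}

    for statute, (examiner_pct, delta_pct) in rates.items():
        tc_avg = examiner_pct - delta_pct
        print(f"§{statute}: examiner {examiner_pct}% vs TC avg ~{tc_avg:.0f}%")

Run this and every statute recovers an implied TC average of about 40%.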

Office Action

Grounds of rejection: §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1-20 are rejected in the instant application.

Priority

The Examiner acknowledges Applicant's claim to the priority benefit of CN 202410013005.1, filed January 3, 2024.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 10/17/2024 and 7/1/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered if signed and initialed by the Examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 5 and 8-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

Claim 5 recites "storing the second network message in any one second queue in the data processing unit." It is unclear whether the claim allows for storage in a particular queue (i.e., storage in one particular queue is storage within "any one" of the set of queues) or requires storage without reference to a particular queue (i.e., "any one" means the step cannot include a mapping to one particular queue).

The claim limitation "reading, through [a] message distribution module, the first network message from the destination second queue and transmitting the first network message to a destination second server through a network," as well as the other functions of the module in the dependent claims, invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The specification does not include an algorithm of sufficient specificity for performing the claimed functionality. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;

(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
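
Editor's note, not part of the office action: the §112(f) rejection faults the specification for lacking an algorithm of "sufficient specificity" for the message distribution module's recited function. A minimal sketch of the kind of step-by-step algorithm that could support such a limitation is below; every name is a hypothetical assumption, not taken from the application or the cited references.

    # Hypothetical algorithm for the claimed "message distribution module":
    # dequeue from the destination second queue, then transmit to the
    # destination second server over the network. Names are illustrative.
    import socket
    from queue import Queue

    def distribute_message(second_queue: Queue, server_addr: tuple) -> None:
        message: bytes = second_queue.get()    # step 1: read from the queue
        with socket.create_connection(server_addr) as sock:
            sock.sendall(message)              # step 2: transmit to the server

Spelling out even this much (dequeue, connect, send) in the written description is the sort of algorithmic disclosure MPEP § 2181 contemplates for computer-implemented 112(f) limitations.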
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function, so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:

(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(b) Stating on the record what corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

The above-cited rejections are merely exemplary. Applicant is respectfully requested to correct all similar errors. Claims not specifically mentioned are rejected by virtue of their dependency.

Claim Rejections - 35 USC § 103

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Goyal (US Pub. 2020/0159568) in view of Hamilton (US Pub. 2003/0182464), and further in view of Lavian (US Pub. 2014/0105025).

With respect to Claim 1, Goyal teaches a network message processing method, comprising:

reading a first network message from a first queue maintained in a first server (a first queue is taught below; Fig. 1a, paras. 31-35: the data center includes data processing units (access nodes) connected to a plurality of servers; paras. 34-37: the access nodes may process streams of data written to or read from the servers, and each access node may act as network or storage I/O or as a gateway for multiple servers);

querying a queue mapping table maintained in a data processing unit to determine a destination second queue to which the first network message is to be transmitted, and storing the first network message in the destination second queue (paras. 37-43: the DPU includes multiple cores for performing processing on packet streams; paras. 45-46: the DPU receives a stream and converts it to work units; paras. 59-61: upon receiving a flow, the DPU performs a lookup in a flow table to map the flow to a particular core, using a queue manager to enqueue the work unit in the queue associated with that core; see also Fig. 6, para. 69: a WU queue for each core);

and calling a message distribution module in the data processing unit, and reading, through the message distribution module, the first network message from the destination second queue and transmitting the first network message to a destination second server through a network (para. 80: the DPU transmits data packets to one or more external devices; Fig. 1a, paras. 31-35: access nodes connected to a plurality of servers; paras. 34-37: the access nodes may process streams of data written to or read from the servers; para. 187: allow a packet to pass through the device).

But Goyal does not explicitly teach that a quantity of the first queues is greater than a quantity of the second queues.

Hamilton, however, does teach a first queue maintained in a first server (Fig. 1, paras. 51-54: a network of server computers with queueing; Fig. 5, paras. 67-69: queues can be accessed locally or remotely, and member queues of a macro queue can be located outside of a local computer; Fig. 4, paras. 62-66: each process has a queue that feeds into a macro queue; Hamilton therefore teaches a technique where queues on multiple devices, such as servers, can be accessed by a remote device) and a quantity of the first queues greater than a quantity of the second queues (Fig. 4, para. 66: four producer queues feed into a macro queue, so the art can map multiple inputs to a single queue).

It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the method of Goyal with a quantity of first queues greater than the quantity of second queues in order to allow a device to retrieve messages from multiple sources as if they were a single source (Hamilton, para. 5).

But modified Goyal does not explicitly teach a mapping relationship between first queues maintained in the first server and second queues maintained in the data processing unit.

Lavian, however, does teach wherein the queue mapping table is used to record a mapping relationship between first queues maintained in the first server and second queues maintained in the data processing unit. (Goyal may teach this on its own: see Goyal, paras. 59-60; the system includes a flow table that maps a flow to a core. When a flow is seen for the first time, the system writes the flow into the flow table; on subsequent occurrences, the flow can be looked up to find which core (and therefore which queue) is responsible for it. That is a mapping between the incoming data (from the first queue) and the second queues. However, Goyal does not explicitly teach mapping a server to a queue, i.e., assigning a particular server to a particular queue, and it is unclear whether the claim language calls for that. Consequently, the Examiner cites Lavian, paras. 27-28 and 36: the AR table includes a source address, and packets may be specifically enqueued in particular queues based upon source address.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the method of modified Goyal with a mapping relationship between first queues maintained in the first server and second queues maintained in the data processing unit in order to allow a device to prioritize data from particular devices (Lavian, para. 36).
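
Editor's illustration, not the references' actual code: the claimed queue mapping table is a many-to-one lookup from first queues on the server to a smaller number of second queues on the DPU. A toy sketch under assumed names:

    # Toy model of the claimed queue mapping table: several first queues
    # (server side) map onto fewer second queues (DPU side), so the
    # first-queue count exceeds the second-queue count. All identifiers
    # are illustrative assumptions.
    from queue import Queue

    second_queues = {"sq0": Queue(), "sq1": Queue()}

    # Many-to-one mapping: four first queues feed two second queues.
    queue_mapping = {"fq0": "sq0", "fq1": "sq0",
                     "fq2": "sq1", "fq3": "sq1"}

    def enqueue_to_dpu(first_queue_id: str, message: bytes) -> None:
        """Query the mapping table for the destination second queue,
        then store the message there (the claim-1 'querying' step)."""
        destination = queue_mapping[first_queue_id]
        second_queues[destination].put(message)

    enqueue_to_dpu("fq1", b"payload")   # stored in sq0

The many-to-one shape is what makes the first-queue count exceed the second-queue count, the limitation the rejection draws from Hamilton's macro queue.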
With respect to Claim 2, modified Goyal teaches the network message processing method according to claim 1, and Goyal also teaches wherein the calling the message distribution module in the data processing unit, and reading, through the message distribution module, the first network message from the destination second queue and transmitting the first network message to the destination second server through the network comprises: calling the message distribution module in the data processing unit, and controlling the message distribution module to use a multi-thread mode, wherein one second queue in the data processing unit is allocated for each thread, and different threads correspond to different second queues (paras. 4, 108-109: the DPU accelerators are multithreaded for parallel processing; paras. 37-43: the DPU includes multiple cores for processing packet streams; Fig. 6, para. 69: a WU queue for each core; para. 95: the DPU performs thread scheduling). And Hamilton also teaches calling a first target thread corresponding to the destination second queue, and reading, through the first target thread, the first network message and transmitting the first network message to the destination second server through the network (Fig. 5, paras. 67-69: queues can be accessed locally or remotely, and member queues of a macro queue can be located outside of a local computer). The same motivation to combine as for the independent claim applies here.

With respect to Claim 3, modified Goyal teaches the network message processing method according to claim 2, and Lavian also teaches wherein the reading, through the first target thread, the first network message and transmitting the first network message to the destination second server through the network comprises: determining the first queue from which the first network message comes, and determining a second target thread having a mapping relationship with the first queue (paras. 27-28, 36: the AR table includes a source address, and packets may be specifically enqueued in particular queues based upon source address; see also Goyal, paras. 59-60: the flow table maps a flow to a core, and a later lookup finds which core, and therefore which queue, is responsible for the flow), wherein each first queue of the first server is provided with a corresponding thread (paras. 27-28, 36, as above; see also Goyal, paras. 45-46: the DPU receives a stream and converts it to work units; paras. 59-61: the queue manager enqueues each work unit in the queue associated with its mapped core; Fig. 6, para. 69: a WU queue for each core). The same motivation to combine as for the independent claim applies here. And Goyal also teaches, if the second target thread is not the same thread as the first target thread, reading, through the first target thread, the first network message and forwarding the first network message to the second target thread (para. 151: a dispatcher invokes functions in a series; paras. 96, 107-108, 151: the DPU may employ service chaining, where a work unit has multiple functions performed on it; a processor may relinquish control of work units to be passed to the next processors in line, and a dispatcher may invoke the next functions); and transmitting, through the second target thread, the first network message to the destination second server through the network (para. 80: the DPU transmits data packets to one or more external devices; Fig. 1a, paras. 31-35 and paras. 34-37, as above; para. 187: allow a packet to pass through the device).

With respect to Claim 4, modified Goyal teaches the network message processing method according to claim 3, and Goyal also teaches further comprising: determining a target second queue corresponding to the second target thread (Fig. 6, para. 69: a WU queue for each core; para. 151: a dispatcher invokes functions in a series); and updating, in the queue mapping table, a mapping relationship between the first queue and the destination second queue to a mapping relationship between the first queue and the target second queue (paras. 96, 107-108, 151: service chaining, as above; paras. 59-61: flow-table lookup and per-core enqueueing, as above; see also Lavian, para. 45: updating of a queue assignment).

With respect to Claim 5, modified Goyal teaches the network message processing method according to claim 1, and Goyal also teaches further comprising: calling the message distribution module in the data processing unit and obtaining a second network message (paras. 45-46: the DPU receives a stream, which suggests a plurality of messages), and reading the second network message from the any one second queue, identifying the target first queue to which the second network message is to be transmitted, and transmitting the second network message to the target first queue (para. 80; Fig. 1a, paras. 31-35; paras. 34-37; para. 187, as above). And Lavian also teaches wherein the second network message carries a target first queue to which the second network message is to be transmitted (paras. 16-20: a destination address for a message); the same motivation to combine as for the independent claim applies here. And Hamilton also teaches storing the second network message in any one second queue in the data processing unit (para. 64: enqueueing in round-robin fashion to a plurality of queues; the Examiner also notes that enqueueing based on assignment or on source address would result in storage in "any one" queue: see Goyal, paras. 59-61, and Lavian, paras. 27-28 and 36, as above). The same motivation to combine as for the independent claim applies here.
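
Editor's illustration of the thread-per-queue pattern recited in claims 2-4: each worker thread owns exactly one second queue, and a message picked up by the "wrong" thread is forwarded to the thread that owns its mapped queue. A minimal sketch under assumed names, not drawn from the references:

    # Thread-per-queue dispatch: one second queue allocated per thread
    # (claim 2); a mismatched message is forwarded to the second target
    # thread instead of being transmitted directly (claim 3).
    import threading, time
    from queue import Queue

    NUM_THREADS = 2
    thread_queues = [Queue() for _ in range(NUM_THREADS)]

    def worker(thread_id: int) -> None:
        queue = thread_queues[thread_id]    # this thread's own second queue
        while True:
            target_id, message = queue.get()
            if target_id != thread_id:
                # Wrong thread: forward to the target thread's queue.
                thread_queues[target_id].put((target_id, message))
            else:
                print(f"thread {thread_id} transmits {message!r}")
                # a real module would send over the network here

    for i in range(NUM_THREADS):
        threading.Thread(target=worker, args=(i,), daemon=True).start()

    thread_queues[0].put((1, b"hello"))  # thread 0 forwards to thread 1
    time.sleep(0.1)                      # let the daemon threads run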
With respect to Claim 6, modified Goyal teaches the network message processing method according to claim 1, and Goyal also teaches wherein the queue mapping table maintains a mapping relationship between a plurality of first queues (Fig. 1a, paras. 31-35; paras. 34-37: access nodes connected to, and acting as gateways for, multiple servers) and a plurality of second queues (paras. 37-43, 45-46, 59-61: multiple cores, work units, and per-core queue lookup; see also Fig. 6, para. 69: a WU queue for each core). And Lavian also teaches wherein one first queue is mapped to only one second queue (paras. 27-28, 36: packets may be specifically enqueued in particular queues based upon source address), and Hamilton also teaches that one second queue supports mapping of a plurality of first queues (Fig. 4, para. 66: four producer queues feed into a macro queue). The same motivation to combine as for the independent claim applies here.

With respect to Claim 7, modified Goyal teaches the network message processing method according to claim 1, and Goyal also teaches wherein the reading, through the message distribution module, the first network message from the destination second queue and transmitting the first network message to the destination second server through the network comprises: reading, through the message distribution module, the first network message from the destination second queue, identifying the destination second server to which the first network message is to be transmitted, and transmitting the first network message to the destination second server through the network (para. 80; Fig. 1a, paras. 31-35; paras. 34-37; para. 187, as above).

With respect to Claim 8, it is substantially similar to Claim 1 and is rejected in the same manner, the same art and reasoning applying. Further, Goyal also teaches an electronic device comprising at least one processor and a memory, wherein the memory stores computer-execution instructions, and when the at least one processor executes the computer-execution instructions stored in the memory, the at least one processor is enabled to perform the method (paras. 31, 37: processor; para. 70: memory).

With respect to Claims 9-14, they are substantially similar to Claims 2-7, respectively, and are rejected in the same manner, the same art and reasoning applying.

With respect to Claim 15, it is substantially similar to Claim 1 and is rejected in the same manner, the same art and reasoning applying. Further, Goyal also teaches a non-transitory computer-readable storage medium storing computer-execution instructions that, when executed by a processor, implement the claimed operations (para. 207: non-transitory media read by processors).

With respect to Claims 16-20, they are substantially similar to Claims 2-6, respectively, and are rejected in the same manner, the same art and reasoning applying.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS P CELANI, whose telephone number is (571) 272-1205. The examiner can normally be reached M-F, 9-5.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vivek Srivastava, can be reached at 571-272-7304. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/NICHOLAS P CELANI/
Examiner, Art Unit 2449

Prosecution Timeline

Oct 08, 2024: Application Filed
Jan 29, 2026: Non-Final Rejection under §103 and §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592949: METHODS AND SYSTEMS FOR CATEGORIZING CYBER INCIDENT LOGS FEATURING DYNAMIC RELATIONSHIPS TO PRE-EXISTING CYBER INCIDENT REPORTS IN REAL-TIME (granted Mar 31, 2026; 2y 5m to grant)
Patent 12580823: ON-PREMISE MACHINE LEARNING MODEL SELECTION IN A NETWORK ASSURANCE SERVICE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12574424: Systems and methods for video-conference network system suitable for scalable, automatable, inter-social domain, private tele-consultation service (granted Mar 10, 2026; 2y 5m to grant)
Patent 12574208: DATA ENCRYPTION AND DECRYPTION USING SCREENS AND LFSR-GENERATED LOGIC BLOCKS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12547471: TECHNIQUES FOR MANAGING EDGE DEVICE PROVISIONING (granted Feb 10, 2026; 2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 46% (88% with interview, a +42.2% lift)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 454 resolved cases by this examiner. Grant probability is derived from the career allow rate.
