Prosecution Insights
Last updated: April 19, 2026
Application No. 18/480,217

NETWORK SWITCH WITH HYBRID ARCHITECTURE

Status: Non-Final OA (§103)
Filed: Oct 03, 2023
Examiner: JAGANNATHAN, MELANIE
Art Unit: 2468
Tech Center: 2400 (Computer Networks)
Assignee: Hewlett Packard Enterprise Development LP
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 86% (above average; 659 granted / 762 resolved; +28.5% vs TC avg)
Interview Lift: +5.0% (minimal), based on resolved cases with interview
Typical Timeline: 3y 0m average prosecution; 25 applications currently pending
Career History: 787 total applications across all art units

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§102: 23.6% (-16.4% vs TC avg)
§103: 47.7% (+7.7% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)
Based on career data from 762 resolved cases; comparisons are against the Tech Center average estimate.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention, in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 6, 12-13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chrysos et al. (US 2015/0278135) in view of Brown et al. (US 8,400,915).

Regarding claim 1: A device (switch unit, Figure 1, element 100) comprising: an output buffer (element 104); a data crossbar (element 108) connected to the output buffer; an input buffer connected to the data crossbar (element 102); an input queue configured to transfer a packet from the input buffer to the output buffer over the data crossbar in response to the packet being eligible for packet pushing (each group, element 112, of ports has corresponding logic blocks, elements 102 and 104, that handle the data buffering and link layer protocol for that group of subports; a link layer portion of the logic blocks is configured to manage the link protocol operations of the switch unit, including credits, error checking, and packet transmission, and to handle sequencing of an arbitration-winning packet out to the data crossbar 108, as well as receiving incoming crossbar data to sequence to an output link, para. 0028), and to send a push request; a request crossbar connected to the input queue; and an output queue configured to receive the push request from the input queue over the request crossbar (each input group arbiter, element 212, manages requests for packet transfers from the corresponding group of links through the use of a link queue, Figure 3, element 302, which includes a plurality of entries corresponding to packets buffered in the accumulator, Figure 2, element 210; each entry in the link queue specifies a destination port of the corresponding buffered packet and represents a request to transfer data through the data crossbar to that destination port, para. 0040; the output group arbiter, Figure 2, element 214, corresponding to the output group having output links, receives 4-bit sub-vectors from the input group arbiters representing unified requests from groups of input links to transfer data to the output links, para. 0045), and to control reading of the packet in response to granting the push request (the grant logic block, Figure 4, element 402, is configured to determine, for each output link, whether the output link can grant an incoming request according to whether any of a plurality of conditions are met, such as that an output subport cannot issue a grant if the output subport has no credits, para. 0045).

Chrysos does not disclose sending a request in parallel with transferring the packet to the output buffer. Brown discloses a packet switch, Figure 1, element 105, which includes ingress ports, element 115, egress ports, element 125, a switch fabric, element 120, enqueue modules, element 140, a pipeline scheduler, element 145, and egress port status modules, element 150.
Brown discloses a packet pointer, element 205, of the enqueue request, element 200, which identifies the corresponding packet stored in the ingress port, and a packet type indicator, element 210, of the enqueue request, which identifies a type of the packet corresponding to the enqueue request. Brown discloses that the enqueue module includes a dequeue module and a shift module; the dequeue module is connected to the ingress port corresponding to the enqueue module, and the shift module receives enqueue requests from the ingress port corresponding to the enqueue module and stores the enqueue requests, Figure 26. Brown discloses that the dequeue logic, element 2610, Figure 26, generates a dequeue request based on the number of clock cycles determined for routing the packet to each destination egress port, to synchronize removal of the packet from the ingress port with removal of the enqueue request of the packet from the enqueue queue, column 30, lines 21-30.

Before the filing of the invention it would have been obvious to modify Chrysos to include Brown's packet switch and logic of synchronizing removal of the packet with requests. One of ordinary skill in the art would be motivated to do so for an increase in packet throughput of the packet switch, column 2, lines 13-15.

Regarding claim 2: The device of claim 1. Chrysos does not explicitly disclose further comprising a grant crossbar connected to the output queue and to the input queue, the output queue further configured to send a push grant from the output queue to the input queue over the grant crossbar in parallel with controlling the reading of the packet.
Brown discloses a grant pipeline stage which selects packets for routing to the egress ports by selecting packet requests generated by a request pipeline stage that correspond to the packets; the grant pipeline stage generates packet grants based on the selected packet requests and selects packets corresponding to the selected packet requests for routing to the egress ports, column 17, lines 8-18. Brown discloses that the dequeue logic, element 2610, Figure 26, generates a dequeue request based on the number of clock cycles determined for routing the packet to each destination egress port, to synchronize removal of the packet from the ingress port with removal of the enqueue request of the packet from the enqueue queue, column 30, lines 21-30. Before the filing of the invention it would have been obvious to modify Chrysos to include Brown's packet switch and logic of synchronizing removal of the packet with requests. One of ordinary skill in the art would be motivated to do so for an increase in packet throughput of the packet switch, column 2, lines 13-15.

Regarding claim 3: The device of claim 1, wherein the packet is eligible for packet pushing when credit is available at the output buffer for the input queue (the grant logic block, Figure 4, element 402, is configured to determine, for each output link, whether the output link can grant an incoming request according to whether any of a plurality of conditions are met, such as that an output subport cannot issue a grant if the output subport has no credits, para. 0046).

Regarding claim 6: The device of claim 1. Chrysos does not disclose wherein the output queue is further configured to assign a sequence number to the packet based on ordering of the packet in a packet flow.
Brown discloses that each of the ingress ports receives packets from the link partner corresponding to the ingress port in a sequential order, stores the packets, and generates enqueue requests corresponding to the packets; the enqueue module corresponding to the ingress port stores the enqueue requests; and each of the enqueue requests generated by an ingress port identifies the corresponding packet stored in the ingress port and includes data for determining a routing order for routing the packets from the ingress port to the egress ports, column 6, lines 4-13. Before the filing of the invention it would have been obvious to modify Chrysos to include Brown's packet switch and logic. One of ordinary skill in the art would be motivated to do so for an increase in packet throughput of the packet switch, column 2, lines 13-15.

Regarding claim 12: A method comprising: determining whether a packet is eligible for packet pushing; sending a push request from an input queue to an output queue in response to the packet being eligible for packet pushing (switch unit, Figure 1, element 100; each input group arbiter, element 212, manages requests for packet transfers from the corresponding group of links through the use of a link queue, Figure 3, element 302, which includes a plurality of entries corresponding to packets buffered in the accumulator, Figure 2, element 210; each entry in the link queue specifies a destination port of the corresponding buffered packet and represents a request to transfer data through the data crossbar to that destination port, para. 0040; the output group arbiter, Figure 2, element 214, corresponding to the output group having output links, receives 4-bit sub-vectors from the input group arbiters representing unified requests from groups of input links to transfer data to the output links, para. 0045); transferring the packet from an input buffer to an output buffer; granting the push request at the output; and reading the packet from the output buffer in response to the push request being granted (the grant logic block, Figure 4, element 402, is configured to determine, for each output link, whether the output link can grant an incoming request according to whether any of a plurality of conditions are met, such as that an output subport cannot issue a grant if the output subport has no credits, para. 0045).

Chrysos does not disclose a push request comprising a description of the packet, transferring the packet from an input buffer to an output buffer in parallel with the sending of the push request, or sending a push grant from the output queue to the input queue in parallel with the reading of the packet from the output buffer. Brown discloses an enqueue request, Figure 2, element 200, which corresponds to a packet stored in an ingress port and includes a packet pointer, a packet type indicator, a destination port indicator, and a credit request. Brown discloses a grant pipeline stage which selects packets for routing to the egress ports by selecting packet requests generated by a request pipeline stage that correspond to the packets; the grant pipeline stage generates packet grants based on the selected packet requests and selects packets corresponding to the selected packet requests for routing to the egress ports, column 17, lines 8-18. Brown discloses that the dequeue logic, element 2610, Figure 26, generates a dequeue request based on the number of clock cycles determined for routing the packet to each destination egress port, to synchronize removal of the packet from the ingress port with removal of the enqueue request of the packet from the enqueue queue, column 30, lines 21-30. Before the filing of the invention it would have been obvious to modify Chrysos to include Brown's packet switch and logic of synchronizing removal of the packet with requests.
One of ordinary skill in the art would be motivated to do so for an increase in packet throughput of the packet switch, column 2, lines 13-15.

Regarding claim 13: The method of claim 12. Chrysos does not disclose wherein determining whether the packet is eligible for packet pushing is based on whether credit is available at the output buffer for the input queue, transferring the packet from the input buffer to the output buffer decrements the credit, and reading the packet from the output buffer increments the credit. Brown discloses that the pipeline scheduler selects one of the identified packets for routing to the egress port in a subsequent pipeline stage of the pipeline scheduler and selectively adjusts the credits available for the egress port based on credits requested for routing the selected packet to the egress port; the pipeline scheduler may decrement the available credits for the egress port by a number of additional credits to indicate the credits requested for routing the selected packet to the egress port; the pipeline scheduler then updates the credits available for an egress port in a previous pipeline stage of the pipeline scheduler based on a minimum number of credits, and updates the credits available for the egress port in a subsequent pipeline stage of the pipeline scheduler based on credits requested by a packet selected for routing, column 6, lines 58-67. Before the filing of the invention it would have been obvious to modify Chrysos to include Brown's packet switch and logic. One of ordinary skill in the art would be motivated to do so for an increase in packet throughput of the packet switch, column 2, lines 13-15.
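The credit accounting recited in claims 3 and 13 (a packet is eligible for pushing only while the output buffer holds credit for the input queue; the transfer consumes a credit and the read returns it) can be sketched roughly as follows. The class and method names are illustrative only; the application provides no source code.

```python
class OutputBufferCredits:
    """Toy model of the credit accounting recited in claims 3 and 13.

    All names here are hypothetical illustrations, not from the application.
    """

    def __init__(self, initial_credits: int) -> None:
        # Credits the output buffer makes available to one input queue.
        self.credits = initial_credits

    def eligible_for_push(self) -> bool:
        # Claim 3: the packet is eligible for packet pushing when
        # credit is available at the output buffer for the input queue.
        return self.credits > 0

    def transfer_packet(self) -> None:
        # Claim 13: transferring the packet from the input buffer to
        # the output buffer decrements the credit.
        if not self.eligible_for_push():
            raise RuntimeError("no credit available; packet is not eligible for pushing")
        self.credits -= 1

    def read_packet(self) -> None:
        # Claim 13: reading the packet from the output buffer
        # increments (returns) the credit.
        self.credits += 1
```

In this reading, the credit count is the only state the input queue needs to consult before speculatively pushing a packet, which is what lets the push proceed without first waiting on a grant.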
Regarding claim 20: The method of claim 12. Chrysos does not disclose wherein the push request is sent over a request crossbar that is connected to the input queue and to the output queue, the push grant is sent over a grant crossbar that is connected to the input queue and to the output queue, and the packet is transferred over a data crossbar that is connected to the input buffer and to the output buffer. Brown discloses a packet switch including a pipeline scheduler for scheduling packets according to a credit-based flow control protocol: a credit update pipeline stage initializes available credits for egress ports of the packet switch, a request pipeline stage generates packet requests for packets based on the available credits, and a grant pipeline stage selects packets based on the port requests and the available credits and generates port grants for the selected packets, Abstract. Before the filing of the invention it would have been obvious to modify Chrysos to include Brown's packet switch and logic. One of ordinary skill in the art would be motivated to do so for an increase in packet throughput of the packet switch, column 2, lines 13-15.

Claims 4-5 and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Chrysos in view of Brown, further in view of Agarwal (US 2024/0121320).

Regarding claim 4: The device of claim 1. Chrysos and Brown disclose determining whether the packet is eligible for packet pushing but do not disclose doing so when the packet is of a particular traffic class. Agarwal discloses that a number of scheduling queues can be determined by a mapping of host device and traffic class to queues; the traffic class can correspond to a categorization for the connections based on protocol, and the scheduling queues can support packet queues for various packet types in each of the queues, such as pull requests, push requests, and push grants, para. 0054.
Before the filing of the invention it would have been obvious to modify Chrysos and Brown to include Agarwal's queue scheduling of packet types and push requests. One of ordinary skill in the art would be motivated to do so for increased performance and decreased latency, para. 0001. Claim 14 is rejected under the same rationale.

Regarding claim 5: The device of claim 1. Chrysos and Brown disclose determining whether the packet is eligible for packet pushing but do not disclose doing so when a quantity of outstanding requests queued at the output queue is below a predetermined threshold. Agarwal discloses that an eligibility check module can maintain a number of outstanding-request state variables per sliding window; the count of outstanding requests can be incremented when the connection scheduler initially schedules a pull request, push request, or push unsolicited data, and can be decremented when a corresponding pull request is acknowledged or corresponding push data is acknowledged, para. 0056; passing the eligibility check includes determining that the number of outstanding requests is less than the end-node congestion window, para. 0005. Before the filing of the invention it would have been obvious to modify Chrysos and Brown to include Agarwal's queue scheduling of packet types and push requests. One of ordinary skill in the art would be motivated to do so for increased performance and decreased latency, para. 0001. Claims 15-17 are rejected under the same rationale.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Calvignac et al. (US 6,195,335) in view of Manula et al. (US 2020/0174828).

Regarding claim 7: A device comprising (packet switch, Figure 1): an input buffer; an output buffer connected to the input buffer; an output queue connected to the output buffer, the output queue configured to control receiving of packets in the output buffer from the input buffer; and an input queue connected to the output queue and the input buffer (input adapter including an input queue mechanism; outputs A, B, C, and D are connected to output adapters; a crossbar switch fabric through which data packets can be transmitted between any one of the inputs and any one of the outputs via crosspoints, column 3, lines 11-22; a data packet is transferred from the queue selected by the scheduler to the corresponding crosspoint buffer via an input line, Figure 2; input adapter, Figure 1, element 120; for each of inputs a, b, c, d, a first-in first-out queue is provided at each input for each of the outputs A, B, C, D, so that there are a total of 16 queues, Figure 3).

Calvignac does not disclose the input queue configured to: send a push request for a first packet of a packet flow to the output queue; transfer the first packet from the input buffer to the output buffer before the output queue grants the push request; send a pull request for a second packet of the packet flow to the output queue; and transfer the second packet from the input buffer to the output buffer after the output queue grants the pull request. Manula discloses that a gateway transfer memory comprises a plurality of buffers, wherein each of the buffers is configured to store data belonging to an associated one of the plurality of streams, para. 0011; an advanced gateway push model uses the credit mechanism for controlling the availability of data input to be pushed, as well as the availability of gateway data buffers for an accelerator to output data into, para. 0122.
Manula discloses that the gateway receives the data from the host and stores it in memory before making it available in a fast gateway transfer memory for transfer to the accelerator; the contents of the gateway transfer memory are transferred to the accelerator in response to the completion of a handshake request, para. 0240. Manula discloses read requests issued to pull data of a first of the plurality of streams from the gateway transfer memory, para. 0013; if there is not sufficient space available, data of the stream remains in main gateway memory without being pre-loaded, para. 0157; a synchronisation request is received from the accelerator at the gateway, and in response to receiving the sync acknowledgment, the accelerator issues a read request to pull the data from the gateway, the accelerator reading data from at least the main memory, para. 0158-0161. Before the filing of the invention it would have been obvious to modify Calvignac to include Manula's pre-loading of data for the push model and Manula's pull model in response to the acknowledgment. One of ordinary skill in the art would be motivated to do so in order that data can be made available in a timely fashion, para. 0007.

Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Calvignac in view of Manula, further in view of Agarwal.

Regarding claim 8: The device of claim 7. Calvignac and Manula do not disclose wherein the output queue is further configured to assign sequence numbers to the first packet and the second packet based on an order of the first packet and the second packet in the packet flow. Agarwal discloses that passing the eligibility check includes determining that a packet sequence number is less than a base sequence number plus the network congestion window, para. 0009. Agarwal discloses that, for ordered connections, ordering requirements can include ensuring request sequence number (RSN) and packet sequence number (PSN) relative ordering within each sliding window, para. 0047.
Before the filing of the invention it would have been obvious to modify Calvignac and Manula to include Agarwal's ordering requirements. One of ordinary skill in the art would be motivated to do so for increased performance and decreased latency, para. 0001.

Regarding claim 9: The device of claim 8. Calvignac and Manula do not disclose further comprising a transmitter configured to read the first packet and the second packet from the output buffer in order of the sequence numbers. Agarwal discloses that passing the eligibility check includes determining that a packet sequence number is less than a base sequence number plus the network congestion window, para. 0009. Agarwal discloses that, for ordered connections, ordering requirements can include ensuring request sequence number (RSN) and packet sequence number (PSN) relative ordering within each sliding window, para. 0047. Before the filing of the invention it would have been obvious to modify Calvignac and Manula to include Agarwal's ordering requirements. One of ordinary skill in the art would be motivated to do so for increased performance and decreased latency, para. 0001.

Regarding claim 10: The device of claim 7. Calvignac does not disclose wherein the input queue is configured to send the push request in response to credit being available at the output queue and in response to the first packet having a particular traffic class. Manula discloses that a gateway transfer memory comprises a plurality of buffers, wherein each of the buffers is configured to store data belonging to an associated one of the plurality of streams, para. 0011; an advanced gateway push model uses the credit mechanism for controlling the availability of data input to be pushed, as well as the availability of gateway data buffers for an accelerator to output data into, para. 0122.
Manula discloses that the gateway receives the data from the host and stores it in memory before making it available in a fast gateway transfer memory for transfer to the accelerator; the contents of the gateway transfer memory are transferred to the accelerator in response to the completion of a handshake request, para. 0240. Manula discloses read requests issued to pull data of a first of the plurality of streams from the gateway transfer memory, para. 0013; if there is not sufficient space available, data of the stream remains in main gateway memory without being pre-loaded, para. 0157; a synchronisation request is received from the accelerator at the gateway, and in response to receiving the sync acknowledgment, the accelerator issues a read request to pull the data from the gateway, the accelerator reading data from at least the main memory, para. 0158-0161. Before the filing of the invention it would have been obvious to modify Calvignac to include Manula's pre-loading of data for the push model and Manula's pull model in response to the acknowledgment. One of ordinary skill in the art would be motivated to do so in order that data can be made available in a timely fashion, para. 0007.

Calvignac and Manula do not disclose sending the push request in response to the first packet having a particular traffic class. Agarwal discloses that a number of scheduling queues can be determined by a mapping of host device and traffic class to queues; the traffic class can correspond to a categorization for the connections based on protocol, and the scheduling queues can support packet queues for various packet types in each of the queues, such as pull requests, push requests, and push grants, para. 0054. Before the filing of the invention it would have been obvious to modify Calvignac and Manula to include Agarwal's queue scheduling of packet types and push requests. One of ordinary skill in the art would be motivated to do so for increased performance and decreased latency, para. 0001.
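The ordering limitations of claims 8 and 9 (sequence numbers assigned by a packet's position in the flow, with the transmitter reading packets back strictly in sequence order) can be sketched as below. Class and method names are hypothetical illustrations, not from the application or the cited references.

```python
class InOrderTransmitter:
    """Toy model of claims 8-9: sequence numbers follow flow order, and
    the transmitter drains the output buffer in sequence-number order.
    """

    def __init__(self) -> None:
        self._next_assign = 0   # next sequence number to hand out (claim 8)
        self._next_send = 0     # next sequence number the transmitter may read (claim 9)
        self._buffer = {}       # packets that have landed in the output buffer

    def assign_sequence_number(self) -> int:
        # Claim 8: number packets by their order in the packet flow.
        seq = self._next_assign
        self._next_assign += 1
        return seq

    def arrive(self, seq: int, packet: str) -> None:
        # Packets may land in the output buffer out of order
        # (e.g. a pushed packet overtaking a pulled one).
        self._buffer[seq] = packet

    def drain(self) -> list:
        # Claim 9: read packets in order of their sequence numbers,
        # holding back any packet whose predecessor has not arrived.
        out = []
        while self._next_send in self._buffer:
            out.append(self._buffer.pop(self._next_send))
            self._next_send += 1
        return out
```

The point of the sketch is that flow order is fixed at assignment time, so a later packet that arrives first simply waits in the buffer until its predecessor is read.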
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Calvignac in view of Manula, in view of Agarwal, and further in view of Brown et al. (US 8,400,915).

Regarding claim 11: Calvignac, Manula, and Agarwal do not disclose the device of claim 10 further comprising a credit crossbar connected to the input queue and to the output buffer, the output buffer further configured to return credit to the input queue over the credit crossbar. Brown discloses that a credit update pipeline stage is configured to initialize the available credit state based on the advertised credits, store the available credit state, and update the available credit state based on the port grant state, column 2, lines 60-65. Before the filing of the invention it would have been obvious to modify the cited prior art to include Brown's packet switch, logic, and credit updating. One of ordinary skill in the art would be motivated to do so for an increase in packet throughput of the packet switch, column 2, lines 13-15.

Allowable Subject Matter

Claims 18-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MELANIE JAGANNATHAN, whose telephone number is (571) 272-3163. The examiner can normally be reached M-F, 9-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Marcus Smith, can be reached at 571-270-1096.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MELANIE JAGANNATHAN/
Primary Examiner, Art Unit 2468
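The limitation the rejection repeatedly turns on is transferring the packet to the output buffer in parallel with sending the push request, rather than waiting for a grant as in Chrysos. A toy cycle trace can make the distinction concrete; the cycle numbers and event strings are hypothetical, chosen only to contrast the two flows.

```python
def push_mode_events():
    # Claimed hybrid push path: the push request (request crossbar) and the
    # packet transfer (data crossbar) launch in the same cycle; the grant
    # merely releases the already-buffered packet for reading.
    return [
        (0, "input queue sends push request over request crossbar"),
        (0, "input buffer transfers packet over data crossbar"),
        (1, "output queue sends push grant over grant crossbar"),
        (1, "output buffer reads packet"),
    ]

def request_grant_events():
    # Conventional request/grant flow (as in Chrysos): the data transfer
    # waits for the grant, so the packet reaches the output later.
    return [
        (0, "input queue sends request"),
        (1, "output queue sends grant"),
        (2, "input buffer transfers packet over data crossbar"),
        (3, "output buffer reads packet"),
    ]
```

Under this reading, the speculative transfer trades output-buffer space (reserved via credits) for latency, which is why the claims tie push eligibility to credit availability.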

Prosecution Timeline

Oct 03, 2023: Application Filed
Jun 20, 2024: Response after Non-Final Action
Jan 24, 2026: Non-Final Rejection (§103)
Mar 12, 2026: Interview Requested
Mar 23, 2026: Applicant Interview (Telephonic)
Mar 23, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604345: User Equipment Configuration for Determining Channel Access Priority Class (granted Apr 14, 2026; 2y 5m to grant)
Patent 12593273: Cross-Radio Configuration for Positioning and Sensing (granted Mar 31, 2026; 2y 5m to grant)
Patent 12580695: Feedback Information Transmitting Method and Apparatus, and Device and Storage Medium (granted Mar 17, 2026; 2y 5m to grant)
Patent 12581384: Systems and Methods for Network Slice Performance Optimization (granted Mar 17, 2026; 2y 5m to grant)
Patent 12574987: Beam Failure Recovery Method, Apparatus, and System (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 92% (+5.0%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 762 resolved cases by this examiner. Grant probability derived from career allow rate.
