Prosecution Insights
Last updated: April 19, 2026
Application No. 18/420,430

BANDWIDTH MANAGEMENT WITH CONFIGURABLE PIPELINES IN A HIGH-PERFORMANCE COMPUTING ENVIRONMENT

Non-Final OA: §103, §112
Filed: Jan 23, 2024
Examiner: REYES, CHRISTOPHER ANTHONY
Art Unit: 2475
Tech Center: 2400 — Computer Networks
Assignee: Cornelis Networks Inc.
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 88% (7 granted / 8 resolved), +29.5% vs TC avg (above average)
Interview Lift: -6.3% (minimal), based on resolved cases with interview
Avg Prosecution: 2y 11m (typical timeline)
Total Applications: 60 across all art units, 52 currently pending
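With only 8 resolved cases behind the headline rate, the point estimate carries wide uncertainty. A minimal sketch of how the 88% figure follows from the raw counts, plus a Wilson score interval to illustrate the small-sample caveat (the helper function and the choice of interval are illustrative assumptions, not the dashboard's documented method):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion (illustrative helper)."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Counts from the Examiner Intelligence panel: 7 granted of 8 resolved.
granted, resolved = 7, 8
print(f"Point estimate: {granted / resolved:.0%}")  # 88%

lo, hi = wilson_interval(granted, resolved)
print(f"95% interval: {lo:.0%} to {hi:.0%}")
```

The interval spans roughly the low 50s to the high 90s in percent, which is why a single new outcome could move the displayed rate substantially.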

Statute-Specific Performance

§101: 3.3% (-36.7% vs TC avg)
§103: 82.8% (+42.8% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 2.9% (-37.1% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 8 resolved cases
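The displayed deltas are consistent with a simple difference against a Tech Center average near 40% for each statute (e.g. 82.8 - 42.8 = 40.0). A hypothetical sketch reproducing the panel's figures under that assumption (`tc_avg` is inferred from the displayed numbers, not a documented value):

```python
# Inferred Tech Center average: each displayed rate minus its delta comes out near 40.0.
tc_avg = 40.0
rates = {"§101": 3.3, "§103": 82.8, "§102": 11.1, "§112": 2.9}

for statute, rate in rates.items():
    # Delta is reconstructed as a plain difference from the inferred average.
    print(f"{statute}: {rate:.1f}% ({rate - tc_avg:+.1f}% vs TC avg)")
```

Each printed delta matches the panel above, which supports (but does not prove) the simple-difference reading.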

Office Action

Rejections: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 14-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.

In Claim 14, the claim limitations “means for establishing” and “means for configuring” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:

(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-6, 14-15, and 17-19 are rejected under 35 U.S.C.
103 as being unpatentable over FREKING, et al. (US 20130346665 A1, hereinafter, "FREKING") in view of BORCH, et al. (US 20180351812 A1, hereinafter, "BORCH").

Regarding claim 1, FREKING teaches receiving, by a link manager during link negotiation and initialization, bandwidth capabilities of a link partner. FREKING writes, “...the device may enter into a negotiation process with the other PCIe device for determining which lane configuration to use as a first lane configuration. During this process, the PCIe enabled devices determine the capabilities of the other device (i.e., the different number of lane configurations the respective PHY interfaces support) and choose, for example, the lane configuration that has the widest PCIe link supported by both devices” (paragraph 0045).

FREKING also teaches establishing, by the link manager, a local receive bandwidth for a receive controller of a port in dependence upon the bandwidth capabilities of the link partner and the local port. FREKING writes, “...and choose, for example, the lane configuration that has the widest PCIe link supported by both devices” (paragraph 0045).

FREKING further teaches configuring, by the link manager, a pipeline in the receive controller of a port of the switch for processing data according to the receive bandwidth. FREKING writes, “Once the devices have established the lane configuration, the devices may individually configure the MAC and PHY interfaces--i.e., connect the appropriate hardware modules using bus controllers--to provide a PCIe connection based on the lane configuration negotiated by the two devices” (paragraph 0045).

FREKING fails to explicitly disclose “a method of bandwidth management in a high-performance computing environment, the method comprising.” However, in analogous art, BORCH teaches a method of bandwidth management in a high-performance computing environment. BORCH writes, “FIG. 1 is a simplified block diagram of at least one embodiment of a high-performance computing (HPC) network for dynamic bandwidth management of interconnect fabric…” (paragraph 0006; figure 1). BORCH adds, “FIG. 5 is a simplified flow diagram of at least one embodiment of a method for dynamic bandwidth management of interconnect fabric...” (paragraph 0010; figure 5).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and invention of FREKING to include aspects described by BORCH that include “Technologies for dynamic bandwidth management of interconnect fabric include a compute device configured to calculate a predicted fabric bandwidth demand which is expected to be used by the interconnect fabric in a next epoch and subsequent to a present epoch.” BORCH provides the motivation for modification, stating, “In use, as will be described in further detail below, the fabric management compute device 106 is configured to reduce power consumed by the links by only leaving those links (i.e., local links and global links) and global switches enabled that are along paths which are required to process/forward network traffic through the system 100 over a given period of time, or epoch” (paragraph 0019).

Regarding claim 2, FREKING and BORCH teach the method of claim 1. Additionally, FREKING teaches further comprising processing, by the pipeline of the receive controller, data according to the established receive bandwidth. FREKING writes, “FIG. 1 illustrates a system communicating serial data, according to an embodiment disclosed herein. As shown, computing device 105 and computing device 130 communication using a PCIe connection 163. Although data is shown as traveling from device 105 to device 130, bidirectional traffic is also possible...” (paragraph 0033). FREKING continues, “Device 105 and 130 include respective processing elements 110 and 150 which may represent one or more processors (e.g., microprocessors) or multi-core processors. The devices 105, 110 also include PCIe interfaces 115 and 135 that convert data received from the processing elements 110, 150 into PCIe packets which are then transmitted across the bus 155. Additionally, the PCIe interfaces 115, 135 receive PCIe packets which are then converted and transmitted to the respective processing elements 110, 150” (paragraph 0034).

Regarding claim 4, FREKING and BORCH teach the method of claim 1. Additionally, FREKING teaches wherein the receive bandwidth value comprises a number of flow control units (flits) per clock cycle. FREKING writes, “In one embodiment, the MAC interfaces 120, 145 and processing elements 110, 150 transfer data in parallel rather than serially. That is, paths 161 and 165 may be an internal data bus that transmits synchronized, related data across a plurality of traces on each clock cycle rather than a plurality of traces that may each send one bit that may be unrelated to the bits being sent on the other traces” (paragraph 0035).

Regarding claim 5, FREKING and BORCH teach the method of claim 4. Additionally, FREKING teaches wherein populating the mega port with flits according to the established receive bandwidth further comprises populating the mega port buffer with flits according to packet processing rules. FREKING writes, “The PCIe packets generated in the MAC interfaces 120, 145 are transmitted to the PHY interfaces 125, 140 where the PCIe packets are serialized (via a SERDES) and transmitted on the bus 155 using the designated lane configuration (e.g., 1.times.32, 2.times.16, 4.times.8, etc.)...the PCIe interface may include a control module that transmits configuration logic between the different interfaces in the devices 105, 130 that determines which lane configuration the PHY interfaces 140 use to transmit the PCIe packets. In one embodiment, the MAC interfaces 120 and 145 and PHY interfaces 125 and 140 are compatible with the PIE-8 standard for Generation 3 PCIe” (paragraph 0036).

Regarding claim 6, FREKING and BORCH teach the method of claim 5. Additionally, FREKING teaches wherein the packet processing rules comprise distance rules. FREKING writes, “Moreover, multiple traces or wires may used to connect each port 620, 630 to the bus controllers 215, 220 thereby allowing more than one bit to be transmitted between the circuit elements in parallel per clock cycle” (paragraph 0053).

Regarding claim 14, FREKING teaches means for receiving, during link negotiation and initialization, bandwidth capabilities of a link partner. FREKING writes, “...the device may enter into a negotiation process with the other PCIe device for determining which lane configuration to use as a first lane configuration. During this process, the PCIe enabled devices determine the capabilities of the other device (i.e., the different number of lane configurations the respective PHY interfaces support) and choose, for example, the lane configuration that has the widest PCIe link supported by both devices” (paragraph 0045).

FREKING also teaches means for establishing a local receive bandwidth for a receive controller of a port in dependence upon the bandwidth capabilities of the link partner and the local port. FREKING writes, “...and choose, for example, the lane configuration that has the widest PCIe link supported by both devices” (paragraph 0045).

FREKING further teaches means for configuring a pipeline in the receive controller of a port of the switch for processing data according to the receive bandwidth. FREKING writes, “Once the devices have established the lane configuration, the devices may individually configure the MAC and PHY interfaces--i.e., connect the appropriate hardware modules using bus controllers--to provide a PCIe connection based on the lane configuration negotiated by the two devices” (paragraph 0045).

FREKING fails to explicitly disclose “a system of bandwidth management in a high-performance computing environment, the system comprising.” However, in analogous art, BORCH teaches a system of bandwidth management in a high-performance computing environment. BORCH writes, “Referring now to FIG. 1, a system 100 includes multiple compute devices 102, each of which are communicatively coupled, via a network 104, to at least one other computing node 102 in the network 104. The network 104 may be embodied as any type of network capable of communicatively connecting the compute devices 102, such as a high performance computing (HPC) system, a data center, etc.” (paragraph 0017).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and invention of FREKING to include aspects described by BORCH that include “Technologies for dynamic bandwidth management of interconnect fabric include a compute device configured to calculate a predicted fabric bandwidth demand which is expected to be used by the interconnect fabric in a next epoch and subsequent to a present epoch.” BORCH provides the motivation for modification, stating, “In use, as will be described in further detail below, the fabric management compute device 106 is configured to reduce power consumed by the links by only leaving those links (i.e., local links and global links) and global switches enabled that are along paths which are required to process/forward network traffic through the system 100 over a given period of time, or epoch” (paragraph 0019).

Claims 15 and 17-19 are system claims corresponding to method claims 2 and 4-6, which have already been rejected above. The applicant's attention is directed to the rejection of claims 2 and 4-6. Claims 15 and 17-19 are rejected under the same rationale as claims 2 and 4-6.

Claims 3 and 16 are rejected under 35 U.S.C.
103 as being unpatentable over FREKING and BORCH as applied to claims 2 and 15 above, and further in view of CHO, et al. (US 20120272114 A1, hereinafter, "CHO").

Regarding claim 3, FREKING and BORCH teach the method of claim 2, wherein processing data through a pipeline of the receive controller in dependence upon the established receive bandwidth further comprises the following. Additionally, FREKING teaches processing data from the error check buffer to a mega port buffer through the pipeline in dependence upon the established receive bandwidth. FREKING writes, “Although not shown, the devices 105 and 130 may include applications in memory that use the processing elements 110, 150 and PCIe interfaces 115, 135 to transmit and receive data via the bus 155” (paragraph 0034). FREKING continues, “The PCIe packets generated in the MAC interfaces 120, 145 are transmitted to the PHY interfaces 125, 140 where the PCIe packets are serialized (via a SERDES) and transmitted on the bus 155 using the designated lane configuration (e.g., 1.times.32, 2.times.16, 4.times.8, etc.)” (paragraph 0036).

FREKING and BORCH fail to explicitly disclose “receiving data in an error check buffer.” However, in analogous art, CHO teaches receiving data in an error check buffer. CHO writes, “The received data may be temporarily stored in a data buffer 120 via the error check block 113. The data buffer 120 may correspond to the memory 120 illustrated in FIG. 2” (paragraph 0043).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and invention of FREKING and BORCH to include aspects described by CHO, which “relates to memory controller, memory systems, and methods of operating same. More particularly, the inventive concept relates to non-volatile memory controllers, non-volatile memory systems, and operating methods for same.” CHO provides the motivation for modification, stating, “When sufficient processing time is secured for data as in the embodiments of the inventive concept, firmware overhead associated with avoiding busy period time-outs, as is conventionally typical, may be eliminated, and management of firmware becomes a great deal more flexible. Consequently, the overall performance of memory systems consistent with embodiments of the inventive concept may be increased” (paragraph 0075).

Claim 16 is a system claim corresponding to method claim 3, which has already been rejected above. The applicant's attention is directed to the rejection of claim 3. Claim 16 is rejected under the same rationale as claim 3.

Claims 7 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over FREKING in view of BIRRITTELLA, et al. (US 20150222533 A1, hereinafter, "BIRRITTELLA").

Regarding claim 7, FREKING teaches wherein the management processor comprises a link manager comprising logic configured to establish, during link negotiation and initialization, a receive bandwidth in dependence upon local port configurations and port configurations of a link partner, and configure the pipeline to process data according to the receive bandwidth. FREKING writes, “...the device may enter into a negotiation process with the other PCIe device for determining which lane configuration to use as a first lane configuration. During this process, the PCIe enabled devices determine the capabilities of the other device (i.e., the different number of lane configurations the respective PHY interfaces support) and choose, for example, the lane configuration that has the widest PCIe link supported by both devices” (paragraph 0045). FREKING also writes, “Once the devices have established the lane configuration, the devices may individually configure the MAC and PHY interfaces--i.e., connect the appropriate hardware modules using bus controllers--to provide a PCIe connection based on the lane configuration negotiated by the two devices” (paragraph 0045).

FREKING fails to explicitly disclose “a switch, the switch comprising:”, “a plurality of ports including a transmit controller and a receive controller,”, “wherein the receive controller includes an error check buffer, a configurable pipeline, and a mega port buffer;”, “a switch core;”, and “a control port comprising a management processor.” However, in analogous art, BIRRITTELLA teaches a switch (paragraph 0457; figure 60, multi-port Fabric Switch: 6000), the switch comprising a plurality of ports (paragraph 0457; figure 60, multi-port Fabric Switch: 6000) including a transmit controller (paragraph 0450; figure 59, Tx Link Control Block: 1804) and a receive controller (paragraph 0453; figure 59, Rx Link Control Block: 1805), wherein the receive controller includes an error check buffer, a configurable pipeline, and a mega port buffer. BIRRITTELLA writes, “...a determination made in a decision block 2224 to whether the received LTP has a CRC error (Tx CRC and Rx CRC mismatch)…” (paragraph 0176; figure 18c, Rx Link Control Block: 1805). BIRRITTELLA adds, “When data begins to flow across the link, the buffer availability at the various receive ports dynamically change as a function of flits that are received at each receiver and flits that are removed from that receiver's buffers in connection with forwarding flits to a next hop” (paragraph 0235). BIRRITTELLA continues, “In one embodiment, the logic pipelines for the architecture's devices such as HFIs and switches transport packets at the upper Link Fabric Sub-Layer. At the links between devices however, Link Fabric Packets are segmented into smaller units (flits), which in turn are bundled together into (LTPs), and carried using the Link Transfer sub-layer protocol” (paragraph 0333).

BIRRITTELLA also teaches a switch core. BIRRITTELLA writes, “...the operations depicted by various logic blocks and/or circuitry may be effected using programmed logic gates and the like, including but not limited to ASICs, FPGAs, IP block libraries, or through one or more of software or firmware instructions executed on one or more processing elements including processors, processor cores, controllers, microcontrollers, microengines, etc” (paragraph 0469). BIRRITTELLA further teaches a control port comprising a management processor. BIRRITTELLA writes, “Also, as used herein, circuitry and logic to effect various operations may be implemented via one or more of embedded logic, embedded processors, controllers, microengines, or otherwise using any combination of hardware, software, and/or firmware” (paragraph 0469).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and invention of FREKING to include aspects described by BIRRITTELLA of “Embodiments of method, apparatus, and systems for reliably transferring Ethernet packet data over a link layer and facilitating fabric-to-Ethernet and Ethernet-to-fabric gateway operations at matching wire speed and packet data rate are described herein.” BIRRITTELLA provides the motivation for modification, stating, “The architecture includes CRCs for Link Transfer Packets and Fabric Packets to ensure data integrity. The architecture also provides link-level retry for LTPs that are not received correctly. LTP retry significantly improves the effective bit error rate of the link, and enables the use of PHY strategies that may trade lower power consumption for a slightly degraded physical BER. LTP retry is also helpful for large fabrics where the large number of links in the fabric necessitates much better per link BER characteristics in order to maintain an acceptable system level error rate” (paragraph 0120).

Regarding claim 11, FREKING teaches wherein the management processor comprises a link manager comprising logic configured to establish, during link negotiation and initialization, a receive bandwidth in dependence upon local port configurations and port configurations of a link partner, and configure the pipeline to process data according to the receive bandwidth value. FREKING writes, “...the device may enter into a negotiation process with the other PCIe device for determining which lane configuration to use as a first lane configuration. During this process, the PCIe enabled devices determine the capabilities of the other device (i.e., the different number of lane configurations the respective PHY interfaces support) and choose, for example, the lane configuration that has the widest PCIe link supported by both devices...Once the devices have established the lane configuration, the devices may individually configure the MAC and PHY interfaces--i.e., connect the appropriate hardware modules using bus controllers--to provide a PCIe connection based on the lane configuration negotiated by the two devices” (paragraph 0045).

FREKING fails to explicitly disclose “a host fabric adapter, the host fabric adapter comprising:”, “at least one fabric port comprising a management processor, a serializer/deserializer;”, “a receive controller and a transmit controller;”, and “wherein the receive controller includes an error check buffer, a pipeline, and a mega port buffer;”. However, in analogous art, BIRRITTELLA teaches a host fabric adapter. BIRRITTELLA writes, “Host Fabric Interfaces minimally consist of the logic to implement the physical and link layers of the architecture, such that a node can attach to a fabric and send and receive packets to other servers or devices” (paragraph 0088).

BIRRITTELLA also teaches at least one fabric port comprising a management processor and a serializer/deserializer. BIRRITTELLA writes, “FIG. 62 shows a node 6200 having an exemplary configuration comprising a host fabric interface 6202 including a fabric port 6204 coupled to a processor 6206, which in turn is coupled to memory 6208” (paragraph 0462; figure 62). BIRRITTELLA adds, “Under the architecture's links, LTP content is sent serially over multiple lanes in parallel” (paragraph 0159). BIRRITTELLA continues, “… upon deserialization and reassembly of the serially-transferred bit streams transmitted in parallel over the multiple lanes…” (paragraph 0160).

BIRRITTELLA further teaches a receive controller (paragraph 0453; figure 59, Rx Link Control Block: 1805) and a transmit controller (paragraph 0450; figure 59, Tx Link Control Block: 1804), wherein the receive controller includes an error check buffer, a pipeline, and a mega port buffer. BIRRITTELLA writes, “...a determination made in a decision block 2224 to whether the received LTP has a CRC error (Tx CRC and Rx CRC mismatch)…” (paragraph 0176; figure 18c, Rx Link Control Block: 1805). BIRRITTELLA adds, “When data begins to flow across the link, the buffer availability at the various receive ports dynamically change as a function of flits that are received at each receiver and flits that are removed from that receiver's buffers in connection with forwarding flits to a next hop” (paragraph 0235). BIRRITTELLA continues, “In one embodiment, the logic pipelines for the architecture's devices such as HFIs and switches transport packets at the upper Link Fabric Sub-Layer. At the links between devices however, Link Fabric Packets are segmented into smaller units (flits), which in turn are bundled together into (LTPs), and carried using the Link Transfer sub-layer protocol” (paragraph 0333).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and invention of FREKING to include aspects described by BIRRITTELLA of “Embodiments of method, apparatus, and systems for reliably transferring Ethernet packet data over a link layer and facilitating fabric-to-Ethernet and Ethernet-to-fabric gateway operations at matching wire speed and packet data rate are described herein.” BIRRITTELLA provides the motivation for modification, stating, “The architecture includes CRCs for Link Transfer Packets and Fabric Packets to ensure data integrity. The architecture also provides link-level retry for LTPs that are not received correctly. LTP retry significantly improves the effective bit error rate of the link, and enables the use of PHY strategies that may trade lower power consumption for a slightly degraded physical BER. LTP retry is also helpful for large fabrics where the large number of links in the fabric necessitates much better per link BER characteristics in order to maintain an acceptable system level error rate” (paragraph 0120).

Claims 8-10 and 12-13 are rejected under 35 U.S.C.
103 as being unpatentable over FREKING and BIRRITTELLA as applied to claims 7 and 11 above, and further in view of BORCH.

Regarding claim 8, FREKING and BIRRITTELLA teach the switch of claim 7. FREKING and BIRRITTELLA fail to explicitly disclose “wherein the pipeline comprises logic configured to move flits of data from the error check buffer to the mega port buffer according to the receive bandwidth value.” However, in analogous art, BORCH teaches wherein the pipeline comprises logic configured to move flits of data from the error check buffer to the mega port buffer according to the receive bandwidth value. BORCH writes, “It should be appreciated that, in some embodiments, the communication circuitry 210 may include specialized circuitry, hardware, or combination thereof to perform pipeline logic (e.g., hardware algorithms) for performing the functions described herein, including processing network packets (e.g., parse received network packets, determine destination computing devices for each received network packets, forward the network packets to a particular buffer queue of a respective host buffer of the compute node 102, etc.), performing computational functions, etc.” (paragraph 0028).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and invention of FREKING and BIRRITTELLA to include aspects described by BORCH that include “Technologies for dynamic bandwidth management of interconnect fabric include a compute device configured to calculate a predicted fabric bandwidth demand which is expected to be used by the interconnect fabric in a next epoch and subsequent to a present epoch.” BORCH provides the motivation for modification, stating, “In use, as will be described in further detail below, the fabric management compute device 106 is configured to reduce power consumed by the links by only leaving those links (i.e., local links and global links) and global switches enabled that are along paths which are required to process/forward network traffic through the system 100 over a given period of time, or epoch” (paragraph 0019).

Regarding claim 9, FREKING, BIRRITTELLA, and BORCH teach the switch of claim 8. Additionally, FREKING teaches wherein the pipeline is further configured to move flits of data from the error check buffer to the mega port buffer according to packet processing rules. FREKING writes, “The PCIe packets generated in the MAC interfaces 120, 145 are transmitted to the PHY interfaces 125, 140 where the PCIe packets are serialized (via a SERDES) and transmitted on the bus 155 using the designated lane configuration (e.g., 1.times.32, 2.times.16, 4.times.8, etc.)...the PCIe interface may include a control module that transmits configuration logic between the different interfaces in the devices 105, 130 that determines which lane configuration the PHY interfaces 140 use to transmit the PCIe packets. In one embodiment, the MAC interfaces 120 and 145 and PHY interfaces 125 and 140 are compatible with the PIE-8 standard for Generation 3 PCIe” (paragraph 0036).

Regarding claim 10, FREKING, BIRRITTELLA, and BORCH teach the switch of claim 9. Additionally, FREKING teaches wherein the packet processing rules comprise distance rules. FREKING writes, “Moreover, multiple traces or wires may used to connect each port 620, 630 to the bus controllers 215, 220 thereby allowing more than one bit to be transmitted between the circuit elements in parallel per clock cycle” (paragraph 0053).

Claims 12-13 are apparatus claims corresponding to claims 8-9, which have already been rejected above. The applicant's attention is directed to the rejection of claims 8-9. Claims 12-13 are rejected under the same rationale as claims 8-9.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER A REYES, whose telephone number is (703) 756-4558. The examiner can normally be reached Monday - Friday, 8:30 - 5:00 EDT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, KHALED KASSIM, can be reached at (571) 270-3770. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Christopher A. Reyes/
Examiner, Art Unit 2475
3/9/2026

/KHALED M KASSIM/
Supervisory Patent Examiner, Art Unit 2475

Prosecution Timeline

Jan 23, 2024
Application Filed
Feb 28, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598621 • Device and Method for Handling a Multi-cell Scheduling • 2y 5m to grant • Granted Apr 07, 2026
Patent 12593337 • Resource Determination Method and Apparatus, Devices, and Storage Medium • 2y 5m to grant • Granted Mar 31, 2026
Patent 12457249 • Storage Medium to Store Transmission Data Setting Support Program, Gateway Device, and Transmission Data Setting Supporting Method • 2y 5m to grant • Granted Oct 28, 2025
Patent 12294868 • Method of Building Ad-Hoc Network of Wireless Relay Node and Ad-Hoc Network System • 2y 5m to grant • Granted May 06, 2025
Study what changed to get past this examiner. Based on 4 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 81% (-6.3%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 8 resolved cases by this examiner. Grant probability derived from career allow rate.
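The interview-adjusted figure follows from the career counts under one assumption: that the raw lift behind the displayed -6.3% is -6.25 percentage points (7/8 minus 13/16), which the rounding would then hide. A minimal sketch of that reconstruction; the lift value is an assumption, not published data:

```python
# Base rate from the panel's counts: 7 granted of 8 resolved.
base = 7 / 8              # displayed as 88%

# Assumed raw lift behind the displayed -6.3% (hypothetical).
interview_lift = -0.0625

print(f"Grant probability: {base:.0%}")                    # 88%
print(f"With interview: {base + interview_lift:.0%}")      # 81%
```

If the true lift differs from the assumed -6.25 points, the reconstruction breaks; the sketch only shows that the displayed 88%/81% pair is internally consistent.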
