Detailed Action
Status of Claims
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-16 are presented for examination.
Claims 1-16 are amended.
Claims 1-16 are rejected.
This Action is Non-Final.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 12/05/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claims 2-15 are objected to because of the following informalities:
i. In claims 2-15, the recitation “the hub circuit” is unclear as to which circuit it refers. Therefore, “the hub circuit” should be corrected to “a communication hub circuit,” consistent with the preamble of claim 1.
ii. In claims 2-10, the recitation “the hub circuit according to claim 1” should be corrected to “the hub circuit of claim 1”.
iii. In claims 11-15, the phrase “according to claim 1” in the limitations should be corrected by incorporating the limitations of claim 1 in place of the phrase “according to claim 1”.
iv. In claim 16, the phrase “the system according to claim 14” should be corrected to “the system of claim 14”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-16 are rejected under 35 U.S.C. 103 as being unpatentable over Shah et al. (US Patent Application Pub. No. 2023/0017643 A1) in view of Malladi et al. (US Patent Application Pub. No. 2021/0373951 A1).
As per claim 1, Shah teaches a communication hub circuit [Fig. 1, cache coherent switch on chip 102], comprising:
at least two first physical ports each configured to exchange data with another communication hub circuit forming part of the same cache-coherent memory area as said hub circuit, when said first port is connected via a high-speed link to this other communication hub circuit [Paragraphs 0003; 0005; 0030, …cache coherent switch on chip 102 may be communicatively coupled to one or more such components of system 100 via CXL interface 116. Cache coherent switch on chip 102 may be configured to allow for sharing of resources between the various such components.];
at least one second physical port configured to exchange data with a computing microchip forming part of said same cache-coherent memory area when said at least one second physical port is connected via a high-speed link to this computing microchip [Paragraphs 0003; 0005; 0030, …The first server device may include a first memory device and a first cache coherent switch on chip, communicatively coupled to the first memory device via a Compute Express Link (CXL) protocol. The second server device may include a second memory device and a second cache coherent switch on chip, communicatively coupled to the second memory device via the CXL protocol and communicatively coupled to the first cache coherent switch on chip by the data connection via the CXL protocol.];
at least one third physical port configured to exchange data with an accelerator microchip forming part of an input/output coherent memory area with said same cache-coherent memory area, when said at least one third physical port is connected via a high-speed link to this accelerator microchip [Paragraphs 0030; 0042, …Cache coherent switch on chip 102 may utilize CXL interface 116 to provide low latency paths for memory access and coherent caching (e.g., between processors and/or devices to share memory, memory resources, such as accelerators, and memory expanders). CXL interface 116 may include a plurality of protocols, including protocols for input/output devices (IO), for cache interactions between a host and an associated device, and for memory access to an associated device with a host. For the purposes of this disclosure, reference to a CXL interface or protocol described herein may include any one or more of such protocols. Cache coherent switch on chip 102 may utilize such protocols to provide for resource sharing between a plurality of devices by acting as a switch between the devices.];
at least one first interface configured to exchange data with a memory circuit forming part of said same cache-coherent memory area when said at least one first interface is connected to this memory circuit [Paragraphs 0041-0042, FIG. 4 illustrates system 400 that includes a plurality of cache coherent switch on chips 402, CPUs 404, a plurality of memories 414, and a plurality of devices 428. While the embodiment shown in FIG. 4 illustrates a configuration where cache coherent switch on chip 402A is communicatively coupled (via CXL interface 416) to CPU 404A and cache coherent switch on chip 402B is communicatively coupled to CPU 404B, in various other embodiments, a single CPU may be coupled to both cache coherent switch on chip 402A and 402B.];
at least one second interface configured to exchange data with a sensor [Paragraph 0027;0030,… system 100 that includes cache coherent switch on chip 102, processor 104, network 170, accelerators 106, storage 108, application specific integrated circuit (ASIC) 110, persistent memory (PM) 112, and memory module 114. Various components of system 100 may be communicatively and/or electrically coupled with a CXL interface 116, which may be a port.];
at least one processing circuit configured to implement data processing [Abstract, Paragraphs 0005;0023, …The cache coherent switch on chip provides for resource sharing between components while independent of a system processor, removing the system processor as a bottleneck.];
at least one network-on-chip configured to transfer data between elements of the communication hub circuit [Paragraph 0074, …NICs 1080 may be configured to allow for cache coherent switch on chips 1002 to communicate via network/bus 1044. In certain embodiments, cache coherent switch on chips 1002 may be provided for data flow between accelerators 1006 and NICs 1080 (which may be a Smart NIC) so that NICs 1080 may write directly into accelerator 1006's cache coherent memory. Such data flow allows for sending and/or receiving of cache coherent traffic over network 1044 by accelerators 1006.], said elements comprising said at least two first ports, said at least one second port, said at least one third port, said at least one processing circuit, and said at least one first interface [Paragraphs 0032-0033, …Cache coherent switch on chip 202 includes one or more upstream ports 220 and one or more downstream ports 222. Each of upstream ports 220 and downstream ports 222 may be configured to support PCI or CXL protocol. As such, upstream ports 220 and downstream ports 222 may be ports configured to support any combination of PCI and/or CXL protocols.].
Shah discloses a CXL interface but does not explicitly disclose first physical ports, a second physical port, and a third physical port.
Malladi teaches first physical ports, second physical port and third physical ports [Paragraphs 0050; 0068, …a device that plugs onto a cache coherent interface (e.g., a CXL/PCIe5 interface) and can implement various cache and memory protocols (e.g., type-2 device based CXL.cache and CXL.memory protocols). Further, in some examples, the device can include a programmable controller or a processor (e.g., a RISC-V processor) that can be configured to present the remote coherent devices as part of the local system, negotiated using a cache coherent protocol (e.g., a CXL.IO protocol).].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Malladi's allocation of memory resources in Shah's system utilizing a cache coherent switch on chip, for the benefit of improving user experience by reducing network latency and improving network stability and operational data transfer rate (Malladi, [0010]), to obtain the invention as specified in claim 1.
As per claim 2, Shah and Malladi teach all the limitations of claim 1 above, wherein Shah and Malladi teach a hub circuit, wherein: said at least two first physical ports are each configured to implement a CXL.mem and CXL.cache or AXI stream protocol when exchanging data with the other communication hub circuit [Malladi, Paragraphs 0050; 0068, …a device that plugs onto a cache coherent interface (e.g., a CXL/PCIe5 interface) and can implement various cache and memory protocols (e.g., type-2 device based CXL.cache and CXL.memory protocols). Further, in some examples, the device can include a programmable controller or a processor (e.g., a RISC-V processor) that can be configured to present the remote coherent devices as part of the local system, negotiated using a cache coherent protocol (e.g., a CXL.IO protocol).];
said at least one second physical port is configured to implement a CXL.cache and CXL.mem protocol when exchanging data with the computing microchip [Malladi, Paragraphs 0050-0051; 0068, …a device that plugs onto a cache coherent interface (e.g., a CXL/PCIe5 interface) and can implement various cache and memory protocols (e.g., type-2 device based CXL.cache and CXL.memory protocols). Further, in some examples, the device can include a programmable controller or a processor (e.g., a RISC-V processor) that can be configured to present the remote coherent devices as part of the local system, negotiated using a cache coherent protocol (e.g., a CXL.IO protocol).]; and
said at least one third physical port is configured to implement a CXL.mem, CXL.io or PCIe protocol when exchanging data with the accelerator microchip [Shah, Paragraphs 0074-0077, NICs 1080 may be configured to allow for cache coherent switch on chips 1002s to communicate via network/bus 1044. In certain embodiments, cache coherent switch on chips 1002 may be provided for data flow between accelerators 1006 and NICs 1080 (which may be a Smart NIC) so that NICs 1080 may write directly into accelerator 1006's cache coherent memory. Such data flow allows for sending and/or receiving of cache coherent traffic over network 1044 by accelerators 1006.].
As per claim 3, Shah and Malladi teach all the limitations of claim 1 above, wherein Shah teaches a hub circuit, wherein the hub circuit is configured to merge several data flows it receives without implementing any processing on said several flows [Shah, Paragraphs 0074-0077, …cache coherent switch on chips 1002 may be provided for data flow between accelerators 1006 and NICs 1080 (which may be a Smart NIC) so that NICs 1080 may write directly into accelerator 1006's cache coherent memory. Such data flow allows for sending and/or receiving of cache coherent traffic over network 1044 by accelerators 1006.].
As per claim 4, Shah and Malladi teach all the limitations of claim 1 above, wherein Shah teaches a hub circuit, wherein the hub circuit is configured to implement processing on separate data flows it receives and then to merge the results of these processing operations [Shah, Paragraphs 0074-0077, …cache coherent switch on chips 1002 may be provided for data flow between accelerators 1006 and NICs 1080 (which may be a Smart NIC) so that NICs 1080 may write directly into accelerator 1006's cache coherent memory. Such data flow allows for sending and/or receiving of cache coherent traffic over network 1044 by accelerators 1006.].
As per claim 5, Shah and Malladi teach all the limitations of claim 1 above, wherein Shah teaches a hub circuit, wherein the hub circuit is configured to implement cache coherency in said same cache-coherent memory area [Shah, Paragraphs 0074-0077, …cache coherent switch on chips 1002 may be provided for data flow between accelerators 1006 and NICs 1080 (which may be a Smart NIC) so that NICs 1080 may write directly into accelerator 1006's cache coherent memory. Such data flow allows for sending and/or receiving of cache coherent traffic over network 1044 by accelerators 1006.].
As per claim 6, Shah and Malladi teach all the limitations of claim 1 above, wherein Shah teaches a hub circuit, wherein the hub circuit is configured to implement input/output coherency between the same cache-coherent memory area and another memory area to which an accelerator microchip belongs [Shah, Paragraphs 0030-0033, Cache coherent switch on chip 102 may utilize CXL interface 116 to provide low latency paths for memory access and coherent caching (e.g., between processors and/or devices to share memory, memory resources, such as accelerators, and memory expanders). CXL interface 116 may include a plurality of protocols, including protocols for input/output devices (IO), for cache interactions between a host and an associated device, and for memory access to an associated device with a host.].
As per claim 7, Shah and Malladi teach all the limitations of claim 1 above, wherein Malladi teaches a hub circuit, wherein said at least one first interface comprises an interface for a DDR-type memory and/or an interface for a FLASH-type memory [Malladi, Paragraphs 0069; 0078, The memory modules 135 may be grouped by type, form factor, or technology type (e.g., DDR4, DRAM, LDPPR, high bandwidth memory (HBM), or NAND flash, or other persistent storage (e.g., solid state drives incorporating NAND flash)). Each memory module may have a CXL interface and include an interface circuit for translating between CXL packets and signals suitable for the memory in the memory module 135.].
As per claim 8, Shah and Malladi teach all the limitations of claim 1 above, wherein Shah teaches a hub circuit, wherein the hub circuit further comprises a direct memory access circuit [Shah, Paragraph 0067, …The algorithm is configured to predict the next set of addresses (expected to be fetched by the applications) and configures a direct memory access (DMA) engine to prefetch those addresses and store the data in read/write buffers, to be ready to be read by the applications.].
As per claim 9, Shah and Malladi teach all the limitations of claim 1 above, wherein Malladi teaches a hub circuit, wherein said at least one second interface comprises at least one CSI-type interface and/or at least one Ethernet-type interface [Malladi, Paragraphs 0060-0061, The ToR Ethernet switch 110 and the network interface circuits 125 may include an RDMA interface to facilitate RDMA requests between CXL memory devices on different servers (e.g., the ToR Ethernet switch 110 and the network interface circuits 125 may provide hardware offload or hardware acceleration of RDMA over Converged Ethernet (RoCE), Infiniband, and iWARP packets).].
As per claim 10, Shah and Malladi teach all the limitations of claim 1 above, wherein Malladi teaches a hub circuit, wherein the hub circuit further comprises at least one third interface configured to exchange data with a display [Malladi, Paragraphs 0156; 0164, The user device 1010 may also comprise a user interface (that can include a display 1216 coupled to a processing element 1208) and/or a user input interface (coupled to a processing element 1208).].
As per claims 11-13 and 15, claims 11-13 and 15 are rejected in accordance with the same rationale and reasoning as claim 1 above, wherein claims 11-13 and 15 are the system claims corresponding to device claim 1.
As per claims 14 and 16, claims 14 and 16 are rejected in accordance with the same rationale and reasoning as claims 1 and 2 above, wherein claims 14 and 16 are the system claims corresponding to device claims 1 and 2.
Conclusion
RELEVANT ART CITED BY THE EXAMINER
The following prior art made of record and not relied upon is cited to establish the level of skill in the applicant’s art and those arts considered reasonably pertinent to applicant’s disclosure. See MPEP 707.05(c).
References Considered Pertinent but not relied upon
Peterson et al. (US Patent No. 11,102,323 B1) teaches a switch in the form of a network core switch that includes one or more card slots. Peterson discloses that a cache memory device is also included that is configured to be received in one of the card slots. Peterson suggests that the switch further includes one or more storage node connection ports in communication with the cache memory device, and also includes one or more client communication ports in communication with the cache memory device.
Dover (US Patent Application Pub. No: 20180276146 A1) teaches memory systems, computing systems, and machine readable mediums for protecting memory at identified addresses based upon access rules defining permissible access to the identified memory addresses that depends on the value of one or more registers stored in the memory system. Dover discloses the value of the registers (e.g., a Platform Configuration Register) may depend on a state of a computing device in which the memory system is installed.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GETENTE A YIMER whose telephone number is (571) 270-7106. The examiner can normally be reached Monday-Friday, 6:30-3:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, IDRISS N ALROBAYE, can be reached at 571-270-1023. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GETENTE A YIMER/Primary Examiner, Art Unit 2181