DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over Malladi (US 20190050325 A1) in view of Sasanka (US 20190050325 A1).
Claim 1. Malladi discloses A system-in-package (SiP) device (e.g., [0022]-[0023], Figs. 1-2 – Fig. 2 illustrates a side elevation view block diagram of the HBM+ unit 100 of FIG. 1), comprising:
a base substrate (e.g., [0023], Fig. 2 – a package substrate 210);
a processing device carried by the base substrate, wherein the processing device comprises a processing unit and a first cache memory associated with a first level of a cache hierarchy (e.g., [0042], Fig. 6 – logic die 105 may include an SRAM memory controller 620 including a prefetch engine 685 and a cache controller 690. The SRAM memory controller 620 is configured to interface with an SRAM memory 635 via the prefetch engine 685 and the cache controller 690); and
a hybrid high-bandwidth memory (HBM) device carried by the base substrate and electrically coupled to the processing unit through a SiP bus, the hybrid HBM device comprising (e.g., [0022], Fig. 1 – multiple HBM+ stacks 120 of HBM2 modules 110 and a corresponding logic die 105 disposed beneath the HBM2 modules 110; the HBM+ unit 100 can be a PCI-E compatible board):
an interface die (e.g., [0022], Fig. 2 – the logic die 105);
one or more memory dies carried by the interface die (e.g., [0023] – HBM+ stacks 120);
a shared bus electrically coupled to the interface die and each of the one or more memory dies (e.g., [0043] – a memory controller 698 configured to interface with a stack of HBM2 modules 630); and
Malladi does not disclose, but Sasanka discloses
a second cache memory formed on the interface die, wherein the second cache memory is associated with a second level of the cache hierarchy (e.g., Fig. 15, [0131] – HBM cache 1520; viewed in combination with Malladi's disclosure of SRAM 635, Fig. 6).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Claim 2. Malladi does not disclose, but Sasanka discloses
wherein the second level of the cache hierarchy is higher than the first level of the cache hierarchy (e.g., [0036] – A memory hierarchy includes one or more levels of cache unit(s) circuitry 204(A)-(N) within the cores 202(A)-(N), a set of one or more shared cache units circuitry 206, and external memory (not shown) coupled to the set of integrated memory controller units circuitry 214. The set of one or more shared cache units circuitry 206 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, such as a last level cache (LLC), and/or combinations thereof.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Claim 3. Malladi does not disclose, but Sasanka discloses
wherein the second cache memory formed on the interface die is communicably coupled to the shared bus to save a copy of data sent to the processing device from the one or more memory dies (e.g., [0131] – A level 1 (L1) cache 1505 and a level 2 (L2) cache store data and instructions retrieved from a system memory 1586 via the memory controller 1590 and/or the HBM cache 1520 via the HBM cache controller 1510. Although not illustrated, an L3 cache may also be included in the processor 1501 and shared between the various cores 1501A, B.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Claim 4. Malladi does not disclose, but Sasanka discloses
wherein the shared bus includes one or more through substrate vias (TSVs) extending from the interface die to an uppermost memory die, and wherein the second cache memory is communicably coupled to each of the one or more TSVs (e.g., [0116], Fig. 13A – on-die stacks 1301-1304 connect directly to a logic die or SoC 1305 (e.g., such as a CPU or GPU) using through-silicon vias).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Claim 5. Malladi discloses
wherein the hybrid HBM device further comprises a DRAM controller formed on the interface die, wherein the DRAM controller is communicably coupled to the shared bus and operatively coupled to the second cache memory (e.g., [0043] – The logic die 105 may include a high bandwidth memory (HBM) controller 625 including a memory controller 698 configured to interface with a stack of HBM2 modules 630).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Claim 6. Malladi does not disclose, but Sasanka discloses
wherein the DRAM controller is configured to: receive a read request from the second cache memory for requested data that is stored in the one or more memory dies; read the requested data to the second cache memory; and return the requested data to the second cache memory (e.g., [0132] – In response to a miss in the HBM cache 1520, the memory controller 1590 retrieves the requested cache line from system memory 1586 and provides it to the requesting core 1501A. Cache fill logic 1531 of the HBM cache controller 1510 performs a cache fill operation, storing the cache line.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Claim 7. Malladi does not disclose, but Sasanka discloses
wherein the second cache memory is configured to: receive a request, from the first cache memory, for missing data during a processing operation, check the second cache memory for the missing data (e.g., [0132] – In one embodiment, in response to a memory access request originating from one of the cores 1501A, B, etc., cache lookup logic 1530 of the HBM cache controller 1510 determines if the requested data is stored within the HBM cache 1520); and
wherein: when the missing data is found in the second cache memory, the second cache memory is further configured to send the missing data to the first cache memory (e.g., [0132], Fig. 15 – If the requested data is located within the HBM cache 1520, it is returned to the requesting core 1501A), and
when the missing data is not found in the second cache memory, the second cache memory is further configured to send a read request to the DRAM controller for the missing data (e.g., [0132] – In response to a miss in the HBM cache 1520, the memory controller 1590 retrieves the requested cache line from system memory 1586 and provides it to the requesting core 1501A.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Claim 8. Malladi discloses A method for operating a hybrid high bandwidth memory (HBM) device (e.g., [0005] – An HBM+ system is disclosed, comprising a host including at least one of a central processing unit (CPU)), the method comprising:
Malladi does not disclose, but Sasanka discloses
receiving, from a first cache memory on a processing device operatively coupled to the hybrid HBM device, a request for data; and checking a second cache memory at the hybrid HBM device for the requested data (e.g., [0132], Figs. 15, 17 – in response to a memory access request originating from one of the cores 1501A, B, etc., cache lookup logic 1530 of the HBM cache controller 1510 implements the techniques described herein to determine if the requested data is stored within the HBM cache 1520),
wherein: when the requested data is found in the second cache memory at the hybrid HBM device, the method further comprises sending the requested data, from the second cache memory to the first cache memory (e.g., [0132], Figs. 15, 17 – If the requested data is located within the HBM cache 1520, it is returned to the requesting core 1501A); and
when the requested data is not found in the second cache memory (e.g., [0132], Figs. 15, 17 – In response to a miss in the HBM cache 1520),
the method further comprises: reading, from a memory die stack at the hybrid HBM device, the requested data; writing the requested data to the second cache memory; and sending the requested data, from the second cache memory to the first cache memory (e.g., [0132], Figs. 15, 17 – the memory controller 1590 retrieves the requested cache line from system memory 1586 and provides it to the requesting core 1501A. Cache fill logic 1531 of the HBM cache controller 1510 performs a cache fill operation, storing the cache line.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Claim 9. Malladi does not disclose, but Sasanka discloses
wherein the second cache memory is communicatively coupled to a shared HBM bus at the hybrid HBM device, wherein reading the requested data from the memory die stack includes reading the requested data through the shared HBM bus (e.g., [0117] – In HBM implementations, the DRAM dies 1301-1304 are tightly coupled to the logic die 1305 over a distributed memory interface, … The HBM DRAM dies use a wide-interface architecture with a 128 bit data bus; [0132] – in response to a miss in the HBM cache 1520, the memory controller 1590 retrieves the requested cache line from system memory 1586 and provides it to the requesting core 1501A).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Claims 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over Malladi (US 20190050325 A1) in view of Sasanka (US 20190050325 A1) and further in view of Loh (US 20200183848 A1).
Claim 10. Malladi does not disclose, but Sasanka discloses
receiving, from the first cache memory, a second request for a second set of data; and checking the second cache memory for the requested second set of data (e.g., [0132] – in response to a memory access request originating from one of the cores 1501A, B, etc., cache lookup logic 1530 of the HBM cache controller 1510 determines if the requested data is stored within the HBM cache 1520); and
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Malladi in view of Sasanka does not disclose, but Loh discloses
wherein the request is a first request for a first set of data, and wherein the method further comprises: (e.g., [0050] – The cache controller services the selected memory request by accessing data from the memory arrays 332 within the in-package cache 330).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the HBM memory package disclosed by Malladi in view of Sasanka with the teachings of Loh, providing the benefit of efficiently performing memory accesses in a computing system (see Loh, [0006]).
Claim 11. Malladi does not disclose, but Sasanka discloses
wherein: when the requested second set of data is found in the second cache memory, the method further comprises sending the requested second set of data from the second cache memory to the first cache memory (e.g., [0132] – in response to a memory access request originating from one of the cores 1501A, B, etc., cache lookup logic 1530 of the HBM cache controller 1510 determines if the requested data is stored within the HBM cache 1520 … If the requested data is located within the HBM cache 1520, it is returned to the requesting core 1501A);
and when the requested second set of data is not found in the second cache memory, the method further comprises: reading the requested second set of data from the memory die stack; writing the requested second set of data to the second cache memory; and sending the requested second set of data from the second cache memory to the first cache memory (e.g., [0132] – In response to a miss in the HBM cache 1520, the memory controller 1590 retrieves the requested cache line from system memory 1586 and provides it to the requesting core 1501A. Cache fill logic 1531 of the HBM cache controller 1510 performs a cache fill operation, storing the cache line.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Claim 12. Malladi does not disclose, but Sasanka discloses
and wherein sending the requested data, from the second cache memory to the first cache memory (e.g., [0132] – In response to a miss in the HBM cache 1520, the memory controller 1590 retrieves the requested cache line from system memory 1586 and provides it to the requesting core 1501A. Cache fill logic 1531 of the HBM cache controller 1510 performs a cache fill operation, storing the cache line.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Malladi in view of Sasanka does not disclose, but Loh discloses
wherein the second cache memory is communicatively coupled to the first cache memory through a system-in-package (SiP) bus (e.g., [0043] – SiP 310 is connected to a memory 362 and off-package DRAM 370 via a memory bus 350 … The in-package low-latency interconnect 348 uses one or more of horizontal and/or vertical routes with shorter lengths; [0047] – interface logic 340 supports communication protocols, address formats and packet formats for transferring information between the in-package cache 330 and the processing unit 320),
comprises communicating the requested data using the SiP bus (e.g., [0060], Fig. 4 – in SiP 440, multiple chips, or device layers, are stacked on top of one another with direct vertical interconnects 416 tunneling through them).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the HBM memory package disclosed by Malladi in view of Sasanka with the teachings of Loh, providing the benefit of efficiently performing memory accesses in a computing system (see Loh, [0006]).
Claim 13. Malladi discloses A combined high bandwidth memory (HBM) device (e.g., [0005] – An HBM+ system), comprising:
an interface die having a central portion and a peripheral portion (e.g., [0023], Fig. 2 – logic die 105; [0041] – FIG. 6 illustrates a microarchitecture of the logic die 105 of FIGS. 1 and 2);
a stack of memory dies carried by the interface die (e.g., [0023], Fig. 2 – multiple HBM+ stacks 120 may be included on the HBM+ unit 100 … disposed atop and coupled to a package substrate 210);
a shared bus electrically coupled to the interface die, each memory die in the stack of memory dies, and the cache memory component (e.g., [0041], Fig. 6 – the peripheral logic may include a host manager 615 having queuing control, an SRAM controller 620, an HBM controller 625; [0043] – the logic die 105 may include a high bandwidth memory (HBM) controller 625 including a memory controller 698 configured to interface with a stack of HBM2 modules 630),
Malladi does not disclose, but Sasanka discloses
wherein the shared bus is positioned at least partially within a footprint of the central portion of the interface die (e.g., [0117] – In HBM implementations, the DRAM dies 1301-1304 are tightly coupled to the logic die 1305 over a distributed memory interface, which is divided into multiple independent channels. For example, a typical arrangement may include four stacked DRAM dies 1301-1304, with each die coupled to dual independent channels (channels 0 and 1). The HBM DRAM dies use a wide-interface architecture with a 128 bit data bus operating at double data rate (DDR). HBM2 implementations may further subdivide each physical channel into two pseudo-channels, which share the channel's row and column command bus and clocking input.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Malladi in view of Sasanka does not disclose, but Loh discloses
a cache memory component formed in the peripheral portion of the interface die (e.g., [0047], Fig. 3 – the processor cores additionally access a shared cache within the execution engine 322. When the cache memory subsystem within the execution engine 322 does not include data requested by a processor core, the execution engine 322 sends the memory access request to the in-package cache 330.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the HBM memory package disclosed by Malladi in view of Sasanka with the teachings of Loh, providing the benefit of efficiently performing memory accesses in a computing system (see Loh, [0006]).
Claim 14. Malladi does not disclose, but Sasanka discloses
wherein the shared bus includes a plurality of through substrate vias (TSVs) extending from the central portion of the interface die to an uppermost memory die in the stack of memory dies, and wherein the cache memory component is communicably coupled to each TSV in the plurality of TSVs to store a copy of any data communicated through the shared bus (e.g., [0116], Fig. 13A – on-die stacks 1301-1304 connect directly to a logic die or SoC 1305 (e.g., such as a CPU or GPU) using through-silicon vias).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Claim 15. Malladi does not disclose, but Sasanka discloses
further comprising a DRAM controller formed in the peripheral portion of the interface die, wherein the DRAM controller is operably coupled to the shared bus and the cache memory component (e.g., [0116] – beside-die stacks 1301-1304, shown in FIG. 13B, are placed beside the logic/SoC die 1305 on a silicon interposer or bridge 1312, with the connections between the DRAM and the logic die 1305 running through the interposer 1312 and an interface layer 1311).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Claim 16. Malladi does not disclose, but Sasanka discloses
wherein the cache memory component is configured to: receive, from a processing device external to the combined HBM device, a request for a set of data (e.g., [0132] – In one embodiment, in response to a memory access request originating from one of the cores 1501A, B, etc., cache lookup logic 1530 of the HBM cache controller 1510 determines if the requested data is stored within the HBM cache 1520); and
when the cache memory component does not contain the set of data, the cache memory component is further configured to send a read request to the DRAM controller to cause the DRAM controller to read the set of data from the stack of memory dies (e.g., [0132] – In response to a miss in the HBM cache 1520, the memory controller 1590 retrieves the requested cache line from system memory 1586 and provides it to the requesting core 1501A.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Malladi in view of Sasanka does not disclose, but Loh discloses
check for the set of data within the cache memory component, wherein: when the cache memory component contains the set of data, the cache memory component is further configured to send the set of data, from the cache memory component, to the processing device (e.g., [0047], Fig. 3 – the processor cores additionally access a shared cache within the execution engine 322. When the cache memory subsystem within the execution engine 322 does not include data requested by a processor core, the execution engine 322 sends the memory access request to the in-package cache 330.); and
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the HBM memory package disclosed by Malladi in view of Sasanka with the teachings of Loh, providing the benefit of efficiently performing memory accesses in a computing system (see Loh, [0006]).
Claim 17. Malladi does not disclose, but Sasanka discloses
when the cache memory component does not contain the set of data, the cache memory component is further configured to overwrite data stored in the cache memory component with the requested set of data (e.g., [0132] – In response to a miss in the HBM cache 1520, the memory controller 1590 retrieves the requested cache line from system memory 1586 and provides it to the requesting core 1501A. Cache fill logic 1531 of the HBM cache controller 1510 performs a cache fill operation, storing the cache line.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Claim 18. Malladi does not disclose, but Sasanka discloses
wherein the DRAM controller is configured to: receive, from the cache memory component, a read request for a set of data stored in the stack of memory dies; and read the set of data from the stack of memory dies through the shared bus (e.g., [0132] – In response to a miss in the HBM cache 1520, the memory controller 1590 retrieves the requested cache line from system memory 1586 and provides it to the requesting core 1501A.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Claim 19. Malladi does not disclose, but Sasanka discloses
wherein the cache memory component is communicatively coupled between and a system-in-package (SiP) bus, and wherein the cache memory component is configured to send the set of data to a processing device external to the combined HBM device through the SiP bus (e.g., [0116], Fig. 13B – beside-die stacks 1301-1304 are placed beside the logic/SoC die 1305 on a silicon interposer or bridge 1312, with the connections between the DRAM and the logic die 1305 running through the interposer 1312 and an interface layer 1311).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the HBM memory package disclosed by Malladi with the teachings of Sasanka, providing the benefit that a shared cache may be included in either processor 170, 180 or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode (see Sasanka, [0028]), thereby reducing the bandwidth and latency overhead of HBM caches (Sasanka, [0001]).
Malladi in view of Sasanka does not disclose, but Loh discloses
the shared bus (e.g., [0043] – The SiP 310 is connected to a memory 362 and off-package DRAM 370 via a memory bus 350); and
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the HBM memory package disclosed by Malladi in view of Sasanka with the teachings of Loh, providing the benefit of efficiently performing memory accesses in a computing system (see Loh, [0006]).
Claim 20. Malladi discloses wherein the interface die further comprises one or more read and write components formed in the peripheral portion and communicatively coupled to the cache memory component (e.g., [0024] – In HBM+, the logic die 105 may perform basic input/output (I/O) operations; [0042] – more specifically, the logic die 105 may include a host manager 615 including an interface PHY 675).
Conclusion
Additional Prior Art: Malladi (US 20190079886) - HETEROGENEOUS ACCELERATOR FOR HIGHLY EFFICIENT LEARNING SYSTEMS
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GAUTAM SAIN whose telephone number is (571)270-3555. The examiner can normally be reached M-F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jared Rutz can be reached at 571-272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GAUTAM SAIN/Primary Examiner, Art Unit 2135