Prosecution Insights
Last updated: April 19, 2026
Application No. 18/651,480

Control of Memory Access Cycles for Thermal Stability and Performance

Status: Non-Final OA (§103)
Filed: Apr 30, 2024
Examiner: RUTZ, JARED IAN
Art Unit: 2135
Tech Center: 2100 — Computer Architecture & Software
Assignee: Micron Technology, Inc.
OA Round: 3 (Non-Final)

Outlook: Favorable
Grant Probability: 80% (86% with interview)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 7m
Examiner Intelligence

Career Allow Rate: 80% — above average (251 granted / 315 resolved; +24.7% vs TC avg)
Interview Lift: +6.3% — moderate lift, with vs. without interview, among resolved cases with interview
Typical Timeline: 3y 7m avg prosecution
Career History: 326 total applications across all art units; 11 currently pending
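For readers checking the arithmetic, the headline figures above follow directly from the raw counts (a minimal sketch; the Tech Center average is inferred from the stated +24.7% delta, not reported directly):

```python
# Reproduce the headline examiner statistics from the raw counts shown above.
granted, resolved = 251, 315
allow_rate = granted / resolved        # ~0.797, displayed as 80%

tc_avg = 0.55                          # assumption: implied by the +24.7% delta
delta_vs_tc = allow_rate - tc_avg      # ~+0.247, displayed as +24.7%

print(f"{allow_rate:.1%}, {delta_vs_tc:+.1%}")  # prints "79.7%, +24.7%"
```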

Statute-Specific Performance

  §101:  7.2%  (-32.8% vs TC avg)
  §103: 43.0%  (+3.0% vs TC avg)
  §102: 18.7%  (-21.3% vs TC avg)
  §112: 19.5%  (-20.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 315 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Walker et al. (US 20140281311) in view of Mukker et al. (US 20210109587).

Regarding claim 1, WALKER discloses a method (FIGs. 10 and 11, process/method for managing thermal stress in the system of FIGs. 1 and 4, with memory controllers 120, processors 110, and memory system 130), comprising:

generating, in a controller, data indicative of memory access activities in a plurality of memories during a first time period, the plurality of memories operable by a […] memory controller[…] ([0070], “In this manner, thermal management mechanisms already implemented on the processor 110 can build a thermal model for temperature prediction, independent of the type of memory device actually connected to it. Similarly, the memory device can provide other static information indicating the energy and power implications of certain controller actions to the processor 110 or the memory controller 120. The static information may include a table of absolute or relative energy, power or power density, as a function of request bandwidth, size, page hit rate or other measurable traffic features”, e.g. processor 110 or memory controller 120 can generate data indicating memory access activities, such as request bandwidths and traffic features, in memory system 130);

evaluating, by the controller, thermal stresses applied to the plurality of memories during the first time period ([0070], “Example embodiments may include mechanisms for control logic to query memory devices on their temperature, to set thermal limits on memory devices, to communicate thermal maps and notification of status and thermal emergencies to connected control or processing logic. Memory devices, for example the memory system 130, may communicate their physical thermal properties to an external processing unit, for example the memory controller 120 or the host processor 110. In this manner, thermal management mechanisms already implemented on the processor 110 can build a thermal model for temperature prediction, independent of the type of memory device actually connected to it”. E.g. memory controller 120 may obtain and evaluate the thermal properties/temperature measurements from memory system 130, to evaluate thermal stresses, during a given time period to build a thermal model for temperature prediction);

grouping, by the controller, the plurality of memories into a plurality of memory batches based at least in part on the thermal stresses (FIG. 4, [0041]-[0047], “the physical address space may be subdivided into large `uniform regions` relating to the physical placement of the uniform regions. This address translation may occur within a memory controller 120 (FIG. 1) enhanced with static and runtime thermal information. The same techniques may be used for mapping between non-uniform memory regions such as those with different latencies and bandwidths. In some systems according to example embodiments, memory die may include banks of addresses operating with different latencies, so thermal information according to example embodiments could indicated to the host processor 110 as permanently `cold` regions for allocating the most heavily used data.”. E.g. different physical memory addresses/locations are grouped into different “thermal regions”/memory batches, by the controller, based in part on thermal stress information);

scheduling, by the controller, idle times for the plurality of memory batches in a second time period following the first time period ([0048], “These and other embodiments may reduce or eliminate the occurrence of memory failure or degradation when sudden thermal events do not leave sufficient time for a data migration, or when thermal events are too short for implementation of data migration and re-mapping. In some embodiments, the memory controller 120 may turn off (e.g., disable) data accesses to certain channels, banks or regions of the memory.” E.g. in case of thermal events, the controller may disable data access of some regions/batches of memory, i.e. schedule idle times in a second time period, which is after the thermal events);

receiving, in the controller, requests to access the plurality of memories during the second time period; and enforcing, by the controller, the idle times pre-scheduled for the plurality of memory batches in the second time period via throttling communications of the requests to the memory controller[…] ([0048], “These and other embodiments may reduce or eliminate the occurrence of memory failure or degradation when sudden thermal events do not leave sufficient time for a data migration, or when thermal events are too short for implementation of data migration and re-mapping. In some embodiments, the memory controller 120 may turn off (e.g., disable) data accesses to certain channels, banks or regions of the memory.” [0050]-[0051], “Throttling may be performed at different granularities within the memory device. In order of increasing complexity, throttling may be done at the channel or vault level, the rank level, the bank level, the sub-bank level, or the row or column level in a memory device. In some embodiments, the memory controller 120 may perform throttling.” E.g. receiving access requests during the second time period, and enforcing idle times by throttling communications of the requests. The idle times are pre-scheduled, because the memory controller makes the decision to throttle before the throttling).

Walker does not expressly teach performing the throttling in an environment having a plurality of controllers operating a plurality of memories. With respect to claim 1, Mukker teaches a system including a plurality of memory controllers (figure 8, items 814, 104) each coupled to a plurality of memory devices (figure 8 shows 814 connected to 826 and 822, and 104 connected to 106 and 108). Paragraph 0037 shows that power management circuitry 204 can send a message to host circuitry (analogous to the claimed controller) to transition power state.

As of the earliest priority date of the application, it would have been obvious to combine the plurality of memory controllers of Mukker with the system of Walker. The motivation for doing so would have been to coordinate active and idle periods across all agents in a workload pipeline; see Mukker paragraph 0018. Further motivation comes from In re Harza, 274 F.2d 669, 124 USPQ 378 (CCPA 1960), finding that the duplication of parts, in this case the memory controller and connected memories, has no patentable significance unless a new and unexpected result is produced. Therefore, it would have been obvious to combine Walker and Mukker to obtain the invention as recited in claims 1-5 and 9.

Regarding claim 2, WALKER discloses the method of claim 1, wherein the grouping of the plurality of memories into the plurality of memory batches is further based on heat dissipation characteristics of the plurality of memory batches ([0061], “Thermal information may include a graph, for example a data structure, of material regions along with the regions' location in space, and the regions' thermal capacitance and resistance (RC) properties”. E.g. thermal information used for mapping the regions/batches may include thermal RC properties, which are heat dissipation characteristics).

Regarding claim 3, WALKER discloses the method of claim 2, wherein each of the plurality of memories is configured on a separate integrated circuit die ([0071], “The memory system 630 may include 3D-stacked memory dies”).

Regarding claim 4, WALKER discloses the method of claim 2, wherein each of the plurality of memories is enclosed within a separate integrated circuit package (FIG. 6, memories are enclosed in “3D-stacked, interposer-mounted or off-chip memory controller logic layer + memory” in an IC package).
Regarding claim 5, WALKER discloses the method of claim 2, wherein the plurality of memory batches include a first memory batch and a second memory batch; and the idle times include first idle times for the first memory batch and second idle times for the second memory batch; and the first idle times and the second idle times are stacked with offset over time ([0048], “the memory controller 120 may turn off (e.g., disable) data accesses to certain channels, banks or regions of the memory. The memory controller 120 may migrate data from those memory cells to other memory cells to remove the need to refresh those memory cells and to minimize or reduce power consumption in the region including those memory cells.” E.g. enforcing idle times for different regions/batches; different regions/batches may have different idle times due to local thermal events, and some of the idle times may be stacked with offset over time. Additionally, migrating data from one region to another avoids the need to refresh the first region, so idle times are staggered between different regions after data is moved between them).

Regarding claim 9, WALKER discloses the method of claim 2, wherein the scheduling of the idle times for the plurality of memory batches includes reducing performance impact of the idle times on memory access requests having a pattern measured in the first time period ([0048], “the memory controller 120 may turn off (e.g., disable) data accesses to certain channels, banks or regions of the memory. The memory controller 120 may migrate data from those memory cells to other memory cells to remove the need to refresh those memory cells and to minimize or reduce power consumption in the region including those memory cells.” E.g. enforcing idle times for different regions/batches; different regions/batches may have different idle times due to local thermal events, and some of the idle times may be alternating for different batches over time. Additionally, migrating data from one region to another avoids the need to refresh the first region, so idle times are staggered between different regions after data is moved between them to reduce the performance impact of idle times, for a pattern of accesses in the first time period; see [0070], “The static information may include a table of absolute or relative energy, power or power density, as a function of request bandwidth, size, page hit rate or other measurable traffic features”).

Claims 6-8 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Walker et al. (US 20140281311) in view of Mukker et al. (US 20210109587) and further in view of Turner et al. (US 20240045464).

Regarding claim 6, the combination of Walker and Mukker does not explicitly disclose the method of claim 5, wherein the controller is configured in an optical interface circuit. TURNER discloses the method of claim 5, wherein the controller is configured in an optical interface circuit (FIG. 2A, memory controller 204 is coupled to the E/O transceiver 210 in a photonic substrate, e.g. an optical interface circuit). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Walker and Mukker’s processor system with thermal throttling to further include TURNER’s optical circuit, to provide higher bandwidth and lower latency (see TURNER [0036]).

Regarding claim 7, Walker discloses the method of claim 6, further comprising: refraining, by the controller during an idle time pre-scheduled for a memory batch having first memories, from sending requests to the memory controllers to access the first memories ([0048], “These and other embodiments may reduce or eliminate the occurrence of memory failure or degradation when sudden thermal events do not leave sufficient time for a data migration, or when thermal events are too short for implementation of data migration and re-mapping. In some embodiments, the memory controller 120 may turn off (e.g., disable) data accesses to certain channels, banks or regions of the memory.” [0050]-[0051], “Throttling may be performed at different granularities within the memory device. In order of increasing complexity, throttling may be done at the channel or vault level, the rank level, the bank level, the sub-bank level, or the row or column level in a memory device. In some embodiments, the memory controller 120 may perform throttling.” E.g. receiving access requests, and enforcing idle times by throttling communications of the requests. The idle times are pre-scheduled, because the memory controller makes the decision to throttle before the throttling).

WALKER does not explicitly disclose receiving, by the controller via an optical fiber, the requests. TURNER discloses receiving, by the controller via an optical fiber, the requests (FIG. 2A, memory controller 204 manages all accesses to the memories 206A and 206B, and FIG. 1C, the requests are received from processors 100A and 100B via optical transceivers 114A and optical channels/optical fibers 112A and 112B). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify WALKER’s processor system with thermal throttling to further include TURNER’s optical circuit, to provide higher bandwidth and lower latency (see TURNER [0036]).
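The “stacked with offset” idle times of claim 5 and the “refraining” of claim 7 amount to gating each request on its batch’s pre-scheduled window, roughly along these lines (a hypothetical sketch; the window layout and period are invented for illustration):

```python
# Staggered (offset) idle windows per batch within a repeating period,
# and a gate that refrains from forwarding requests during those windows.
PERIOD = 100
IDLE_WINDOWS = {0: (0, 20), 1: (20, 40)}  # illustrative offsets only

def allow_request(t, batch):
    """Return True if a request at time t may be forwarded to the batch."""
    start, end = IDLE_WINDOWS[batch]
    return not (start <= t % PERIOD < end)
```

Because the windows for batch 0 and batch 1 do not overlap, at most one batch is ever idled at a time, which is one way to read the “stacked with offset over time” limitation.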
Regarding claim 8, WALKER discloses the method of claim 6, further comprising: receiving, by the controller from a processor sub-system, the requests; and refraining, by the controller during an idle time pre-scheduled for a memory batch having first memories, from sending requests, to a memory sub-system containing the memory controllers and the plurality of memories, to access the first memories ([0048], “These and other embodiments may reduce or eliminate the occurrence of memory failure or degradation when sudden thermal events do not leave sufficient time for a data migration, or when thermal events are too short for implementation of data migration and re-mapping. In some embodiments, the memory controller 120 may turn off (e.g., disable) data accesses to certain channels, banks or regions of the memory.” [0050]-[0051], “Throttling may be performed at different granularities within the memory device. In order of increasing complexity, throttling may be done at the channel or vault level, the rank level, the bank level, the sub-bank level, or the row or column level in a memory device. In some embodiments, the memory controller 120 may perform throttling.” E.g. receiving access requests, and enforcing idle times by throttling communications of the requests. The idle times are pre-scheduled, because the memory controller makes the decision to throttle before the throttling).

WALKER does not explicitly disclose sending requests, through an optical fiber, to a memory sub-system containing the memory controllers and the plurality of memories. TURNER discloses sending requests, through an optical fiber, to a memory sub-system containing the memory controllers and the plurality of memories (FIG. 2A, memory controller 204 manages all accesses to the memories 206A and 206B, and FIG. 1C, the requests are sent to memories via optical transceivers 114A and optical channels/optical fibers 112A and 112B). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify WALKER’s processor system with thermal throttling to further include TURNER’s optical circuit, to provide higher bandwidth and lower latency (see TURNER [0036]).

Regarding claim 11, WALKER discloses a device (system in FIGs. 1 and 4, with memory controllers 120, processors 110, and memory system 130), comprising: at least one buffer; and a controller coupled to the at least one buffer; wherein the controller is configured to buffer, in the at least one buffer, requests to access a […] memory sub-system[…] ([0055], “several processors 110 in a system 100 may share the same physical bus or buses and queues but the processors 110 may continue to have different arbitration results and slave-way responses based on the virtual network to which the processor 110 belongs. In some embodiments, when a processor 110 is on a virtual network and the processor 110 targeting a hot memory region, the memory controller 120 may reduce the queue entries associated with that network to reduce the request rate without having to use software intervention or adding sideband signals back to the processor 110. In this manner, the memory controller 120 may shape the traffic targeting only the hot spot(s) without affecting traffic that targets cooler or cold regions leaving those regions to operate at their pre-determined operating points.”. E.g. the memory controller has queues/buffers to buffer requests for target memory regions to manage hot spots in memory); […] the […] of memory sub-system[…] including a memory controller and a plurality of memories ([0071], “The memory system 630 may include a controller logic layer for example a logic die. The logic die may be stacked directly under the memory chips. All or some of the memory controller 120 logic may be implemented in the logic die.” E.g. each memory chip may have its own controller logic.
[0020]-[0022], “In some embodiments, the processor 110 or the memory controller 120 may use on-die thermal sensors and thermal models to direct how data is mapped and relocated. In some example embodiments, the processor 110 or the memory controller 120 may use on-die thermal sensors and thermal models to individually adjust memory cell refresh across regions based on their temperature. FIG. 2 is a diagram of a memory system 200 according to various embodiments. The memory system 200 may serve functions of the memory system 130 (FIG. 1). The memory system 200 may comprise memory dies 210 and 220. While two memory dies are illustrated, the memory system 200 may include fewer or more than two memory dies…. The memory system 200 may include independent logic and memory dies, dies stacked via silicon interposers or directly stacked dies ("3D stacking"), or any other arrangement of logic and storage dies. In example embodiments, thermal sensors (TS) may be included in the logic die 230, in one or more memory dies, or in both logic die 230 and memory dies 210 and 220.” E.g. processor 110, memory controller 120, and various control logic dies may distribute or independently control the data mapping and memory regions; WALKER discloses any number of possible combinations of multiple “control logics”/memory controllers for multiple “memory subsystems”);

wherein the controller is further configured to group memories of the […] memory sub-system[…] into memory batches (FIG. 4, [0041]-[0047], “the physical address space may be subdivided into large `uniform regions` relating to the physical placement of the uniform regions. This address translation may occur within a memory controller 120 (FIG. 1) enhanced with static and runtime thermal information. The same techniques may be used for mapping between non-uniform memory regions such as those with different latencies and bandwidths. In some systems according to example embodiments, memory die may include banks of addresses operating with different latencies, so thermal information according to example embodiments could indicated to the host processor 110 as permanently `cold` regions for allocating the most heavily used data.”. E.g. different physical memory addresses/locations are grouped into different “thermal regions”/memory batches, by the controller, based in part on thermal stress information),

schedule idle times for the memory batches ([0048], “These and other embodiments may reduce or eliminate the occurrence of memory failure or degradation when sudden thermal events do not leave sufficient time for a data migration, or when thermal events are too short for implementation of data migration and re-mapping. In some embodiments, the memory controller 120 may turn off (e.g., disable) data accesses to certain channels, banks or regions of the memory.” E.g. in case of thermal events, the controller may disable data access of some regions/batches of memory, i.e. schedule idle times),

and throttle dispatching of the requests to the plurality of memory sub-systems according to a schedule having the idle times ([0048], “These and other embodiments may reduce or eliminate the occurrence of memory failure or degradation when sudden thermal events do not leave sufficient time for a data migration, or when thermal events are too short for implementation of data migration and re-mapping. In some embodiments, the memory controller 120 may turn off (e.g., disable) data accesses to certain channels, banks or regions of the memory.” [0050]-[0051], “Throttling may be performed at different granularities within the memory device. In order of increasing complexity, throttling may be done at the channel or vault level, the rank level, the bank level, the sub-bank level, or the row or column level in a memory device. In some embodiments, the memory controller 120 may perform throttling.” E.g. enforcing idle times by throttling communications of the requests. The idle times are pre-scheduled, because the memory controller makes the decision to throttle before the throttling).

Walker does not expressly teach performing the throttling in an environment having a plurality of controllers operating a plurality of memories. With respect to claim 11, Mukker teaches a system including a plurality of memory controllers (figure 8, items 814, 104) each coupled to a plurality of memory devices (figure 8 shows 814 connected to 826 and 822, and 104 connected to 106 and 108). Paragraph 0037 shows that power management circuitry 204 can send a message to host circuitry (analogous to the claimed controller) to transition power state. As of the earliest priority date of the application, it would have been obvious to combine the plurality of memory controllers of Mukker with the system of Walker. The motivation for doing so would have been to coordinate active and idle periods across all agents in a workload pipeline; see Mukker paragraph 0018. Further motivation comes from In re Harza, 274 F.2d 669, 124 USPQ 378 (CCPA 1960), finding that the duplication of parts, in this case the memory controller and connected memories, has no patentable significance unless a new and unexpected result is produced.

WALKER and Mukker do not explicitly disclose an optical transceiver; a controller coupled to the optical transceiver; wherein the controller is configured to receive requests via the optical transceiver to access a plurality of memory sub-systems. TURNER discloses an optical transceiver (FIG. 2A, E/O transceiver 210); a controller coupled to the optical transceiver (FIG. 2A, memory controller 204 is coupled to the E/O transceiver 210); wherein the controller is configured to receive requests via the optical transceiver to access a plurality of memory sub-systems (FIG. 2A, memory controller 204 manages all accesses to the memories 206A and 206B, and FIG.
1C, the requests are received from processors 100A and 100B via optical transceivers 114A). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify WALKER’s processor system with thermal throttling to further include TURNER’s optical circuit, to provide higher bandwidth and lower latency (see TURNER [0036]). Therefore, it would have been obvious to combine Walker, Mukker, and Turner to obtain the invention as recited in claim 11.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Walker et al. (US 20140281311) in view of Mukker et al. (US 20210109587) and further in view of Kale et al. (US 20210255799).

Regarding claim 10, WALKER discloses the method of claim 2, further comprising: wherein the scheduling of the idle times for the plurality of memory batches includes reducing performance impact of the idle times on the pattern of memory access requests ([0048], “the memory controller 120 may turn off (e.g., disable) data accesses to certain channels, banks or regions of the memory. The memory controller 120 may migrate data from those memory cells to other memory cells to remove the need to refresh those memory cells and to minimize or reduce power consumption in the region including those memory cells.” E.g. enforcing idle times for different regions/batches; different regions/batches may have different idle times due to local thermal events, and some of the idle times may be alternating for different batches over time. Additionally, migrating data from one region to another avoids the need to refresh the first region, so idle times are staggered between different regions after data is moved between them to reduce the performance impact of idle times, for a pattern of accesses in the first time period; see [0070], “The static information may include a table of absolute or relative energy, power or power density, as a function of request bandwidth, size, page hit rate or other measurable traffic features”).

WALKER does not explicitly disclose predicting a pattern of memory access requests in the second time period. KALE discloses predicting a pattern of memory access requests in the second time period ([0038] “For example, an Artificial Neuron Network (ANN) (e.g., a Spiking Neural Network (SNN), a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), or any combination thereof) can be configured to predict data changes and/or movements to be implemented in the data storage device and thus predict future power/temperature based on access patterns (e.g., read/write), access frequency, address locations, chunk sizes, operation conditions/environment, etc. Intelligent throttling of data storage activities can improve user experiences by avoiding rigidly-forced throttling of performance of the data storage device, which can be a result of temperature exceeding a threshold.” E.g. predicting future data access patterns/movements, based on past memory access patterns). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify WALKER and Mukker’s processor system with thermal throttling to further include KALE’s prediction of data access patterns, to keep the temperature rise at a safe level while optimizing overall performance (see KALE [0034]).

Claims 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over Walker et al. (US 20140281311) in view of Mukker et al. (US 20210109587) and Turner et al.
(US 20240045464), and further in view of Reed et al. (US 7495985).

Regarding claim 12, WALKER discloses the device of claim 11, further comprising: a plurality of interfaces configured as hosts to the plurality of memory sub-systems respectively (FIG. 1, [0016], buses 121 and 122 between the processors, memory controller, and memory system; thus, the buses must also have interfaces between the components). WALKER, Mukker, and TURNER do not explicitly disclose wherein the device and the plurality of memory sub-systems are configured on a same printed circuit board. REED discloses wherein the device and the plurality of memory sub-systems are configured on a same printed circuit board (FIG. 3, col. 6 lines 62-65, “FIG. 3 shows a diagram illustrating a top-down view of a typical ATX form factor motherboard with respect to the locations of the CPU, the memory controller, and the system memory 115”. E.g. both processor and memories are configured on a printed circuit board/motherboard). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify WALKER and Mukker’s processor system with thermal throttling, with TURNER’s optical circuit, to further include REED’s single printed circuit board system, to allow for better heat dissipation and higher performance (see REED background).

Regarding claim 13, WALKER discloses the device of claim 12, wherein the controller is further configured to track a pattern of accessing the plurality of memory sub-systems, and schedule the idle times based on reducing delays in servicing requests in the pattern of accessing the plurality of memory sub-systems ([0048], “the memory controller 120 may turn off (e.g., disable) data accesses to certain channels, banks or regions of the memory. The memory controller 120 may migrate data from those memory cells to other memory cells to remove the need to refresh those memory cells and to minimize or reduce power consumption in the region including those memory cells.” E.g. enforcing idle times for different regions/batches; different regions/batches may have different idle times due to local thermal events, and some of the idle times may be alternating for different batches over time. Additionally, migrating data from one region to another avoids the need to refresh the first region, so idle times are staggered between different regions after data is moved between them to reduce the performance impact of idle times, for a pattern of accesses in the first time period; see [0070], “The static information may include a table of absolute or relative energy, power or power density, as a function of request bandwidth, size, page hit rate or other measurable traffic features”).

WALKER does not explicitly disclose accessing the plurality of memory sub-systems over the optical transceiver. TURNER discloses accessing the plurality of memory sub-systems over the optical transceiver (FIG. 2A, memory controller 204 manages all accesses to the memories 206A and 206B, and FIG. 1C, data in memories are accessed by the processor via optical transceivers 114A and optical channels/optical fibers 112A and 112B). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify WALKER’s processor system with thermal throttling to further include TURNER’s optical circuit, to provide higher bandwidth and lower latency (see TURNER [0036]).
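Claim 10’s “predicting a pattern of memory access requests” need not be a neural network as in KALE’s example; even a simple smoothed estimate of per-batch demand, used to place idle windows on the least-busy predicted batches first, illustrates the idea. The sketch below is an assumption-laden stand-in, not the applicant’s or the references’ implementation:

```python
def predict_rates(history, alpha=0.5):
    """Exponentially smoothed per-batch request counts (oldest sample first)."""
    rates = {}
    for sample in history:
        for batch, count in sample.items():
            prev = rates.get(batch, count)  # seed with the first observation
            rates[batch] = alpha * count + (1 - alpha) * prev
    return rates

# Idle the batch predicted to be least busy first, reducing performance impact.
rates = predict_rates([{0: 10, 1: 2}, {0: 6, 1: 4}])
idle_order = sorted(rates, key=rates.get)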
Regarding claim 14, WALKER discloses the device of claim 12, wherein the controller is further configured to evaluate thermal stresses in memories in the plurality of memory sub-systems and identify the memory batches based on similarity in access timing and thermal stress ([0070], “In this manner, thermal management mechanisms already implemented on the processor 110 can build a thermal model for temperature prediction, independent of the type of memory device actually connected to it. Similarly, the memory device can provide other static information indicating the energy and power implications of certain controller actions to the processor 110 or the memory controller 120. The static information may include a table of absolute or relative energy, power or power density, as a function of request bandwidth, size, page hit rate or other measurable traffic features”. [0046] “The necessity for data migration may be minimized if the two thermal regions have similar temperatures, areas, and thermal capacitance”, e.g. evaluate thermal stress and build a thermal model to map memory batches, mapping regions/batches based on similarities of thermal stresses/characteristics). Regarding claim 15, WALKER discloses the device of claim 14, wherein the controller is further configured to identify the memory batches based on heat dissipation characteristics of memories in the plurality of memory sub-systems ([0061], “Thermal information may include a graph, for example a data structure, of material regions along with the regions' location in space, and the regions' thermal capacitance and resistance (RC) properties”. E.g. 
thermal information used for mapping the regions /batches may include thermal RC properties, which are heat dissipation characteristics); wherein the plurality of memory sub-systems include first memories having thermal stresses above a threshold during a past period of time and second memories having thermal stresses below the threshold during the past period of time; and wherein the memory batches configured for a next period of time include the first memories but not the second memories ([0073]-[0075], “the host processor 710 has produced a hot spot 750 that overlaps a hot spot 751 in the memory system 730. Accordingly, in FIG. 7B, the host processor 710 or the memory controller 120 (FIG. 1) may move the hot spot 751 so that the hot spot 751 no longer overlaps with hot spot 750. The host process 710 or the memory controller 120 may copy data from its original location and update the map RAM 140 (FIG. 1) to reflect the new mapped region of hot spot 751…. FIG. 9 depicts a similar system as described above with respect to FIG. 6-8. In FIG. 9, memory address regions 955 are migrated to be away from the hot spot 960.” E.g. selecting different memory batches based on whether the memory was a “hot spot” or not “hot spot”, or relative to at least each other as thermal threshold. Selecting “hot spot” region for throttling by moving data away). Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Walker et al. (US 20140281311) in view of Mukker et al. (US 20210109587) and Reed et al. (US 7495985). Regarding claim 16, WALKER discloses a system (system in FIGs. 1 and 4, with memory controllers 120, processors 110, and memory system 130), comprising: a processor sub-system (FIG. 1, processors 110); a memory sub-system (FIG. 1, memory system 130), the memory sub-system having a […] memory controller[…] configured to operate a plurality of memories ([0071], “The memory system 630 may include a controller logic layer for example a logic die. 
The logic die may be stacked directly under the memory chips. All or some of the memory controller 120 logic may be implemented in the logic die.” E.g. each memory chip may have its own controller logic. [0020]-[0022], “In some embodiments, the processor 110 or the memory controller 120 may use on-die thermal sensors and thermal models to direct how data is mapped and relocated. In some example embodiments, the processor 110 or the memory controller 120 may use on-die thermal sensors and thermal models to individually adjust memory cell refresh across regions based on their temperature. FIG. 2 is a diagram of a memory system 200 according to various embodiments. The memory system 200 may serve functions of the memory system 130 (FIG. 1). The memory system 200 may comprise memory dies 210 and 220. While two memory dies are illustrated, the memory system 200 may include fewer or more than two memory dies…. The memory system 200 may include independent logic and memory dies, dies stacked via silicon interposers or directly stacked dies ("3D stacking"), or any other arrangement of logic and storage dies. In example embodiments, thermal sensors (TS) may be included in the logic die 230, in one or more memory dies, or in both logic die 230 and memory dies 210 and 220.” E.g. processor 110, memory controller 120, and various control logic dies may distribute or independently control the data mapping and memory regions, WALKER disclosed any number of possible combinations of multiple “control logics”/memory controllers for multiple “memory subsystems”); and a controller configured to manage thermal stress in memories of the memory sub-system via regulation of timing of accesses by the processor sub-system to the memories of the memory sub-system (FIG. 4, [0041]-[0047], “the physical address space may be subdivided into large `uniform regions` relating to the physical placement of the uniform regions. This address translation may occur within a memory controller 120 (FIG. 
1) enhanced with static and runtime thermal information. The same techniques may be used for mapping between non-uniform memory regions such as those with different latencies and bandwidths. In some systems according to example embodiments, memory die may include banks of addresses operating with different latencies, so thermal information according to example embodiments could indicated to the host processor 110 as permanently `cold` regions for allocating the most heavily used data.”. E.g. different physical memory addresses /locations are grouped into different “thermal regions” /memory batches, by the controller, based in part on thermal stress information. [0048], “the memory controller 120 may turn off (e.g., disable) data accesses to certain channels, banks or regions of the memory. The memory controller 120 may migrate data from those memory cells to other memory cells to remove the need to refresh those memory cells and to minimize or reduce power consumption in the region including those memory cells.” E.g. enforcing idle times for different regions/batches, different regions/batches may have different idle times due to local thermal events, and some of the idle times may be alternating for different batches over time. Additionally, migrating data from one region to another region avoids the need to refresh the first region, so idle time is staggered between different regions after data is moved between them, reducing the performance impact of idle times for a pattern of accesses in the first time period; see [0070], “The static information may include a table of absolute or relative energy, power or power density, as a function of request bandwidth, size, page hit rate or other measurable traffic features”). Walker does not expressly teach performing the throttling in an environment having a plurality of controllers operating a plurality of memories. 
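Editor's note: the hot-spot selection the examiner reads onto claims 14-15 above (memories with thermal stress above a threshold in a past period form the throttled batches for the next period) reduces to a simple partition. The sketch below is purely illustrative; the stress metric and all names are hypothetical and do not come from any cited reference.

```python
# Hypothetical sketch: split memory regions into "hot" and "cold" batches
# by comparing each region's measured thermal stress against a threshold.

def partition_by_stress(region_stress, threshold):
    """Split regions into those above and at-or-below a stress threshold.

    Regions above the threshold (the "hot spots") become the batches
    scheduled for idle time or data migration in the next period; the
    remaining regions stay fully active.
    """
    hot = {r for r, s in region_stress.items() if s > threshold}
    cold = set(region_stress) - hot
    return hot, cold

# Example stress readings (arbitrary normalized units) from a past period.
stress = {"bank0": 0.91, "bank1": 0.40, "bank2": 0.77, "bank3": 0.33}
hot, cold = partition_by_stress(stress, threshold=0.75)
# hot  -> {"bank0", "bank2"}: assigned idle time in the next period
# cold -> {"bank1", "bank3"}: excluded from the throttled batches
```

This mirrors the claim-15 mapping of first memories (above threshold) into the next period's batches while the second memories (below threshold) are left out.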
With respect to claim 16, Mukker teaches a system including a plurality of memory controllers (figure 8, items 814, 104) each coupled to a plurality of memory devices (figure 8 shows 814 connected to 826 and 822, and 104 connected to 106 and 108). Paragraph 0037 shows that power management circuitry 204 can send a message to host circuitry (analogous to the claimed controller) to transition power state. As of the earliest priority date of the application, it would have been obvious to combine the plurality of memory controllers of Mukker with the system of Walker. The motivation for doing so would have been to coordinate active and idle periods across all agents in a workload pipeline, see Mukker paragraph 0018. Further motivation comes from In re Harza, 274 F.2d 669, 124 USPQ 378 (CCPA 1960), finding that the duplication of parts, in this case the memory controller and connected memories, has no patentable significance unless a new and unexpected result is produced. WALKER does not explicitly disclose a printed circuit board; a processor sub-system configured on the printed circuit board; a memory sub-system configured on the printed circuit board. REED discloses a printed circuit board; a processor sub-system configured on the printed circuit board; a memory sub-system configured on the printed circuit board (FIG. 3, col. 6 lines 62-65, “FIG. 3 shows a diagram illustrating a top-down view of a typical ATX form factor motherboard with respect to the locations of the CPU, the memory controller, and the system memory 115”. E.g. both processor and memories are configured on a printed circuit board /motherboard). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify WALKER’s processor system with thermal throttling to further include REED’s single printed circuit board system, to allow for better heat dissipation and higher performance (see REED background). 
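Editor's note: the Mukker combination discussed above (a plurality of memory controllers, each coupled to its own memory devices, with a host-level entity coordinating power states) can be sketched as below. This is an illustrative sketch under stated assumptions, not an implementation from Walker or Mukker; every class and method name is hypothetical.

```python
# Hypothetical sketch: a host-level scheduler dispatching requests across
# several memory controllers, skipping any controller that is currently
# in a scheduled idle (throttled) power state.

class MemoryController:
    def __init__(self, name, devices):
        self.name = name
        self.devices = list(devices)
        self.active = True  # power state, toggled by the host

    def set_active(self, active):
        self.active = active

class HostScheduler:
    """Round-robin dispatch that skips controllers in an idle state."""
    def __init__(self, controllers):
        self.controllers = list(controllers)
        self._next = 0

    def dispatch(self, request):
        # Try each controller once, starting from the round-robin cursor.
        for _ in range(len(self.controllers)):
            ctrl = self.controllers[self._next]
            self._next = (self._next + 1) % len(self.controllers)
            if ctrl.active:
                return ctrl.name, request
        return None  # every controller is idling; caller must queue

ctrls = [MemoryController("mc0", ["dimm0", "dimm1"]),
         MemoryController("mc1", ["dimm2", "dimm3"])]
host = HostScheduler(ctrls)
ctrls[0].set_active(False)  # mc0 enters a scheduled idle window
# All requests now route to mc1 until mc0 is reactivated.
```

The duplication-of-parts point from In re Harza corresponds to the fact that adding more `MemoryController` instances changes nothing structural in the scheduler's loop.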
Therefore, it would have been obvious to combine Walker, Mukker, and Reed to obtain the invention as recited in claim 16. Claims 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Walker et al. (US 20140281311) in view of Mukker et al. (US 20210109587) and Reed et al. (US 7495985) and further in view of Turner et al. (US 20240045464). Regarding claim 17, WALKER does not explicitly disclose the system of claim 16, further comprising: a first optical interface circuit coupled to the processor sub-system; a second optical interface circuit coupled to the memory sub-system; and an optical fiber connected between the first optical interface circuit and the second optical interface circuit; wherein the accesses by the processor sub-system are configured to go through the optical fiber. TURNER discloses further comprising: a first optical interface circuit coupled to the processor sub-system (FIG. 1C, first optical interface circuit 110A and 110B, each coupled to a processor 100A and 100B); a second optical interface circuit coupled to the memory sub-system (FIG. 1C, second optical interface circuit 110C and 110D, or FIG. 2A, E/O transceiver 210, each coupled to a memory sub-system, memory controller 106A with its memory units, or memory controller 106B with its memory units); and an optical fiber connected between the first optical interface circuit and the second optical interface circuit (FIG. 1C, optical fibers 112A and 112B between interface 110A and 110C, or between interface 110B and 110D); wherein the accesses by the processor sub-system are configured to go through the optical fiber (FIG. 1C, accesses by the processors go through optical fibers 112A and 112B). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify WALKER’s processor system with thermal throttling with REED’s single printed circuit board system, to further include TURNER’s optical circuit, to provide higher bandwidth and lower latency (see TURNER [0036]). Regarding claim 18, WALKER discloses the system of claim 17, wherein the controller is configured to assign selected memories of the memory sub-system into memory batches and impose idle times for the memory batches in the accesses by the processor sub-system (FIG. 4, [0041]-[0047], “the physical address space may be subdivided into large `uniform regions` relating to the physical placement of the uniform regions. This address translation may occur within a memory controller 120 (FIG. 1) enhanced with static and runtime thermal information. The same techniques may be used for mapping between non-uniform memory regions such as those with different latencies and bandwidths. In some systems according to example embodiments, memory die may include banks of addresses operating with different latencies, so thermal information according to example embodiments could indicated to the host processor 110 as permanently `cold` regions for allocating the most heavily used data.”. E.g. different physical memory addresses /locations are grouped into different “thermal regions” /memory batches, by the controller, based in part on thermal stress information. [0048], “These and other embodiments may reduce or eliminate the occurrence of memory failure or degradation when sudden thermal events do not leave sufficient time for a data migration, or when thermal events are too short for implementation of data migration and re-mapping. 
In some embodiments, the memory controller 120 may turn off (e.g., disable) data accesses to certain channels, banks or regions of the memory.” [0050]-[0051], “Throttling may be performed at different granularities within the memory device. In order of increasing complexity, throttling may be done at the channel or vault level, the rank level, the bank level, the sub-bank level, or the row or column level in a memory device. In some embodiments, the memory controller 120 may perform throttling.” E.g. receive access requests during the second time period, and enforcing idle time by throttling communications of the requests). Regarding claim 19, WALKER discloses the system of claim 18, wherein the controller is to distribute requests for the accesses to the plurality of memory controllers according to a schedule that includes the idle times predetermined for the memory batches ([0048], “These and other embodiments may reduce or eliminate the occurrence of memory failure or degradation when sudden thermal events do not leave sufficient time for a data migration, or when thermal events are too short for implementation of data migration and re-mapping. In some embodiments, the memory controller 120 may turn off (e.g., disable) data accesses to certain channels, banks or regions of the memory.” [0050]-[0051], “Throttling may be performed at different granularities within the memory device. In order of increasing complexity, throttling may be done at the channel or vault level, the rank level, the bank level, the sub-bank level, or the row or column level in a memory device. In some embodiments, the memory controller 120 may perform throttling.” E.g. the controller distributes requests for the accesses to the plurality of memory controllers ([0071], “The memory system 630 may include a controller logic layer for example a logic die. The logic die may be stacked directly under the memory chips. 
All or some of the memory controller 120 logic may be implemented in the logic die.” E.g. each memory chip may have its own controller logic), and enforcing idle time by throttling communications of the requests. The idle times are predetermined, because the memory controller makes the decision to throttle before the throttling). WALKER does not explicitly disclose wherein the controller is coupled to the second optical interface circuit. TURNER discloses wherein the controller is coupled to the second optical interface circuit (FIG. 2A, memory controller 204 is coupled to second optical interface / E/O transceiver 210). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify WALKER’s processor system with thermal throttling with REED’s single printed circuit board system, to further include TURNER’s optical circuit, to provide higher bandwidth and lower latency (see TURNER [0036]). Regarding claim 20, WALKER discloses the system of claim 18, dispatch requests, received from the processor sub-system, to the memory sub-system according to a schedule that includes the idle times predetermined for the memory batches ([0048], “These and other embodiments may reduce or eliminate the occurrence of memory failure or degradation when sudden thermal events do not leave sufficient time for a data migration, or when thermal events are too short for implementation of data migration and re-mapping. In some embodiments, the memory controller 120 may turn off (e.g., disable) data accesses to certain channels, banks or regions of the memory.” [0050]-[0051], “Throttling may be performed at different granularities within the memory device. In order of increasing complexity, throttling may be done at the channel or vault level, the rank level, the bank level, the sub-bank level, or the row or column level in a memory device. In some embodiments, the memory controller 120 may perform throttling.” E.g. 
the memory controller dispatches requests, received from the processor sub-system, to the memory sub-system, and enforcing idle time by throttling communications of the requests. The idle times are predetermined, because the memory controller makes the decision to throttle before the throttling). WALKER does not explicitly disclose wherein the controller is coupled to the first optical interface circuit. TURNER discloses wherein the controller is coupled to the first optical interface circuit (FIG. 2A, memory controller 204 is coupled to FIG. 1C, first optical interface circuit 110A and 110B, via fiber attach 202 and optical channels 112A, 112B). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify WALKER’s processor system with thermal throttling with REED’s single printed circuit board system, to further include TURNER’s optical circuit, to provide higher bandwidth and lower latency (see TURNER [0036]). Response to Arguments Applicant’s arguments, see page 1 of the response filed 12/29/2025, with respect to the rejection of claims 14-15 under 35 USC 112(b) have been fully considered and are persuasive. The rejection of claims 14-15 under 35 USC 112(b) has been withdrawn. Applicant’s arguments, see page 3 of the response filed 12/29/2025, with respect to the rejection(s) of claim(s) 1-5 and 9 under 35 USC 102, and also all other claims under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Mukker, as shown supra. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jared Ian Rutz whose telephone number is (571)272-5535. The examiner can normally be reached Monday-Friday, 8:00 AM to 4:00 PM. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Cottingham can be reached at (571)272-1400. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JARED I RUTZ/Supervisory Patent Examiner, Art Unit 2135

Prosecution Timeline

Apr 30, 2024
Application Filed
May 12, 2025
Non-Final Rejection — §103
Aug 18, 2025
Response Filed
Oct 28, 2025
Final Rejection — §103
Dec 29, 2025
Response after Non-Final Action
Feb 02, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602161
Accelerated Read, Modify, Write Operations
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12596647
CACHE MANAGEMENT USING SHARED CACHE LINE STORAGE
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591392
STORAGE DEVICE UPDATING ATTRIBUTE OF DATA AND OPERATING METHOD OF THE STORAGE DEVICE
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12578715
APPARATUS, METHOD, AND SYSTEM FOR WIDE TO SHORT RANGE WIRELESS COMMUNICATION CONVERSION
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12578712
PROCESSING APPARATUS
Granted Mar 17, 2026 (2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
80%
Grant Probability
86%
With Interview (+6.3%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 315 resolved cases by this examiner. Grant probability derived from career allow rate.
