DETAILED ACTION
This action is responsive to the RCE filed on 11/17/2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/17/2025 has been entered.
Claim Status
Claim 6 is cancelled. Claims 1-5 and 7-18 are amended. Claims 1-5 and 7-18 are pending and have been examined.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 7-15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Shirota et al. (US 20170228012 A1) (hereafter referred to as Shirota) in view of Geml et al. (US 20170269860 A1) (cited by examiner in previous action) (hereafter referred to as Geml).
Regarding Claim 1,
Shirota discloses the following limitations:
A storage device (SoC 210, Fig. 33) to optimize performance and operate under a predefined power ceiling, the storage device comprises:
a controller (Region Controller 110, Fig. 33) to receive an indication of a power mode (“the active region changer 113 instructs the power setter 114 to change power supplied to any one of one or more inactive power supply unit regions” [0070]) – As shown in Fig. 33 and detailed in ¶0070, a region controller 110 directed by an active region changer 113 causes a power setter 114 to make a change in power applied to various memory regions. In this context, examiner considers region controller 110 as “receiv[ing] an indication” (i.e., from an active region changer 113) about a particular “power mode” under which to operate (i.e., about which memory regions should be supplied with power to retain data; see [Abstract])-- … and
a power optimization module (Switching Controller 170, Fig. 33) …, to issue (Fig. 34, step S1302) a first random-access memory (RAM) usage policy (“processing using the internal memory” [0160]) wherein the controller processes host data in an internal RAM (Internal Memory 220, Fig. 32 // “SRAM” [0155]) when using the first RAM usage policy (“the switching controller 170 performs control to switch the first processing to the processing using the internal memory 220 (using the internal memory 220 alone) as the memory used for read/write of the first data by the SoC 210” [0160]) – As shown in Fig. 34 and taught in ¶0160, a switching controller 170 causes processing of reads and writes to be performed solely using an internal memory 220. In this context, examiner considers processing using internal memory 220 alone as “a first random-access memory (RAM) usage policy”--, and
… to issue (Fig. 34, step S1303) a second RAM usage policy (“processing using the first memory 20 and the internal memory 220 in combination” [0160]) wherein the controller processes host data in one of an external RAM (First Memory 20, Fig. 32 // “DRAM” [0154]) and the external RAM and portions of the internal RAM when using the second RAM usage policy (“the switching controller performs control to switch the first processing to the processing using the first memory 20 and the internal memory 220 in combination as the memory used for read/write of the first data by the SoC 210” [0160]) – As shown in Fig. 34 and taught in ¶0160, a switching controller 170 causes processing of reads and writes to be performed using both internal memory 220 and an external first memory 20. In this context, examiner considers processing using both internal memory 220 and first memory 20 as “a second RAM usage policy”--,
Shirota is silent regarding the storage device receiving an indication of a power mode from a host device. In addition, although Shirota ¶0163 discloses that the RAM usage policies depicted in Fig. 34 achieve lower power consumption, Shirota does not explicitly disclose issuing the RAM usage policies of Fig. 34 based on the indicated power mode. Specifically, Shirota does not explicitly disclose the following limitations:
a controller to receive an indication of a power mode from a host device, the power mode is associated with a power ceiling under which the storage device is to operate;
a power optimization module to determine power usage by the storage device,
wherein when the power usage is below a power ceiling threshold, to issue a first … policy
when the power usage is above the power ceiling threshold, to issue a second … policy
However, Geml discloses the following limitations:
a controller (Controller 8, Fig. 2) to receive an indication of a power mode (“a command … to adjust a power consumption target” [0027]) from a host device (Host 4, Fig. 2), the power mode is associated with a power ceiling (“a power consumption target” [0022]) under which the storage device is to operate (“host system 4 may issue a command that instructs the particular storage device of storage devices 6 to adjust a power consumption target of the particular storage device.” [0027] // ¶¶0022; 0047) – As shown in Geml Fig. 2 and taught in ¶0047, a storage controller 8 adjusts power consumption of a storage device by selectively applying power to individual memories within the storage device, similar to how the region controller 110 of Shirota Fig. 33 selectively applies power to individual regions of a first memory. Examiner accordingly considers controller 8 of Geml Fig. 2 as analogous to region controller 110 of Shirota Fig. 33 (i.e., the “controller”). As disclosed in Geml ¶¶0022 and 0027 and shown in Fig. 2, a Host 4 commands storage devices 6 to operate within “power consumption targets”--
a power optimization module (Controller 8, Fig. 2 // ¶0049) to determine (Fig. 4, step 402) power usage by the storage device (“controller 8 of FIGS. 1-3 may perform the techniques of FIG. 4 … obtain power and performance data for each storage device of a plurality of storage devices (402) … respective data indicating … an amount of power being consumed by the respective storage device (e.g., watts).” [0049-50]) – As shown in Fig. 4 step 402, controller 8 receives data indicating an amount of power being consumed (i.e., indicating “power usage”) by storage device 6--,
wherein when the power usage is below a power ceiling threshold (“a performance level envelope” [0051]), to issue (Fig. 4, step 406) a first … policy (“Where the performance level of the particular storage device is not within the performance level envelope (“No” branch of 404), host system 4 may adjust a power consumption level of the particular storage device (406). As an example, where the performance level of the particular storage device is less than the lower performance level threshold, the one or more processors of host system 4 may increase the power consumption level of the particular storage device” [0052]) – As shown in Geml Fig. 4 and detailed in ¶0052, the power consumption level of a storage device can be increased or decreased as needed, similar to how the Shirota regions of first (external) memory are powered on or off as needed. In this context, as would be understood by one of ordinary skill in the art, increasing a power consumption level of a storage device in Geml would be analogous to powering off a region of first (i.e., external) memory 20 in Shirota (i.e., thereby increasing relative power consumption of internal memory 220). As shown in Geml Fig. 4 and taught in ¶0052, when an amount of power being consumed by a storage device is less than the lower threshold of a predetermined envelope, the power consumption level of the storage device is increased--
when the power usage is above the power ceiling threshold, to issue (Fig. 4, step 406) a second … policy (“As another example, where the performance level of the particular storage device is greater than the upper performance level threshold, the one or more processors of host system 4 may decrease the power consumption level of the particular storage device” [0052]) – As taught in Geml ¶0052, when an amount of power being consumed by a storage device is greater than the upper threshold of a predetermined envelope, the power consumption level of the storage device is decreased. In this context, as would be understood by one of ordinary skill in the art, decreasing a power consumption level of a storage device in Geml would be analogous to powering on a region of first (i.e., external) memory 20 in Shirota (i.e., thereby decreasing relative power consumption of internal memory 220).
Shirota and Geml are considered analogous to the claimed invention because both relate to the same field of dynamically adjusting power consumption of storage devices during operation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shirota with the teachings of Geml and realize a storage controller which operates using an internal RAM when storage device power consumption is below a threshold and which operates using both internal and external RAM when storage device power consumption is above a threshold. Doing so would improve the synchronization of a storage environment by establishing uniform power consumption across a plurality of storage devices, resulting in improved stability of a storage environment, as disclosed in Geml ¶¶0016-17: “As discussed above, the power consumption of storage devices may be influenced by a wide variety of factors and may change over time. As a result, the respective efficiencies of each storage device of a plurality of storage devices may not be initially uniform and/or may not uniformly change over time … adjust the power consumption level of the particular storage device in response to determining that the respective performance level of the particular storage device is not within the performance level envelope. In this way, the device may improve the stability of the storage environment (i.e., by improving the synchronization of the plurality of storage devices).”
The combined teachings of Shirota and Geml additionally disclose the following limitations:
wherein as part of the second RAM usage policy (Shirota, Fig. 34, step S1303) the power optimization module instructs the controller to
reduce usage of the internal RAM and increase usage of the external RAM (Shirota, “The power setter 114 … changes power supplied to the region 2 from the second power to the first power (the second state illustrated in FIG. 3) … reduces the occurrence frequency of swapping and also reduces the swapping overhead” [0070] // Fig. 3) – As previously discussed and as shown in Shirota Fig. 3 and detailed in ¶0070, power setter 114 can increase the number of active regions (i.e., “increase usage”) of first memory (i.e., “the external RAM”) in order to reduce swapping overhead by powering on another region of first memory. As further shown in Fig. 34, such a mode of operation is in place of processing exclusively using the internal memory 220 (i.e., according to the first RAM usage policy shown in step S1302). One of ordinary skill in the art would accordingly understand that the second RAM usage policy uses relatively less internal memory 220 as compared to the first RAM usage policy (i.e., “reduce usage of the internal RAM”) --
until a given criterion is met (Geml, “In some examples, after adjusting the power consumption level of the particular storage device or where the performance level of the particular storage device is within the performance level envelope (“Yes” branch of 404), host system 4 may continue to periodically obtain power and performance data for each of the plurality of storage devices” [0053] // Fig. 4) – As shown in Geml Fig. 4 and explicitly taught in ¶0053, storage devices are continually monitored and are accordingly adjusted based on the monitored power consumption data. One of ordinary skill in the art would accordingly understand that the second RAM usage policy (i.e., decreasing a power consumption level during step 406) would be performed until the monitored power consumption is below the performance envelope; after which the first RAM usage policy (i.e., increasing a power consumption level during step 406) would be performed. In such a context, the second RAM usage policy is issued until power consumption is below the performance envelope (i.e., “until a given criterion is met”), after which the first RAM usage policy is issued to increase the power consumption level.
Regarding Claim 2,
The same motivation to combine provided in Claim 1 is equally applicable to Claim 2. The combined teachings of Shirota and Geml disclose the following limitations:
The storage device of claim 1 (see Claim 1 limitation mappings above), further comprising a host interface (Geml, Interface 14, Fig. 2) to receive commands from the host device and provide an indication of a host command to the power optimization module for use in determining the power mode which the storage device is to operate (Geml, “As illustrated in FIG. 1, host system 4 may communicate with storage device 6 via interface 14” [0020]) – As previously discussed (see Claim 1 limitation mappings above) and as disclosed in Geml ¶0052, host 4 commands storage device 6A to increase/decrease respective power consumption. As shown in Geml Fig. 1 and clarified in ¶0020, host 4 interacts with storage device 6A via an interface 14.
Regarding Claim 3,
The same motivation to combine provided in Claim 1 is equally applicable to Claim 3. The combined teachings of Shirota and Geml disclose the following limitations:
The storage device of claim 1 (see Claim 1 limitation mappings above), further comprising a flash translation layer (FTL) module (Geml, Address Translation Module 22, Fig. 3) to process backend operations and store FTL mappings (Geml, “Address translation module 22 of controller 8 may utilize a flash translation layer … address translation module 22 may store the flash translation layer or table in volatile memory 12” [0040]) – As shown in Geml Fig. 3 and disclosed in ¶0040, controller 8 includes a flash translation layer table (i.e., “FTL mappings”) stored in volatile memory 12 (i.e., in the internal RAM)-- according to a RAM usage policy issued by the power optimization module (Shirota, “When the number of active regions is reduced … the mapping changer 115 moves the page mapped to the target active region (the active region to be changed to an inactive region) to another active region … and changes a page table indicating the correspondence between the virtual address specified by the application and the physical address (information indicating the position in memory) in units of pages, together with moving the pages” [0071]) – As clarified in Shirota ¶0071, after a change in a number of active regions (i.e., due to the RAM usage policies issued in Fig. 34), a mapping changer 115 updates page table information (i.e., “FTL mappings”) to reflect the updated usage policy.
Regarding Claim 7,
The same motivation to combine provided in Claim 1 is equally applicable to Claim 7. The combined teachings of Shirota and Geml disclose the following limitations:
The storage device of claim 1, wherein the power mode is associated with a high-performance mode (Geml, “By increasing the power consumption level of the particular storage device, the device may enable the performance of the particular storage device to handle increased workload” [0018]), wherein when the storage device is operating under the high-performance mode, the storage device disregards the power ceiling and operates according to the first RAM usage policy (Geml, “the one or more processor may output a command that causes the particular storage device to increase its power consumption target” [0052] // ¶¶0026-27) – As previously discussed (see Claim 1 limitation mappings above), a host issues the first RAM usage policy by transmitting a command to a storage device to increase a respective power consumption target, thereby improving the performance of the storage device (see Geml ¶0018). In this context, examiner considers increasing the power consumption target of a storage device as the storage device effectively “disregard[ing]” the lower power consumption target in favor of the increased power consumption target.
Regarding Claim 8,
The same motivation to combine provided in Claim 1 is equally applicable to Claim 8. The combined teachings of Shirota and Geml disclose the following limitations:
The storage device of claim 7, wherein the power optimization module determines that the storage device is operating under the high-performance mode based on one of
a host command (Geml, “the one or more processors of host system 4 may increase the power consumption level of the particular storage device. For instance, the one or more processor may output a command that causes the particular storage device to increase its power consumption target” [0052]) and
an indication from an internal mechanism on the storage device (Geml, “computing devices having configurations different than that of host system 4 may perform the techniques of FIG. 4 (e.g., one or more of storage devices 6 or controller 8 of FIGS. 1-3 may perform the techniques of FIG. 4” [0049])
Regarding Claim 9,
The same motivation to combine provided in Claim 1 is equally applicable to Claim 9. The combined teachings of Shirota and Geml disclose the following limitations:
The storage device of claim 1 (see Claim 1 limitation mappings above), wherein based on the second RAM usage policy (Shirota, S1303, Fig. 34) the controller switches off banks in the internal RAM and uses the external RAM proportionately (Shirota, “The first memory 20 includes a plurality of DIMMs … The DIMM includes a plurality of ranks (each rank includes a plurality of banks) … The settings of the low power consumption state can be finely controlled in units of DIMMs, ranks, or banks” [0058] // “power saving control for the first memory 20 as described in the foregoing embodiments may be replaced with power saving control for the internal memory 220” [0164]) – As previously discussed (see Claim 1 limitation mappings above) and as taught in Shirota Fig. 34, the second RAM usage policy causes processing to take place using both (i.e., “proportionately”) first memory 20 (i.e., “the external RAM”) and internal memory 220 (i.e., “the internal RAM”). As clarified in ¶¶0058 and 0164, both first memory 20 (see ¶0058) and internal memory 220 (see ¶0164) are power controlled in units of individual memory banks. One of ordinary skill in the art would accordingly understand that the second RAM usage policy of Shirota Fig. 34 would decrease the usage of internal memory 220 (i.e., “switches off banks in the internal RAM”) and would increase the usage of first memory 20 (i.e., “uses the external RAM proportionately”) relative to the first RAM usage policy (i.e., step S1302 of Shirota Fig. 34). --
to meet a power target specification. (Geml, “a performance level envelope” [0051]) – As previously discussed (see Claim 1 limitation mappings above) and as taught in Geml, increasing and decreasing power levels of storage devices ensures performance within a predetermined “performance level envelope” (i.e., “a power target specification”)--.
Regarding Claim 10,
The same motivation to combine provided in Claim 1 is equally applicable to Claim 10. The combined teachings of Shirota and Geml disclose the following limitations:
The storage device of claim 1 (see Claim 1 limitation mappings above), when one of
the storage device receives a high-performance mode command (Geml, “a command” [0052]) from the host device (Geml, “the one or more processor may output a command that causes the particular storage device to increase its power consumption target” [0052]),
determines that the host device is operating in a high-performance mode,
that there is a drop in cache hits on the external RAM, and
that congestion on a link between the host device and the storage device is above a congestion threshold, --Claim 10 is being interpreted, in view of MPEP 2143.03, as a claim requiring selection of an element from a list of alternatives. The above listed limitations are considered as the list of alternative elements--
the power optimization module issues the first RAM usage policy (Geml, “where the performance level of the particular storage device is less than the lower performance level threshold, the one or more processor host system 4 may increase the power consumption level of the particular storage device. For instance, the one or more processors may output a command that causes the particular storage device to increase its power consumption target” [0052]) – As previously discussed (see Claim 1 limitation mappings above) and as detailed in Geml ¶0052, as part of the first RAM usage policy, storage device 6A receives a command from host 4 which causes the storage device to increase its power consumption target. Examiner considers a command transmitted by a host to a storage device to increase a power consumption target of the storage device as “a high-performance mode command”.
Regarding Claim 11,
Shirota discloses the following limitations:
A method for optimizing performance on a storage device (SoC 210, Fig. 33) and operating under a predefined power ceiling, one or more processors (Processor Core 101, Fig. 33) on the storage device being configured to execute the method comprising:
receiving an indication of a power mode under which the storage device is to operate (“the active region changer 113 instructs the power setter 114 to change power supplied to any one of one or more inactive power supply unit regions” [0070]) – As shown in Fig. 33 and detailed in ¶0070, a region controller 110 directed by an active region changer 113 causes a power setter 114 to make a change in power applied to various memory regions. In this context, examiner considers region controller 110 as “receiving an indication” (i.e., from an active region changer 113) about a particular “power mode” under which to operate (i.e., about which memory regions should be supplied with power to retain data; see [Abstract])--; …
issuing (Fig. 34, step S1302) a first random-access memory (RAM) usage policy (“processing using the internal memory” [0160]) and processing host data using an internal RAM (Internal Memory 220, Fig. 32 // “SRAM” [0155])(“the switching controller 170 performs control to switch the first processing to the processing using the internal memory 220 (using the internal memory 220 alone) as the memory used for read/write of the first data by the SoC 210” [0160]) – As shown in Fig. 34 and taught in ¶0160, a switching controller 170 causes processing of reads and writes to be performed solely using an internal memory 220. In this context, examiner considers processing using internal memory 220 alone as “a first random-access memory (RAM) usage policy”--, and …;
issuing (Fig. 34, step S1303) a second RAM usage policy (“processing using the first memory 20 and the internal memory 220 in combination” [0160]) and processing host data using one of an external RAM (First Memory 20, Fig. 32 // “DRAM” [0154]) and the external RAM and portions of the internal RAM (“the switching controller performs control to switch the first processing to the processing using the first memory 20 and the internal memory 220 in combination as the memory used for read/write of the first data by the SoC 210” [0160]) – As shown in Fig. 34 and taught in ¶0160, a switching controller 170 causes processing of reads and writes to be performed using both internal memory 220 and an external first memory 20. In this context, examiner considers processing using both internal memory 220 and first memory 20 as “a second RAM usage policy”--,
Shirota is silent regarding the storage device receiving an indication of a power mode from a host device. In addition, although Shirota ¶0163 discloses that the RAM usage policies depicted in Fig. 34 achieve lower power consumption, Shirota does not explicitly disclose issuing the RAM usage policies of Fig. 34 based on the indicated power mode. Specifically, Shirota does not explicitly disclose the following limitations:
receiving an indication of a power mode under which the storage device is to operate from a host device
determining a power usage of the storage device; and
when the power usage is below a power ceiling threshold, issuing a first … policy;
when the power usage is above the power ceiling threshold, issuing a second … policy
However, Geml discloses the following limitations:
receiving an indication (“a command … to adjust a power consumption target” [0027]) of a power mode under which the storage device is to operate from a host device (Host 4, Fig. 2)(“host system 4 may issue a command that instructs the particular storage device of storage devices 6 to adjust a power consumption target of the particular storage device.” [0027] // ¶¶0022; 0047) – As shown in Geml Fig. 2 and taught in ¶0047, a storage controller 8 adjusts power consumption of a storage device by selectively applying power to individual memories within the storage device, similar to how the region controller 110 of Shirota Fig. 33 selectively applies power to individual regions of a first memory. Examiner accordingly considers controller 8 of Geml Fig. 2 as analogous to region controller 110 of Shirota Fig. 33. As disclosed in Geml ¶¶0022 and 0027 and shown in Fig. 2, a Host 4 commands storage devices 6 to operate within “power consumption targets”.
determining (Fig. 4, step 402) a power usage by the storage device (“controller 8 of FIGS. 1-3 may perform the techniques of FIG. 4 … obtain power and performance data for each storage device of a plurality of storage devices (402) … respective data indicating … an amount of power being consumed by the respective storage device (e.g., watts).” [0049-50]) – As shown in Fig. 4 step 402, controller 8 receives data indicating an amount of power being consumed (i.e., indicating “power usage”) by storage device 6--; and
when the power usage is below a power ceiling threshold (“a performance level envelope” [0051]), issuing (Fig. 4, step 406) a first … policy (“Where the performance level of the particular storage device is not within the performance level envelope (“No” branch of 404), host system 4 may adjust a power consumption level of the particular storage device (406). As an example, where the performance level of the particular storage device is less than the lower performance level threshold, the one or more processors of host system 4 may increase the power consumption level of the particular storage device” [0052]) – As shown in Geml Fig. 4 and detailed in ¶0052, the power consumption level of a storage device can be increased or decreased as needed, similar to how the Shirota regions of first (external) memory are powered on or off as needed. In this context, as would be understood by one of ordinary skill in the art, increasing a power consumption level of a storage device in Geml would be analogous to powering off a region of first (i.e., external) memory 20 in Shirota (i.e., thereby increasing relative power consumption of internal memory 220). As shown in Geml Fig. 4 and taught in ¶0052, when an amount of power being consumed by a storage device is less than the lower threshold of a predetermined envelope, the power consumption level of the storage device is increased--;
when the power usage is above the power ceiling threshold, issuing (Fig. 4, step 406) a second … policy (“As another example, where the performance level of the particular storage device is greater than the upper performance level threshold, the one or more processors of host system 4 may decrease the power consumption level of the particular storage device” [0052]) – As taught in Geml ¶0052, when an amount of power being consumed by a storage device is greater than the upper threshold of a predetermined envelope, the power consumption level of the storage device is decreased. In this context, as would be understood by one of ordinary skill in the art, decreasing a power consumption level of a storage device in Geml would be analogous to powering on a region of first (i.e., external) memory 20 in Shirota (i.e., thereby decreasing relative power consumption of internal memory 220).
Shirota and Geml are considered analogous to the claimed invention because both relate to the same field of dynamically adjusting power consumption of storage devices during operation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shirota with the teachings of Geml and realize a storage controller which operates using an internal RAM when storage device power consumption is below a threshold and which operates using both internal and external RAM when storage device power consumption is above a threshold. Doing so would improve the synchronization of a storage environment by establishing uniform power consumption across a plurality of storage devices, resulting in improved stability of a storage environment, as disclosed in Geml ¶¶0016-17: “As discussed above, the power consumption of storage devices may be influenced by a wide variety of factors and may change over time. As a result, the respective efficiencies of each storage device of a plurality of storage devices may not be initially uniform and/or may not uniformly change over time … adjust the power consumption level of the particular storage device in response to determining that the respective performance level of the particular storage device is not within the performance level envelope. In this way, the device may improve the stability of the storage environment (i.e., by improving the synchronization of the plurality of storage devices).”
The combined teachings of Shirota and Geml additionally disclose the following limitations:
instructing a controller (Shirota, Region Controller 110, Fig. 33) to reduce usage of the internal RAM and increase usage of the external RAM (Shirota, “The power setter 114 … changes power supplied to the region 2 from the second power to the first power (the second state illustrated in FIG. 3) … reduces the occurrence frequency of swapping and also reduces the swapping overhead” [0070] // Fig. 3) -- As previously discussed and as shown in Shirota Fig. 3 and detailed in ¶0070, power setter 114 can increase the number of active regions (i.e., “increase usage”) of first memory (i.e., “the external RAM”) in order to reduce swapping overhead by powering on another region of first memory. As further shown in Fig. 34, such a mode of operation is in place of processing exclusively using the internal memory 220 (i.e., according to the first RAM usage policy shown in step S1302). One of ordinary skill in the art would accordingly understand that the second RAM usage policy uses relatively less internal memory 220 as compared to the first RAM usage policy (i.e., “reduce usage of the internal RAM”) --
until a given criterion is met as part of the second RAM usage policy (Geml, “In some examples, after adjusting the power consumption level of the particular storage device or where the performance level of the particular storage device is within the performance level envelope (“Yes” branch of 404), host system 4 may continue to periodically obtain power and performance data for each of the plurality of storage devices” [0053] // Fig. 4) – As shown in Geml Fig. 4 and explicitly taught in ¶0053, storage devices are continually monitored and are accordingly adjusted based on the monitored power consumption data. One of ordinary skill in the art would accordingly understand that the second RAM usage policy (i.e., decreasing a power consumption level during step 406) would be performed until the monitored power consumption is below the performance envelope, after which the first RAM usage policy (i.e., increasing a power consumption level during step 406) would be performed. In such a context, the second RAM usage policy is issued until power consumption is below the performance envelope (i.e., “until a given criterion is met”), after which the first RAM usage policy is issued to increase the power consumption level.
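For illustration only, the examiner's reading of the combined references can be sketched as a simple control loop in Python. All names and the threshold value below are hypothetical and form no part of the record; the sketch merely depicts the second RAM usage policy being applied until monitored power falls back inside the envelope (the “given criterion”), after which the first policy is reissued:

```python
# Illustrative sketch only (hypothetical names/values): policy selection
# from the latest monitored power reading, per the Shirota/Geml combination.
POWER_ENVELOPE_WATTS = 5.0  # hypothetical performance-envelope bound

def next_policy(power_usage_watts: float) -> str:
    """Pick the RAM usage policy from the current power reading."""
    if power_usage_watts > POWER_ENVELOPE_WATTS:
        # Above the envelope: second policy (internal + external RAM,
        # reducing relative usage of the internal SRAM).
        return "second_policy"
    # Criterion met (power back inside the envelope): first policy
    # (process host data in the internal SRAM alone).
    return "first_policy"
```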
Regarding Claim 12,
The same motivation to combine provided in Claim 11 is equally applicable to Claim 12. The combined teachings of Shirota and Geml disclose the following limitations:
The method of claim 11, further comprising switching off banks in the internal RAM and using the external RAM proportionately (Shirota, “The first memory 20 includes a plurality of DIMMs … The DIMM includes a plurality of ranks (each rank includes a plurality of banks) … The settings of the low power consumption state can be finely controlled in units of DIMMs, ranks, or banks” [0058] // “power saving control for the first memory 20 as described in the forgoing embodiments may be replaced with power saving control for the internal memory 220” [0164]) – As previously discussed (see Claim 11 limitation mappings above) and as taught in Shirota Fig. 34, the second RAM usage policy causes processing to take place using both (i.e., “proportionately”) first memory 20 (i.e., “the external RAM”) and internal memory 220 (i.e., “the internal RAM”). As clarified in ¶¶0058 and 0164, both first memory 20 (see ¶0058) and internal memory 220 (see ¶0164) are power controlled in the units of individual memory banks. One of ordinary skill in the art would accordingly understand that the second RAM usage policy of Shirota Fig. 34 would decrease the usage of internal memory 220 (i.e., “switches off banks in the internal RAM”) and would increase the usage of first memory 20 (i.e., “uses the external RAM proportionately”) relative to the first RAM usage policy (i.e., step S1302 of Shirota Fig. 34). --
to meet a power target specification (Geml, “a performance level envelope” [0051]) – As previously discussed (see Claim 1 limitation mappings above) and as taught in Geml, increasing and decreasing power levels of storage devices ensures performance within a predetermined “performance level envelope” (i.e., “a power target specification”)--
based on the second RAM usage policy (Shirota, S1303, Fig. 34)
Regarding Claim 13,
Shirota discloses the following limitations:
A method for optimizing performance on a storage device (SoC 210, Fig. 33) and operating under a predefined power ceiling, one or more processors (Processor Core 101, Fig. 33) on the storage device being configured to execute the method comprising:
receiving an indication of a power mode under which the storage device is to operate (“the active region changer 113 instructs the power setter 114 to change power supplied to any one of one or more inactive power supply unit regions” [0070]) – As shown in Fig. 33 and detailed in ¶0070, a region controller 110 directed by an active region changer 113 causes a power setter 114 to make a change in power applied to various memory regions. In this context, examiner considers region controller 110 as “receiving an indication” (i.e., from an active region changer 113) about a particular “power mode” under which to operate (i.e., about which memory regions should be supplied with power to retain data; see [Abstract])--; … and
issuing (Fig. 34, step S1302) a first random-access memory (RAM) usage policy (“processing using the internal memory” [0160]) and processing host data in an internal RAM (Internal Memory 220, Fig. 32 // “SRAM” [0155])(“the switching controller 170 performs control to switch the first processing to the processing using the internal memory 220 (using the internal memory 220 alone) as the memory used for read/write of the first data by the SoC 210” [0160]) – As shown in Fig. 34 and taught in ¶0160, a switching controller 170 causes processing of reads and writes to be performed solely using an internal memory 220. In this context, examiner considers processing using internal memory 220 alone as “a first random-access memory (RAM) usage policy”--,
issuing (Fig. 34, step S1303) a second RAM usage policy (“processing using the first memory 20 and the internal memory 220 in combination” [0160]) and processing host data using one of an external RAM (First Memory 20, Fig. 32 // “DRAM” [0154]) and the external RAM and portions of the internal RAM (“the switching controller performs control to switch the first processing to the processing using the first memory 20 and the internal memory 220 in combination as the memory used for read/write of the first data by the SoC 210” [0160]) – As shown in Fig. 34 and taught in ¶0160, a switching controller 170 causes processing of reads and writes to be performed using both internal memory 220 and an external first memory 20. In this context, examiner considers processing using both internal memory 220 and first memory 20 as “a second RAM usage policy”--,
Shirota is silent regarding the storage device receiving an indication of a power mode from a host device. In addition, although Shirota ¶0163 discloses that the RAM usage policies depicted in Fig. 34 achieve lower power consumption, Shirota does not explicitly disclose issuing the RAM usage policies of Fig. 34 based on the indicated power mode. Specifically, Shirota does not explicitly disclose the following limitations:
receiving an indication of a power mode under which the storage device is to operate from a host device
determining a power usage of the storage device; and
issuing a first … policy when the power usage is below a power ceiling threshold;
issuing a second … policy when the power usage is above the power ceiling threshold,
However, Geml discloses the following limitations:
receiving an indication (“a command … to adjust a power consumption target” [0027]) of a power mode under which the storage device is to operate from a host device (Host 4, Fig. 2)(“host system 4 may issue a command that instructs the particular storage device of storage devices 6 to adjust a power consumption target of the particular storage device.” [0027] // ¶¶0022; 0047) – As shown in Geml Fig. 2 and taught in ¶0047, a storage controller 8 adjusts power consumption of a storage device by selectively applying power to individual memories within the storage device, similar to how the region controller 110 of Shirota Fig. 33 selectively applies power to individual regions of a first memory. Examiner accordingly considers controller 8 of Geml Fig. 2 as analogous to region controller 110 of Shirota Fig. 33. As disclosed in Geml ¶¶0022 and 0027 and shown in Fig. 2, a Host 4 commands storage devices 6 to operate within “power consumption targets”.
determining (Fig. 4, step 402) a power usage of the storage device (“controller 8 of FIGS. 1-3 may perform the techniques of FIG. 4 … obtain power and performance data for each storage device of a plurality of storage devices (402) … respective data indicating … an amount of power being consumed by the respective storage device (e.g., watts).” [0049-50]) – As shown in Fig. 4 step 402, controller 8 receives data indicating an amount of power being consumed (i.e., indicating “power usage”) by storage device 6--
issuing (Fig. 4, step 406) a first … policy when the power usage is below a power ceiling threshold (“a performance level envelope” [0051])(“Where the performance level of the particular storage device is not within the performance level envelope (“No” branch of 404), host system 4 may adjust a power consumption level of the particular storage device (406). As an example, where the performance level of the particular storage device is less than the lower performance level threshold, the one or more processors of host system 4 may increase the power consumption level of the particular storage device” [0052]) – As shown in Geml Fig. 4 and detailed in ¶0052, the power consumption level of a storage device can be increased or decreased as needed, similar to how the Shirota regions of first (external) memory are powered on or are powered off as needed. In this context, as would be understood by one of ordinary skill in the art, increasing a power consumption level of a storage device in Geml would be analogous to powering off a region of first (i.e., external) memory 20 in Shirota (i.e., thereby increasing relative power consumption of internal memory 220). As shown in Geml Fig. 4 and taught in ¶0052, when an amount of power being consumed by a storage device is less than the lower threshold of a predetermined envelope, the power consumption level of the storage device is increased--
issuing (Fig. 4, step 406) a second … policy when the power usage is above the power ceiling threshold (“As another example, where the performance level of the particular storage device is greater than the upper performance level threshold, the one or more processors of host system 4 may decrease the power consumption level of the particular storage device” [0052]) – As taught in Geml ¶0052, when an amount of power being consumed by a storage device is greater than the upper threshold of a predetermined envelope, the power consumption level of the storage device is decreased. In this context, as would be understood by one of ordinary skill in the art, decreasing a power consumption level of a storage device in Geml would be analogous to powering on a region of first (i.e., external) memory 20 in Shirota (i.e., thereby decreasing relative power consumption of internal memory 220).
Shirota and Geml are considered analogous to the claimed invention because they all relate to the same field of dynamically adjusting power consumption of storage devices during operation. Therefore, it would have been obvious for someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shirota with the teachings of Geml and realize a storage controller which operates using an internal RAM when storage device power consumption is below a threshold and which operates using both internal and external RAM when storage device power consumption is above a threshold. Doing so would improve the synchronization of a storage environment by establishing uniform power consumption across a plurality of storage devices, resulting in improved stability of a storage environment, as disclosed in Geml ¶¶0016-17: “As discussed above, the power consumption of storage devices may be influenced by a wide variety of factors and may change over time. As a result, the respective efficiencies of each storage device of a plurality of storage devices may not be initially uniform and/or may not uniformly change over time … adjust the power consumption level of the particular storage device in response to determining that the respective performance level of the particular storage device is not within the performance level envelope. In this way, the device may improve the stability of the storage environment (i.e., by improving the synchronization of the plurality of storage devices).”
The combined teachings of Shirota and Geml additionally disclose the following limitations:
instructing a controller (Shirota, Region Controller 110, Fig. 33) to reduce usage of the internal RAM and increase usage of the external RAM (Shirota, “The power setter 114 … changes power supplied to the region 2 from the second power to the first power (the second state illustrated in FIG. 3) … reduces the occurrence frequency of swapping and also reduces the swapping overhead” [0070] // Fig. 3) -- As previously discussed and as shown in Shirota Fig. 3 and detailed in ¶0070, power setter 114 can increase the number of active regions (i.e., “increase usage”) of first memory (i.e., “the external RAM”) in order to reduce swapping overhead by powering on another region of first memory. As further shown in Fig. 34, such a mode of operation is in place of processing exclusively using the internal memory 220 (i.e., according to the first RAM usage policy shown in step S1302). One of ordinary skill in the art would accordingly understand that the second RAM usage policy uses relatively less internal memory 220 as compared to the first RAM usage policy (i.e., “reduce usage of the internal RAM”) --
until a given criterion is met as part of the second RAM usage policy (Geml, “In some examples, after adjusting the power consumption level of the particular storage device or where the performance level of the particular storage device is within the performance level envelope (“Yes” branch of 404), host system 4 may continue to periodically obtain power and performance data for each of the plurality of storage devices” [0053] // Fig. 4) – As shown in Geml Fig. 4 and explicitly taught in ¶0053, storage devices are continually monitored and are accordingly adjusted based on the monitored power consumption data. One of ordinary skill in the art would accordingly understand that the second RAM usage policy (i.e., decreasing a power consumption level during step 406) would be performed until the monitored power consumption is below the performance envelope, after which the first RAM usage policy (i.e., increasing a power consumption level during step 406) would be performed. In such a context, the second RAM usage policy is issued until power consumption is below the performance envelope (i.e., “until a given criterion is met”), after which the first RAM usage policy is issued to increase the power consumption level.
and
when operating under the second RAM usage policy, one of
i) receiving a high-performance mode command (Geml, “a command” [0052]) from the host device (Geml, “the one or more processor may output a command that causes the particular storage device to increase its power consumption target” [0052]),
ii) determining that the host device is operating in a high-performance mode,
iii) determining that there is a drop in cache hits on the external RAM, and
iv) determining that a congestion level on a link between the host device and the storage device is above a congestion threshold, -- Claim 13 is being interpreted, in view of MPEP 2143.03, as a claim requiring selection of an element from a list of alternatives. The above listed limitations are considered as the list of alternative elements--
and issuing the first RAM usage policy to use the internal RAM in processing host data (Geml, “where the performance level of the particular storage device is less than the lower performance level threshold, the one or more processor host system 4 may increase the power consumption level of the particular storage device. For instance, the one or more processors may output a command that causes the particular storage device to increase its power consumption target” [0052]) – As shown in Geml Fig. 4 and detailed in ¶0053, after commanding a storage device to decrease its power consumption target (i.e., when operating under “the second RAM usage policy”), storage device power consumption is continually monitored (step 402) and a policy is issued (steps 404-406) based on the monitoring result. One of ordinary skill in the art would accordingly understand that a storage device would receive a command to increase its power consumption target (i.e., to operate under “the first RAM usage policy”) after having previously received a command to decrease its power consumption target. As previously discussed (see limitation mappings above) and as detailed in Geml ¶0052, as part of the first RAM usage policy, storage device 6A receives a command from host 4 which causes the storage device to increase its power consumption target. Examiner considers a command transmitted by a host to a storage device to increase a power consumption target of the storage device as “a high-performance mode command”.
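For illustration only, the MPEP 2143.03 alternative-element reading applied above can be sketched as follows. The function and its parameter names are hypothetical and form no part of the record; the sketch merely depicts that, while the second RAM usage policy is in force, satisfaction of any one of the four recited alternatives triggers issuance of the first RAM usage policy:

```python
# Illustrative sketch only (hypothetical names): claim 13 recites a list of
# alternatives, any one of which triggers reissuing the first policy.
def should_issue_first_policy(high_perf_command: bool,
                              host_high_perf_mode: bool,
                              cache_hit_drop: bool,
                              link_congested: bool) -> bool:
    """True when any recited alternative condition is met."""
    return any((high_perf_command, host_high_perf_mode,
                cache_hit_drop, link_congested))
```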
Regarding Claim 14,
The same motivation to combine provided in Claim 13 is equally applicable to Claim 14. The combined teachings of Shirota and Geml disclose the following limitations:
The method of claim 13, further comprising receiving commands (Geml, “the one or more processors of host system 4 may … output a command” [0052]) from the host device and using an indication of a host command in determining the power mode under which the storage device is to operate (Geml, ¶0052) – As previously discussed (see Claim 13 limitation mappings above) and as detailed in Geml ¶0052, host 4 commands storage device 6A to decrease its respective power consumption target and to increase its respective power consumption target.
Regarding Claim 15,
The same motivation to combine provided in Claim 13 is equally applicable to Claim 15. The combined teachings of Shirota and Geml disclose the following limitations:
The method of claim 13, further comprising processing backend operations and storing mappings (Geml, “Address translation module 22 of controller 8 may utilize a flash translation layer … address translation module 22 may store the flash translation layer or table in volatile memory 12” [0040]) – As shown in Geml Fig. 3 and disclosed in ¶0040, controller 8 includes a flash translation layer table (i.e., “FTL mappings”) stored in volatile memory 12 (i.e., in the internal RAM)-- according to a RAM usage policy (Shirota, “When the number of active regions is reduced … the mapping changer 115 moves the page mapped to the target active region (the active region to be changed to an inactive region) to another active region … and changes a page table indicating the correspondence between the virtual address specified by the application and the physical address (information indicating the position in memory) in units of pages, together with moving the pages” [0071]) – As clarified in Shirota ¶0071, after a change in a number of active regions (i.e., due to the RAM usage policies issued in Fig. 34), a mapping changer 115 updates page table information (i.e., “FTL mappings”) to reflect the updated usage policy.
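For illustration only, the page remapping relied upon from Shirota ¶0071 can be sketched as follows. The data structure and names are hypothetical and form no part of the record; the sketch merely depicts moving every page mapped to a region being deactivated onto a remaining active region and updating the mapping table accordingly:

```python
# Illustrative sketch only (hypothetical structures): when an active region
# is deactivated under a RAM usage policy, pages mapped to it are moved to
# a remaining active region and the page table is updated.
def deactivate_region(page_table: dict, target: str, fallback: str) -> dict:
    """Remap every page in `target` to `fallback`; return the new table."""
    return {page: (fallback if region == target else region)
            for page, region in page_table.items()}
```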
Regarding Claim 18,
The same motivation to combine provided in Claim 13 is equally applicable to Claim 18. The combined teachings of Shirota and Geml disclose the following limitations:
The method of claim 13, wherein when the storage device is operating under the high-performance mode, the method further comprises disregarding the power ceiling and operating according to the first RAM usage policy (Geml, “By increasing the power consumption level of the particular storage device, the device may enable the performance of the particular storage device to handle increased workload” [0018] // “the one or more processor may output a command that causes the particular storage device to increase its power consumption target” [0052] // ¶¶0026-27) – As previously discussed (see Claim 13 limitation mappings above), a host issues the first RAM usage policy by transmitting a command to a storage device to increase a respective power consumption target, thereby improving the performance of the storage device (see Geml ¶0018). In this context, examiner considers increasing the power consumption target of a storage device as the storage device effectively “disregard[ing]” the lower power consumption target in favor of the increased power consumption target.
Claims 4 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Shirota further in view of Geml and Liu (US PGPUB No. 20180181302 A1)(cited by examiner in previous action)(hereafter referred to as Liu).
Regarding Claim 4,
The same motivation to combine provided in Claim 1 is equally applicable to Claim 4. The combined teachings of Shirota and Geml disclose the following limitations:
The storage device of claim 1 (see Claim 1 limitation mappings above),
Although Shirota Fig. 33 and ¶0073 disclose a monitor 111 which measures values including “data transfer time” and “OS processing time” associated with the storage device, the combined teachings of Shirota and Geml do not explicitly disclose the following limitations:
further comprising a traffic monitor to monitor traffic between the host device and the storage device and provide updates on traffic flow to the power optimization module for use in monitoring a congestion level on a link between the host device and the storage device.
However, Liu discloses, within the context of a data storage device (Data Storage Device 120, Fig. 1) controller (Controller 130, Fig. 1) receiving commands from a host (Host System 110, Fig. 1), that a traffic monitor provides the controller with I/O statistics used to adjust operation of the storage device.
Liu discloses the following limitations:
a traffic monitor (I/O Traffic Monitor 132, Fig. 1) to monitor traffic between the host device (Host System 110, Fig. 1) and the storage device (Data Storage Device 120, Fig. 1) and provide updates on traffic flow to the power optimization module (Controller 130, Fig. 1) for use in monitoring a congestion level (“I/O traffic states” [0020]) on a link (Host Interface 160, Fig. 1 // ¶0017) between the host device and the storage device (“The controller 130 includes an I/O traffic monitor module 132 configured to manage, and respond to, I/O traffic profile changes in the data storage device 120. For example, the I/O traffic monitor may be configured to determine various I/O traffic states associated with the data storage device according to host I/O profiler 131” [0020] // “In certain embodiments, the process 200 involves taking a comprehensive assessment of the I/O traffic environment on the data storage device, and controlling one or more functionalities (e.g., firmware operations) based thereon” [0028] // Fig. 2 // ¶¶0026-31) – In this case, Host System 110, Data Storage Device 120, and Controller 130 of Liu Fig. 1 are considered analogous to Host 4, Storage Device 6A, and Controller 8 of Geml Fig. 2, respectively.
Shirota, Geml, and Liu are all considered to be analogous to the claimed invention because they all relate to the same field of real-time performance adjustment of a storage device based on commands received from a host device. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shirota and Geml with the teachings of Liu and realize a storage device controller comprising a traffic monitor which monitors a congestion level on a link between the storage device and a host device. Doing so would enable improved spatial locality of write operations executed on the storage device based on I/O information provided by the traffic monitor, resulting in improved quality of service of the data storage device at least with respect to latency and throughput, as disclosed in Liu ¶0024: “the controller 130 may utilize the I/O traffic monitor 132 to improve the quality of service of the data storage device with respect to latency, throughput, variation of certain I/O profile throughput, and/or the like … The I/O profile information provided by the host I/O profiler 131 … may be used to intelligently control spatial locality of write operations executed in the non-volatile memory array 140.”
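For illustration only, the traffic-monitoring function relied upon from Liu can be sketched as follows. The threshold value and all names are hypothetical and form no part of the record; the sketch merely depicts classifying the congestion level of the host link from monitored I/O traffic:

```python
# Illustrative sketch only (hypothetical names/units): a traffic monitor
# reporting link congestion by comparing outstanding host I/O against a
# threshold, in the manner Liu's I/O traffic monitor is relied upon.
CONGESTION_THRESHOLD = 32  # hypothetical outstanding-command limit

def congestion_level(outstanding_cmds: int) -> str:
    """Classify the host link from the count of in-flight commands."""
    return "congested" if outstanding_cmds > CONGESTION_THRESHOLD else "normal"
```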
Regarding Claim 16,
The same motivation to combine provided in Claim 13 is equally applicable to Claim 16. The combined teachings of Shirota and Geml disclose the following limitations:
The method of claim 13 (see Claim 13 limitation mappings above)
Although Shirota Fig. 33 and ¶0073 disclose a monitor 111 which measures values including “data transfer time” and “OS processing time” associated with the storage device, the combined teachings of Shirota and Geml do not explicitly disclose the following limitations:
further comprising monitoring traffic between the host device and the storage device and using updates on traffic flow to monitor the congestion level on the link between the host device and the storage device.
However, Liu discloses, within the context of a data storage device (Data Storage Device 120, Fig. 1) controller (Controller 130, Fig. 1) receiving commands from a host (Host System 110, Fig. 1), that a traffic monitor provides the controller with I/O statistics used to adjust operation of the storage device.
Liu discloses the following limitations:
monitoring traffic between the host device (Host System 110, Fig. 1) and the storage device (Data Storage Device 120, Fig. 1) and using updates on traffic flow to monitor the congestion level (“I/O traffic states” [0020]) on the link (Host Interface 160, Fig. 1 // ¶0017) between the host device and the storage device (“The controller 130 includes an I/O traffic monitor module 132 configured to manage, and respond to, I/O traffic profile changes in the data storage device 120. For example, the I/O traffic monitor may be configured to determine various I/O traffic states associated with the data storage device according to host I/O profiler 131” [0020] // “In certain embodiments, the process 200 involves taking a comprehensive assessment of the I/O traffic environment on the data storage device, and controlling one or more functionalities (e.g., firmware operations) based thereon” [0028] // Fig. 2 // ¶¶0026-31) – In this case, Host System 110, Data Storage Device 120, and Controller 130 of Liu Fig. 1 are considered analogous to Host 4, Storage Device 6A, and Controller 8 of Geml Fig. 2, respectively.
Shirota, Geml, and Liu are all considered to be analogous to the claimed invention because they all relate to the same field of real-time performance adjustment of a storage device based on commands received from a host device. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shirota and Geml with the teachings of Liu and realize a storage device controller comprising a traffic monitor which monitors a congestion level on a link between the storage device and a host device. Doing so would enable improved spatial locality of write operations executed on the storage device based on I/O information provided by the traffic monitor, resulting in improved quality of service of the data storage device at least with respect to latency and throughput, as disclosed in Liu ¶0024: “the controller 130 may utilize the I/O traffic monitor 132 to improve the quality of service of the data storage device with respect to latency, throughput, variation of certain I/O profile throughput, and/or the like … The I/O profile information provided by the host I/O profiler 131 … may be used to intelligently control spatial locality of write operations executed in the non-volatile memory array 140.”
Claims 5 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Shirota further in view of Geml and Byun (US PGPUB No. 20200364157 A1)(cited by examiner in previous action)(hereafter referred to as Byun).
Regarding Claim 5,
The same motivation to combine provided in Claim 1 is equally applicable to Claim 5. The combined teachings of Shirota and Geml disclose the following limitations:
The storage device of claim 1 (see Claim 1 limitation mappings above),
Although Shirota Fig. 33 and ¶0071 disclose a mapping changer 115 which swaps pages between memories, the combined teachings of Shirota and Geml do not explicitly disclose the following limitations:
further comprising a cache hit monitor to track how many pages are swapped in and out of the external RAM and provide an indication of a swap in/swap out rate to the power optimization module for use in monitoring the swap in/swap out rate on the external RAM
However, Byun discloses, within the context of a data storage device (Memory System 110, Fig. 2) controller (Memory Controller 130, Fig. 2) receiving commands from a host (Host 102, Fig. 2), that a cache hit monitor tracks both how many pages are swapped in and out of an external RAM (Host Cache 106, Fig. 2) and the rate at which pages are swapped in and out of the external RAM.
Byun discloses the following limitations:
further comprising a cache hit monitor (Map Management Table 198, Fig. 2) to track how many pages (“memory map segments” [0041]) are swapped in and out of the external RAM (Host Cache 106, Fig. 2)(“The provision count indicates the total number of memory map segments provided to the host 102. The map management data 198 may further include … the provision count” [0044]) and provide an indication of a swap in/swap out rate (“a miss count MISS_CNT” [0044]) to the power optimization module (Memory Controller 130, Fig. 2 // ¶0049) for use in monitoring the swap in/swap out rate on the external RAM (“The memory system 110 may store map management data 198 in order to selectively provide the host 102 with one or more memory map segments” [0041] // “In accordance with an embodiment, the memory system 110 may adjust … according to a set condition in order to increase the map cache hits probability of host 102 … based on a miss count MISS_CNT … The miss count indicates the total number of map cache misses that have occurred in the host 102” [0044] // “When a map cache miss occurs … The MM 44 may provide the map segment stored in the memory device 150 to the host cache 106” [0056]) – In this case, Host 102, Storage System 110, and Memory Controller 130 of Byun Fig. 2 are considered analogous to Host 4, Storage Device 6A, and Controller 8 of Geml Fig. 2, respectively. As disclosed in Byun ¶0056, whenever a host cache (i.e., an “external RAM”) experiences a “miss”, a map manager (MM) 44 swaps a map segment from the memory system back into the host cache. As further detailed in ¶0044, memory system 110 tracks the “miss count” (i.e., the number of times a miss has occurred in the host) for each map segment using Map Management Table 198 (see also Fig. 1). Accordingly, the “miss count” associated with a map segment is considered as “an indication of a swap in/swap out rate” of an external (e.g., a host) RAM.
Shirota, Geml, and Byun are all considered to be analogous to the claimed invention because they all relate to the same field of adjusting performance of a storage device based on commands received from a host device. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined teachings of Shirota and Geml with the teachings of Byun and realize a storage device which tracks pages swapped in and out of an external RAM and which provides to a controller an indication of a swap in/swap out rate. Doing so would enable a controller to provide an external RAM with memory map segments having a relatively lower read count, thereby decreasing the map cache miss probability in the external RAM and improving read performance of the memory system overall, as disclosed in Byun ¶0094: “In accordance with an embodiment, when there are many misses, reflected by a high miss count, and few memory map segments that have been provided, i.e., a low provision count, the MM 44 may … provide the host 102 with memory map segments having a slightly lower read count. When the memory map segments having a slightly lower read count is provided to the host 102, the map cache miss probability of the host 102 is reduced, so that it is possible to improve the read performance of the memory system 110.”
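For illustration only, the per-segment miss counting relied upon from Byun ¶¶0044 and 0056 can be sketched as follows. The class and names are hypothetical and form no part of the record; the sketch merely depicts that each host-cache miss both triggers a swap of the segment back into the host cache and increments a counter serving as the swap in/swap out indication:

```python
# Illustrative sketch only (hypothetical structure): per-segment miss
# counting in the manner of Byun's map management data 198.
from collections import Counter

class MissCounter:
    def __init__(self) -> None:
        self.miss_count: Counter = Counter()  # segment -> MISS_CNT

    def record_miss(self, segment: str) -> int:
        """Count a host-cache miss for `segment`; return its running total."""
        self.miss_count[segment] += 1
        return self.miss_count[segment]
```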
Regarding Claim 17,
The same motivation to combine provided in Claim 13 is equally applicable to Claim 17. The combined teachings of Shirota and Geml disclose the following limitations:
The method of claim 13 (see Claim 13 limitation mappings above)
Although Shirota Fig. 33 and ¶0071 disclose a mapping changer 115 which swaps pages between memories, the combined teachings of Shirota and Geml do not explicitly disclose the following limitations:
further comprising tracking how many pages are swapped in and out of the external RAM and using an indication of a swap in/swap out rate to monitor the swap in/swap out rate on the external RAM
However, Byun discloses within the context of a data storage device (Memory System 110, Fig. 2) controller (Memory Controller 130, Fig. 2) receiving commands from a host (Host 102, Fig. 2) that a cache hit monitor tracks both how many pages are swapped in and out of an external RAM (Host Cache 106, Fig. 2) and the rate at which pages are swapped in and out of the external RAM.
Byun discloses the following limitations:
tracking how many pages (“memory map segments” [0041]) are swapped in and out of the external RAM (Host Cache 106, Fig. 2)(“The provision count indicates the total number of memory map segments provided to the host 102. The map management data 198 may further include … the provision count” [0044]) and using an indication of a swap in/swap out rate (“a miss count MISS_CNT” [0044]) to monitor the swap in/swap out rate on the external RAM (“The memory system 110 may store map management data 198 in order to selectively provide the host 102 with one or more memory map segments” [0041] // “In accordance with an embodiment, the memory system 110 may adjust … according to a set condition in order to increase the map cache hits probability of host 102 … based on a miss count MISS_CNT … The miss count indicates the total number of map cache misses that have occurred in the host 102” [0044] // “When a map cache miss occurs … The MM 44 may provide the map segment stored in the memory device 150 to the host cache 106” [0056]) – In this case, Host 102, Memory System 110, and Memory Controller 130 of Byun Fig. 2 are considered analogous to Host 4, Storage Device 6A, and Controller 8 of Geml Fig. 2, respectively. As disclosed in ¶0056, whenever a host cache (i.e., an “external RAM”) experiences a “miss”, a map manager (MM) 44 swaps a map segment from the memory system back into the host cache. As further detailed in ¶0044, memory system 110 tracks the “miss count” (i.e., the number of times a miss has occurred in the host) for each map segment using Map Management Table 198 (see also Fig. 1). Accordingly, the “miss count” associated with a map segment is considered “an indication of a swap in/swap out rate” of an external (e.g., a host) RAM.
Shirota, Geml, and Byun are all considered to be analogous art to the claimed invention because they all relate to the same field of adjusting performance of a storage device based on commands received from a host device. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined teachings of Shirota and Geml with the teachings of Byun and realize a storage device which tracks pages swapped in and out of an external RAM and which provides to a controller an indication of a swap in/swap out rate. Doing so would enable a controller to provide an external RAM with memory map segments having a relatively lower read count, thereby decreasing the map cache miss probability in the external RAM and improving read performance of the memory system overall, as disclosed in Byun ¶0094: “In accordance with an embodiment, when there are many misses, reflected by a high miss count, and few memory map segments that have been provided, i.e., a low provision count, the MM 44 may … provide the host 102 with memory map segments having a slightly lower read count. When the memory map segments having a slightly lower read count is provided to the host 102, the map cache miss probability of the host 102 is reduced, so that it is possible to improve the read performance of the memory system 110.”
Response to Arguments
The previous 35 U.S.C. 112(d) rejection of Claim 6 is withdrawn.
Applicant’s arguments with respect to claims 1-5 and 7-18 have been considered but are moot in view of the newly identified Shirota reference because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
With respect to applicant’s argument located within the final paragraph of the 3rd page of remarks (numbered as page 10) continuing to the 4th page of remarks (numbered as page 11), which recites:
“Claim 1 has been amended, in part, to recite a power optimization module to determine power usage by the storage device, wherein when the power usage is below a power ceiling threshold, to issue a first random-access memory (RAM) usage policy wherein the controller processes host data in an internal RAM when using the first RAM usage policy, and when the power usage is above the power ceiling threshold, to issue a second RAM usage policy wherein the controller processes host data in one of an external RAM and the external RAM and portions of the internal RAM when using the second RAM usage policy, wherein as part of the second RAM usage policy the power optimization module instructs the controller to reduce usage of the internal RAM and increase usage of the external RAM until a given criterion is met.
Claims 11 and 13 have been amended to recite similar features.
Applicant respectfully submits that neither Therene and/or Khatib and Geml, Motoyama and/or Therene, either alone or in combination, teach or suggest the claim features defined in independent Claims 1, 11, and 13, as amended. Liu and Byun fail to cure the noted deficiencies of Therene and/or Khatib and Geml, Motoyama and/or Therene. Thus, Applicant respectfully submits that independent claims 1, 11, and 13 are allowable for at least these reasons. Claims 1- 5, 7-10, 12, and 14-18 which depend on claim 1, 11, and 13, respectively, are also patentable over the prior art at least by virtue of their dependence. Therefore, it is respectfully requested that the rejections of claims 1-5 and 7-18 be reconsidered and withdrawn for at least these reasons.”
Examiner has fully considered the aforementioned argument but does not find it persuasive. In particular, examiner notes that the aforementioned argument is moot in view of the newly identified Shirota reference. Examiner additionally notes that the outstanding prior art rejections do not rely on Geml for any teaching which is expressly challenged in the aforementioned argument. See 35 U.S.C. 103 rejections above for additional details.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Kurita et al. (US 20220300172 A1) – Discloses a method of controlling power modes of a storage device based on a calculated power consumption (see Figs. 19 + 20)
Swami et al. (US 20220075536 A1) – Discloses a method of controlling a DRAM device in a “suspend” mode of operation (see Fig. 3) by selectively powering off memory banks (see ¶0063)
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIAN SCOTT MENDEL whose telephone number is (703)756-1608. The examiner can normally be reached M-F 10am - 4pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rocío del Mar Pérez-Vélez can be reached at 571-270-5935. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.S.M./Examiner, Art Unit 2133
/ROCIO DEL MAR PEREZ-VELEZ/Supervisory Patent Examiner, Art Unit 2133