DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s Remarks/Arguments filed on December 29, 2025, have been carefully considered.
Claims 1, 7, and 10 have been amended.
No claims have been canceled or added.
Claims 1-22 are currently pending in the instant application.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 7, and 10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claims 1, 7, and 10, the examiner has determined the claims are drawn to updating a portion of the address map in the host with the data that is in the active portion of the map currently stored in the memory system. The examiner has determined from the claims that the address map is split into portions, with one portion being stored in the host memory and the other portion being stored in the volatile DRAM, as seen in paragraph 0035 of the specification. The specification also states clearly in paragraph 0026 that at least a portion of an address map is cached in the host memory system. The examiner points to figure 2, which shows host address map cache 127 and active portion address map 125.
Thus, the examiner has interpreted the claims to mean that the full L2P mapping would be split between the two storage areas. As a simple example, an L2P mapping table with 200 entries would have entries 1-100 assigned to the host memory, while entries 101-200 are kept in the volatile DRAM as the active portion. If the controller is to update the mappings in the host system with the changes or updates that occur in the active portion, the examiner fails to understand how the entries are updated when there are no corresponding entries in that table portion. For example, how does an update to entry 198 get propagated into the host memory portion when that portion does not contain a listing for entry 198, since entry 198 is not part of the portion that was kept in the host memory?
The claims are clear that only portions of the map are stored at each location, not that the full mapping is available in the host cache memory. The specification implies that the full mapping is stored in the non-volatile storage (the SSD), yet the full mapping is not what is updated per the current claim limitations. Therefore, it is unclear how the mapping is updated. For these reasons, the claim limitations are considered indefinite.
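The examiner's 200-entry illustration can be reduced to a short sketch. The dictionaries, names, and update function below are invented for illustration only (they are not language from the claims or specification); the sketch shows that an update occurring in the active portion has no corresponding slot in the host-memory portion to receive it.

```python
# Minimal sketch of the split described above; all names and values are
# illustrative only. Entries 1-100 of a 200-entry L2P table are cached in
# host memory; entries 101-200 stay in the device's volatile DRAM as the
# active portion.
HOST_PORTION = {lba: f"phys_{lba}" for lba in range(1, 101)}      # host memory cache
ACTIVE_PORTION = {lba: f"phys_{lba}" for lba in range(101, 201)}  # device DRAM

def propagate_update(lba, new_phys):
    """Record an update in the active portion, then try to mirror it into
    the host-memory portion of the map."""
    ACTIVE_PORTION[lba] = new_phys
    if lba in HOST_PORTION:        # never true for LBAs 101-200
        HOST_PORTION[lba] = new_phys
        return True
    return False                   # e.g. LBA 198 has no host-side slot to update

# An update to entry 198 cannot land in the host portion:
assert propagate_update(198, "phys_198_new") is False
```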
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA /25, or PTO/AIA /26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3, 5, 6, and 12 of U.S. Patent No. 11,966,329. Although the claims at issue are not identical, they are not patentably distinct from each other.
Claim 1 of U.S. Patent 11,966,329 contains every element of claims 1-5, 10-14, 16, and 20 of the instant application and as such anticipates those claims of the instant application.
Claims 1 and 5 of U.S. Patent 11,966,329 contain every element of claim 17 of the instant application and as such anticipate claim 17 of the instant application.
Claims 1 and 12 of U.S. Patent 11,966,329 contain every element of claim 7 of the instant application and as such anticipate claim 7 of the instant application.
Claim 2 of U.S. Patent 11,966,329 contains every element of claim 15 of the instant application and as such anticipates claim 15 of the instant application.
Claim 3 of U.S. Patent 11,966,329 contains every element of claim 19 of the instant application and as such anticipates claim 19 of the instant application.
Claim 6 of U.S. Patent 11,966,329 contains every element of claim 18 of the instant application and as such anticipates claim 18 of the instant application.
Claim 12 of U.S. Patent 11,966,329 contains every element of claim 9 of the instant application and as such anticipates claim 9 of the instant application.
“A later patent claim is not patentably distinct from an earlier patent claim if the later claim is obvious over, or anticipated by, the earlier claim. In re Longi, 759 F.2d at 896, 225 USPQ at 651 (affirming a holding of obviousness-type double patenting because the claims at issue were obvious over claims in four prior art patents); In re Berg, 140 F.3d at 1437, 46 USPQ2d at 1233 (Fed. Cir. 1998) (affirming a holding of obviousness-type double patenting where a patent application claim to a genus is anticipated by a patent claim to a species within that genus).” Eli Lilly and Company v. Barr Laboratories, Inc., United States Court of Appeals for the Federal Circuit, on petition for rehearing en banc (decided May 30, 2001).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-6, 10-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hahn et al. [US2019/0138220], hereinafter Hahn2 for consistency across the parent application, in view of Hahn et al. [US10,268,584], hereinafter Hahn1 for consistency across the parent application, further in view of Song et al. [US9,213,632]. Hahn2 teaches adaptive device quality of service by host memory buffer range. Hahn1 teaches adaptive host memory buffer (HMB) caching using unassisted hinting. Song teaches systems and methods for a data storage device to use external resources.
Regarding claim 1, Hahn2 teaches an apparatus [Hahn2 paragraph 0017, first lines “…a non-volatile memory system is disclosed…”] comprising:
a host system configured to communicate with a memory system, the host system [Hahn2 paragraph 0015, middle lines “…The controller is configured to store and retrieve data from a host memory buffer on a host in communication with the non-volatile memory system…”] further comprising:
host memory [Hahn2 paragraph 0041, first lines “…the physical memory on the host 212, such as RAM 216 on the host 212…”]; and
an address map cache of the host memory [Hahn2 paragraph 0029, first lines “…The RAM 116 in the NVM system 100, whether outside the controller 102, inside the controller or present both outside and inside the controller 102, may contain a number of items, including a copy of one or more pieces of the logical-to-physical mapping tables for the NVM system…”];
wherein the host memory is accessible to a controller of the memory system [Hahn2 paragraph 0020, first lines “…a flash memory controller is a device that manages data stored on flash memory and communicates with a host, such as a computer or electronic device…”];
wherein the controller is configured to store a first portion of an address map in the memory system and a second portion of the address map in the address map cache of the host memory [Hahn2 paragraph 0041, middle lines “…The portion of the RAM 216 allocated for the host memory buffer (HMB) 218 under control of the NVM system 100 may have multiple different regions each containing a different type or types of data than each other region. These regions may include a flash translation layer (FTL) mapping region 222 containing logical-to-physical mapping data for the NVM system 100…” and paragraph 0047, all lines “…Depending on the amount of mapping table information maintained in the FTL mappings region 222 in the HMB 218 and the need to swap in a different portion of the mapping table information into the HMB 218, for example if the NVM system 100 uses the HMB 218 for mapping information storage and the mapping being searched for is not currently in the HMB 218, the mapping information may need to be swapped into the HMB 218 (at 618). When this swapping in of mapping information is needed, the controller 102 will need to make a PCIe access to update the mapping information (e.g. the FTL mappings region 222 of the HMB 218) (at 622)…”].
Hahn2 fails to explicitly teach such that the first portion is in the memory system while the second portion is in the host memory.
However, Hahn1 does teach such that the first portion is in the memory system while the second portion is in the host memory [Figure 3, feature 102 “primary FTL cache (SRAM)” and feature 104, “Secondary FTL cache”].
Hahn2 and Hahn1 are both analogous arts in that they deal with improving address translation in memory systems.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hahn2’s swapping of portions of addresses from HMB to RAM with Hahn1’s teachings that portions of addresses come from the memory system and go to the HMB and vice versa, for the benefit of reducing latency by storing address translation information in the faster HMB cache [Hahn1 column 2, lines 44-47 “…The adaptive HMB caching module utilizes the hints to determine how to cache FTL data in the HMB and on the storage device to reduce latency in future accesses…”].
Hahn2 and Hahn1 fail to explicitly teach wherein the second portion of the address map in the host memory is updated according to an active portion of the first portion of the address map in the memory system.
However, Song does teach wherein the second portion of the address map in the host memory is updated according to an active portion of the first portion of the address map in the memory system [Song column 3, lines 54-56 “…The address-mapping data (e.g., FTL metadata) on the host memory 306 may be updated during data storage operations…” and column 5, lines 8-14 “…At 606, part of the address-mapping data can be transferred back to the data storage device on demand for data storage operations. For example, when the non-volatile memory of the data storage device is to be accessed upon a request, part of the address-mapping data associated with the request can be transferred to a volatile memory of the data storage device for processing…”(The examiner has determined that Song’s broad teaching of updating the address-mapping on the host memory during data storage operations would read on the active portion updating the mapping in the host memory.)].
Hahn2, Hahn1, and Song are all analogous arts in that they deal with improving address translation in a memory system.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hahn2 and Hahn1 with Song’s use of host memory to hold mapping data for the benefit of increasing the storage device performance and lifespan [Song column 3, lines 5-20 “…keeping the entire FTL metadata on the non-volatile memory 106 for access may have negative effects on the performance of the data storage device 100, and also may cause wear problems of the non-volatile memory 106 and reduce the lifespan of the data storage device 100…”].
Hahn2 teaches wherein the address map defines logical addresses in terms of physical addresses of memory units in the memory system [Hahn2 paragraph 0020, middle lines “…the flash memory controller can convert the logical address received from the host to a physical address in the flash memory…”].
Regarding claim 2, Hahn2 teaches the controller is configured to process requests from the host system to store data in the memory system or retrieve data from the memory system [Hahn2 paragraph 0020, middle lines “…when a host needs to read data from or write data to the flash memory, it will communicate with the flash memory controller…”].
Regarding claim 3, Hahn2 teaches the controller is configured to, in response to an operation that uses a logical address defined in the second portion of the address map, retrieve at least part of the second portion of the address map from the host system [Hahn2 paragraph 0041, middle lines “…The portion of the RAM 216 allocated for the host memory buffer (HMB) 218 under control of the NVM system 100 may have multiple different regions each containing a different type or types of data than each other region. These regions may include a flash translation layer (FTL) mapping region 222 containing logical-to-physical mapping data for the NVM system 100…” and paragraph 0047, all lines “…Depending on the amount of mapping table information maintained in the FTL mappings region 222 in the HMB 218 and the need to swap in a different portion of the mapping table information into the HMB 218, for example if the NVM system 100 uses the HMB 218 for mapping information storage and the mapping being searched for is not currently in the HMB 218, the mapping information may need to be swapped into the HMB 218 (at 618). When this swapping in of mapping information is needed, the controller 102 will need to make a PCIe access to update the mapping information (e.g. the FTL mappings region 222 of the HMB 218) (at 622)…”].
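The swap-in behavior quoted above from Hahn2 paragraph 0047 can be sketched roughly as follows. The chunk size, table contents, and function names here are assumptions made for illustration, not details from Hahn2, and the PCIe transfer itself is not modeled.

```python
# Rough model of swapping mapping information into the HMB on a miss, per the
# behavior quoted from Hahn2 paragraph 0047. Chunk size and data are invented
# for illustration.
CHUNK = 64                                            # entries per swapped chunk

device_map = {lba: lba + 1000 for lba in range(256)}  # full L2P map on the device
hmb_region = {}                                       # FTL mappings region in the HMB

def lookup(lba):
    if lba not in hmb_region:                  # mapping not currently in the HMB
        base = (lba // CHUNK) * CHUNK
        for k in range(base, base + CHUNK):    # swap the containing chunk in
            if k in device_map:
                hmb_region[k] = device_map[k]
    return hmb_region[lba]

assert lookup(130) == 1130    # miss pulls the 128-191 chunk into the HMB
assert 131 in hmb_region      # neighboring entries arrive with the chunk
```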
Regarding claim 4, Hahn2 teaches the host memory comprises random access memory [Hahn2 paragraph 0041, first lines “…the physical memory on the host 212, such as RAM 216 on the host 212…”].
Regarding claim 5, Hahn2 fails to explicitly teach wherein a portion of the random access memory of the host system where the second portion of the address map is stored is identified to the memory system during a powering up setup operation.
However, Hahn1 does teach a portion of the random access memory of the host system where the second portion of the address map is stored is identified to the memory system during a powering up setup operation [Hahn1 column 5, lines 46-51 “…at startup to initialize FTL caches 102 and 104 are illustrated. Referring to FIG. 4B, in step 500, on startup of the host system, storage device 200 is initialized. In step 502, the primary and secondary FTL caches are populated with frequently read data, such as data that is frequently read on boot up…”].
Hahn2 and Hahn1 are analogous arts in that they both deal with improving memory access by using the host memory buffer.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hahn2’s use of address map swapping with Hahn1’s teachings of initializing address maps at startup for the benefit of reducing future access latency by using adaptive HMB caching of FTL data [Hahn1 column 2, lines 32-36 “…adaptive HMB caching of FTL data using hints derived from accesses to a storage device and from file system metadata and for caching the FTL data in a manner that reduces latency in future FTL access…”].
Regarding claim 6, Hahn2 teaches the power up setup operation comprises a basic input/output system (BIOS) setup [Hahn2 paragraph 0028, first lines “…A read only memory (ROM) 118 stores system boot code….”(The examiner has determined that a BIOS is a ROM that stores boot code, and thus a ROM storing boot code reads on a BIOS when given its broadest reasonable interpretation (BRI).)].
Regarding claim 10, Hahn2 teaches a memory system [Hahn2 paragraph 0017, first lines “…a non-volatile memory system is disclosed…”], comprising:
non-volatile media having a quantity of memory units [Hahn2 paragraph 0039, first lines “…The non-volatile flash memory array 142 in the non-volatile memory 104 may be arranged in blocks of memory cells…”];
a volatile memory that stores a first portion of an address map [Hahn2 paragraph 0029, first lines “…The RAM 116 in the NVM system 100, whether outside the controller 102, inside the controller or present both outside and inside the controller 102, may contain a number of items, including a copy of one or more pieces of the logical-to-physical mapping tables for the NVM system…”], the address map defining logical addresses in terms of physical addresses of the memory units in the non-volatile media [Hahn2 paragraph 0020, middle lines “…the flash memory controller can convert the logical address received from the host to a physical address in the flash memory…”];
Hahn2 fails to explicitly teach a cache manager.
However, Hahn1 does teach a cache manager [Hahn1 column 4, lines 3-13 “…An address translation module 207 translates from the address space by the host to the address space used by storage device 200 to access nonvolatile storage 208…address translation module 207 may translate between the logical address space and the physical address space using FTL data stored in HMB 204, storage device SRAM and/or nonvolatile storage 208…”].
Hahn2 and Hahn1 are both analogous arts in that they deal with improving address translation in memory systems.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hahn2’s swapping of portions of addresses from HMB to RAM with Hahn1’s teachings that portions of addresses come from the memory system and go to the HMB and vice versa, for the benefit of reducing latency by storing address translation information in the faster HMB cache [Hahn1 column 2, lines 44-47 “…The adaptive HMB caching module utilizes the hints to determine how to cache FTL data in the HMB and on the storage device to reduce latency in future accesses…”].
a controller configured to process requests from a host system to store data in the non-volatile media or retrieve data from the non-volatile media [Hahn2 paragraph 0020, middle lines “…when a host needs to read data from or write data to the flash memory, it will communicate with the flash memory controller…”], the host system having a memory connected to the memory system via a communication channel [Hahn2 paragraph 0046, last lines “…if the NVM system 100 used the HMB 218 on the host 212 to store the mapping, then a PCIe access is used to retrieve the mapping information from the HMB 218…”(Where PCIe is the communication channel.)];
wherein the cache manager [Hahn1 column 2, lines 44-47 “…The adaptive HMB caching module utilizes the hints to determine how to cache FTL data in the HMB and on the storage device to reduce latency in future accesses…”] stores a second portion of the address map in the memory of the host system and in response to an operation that uses a logical address defined in the second portion, retrieves the second portion of the address map from the memory of the host system through the communication channel to the volatile memory of the memory system [Hahn1 column 2, lines 48-53 “…an adaptive HMB caching module according to the subject matter described herein may maintain a tiered structure where portions of FTL data are stored in the HMB cache and other portions are stored in primary storage on the storage device and in nonvolatile storage on the storage device…”] and [Hahn2 paragraph 0041, middle lines “…The portion of the RAM 216 allocated for the host memory buffer (HMB) 218 under control of the NVM system 100 may have multiple different regions each containing a different type or types of data than each other region. These regions may include a flash translation layer (FTL) mapping region 222 containing logical-to-physical mapping data for the NVM system 100…” and paragraph 0047, all lines “…Depending on the amount of mapping table information maintained in the FTL mappings region 222 in the HMB 218 and the need to swap in a different portion of the mapping table information into the HMB 218, for example if the NVM system 100 uses the HMB 218 for mapping information storage and the mapping being searched for is not currently in the HMB 218, the mapping information may need to be swapped into the HMB 218 (at 618). When this swapping in of mapping information is needed, the controller 102 will need to make a PCIe access to update the mapping information (e.g. the FTL mappings region 222 of the HMB 218) (at 622)…”(The examiner has determined that swapping portions of the page table may change the terminology of what is considered first and second portions. The examiner believes that any teachings of first and second can be swapped and would still render the claims obvious over the prior art.)].
Hahn2 and Hahn1 fail to explicitly teach wherein the cache manager is configured to update the second portion of the address map in the host memory according to an active portion of the first portion of the address map in the memory system.
However, Song does teach wherein the cache manager is configured to update the second portion of the address map in the host memory according to an active portion of the first portion of the address map in the memory system [Song column 3, lines 54-56 “…The address-mapping data (e.g., FTL metadata) on the host memory 306 may be updated during data storage operations…” and column 5, lines 8-14 “…At 606, part of the address-mapping data can be transferred back to the data storage device on demand for data storage operations. For example, when the non-volatile memory of the data storage device is to be accessed upon a request, part of the address-mapping data associated with the request can be transferred to a volatile memory of the data storage device for processing…”(The examiner has determined that Song’s broad teaching of updating the address-mapping on the host memory during data storage operations would read on the active portion updating the mapping in the host memory.)].
Hahn2, Hahn1, and Song are all analogous arts in that they deal with improving address translation in a memory system.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hahn2 and Hahn1 with Song’s use of host memory to hold mapping data for the benefit of increasing the storage device performance and lifespan [Song column 3, lines 5-20 “…keeping the entire FTL metadata on the non-volatile memory 106 for access may have negative effects on the performance of the data storage device 100, and also may cause wear problems of the non-volatile memory 106 and reduce the lifespan of the data storage device 100…”].
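The tiered structure quoted from Hahn1 (portions of FTL data in the HMB cache, other portions in primary storage and in nonvolatile storage on the device) can be sketched as an illustrative lookup. The tier names and the promotion policy below are assumptions for this sketch, not Hahn1's implementation.

```python
# Illustrative tiered translation in the spirit of the tiering quoted from
# Hahn1. Tier names, contents, and the promote-on-miss policy are invented
# for this sketch.
primary_sram = {}                                        # fastest on-device tier
hmb_cache = {}                                           # host memory buffer tier
nonvolatile = {lba: lba + 5000 for lba in range(1024)}   # full map, slowest tier

def translate(lba):
    for tier in (primary_sram, hmb_cache):   # check the fast tiers first
        if lba in tier:
            return tier[lba]
    phys = nonvolatile[lba]                  # fall through to nonvolatile storage
    hmb_cache[lba] = phys                    # promote for faster future lookups
    return phys

assert translate(7) == 5007
assert 7 in hmb_cache      # promoted into the HMB tier after the first lookup
```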
Regarding claim 11, as per claim 10, Hahn2 teaches the non-volatile media includes a flash memory [Hahn2 paragraph 0039, first lines “…The non-volatile flash memory array 142 in the non-volatile memory 104 may be arranged in blocks of memory cells…”].
Regarding claim 12, as per claim 10, Hahn2 teaches the memory system is a solid-state drive [Hahn2 paragraph 0001, first lines “…Storage systems, such as solid state drives (SSDs) including NAND flash memory…”].
Regarding claim 13, as per claim 10, Hahn2 teaches in response to the operation that uses a logical address defined in the second portion, the cache manager stores the first portion of the address map from the volatile memory of the memory system through the communication channel into the memory of the host system [Hahn2 paragraph 0029, first lines “…The RAM 116 in the NVM system 100, whether outside the controller 102, inside the controller or present both outside and inside the controller 102, may contain a number of items, including a copy of one or more pieces of the logical-to-physical mapping tables for the NVM system…” and paragraph 0047, all lines “…Depending on the amount of mapping table information maintained in the FTL mappings region 222 in the HMB 218 and the need to swap in a different portion of the mapping table information into the HMB 218, for example if the NVM system 100 uses the HMB 218 for mapping information storage and the mapping being searched for is not currently in the HMB 218, the mapping information may need to be swapped into the HMB 218 (at 618). When this swapping in of mapping information is needed, the controller 102 will need to make a PCIe access to update the mapping information (e.g. the FTL mappings region 222 of the HMB 218) (at 622)…”].
Regarding claim 14, as per claim 10, Hahn2 and Hahn1 fail to explicitly teach in response to a request to shut down the memory system, the cache manager stores in the non-volatile media portions of the address map in the memory of the host system and in the volatile memory of the memory system.
However, Song does teach in response to a request to shut down the memory system, the cache manager stores in the non-volatile media portions of the address map in the memory of the host system and in the volatile memory of the memory system [Song column 3, lines 32-35 “…transfers part of the address-mapping data back to the data storage device 308 on demand…” and column 3, lines 54-57 “…The address-mapping data (e.g., FTL metadata) on the host memory 306 may be updated during data storage operations, and stored to the non-volatile memory 314 upon system shut-down…”].
Hahn2, Hahn1 and Song are all analogous arts in that they are related to storing mapping data in host memory for use in a memory system.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hahn2 and Hahn1 with Song’s use of host memory to hold mapping data for the benefit of increasing the storage device performance and lifespan [Song column 3, lines 5-20 “…keeping the entire FTL metadata on the non-volatile memory 106 for access may have negative effects on the performance of the data storage device 100, and also may cause wear problems of the non-volatile memory 106 and reduce the lifespan of the data storage device 100…”].
Regarding claim 15, as per claim 10, Hahn2 teaches wherein the first portion of the address map is updated during at least write operations made using logical addresses defined in the first portion of the address map [Hahn2 paragraph 0047, last lines “…When this swapping in of mapping information is needed, the controller 102 will need to make a PCIe access to update the mapping information (e.g. the FTL mappings region 222 of the HMB 218) (at 622)…”].
Regarding claim 16, as per claim 10, Hahn2 and Song fail to explicitly teach during powering up the memory system, the cache manager copies the first portion of the address map from the non-volatile media to the volatile memory and the second portion of the address map from the non-volatile media to the memory of the host system.
However, Hahn1 teaches during powering up the memory system, the cache manager copies the first portion of the address map from the non-volatile media to the volatile memory and the second portion of the address map from the non-volatile media to the memory of the host system [Hahn1 column 5, lines 46-51 “…at startup to initialize FTL caches 102 and 104 are illustrated. Referring to FIG. 4B, in step 500, on startup of the host system, storage device 200 is initialized. In step 502, the primary and secondary FTL caches are populated with frequently read data, such as data that is frequently read on boot up…”].
Hahn2 and Hahn1 are analogous arts in that they both deal with improving memory access by using the host memory buffer.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hahn2’s use of address map swapping with Hahn1’s teachings of initializing address maps at startup for the benefit of reducing future access latency by using adaptive HMB caching of FTL data [Hahn1 column 2, lines 32-36 “…adaptive HMB caching of FTL data using hints derived from accesses to a storage device and from file system metadata and for caching the FTL data in a manner that reduces latency in future FTL access…”].
Regarding claim 18, as per claim 10, Hahn2 teaches the memory of the host system includes message queues for communications between the host system and the memory system [Hahn2 paragraph 0041, middle lines “…such as general data buffers 228 for temporary storage of host data being written to the NVM system 100 (for example data associated with a host write command)…” and paragraph 0046, first lines “…The queue may be designated the submission queue (SQ) in some implementations…” and paragraph 0041, middle lines “…such as general data buffers 228 for temporary storage of…data retrieved from storage locations on the NVM system (for example data accessed based on a host read command)…” and paragraph 0048, middle lines “…then the data read from the non-volatile memory 104 is transferred to the appropriate data buffer in the data buffer region 228 of host RAM 216 and the controller 102 signals completion of the read to the host 212 (at 626, 628)…”].
Regarding claim 19, as per claim 10, Hahn2 teaches the host system and the memory system communicate over the communication channel in accordance with a communication protocol for peripheral component interconnect express bus [Hahn2 paragraph 0047, all lines “…When this swapping in of mapping information is needed, the controller 102 will need to make a PCIe access to update the mapping information (e.g. the FTL mappings region 222 of the HMB 218) (at 622)…”].
Regarding claim 20, as per claim 10, Hahn2 teaches the memory of the host system is accessible to the controller at a speed greater than accessing the non-volatile media [Hahn2 figure 5, feature 508 “Time”, “HMB 1 µsec”].
Claims 7-9 and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Hahn et al. [US2019/0138220], hereinafter Hahn2 for consistency across the parent application, in view of Hahn et al. [US2019/0294350], hereinafter Hahn3, in view of Benisty et al. [US2017/0322897], further in view of Song et al. [US9,213,632]. Hahn2 teaches adaptive device quality of service by host memory buffer range. Hahn3 teaches dynamic host memory allocation to a memory controller. Benisty teaches systems and methods for processing a submission queue. Song teaches systems and methods for a data storage device to use external resources.
Regarding claim 7, Hahn2 teaches an apparatus [Hahn2 paragraph 0017, first lines “…a non-volatile memory system is disclosed…”] comprising:
a host system configured to communicate with a memory system, the host system [Hahn2 paragraph 0015, middle lines “…The controller is configured to store and retrieve data from a host memory buffer on a host in communication with the non-volatile memory system…”] further comprising:
host memory [Hahn2 paragraph 0041, first lines “…the physical memory on the host 212, such as RAM 216 on the host 212…”];
an address map cache of the host memory [Hahn2 paragraph 0029, first lines “…The RAM 116 in the NVM system 100, whether outside the controller 102, inside the controller or present both outside and inside the controller 102, may contain a number of items, including a copy of one or more pieces of the logical-to-physical mapping tables for the NVM system…”];
Hahn2 fails to explicitly teach an admin submission queue.
However, Hahn3 does teach an admin submission queue [Hahn3 paragraph 0049, all lines “…the host system 140 may use host memory 160 to store admin submission queues 152, admin completion queues 154, command submission queues (SQs) 162 and command completion queues (CQs) 164. The admin submission queues 152 and admin completion queues 154 may be used to control and manage the memory controller 122. In one embodiment, the admin submission queues 152 and admin completion queues 154 are NVMe admin submission queues and admin completion queues…”].
Hahn2 and Hahn3 are analogous arts in that they both deal with improving memory efficiency.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hahn2’s swapping of portions of addresses from HMB to RAM with Hahn3’s teachings of admin submission queues for the benefit of increasing memory efficiency [Hahn3 paragraph 0020, last lines “…dynamically allocates host memory to the non-volatile memory controller during runtime make more efficient use of host memory…”].
a submission queue different from the admin submission queue [Hahn2 paragraph 0041, middle lines “…such as general data buffers 228 for temporary storage of host data being written to the NVM system 100 (for example data associated with a host write command)…” and paragraph 0046, first lines “…The queue may be designated the submission queue (SQ) in some implementations…”]; and
Hahn2 and Hahn3 fail to explicitly teach a completion queue.
However, Benisty does teach a completion queue [Benisty paragraph 0031, first lines “…A particular completion queue may correspond to a circular buffer with a fixed slot size used by the controller 102 to post status for completed commands.…”].
Hahn2, Hahn3 and Benisty are analogous arts in that they all deal with memory systems that use queues to improve performance.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hahn2 and Hahn3 with Benisty’s completion queues being made of circular buffers with fixed-size slots for the benefit of reducing delay due to not having enough space in the completion queue [Benisty paragraph 0019, last lines “…The data storage device may thus reduce (e.g., eliminate) delay associated with being unable to report the completed command due to the corresponding CQ not having space to store the CQ entry…”].
wherein the host memory is accessible to a controller of the memory system [Hahn2 paragraph 0020, first lines “…a flash memory controller is a device that manages data stored on flash memory and communicates with a host, such as a computer or electronic device…”];
wherein the controller is configured to store a portion of an address map in the address map cache of the host memory [Hahn2 paragraph 0041, middle lines “…The portion of the RAM 216 allocated for the host memory buffer (HMB) 218 under control of the NVM system 100 may have multiple different regions each containing a different type or types of data than each other region. These regions may include a flash translation layer (FTL) mapping region 222 containing logical-to-physical mapping data for the NVM system 100…” and paragraph 0047, all lines “…Depending on the amount of mapping table information maintained in the FTL mappings region 222 in the HMB 218 and the need to swap in a different portion of the mapping table information into the HMB 218, for example if the NVM system 100 uses the HMB 218 for mapping information storage and the mapping being searched for is not currently in the HMB 218, the mapping information may need to be swapped into the HMB 218 (at 618). When this swapping in of mapping information is needed, the controller 102 will need to make a PCIe access to update the mapping information (e.g. the FTL mappings region 222 of the HMB 218) (at 622)…”];
Hahn2, Hahn3, and Benisty fail to explicitly teach wherein the controller is configured to update the portion of the address map in the host memory according to an active portion of the address map in the memory system.
However, Song does teach wherein the controller is configured to update the portion of the address map in the host memory according to an active portion of the address map in the memory system [Song column 3, lines 54-56 “…The address-mapping data (e.g., FTL metadata) on the host memory 306 may be updated during data storage operations…” and column 5, lines 8-14 “…At 606, part of the address-mapping data can be transferred back to the data storage device on demand for data storage operations. For example, when the non-volatile memory of the data storage device is to be accessed upon a request, part of the address-mapping data associated with the request can be transferred to a volatile memory of the data storage device for processing…”(The examiner has determined that Song’s broad teaching of updating the address-mapping on the host memory during data storage operations would read on the active portion updating the mapping in the host memory.)].
Hahn2, Hahn3, Benisty and Song are analogous arts in that they deal with improving address translation in a memory system.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hahn2, Hahn3, and Benisty with Song’s use of host memory to hold mapping data for the benefit of increasing the storage device performance and lifespan [Song column 3, lines 5-20 “…keeping the entire FTL metadata on the non-volatile memory 106 for access may have negative effects on the performance of the data storage device 100, and also may cause wear problems of the non-volatile memory 106 and reduce the lifespan of the data storage device 100…”].
wherein the host system is configured to submit requests to the memory system via the submission queue [Hahn2 paragraph 0041, middle lines “…such as general data buffers 228 for temporary storage of host data being written to the NVM system 100 (for example data associated with a host write command)…” and paragraph 0046, first lines “…The queue may be designated the submission queue (SQ) in some implementations…”]; and
wherein the host system is configured to receive responses from the storage system via the completion queue [Hahn2 paragraph 0041, middle lines “…such as general data buffers 228 for temporary storage of…data retrieved from storage locations on the NVM system (for example data accessed based on a host read command)…” and paragraph 0048, middle lines “…then the data read from the non-volatile memory 104 is transferred to the appropriate data buffer in the data buffer region 228 of host RAM 216 and the controller 102 signals completion of the read to the host 212 (at 626, 628)…”].
Regarding claim 8, as per claim 7, Benisty does teach at least one of the submission queue or the completion queue comprises a circular buffer with a fixed slot size [Benisty paragraph 0030, first lines “…A particular submission queue may correspond to a circular buffer with a fixed slot size that the access device 130 uses to submit commands for execution by the controller 102…”].
Regarding claim 9, as per claim 7, Hahn2 teaches the controller is configured to, in response to a request received from the host system via the submission queue [Hahn2 paragraph 0046, first lines “…When the READ command is available for the NVM system 100 to execute…”], retrieve data from the memory system based on the portion of the address map in the address map cache of the host memory [Hahn2 paragraph 0046, most lines “…the controller 102 will first fetch the command from the queue/PRP region 230 in host RAM 216 (at 602). The queue may be designated the submission queue (SQ) in some implementations. The controller 102 will then decode and parse the command (at 604). If the command includes a physical region page (PRP) list, then the controller 102 will fetch the PRP list from the queue/PRP region 230 in host RAM 216 (at 606, 610). If no PRP list is included, then the controller 102 determines the location of the current logical-to-physical mapping information for logical addresses provided in the READ command (at 608). If the mapping information is already in NVM system RAM 116 it is retrieved from there (at 612)…”].
Regarding claim 21, Benisty teaches wherein the submission queue comprises a plurality of submission queues [Benisty figure 2, feature 0-7 and paragraph 0020, last lines “…The one or more host buffers 108 may store one or more submission queues (SQs) 150 and one or more completion queues (CQs) 152. The SQs 150, the CQs 152, or a combination thereof, may correspond to a non-volatile memory express (NVMe) protocol…”].
Regarding claim 22, Hahn3 teaches wherein the admin submission queue is configured to receive commands comprising at least one of commands to manage namespaces, commands to attach namespaces, commands to create input/output submission or completion queues, commands to delete input/output submission or completion queues, or commands for firmware management [Hahn3 paragraph 0066, all lines “…The host queue manager 246 is also configured to fetch and parse commands from the admin submission queue 152. In one embodiment, one of the commands that is fetched is for the memory controller to provide the host system 140 with an asynchronous event notification. The asynchronous event notification may be used to request a dynamic change in the size of the HMB 170. In one embodiment, the asynchronous event notification command that is fetched from the admin submission queue 152 is compliant with an NVMe Asynchronous Event Request. In one embodiment, the host queue manager 246 post to the admin completion queue 154 in order to inform the host of the asynchronous event. For example, the host queue manager 246 may post a response to the admin completion queue 154 to trigger a request a dynamic change in the size of the HMB 170…”].
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Hahn et al. [US2019/0138220], hereinafter Hahn2 for consistency across the parent application, in view of Hahn [US10,268,584], hereinafter Hahn1 for consistency across the parent application, in view of Song et al. [US9,213,632], further in view of Salihun [System address map initialization in x86/x64 architecture part 1: PCI-based systems]. Hahn2 teaches adaptive device quality of service by host memory buffer range. Hahn1 teaches adaptive host memory buffer (HMB) caching using unassisted hinting. Song teaches systems and methods for data storage devices to use external resources. Salihun teaches PCI boot processes that map system memory addresses.
Regarding claim 17, as per claim 10, Hahn2, Hahn1, and Song fail to explicitly teach the memory of the host system is identified to the memory system during the powering up via a base address register.
However, Salihun does teach the memory of the host system is identified to the memory system during the powering up via a base address register [Salihun page 2, heading “The boot process at a glance”, first lines “This section explains the boot process in sufficient detail to understand the system address map and other bus protocol-related matters that are explained later in this article”. And page 3, 4th bullet point, “Chipset initialization. In this step the chipset registers are initialized, particularly the chipset base address register (BAR). We’ll have a look deeper into BAR later. For the time being, it’s sufficient that you know BAR controls how the chip registers and memory (if the device has its own memory) are mapped to the system address map”, and page 15, section “PCI bus base address registers initialization”, page 16, 2nd paragraph “Figure 7 shows the BAR format for the BAR that maps to CPU memory space. This article deals with this type of BAR because the focus is on the system address map, particularly the system memory map” and last paragraph of page 17, “The size of the memory range required by a PCI device is calculated from the number of writeable bits in the base address bits part of the BAR”]. The examiner has determined from Salihun’s teachings that it was well known to identify the host system memory accessible by the PCI device during the powering up process using the base address register. The examiner suggests applicant also consider the additional teachings in part 2 of this NPL titled “System address map initialization in x86/x64 architecture part 2: PCI express-based systems”.
Hahn2, Hahn1, Song and Salihun are all analogous arts in that they are related to mapping host memory for use in a memory system.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hahn2, Hahn1, and Song, with Salihun’s teachings of accessing a BAR during boot-up to map host memory for the benefit of improving the overall PCI device memory read speed by prefetching the BAR that maps to CPU memory space [Salihun page 16, 2nd to last paragraph, last lines “…this feature is used to improve the overall PCI device memory read speed…”].
Response to Arguments
Applicant's arguments filed on December 29th, 2025, have been fully considered but they are not persuasive.
The examiner maintains that Song does teach the updating of the mapping as required by the new amendments. The examiner has also pointed out the ambiguity in understanding the new claims; thus, the examiner has rejected the amended claims as being indefinite and suggests the applicant amend the claims to clarify the limitations.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC CARDWELL whose telephone number is (571)270-1379. The examiner can normally be reached Monday-Friday, 10am-6pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald Bragdon can be reached on (571) 272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ERIC CARDWELL/Primary Examiner, Art Unit 2139