DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 7-12, 16 and 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Asano et al. (US 2018/0260334, hereinafter Asano) and Geml et al. (US 2018/0004559, hereinafter Geml) in view of Kanno (US 2017/0262228, hereinafter Kanno).
Per claim 1, Asano discloses: a host interface; (fig. 1, comp. 120) a controller; (fig. 1, comp. 130) non-volatile storage media; (fig. 1, comp. 140) and firmware; (fig. 1, comp. 110) and wherein the computer system is configured to limit access to the storage device by the respective account to logical addresses that are in the namespace and that are within a range from zero to a maximum logical address (¶0038-¶0039; LBAs for a given namespace are restricted to a range 0 to N−1, where N is the size of the namespace defined at the time it is created; ¶0041; SSD controller 130 first converts the namespace-based address to a linear, internal address, termed a logical cluster address (LCA), using the NSID and uses the LCA as an index to a logical-to-physical lookup table. Within the linear address space that is associated with the NSID, the namespaces are arrayed in a back-to-back manner, so that the NSID corresponding to one namespace are adjacent to the NSID corresponding to the subsequent namespace. This effectively converts the namespace-based address space into an address space that includes a single set of numbers that begin at 0 and increase to a maximum number. The use of the NSID allows for efficient indexing of a logical-to-physical conversion table; the examiner notes that the limiting of access to the storage device is merely a result of the logical address allocated to the respective namespace)
Asano discloses a plurality of namespaces and NSIDs but does not specifically disclose: a host configured to operate a plurality of accounts assigned to users; and wherein each respective account in the plurality of accounts has a namespace identifier that identifies allocation of a portion of the non-volatile storage media to the respective account.
However, Geml discloses: a host configured to operate a plurality of accounts assigned to users; (¶0011; storage device may be logically divided into one or more namespaces. The host device may include multiple users and may control which namespaces of the data storage device each user may access) and wherein each respective account in the plurality of accounts has a namespace identifier that identifies allocation of a portion of the non-volatile storage media to the respective account (fig. 3, ¶0011; The host device may include multiple users and may control which namespaces of the data storage device each user may access…. a request from a particular VM for a set of namespaces identifiers corresponding to a set of namespaces associated with the storage device).
It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to combine the teachings of Asano with the namespace access control of Geml because it prevents unauthorized namespace accesses. Geml improves the security of the storage device (¶0011; By sending only the namespace identifiers of the namespaces that the particular VM is authorized to access, the VMM may hide other namespaces from the VM. In this way, the VMM may help prevent the particular VM from accessing the namespaces that the particular VM is not authorized to access).
The combined teachings of Asano and Geml disclose user namespaces and allocating a 0 to n−1 range of addresses but do not specifically disclose: limit access to the storage device; wherein the controller is configured to receive a request to adjust a size of the namespace and remove or add a portion of the logical addresses from the namespace based on the request.
However, Kanno discloses: limit access to the storage device; (¶0174; if the capacity corresponding to the number (LBA range) of logic block addresses (LBAs) for a certain namespace is 100 Gbytes and blocks equivalent to 150 Gbytes (storage quota) are secured for this namespace, an over-provision area having a size that is 50% of the capacity (the capacity of a user space) corresponding to the LBA range can be secured) wherein the controller is configured to receive a request to adjust a size of the namespace and remove or add a portion of the logical addresses from the namespace based on the request (¶0692; If the amount of wear of physical resources of the SSD 3 due to a specific namespace is greater than a threshold, the resource manager 45 may perform processing for increasing the number of blocks to be secured for the specific namespace. In this case, the resource manager 45 may transmit, to the SSD 3, a namespace allocate command to add a designed number of blocks to the specific namespace. This increases the size of the over-provision area of the specific namespace, to thereby enable the write amplification of the specific namespace to be reduced, with the result that the amount of wear of physical resources of the SSD 3 due to the specific namespace can be reduced).
It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to combine the teachings of Asano, Geml and Kanno because over-provisioning enables available blocks that exceed the user-accessible space. Kanno delays garbage collection to optimize erase counts (¶0177; the execution of the garbage collection in the specific namespace can be delayed. As data is written to blocks in the over-provision areas, data in blocks of the user space may be invalidated by updating. A block where all data is invalidated can be reused without its garbage collection. This means that an increase in the erase count can be suppressed by optimizing the size of the over-provision area).
Per claim 2, Asano discloses: wherein entire storage resources allocated for user data in the respective account are identified by the namespace identifier (¶0039).
Per claim 3, Asano discloses: wherein a size of the namespace corresponds to a quota of the entire storage resources allocated for user data in the respective account (fig. 6, ¶0039 and ¶0043).
Per claim 4, Asano discloses: wherein a size of the portion of the non-volatile storage media allocated to the respective account is smaller than the quota (fig. 6, ¶0043).
Per claim 5, Asano discloses: wherein the controller of the storage device is configured by the firmware to increase the size of the portion of the non-volatile storage media allocated to the respective account based on usage of the portion of the non-volatile storage media allocated to the respective account (fig. 7, ¶0053).
Per claim 7, Asano discloses: wherein the controller of the storage device is configured by the firmware to increase the size of the portion of the non-volatile storage media allocated to the respective account by adjusting a namespace map (¶0052) that defines a mapping between: first logical addresses defined in the namespace identified by the namespace identifier; and second logical addresses defined in a capacity of the storage device; (fig. 1, comp. 132 and 134; ¶0038 and ¶0041; SSD controller 130 first converts the namespace-based address to a linear, internal address, termed a logical cluster address (LCA), using the NSID and uses the LCA as an index to a logical-to-physical lookup table. Within the linear address space that is associated with the NSID, the namespaces are arrayed in a back-to-back manner, so that the NSID corresponding to one namespace are adjacent to the NSID corresponding to the subsequent namespace. This effectively converts the namespace-based address space into an address space that includes a single set of numbers that begin at 0 and increase to a maximum number. The use of the NSID allows for efficient indexing of a logical-to-physical conversion table) and wherein the firmware includes instructions which when executed by the controller, cause the controller to convert, using the namespace map, the first logical addresses in the namespace to physical addresses of a portion of the non-volatile storage media accessible to the respective account (¶0038 and ¶0042).
Per claim 8, Asano discloses: wherein the controller of the storage device is configured by the firmware to adjust the size of the namespace in response to a change in the quota for the respective account (¶0052).
Per claim 9, Asano discloses: wherein the controller of the storage device is configured by the firmware to adjust the size of the namespace by adjusting the namespace map (¶0011-0012 and ¶0052 and 0054; “the controller 130 determines the number of NSAUs required for such an increase in step S710. This is done by dividing the LCA provided by the host by the granularity k of the SSD. Next in step S720, the controller determines the number of unallocated NSAUs available in the LCA space. The controller then determines if the number of unallocated NSAUs in the LCA space is greater than or equal to the number of NSAUs required to increase the NSA as required by the host (step S730). If the number of unallocated NSAUs in the LCA space is greater than or equal to the number of NSAUs required, the controller creates a new entry in the NSAU LUT 450 for NSIDx in step S740. This new entry in the NSAU LUT 450 is added as the very last entry in the NSAU LUT 450;”).
Per claim 10, Asano discloses: wherein the adjusting of the namespace map is performed by adding or removing an identifier of a block of logical addresses in the capacity of the storage device (¶0011-0012 and ¶0052 and 0054; “the controller 130 determines the number of NSAUs required for such an increase in step S710. This is done by dividing the LCA provided by the host by the granularity k of the SSD. Next in step S720, the controller determines the number of unallocated NSAUs available in the LCA space. The controller then determines if the number of unallocated NSAUs in the LCA space is greater than or equal to the number of NSAUs required to increase the NSA as required by the host (step S730). If the number of unallocated NSAUs in the LCA space is greater than or equal to the number of NSAUs required, the controller creates a new entry in the NSAU LUT 450 for NSIDx in step S740. This new entry in the NSAU LUT 450 is added as the very last entry in the NSAU LUT 450;”)
Per claim 11, Asano discloses: wherein the block has a predetermined size (¶0047; a disk size of 5,000 clusters where the disk has been divided into 100 parts (i.e. a granulation factor of 100), the NSAU size k would be 50 clusters, i.e. the size of each part (namespace allocation unit) will be 50 clusters).
Per claim 12, Asano discloses: wherein the predetermined size is a power of two (¶0052).
Per claim 16, Asano discloses: wherein the each respective account in the plurality of accounts is configured with a single namespace identifier for identifying storage resources available to store user data in the each respective account (¶0039).
Per claim 23, Kanno discloses: wherein the method further comprises: monitoring usage of the namespace; and adjusting a size of the namespace based on the usage of the namespace in accordance with the storage quota such that the size of the namespace is increased but still smaller than the storage quota (¶0174; if the capacity corresponding to the number (LBA range) of logic block addresses (LBAs) for a certain namespace is 100 Gbytes and blocks equivalent to 150 Gbytes (storage quota) are secured for this namespace, an over-provision area having a size that is 50% of the capacity (the capacity of a user space) corresponding to the LBA range can be secured).
Per claim 24, Kanno discloses: wherein the computer system is further configured to, at a time after the namespace is initially defined, increase a size of the namespace until the storage quota is reached (¶0174; if the capacity corresponding to the number (LBA range) of logic block addresses (LBAs) for a certain namespace is 100 Gbytes and blocks equivalent to 150 Gbytes (storage quota) are secured for this namespace, an over-provision area having a size that is 50% of the capacity (the capacity of a user space) corresponding to the LBA range can be secured).
Claims 6, 13 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Asano et al. (US 2018/0260334, hereinafter Asano), Geml et al. (US 2018/0004559, hereinafter Geml) and Kanno (US 2017/0262228, hereinafter Kanno) in view of Lewis et al. (US 2017/0339158, hereinafter Lewis).
Per claim 6, the combination of Asano, Geml and Kanno does not specifically disclose: wherein the controller of the storage device is configured by the firmware to increase the size of the portion of the non-volatile storage media allocated to the respective account in response to a determination that an unused portion, allocated from the non-volatile storage media to the respective account, is smaller than a threshold.
However, Lewis in an analogous art discloses: wherein the controller of the storage device is configured by the firmware to increase the size of the portion of the non-volatile storage media allocated to the respective account in response to a determination that an unused portion, allocated from the non-volatile storage media to the respective account, is smaller than a threshold (¶0015).
It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to combine the teachings of Asano, Geml, Kanno and Lewis because Lewis's increase/decrease of resources improves dynamic scaling (¶0018).
Per claim 13, Lewis discloses: wherein the storage device has a register storing a crypto key of the namespace during data access performed in the namespace (¶0028; key for each service interface).
Per claim 21, Lewis discloses: wherein the respective account in the plurality of accounts configured on the host includes an account identification, a credential to authenticate a user using the respective account having the account identification, a device identification configured to identify the storage device, and a namespace identification configured to identify a namespace in the storage device assigned to the account identification (¶0028; Each of the service interfaces may also provide secured and/or protected access to each other via encryption keys and/or other such secured and/or protected access methods, thereby enabling secure and/or protected access between them).
Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Asano et al. (US 2018/0260334, hereinafter Asano), Geml et al. (US 2018/0004559, hereinafter Geml), Kanno (US 2017/0262228, hereinafter Kanno) and Lewis et al. (US 2017/0339158, hereinafter Lewis) in view of Wysocki et al. (US 2018/0188985, hereinafter Wysocki).
Per claim 14, the combined teachings of Asano, Geml, Kanno and Lewis do not specifically disclose: wherein the host interface implements single root I/O virtualization.
However, Wysocki in an analogous art discloses: wherein the host interface implements single root I/O virtualization (¶0034).
It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to combine the teachings of Asano, Geml, Kanno, Lewis and Wysocki because Wysocki's virtualized cache using SR-IOV improves throughput and performance (¶0034).
Per claim 15, Wysocki discloses: wherein the namespace is attached to a virtual function of the storage device implemented via the single root I/O virtualization (¶0034).
Claims 17, 19-20 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Asano et al. (US 2018/0260334, hereinafter Asano), Geml et al. (US 2018/0004559, hereinafter Geml) and Kanno (US 2017/0262228, hereinafter Kanno) in view of Dusija et al. (US 10,372,342, hereinafter Dusija).
Per claim 17, Asano discloses: a host interface; (fig. 1, comp. 120) a controller; (fig. 1, comp. 130) non-volatile storage media; (fig. 1, comp. 140) and firmware; (fig. 1, comp. 110) and wherein the computer system is configured… the storage device by the respective account to logical addresses that are in the namespace and that are within a range from zero to a maximum logical address (¶0038-¶0039; LBAs for a given namespace are restricted to a range 0 to N−1, where N is the size of the namespace defined at the time it is created; ¶0041; SSD controller 130 first converts the namespace-based address to a linear, internal address, termed a logical cluster address (LCA), using the NSID and uses the LCA as an index to a logical-to-physical lookup table. Within the linear address space that is associated with the NSID, the namespaces are arrayed in a back-to-back manner, so that the NSID corresponding to one namespace are adjacent to the NSID corresponding to the subsequent namespace. This effectively converts the namespace-based address space into an address space that includes a single set of numbers that begin at 0 and increase to a maximum number. The use of the NSID allows for efficient indexing of a logical-to-physical conversion table; the examiner notes that the limiting of access to the storage device is merely a result of the logical address allocated to the respective namespace).
Asano discloses a plurality of namespaces and NSIDs but does not specifically disclose: configuring, in the host, a plurality of accounts assigned to users, wherein each respective account in the plurality of accounts has a namespace identifier that identifies a namespace; limiting, by the computer system, access to the storage device: by the respective account to logical addresses that are in the namespace and that are within a range from zero to a maximum logical address; determining, by the computer system, that an unused portion of the namespace becomes smaller than a threshold; and increasing, by the computer system based on the determination that the unused portion becomes smaller than the threshold, a size of the namespace.
However, Geml discloses: configuring, in the host, a plurality of accounts assigned to users, (¶0011; storage device may be logically divided into one or more namespaces. The host device may include multiple users and may control which namespaces of the data storage device each user may access) wherein each respective account in the plurality of accounts has a namespace identifier that identifies a namespace; (fig. 3, ¶0011; The host device may include multiple users and may control which namespaces of the data storage device each user may access…. a request from a particular VM for a set of namespaces identifiers corresponding to a set of namespaces associated with the storage device).
It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to combine the teachings of Asano with the namespace access control of Geml because it prevents unauthorized namespace accesses. Geml improves the security of the storage device (¶0011; By sending only the namespace identifiers of the namespaces that the particular VM is authorized to access, the VMM may hide other namespaces from the VM. In this way, the VMM may help prevent the particular VM from accessing the namespaces that the particular VM is not authorized to access).
The combined teachings of Asano and Geml disclose user namespaces and allocating a 0 to n−1 range of addresses but do not specifically disclose: limiting, by the computer system, access to the storage device;… determining, by the computer system, that an unused portion of the namespace becomes smaller than a threshold; and increasing, by the computer system based on the determination that the unused portion becomes smaller than the threshold, a size of the namespace.
However, Kanno discloses: limiting, by the computer system, access to the storage device by the respective account to logical addresses (¶0174; if the capacity corresponding to the number (LBA range) of logic block addresses (LBAs) for a certain namespace is 100 Gbytes and blocks equivalent to 150 Gbytes (storage quota) are secured for this namespace, an over-provision area having a size that is 50% of the capacity (the capacity of a user space) corresponding to the LBA range can be secured).
It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to combine the teachings of Asano, Geml and Kanno because over-provisioning enables available blocks that exceed the user-accessible space. Kanno delays garbage collection to optimize erase counts (¶0177; the execution of the garbage collection in the specific namespace can be delayed. As data is written to blocks in the over-provision areas, data in blocks of the user space may be invalidated by updating. A block where all data is invalidated can be reused without its garbage collection. This means that an increase in the erase count can be suppressed by optimizing the size of the over-provision area).
The combined teachings of Asano, Geml and Kanno disclose increasing the overall unused portion of the namespace size but do not specifically disclose the free space of the namespace being smaller than a threshold: determining, by the computer system, that an unused portion of the namespace becomes smaller than a threshold; and increasing, by the computer system based on the determination that the unused portion becomes smaller than the threshold, a size of the namespace.
However, Dusija discloses: determining, by the computer system, that an unused portion of the namespace becomes smaller than a threshold; and increasing by the computer system based on the determination that the unused portion becomes smaller than the threshold, a size of the namespace (fig. 9, claim 1; determining free space in the second portion of the NVM; and if the free space is less than a predetermined threshold, increasing the size of the second portion of the NVM for storing the warm data).
It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to combine the teachings of Asano, Geml, Kanno and Dusija because the dynamic adjustment of free space prevents an abrupt decrease in performance during operation (Abstract; A cascaded data path enables flash memory data access that has a more graceful degradation instead of an abrupt decrease in performance during operation).
Per claim 19, Asano discloses: wherein the adjusting of the size of the namespace includes adding to, or removing from, a namespace map, an identifier of a block of logical addresses in a capacity of the storage device (¶0011-0012 and ¶0052 and 0054; “the controller 130 determines the number of NSAUs required for such an increase in step S710. This is done by dividing the LCA provided by the host by the granularity k of the SSD. Next in step S720, the controller determines the number of unallocated NSAUs available in the LCA space. The controller then determines if the number of unallocated NSAUs in the LCA space is greater than or equal to the number of NSAUs required to increase the NSA as required by the host (step S730). If the number of unallocated NSAUs in the LCA space is greater than or equal to the number of NSAUs required, the controller creates a new entry in the NSAU LUT 450 for NSIDx in step S740. This new entry in the NSAU LUT 450 is added as the very last entry in the NSAU LUT 450;”)
Claim 20 is the non-transitory CRM claim corresponding to the system claim 17 and is rejected under the same reasons set forth in connection with the rejection of claim 17.
Per claim 25, Kanno discloses: wherein the method further comprises: monitoring usage of the namespace; and adjusting a size of the namespace based on the usage of the namespace in accordance with the storage quota such that the size of the namespace is increased but still smaller than the storage quota (¶0174-175; if the capacity corresponding to the number (LBA range) of logic block addresses (LBAs) for a certain namespace is 100 Gbytes and blocks equivalent to 150 Gbytes (storage quota) are secured for this namespace, an over-provision area having a size that is 50% of the capacity (the capacity of a user space) corresponding to the LBA range can be secured; the examiner notes that the claim merely requires the ability to increase the namespace up to a quota).
Response to Arguments
Applicant’s arguments, filed 10/30/25, with respect to the rejection(s) of claim(s) 1, 17 and 20 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Dusija. Dusija discloses determining free space in the second portion of the NVM; and if the free space is less than a predetermined threshold, increasing the size of the second portion of the NVM for storing the warm data.
Remark
Examiner respectfully requests, in response to this Office action, support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist Examiner in prosecuting the application.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BABOUCARR FAAL whose telephone number is (571)270-5073. The examiner can normally be reached M-F 8:30-5:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tim Vo, can be reached at 571-272-3642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
BABOUCARR FAAL
Primary Examiner
Art Unit 2138
/BABOUCARR FAAL/Primary Examiner, Art Unit 2138