DETAILED ACTION
This action is responsive to the communication filed on 1/5/2026. Claims 1-22 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/5/2026 has been entered.
Claim Objections
Claim 14 is objected to because of the following informalities:
For claim 14, the abbreviated term ‘PRP’ should be amended to also provide for the full intended terminology (e.g. physical region page).
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 8-10, 18, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Benisty et al. (US 20180321844 A1) in view of Benisty (US 20210240641 A1, hereinafter Benisty 2) in view of Kanno (US 20200241797 A1) in view of Brue et al. (US 20180136839 A1).
As per claim 1,
1. A data storage device, comprising: a memory device; and a controller coupled to the memory device, wherein the controller is configured to: [Benisty teaches a memory device comprising non-volatile memory and a controller for managing the device and interfacing with a host (para. 44-48, 51-58; fig. 1A, 2A and associated paragraphs)]
evaluate commands in one or more submission queues (SQs), wherein each command is evaluated to determine a number of host buffers that will be released when the command is completed, [Benisty teaches the memory device arbitrating the order of commands to fetch from host submission queues (para. 22; see para. 57-58, fig. 2A showing a controller’s priority determination module and fetch module); where a command may be prioritized based on the command’s transfer data size (para. 27, lines 24-30)] select commands in the one or more SQs based upon the respective number of host buffers determined; and prioritize selected commands for processing during arbitration, wherein selected commands with a higher number of host buffers that will be released when the associated selected command is completed are prioritized. [Benisty teaches arbitrating the order of commands to fetch with priority based on transfer data size as shown above (para. 22, 27, 57-58, fig. 2A and associated paragraphs; also para. 105 indicating prioritizing phases of certain commands for more quickly releasing internal resources such as buffers resident on host system); Benisty further teaches data associated with a command (such as write data for a write command) being stored across a plurality of sections of a predetermined size (e.g. 4kb) in host device memory, as indicated in a PRP list having an entry for each respective section (para. 74-76, claim 13); it would have been obvious for one of ordinary skill in the art, provided with disclosures of Benisty determining priorities of fetching a command based on the sizes of data associated with commands and additional disclosures by Benisty providing for storing data associated with a command across a plurality of sections of a predetermined size (e.g. 4kb), to provide for representing/determining the size of said data in units comprising the predetermined size (e.g. using the number of physical pages or sections in evaluating the commands’ sizes; utilizing a different unit of measurement in addition to kb); doing so would provide for improved host buffer management by allowing determination of stored data size in units that are aligned with the physical architecture of the memory holding the data]
Benisty does not explicitly disclose, but Benisty 2 discloses:
a number of host buffers that will be released when the command is completed; the respective number of host buffers determined; number of host buffers that will be released when the associated selected command is completed, [Benisty as shown above teaches arbitrating the order of commands to fetch from host submission queues based on transfer data size of write data stored in sections or physical pages of a predetermined size (e.g. 4kb) in host device memory as pointed to by a PRP list (see Benisty above; para. 22, 27, 57-58, 74-76); Benisty does not explicitly disclose, but Benisty 2 discloses a PRP list pointing to host memory buffers of the same fixed size (e.g. 4kb) for data transfers (para. 36; fig. 2A and associated paragraphs)] wherein a 512KB command can be disposed in a single host buffer or in multiple host buffers; [Where a plurality of buffers comprising 4kb size may be used to store data associated with a command (see above; Benisty: para. 74-76; Benisty 2: para. 36), multiple of such buffers may necessarily be used for holding 512KB of data associated with a command; it would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the size of data associated with a command as recited in the instant claim in order to support flexible command sizes. It has been held that discovering an optimum value of a result effective variable involves only routine skill in the art. In re Boesch, 617 F.2d 272, 205 USPQ 215 (CCPA 1980). Please also see MPEP 2144.05 II. “[W]here the general conditions of a claim are disclosed in the prior art, it is not inventive to discover the optimum or workable ranges by routine experimentation.” In re Aller, 220 F.2d 454, 456, 105 USPQ 233, 235 (CCPA 1955); Merck & Co. Inc. v. Biocraft Lab. Inc., 874 F.2d 804, 809, 10 USPQ2d 1843, 1848 (Fed. Cir. 1989), cert. denied, 493 U.S. 975 (1989) (Claimed ratios were obvious as being reached by routine procedures and producing predictable results)]
Benisty and Benisty 2 are analogous to the claimed invention because they are in the same field of endeavor involving data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Benisty and Benisty 2, to modify the disclosures by Benisty to include disclosures by Benisty 2 since they both teach data storage, wherein Benisty 2 is directed towards improved data storage (para. 2). Therefore, it would be applying a known technique (use of host data buffers of the same fixed size for data transfers) to a known device (storing write data for a write command in respective sections of predetermined size in host memory, use of the number of said sections/pages in arbitrating command fetching) ready for improvement to yield predictable results (system storing write data for a write command in respective buffers of a fixed size in host memory and using the number of said buffers in arbitrating command fetching, where use of the respective buffers may provide for improved data throughput by utilizing dedicated buffers for holding write data to be transferred). MPEP 2143
Benisty in view of Benisty 2 does not explicitly disclose, but Kanno discloses:
a number of host buffers that will be released when the command is completed; the respective number of host buffers determined; number of host buffers that will be released when the associated selected command is completed [Benisty in view of Benisty 2 as shown above teaches prioritizing commands based on transfer data size comprising a number of buffers of a fixed size (e.g. 4kb) on the host (see above; Benisty: para. 22, 27, 57-58, 74-76; Benisty 2: para. 36); Benisty also discloses, responsive to completion of command processing, releasing host system internal resources such as buffers (Benisty: para. 105); Benisty in view of Benisty 2 does not explicitly disclose, but Kanno discloses, after completion of a write command, releasing a region of host buffer comprising the write data associated with the completed write command (para. 142-143, 166-167; figs. 10, 14 and associated paragraphs), where it would have been obvious for one of ordinary skill in the art, provided with disclosures by Benisty in view of Benisty 2, directed towards prioritizing commands based on the number of buffers of fixed size holding data associated with the commands and releasing host system internal resources such as buffers upon command completion, with disclosures by Kanno providing for releasing a region of host buffer comprising write data associated with a completed write command, to provide for recognizing buffers holding the data to be releasable upon the command’s completion. Doing so would provide for improved host memory management by anticipating the number of buffers to be freed upon a command’s completion.]
Benisty, Benisty 2, and Kanno are analogous to the claimed invention because they are in the same field of endeavor involving data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Benisty in view of Benisty 2 and Kanno, to modify the disclosures by Benisty in view of Benisty 2 to include disclosures by Kanno since they both teach data storage, wherein Kanno is directed towards improved controlling of a nonvolatile memory (para. 2). Therefore, it would be applying a known technique (responsive to completion of a write command, releasing the host buffer region storing associated data) to a known device (system storing write data for a write command in respective buffers of a fixed size in host memory and using the number of said buffers in arbitrating command fetching) ready for improvement to yield predictable results (system storing write data for a write command in respective buffers of a fixed size in host memory and using the number of said buffers anticipated to be released upon a command's completion in arbitrating command fetching; doing so would provide for improved host memory management). MPEP 2143
Benisty in view of Benisty 2 in view of Kanno does not explicitly disclose, but Brue discloses:
a higher number of host buffers that will be released when the associated selected command is completed are prioritized. [Brue teaches a system configurable for prioritizing processing of large write commands over small write commands based on storage usage scenario (para. 44-45, 7, 21)]
Benisty, Benisty 2, Kanno, and Brue are analogous to the claimed invention because they are in the same field of endeavor involving data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Benisty in view of Benisty 2 in view of Kanno and Brue, to modify the disclosures by Benisty in view of Benisty 2 in view of Kanno to include disclosures by Brue since they both teach data storage, wherein Brue is directed towards improved storage performance (para. 8). Therefore, it would be applying a known technique (based on storage usage scenario, prioritizing processing of large write commands over small write commands) to a known device (a system prioritizing commands based on the number of buffers anticipated to be released upon command completion) ready for improvement to yield predictable results (a system prioritizing commands based on the number of buffers anticipated to be released upon command completion, wherein larger commands with a larger associated number of buffers may be prioritized for processing in order to provide for greater flexibility in optimizing command processing for different usage scenarios). MPEP 2143
As per claim 2, Benisty in view of Benisty 2 in view of Kanno in view of Brue teaches claim 1 as shown above and further teaches:
The data storage device of claim 1, wherein the one or more SQs comprise a plurality of SQs. [Benisty teaches a plurality of submission queues (para. 21-22, 65)]
As per claim 3, Benisty in view of Benisty 2 in view of Kanno in view of Brue teaches claim 1 as shown above and further teaches:
3. The data storage device of claim 1, wherein the evaluating comprises determining a payload size of the commands. [Benisty teaches the memory device arbitrating the order of commands to fetch from host submission queues (para. 22); where a command may be prioritized based on a command’s data transfer size (payload size) (para. 27, lines 24-30)]
As per claim 4, Benisty in view of Benisty 2 in view of Kanno in view of Brue teaches claim 1 as shown above and further teaches:
4. The data storage device of claim 1, wherein the prioritizing comprises placing the selected commands into an internal SQ prior to placing non-selected commands into the internal SQ. [Benisty teaches the memory device arbitrating the order of commands to fetch from host submission queues (para. 22); Benisty teaches queue(s) of the memory device for queuing the fetched commands (para. 89)]
As per claim 5, Benisty in view of Benisty 2 in view of Kanno in view of Brue teaches claim 1 as shown above and further teaches:
5. The data storage device of claim 1, wherein at least one selected command is fragmented. [Benisty teaches performing a command such as a write command by retrieving transfer data in divided portions (fragmented) in an out-of-order fashion (para. 75)]
As per claim 8, Benisty in view of Benisty 2 in view of Kanno in view of Brue teaches claim 1 as shown above and further teaches:
8. The data storage device of claim 1, wherein the controller is configured to process a command of the prioritized selected command, wherein the prioritized selected command is broken down into smaller portions. [Benisty teaches performing a command such as a write command by retrieving transfer data in divided portions (smaller portions) in an out-of-order fashion (para. 75)]
As per claim 9, Benisty in view of Benisty 2 in view of Kanno in view of Brue teaches claim 8 as shown above and further teaches:
9. The data storage device of claim 8, wherein the smaller portions are completed individually. [Benisty teaches performing a command such as a write command by retrieving transfer data in divided portions (smaller portions) in an out-of-order fashion (para. 75), where retrieval of a portion is not reliant upon retrieving another portion (e.g. a preceding portion) (para. 75)]
As per claim 10, Benisty in view of Benisty 2 in view of Kanno in view of Brue teaches claim 9 as shown above and further teaches:
10. The data storage device of claim 9, wherein the controller is configured to notify a host device after processing of all smaller portions is complete. [Benisty teaches completing the transfer in divided portions as shown above (para. 75) and sending a completion message to a completion queue after completing the data transfer (para. 77)]
As per claim 18,
18. A data storage device, comprising: means to store data; and a controller coupled to the means to store data, wherein the controller is configured to: [Benisty teaches a memory device comprising non-volatile memory and a controller for managing the device and interfacing with a host (para. 44-48, 51-58; fig. 1A, 2A and associated paragraphs)] evaluate first chunks of data of commands of a plurality of commands in one or more submission queues (SQs), wherein each first chunk of data of each command of the plurality of commands is evaluated to determine a number of host buffers that will be released when the first chunk of data is completed, [Benisty teaches the memory device arbitrating the order of commands to fetch from host submission queues (para. 22; see para. 57-58, fig. 2A showing a controller’s priority determination module and fetch module); where a command may be prioritized based on the command’s transfer data (first chunk) size (para. 27, lines 24-30); while the claim recites first chunk(s), the claim does not explicitly require presence of additional chunks (e.g. second or other chunks of each command), where, therefore, data associated with a command (e.g. write data associated with a write command) may correspond to said first chunk] predict a total number of host buffers that will be released by each command's completion based on the number of host buffers that will be released when each command's first chunk of data is completed, wherein the predicting comprises determining a ratio of released host buffers per transfer size for each of the first chunks of data; select commands of the plurality of commands in the one or more SQs based upon the prediction; and prioritize selected commands for processing during arbitration, wherein selected commands with a greater total number of host buffers that will be released when the associated selected command is completed are prioritized. 
[Benisty teaches arbitrating the order of commands to fetch with priority based on transfer data size as shown above (para. 22, 27, 57-58, fig. 2A and associated paragraphs; also para. 105 indicating prioritizing phases of certain commands for more quickly releasing internal resources such as buffers resident on host system); Benisty further teaches data associated with a command (such as write data (first chunk, transfer size) for a write command) being stored across a plurality of sections of a predetermined size (e.g. 4kb) in host device memory, as indicated in a PRP list having an entry for each respective section (para. 74-76, claim 13); it would have been obvious for one of ordinary skill in the art, provided with disclosures of Benisty determining priorities of fetching a command based on the sizes of data associated with commands and additional disclosures by Benisty providing for storing data associated with a command across a plurality of sections of a predetermined size (e.g. 4kb), to provide for representing/determining the size of said data in units comprising the predetermined size (e.g. using the number of physical pages or sections in evaluating the commands’ sizes; utilizing a different unit of measurement in addition to kb); doing so would provide for improved host buffer management by allowing determination of stored data size in units that are aligned with the physical architecture of the memory holding the data; as stated above, where the claim does not require additional chunks aside from said first chunk(s) and data associated with a command may therefore correspond to said first chunk, determining a size of data for a command (see para. 27 on arbitration based on size) may correspond to predicting size of data for a command based on a first chunk of said command, as the first chunk alone may correspond to the data of the command (please see disclosures by Benisty 2 and Kanno regarding buffers and releases), and, further, a ratio may correspond to said number of pages/sections corresponding to data of a command (i.e. units per transfer size, where the transfer size may correspond to data associated with the command as well as the first chunk)]
Benisty does not explicitly disclose, but Benisty 2 discloses:
a number of host buffers that will be released when the first chunk of data is completed; a total number of host buffers that will be released by each command's completion based on the number of host buffers that will be released when each command's first chunk of data is completed; released host buffers; a greater total number of host buffers that will be released when the associated selected command is completed [Benisty as shown above teaches arbitrating the order of commands to fetch from host submission queues based on transfer data size of write data stored in sections or physical pages of a predetermined size (e.g. 4kb) in host device memory as pointed to by a PRP list (see Benisty above; para. 22, 27, 57-58, 74-76); Benisty does not explicitly disclose, but Benisty 2 discloses a PRP list pointing to host memory buffers of the same fixed size (e.g. 4kb) for data transfers (para. 36; fig. 2A and associated paragraphs)] wherein a 512KB command can be disposed in a single host buffer or in multiple host buffers; [Where a plurality of buffers comprising 4kb size may be used to store data associated with a command (see above; Benisty: para. 74-76; Benisty 2: para. 36), multiple of such buffers may necessarily be used for holding 512KB of data associated with a command; it would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the size of data associated with a command as recited in the instant claim in order to support flexible command sizes. It has been held that discovering an optimum value of a result effective variable involves only routine skill in the art. In re Boesch, 617 F.2d 272, 205 USPQ 215 (CCPA 1980). Please also see MPEP 2144.05 II. “[W]here the general conditions of a claim are disclosed in the prior art, it is not inventive to discover the optimum or workable ranges by routine experimentation.” In re Aller, 220 F.2d 454, 456, 105 USPQ 233, 235 (CCPA 1955); Merck & Co. Inc. v. Biocraft Lab. Inc., 874 F.2d 804, 809, 10 USPQ2d 1843, 1848 (Fed. Cir. 1989), cert. denied, 493 U.S. 975 (1989) (Claimed ratios were obvious as being reached by routine procedures and producing predictable results)]
Benisty and Benisty 2 are analogous to the claimed invention because they are in the same field of endeavor involving data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Benisty and Benisty 2, to modify the disclosures by Benisty to include disclosures by Benisty 2 since they both teach data storage, wherein Benisty 2 is directed towards improved data storage (para. 2). Therefore, it would be applying a known technique (use of host data buffers of the same fixed size for data transfers) to a known device (storing write data for a write command in respective sections of predetermined size in host memory, use of the number of said sections/pages in arbitrating command fetching) ready for improvement to yield predictable results (system storing write data for a write command in respective buffers of a fixed size in host memory and using the number of said buffers in arbitrating command fetching, where use of the respective buffers may provide for improved data throughput by utilizing dedicated buffers for holding write data to be transferred). MPEP 2143
Benisty in view of Benisty 2 does not explicitly disclose, but Kanno discloses:
a number of host buffers that will be released when the first chunk of data is completed; a total number of host buffers that will be released by each command's completion based on the number of host buffers that will be released when each command's first chunk of data is completed; released host buffers; a greater total number of host buffers that will be released when the associated selected command is completed [Benisty in view of Benisty 2 as shown above teaches prioritizing commands based on transfer data size comprising a number of buffers of a fixed size (e.g. 4kb) on the host (see above; Benisty: para. 22, 27, 57-58, 74-76; Benisty 2: para. 36); Benisty also discloses, responsive to completion of command processing, releasing host system internal resources such as buffers (Benisty: para. 105); Benisty in view of Benisty 2 does not explicitly disclose, but Kanno discloses, after completion of a write command, releasing a region of host buffer comprising the write data associated with the completed write command (para. 142-143, 166-167; figs. 10, 14 and associated paragraphs), where it would have been obvious for one of ordinary skill in the art, provided with disclosures by Benisty in view of Benisty 2, directed towards prioritizing commands based on the number of buffers of fixed size holding data associated with the commands and releasing host system internal resources such as buffers upon command completion, with disclosures by Kanno providing for releasing a region of host buffer comprising write data associated with a completed write command, to provide for recognizing buffers holding the data to be releasable upon the command’s completion. Doing so would provide for improved host memory management by anticipating the number of buffers to be freed upon a command’s completion.]
Benisty, Benisty 2, and Kanno are analogous to the claimed invention because they are in the same field of endeavor involving data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Benisty in view of Benisty 2 and Kanno, to modify the disclosures by Benisty in view of Benisty 2 to include disclosures by Kanno since they both teach data storage, wherein Kanno is directed towards improved controlling of a nonvolatile memory (para. 2). Therefore, it would be applying a known technique (responsive to completion of a write command, releasing the host buffer region storing associated data) to a known device (system storing write data for a write command in respective buffers of a fixed size in host memory and using the number of said buffers in arbitrating command fetching) ready for improvement to yield predictable results (system storing write data for a write command in respective buffers of a fixed size in host memory and using the number of said buffers anticipated to be released upon a command's completion in arbitrating command fetching; doing so would provide for improved host memory management). MPEP 2143
Benisty in view of Benisty 2 in view of Kanno does not explicitly disclose, but Brue discloses:
a greater total number [Brue teaches a system configurable for prioritizing processing of large write commands over small write commands based on storage usage scenario (para. 44-45, 7, 21)]
Benisty, Benisty 2, Kanno, and Brue are analogous to the claimed invention because they are in the same field of endeavor involving data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Benisty in view of Benisty 2 in view of Kanno and Brue, to modify the disclosures by Benisty in view of Benisty 2 in view of Kanno to include disclosures by Brue since they both teach data storage, wherein Brue is directed towards improved storage performance (para. 8). Therefore, it would be applying a known technique (based on storage usage scenario, prioritizing processing of large write commands over small write commands) to a known device (a system prioritizing commands based on the number of buffers anticipated to be released upon command completion) ready for improvement to yield predictable results (a system prioritizing commands based on the number of buffers anticipated to be released upon command completion, wherein larger commands with a larger associated number of buffers may be prioritized for processing in order to provide for greater flexibility in optimizing command processing for different usage scenarios). MPEP 2143
As per claim 20, Benisty in view of Benisty 2 in view of Kanno in view of Brue teaches claim 18 as shown above and further teaches:
20. The data storage device of claim 18, wherein the predicting further comprises assuming a rest of a command will maintain the ratio. [Benisty in view of Benisty 2 in view of Kanno in view of Brue as shown above teaches a number of physical pages or sections (ratio) corresponding to data (first chunk) associated with a command (see above; Benisty: para. 74-76, claim 13); where claim 18 as stated above does not explicitly require further chunks other than said first chunk, said physical pages or sections (ratio) corresponding to the data may comprise a ratio associated with the rest of the command]
As per claim 21, Benisty in view of Benisty 2 in view of Kanno in view of Brue teaches claim 1 as shown above and further teaches:
21. The data storage device of claim 1, wherein the host buffers are in physical region page (PRP) lists. [Benisty in view of Benisty 2 in view of Kanno in view of Brue as shown above teaches host buffers containing write data and a PRP list comprising entries with the location of the write data (Benisty: para. 22, 27, 57-58, 74-76; Benisty 2: para. 36; fig. 3A and associated paragraphs)]
Benisty and Benisty 2 are analogous to the claimed invention because they are in the same field of endeavor involving data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Benisty and Benisty 2, to modify the disclosures by Benisty to include disclosures by Benisty 2 since they both teach data storage, wherein Benisty 2 is directed towards improved data storage (para. 2). Therefore, it would be applying a known technique (use of host data buffers of the same fixed size for data transfers) to a known device (storing write data for a write command in respective sections of predetermined size in host memory, use of the number of said sections/pages in arbitrating command fetching) ready for improvement to yield predictable results (system storing write data for a write command in respective buffers of a fixed size in host memory and using the number of said buffers in arbitrating command fetching, where use of the respective buffers may provide for improved data throughput by utilizing dedicated buffers for holding write data to be transferred). MPEP 2143
Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Benisty et al. (US 20180321844 A1) in view of Benisty (US 20210240641 A1, hereinafter Benisty 2) in view of Kanno (US 20200241797 A1) in view of Brue et al. (US 20180136839 A1) in view of Wu et al. (US 20080195810 A1).
As per claim 6, Benisty in view of Benisty 2 in view of Kanno in view of Brue teaches claim 1 as shown above. It does not explicitly disclose, but Wu discloses:
6. The data storage device of claim 1, wherein at least one selected command represents a buffer having a size greater than a page size. [Benisty in view of Benisty 2 in view of Kanno in view of Brue as shown above teaches storing transfer data in 4kb buffers (e.g. physical pages or sections) (Benisty: para. 74-75, 44-48, 51-58); Benisty in view of Benisty 2 in view of Kanno in view of Brue does not explicitly disclose, but Wu discloses a system where a physical page may be of a different size than a logical page, for example larger (e.g. a .5kb logical page versus a 2kb physical page) (para. 55-58)]
The disclosures by Benisty, Benisty 2, Kanno, Brue, and Wu are analogous because they are in the same field of endeavor of data storage.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Benisty in view of Benisty 2 in view of Kanno in view of Brue and Wu, to modify the teachings of Benisty in view of Benisty 2 in view of Kanno in view of Brue to include the teachings of Wu since both teach data storage, wherein Wu is directed towards improved methods of data storage (para. 2). Therefore, it would be applying a known technique (system comprising logical pages of a different size than physical pages) to a known device (system comprising a plurality of buffers storing data, the size of the buffers corresponding to a section or physical page as pointed to by a PRP list) ready for improvement to yield predictable results (system comprising a plurality of buffers storing data, the size of the buffers corresponding to a section or physical page as pointed to by a PRP list, wherein a logical page employed by the system may be of a different size than said physical pages (e.g. smaller) in order to provide for greater granularity of access to the buffers comprising physical pages). MPEP 2143
As per claim 7, Benisty in view of Benisty 2 in view of Kanno in view of Brue in view of Wu teaches claim 6 as shown above and further teaches:
7. The data storage device of claim 6, wherein the buffer is a single sequential buffer. [Benisty 2 provides for a PRP list mapping to buffers of the same fixed size, each individual buffer being shown to be contiguous (para. 37; see fig. 2A and associated paragraphs showing each buffer forming a contiguous segment)]
Benisty and Benisty 2 are analogous to the claimed invention because they are in the same field of endeavor involving data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Benisty and Benisty 2, to modify the disclosures by Benisty to include disclosures by Benisty 2 since they both teach data storage, wherein Benisty 2 is directed towards improved data storage (para. 2). Therefore, it would be applying a known technique (use of host data buffers of the same fixed size for data transfers) to a known device (storing write data for a write command in respective sections of predetermined size in host memory, use of the number of said sections/pages in arbitrating command fetching) ready for improvement to yield predictable results (system storing write data for a write command in respective buffers of a fixed size in host memory and using the number of said buffers in arbitrating command fetching, where use of the respective buffers may provide for improved data throughput by utilizing dedicated buffers for holding write data to be transferred). MPEP 2143.
Claims 11-13 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Benisty et al. (US 20180321844 A1) in view of Benisty (US 20210240641 A1, hereinafter Benisty 2) in view of Kanno (US 20200241797 A1) in view of Brue et al. (US 20180136839 A1) in view of Hong et al. (US 20180107614 A1).
As per claim 11,
11. A data storage device, comprising: a memory device; and a controller coupled to the memory device, wherein the controller is configured to: [Benisty teaches a memory device comprising non-volatile memory and a controller for managing the device and interfacing with a host (para. 44-48, 51-58; fig. 1A, 2A and associated paragraphs)]
evaluate outstanding commands in one or more submission queues (SQs), wherein each outstanding command is evaluated to determine a number of host buffers that will be released when the outstanding command is completed, [Benisty teaches the memory device arbitrating the order of commands to fetch from host submission queues (para. 22; see para. 57-58, fig. 2A showing a controller’s priority determination module and fetch module); where a command may be prioritized based on the command’s transfer data size (para. 27, lines 24-30)] select outstanding commands with a highest number of host buffers that will be released; [Benisty teaches arbitrating the order of commands to fetch with priority based on transfer data size as shown above (para. 22, 27, 57-58, fig. 2A and associated paragraphs; also see para. 105 indicating prioritizing phases of certain commands for more quickly releasing internal resources such as buffers resident on the host system); Benisty further teaches data associated with a command (such as write data for a write command) being stored across a plurality of sections of a predetermined size (e.g. 4KB) in host device memory, as indicated in a PRP list having an entry for each respective section (para. 74-76, claim 13); it would have been obvious for one of ordinary skill in the art, provided with disclosures of Benisty determining priorities of fetching a command based on the sizes of data associated with commands and additional disclosures by Benisty providing for storing data associated with a command across a plurality of sections of a predetermined size (e.g. 4KB), to provide for representing/determining the size of said data in units comprising the predetermined size (e.g. using the number of physical pages or sections in evaluating the commands’ sizes; utilizing a different unit of measurement in addition to KB); doing so would provide for improved host buffer management by allowing determination of stored data size in units that are aligned with the physical architecture of the memory holding the data]
Benisty does not explicitly disclose, but Benisty 2 discloses:
a number of host buffers that will be released when the outstanding command is completed; a highest number of host buffers that will be released [Benisty as shown above teaches arbitrating the order of commands to fetch from host submission queues based on the transfer data size of write data stored in sections or physical pages of a predetermined size (e.g. 4KB) in host device memory as pointed to by a PRP list (see Benisty above; para. 22, 27, 57-58, 74-76); Benisty does not explicitly disclose, but Benisty 2 discloses a PRP list pointing to host memory buffers of the same fixed size (e.g. 4KB) for data transfers (para. 36; fig. 2A and associated paragraphs)] wherein a 512KB command can be disposed in a single host buffer or in multiple host buffers; [Where a plurality of buffers comprising a 4KB size may be used to store data associated with a command (see above; Benisty: para. 74-76; Benisty 2: para. 36), multiple such buffers may necessarily be used for holding 512KB of data associated with a command; it would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have arrived at the size of data associated with a command as recited in the instant claim in order to support flexible command sizes. It has been held that discovering an optimum value of a result effective variable involves only routine skill in the art. In re Boesch, 617 F.2d 272, 205 USPQ 215 (CCPA 1980). Please also see MPEP 2144.05 II. “[W]here the general conditions of a claim are disclosed in the prior art, it is not inventive to discover the optimum or workable ranges by routine experimentation.” In re Aller, 220 F.2d 454, 456, 105 USPQ 233, 235 (CCPA 1955); Merck & Co. Inc. v. Biocraft Lab. Inc., 874 F.2d 804, 809, 10 USPQ2d 1843, 1848 (Fed. Cir. 1989), cert. denied, 493 U.S. 975 (1989) (Claimed ratios were obvious as being reached by routine procedures and producing predictable results)]
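As an illustrative aside only (not part of the cited disclosures or the record), the buffer-count arithmetic underlying the reasoning above can be sketched as follows; the helper function name is hypothetical, and the 4KB buffer size is taken from the discussion of Benisty and Benisty 2:

```python
# Hypothetical sketch: number of fixed-size host buffers (e.g. PRP-listed
# 4KB buffers) needed to hold a command's transfer data. Not drawn from
# any cited reference; for illustration of the arithmetic only.
import math

def buffers_needed(transfer_bytes: int, buffer_bytes: int = 4096) -> int:
    """Count of fixed-size buffers required to hold the transfer data."""
    return math.ceil(transfer_bytes / buffer_bytes)

# A 512KB command spread across 4KB buffers necessarily occupies
# multiple buffers:
print(buffers_needed(512 * 1024))  # -> 128
```

Under these assumptions, a 512KB command maps to 128 fixed-size 4KB buffers, consistent with the observation that multiple such buffers may necessarily be used for holding 512KB of command data.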
Benisty and Benisty 2 are analogous to the claimed invention because they are in the same field of endeavor involving data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Benisty and Benisty 2, to modify the disclosures by Benisty to include disclosures by Benisty 2 since they both teach data storage, wherein Benisty 2 is directed towards improved data storage (para. 2). Therefore, it would be applying a known technique (use of host data buffers of the same fixed size for data transfers) to a known device (storing write data for a write command in respective sections of predetermined size in host memory, use of the number of said sections/pages in arbitrating command fetching) ready for improvement to yield predictable results (system storing write data for a write command in respective buffers of a fixed size in host memory and using the number of said buffers in arbitrating command fetching, where use of the respective buffers may provide for improved data throughput by utilizing dedicated buffers for holding write data to be transferred). MPEP 2143.
Benisty in view of Benisty 2 does not explicitly disclose, but Kanno discloses:
a number of host buffers that will be released when the outstanding command is completed; a highest number of host buffers that will be released [Benisty in view of Benisty 2 as shown above teaches prioritizing commands based on transfer data size comprising a number of buffers of a fixed size (e.g. 4KB) on the host (see above; Benisty: para. 22, 27, 57-58, 74-76; Benisty 2: para. 36); Benisty also discloses, responsive to completion of command processing, releasing host system internal resources such as buffers (Benisty: para. 105); Benisty in view of Benisty 2 does not explicitly disclose, but Kanno discloses, after completion of a write command, releasing a region of a host buffer comprising the write data associated with the completed write command (para. 142-143, 166-167; figs. 10, 14 and associated paragraphs), where it would have been obvious for one of ordinary skill in the art, provided with disclosures by Benisty in view of Benisty 2, directed towards prioritizing commands based on the number of buffers of a fixed size holding data associated with the commands and releasing host system internal resources such as buffers upon command completion, together with disclosures by Kanno providing for releasing a region of a host buffer comprising write data associated with a completed write command, to provide for recognizing buffers holding the data to be releasable upon the command’s completion. Doing so would provide for improved host memory management by anticipating the number of buffers to be freed upon a command’s completion.]
Benisty, Benisty 2, and Kanno are analogous to the claimed invention because they are in the same field of endeavor involving data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Benisty in view of Benisty 2 and Kanno, to modify the disclosures by Benisty in view of Benisty 2 to include disclosures by Kanno since they both teach data storage, wherein Kanno is directed towards improved controlling of a nonvolatile memory (para. 2). Therefore, it would be applying a known technique (responsive to completion of a write command, releasing the host buffer region storing associated data) to a known device (system storing write data for a write command in respective buffers of a fixed size in host memory and using the number of said buffers in arbitrating command fetching) ready for improvement to yield predictable results (system storing write data for a write command in respective buffers of a fixed size in host memory and using the number of said buffers anticipated to be released upon a command completion in arbitrating command fetching; doing so would provide for improved host memory management). MPEP 2143.
Benisty in view of Benisty 2 in view of Kanno does not explicitly disclose, but Brue discloses:
a highest number of host buffers that will be released [Brue teaches a system configurable for prioritizing processing of large write commands over small write commands based on storage usage scenario (para. 44-45, 7, 21)]
Benisty, Benisty 2, Kanno, and Brue are analogous to the claimed invention because they are in the same field of endeavor involving data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Benisty in view of Benisty 2 in view of Kanno and Brue, to modify the disclosures by Benisty in view of Benisty 2 in view of Kanno to include disclosures by Brue since they both teach data storage, wherein Brue is directed towards improved storage performance (para. 8). Therefore, it would be applying a known technique (based on storage usage scenario, prioritizing processing of large write commands over small write commands) to a known device (a system prioritizing commands based on the number of buffers anticipated to be released upon command completion) ready for improvement to yield predictable results (a system prioritizing commands based on the number of buffers anticipated to be released upon command completion, wherein larger commands with a larger associated number of buffers may be prioritized for processing in order to provide for greater flexibility in optimizing command processing for different usage scenarios). MPEP 2143.
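For illustration only (not part of the cited disclosures), the arbitration described in the rejections above — selecting the outstanding command anticipated to release the most host buffers upon completion — can be sketched with a hypothetical data model:

```python
# Hypothetical sketch: arbitrate among outstanding commands by selecting
# the one anticipated to release the most host buffers on completion.
# The (command_id, buffers_released) data model is illustrative only and
# is not drawn from any cited reference.

def select_command(outstanding):
    """Return the id of the command that frees the most host buffers."""
    return max(outstanding, key=lambda cmd: cmd[1])[0]

# Example: a 512KB command held in 128 4KB buffers wins arbitration over
# smaller commands.
queue = [("cmd_a", 1), ("cmd_b", 128), ("cmd_c", 16)]
print(select_command(queue))  # -> cmd_b
```

This sketch only captures the selection criterion (highest anticipated buffer release count); the cited references describe additional arbitration factors such as command phase and priority.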
Benisty in view of Benisty 2 in view of Kanno in view of Brue does not explicitly disclose, but Hong discloses:
determine whether there is available memory in a host memory buffer (HMB) to cache write commands; and copy command payload buffers into the HMB. [Hong teaches a storage device fetching a command entry from a host submission queue, copying to a host memory buffer write data associated with the command (para. 43-45), and determining that the copy operation is complete (para. 55-57), where confirming completion of the copy operation may correspond to determining that the HMB had available memory for receiving the data]
Benisty, Benisty 2, Kanno, Brue, and Hong are analogous to the claimed invention because they are in the same field of endeavor of data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the disclosures provided by Benisty in view of Benisty 2 in view of Kanno in view of Brue with Hong’s disclosures directed towards using a host memory buffer as a data cache. Doing so would provide for improved overall performance of a device (para. 76).
As per claim 12, Benisty in view of Benisty 2 in view of Kanno in view of Brue in view of Hong teaches all the limitations of claim 11 as shown above and further teaches:
12. The data storage device of claim 11, wherein the commands selected do not have a force unit access (FUA) flag set. [Benisty teaches prioritizing commands based on transfer data size as shown above (para. 27, portion discussing the third example), where said method is not stated as requiring setting of a FUA flag]
As per claim 13, Benisty in view of Benisty 2 in view of Kanno in view of Brue in view of Hong teaches all the limitations of claim 11 as shown above and further teaches:
13. The data storage device of claim 11, further comprising a volatile write cache. [Hong as shown above teaches copying to host memory buffer write data associated with the command (para. 43-45); Hong teaches that the buffer memory comprising the host memory buffer may comprise DRAM and be used as RAM cache (para. 72-76)]
Benisty, Benisty 2, Kanno, Brue, and Hong are analogous to the claimed invention because they are in the same field of endeavor of data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the disclosures provided by Benisty in view of Benisty 2 in view of Kanno in view of Brue with Hong’s disclosures directed towards using a host memory buffer as a data cache. Doing so would provide for improved overall performance of a device (para. 76).
As per claim 22, Benisty in view of Benisty 2 in view of Kanno in view of Brue in view of Hong teaches claim 11 as shown above and further teaches:
22. The data storage device of claim 11, wherein the host buffers are in physical region page (PRP) lists. [Benisty in view of Benisty 2 in view of Kanno in view of Brue in view of Hong as shown above teaches host buffers containing write data and a PRP list comprising entries with the location of the write data (Benisty: para. 22, 27, 57-58, 74-76; Benisty 2: para. 36; fig. 3A and associated paragraphs)]
Benisty and Benisty 2 are analogous to the claimed invention because they are in the same field of endeavor involving data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Benisty and Benisty 2, to modify the disclosures by Benisty to include disclosures by Benisty 2 since they both teach data storage, wherein Benisty 2 is directed towards improved data storage (para. 2). Therefore, it would be applying a known technique (use of host data buffers of the same fixed size for data transfers) to a known device (storing write data for a write command in respective sections of predetermined size in host memory, use of the number of said sections/pages in arbitrating command fetching) ready for improvement to yield predictable results (system storing write data for a write command in respective buffers of a fixed size in host memory and using the number of said buffers in arbitrating command fetching, where use of the respective buffers may provide for improved data throughput by utilizing dedicated buffers for holding write data to be transferred). MPEP 2143.
Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Benisty et al. (US 20180321844 A1) in view of Benisty (US 20210240641 A1, hereinafter Benisty 2) in view of Kanno (US 20200241797 A1) in view of Brue et al. (US 20180136839 A1) in view of Hong et al. (US 20180107614 A1) in view of Huang et al. (US 20200098423 A1).
As per claim 14, Benisty in view of Benisty 2 in view of Kanno in view of Brue in view of Hong teaches claim 11 as shown above. It does not explicitly disclose, but Huang discloses:
14. The data storage device of claim 11, wherein the controller is further configured to rewrite PRP target addresses after the copying. [Where Benisty in view of Benisty 2 in view of Kanno in view of Brue in view of Hong teaches PRP entries indicating the location of write data (Benisty: para. 74-75) and moving of write data to a host memory buffer (Hong: para. 43-45), it does not explicitly teach updating mapping information to reflect the new location of the write data. However, Huang discloses a storage controller caching data in an HMB and then updating mapping information corresponding to the write address (para. 39-43)]
Benisty, Benisty 2, Kanno, Brue, Hong, and Huang are analogous to the claimed invention because they are in the same field of endeavor involving data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the disclosures provided by Benisty in view of Benisty 2 in view of Kanno in view of Brue in view of Hong with Huang’s disclosures directed towards caching data for a write command into an HMB and subsequently updating the address of the write data to reflect its location in the HMB. Doing so would allow for an extension of the lifetime of the data storage device (para. 39).
As per claim 15, Benisty in view of Benisty 2 in view of Kanno in view of Brue in view of Hong in view of Huang teaches claim 14 as shown above and further teaches:
15. The data storage device of claim 14, wherein the controller is further configured to send a completion queue (CQ) entry to a host device after the copying. [Hong teaches transferring a completion notification to completion queue of the host after completion of the copy operation (para. 46)]
Benisty, Benisty 2, Kanno, Brue, and Hong are analogous to the claimed invention because they are in the same field of endeavor of data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the disclosures provided by Benisty in view of Benisty 2 in view of Kanno in view of Brue with Hong’s disclosures directed towards using a host memory buffer as a data cache. Doing so would provide for improved overall performance of a device (para. 76).
Claims 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Benisty et al. (US 20180321844 A1) in view of Benisty (US 20210240641 A1, hereinafter Benisty 2) in view of Kanno (US 20200241797 A1) in view of Brue et al. (US 20180136839 A1) in view of Hong et al. (US 20180107614 A1) in view of Huang et al. (US 20200098423 A1) in view of Moon et al. (US 20150278104 A1).
As per claim 16, Benisty in view of Benisty 2 in view of Kanno in view of Brue in view of Hong in view of Huang teaches claim 15 as shown above and further teaches:
16. The data storage device of claim 15, wherein the controller is configured to release host memory after the copying and rewriting. [Benisty in view of Benisty 2 in view of Kanno in view of Brue in view of Hong in view of Huang as shown above teaches storing data to the HMB and updating the location of the write data to reflect its location in the HMB (Huang: para. 39-43); Huang further teaches subsequently flushing the data from the HMB (para. 43-44)]
Benisty in view of Benisty 2 in view of Kanno in view of Brue in view of Hong in view of Huang does not explicitly disclose, but Moon discloses:
release [while Huang is not explicit with respect to the flush operation freeing the HMB locations storing the data, Moon discloses performing a flush operation from a first memory to a second memory, the flush operation involving reallocating free space in the first memory (para. 40-43, 56)]
Benisty, Benisty 2, Kanno, Brue, Hong, and Huang are analogous to the claimed invention because they are in the same field of endeavor involving data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the disclosures provided by Benisty in view of Benisty 2 in view of Kanno in view of Brue in view of Hong with Huang’s disclosures directed towards caching data for a write command into an HMB and subsequently updating the address of the write data to reflect its location in the HMB. Doing so would allow for an extension of the lifetime of the data storage device (para. 39).
Benisty, Benisty 2, Kanno, Brue, Hong, Huang, and Moon are analogous to the claimed invention because they are in the same field of endeavor involving data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the disclosures provided by Benisty in view of Benisty 2 in view of Kanno in view of Brue in view of Hong in view of Huang with Moon’s disclosures directed towards implementation of a flush operation involving allocating free space in the flushed memory. Doing so would allow for effective facilitation of data access operations (para. 43).
As per claim 17, Benisty in view of Benisty 2 in view of Kanno in view of Brue in view of Hong in view of Huang in view of Moon teaches claim 16 as shown above and further teaches:
17. The data storage device of claim 16, wherein the controller is configured to process commands after sending the CQ entry. [Hong teaches the completion notification being transferred to the host’s completion queue prior to the write data being written to the storage device (para. 46-47; fig. 4 and associated paragraphs)]
Benisty, Benisty 2, Kanno, Brue, and Hong are analogous to the claimed invention because they are in the same field of endeavor of data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the disclosures provided by Benisty in view of Benisty 2 in view of Kanno in view of Brue with Hong’s disclosures directed towards using a host memory buffer as a data cache. Doing so would provide for improved overall performance of a device (para. 76).
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Benisty et al. (US 20180321844 A1) in view of Benisty (US 20210240641 A1, hereinafter Benisty 2) in view of Kanno (US 20200241797 A1) in view of Brue et al. (US 20180136839 A1) in view of Goldstein et al. (US 11714767 B1).
As per claim 19, Benisty in view of Benisty 2 in view of Kanno in view of Brue teaches claim 18 as shown above. It does not explicitly disclose, but Goldstein discloses:
19. The data storage device of claim 18, wherein the commands are scatter gather lists (SGLs). [Goldstein teaches a data transfer command for performing read or write, where the command may be in the form of an SGL (col. 3, line 64 – col. 4, line 15)]
Benisty, Benisty 2, Kanno, Brue and Goldstein are analogous to the claimed invention because they are in the same field of endeavor involving data storage.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, having knowledge of Benisty, Benisty 2, Kanno, Brue, and Goldstein, to modify the disclosures by Benisty, Benisty 2, Kanno, and Brue to include disclosures by Goldstein since they all teach data storage, wherein Goldstein is directed towards improved hardware resource management for storage (col. 4, lines 16-23). Therefore, it would be applying a known technique (read or write commands comprising an SGL) to a known device (storage system issuing read or write commands to a device) ready for improvement to yield predictable results (storage system issuing read or write commands comprising an SGL to a device to provide more efficient data transfer based on the parameters of the data transfer included in the SGL). MPEP 2143.
Response to Arguments
Applicant’s arguments with respect to amended claims 1, 11, 18, and claims depending therefrom under 35 USC 103 have been fully considered.
With respect to the amendments to the claims, while the remarks provide for, for example, processing of a first 512KB command freeing up a single host buffer and processing of a second 512KB command freeing up more than a single host buffer, the examiner respectfully submits that the amended claim limitations, as they are recited in the claims, may be interpreted under the broadest reasonable interpretation as merely requiring a 512KB command to be capable of being stored in a plurality of buffers. Specifically, the limitation(s) stating that a 512KB command can be disposed in a single host buffer or in multiple host buffers may be interpreted as requiring either that the 512KB command be capable of being stored in a single buffer or that the 512KB command be capable of being stored using a plurality of buffers. The disclosures of Benisty (para. 74-76) in view of Benisty 2 (para. 36), providing for using a plurality of 4KB buffers for storing command data, have been interpreted to correspond to using multiple buffers for the command (please see claims 1, 11, and 18 above).
Relevant Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Toyoda et al. (US 20150095621 A1) teaches buffers configured to hold access requests and an arbitrator configured to select one of the access requests in the buffers in accordance with a number of remaining resources of the memory.
Hwang et al. (US 20170092366 A1) teaches write commands comprising 512KB of write data.
Kedem (US 9098203 B1) teaches a scheduler issuing commands to a memory controller in an order based on priority metrics assigned to the memory commands, the priority metric reflecting a number of commands in a sub-buffer exceeding a threshold.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ELIAS KIM whose telephone number is (571)272-8093. The examiner can normally be reached Monday - Friday: 7:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JARED RUTZ can be reached on 571-272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/E.Y.K./Examiner, Art Unit 2135
/JARED I RUTZ/Supervisory Patent Examiner, Art Unit 2135