DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 8 and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding claim 8, the claim recites “… transmitting the set of one or more data blocks to the host memory without receiving a request from the host system for any of the one or more data blocks.” Lines 3-4 recite “without receiving a request from the host system for any of the one or more data blocks…”; however, the claim has already recited a request from the host system for the data block. Thus, it is unclear whether this request is the previously recited request or a different request altogether. Furthermore, the claim previously recited a step of transmitting the data block responsive to receiving the request to access the data block; thus, it is unclear how the transmitting of the data block occurs without receiving a request from the host system for said data block. For purposes of examination, this limitation shall be interpreted as transmitting a different set of one or more data blocks to the host memory without receiving a second request from the host system for any of the different set of one or more data blocks.
Regarding claim 19, the claim recites “… transmitting the set of one or more data blocks to the host memory without receiving a read request from the host system for any of the one or more data blocks.” Lines 3-4 recite “without receiving a read request from the host system for any of the one or more data blocks…”; however, the claim has already recited a read request from the host system for the data block. Thus, it is unclear whether this request is the previously recited read request or a different read request altogether. Furthermore, the claim previously recited a step of transmitting the data block responsive to receiving the read request to access the data block; thus, it is unclear how the transmitting of the data block occurs without receiving a read request from the host system for said data block. For purposes of examination, this limitation shall be interpreted as transmitting a different set of one or more data blocks to the host memory without receiving a second read request from the host system for any of the different set of one or more data blocks.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1 and 11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Jang et al. (US 11,093,174 B1), hereinafter Jang et al.
Regarding claim 1, Jang et al. teaches a system comprising:
a memory device (solid state drive 164); and
a processing device operatively coupled to the memory device (processors coupled to the buffer and SSD Column 6, Lines 31-49), to perform operations comprising:
receiving a request of a host system to access a data block in the memory device (a host (such as information handling system 100) issues read and write requests to solid state drive 164, Column 6, Lines 50-65);
determining that the data block stored in a first buffer in a host memory is related to a set of one or more data blocks stored at the memory device (the host memory buffer may be any portion of the memory that is configured for the solid state drive (SSD), Column 6, Lines 31-49); and
storing the set of one or more data blocks in a second buffer in the host memory (host memory buffer is a cache memory used to perform read request processing, Column 6, Lines 50-65).
Claim 11 is rejected under 35 U.S.C. 102(a)(1) for the same reasons as claim 1, as outlined above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Jang et al. in view of Muthiah et al. (US 2022/0292020 A1), hereinafter Muthiah et al.
Regarding claim 8, Jang et al. teaches all of the features with respect to claim 1 as outlined above.
Jang et al. does not appear to explicitly teach, however, Muthiah et al. teaches wherein the operations further comprise transmitting the data block to the host memory responsive to receiving the request to access the data block, and transmitting the set of one or more data blocks to the host memory without receiving a request from the host system for any of the one or more data blocks (using an application ID, a pre-loading of pages for logical regions of data associated with the application ID can occur, that is, “pre-loaded” in the sense that they are cached before the host issues a storage command with a logical address to the storage device, Paragraph [0054]).
The disclosures of Jang et al. and Muthiah et al., hereinafter JM, are analogous art to the claimed invention because they are in the same field of endeavor of I/O execution and/or prefetching.
Therefore, it would have been obvious to one of ordinary skill in the art, having the teachings of JM before the effective filing date of the claimed invention, to modify the teachings of Jang et al. by transmitting the set of one or more data blocks to the host memory without receiving a request from the host system for any of the one or more data blocks, as taught by Muthiah et al.
One of ordinary skill in the art would have been motivated to include transmitting the set of one or more data blocks to the host memory without receiving a request from the host system for any of the one or more data blocks because pre-loading data provides for a faster response to a host storage command.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Jang et al. in view of Hyun (US 2023/0063123 A1), hereinafter Hyun.
Regarding claim 9, Jang et al. teaches all of the features with respect to claim 1 as outlined above.
Jang et al. does not appear to explicitly teach, however, Hyun teaches wherein the processing device is included in a first level of a storage hierarchy that comprises secondary storage and prefetches data of the set and pushes the data of the set to a second level of the storage hierarchy that comprises the host memory as primary storage (the host processor may determine target data to be prefetched from a second tier memory to the first tier memory in an access request of a host Paragraph [0018], where the second tier memory is first memory module Figure 13, 1000 which is a second-tier module having a lower priority and buffer memory Figure 13, 3300 of host is the first tier memory Paragraphs [0176]-[0177]).
The disclosures of Jang et al. and Hyun, hereinafter JH, are analogous art to the claimed invention because they are in the same field of endeavor of I/O execution and/or prefetching.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of JH before them, to modify the teachings of Jang et al. to include the teachings of Hyun, since both Jang et al. and Hyun teach performing I/O operations and transferring data to the host. This amounts to applying a known technique (prefetching data from a second tier memory to a host memory, [0176]-[0177] of Hyun) to a known device (a memory system storing data in a buffer of a host, as in Jang et al.) ready for improvement to yield predictable results (data is prefetched from a second tier memory to the host, as in Hyun). KSR, MPEP 2143.
Claims 10 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Jang et al. in view of Choi et al. (US 2022/0222011 A1), hereinafter Choi et al.
Regarding claim 10, Jang et al. teaches all of the features with respect to claim 1 as outlined above.
Jang et al. does not appear to explicitly teach, however, Choi et al. teaches wherein the operations further comprise: transmitting, to the host system, a response indicating that the data block is stored in the first buffer in the host memory (a response signal CR is generated and sent to the file system by host write buffer 210 to indicate that the command was received at the host write buffer Paragraph [0041]).
The disclosures of Jang et al. and Choi et al., hereinafter JC, are analogous art to the claimed invention because they are in the same field of endeavor of I/O execution and/or prefetching.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of JC before them, to modify the teachings of Jang et al. to include the teachings of Choi et al., since both Jang et al. and Choi et al. teach performing I/O operations and transferring data to the host. This amounts to applying a known technique (a response indicating the data block is stored in a host memory buffer, [0041], [0082] of Choi et al.) to a known device (a memory system storing data in a buffer of a host, as in Jang et al.) ready for improvement to yield predictable results (the host receives a response indicating the write was received at the host buffer, as in Choi et al.). KSR, MPEP 2143.
Regarding claim 16, Jang et al. teaches a system comprising:
a memory device (solid state drive 164); and
a processing device operatively coupled to the memory device (processors coupled to the buffer and SSD Column 6, Lines 31-49), to perform operations comprising:
receiving a write request comprising a data block, wherein the data block is stored in a first buffer (a host (such as information handling system 100) issues write requests, where the host memory buffer is used as a write cache Column 6, Lines 50-65);
receiving a read request for the data block (read request operations can be directed to the host memory buffer, Column 6, Lines 50-65);
determining that the data block is related to a set of one or more data blocks stored at the memory device (the host memory buffer may be any portion of the memory that is configured for the solid state drive (SSD) Column 6, Lines 31-49); and
storing the set of one or more data blocks in a second buffer in the host memory (host memory buffer is a cache memory used to perform read request processing, Column 6, Lines 50-65).
Jang et al. does not appear to explicitly teach, however, Choi et al. teaches the write request comprising a stream identifier, wherein the data block corresponds to the stream identifier (host write buffer 710 is used to store data for each stream, in which when a write command is transmitted, write data for each stream may be managed by using a stream ID as a delimiter Paragraph [0082]) and transmitting a response to a host system, the response indicating that the data block is stored in a first buffer in host memory (a response signal CR is generated and sent to the file system by host write buffer 210 to indicate that the command was received at the host write buffer Paragraph [0041]).
The disclosures of Jang et al. and Choi et al., hereinafter JC, are analogous art to the claimed invention because they are in the same field of endeavor of I/O execution and/or prefetching.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of JC before them, to modify the teachings of Jang et al. to include the teachings of Choi et al., since both Jang et al. and Choi et al. teach performing I/O operations and transferring data to the host. This amounts to applying a known technique (the write request includes a stream ID and a response indicating the data block is stored in a host memory buffer, [0041], [0082] of Choi et al.) to a known device (a memory system storing data in a buffer of a host, as in Jang et al.) ready for improvement to yield predictable results (a stream ID is included in a request and the host receives a response indicating the write was received at the host buffer, as in Choi et al.). KSR, MPEP 2143.
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over JC in further view of Muthiah et al. (US 2022/0292020 A1), hereinafter Muthiah et al.
Regarding claim 19, JC teaches all of the features with respect to claim 16 as outlined above.
JC does not appear to explicitly teach, however, Muthiah et al. teaches wherein the operations further comprise transmitting the data block to the host memory responsive to the receiving of the read request for the data block and transmitting the set of one or more data blocks to the host memory without receiving a read request from the host system for any of the one or more data blocks in the set (using an application ID, a pre-loading of pages for logical regions of data associated with the application ID can occur, that is, “pre-loaded” in the sense that they are cached before the host issues a storage command with a logical address to the storage device, Paragraph [0054]).
The disclosures of JC and Muthiah et al., hereinafter JCM, are analogous art to the claimed invention because they are in the same field of endeavor of I/O execution and/or prefetching.
Therefore, it would have been obvious to one of ordinary skill in the art, having the teachings of JCM before the effective filing date of the claimed invention, to modify the teachings of JC by transmitting the set of one or more data blocks to the host memory without receiving a read request from the host system for any of the one or more data blocks, as taught by Muthiah et al.
One of ordinary skill in the art would have been motivated to include transmitting the set of one or more data blocks to the host memory without receiving a read request from the host system for any of the one or more data blocks because pre-loading data provides for a faster response to a host storage command.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over JC in further view of Hyun (US 2023/0063123 A1), hereinafter Hyun.
Regarding claim 20, JC teaches all of the features with respect to claim 16 as outlined above.
JC does not appear to explicitly teach, however, Hyun teaches wherein the processing device is included in a first level of a storage hierarchy that comprises secondary storage and prefetches data of the set and pushes the data of the set to a second level of the storage hierarchy that comprises the host memory as primary storage (the host processor may determine target data to be prefetched from a second tier memory to the first tier memory in an access request of a host Paragraph [0018], where the second tier memory is first memory module Figure 13, 1000 which is a second-tier module having a lower priority and buffer memory Figure 13, 3300 of host is the first tier memory Paragraphs [0176]-[0177]).
The disclosures of JC and Hyun, hereinafter JCH, are analogous art to the claimed invention because they are in the same field of endeavor of I/O execution and/or prefetching.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of JCH before them, to modify the teachings of JC to include the teachings of Hyun, since both JC and Hyun teach performing I/O operations and transferring data to the host. This amounts to applying a known technique (prefetching data from a second tier memory to a host memory, [0176]-[0177] of Hyun) to a known device (a memory system storing data in a buffer of a host, as in Jang et al.) ready for improvement to yield predictable results (data is prefetched from a second tier memory to the host, as in Hyun). KSR, MPEP 2143.
Double Patenting
A rejection based on double patenting of the “same invention” type finds its support in the language of 35 U.S.C. 101 which states that “whoever invents or discovers any new and useful process... may obtain a patent therefor...” (Emphasis added). Thus, the term “same invention,” in this context, means an invention drawn to identical subject matter. See Miller v. Eagle Mfg. Co., 151 U.S. 186 (1894); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Ockert, 245 F.2d 467, 114 USPQ 330 (CCPA 1957).
A statutory type (35 U.S.C. 101) double patenting rejection can be overcome by canceling or amending the claims that are directed to the same invention so they are no longer coextensive in scope. The filing of a terminal disclaimer cannot overcome a double patenting rejection based upon 35 U.S.C. 101.
Claims 2-7 and 12-14 are rejected under 35 U.S.C. 101 as claiming the same invention as that of the claims of prior U.S. Patent No. 12182024 B2 (Parent Application No. 18/508,141). This is a statutory double patenting rejection.
Instant Application No. 19/005,870
Parent Application No. 18/508,141
US Patent No. 12182024 B2
2. The system of claim 1, wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
1. A system comprising:
a memory device; and
a processing device operatively coupled to the memory device, to perform operations comprising:
receiving a request of a host system to access a data block in the memory device;
determining that the data block stored in a first buffer in a host memory is related to a set of one or more data blocks stored at the memory device; and
storing the set of one or more data blocks in a second buffer in the host memory,
wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
3. The system of claim 2, wherein the processing device is a memory controller of the memory sub-system and wherein the first buffer comprises a page cache that is managed by the host system, and wherein the second buffer in the host memory comprises a Host Memory Buffer (HMB) that is exclusively controlled by the memory controller.
2. The system of claim 1, wherein the processing device is a memory controller of the memory sub-system and wherein the first buffer comprises a page cache that is managed by the host system, and wherein the second buffer in the host memory comprises a Host Memory Buffer (HMB) that is exclusively controlled by the memory controller.
4. The system of claim 2, wherein the operations further comprise establishing the second buffer in the host memory, wherein the establishing comprises: transmitting, by the memory sub-system to the host system, an indication of a size of a region in the host memory; receiving, by the memory sub-system from the host system, a location of the region in the host memory; and updating the region to comprise the second buffer to store data blocks and to comprise a data structure indicating the data blocks stored in the second buffer.
3. The system of claim 1, wherein the operations further comprise establishing the second buffer in the host memory, wherein the establishing comprises: transmitting, by the memory sub-system to the host system, an indication of a size of a region in the host memory; receiving, by the memory sub-system from the host system, a location of the region in the host memory; and updating the region to comprise the second buffer to store data blocks and to comprise a data structure indicating the data blocks stored in the second buffer.
5. The system of claim 2, wherein the memory sub-system comprises a Solid State Drive (SSD) that comprises the processing device, RAM, and NAND, and wherein the processing device of the SSD prefetches a quantity of data from the NAND that exceeds a capacity of the RAM and stores the prefetched data in the second buffer in the host memory.
6. The system of claim 1, wherein the memory sub-system comprises a Solid State Drive (SSD) that comprises the processing device, RAM, and NAND, and wherein the processing device of the SSD prefetches a quantity of data from the NAND that exceeds a capacity of the RAM and stores the prefetched data in the second buffer in the host memory.
6. The system of claim 2, wherein the operations further comprise: receiving a plurality of write requests for the data block and at least one data block of the set, wherein each of the plurality of write requests comprise a particular stream identifier; and updating a data structure stored by the memory sub-system to indicate the data block and the at least one data block of the set are related to the particular stream identifier.
7. The system of claim 1, wherein the operations further comprise: receiving a plurality of write requests for the data block and at least one data block of the set, wherein each of the plurality of write requests comprise a particular stream identifier; and updating a data structure stored by the memory sub-system to indicate the data block and the at least one data block of the set are related to the particular stream identifier.
7. The system of claim 2, wherein determining the data block is related to the set of one or more data blocks comprises: accessing a data structure in the memory sub-system that comprises mapping data corresponding to location identifiers and stream identifiers, wherein the location identifiers comprise a Logical Block Address (LBA); and determining, based on the data structure, that the data block is related to each of the one or more data blocks in the set.
8. The system of claim 1, wherein determining the data block is related to the set of one or more data blocks comprises: accessing a data structure in the memory sub-system that comprises mapping data corresponding to location identifiers and stream identifiers, wherein the location identifiers comprise a Logical Block Address (LBA); and determining, based on the data structure, that the data block is related to each of the one or more data blocks in the set.
12. The method of claim 11, wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
10. A method comprising:
receiving a request to access a data block in a memory device from a host system;
determining that the data block stored in a first buffer in a host memory is related to a set of one or more data blocks stored at the memory device; and
storing the set of one or more data blocks in a second buffer in the host memory, wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
13. The method of claim 12, wherein the method is performed by a processing device that is a memory controller of the memory sub-system and wherein the first buffer comprises a page cache that is managed by the host system, and wherein the second buffer in the host memory comprises a Host Memory Buffer (HMB) that is exclusively controlled by the memory controller.
11. The method of claim 10, wherein the method is performed by a processing device that is a memory controller of the memory sub-system and wherein the first buffer comprises a page cache that is managed by the host system, and wherein the second buffer in the host memory comprises a Host Memory Buffer (HMB) that is exclusively controlled by the memory controller.
14. The method of claim 12, further comprising establishing the second buffer in the host memory, wherein the establishing comprises: transmitting, by the memory sub-system to the host system, an indication of a size of a region in host memory; receiving, by the memory sub-system from the host system, a location of the region in the host memory; and updating the region to comprise the second buffer to store data blocks and to comprise a data structure indicating the data blocks stored in the second buffer.
12. The method of claim 10, further comprising establishing the second buffer in the host memory, wherein the establishing comprises: transmitting, by the memory sub-system to the host system, an indication of a size of a region in host memory; receiving, by the memory sub-system from the host system, a location of the region in the host memory; and updating the region to comprise the second buffer to store data blocks and to comprise a data structure indicating the data blocks stored in the second buffer.
Claims 2-7 and 12-14 are rejected under 35 U.S.C. 101 as claiming the same invention as that of the claims of prior U.S. Patent No. 11816035 B2 (Parent Application No. 17/557,406). This is a statutory double patenting rejection.
Instant Application No. 19/005,870
Parent Application No. 17/557,406
US Patent No. 11816035 B2
2. The system of claim 1, wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
1. A system comprising:
a memory device; and
a processing device operatively coupled to the memory device, to perform operations comprising:
receiving a request of a host system to access a data block in the memory device;
determining that the data block stored in a first buffer in a host memory is related to a set of one or more data blocks stored at the memory device; and
storing the set of one or more data blocks in a second buffer in the host memory,
wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
3. The system of claim 2, wherein the processing device is a memory controller of the memory sub-system and wherein the first buffer comprises a page cache that is managed by the host system, and wherein the second buffer in the host memory comprises a Host Memory Buffer (HMB) that is exclusively controlled by the memory controller.
2. The system of claim 1, wherein the processing device is a memory controller of the memory sub-system and wherein the first buffer comprises a page cache that is managed by the host system, and wherein the second buffer in the host memory comprises a Host Memory Buffer (HMB) that is exclusively controlled by the memory controller.
4. The system of claim 2, wherein the operations further comprise establishing the second buffer in the host memory, wherein the establishing comprises: transmitting, by the memory sub-system to the host system, an indication of a size of a region in the host memory; receiving, by the memory sub-system from the host system, a location of the region in the host memory; and updating the region to comprise the second buffer to store data blocks and to comprise a data structure indicating the data blocks stored in the second buffer.
3. The system of claim 1, wherein the operations further comprise establishing the second buffer in the host memory, wherein the establishing comprises: transmitting, by the memory sub-system to the host system, an indication of a size of a region in the host memory; receiving, by the memory sub-system from the host system, a location of the region in the host memory; and updating the region to comprise the second buffer to store data blocks and to comprise a data structure indicating the data blocks stored in the second buffer.
5. The system of claim 2, wherein the memory sub-system comprises a Solid State Drive (SSD) that comprises the processing device, RAM, and NAND, and wherein the processing device of the SSD prefetches a quantity of data from the NAND that exceeds a capacity of the RAM and stores the prefetched data in the second buffer in the host memory.
6. The system of claim 1, wherein the memory sub-system comprises a Solid State Drive (SSD) that comprises the processing device, RAM, and NAND, and wherein the processing device of the SSD prefetches a quantity of data from the NAND that exceeds a capacity of the RAM and stores the prefetched data in the second buffer in the host memory.
6. The system of claim 2, wherein the operations further comprise: receiving a plurality of write requests for the data block and at least one data block of the set, wherein each of the plurality of write requests comprise a particular stream identifier; and updating a data structure stored by the memory sub-system to indicate the data block and the at least one data block of the set are related to the particular stream identifier.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1 and 8-15 are rejected on the ground of nonstatutory double patenting as being unpatentable over the claims of U.S. Patent No. 12182024 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are anticipated by the claims of Patent No. 12182024 B2.
Instant Application No. 19/005,870
Parent Application No. 18/508,141
US Patent No. 12182024 B2
1. A system comprising:
a memory device; and
a processing device operatively coupled to the memory device, to perform operations comprising:
receiving a request of a host system to access a data block in the memory device;
determining that the data block stored in a first buffer in a host memory is related to a set of one or more data blocks stored at the memory device; and
storing the set of one or more data blocks in a second buffer in the host memory.
1. A system comprising:
a memory device; and
a processing device operatively coupled to the memory device, to perform operations comprising:
receiving a request of a host system to access a data block in the memory device;
determining that the data block stored in a first buffer in a host memory is related to a set of one or more data blocks stored at the memory device; and
storing the set of one or more data blocks in a second buffer in the host memory, wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
8. The system of claim 1, wherein the operations further comprise transmitting the data block to the host memory responsive to receiving the request to access the data block, and transmitting the set of one or more data blocks to the host memory without receiving a request from the host system for any of the one or more data blocks.
4. The system of claim 1, wherein the operations further comprise transmitting the data block to the host memory responsive to receiving the request to access the data block, and transmitting the set of one or more data blocks to the host memory without receiving a request from the host system for any of the one or more data blocks.
9. The system of claim 1, wherein the processing device is included in a first level of a storage hierarchy that comprises secondary storage and prefetches data of the set and pushes the data of the set to a second level of the storage hierarchy that comprises the host memory as primary storage.
5. The system of claim 1, wherein the processing device is included in a first level of a storage hierarchy that comprises secondary storage and prefetches data of the set and pushes the data of the set to a second level of the storage hierarchy that comprises the host memory as primary storage.
10. The system of claim 1, wherein the operations further comprise: transmitting, to the host system, a response indicating that the data block is stored in the first buffer in the host memory.
9. The system of claim 1, wherein the operations further comprise: transmitting, to the host system, a response indicating that the data block is stored in the first buffer in the host memory.
11. A method comprising:
receiving a request to access a data block in a memory device from a host system;
determining that the data block stored in a first buffer in a host memory is related to a set of one or more data blocks stored at the memory device; and
storing the set of one or more data blocks in a second buffer in the host memory.
10. A method comprising:
receiving a request to access a data block in a memory device from a host system;
determining that the data block stored in a first buffer in a host memory is related to a set of one or more data blocks stored at the memory device; and
storing the set of one or more data blocks in a second buffer in the host memory, wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
12. The method of claim 11, wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
10. … wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
13. The method of claim 12, wherein the method is performed by a processing device that is a memory controller of the memory sub-system and wherein the first buffer comprises a page cache that is managed by the host system, and wherein the second buffer in the host memory comprises a Host Memory Buffer (HMB) that is exclusively controlled by the memory controller.
11. The method of claim 10, wherein the method is performed by a processing device that is a memory controller of the memory sub-system and wherein the first buffer comprises a page cache that is managed by the host system, and wherein the second buffer in the host memory comprises a Host Memory Buffer (HMB) that is exclusively controlled by the memory controller.
14. The method of claim 12, further comprising establishing the second buffer in the host memory, wherein the establishing comprises: transmitting, by the memory sub-system to the host system, an indication of a size of a region in host memory; receiving, by the memory sub-system from the host system, a location of the region in the host memory; and updating the region to comprise the second buffer to store data blocks and to comprise a data structure indicating the data blocks stored in the second buffer.
12. The method of claim 10, further comprising establishing the second buffer in the host memory, wherein the establishing comprises: transmitting, by the memory sub-system to the host system, an indication of a size of a region in host memory; receiving, by the memory sub-system from the host system, a location of the region in the host memory; and updating the region to comprise the second buffer to store data blocks and to comprise a data structure indicating the data blocks stored in the second buffer.
15. The method of claim 12, further comprising: receiving a plurality of write requests for the data block and at least one data block of the set, wherein each of the plurality of write requests comprise a particular stream identifier; and updating a data structure stored by the memory sub-system to indicate the data block and the at least one data block of the set are related to the particular stream identifier.
15. The method of claim 10, further comprising: receiving a plurality of write requests for the data block and at least one data block of the set, wherein each of the plurality of write requests comprise a particular stream identifier; and updating a data structure stored by the memory sub-system to indicate the data block and the at least one data block of the set are related to the particular stream identifier.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over the claims of U.S. Patent No. 11816035 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are anticipated by the claims of Patent No. 11816035 B2.
Instant Application No. 19/005,870
Parent Application No. 17/557,406
US Patent No. 11816035 B2
1. A system comprising:
a memory device; and
a processing device operatively coupled to the memory device, to perform operations comprising:
receiving a request of a host system to access a data block in the memory device;
determining that the data block stored in a first buffer in a host memory is related to a set of one or more data blocks stored at the memory device; and
storing the set of one or more data blocks in a second buffer in the host memory.
1. A system comprising:
a memory device; and
a processing device operatively coupled to the memory device, to perform operations comprising:
receiving a request of a host system to access a data block in the memory device;
transmitting a response to the host system that indicates the data block is stored in a first buffer in host memory;
determining that the data block is related to a set of one or more data blocks stored at the memory device; and
storing the set of one or more data blocks in a second buffer in the host memory, wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
2. The system of claim 1, wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
1. … wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
3. The system of claim 2, wherein the processing device is a memory controller of the memory sub-system and wherein the first buffer comprises a page cache that is managed by the host system, and wherein the second buffer in the host memory comprises a Host Memory Buffer (HMB) that is exclusively controlled by the memory controller.
2. The system of claim 1, wherein the processing device is a memory controller of the memory sub-system and wherein the first buffer comprises a page cache that is managed by the host system, and wherein the second buffer in the host memory comprises a Host Memory Buffer (HMB) that is exclusively controlled by the memory controller.
4. The system of claim 2, wherein the operations further comprise establishing the second buffer in the host memory, wherein the establishing comprises: transmitting, by the memory sub-system to the host system, an indication of a size of a region in the host memory; receiving, by the memory sub-system from the host system, a location of the region in the host memory; and updating the region to comprise the second buffer to store data blocks and to comprise a data structure indicating the data blocks stored in the second buffer.
3. The system of claim 1, wherein the operations further comprise establishing the second buffer in the host memory, wherein the establishing comprises: transmitting, by the memory sub-system to the host system, an indication of a size of a region in host memory; receiving, by the memory sub-system from the host system, a location of the region in the host memory; and updating the region to comprise the second buffer to store data blocks and to comprise a data structure indicating the data blocks stored in the second buffer.
5. The system of claim 2, wherein the memory sub-system comprises a Solid State Drive (SSD) that comprises the processing device, RAM, and NAND, and wherein the processing device of the SSD prefetches a quantity of data from the NAND that exceeds a capacity of the RAM and stores the prefetched data in the second buffer in the host memory.
6. The system of claim 1, wherein the memory sub-system comprises a Solid State Drive (SSD) that comprises the processing device, RAM, and NAND, and wherein the processing device of the SSD prefetches a quantity of data from the NAND that exceeds a capacity of the RAM and stores the prefetched data in the second buffer in the host memory.
6. The system of claim 2, wherein the operations further comprise: receiving a plurality of write requests for the data block and at least one data block of the set, wherein each of the plurality of write requests comprise a particular stream identifier; and updating a data structure stored by the memory sub-system to indicate the data block and the at least one data block of the set are related to the particular stream identifier.
7. The system of claim 1, wherein the operations further comprise: receiving a plurality of write requests for the data block and at least one data block of the set, wherein each of the plurality of write requests comprise a particular stream identifier; and updating a data structure stored by the memory sub-system to indicate the data block and the at least one data block of the set are related to the particular stream identifier.
7. The system of claim 2, wherein determining the data block is related to the set of one or more data blocks comprises: accessing a data structure in the memory sub-system that comprises mapping data corresponding to location identifiers and stream identifiers, wherein the location identifiers comprise a Logical Block Address (LBA); and determining, based on the data structure, that the data block is related to each of the one or more data blocks in the set.
8. The system of claim 1, wherein determining the data block is related to the set of one or more data blocks comprises: accessing a data structure in the memory sub-system that comprises mapping data corresponding to location identifiers and stream identifiers, wherein the location identifiers comprise a Logical Block Address (LBA); and determining, based on the data structure, that the data block is related to each of the one or more data blocks in the set.
8. The system of claim 1, wherein the operations further comprise transmitting the data block to the host memory responsive to receiving the request to access the data block, and transmitting the set of one or more data blocks to the host memory without receiving a request from the host system for any of the one or more data blocks.
4. The system of claim 1, wherein the operations further comprise transmitting the data block to the host memory responsive to the receiving of the request for the data block and transmitting the set of one or more data blocks to the host memory without receiving a request from the host system for any of the one or more data blocks.
9. The system of claim 1, wherein the processing device is included in a first level of a storage hierarchy that comprises secondary storage and prefetches data of the set and pushes the data of the set to a second level of the storage hierarchy that comprises the host memory as primary storage.
5. The system of claim 1, wherein the processing device is included in a first level of a storage hierarchy that comprises secondary storage and prefetches data of the set and pushes the data of the set to a second level of the storage hierarchy that comprises the host memory as primary storage.
10. The system of claim 1, wherein the operations further comprise: transmitting, to the host system, a response indicating that the data block is stored in the first buffer in the host memory.
1. …
transmitting a response to the host system that indicates the data block is stored in a first buffer in host memory;
11. A method comprising:
receiving a request to access a data block in a memory device from a host system;
determining that the data block stored in a first buffer in a host memory is related to a set of one or more data blocks stored at the memory device; and
storing the set of one or more data blocks in a second buffer in the host memory.
9. A method comprising:
receiving a request to access a data block in the memory device from a host system;
transmitting a response to the host system that indicates the data block is stored in a first buffer in host memory;
determining that the data block is related to a set of one or more data blocks stored at the memory device; and
storing the set of one or more data blocks in a second buffer in the host memory, wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
12. The method of claim 11, wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
9. … wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
13. The method of claim 12, wherein the method is performed by a processing device that is a memory controller of the memory sub-system and wherein the first buffer comprises a page cache that is managed by the host system, and wherein the second buffer in the host memory comprises a Host Memory Buffer (HMB) that is exclusively controlled by the memory controller.
10. The method of claim 9, wherein the processing device is a memory controller of the memory sub-system and wherein the first buffer comprises a page cache that is managed by the host system, and wherein the second buffer in the host memory comprises a Host Memory Buffer (HMB) that is exclusively controlled by the memory controller.
14. The method of claim 12, further comprising establishing the second buffer in the host memory, wherein the establishing comprises: transmitting, by the memory sub-system to the host system, an indication of a size of a region in host memory; receiving, by the memory sub-system from the host system, a location of the region in the host memory; and updating the region to comprise the second buffer to store data blocks and to comprise a data structure indicating the data blocks stored in the second buffer.
11. The method of claim 9, further comprising establishing the second buffer in the host memory, wherein the establishing comprises: transmitting, by the memory sub-system to the host system, an indication of a size of a region in host memory; receiving, by the memory sub-system from the host system, a location of the region in the host memory; and updating the region to comprise the second buffer to store data blocks and to comprise a data structure indicating the data blocks stored in the second buffer.
15. The method of claim 12, further comprising: receiving a plurality of write requests for the data block and at least one data block of the set, wherein each of the plurality of write requests comprise a particular stream identifier; and updating a data structure stored by the memory sub-system to indicate the data block and the at least one data block of the set are related to the particular stream identifier.
14. The method of claim 9, further comprising: receiving a plurality of write requests for the data block and at least one data block of the set, wherein each of the plurality of write requests comprise a particular stream identifier; and updating a data structure stored by the memory sub-system to indicate the data block and the at least one data block of the set are related to the particular stream identifier.
16. A system comprising:
a memory device; and
a processing device operatively coupled to the memory device, to perform operations comprising:
receiving a write request comprising a data block and a stream identifier, wherein the data block is stored in a first buffer and corresponds to the stream identifier;
receiving a read request for the data block;
transmitting a response to a host system, the response indicating that the data block is stored in a first buffer in host memory;
determining that the data block is related to a set of one or more data blocks stored at the memory device; and
storing the set of one or more data blocks in a second buffer in the host memory.
16. A system comprising:
a memory device; and
a processing device operatively coupled to the memory device, to perform operations comprising:
receiving a write request comprising a data block and a stream identifier; updating a data structure to indicate the data block corresponds to the stream identifier;
receiving a read request for the data block;
transmitting a response to the host system that indicates the data block is stored in a first buffer in host memory;
determining, based on the data structure, that the data block is related to a set of one or more data blocks stored at the memory device; and
storing the set of one or more data blocks in a second buffer in the host memory, wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
17. The system of claim 16, wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
16. … wherein the first buffer is controlled by the host system and the second buffer is controlled by a memory sub-system.
18. The system of claim 17, wherein the processing device is a memory controller of the memory sub-system and wherein the first buffer comprises a page cache that is managed by the host system, and wherein the second buffer in the host memory comprises a Host Memory Buffer (HMB) that is exclusively controlled by the memory controller.
17. The system of claim 16, wherein the processing device is a memory controller of the memory sub-system and wherein the first buffer comprises a page cache that is managed by the host system, and wherein the second buffer in the host memory comprises a Host Memory Buffer (HMB) that is exclusively controlled by the memory controller.
19. The system of claim 16, wherein the operations further comprise transmitting the data block to the host memory responsive to the receiving of the read request for the data block and transmitting the set of one or more data blocks to the host memory without receiving a read request from the host system for any of the one or more data blocks in the set.
18. The system of claim 16, wherein the operations further comprise transmitting the data block to the host memory responsive to the receiving of the read request for the data block and transmitting the set of one or more data blocks to the host memory without receiving a read request from the host system for any of the one or more data blocks in the set.
20. The system of claim 16, wherein the processing device is included in a first level of a storage hierarchy that comprises secondary storage and prefetches data of the set and pushes the data of the set to a second level of the storage hierarchy that comprises the host memory as primary storage.
19. The system of claim 16, wherein the processing device is included in a first level of a storage hierarchy that comprises secondary storage and prefetches data of the set and pushes the data of the set to a second level of the storage hierarchy that comprises the host memory as primary storage.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Maxey et al. (US 2018/0074971 A1) teaches a host buffer and evicting pages from a host buffer.
Kanno (US 2021/0064299 A1) teaches receiving and executing write commands with stream IDs.
Narsale et al. (US 2022/0019536 A1) teaches prefetching data before receiving an explicit request from the host.
Ravimohan et al. (US 2014/0281458 A1) teaches prefetching data before a host issues a read request.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JANE W BENNER whose telephone number is (571)270-0067. The examiner can normally be reached Mon - Thurs (8 AM - 5 PM).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, REGINALD BRAGDON can be reached at (571) 272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JANE W. BENNER
Primary Examiner
Art Unit 2131
/JANE W BENNER/ Primary Examiner, Art Unit 2139