DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 4 February 2026 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 4, 6-8, 10, 11, 15, 17, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Vaghasiya et al. (Pub. No. US 2024/0053900) in view of Gunda et al. (Pub. No. US 2023/0367498) and Jang et al. (Pub. No. US 2023/0297500).
Claim 1:
Vaghasiya et al. disclose a storage device to minimize localization of random read sensitive application data (RRSAD) on a memory device, the storage device comprising:
a memory device including parallel sense units [figs. 1-3; par. 0027 – “In some cases, a memory device 130 may be or include a NAND device (e.g., NAND flash device). A memory device 130 may be or include a die 160 (e.g., a memory die). For example, in some cases, a memory device 130 may be a package that includes one or more dies 160. A die 160 may, in some examples, be a piece of electronics-grade semiconductor cut from a wafer (e.g., a silicon die cut from a silicon wafer). Each die 160 may include one or more planes 165, and each plane 165 may include a respective set of blocks 170, where each block 170 may include a respective set of pages 175, and each page 175 may include a set of memory cells.”];
a random-access memory to store data received from hosts [figs. 1, 3; par. 0023 – “Additionally, or alternatively, the local memory 120 may serve as a cache for the memory system controller 115. For example, data may be stored in the local memory 120 if read from or written to a memory device 130, and the data may be available within the local memory 120 for subsequent retrieval for or manipulation (e.g., updating) by the host system 105 (e.g., with reduced latency relative to a memory device 130) in accordance with a cache policy.”]; and
a controller to receive the data from the hosts, cache the data in the random-access memory, identify RRSAD received from a first host in cached data, arrange a storage order of the data, and program the RRSAD from the first host across the parallel sense units on the memory device according to an arranged storage order [figs. 1, 3; pars. 0069, 0076 – Data is stored in a selected order across dies. The claim does not set forth what RRSAD data is and how it differs from any other data. RRSAD also does not appear to be an established term in the art. (“In response to the one or more commands to write the data sequence 305, the memory system 110-a may perform a write operation 310 (e.g., a first write operation, a parallel write operation). The write operation 310 may include writing respective data subsets 315 (e.g., subsets of the data sequence 305) to each of multiple dies 160-a of the memory system 110-a, which may include reading from the data sequence 305 from the local memory 120-a or another cache or buffer, or communicating the data subsets 315 directly from a command interface (e.g., an interface 220). For example, the write operation 310 may include writing a data subset 315-a (e.g., including portions 0, 1, and 2 of the data sequence 305) to the die 160-a-1, writing a data subset 315-b (e.g., including portions 3, 4, and 5 of the data sequence 305) to the die 160-a-2, and writing a data subset 315-c (e.g., including portions 6, 7, and 8 of the data sequence 305) to the die 160-a-3. Writing of other portions of the data sequence 305 (e.g., portions 9 through n−1) may be performed in another portion of the write operation 310 (not shown), or in another instance of a write operation 310, among other examples.”)].
However, Vaghasiya et al. do not specifically disclose,
the RRSAD including data from different non-contiguous logical block address ranges [par. 0062 – Vaghasiya et al. disclose that commands from the host may indicate one or more LBAs. Examiner notes that a set of LBAs will have subsets that are non-contiguous even when the set is contiguous. However, in the interest of compact prosecution, an additional reference is being provided.].
In the same field of endeavor, Gunda et al. disclose,
the RRSAD including data from different non-contiguous logical block address ranges [par. 0024 – “Generally, when a controller of the storage device receives a data stream including a range of LBAs to be written to memory (e.g., in a host write command), the controller checks whether this LBA range is sequential or random. If the LBAs in the range are uncorrelated, such as having inconsecutive LBAs in an unrelated pattern (e.g., LBAs 0, 500, 70, 340, 220, etc.), the controller identifies the data stream as a random stream, and accordingly writes the random data stream to a physical block of single-level cells (SLCs) reserved for random data (a “random SLC block”). The controller writes the random data stream to the random SLC block if the block is open (e.g., not full of data); if the random SLC block is closed (e.g., full of data), the controller opens a new random SLC block and writes the random data stream to the new block.”].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Vaghasiya et al. to include segregating random and sequential data, as taught by Gunda et al., in order to improve sequential write and read performance.
Vaghasiya et al. and Gunda et al. disclose all the limitations above but do not specifically disclose,
wherein the arranged storage order includes programming consecutive RRSADs from the first host in parallel sense units to enable subsequent parallel read operations of the consecutive RRSADs as requested by the first host.
In the same field of endeavor, Jang et al. disclose,
wherein the arranged storage order includes programming consecutive RRSADs from the first host in parallel sense units to enable subsequent parallel read operations of the consecutive RRSADs as requested by the first host [pars. 0009, 0014-0015, 0055 – “In accordance with an embodiment of the present disclosure, a data storage device may include one or more nonvolatile memory devices each including a plurality of unit storage spaces; and an address recommending circuit configured to recommend a unit storage space among the plurality of unit storage spaces to process a write request, wherein the address recommending circuit applies a plurality of feature data corresponding to the plurality of unit storage spaces to a neural network to recommend the unit storage space, and wherein the plurality of feature data are generated based on request information for the write request, a target address corresponding to the write request, an address of data stored in the plurality of unit storage spaces.” … “The address recommending circuit 100 determines the recommended address to maximize internal parallelism in an operation of processing a read request to be performed in the future.”].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Vaghasiya et al. and Gunda et al. to include storing data, as taught by Jang et al., in order to improve performance by allowing future parallel reads.
Claim 3 (as applied to claim 1 above):
Vaghasiya et al. disclose,
wherein when programming the RRSAD, the controller maintains information about where a last RRSAD for the first host is written [pars. 0032-0033 – “In some cases, one or more copies of an L2P mapping table may be stored within the memory cells of the memory device 130 (e.g., within one or more blocks 170 or planes 165) for use (e.g., reference and updating) by the local controller 135 or memory system controller 115.”].
Claim 4 (as applied to claim 1 above):
Vaghasiya et al. disclose,
wherein when programming the RRSAD, the controller programs consecutive RRSADs from the first host in parallel sense units most of the time [figs. 1, 3; pars. 0069, 0076 – Parallel write operation across multiple dies. (“In response to the one or more commands to write the data sequence 305, the memory system 110-a may perform a write operation 310 (e.g., a first write operation, a parallel write operation). The write operation 310 may include writing respective data subsets 315 (e.g., subsets of the data sequence 305) to each of multiple dies 160-a of the memory system 110-a, which may include reading from the data sequence 305 from the local memory 120-a or another cache or buffer, or communicating the data subsets 315 directly from a command interface (e.g., an interface 220). For example, the write operation 310 may include writing a data subset 315-a (e.g., including portions 0, 1, and 2 of the data sequence 305) to the die 160-a-1, writing a data subset 315-b (e.g., including portions 3, 4, and 5 of the data sequence 305) to the die 160-a-2, and writing a data subset 315-c (e.g., including portions 6, 7, and 8 of the data sequence 305) to the die 160-a-3. Writing of other portions of the data sequence 305 (e.g., portions 9 through n−1) may be performed in another portion of the write operation 310 (not shown), or in another instance of a write operation 310, among other examples.”)].
Claim 6 (as applied to claim 1 above):
Vaghasiya et al. disclose,
wherein the RRSAD from the first host is distributed equally across the parallel sense units at a die level [figs. 1, 3; pars. 0069, 0076 – Parallel write operation across multiple dies. Data is spread across the dies. (“In response to the one or more commands to write the data sequence 305, the memory system 110-a may perform a write operation 310 (e.g., a first write operation, a parallel write operation). The write operation 310 may include writing respective data subsets 315 (e.g., subsets of the data sequence 305) to each of multiple dies 160-a of the memory system 110-a, which may include reading from the data sequence 305 from the local memory 120-a or another cache or buffer, or communicating the data subsets 315 directly from a command interface (e.g., an interface 220). For example, the write operation 310 may include writing a data subset 315-a (e.g., including portions 0, 1, and 2 of the data sequence 305) to the die 160-a-1, writing a data subset 315-b (e.g., including portions 3, 4, and 5 of the data sequence 305) to the die 160-a-2, and writing a data subset 315-c (e.g., including portions 6, 7, and 8 of the data sequence 305) to the die 160-a-3. Writing of other portions of the data sequence 305 (e.g., portions 9 through n−1) may be performed in another portion of the write operation 310 (not shown), or in another instance of a write operation 310, among other examples.”)].
Claim 7 (as applied to claim 1 above):
Vaghasiya et al. disclose,
wherein a parallel read capability on the memory device is at least one of a die level and a plane level [figs. 1, 3; pars. 0069, 0076 – Parallel write operation across multiple dies. (“In response to the one or more commands to write the data sequence 305, the memory system 110-a may perform a write operation 310 (e.g., a first write operation, a parallel write operation). The write operation 310 may include writing respective data subsets 315 (e.g., subsets of the data sequence 305) to each of multiple dies 160-a of the memory system 110-a, which may include reading from the data sequence 305 from the local memory 120-a or another cache or buffer, or communicating the data subsets 315 directly from a command interface (e.g., an interface 220). For example, the write operation 310 may include writing a data subset 315-a (e.g., including portions 0, 1, and 2 of the data sequence 305) to the die 160-a-1, writing a data subset 315-b (e.g., including portions 3, 4, and 5 of the data sequence 305) to the die 160-a-2, and writing a data subset 315-c (e.g., including portions 6, 7, and 8 of the data sequence 305) to the die 160-a-3. Writing of other portions of the data sequence 305 (e.g., portions 9 through n−1) may be performed in another portion of the write operation 310 (not shown), or in another instance of a write operation 310, among other examples.”)].
Claim 8 (as applied to claim 1 above):
Vaghasiya et al. disclose,
wherein the memory device supports an asynchronous independent plane read feature, wherein a plane is read independently and is an independent sense unit [figs. 1, 3; pars. 0027, 0031 – Planes contain pages, which are the smallest granularity at which independent read operations may occur. (“For some NAND architectures, memory cells may be read and programmed (e.g., written) at a first level of granularity (e.g., at the page level of granularity) but may be erased at a second level of granularity (e.g., at the block level of granularity). That is, a page 175 may be the smallest unit of memory (e.g., set of memory cells) that may be independently programmed or read (e.g., programed or read concurrently as part of a single program or read operation), and a block 170 may be the smallest unit of memory (e.g., set of memory cells) that may be independently erased (e.g., erased concurrently as part of a single erase operation).”)].
Claim 10:
Vaghasiya et al. disclose a storage device to minimize localization of random read sensitive application data (RRSAD) on a memory device, the storage device comprising:
a memory device including parallel sense units [figs. 1-3; par. 0027 – “In some cases, a memory device 130 may be or include a NAND device (e.g., NAND flash device). A memory device 130 may be or include a die 160 (e.g., a memory die). For example, in some cases, a memory device 130 may be a package that includes one or more dies 160. A die 160 may, in some examples, be a piece of electronics-grade semiconductor cut from a wafer (e.g., a silicon die cut from a silicon wafer). Each die 160 may include one or more planes 165, and each plane 165 may include a respective set of blocks 170, where each block 170 may include a respective set of pages 175, and each page 175 may include a set of memory cells.”];
a random-access memory to store data received from a host [figs. 1, 3; par. 0023 – “Additionally, or alternatively, the local memory 120 may serve as a cache for the memory system controller 115. For example, data may be stored in the local memory 120 if read from or written to a memory device 130, and the data may be available within the local memory 120 for subsequent retrieval for or manipulation (e.g., updating) by the host system 105 (e.g., with reduced latency relative to a memory device 130) in accordance with a cache policy.”]; and
a controller to provide a parallel sense capability of the memory device to the host; to receive the data from the host formed according to the parallel sense capability of the memory device, and program the RRSAD across the parallel sense units on the memory device according to an order received from the host [figs. 1, 3; pars. 0069, 0076 – Data is stored in a selected order across dies. The claim does not set forth what RRSAD data is and how it differs from any other data. RRSAD also does not appear to be an established term in the art. (“In response to the one or more commands to write the data sequence 305, the memory system 110-a may perform a write operation 310 (e.g., a first write operation, a parallel write operation). The write operation 310 may include writing respective data subsets 315 (e.g., subsets of the data sequence 305) to each of multiple dies 160-a of the memory system 110-a, which may include reading from the data sequence 305 from the local memory 120-a or another cache or buffer, or communicating the data subsets 315 directly from a command interface (e.g., an interface 220). For example, the write operation 310 may include writing a data subset 315-a (e.g., including portions 0, 1, and 2 of the data sequence 305) to the die 160-a-1, writing a data subset 315-b (e.g., including portions 3, 4, and 5 of the data sequence 305) to the die 160-a-2, and writing a data subset 315-c (e.g., including portions 6, 7, and 8 of the data sequence 305) to the die 160-a-3. Writing of other portions of the data sequence 305 (e.g., portions 9 through n−1) may be performed in another portion of the write operation 310 (not shown), or in another instance of a write operation 310, among other examples.”)].
However, Vaghasiya et al. do not specifically disclose,
the RRSAD including data from different non-contiguous logical block address ranges [par. 0062 – Vaghasiya et al. disclose that commands from the host may indicate one or more LBAs. Examiner notes that a set of LBAs will have subsets that are non-contiguous even when the set is contiguous. However, in the interest of compact prosecution, an additional reference is being provided.].
In the same field of endeavor, Gunda et al. disclose,
the RRSAD including data from different non-contiguous logical block address ranges [par. 0024 – “Generally, when a controller of the storage device receives a data stream including a range of LBAs to be written to memory (e.g., in a host write command), the controller checks whether this LBA range is sequential or random. If the LBAs in the range are uncorrelated, such as having inconsecutive LBAs in an unrelated pattern (e.g., LBAs 0, 500, 70, 340, 220, etc.), the controller identifies the data stream as a random stream, and accordingly writes the random data stream to a physical block of single-level cells (SLCs) reserved for random data (a “random SLC block”). The controller writes the random data stream to the random SLC block if the block is open (e.g., not full of data); if the random SLC block is closed (e.g., full of data), the controller opens a new random SLC block and writes the random data stream to the new block.”].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Vaghasiya et al. to include segregating random and sequential data, as taught by Gunda et al., in order to improve sequential write and read performance.
Vaghasiya et al. and Gunda et al. disclose all the limitations above but do not specifically disclose,
wherein the order includes alignment of host commands with parallel sense capabilities of the memory device and the order allows for programming consecutive RRSADs from the first host in parallel sense units to enable subsequent parallel read operations of the consecutive RRSADs as requested by the first host.
In the same field of endeavor, Jang et al. disclose,
wherein the order includes alignment of host commands with parallel sense capabilities of the memory device and the order allows for programming consecutive RRSADs from the first host in parallel sense units to enable subsequent parallel read operations of the consecutive RRSADs as requested by the first host [pars. 0009, 0014-0015, 0055 – “In accordance with an embodiment of the present disclosure, a data storage device may include one or more nonvolatile memory devices each including a plurality of unit storage spaces; and an address recommending circuit configured to recommend a unit storage space among the plurality of unit storage spaces to process a write request, wherein the address recommending circuit applies a plurality of feature data corresponding to the plurality of unit storage spaces to a neural network to recommend the unit storage space, and wherein the plurality of feature data are generated based on request information for the write request, a target address corresponding to the write request, an address of data stored in the plurality of unit storage spaces.” … “The address recommending circuit 100 determines the recommended address to maximize internal parallelism in an operation of processing a read request to be performed in the future.”].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Vaghasiya et al. and Gunda et al. to include storing data, as taught by Jang et al., in order to improve performance by allowing future parallel reads.
Claim 11 (as applied to claim 10 above):
Vaghasiya et al. disclose,
wherein the controller provides the parallel sense capability of the memory device in response to a query from the host [figs. 1, 3; pars. 0069, 0076 – Parallel write operation across multiple dies is performed in response to a request from the host. (“In response to the one or more commands to write the data sequence 305, the memory system 110-a may perform a write operation 310 (e.g., a first write operation, a parallel write operation). The write operation 310 may include writing respective data subsets 315 (e.g., subsets of the data sequence 305) to each of multiple dies 160-a of the memory system 110-a, which may include reading from the data sequence 305 from the local memory 120-a or another cache or buffer, or communicating the data subsets 315 directly from a command interface (e.g., an interface 220). For example, the write operation 310 may include writing a data subset 315-a (e.g., including portions 0, 1, and 2 of the data sequence 305) to the die 160-a-1, writing a data subset 315-b (e.g., including portions 3, 4, and 5 of the data sequence 305) to the die 160-a-2, and writing a data subset 315-c (e.g., including portions 6, 7, and 8 of the data sequence 305) to the die 160-a-3. Writing of other portions of the data sequence 305 (e.g., portions 9 through n−1) may be performed in another portion of the write operation 310 (not shown), or in another instance of a write operation 310, among other examples.”)].
Claim 15:
Claim 15, directed to a method, is rejected for the same reasons set forth in the rejection of claim 1 above, mutatis mutandis.
Claim 17 (as applied to claim 15 above):
Claim 17, directed to a method, is rejected for the same reasons set forth in the rejection of claim 3 above, mutatis mutandis.
Claim 18 (as applied to claim 15 above):
Claim 18, directed to a method, is rejected for the same reasons set forth in the rejection of claim 4 above, mutatis mutandis.
Claim 20 (as applied to claim 15 above):
Claim 20, directed to a method, is rejected for the same reasons set forth in the rejection of claim 6 above, mutatis mutandis.
Claims 2, 14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Vaghasiya et al. (Pub. No. US 2024/0053900) in view of Gunda et al. (Pub. No. US 2023/0367498) and Jang et al. (Pub. No. US 2023/0297500) as applied to claims 1, 10, and 15 above, respectively, and further in view of Cowling (Pub. No. US 2015/0254320).
Claim 2 (as applied to claim 1 above):
Vaghasiya et al., Gunda et al., and Jang et al. disclose all the limitations above but do not specifically disclose,
wherein the controller identifies the RRSAD based on a host hint included in the RRSAD, wherein the host hint is provided by the first host.
In the same field of endeavor, Cowling discloses,
wherein the controller identifies the RRSAD based on a host hint included in the RRSAD, wherein the host hint is provided by the first host [par. 0097 – “Also note that an application can use different types of colocation hints. For example, if an application is accessing a block store, the application can use the namespace identifier as the colocation hint. On the other hand, if the application is accessing a thumbnail store containing thumbnail images associated with other data items, the application can use an application identifier as the colocation hint. Applications can alternatively make use of other identifiers, such as a "user identifier" or a "geographic location identifier," as a colocation hint. Note that allowing the application to specify colocation hints also allows the application to specify what data items are to be stored together at whatever level of granularity that the application requires.”].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Vaghasiya et al., Gunda et al., and Jang et al. to include sending hints from the application, as taught by Cowling, in order to improve performance by controlling what data items are stored together.
Claim 14 (as applied to claim 10 above):
Vaghasiya et al., Gunda et al., and Jang et al. disclose all the limitations above but do not specifically disclose,
wherein the controller programs the data to independent sense units in the memory device based on a command group provided by the host, wherein the command group provides an indication to the storage device to select write commands in a group provided by the host.
In the same field of endeavor, Cowling discloses,
wherein the controller programs the data to independent sense units in the memory device based on a command group provided by the host, wherein the command group provides an indication to the storage device to select write commands in a group provided by the host [par. 0097 – “Also note that an application can use different types of colocation hints. For example, if an application is accessing a block store, the application can use the namespace identifier as the colocation hint. On the other hand, if the application is accessing a thumbnail store containing thumbnail images associated with other data items, the application can use an application identifier as the colocation hint. Applications can alternatively make use of other identifiers, such as a "user identifier" or a "geographic location identifier," as a colocation hint. Note that allowing the application to specify colocation hints also allows the application to specify what data items are to be stored together at whatever level of granularity that the application requires.”].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Vaghasiya et al., Gunda et al., and Jang et al. to include sending hints from the application, as taught by Cowling, in order to improve performance by controlling what data items are stored together.
Claim 16 (as applied to claim 15 above):
Claim 16, directed to a method, is rejected for the same reasons set forth in the rejection of claim 2 above, mutatis mutandis.
Claims 5 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Vaghasiya et al. (Pub. No. US 2024/0053900) in view of Gunda et al. (Pub. No. US 2023/0367498) and Jang et al. (Pub. No. US 2023/0297500) as applied to claims 1 and 15 above, respectively, and further in view of Wu et al. (Pub. No. US 2020/0019346).
Claim 5 (as applied to claim 1 above):
Vaghasiya et al., Gunda et al., and Jang et al. disclose all the limitations above but do not specifically disclose,
wherein the controller delays executing at least one pending write command to program consecutive RRSADs from the first host in parallel sense units.
In the same field of endeavor, Wu et al. disclose,
wherein the controller delays executing at least one pending write command to program consecutive RRSADs from the first host in parallel sense units [par. 0005 – “In order to improve speed performance, solid-state data storage devices write a relatively large data chunk (e.g., 128 kB or 256 kB) to NAND flash memory chips in parallel at the same time. Nevertheless, since the host accesses the solid-state data storage device in the unit of sectors, where each sector is only 512 B or 4 kB, the storage device controller has to use the non-volatile write buffer to accumulate a large enough amount of data before flushing the buffered data into NAND flash memory chips. Let n.sub.c denote the size (e.g., 128 kB or 256 kB) of data chunk that should be written to NAND flash memory in parallel at the same time.”].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Vaghasiya et al., Gunda et al., and Jang et al. to include buffering enough data for a parallel write, as taught by Wu et al., in order to improve speed performance by writing to multiple chips at the same time.
Claim 19 (as applied to claim 15 above):
Claim 19, directed to a method, is rejected for the same reasons set forth in the rejection of claim 5 above, mutatis mutandis.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Vaghasiya et al. (Pub. No. US 2024/0053900) in view of Gunda et al. (Pub. No. US 2023/0367498) and Jang et al. (Pub. No. US 2023/0297500) as applied to claim 1 above, and further in view of Yarykin et al. (U.S. Patent No. 9,009,836).
Claim 9 (as applied to claim 1 above):
Vaghasiya et al. disclose,
wherein the hosts include applications [par. 0017 – “For example, the host system 105 may include an application configured for communicating with the memory system 110 or a device therein.”].
However, Vaghasiya et al., Gunda et al., and Jang et al. do not specifically disclose,
wherein the hosts include virtual machines.
In the same field of endeavor, Yarykin et al. disclose,
wherein the hosts include virtual machines [column 5, lines 11-17 – “‘Process virtual machine’ -- a virtual machine designed to run a single program, which means that it supports a single process. Such virtual machines are usually closely suited to one or more programming languages and built with the purpose of providing program portability and flexibility. Examples include Java Virtual Machine, .Net Framework, Parrot Virtual Machine.”].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Vaghasiya et al., Gunda et al., and Jang et al. to include virtual machines, as taught by Yarykin et al., in order to provide program portability and flexibility.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Vaghasiya et al. (Pub. No. US 2024/0053900) in view of Gunda et al. (Pub. No. US 2023/0367498) and Jang et al. (Pub. No. US 2023/0297500) as applied to claim 10 above, and further in view of Szubbocsev (Pub. No. US 2017/0300422).
Claim 12 (as applied to claim 10 above):
Vaghasiya et al., Gunda et al., and Jang et al. disclose all the limitations above but do not specifically disclose,
wherein the controller provides the parallel sense capability of the memory device to the host by triggering a communication with the host.
In the same field of endeavor, Szubbocsev discloses,
wherein the controller provides the parallel sense capability of the memory device to the host by triggering a communication with the host [pars. 0039-0040 – Device may communicate to the host the need to update the cached translation tables in the host. (“Alternately, rather than automatically sending the updated zone(s) to the host device 108 (e.g., after a wear-levelling operation), the routine 420 may instruct the host device 108 to invalidate the second mapping table 134b. In response, the host device 108 can request an updated mapping table at that time or at a later time in order to re-validate the second mapping table 134b. In some embodiments the notification enables the host device 108 to schedule the update rather than timing of the update being dictated by the memory device 100.”)].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Vaghasiya et al., Gunda et al., and Jang et al. to include caching translation tables in a host, as taught by Szubbocsev, in order to increase performance by allowing the host to directly access the memory with physical addresses.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Vaghasiya et al. (Pub. No. US 2024/0053900) in view of Gunda et al. (Pub. No. US 2023/0367498) and Jang et al. (Pub. No. US 2023/0297500) as applied to claim 10 above, and further in view of Anchi et al. (Pub. No. US 2023/0221890).
Claim 13 (as applied to claim 10 above):
Vaghasiya et al., Gunda et al., and Jang et al. disclose all the limitations above but do not specifically disclose,
wherein the controller notifies the host of any changes in the parallel sense capability of the memory device.
In the same field of endeavor, Anchi et al. disclose,
wherein the controller notifies the host of any changes in the parallel sense capability of the memory device [par. 0106 – “In embodiments utilizing an NVMe access protocol, one or more of the storage controllers 120 of storage array 105 are illustratively configured to provide notification of asynchronous events, such as error events, health status events, notice events and vendor specific events, among others, to one or more of the host devices 102. To enable asynchronous events to be reported by the storage controller, host software submits one or more AERs to the storage controller, also referred to herein as simply a “controller.” The controller illustratively notifies an event to the host by “completing” an AER command via a corresponding AEN, in some embodiments by posting what is referred to as a “completion queue entry” for that AER command. The total number of simultaneously outstanding AER commands is limited by the AER limit of the NVMe standard, which is typically set to a maximum value of 16 as specified in an Identify Controller data structure. The controller completes one of the outstanding AER commands to notify an NVMe event to the host software and the corresponding notification is referred to as an asynchronous event notification or AEN.”].
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Vaghasiya et al., Gunda et al., and Jang et al. to include asynchronous event notification, including health status reporting, as taught by Anchi et al., in order to enable the host to make informed decisions, such as whether to replace a drive.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 10, and 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LARRY T MACKALL whose telephone number is (571) 270-1172. The examiner can normally be reached Monday - Friday, 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald G Bragdon can be reached at (571) 272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
LARRY T. MACKALL
Primary Examiner
Art Unit 2139
28 February 2026
/LARRY T MACKALL/Primary Examiner, Art Unit 2139