Prosecution Insights
Last updated: April 19, 2026
Application No. 18/215,726

BUFFER OPTIMIZATION FOR SOLID-STATE DRIVES

Final Rejection §103

Filed: Jun 28, 2023
Examiner: KRIEGER, JONAH C
Art Unit: 2133
Tech Center: 2100 — Computer Architecture & Software
Assignee: Kioxia Corporation
OA Round: 4 (Final)

Grant Probability: 86% (Favorable)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 2y 7m
Grant Probability with Interview: 95%

Examiner Intelligence

Career Allow Rate: 86% — above average (127 granted / 147 resolved; +31.4% vs TC avg)
Interview Lift: +8.2% — a moderate lift across resolved cases with interview
Avg Prosecution: 2y 7m typical timeline; 31 applications currently pending
Career History: 178 total applications across all art units

Statute-Specific Performance

§101: 3.4% (-36.6% vs TC avg)
§103: 69.8% (+29.8% vs TC avg)
§102: 12.5% (-27.5% vs TC avg)
§112: 11.9% (-28.1% vs TC avg)

TC averages are estimates • Based on career data from 147 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claim 1 remains cancelled. Claim 2 has been amended. Claims 2-20 remain pending and are ready for examination.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-7 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Aigo (US Publication No. 2008/0059706 -- "Aigo") in view of Springberg et al. (US Publication No. 2022/0083265 -- "Springberg") in further view of Youn et al. (US Publication No. 2021/0334037 -- "Youn").
Regarding claim 2, Aigo teaches A solid-state drive (SSD), the SSD comprising: determine availability of the internal buffer to temporarily store data based on a request to write data to the non-volatile semiconductor storage of the SSD, if the internal buffer is available, write the data to the internal buffer, (Aigo paragraph [0009], A storage apparatus according to an exemplary aspect of the invention includes a host controller that receives a write request accompanied by write data, a cache unit that checks if space is available in any one of itself and a cache unit of an external apparatus, and a switch unit that outputs a request to store write data in the cache unit of the external apparatus, on condition that space is available not in the cache unit but in the cache unit of the external apparatus. An external cache is contained in a unit external to the system but can communicate with the storage control unit. The data can be divided into a plurality of data units, based on the cache storage, see Aigo paragraph [0035], Here, cache-storage unit data means an amount of data corresponding to a single entry in cache unit 220. Also see Aigo paragraph [0028], Upon receipt of the write request, cache unit 220 checks itself if space is available to store the write data. On condition that space is available, cache unit 220 stores the requested write data in itself. First, upon receiving a write request, the internal cache checks to see if the internal cache has space available. 
This determination of available space includes a set data unit size, see Aigo paragraph [0029], Here, "space is available" indicates that the available capacity is equal to or more than a preset reference value) and if the internal buffer is not available, write the data to the external buffer, (Aigo paragraph [0030], On condition that space is available not in cache unit 220 but in the cache unit of the external apparatus, switch unit 230 outputs a cache-write request for storing the write data in the cache unit of the external apparatus. If the cache unit 220 cannot find any space available neither in itself nor in the cache unit of the external apparatus, cache unit 220 expels data stored therein to the external storage, and stores the requested write data in itself. If space is not available, then the data is written and stored in the external buffer).

Aigo does not teach A solid-state drive (SSD), the SSD comprising: a non-volatile semiconductor storage; an external buffer within the SSD; and an integrated circuit comprising: an interface communicatively coupled to the external buffer; a memory controller communicatively coupled to the interface; an internal buffer communicatively coupled to the memory controller, write the data to the internal buffer within the integrated circuit of the SSD, the external buffer within the SSD being external to the integrated circuit, and store the data to the non-volatile storage from either the internal buffer or the external buffer without any transfer of the data between the internal buffer and the external buffer.

However, Springberg teaches A solid-state drive (SSD), the SSD comprising: a non-volatile semiconductor storage; (Springberg paragraph [0003], A memory sub-system can be a storage system, such as a solid-state drive (SSD), and can include one or more memory components that store data. The memory components can be, for example, non-volatile memory components and volatile memory components.
In general, a host system can utilize a memory sub-system to store data at the memory components and to retrieve data from the memory components) an external buffer within the SSD; (Springberg Fig. 1, see external memory Ref #119B) and an integrated circuit comprising: (Springberg Fig. 1; see controller Ref #115, also see Springberg paragraph [0022], The memory system controller 115 (hereinafter referred to as "controller") can communicate with the memory components 112A to 112N to perform operations such as reading data, writing data, or erasing data at the memory components 112A to 112N and other such operations. The controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a two-stage memory buffer 119, or a combination thereof) an interface communicatively coupled to the external buffer; a memory controller communicatively coupled to the interface; (Springberg Fig. 4; Springberg paragraph [0036], In the following description of data flows, the host system 120 communicates over an electrical interface with a SSD 400, which includes an SSD controller 402 with a staging buffer SRAM 406, an external DRAM component 404 (also referred to herein as the main buffer component), and flash devices 408. The integrated circuit may comprise an interface coupled to the external DRAM buffer, as well as a controller) an internal buffer communicatively coupled to the memory controller, wherein the memory controller is configured to: (Springberg Fig. 1; see Ref #119A local memory for internal buffer coupled to the memory controller) write the data to the internal buffer within the integrated circuit of the SSD, the external buffer within the SSD being external to the integrated circuit (see Springberg Fig. 1; external DRAM memory Ref #119B.
The internal buffer can be used to receive write data such as from a host, see Springberg paragraph [0017], Streams provide a way for the host system to identify different access to the memory sub-system, whether it is for read or write access. The streams are separated from each other with the idea that each stream can be for a certain host task or application. When the host system uses the memory sub-system to store data, the host system combines all of its data. The storage media can be more efficient if the host system can provide a multitude of data for various applications or tasks). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Aigo with those of Springberg. Springberg teaches using a two-stage buffer memory system, which utilizes both an internal and external buffer in a memory sub-system, such as an SSD. Using the two-stage buffer system can minimize workload and ensure the memory can provide a larger overall capacity (i.e., see Springberg paragraph [0014], The memory sub-system, however, needs to support all these different streams to be open and running at the same time, whether the host system is performing sequential writes (e.g., sequential access) or randomly accessing the different streams with random writes. The conventional memory sub-systems with a single buffer (external DRAM or internal SRAM) cannot support a high number of streams at high performance (e.g., sequential writes or random writes). The size of the internal SRAM in these conventional memory sub-systems would have to be large enough to store the data for all of the streams. Although SRAM has a higher bandwidth than DRAM, the cost to add a larger internal SRAM to an integrated circuit for the single buffer becomes prohibitive from both a cost and die area perspective, as well as from a power perspective. 
While using DRAM would be cheaper and provide a large memory capacity, performance would be limited to the bandwidth of DRAM. Although a wider DRAM interface can improve DRAM bandwidth, the increase to the DRAM interface would increase the cost and power of the integrated circuit, as well as make it harder to fit into the small form factors like M.2 or EDSFF 1 U Short).

Aigo in view of Springberg does not teach store the data to the non-volatile storage from either the internal buffer or the external buffer without any transfer of the data between the internal buffer and the external buffer.

However, Youn teaches store the data to the non-volatile storage from either the internal buffer or the external buffer without any transfer of the data between the internal buffer and the external buffer (Youn Fig. 1-5 (see Ref #110, 300, 200, CWP Path); Youn paragraph [0054], FIG. 3A illustrates a normal writing path NWP of a storage system SSb, according to an embodiment of the invention, and FIG. 3B illustrates a corrected writing path CWP of the storage system SSb, according to another embodiment of the invention. Referring to FIGS. 3A and 3B, the storage system SSb may include a storage device 10″ and the host 20, and the storage device 10″ may include a controller 100″, the NVM 200, and the second memory 300. The controller 100″ may include a write buffer 111b and the buffer manager 112. The write buffer 111b may correspond to an example of the first memory 110 of FIG. 1, and in detail, may correspond to a partial region of the first memory 110. The storage device contains two buffers, one internal to the controller and one external but still within the device; either buffer may be used to transfer data to the NVM, but data is not transferred between the buffers, also see Youn paragraph [0055] for details regarding which buffer is determined to be used).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Aigo and Springberg with those of Youn. Youn teaches using two different memory buffers to store data in the non-volatile memory, which can allow for more efficient I/O operations, depending on various factors such as write speed and error rate (i.e., see Youn paragraph [0077], Also, according to some embodiments, the method according to the current embodiment may further include monitoring an I/O speed of write data or read data, and dynamically determining the first or second memory as the buffer memory for buffering the write data or the read data based on the I/O speed. In detail, the controller 100 may monitor a data I/O speed based on data input to and data output from the first memory 110. The controller 100 may determine the second memory 300 as the buffer memory when the monitored data I/O speed is equal to or higher than a threshold speed. Also, the controller 100 may determine the first memory 110 as the buffer memory when the monitored data I/O speed is lower than the threshold speed).

Regarding claim 3, Aigo in view of Springberg in further view of Youn teaches The SSD of claim 2, wherein: the data is segmented into a plurality of units of data, and the memory controller is configured to determine availability of the internal buffer for each unit of data of the plurality of units of data (Aigo paragraph [0028], Upon receipt of the write request, cache unit 220 checks itself if space is available to store the write data. On condition that space is available, cache unit 220 stores the requested write data in itself. First, upon receiving a write request, the internal cache checks to see if the internal cache has space available.
This determination of available space includes a set data unit size, see Aigo paragraph [0029], Here, "space is available" indicates that the available capacity is equal to or more than a preset reference value).

Regarding claim 4, Aigo in view of Springberg in further view of Youn teaches The SSD of claim 2, wherein the memory controller is further configured to remove the data from the internal buffer or the external buffer after the data is stored into a non-volatile semiconductor storage device (Aigo paragraphs [0032-0033], Now assume a case where the write data are already written in the cache unit of the external apparatus. In this case, when host controller 210 receives a write request for an address (hereinafter, referred to as cache-storage unit address) of the write data, cache unit 220 checks itself if space is available. If cache unit 220 has available space, switch unit 230 outputs, to the external apparatus, a cache-read request accompanied by a delete request and an address. [0033] Thereafter, when switch unit 230 receives, from the external apparatus, cache-storage unit data in response to the cache-read request, cache unit 220 stores the cache-storage unit data in itself. Data units can be flushed to non-volatile storage and deleted from the buffer/cache. For further details, see Aigo paragraph [0086], Cache unit 220 then uses the write data received from first host computer 100 to overwrite the page of this same data in data storage unit 222. Cache unit 220 also sets the page existence bit in directory 221, the related block existence bit (or bits), replacement information and non-written information (S24). Then, cache unit 220 eliminates the related page from external writing management table 224 (S25). After that, storage apparatus 200 performs the operation of steps S7 to 9).
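The two-buffer write path mapped to claims 2-4 above (a per-unit availability check, internal-first placement with external fallback, and removal from the buffer once the data is flushed to non-volatile storage) can be summarized in a short sketch. This is an illustrative model only; the class, capacities, and `UNIT_SIZE` are hypothetical names, and none of the cited references disclose software of this form.

```python
UNIT_SIZE = 4096  # hypothetical cache-storage unit size (cf. Aigo's "preset reference value")

class BufferedController:
    """Toy model of the claimed two-buffer write path."""

    def __init__(self, internal_capacity, external_capacity):
        self.internal = []                        # units staged in the on-chip buffer
        self.external = []                        # units staged in the off-chip buffer
        self.internal_capacity = internal_capacity
        self.external_capacity = external_capacity
        self.nvm = []                             # destination non-volatile storage

    def write(self, data):
        # Segment the request into fixed-size units and place each independently:
        # internal buffer when space is available, external buffer otherwise.
        units = [data[i:i + UNIT_SIZE] for i in range(0, len(data), UNIT_SIZE)]
        for unit in units:
            if len(self.internal) < self.internal_capacity:
                self.internal.append(unit)        # internal buffer available
            else:
                self.external.append(unit)        # fall back to external buffer

    def flush(self):
        # Store to NVM directly from either buffer, with no transfer of data
        # between the buffers, then remove the flushed units from each buffer.
        self.nvm.extend(self.internal)
        self.nvm.extend(self.external)
        self.internal.clear()
        self.external.clear()
```

A three-unit write against a two-unit internal buffer lands two units internally and one externally; a flush then empties both buffers into the NVM list.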
Regarding claim 5, Aigo in view of Springberg in further view of Youn teaches The SSD of claim 2, wherein the memory controller is configured to store in the external buffer a backup copy of the data accumulated in the internal buffer (Aigo paragraphs [0105-0106], If data received from second storage apparatus 200 are stored in itself (that is, in data storage unit 222) (K2/Yes), cache unit 220 checks if space is available in second cache unit 220 of second storage apparatus 200, which is the write source (K3). This check by cache unit 220 is carried out by reference to cache management table 223. If space is available in second cache unit 220 of second storage apparatus 200, which is the write source (K3/Yes), cache unit 220 outputs, to second storage apparatus 200, a write-back request accompanied by the logical address and data (in pages) (K4). Cache unit 220 then deletes the data from itself (K5). Specifically, cache unit 220 resets the corresponding page existence bits and block existence bits in directory 221, and deletes the data (in pages) from data storage unit 222. Moreover, cache unit 220 updates external reception management table 225 by eliminating the related pages (K6), and updates cache management table 223 by increasing the number of available cache pages (K7). Thereafter, switch unit 230 outputs the updated cache management table 223 to second storage apparatus 200 (K8). The second (external) cache can receive data units from the internal cache as a way to free up space in the preferred internal cache unit). 
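The claim 5 limitation discussed above (the external buffer holds a backup copy of data accumulated in the internal buffer) reduces to a mirror-on-stage, drop-on-commit pattern, and claim 6 adds that the backup is removed once the data reaches non-volatile storage. A minimal sketch, again with hypothetical names:

```python
class MirroredBuffer:
    """Toy model of the claim 5/6 backup-copy behavior."""

    def __init__(self):
        self.internal = {}   # unit_id -> data (primary copy, on-chip)
        self.external = {}   # unit_id -> data (backup copy, off-chip)
        self.nvm = {}        # non-volatile storage

    def stage(self, unit_id, data):
        # Accumulate in the internal buffer and mirror a backup copy
        # into the external buffer at the same time.
        self.internal[unit_id] = data
        self.external[unit_id] = data

    def commit(self, unit_id):
        # Store from the internal buffer into NVM, then discard both the
        # primary copy and the now-redundant external backup copy.
        self.nvm[unit_id] = self.internal.pop(unit_id)
        self.external.pop(unit_id, None)
```

The backup copy exists only for the window between staging and commit, which is what makes the power-loss handling addressed later (claim 10) possible.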
Regarding claim 6, Aigo in view of Springberg in further view of Youn teaches The SSD of claim 5, wherein the memory controller is configured to remove the backup copy of the data from the external buffer after the data is stored into the non-volatile semiconductor storage (Aigo paragraphs [0084-0085], If space is unavailable in data storage unit 222 (S18/No), switch unit 230 outputs, to second storage apparatus 200, a cache-write request accompanied by write data, block length and the logical address (S19). Thereafter, cache unit 220 updates the corresponding block in external writing management table 224 (S20). After that, host controller 210 outputs a response (indicating completion of writing) to first host computer 100 (S21). If data storage unit 222 has available space (S18/Yes), switch unit 230 outputs, to second storage apparatus 200, a cache-read request accompanied by a delete instruction, the logical address and block length (S22). Upon receipt of data (in pages) from second storage apparatus 200 via switch unit 230, in response to the cache-read request, cache unit 220 stores the data in data storage unit 222 (S23). Data from the second (external) cache can be sent to the non-volatile storage device and then removed from the external cache, should the non-volatile storage device have the required space).

Regarding claim 7, Aigo in view of Springberg in further view of Youn teaches The SSD of claim 2, wherein the memory controller is configured to send a message to a host to indicate completion of the write request after the data subject to the write request are written to the internal buffer or the external buffer (Aigo paragraph [0075], As shown in FIG. 8, host controller 210 of storage apparatus 200 receives, from first host computer 100, a write request accompanied by write data (one or more blocks), the logical address, and block length.
Then, cache unit 220 checks whether the page corresponding to the logical address of the received data exists in cache unit 220 itself (that is to say, in data storage unit 222), by referencing directory 221 (S1). If the page exists (page existence bit is set) (S1/Yes), cache unit 220 stores the write data in data storage unit 222 and sets the corresponding block existence bit (or bits) (S2). Next, host controller 210 outputs a response (indicating completion of writing) to first host computer 100 (S3). The information corresponding to completion of the write request is sent from the memory control unit to the host (through the host controller), also see Aigo paragraph [0114], If the non-written information is set (R2/Yes), cache unit 220 reads, from data storage unit 222, data for which the non-written information is set, and outputs the data together with its logical address to storage control unit 240 (R3). Upon receipt of the logical address and the data from cache unit 220, storage control unit 240 converts the logical address into a physical address and stores the data in physical disk unit 260 (R4)).

Regarding claim 11, Aigo in view of Springberg in further view of Youn teaches The SSD of claim 2, wherein the memory controller is further configured to: receive a request to read the data; (Aigo paragraph [0043], Another one of the reasons for the advantage may be that upon receipt of a write request for a cache-storage unit address of write data stored in the cache unit of the external apparatus, storage apparatus 200 retrieves the cache-storage unit data from the external apparatus and stores the data in cache unit 220. For these reasons, for example, when a read request is received from a host computer and the like at a later time, reading data from cache unit 220 may be faster than reading data from the cache unit of the external apparatus.
The request can be to read the data units, rather than write) if the data was written to the internal buffer, read the data from the internal buffer; and if the data was written to the external buffer, read the unit of data from the external buffer (Aigo paragraphs [0032-0034], Now assume a case where the write data are already written in the cache unit of the external apparatus. In this case, when host controller 210 receives a write request for an address (hereinafter, referred to as cache-storage unit address) of the write data, cache unit 220 checks itself if space is available. If cache unit 220 has available space, switch unit 230 outputs, to the external apparatus, a cache-read request accompanied by a delete request and an address. Thereafter, when switch unit 230 receives, from the external apparatus, cache-storage unit data in response to the cache-read request, cache unit 220 stores the cache-storage unit data in itself. In contrast, in a case where switch unit 230 receives a cache-read request accompanied by a delete request and an address, cache unit 220 reads cache-storage unit data. Thereafter, switch unit 230 outputs the read cache-storage unit data to the external apparatus that output the cache-read request. Additionally, cache unit 220 deletes the read cache-storage unit data from itself. The cache-read request will be sent to the corresponding logical address location where the data unit was previously written to, whether it be the internal or external cache location). 
Regarding claim 12, Aigo in view of Springberg in further view of Youn teaches The SSD of claim 11, wherein the memory controller is further configured to: if the data was written to the internal buffer, remove the data from the internal buffer after the data has been read from the internal buffer; and if the data was written to the external buffer, remove the data from the external buffer after the data has been read from the external buffer (Aigo paragraphs [0032-0034], Now assume a case where the write data are already written in the cache unit of the external apparatus. In this case, when host controller 210 receives a write request for an address (hereinafter, referred to as cache-storage unit address) of the write data, cache unit 220 checks itself if space is available. If cache unit 220 has available space, switch unit 230 outputs, to the external apparatus, a cache-read request accompanied by a delete request and an address. Thereafter, when switch unit 230 receives, from the external apparatus, cache-storage unit data in response to the cache-read request, cache unit 220 stores the cache-storage unit data in itself. In contrast, in a case where switch unit 230 receives a cache-read request accompanied by a delete request and an address, cache unit 220 reads cache-storage unit data. Thereafter, switch unit 230 outputs the read cache-storage unit data to the external apparatus that output the cache-read request. Additionally, cache unit 220 deletes the read cache-storage unit data from itself. The cache-read request will be sent to the corresponding logical address location where the data unit was previously written to, whether it be the internal or external cache location. Additionally, the data unit is deleted and removed from the aforementioned location once it has been read). 
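The read-side behavior in claims 11 and 12 above (read the data from whichever buffer it was written to, then remove it from that buffer once it has been read) can be sketched as a single lookup helper. The function name and the dict-based buffers are assumptions for illustration only:

```python
def read_unit(unit_id, internal, external):
    """Read a buffered unit from whichever buffer holds it, consuming it.

    internal / external: dicts mapping unit_id -> data, modeling the
    on-chip and off-chip buffers respectively.
    """
    if unit_id in internal:
        return internal.pop(unit_id)   # read, then remove from internal buffer
    if unit_id in external:
        return external.pop(unit_id)   # read, then remove from external buffer
    raise KeyError(f"unit {unit_id} not buffered")
```

The `pop` captures the claim 12 limitation directly: the act of servicing the read also deletes the unit from the buffer that held it, mirroring Aigo's cache-read request accompanied by a delete request.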
Regarding claim 13, Aigo in view of Springberg in further view of Youn teaches The SSD of claim 11, wherein the memory controller is configured to: determine, for read data corresponding to the data, availability of the internal buffer to temporarily store the data; if the internal buffer is available, write the data in the internal buffer; and if the internal buffer is not available, write the data to the external buffer (Aigo paragraph [0028], Upon receipt of the write request, cache unit 220 checks itself if space is available to store the write data. On condition that space is available, cache unit 220 stores the requested write data in itself. First, upon receiving a write request, the internal cache checks to see if the internal cache has space available. This determination of available space includes a set data unit size, see Aigo paragraph [0029], Here, "space is available" indicates that the available capacity is equal to or more than a preset reference value. Aigo paragraph [0030], On condition that space is available not in cache unit 220 but in the cache unit of the external apparatus, switch unit 230 outputs a cache-write request for storing the write data in the cache unit of the external apparatus. If the cache unit 220 cannot find any space available neither in itself nor in the cache unit of the external apparatus, cache unit 220 expels data stored therein to the external storage, and stores the requested write data in itself. If space is not available, then the data is written and stored in the external buffer. 
The read data can be used to determine buffer/cache availability, as well as match to a set data unit size, see Aigo paragraph [0033], Thereafter, when switch unit 230 receives, from the external apparatus, cache-storage unit data in response to the cache-read request, cache unit 220 stores the cache-storage unit data in itself and Aigo paragraph [0035], Here, cache-storage unit data means an amount of data corresponding to a single entry in cache unit 220).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Aigo in view of Springberg in further view of Youn as applied to claim 7 above, and further in view of Kapoor et al. (US Publication No. 2009/0089481 -- "Kapoor").

Regarding claim 8, Aigo in view of Springberg in further view of Youn and further in view of Kapoor teaches The SSD of claim 7, wherein the memory controller is configured to send the message to a host prior to storing the data subject to the write request into a non-volatile semiconductor storage (Kapoor claim 30, The memory of claim 29, further including: a reserve power source, wherein, in response to determining a loss of host power subsequent to sending said acknowledgement and prior to completing writing said data into the non-volatile memory, the control circuitry activates the reserve power source and completes programming the host data into the non-volatile memory using the reserve power source. The acknowledgement/completion message sent to the host is sent before the data is written to non-volatile memory (i.e., before data permanence). The write will still be completed, due to a back-up power source, in the event of a power-off or shutdown event (Kapoor paragraph [0042-0043], In contrast, by incorporating the availability of the on-device reserve power source or reserve mode, the order of events in FIG. 4 can be changed.
Once a unit of data has been transferred from the host onto the controller, it can now be treated as safely stored on the memory system since, in case of shutdown (whether proper or not), the reserve power can be invoked to finish the write process to the non-volatile memory. Therefore, once a unit of data has been buffered in the cache, the memory device can send to the host an acknowledgement that the unit of data is fully written. This early acknowledgment can be sent at the same time or before transferring the data unit on from the controller to the memory. The early acknowledgment concept is illustrated conceptually in FIG. 5. The various elements of FIG. 5 are the same as in FIG. 4. Segments A and B are also the same; however, now, rather than wait for the completion of segment B to send the host an acknowledgment, this can now be done ("B'") once A is complete)). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Aigo and Springberg and Youn with those of Kapoor. Kapoor teaches sending the completion/acknowledgement message for the data units subject to the write command before the data is written to non-volatile memory. This allows the host to know far earlier than normal that data is being written to non-volatile memory for permanent storage. The data will get completed regardless due to a back-up power source in the event of a power-off or shut down event (Kapoor paragraph [0042-0043], In contrast, by incorporating the availability of the on-device reserve power source or reserve mode, the order of events in FIG. 4 can be changed. Once a unit of data has been transferred from the host onto the controller, it can now be treated as safely stored on the memory system since, in case of shutdown (whether proper or not), the reserve power can be invoked to finish the write process to the non-volatile memory. 
Therefore, once a unit of data has been buffered in the cache, the memory device can send to the host an acknowledgement that the unit of data is fully written. This early acknowledgment can be sent at the same time or before transferring the data unit on from the controller to the memory. The early acknowledgment concept is illustrated conceptually in FIG. 5. The various elements of FIG. 5 are the same as in FIG. 4. Segments A and B are also the same; however, now, rather than wait for the completion of segment B to send the host an acknowledgment, this can now be done ("B'") once A is complete).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Aigo in view of Springberg in further view of Youn as applied to claim 2 above, and further in view of Arya (US Publication No. 2009/0157946 -- "Arya").

Regarding claim 9, Aigo in view of Springberg in further view of Youn and further in view of Arya teaches The SSD of claim 2, wherein the memory controller is configured to transfer the data stored in the internal buffer or external buffer to the non-volatile semiconductor storage as and when the non-volatile semiconductor storage becomes available for storage of the data (Arya paragraph [0062], An entire page of data, including data from the address specified on the address bus 22, is read from the NAND memory 14 and is transferred through the MUX 80 and to the RAM memory 16, where it is written into an entire page of locations in the RAM memory 16 specified by the MCC/ECC unit 72 and the index address 66b, and is operated thereon by the MCC/ECC unit 72 to ensure the integrity of the data, through error correction checking and the like. The current page address registers 66a of CAM 66 is then updated to add the address of the address page within the current write miss address and the associated index address 66b (the index address 66b being the upper 9 bits of the address in the RAM memory 16 where the page of data is stored).
The Hit/miss compare logic 68 de-asserts the signal on the wait state signal 26. In addition, the MCU switches the MUX 80 to the default position. The Hit/Miss compare logic 68 sends the index address 66b to the MUX 70 where they are combined with the offset address from the address 22, to initiate a write operation in the RAM memory 16. The data is then written into the RAM memory 16 from the host device 20 through the MUX 84 and through the MUX 80, thereby completing the cycle. The data in the RAM memory 16 is now no longer coherent with the data at the same address in the NAND memory 14. This coherence problem can be solved by either the memory controller 12 initiating a write cache flush, automatically on an as needed basis, or by the host device 20 initiating a write cache flush, at any time, all as previously discussed. The memory controller can automatically flush data from the buffer/cache to the non-volatile storage on an as-needed basis, such as when NVM space opens up. Also see Arya paragraph [0059], First, the memory device 10 can automatically solve the problem of data coherence, on an as needed basis. As discussed previously, for example, in the case of a Read Miss with Cache Flush operation, data that is more current in the RAM memory 16 will be written back into the NAND memory 14 if the pages of data in the RAM memory 16 need to be replaced to store the newly called for page of data from the NAND memory 14. As will be discussed hereinafter, the MCU 64 will also perform a cache flush on the data in the RAM memory 16 by writing the data back into the NAND memory 14 in a Write Miss with Cache Flush operation).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Aigo and Springberg and Youn with those of Arya.
Arya teaches the process of a memory controller being configured to automatically flush data from a buffer to non-volatile storage immediately upon storage space opening up for data units. This can improve the functioning of the system by ensuring that the cache operates at maximum capacity and never stays full when the cached data could be flushed to open up room for new data to be cached, as well as improving data coherency (Arya paragraphs [0058]-[0059], It should be noted that the data in the RAM memory 16, after the Write Hit operation will not be coherent with respect to the data from the same location in the NAND memory 14. In fact, the data in the RAM memory 16 will be the most current one. To solve the problem of data coherency, there are two solutions. First, the memory device 10 can automatically solve the problem of data coherence, on an as needed basis. As discussed previously, for example, in the case of a Read Miss with Cache Flush operation, data that is more current in the RAM memory 16 will be written back into the NAND memory 14 if the pages of data in the RAM memory 16 need to be replaced to store the newly called for page of data from the NAND memory 14. As will be discussed hereinafter, the MCU 64 will also perform a cache flush on the data in the RAM memory 16 by writing the data back into the NAND memory 14 in a Write Miss with Cache Flush operation). Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Aigo in view of Springberg in further view of Youn as applied to claim 5 above, and further in view of Rose et al. (US Publication No. 2015/0356033 -- "Rose"). 
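The two buffer behaviors discussed above (the early acknowledgment of FIG. 5, and Arya's as-needed flush to non-volatile storage) can be combined in a minimal Python sketch. Every name and data structure here is hypothetical and purely conceptual; nothing below is drawn from the cited references:

```python
# Conceptual sketch only (hypothetical names, not an implementation from
# Aigo, Springberg, Youn, or Arya): the controller acknowledges a host
# write as soon as the data lands in the buffer ("B'" in FIG. 5), and
# later flushes buffered units to NVM as and when space becomes available.

class Controller:
    def __init__(self, nvm_capacity):
        self.buffer = []              # volatile write buffer/cache
        self.nvm = []                 # stand-in for NAND storage
        self.nvm_capacity = nvm_capacity

    def host_write(self, unit):
        self.buffer.append(unit)      # segment A: host -> buffer
        return "ACK"                  # early acknowledgment, before segment B

    def flush_as_available(self):
        # Segment B: move units to NVM only while space is open.
        while self.buffer and len(self.nvm) < self.nvm_capacity:
            self.nvm.append(self.buffer.pop(0))

ctrl = Controller(nvm_capacity=2)
assert ctrl.host_write("d0") == "ACK"     # acknowledged before persisting
assert ctrl.host_write("d1") == "ACK"
assert ctrl.host_write("d2") == "ACK"
ctrl.flush_as_available()
assert ctrl.nvm == ["d0", "d1"]           # flushed up to the available space
assert ctrl.buffer == ["d2"]              # held until more space opens up
```

The sketch makes the timing point concrete: the host sees "ACK" while the data is still only in the volatile buffer, and the buffer-to-NVM transfer proceeds independently as capacity permits.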
Regarding claim 10, Aigo in view of Springberg in further view of Youn and further in view of Rose teaches The SSD of claim 5, wherein the memory controller is configured to transfer the backup copy of the data to the non-volatile semiconductor storage in the event of a power loss or a program failure (Rose paragraphs [0033-0034], The number of solid state memory devices 460 may vary according to the storage capacity of the individual devices and the SSD as a whole, but would typically be a power of 2 such as 4, 8, 16, 32 and so on. The memory controller 460 may comprise a single semiconductor device with on-chip ROM for firmware storage and RAM for working data structures and buffers, but there may also be provided external DRAM 430 for additional space for large data translation tables and buffers and external NOR flash 440 for upgradeable firmware storage. To provide the various voltages required by the flash memory controller and external memories, there will be DC power regulation circuitry 450 which may also include a provision for backup power using large capacitors in order to safely manage the shutdown of the SSD in the event of sudden power removal or failure. In a block-based storage system composed of multiple memory devices as storage elements, the completion of a read command may be dependent on the completion of multiple individual memory accesses at various times for the sub-commands. The queueing of multiple read commands, which may proceed in parallel or out of order, causes interleaving of multiple memory accesses from different commands to individual memories. In the event of a power loss or fail event, the memory controller can be configured to flush external cache data to non-volatile storage rather than internal cache data). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Aigo and Springberg and Youn with those of Rose. 
Rose teaches that, in the event of a power loss or fail event, the memory controller can be configured to flush external cache data to non-volatile storage rather than internal cache data. This is a more reliable procedure, as the data in the external cache is less likely to be retained and takes longer to access and operate on, so flushing it to permanent storage first can help prevent data loss (Rose paragraph [0033], The number of solid state memory devices 460 may vary according to the storage capacity of the individual devices and the SSD as a whole, but would typically be a power of 2 such as 4, 8, 16, 32 and so on. The memory controller 460 may comprise a single semiconductor device with on-chip ROM for firmware storage and RAM for working data structures and buffers, but there may also be provided external DRAM 430 for additional space for large data translation tables and buffers and external NOR flash 440 for upgradeable firmware storage. To provide the various voltages required by the flash memory controller and external memories, there will be DC power regulation circuitry 450 which may also include a provision for backup power using large capacitors in order to safely manage the shutdown of the SSD in the event of sudden power removal or failure). Claim(s) 14-15 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Aigo in view of Springberg in further view of Youn as applied to claim 2 above, and further in view of Krishnan et al. (US Publication No. 2019/0146714 -- "Krishnan"). 
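The power-loss ordering attributed to Rose above can be illustrated with a short, purely hypothetical Python sketch; the function and buffer names are assumptions for illustration, not anything disclosed in Rose:

```python
# Hypothetical sketch of the power-loss behavior discussed above: on a
# power-fail event, the capacitor-backed shutdown window is spent flushing
# the external (DRAM) buffer to non-volatile storage before the internal
# (on-chip) buffer, since the external buffer is the more volatile of the
# two. Names and structure are illustrative only.

def power_fail_flush(internal_buf, external_buf, nvm):
    """Persist buffered data, external buffer first."""
    nvm.extend(external_buf)   # external DRAM contents are flushed first...
    external_buf.clear()
    nvm.extend(internal_buf)   # ...then the internal buffer contents
    internal_buf.clear()
    return nvm

nvm = power_fail_flush(internal_buf=["i0"], external_buf=["e0", "e1"], nvm=[])
assert nvm == ["e0", "e1", "i0"]   # external data reaches NVM first
```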
Regarding claim 14, Aigo in view of Springberg in further view of Youn and further in view of Krishnan teaches The SSD of claim 2, wherein each of the internal buffer and the external buffer comprises a plurality of write buffers and a plurality of read buffers (Krishnan paragraph [0108], In some embodiments, the graphics processor command sequence 910 may begin with a pipeline flush command 912 to cause any active graphics pipeline to complete the currently pending commands for the pipeline. In some embodiments, the 3D pipeline 922 and the media pipeline 924 do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. The internal and external buffers can be comprised of read buffers/caches, as well as write caches, see Krishnan paragraph [0111], In some embodiments, return buffer state commands 916 are used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross thread communication. In some embodiments, the return buffer state 916 includes selecting the size and number of return buffers to use for a set of pipeline operations). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Aigo and Springberg and Youn with those of Krishnan. Krishnan teaches having the internal and external buffers contain a plurality of write and read buffers each. 
This is a method of improving the functionality of the buffers by having precise regions of each buffer allocated to a specific function. By allocating the regions of the buffers, more efficient operations can take place when accessing the buffers for a read or write request (Krishnan paragraph [0111], In some embodiments, return buffer state commands 916 are used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross thread communication. In some embodiments, the return buffer state 916 includes selecting the size and number of return buffers to use for a set of pipeline operations). Regarding claim 15, Aigo in view of Springberg in further view of Youn and further in view of Krishnan teaches The SSD of claim 14, wherein each of the plurality of read buffers and each of the plurality of write buffers comprise ring buffers (Krishnan paragraph [0053], In some embodiments, GPE 410 couples with or includes a command streamer 403, which provides a command stream to the 3D pipeline 312 and/or media pipelines 316. In some embodiments, command streamer 403 is coupled with memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, command streamer 403 receives commands from the memory and sends the commands to 3D pipeline 312 and/or media pipeline 316. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline 312 and media pipeline 316. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. 
The commands for the 3D pipeline 312 can also include references to data stored in memory, such as but not limited to vertex and geometry data for the 3D pipeline 312 and/or image data and memory objects for the media pipeline 316. The 3D pipeline 312 and media pipeline 316 process the commands and data by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to a graphics core array 414. In one embodiment the graphics core array 414 include one or more blocks of graphics cores (e.g., graphics core(s) 415A, graphics core(s) 415B), each block including one or more graphics cores. Each graphics core includes a set of graphics execution resources that includes general-purpose and graphics specific execution logic to perform graphics and compute operations, as well as fixed function texture processing and/or machine learning and artificial intelligence acceleration logic. The read/write buffers can comprise ring buffers). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Aigo and Springberg and Youn with those of Krishnan. Krishnan teaches having the internal and external buffers contain a plurality of write and read buffers each. These buffers can then be implemented as ring buffers, which can enable more efficient operations, such as batch command buffers that include multiple commands, and allow more complex memory objects to be stored (Krishnan paragraph [0053], In some embodiments, GPE 410 couples with or includes a command streamer 403, which provides a command stream to the 3D pipeline 312 and/or media pipelines 316. In some embodiments, command streamer 403 is coupled with memory, which can be system memory, or one or more of internal cache memory and shared cache memory. 
In some embodiments, command streamer 403 receives commands from the memory and sends the commands to 3D pipeline 312 and/or media pipeline 316. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline 312 and media pipeline 316. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. The commands for the 3D pipeline 312 can also include references to data stored in memory, such as but not limited to vertex and geometry data for the 3D pipeline 312 and/or image data and memory objects for the media pipeline 316. The 3D pipeline 312 and media pipeline 316 process the commands and data by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to a graphics core array 414. In one embodiment the graphics core array 414 include one or more blocks of graphics cores (e.g., graphics core(s) 415A, graphics core(s) 415B), each block including one or more graphics cores. Each graphics core includes a set of graphics execution resources that includes general-purpose and graphics specific execution logic to perform graphics and compute operations, as well as fixed function texture processing and/or machine learning and artificial intelligence acceleration logic). Regarding claim 17, Aigo in view of Springberg in further view of Youn and further in view of Krishnan teaches The SSD of claim 14, further comprising a programmable firmware configuration circuit coupled to the memory controller that is configured to (Springberg paragraph [0018], Aspects of the present disclosure address the above and other deficiencies by buffering RAIN data in the two-stage memory buffer. RAIN parity data for each of these multiple streams can add up in size and the two-stage memory buffer can store the RAIN data for these multiple streams in the host buffer component and temporarily in the staging buffer component. 
Intelligence is added to the controller to manage the staging host buffer component and the staging buffer component of the two-stage memory buffer. The controller, using firmware for example, can control use of the staging area and manage data flow, including managing die collisions in the NVM dies (flash devices). The firmware configuration in the SSD can be utilized to optimize the memory controller function) set a number of read buffers and a number of write buffers in the external buffer (Krishnan paragraph [0108], In some embodiments, the graphics processor command sequence 910 may begin with a pipeline flush command 912 to cause any active graphics pipeline to complete the currently pending commands for the pipeline. In some embodiments, the 3D pipeline 922 and the media pipeline 924 do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. The internal and external buffers can be comprised of read buffers/caches, as well as write caches, see Krishnan paragraph [0111], In some embodiments, return buffer state commands 916 are used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross thread communication. In some embodiments, the return buffer state 916 includes selecting the size and number of return buffers to use for a set of pipeline operations). 
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Aigo and Springberg and Youn with those of Krishnan. Krishnan teaches having the internal and external buffers contain a plurality of write and read buffers each. This is a method of improving the functionality of the buffers by having precise regions of each buffer allocated to a specific function. By allocating the regions of the buffers, more efficient operations can take place when accessing the buffers for a read or write request (Krishnan paragraph [0111], In some embodiments, return buffer state commands 916 are used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross thread communication. In some embodiments, the return buffer state 916 includes selecting the size and number of return buffers to use for a set of pipeline operations). Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Aigo in view of Springberg in further view of Youn in further view of Krishnan as applied to claim 14 above, and further in view of Lim (US Publication No. 2020/0183592 -- "Lim"). Regarding claim 16, Aigo in view of Springberg in further view of Youn in further view of Krishnan and further in view of Lim teaches The SSD of claim 14, further comprising a programmable firmware configuration circuit coupled to the memory controller that is configured to (Springberg paragraph [0018], Aspects of the present disclosure address the above and other deficiencies by buffering RAIN data in the two-stage memory buffer. 
RAIN parity data for each of these multiple streams can add up in size and the two-stage memory buffer can store the RAIN data for these multiple streams in the host buffer component and temporarily in the staging buffer component. Intelligence is added to the controller to manage the staging host buffer component and the staging buffer component of the two-stage memory buffer. The controller, using firmware for example, can control use of the staging area and manage data flow, including managing die collisions in the NVM dies (flash devices). The firmware configuration in the SSD can be utilized to optimize the memory controller function) set a number of read buffers and a number of write buffers in the internal buffer (Lim paragraph [0056], In an embodiment, the operating environment setting circuit 220 may adjust the number of read buffers and write buffers included in a buffer group based on the final workload state. The operating environment setting circuit 220 may increase the number of read buffers depending on the final workload state. The operating environment setting circuit 220 may decrease the number of write buffers by the same number by which the read buffers are increased. The operating environment setting circuit 220 may decrease the number of write buffers until the number of write buffers reaches a threshold number of write buffers. The threshold number of write buffers may be the minimum number of write buffers required for a write operation of the memory device. Lim paragraph [0116], The buffer controller 221a may adjust the number of read buffers and write buffers included in the buffer group 240 in response to a buffer control signal. The buffer control signal may be determined based on the final workload state and buffer control information). 
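The workload-driven buffer adjustment quoted from Lim above can be reduced to a short Python sketch. The function, parameter names, and pool sizes are hypothetical illustrations of the mechanism, not Lim's disclosed circuit:

```python
# Hypothetical sketch of Lim-style buffer partitioning: a fixed pool is
# split between read and write buffers, the read-buffer count grows with a
# read-heavy workload, and the write-buffer count never drops below a
# threshold (the minimum needed for a write operation of the memory device).

def adjust_buffers(total, write_threshold, read_demand):
    """Return (n_read, n_write) for a pool of `total` buffers."""
    n_read = min(read_demand, total - write_threshold)
    return n_read, total - n_read

assert adjust_buffers(total=8, write_threshold=2, read_demand=3) == (3, 5)
assert adjust_buffers(total=8, write_threshold=2, read_demand=10) == (6, 2)  # write floor holds
```

The second assertion shows the key constraint: however read-heavy the workload becomes, the write buffers are only decreased until they reach the threshold number.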
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Aigo, Springberg, Youn and Krishnan with those of Lim. Lim teaches the configuration of the number of read buffers and write buffers contained in the internal buffer of the memory system. This is a significant improvement over a fixed number of read/write buffers, since the counts can be adjusted to match the current workload's mix of active read and write commands, improving the performance of the memory system (Lim paragraphs [0058]-[0059], The operating environment setting circuit 220 may change the queueing order of commands queued in the command queue so that a read command is output to the memory device earlier than a write command based on the final workload state. The read command may be originally queued later than the write command. In detail, the operating environment setting circuit 220 may change the queueing order of commands in the command queue so that at least one of read commands in the command queue is output prior to a write command. The write command was originally queued in the command queue earlier or at a higher sequential position than the at least one read command. In an embodiment, the operating environment setting circuit 220 may be configured to, when the number of write buffers adjusted depending on the final workload state reaches the threshold number of write buffers, change the queueing order of the commands in the command queue to change output order of the queued command). Claim(s) 18-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Aigo in view of Springberg in further view of Youn as applied to claim 2 above, and further in view of Jung (US Publication No. 2018/0107595 – “Jung”). 
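The queue-reordering behavior Lim describes, where read commands jump ahead of earlier-queued writes once the write buffers have shrunk to their threshold, can be sketched as follows. The command encoding ("R"/"W" prefixes) and function signature are assumptions for illustration only:

```python
# Hypothetical sketch of Lim's queue reordering: in the normal case the
# command queue is serviced in its original order; once the write-buffer
# count has been reduced to the threshold, pending reads are moved ahead
# of write commands that were queued earlier.

def reorder_queue(queue, n_write_bufs, write_threshold):
    if n_write_bufs > write_threshold:
        return list(queue)      # normal queueing order preserved
    reads = [c for c in queue if c.startswith("R")]
    writes = [c for c in queue if c.startswith("W")]
    return reads + writes       # reads output prior to earlier writes

q = ["W1", "R1", "W2", "R2"]
assert reorder_queue(q, n_write_bufs=4, write_threshold=2) == ["W1", "R1", "W2", "R2"]
assert reorder_queue(q, n_write_bufs=2, write_threshold=2) == ["R1", "R2", "W1", "W2"]
```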
Regarding claim 18, Aigo in view of Springberg in further view of Youn and further in view of Jung teaches The SSD of claim 2, wherein the memory controller in the integrated circuit is configured for managing read and write operations using the non-volatile semiconductor storage (Jung paragraph [0006], The SSD also may include a controller configured to receive a request for performance of an operation and to direct that a result of the performance of the operation is accessible in the In-SSD MM OS cache of the In-SSD VM. An In-host MM OS cache 117 in a host MM 116 and the In-SSD MM OS cache 125 in the In-SSD VM 124 may each form portions of the host MM 116 addressable by an OS 103 in a host system 102. The integrated circuit contains a memory controller for managing read/write commands to the NVM). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Aigo and Springberg and Youn with those of Jung. Jung teaches using an NVM/NAND storage device with a memory controller on an integrated circuit for managing read/write operations, which can provide improvements to the data operations, such as improved throughput (see Jung paragraph [0015-0016], For example, a PCIe can be a serial expansion interface circuit (bus) that may provide improvements over, for example, PCI, PCI-X, and AGP (Accelerated Graphics Port) bus standards, among others. Such improvements may include higher bus throughput, lower I/O pin count and a smaller physical footprint, better performance-scaling for bus devices, more detailed error detection and reporting mechanisms, and/or native hot-plug functionality). 
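The claim-18 arrangement just discussed, a memory controller on an integrated circuit managing host read and write operations against non-volatile storage, reduces to a very small sketch. The page-map representation and method names below are hypothetical, not drawn from Jung:

```python
# Minimal, purely illustrative sketch of a memory controller servicing
# host read/write requests against non-volatile storage (a dict stands in
# for the NAND page map; nothing here reflects Jung's actual design).

class MemoryController:
    def __init__(self):
        self.nvm = {}              # page address -> data (stand-in for NAND)

    def write(self, addr, data):
        self.nvm[addr] = data      # persist the page
        return "OK"

    def read(self, addr):
        return self.nvm.get(addr)  # None if the page was never written

mc = MemoryController()
assert mc.write(0x10, b"hello") == "OK"
assert mc.read(0x10) == b"hello"
assert mc.read(0x20) is None       # unwritten page
```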
Regarding claim 19, Aigo in view of Springberg in further view of Youn and further in view of Jung teaches The SSD of claim 18, wherein the non-volatile storage comprises one or more NAND devices (Jung paragraph [0018], The NVM 126 and/or the NVM data storage resource 226 may, in some embodiments, be NAND memory and/or 3D XPoint memory operated as secondary storage in relation to primary storage of the host MM 116 and the In-SSD MM OS cache resource 224). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Aigo and Springberg and Youn with those of Jung. Jung teaches using an NVM/NAND storage device with a memory controller on an integrated circuit for managing read/write operations, which can provide improvements to the data operations, such as improved throughput (see Jung paragraphs [0015]-[0016], For example, a PCIe can be a serial expansion interface circuit (bus) that may provide improvements over, for example, PCI, PCI-X, and AGP (Accelerated Graphics Port) bus standards, among others. Such improvements may include higher bus throughput, lower I/O pin count and a smaller physical footprint, better performance-scaling for bus devices, more detailed error detection and reporting mechanisms, and/or native hot-plug functionality) as well as reduced latency for the NAND memory (Jung paragraph [0021], As described herein, a 3D XPoint array is intended to mean a three-dimensional cross-point array of memory cells for an SSD that is configured for non-volatile data storage and for reduced latency of data access and/or retrieval. The latency may be reduced relative to other non-volatile memory, e.g., NAND flash memory, among others, to a level approaching the relatively short latency achievable with volatile memory (e.g., DRAM)). 
Regarding claim 20, Aigo in view of Springberg in further view of Youn and further in view of Jung teaches The SSD of claim 18, wherein the integrated circuit further includes a host interface, and wherein the request to write the data is received from a host via the host interface (Aigo paragraph [0009], A storage apparatus according to an exemplary aspect of the invention includes a host controller that receives a write request accompanied by write data, a cache unit that checks if space is available in any one of itself and a cache unit of an external apparatus, and a switch unit that outputs a request to store write data in the cache unit of the external apparatus, on condition that space is available not in the cache unit but in the cache unit of the external apparatus. A host controller may be used to receive write requests for the host). Response to Arguments Applicant’s arguments, see pages 1-3 (numbered pages 6-7), filed October 31st, 2025, with respect to the rejection(s) of claim 2 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Aigo (US Publication No. 2008/0059706 -- "Aigo") in view of Springberg et al. (US Publication No. 2022/0083265 – “Springberg”) in further view of Youn et al. (US Publication No. 2021/0334037 – “Youn”). The applicant’s arguments regarding the newly amended independent claim 2 have been found persuasive. The Youn reference has been added to disclose the teachings of an external and internal buffer which can be used to store data in the non-volatile memory, while not transmitting data between each other, as described in further detail above. In light of the newly added reference, the 35 U.S.C. § 103 rejection is maintained. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Grimsrud et al. (US Publication No. 
2015/0355704) teaches a memory system (i.e., SSD) that utilizes internal and external buffer to perform command write operations based on a power state of the SSD (i.e., see Grimsrud paragraph [0022], Upon entering the reduced power state of the SSD, the SSD controller 106 on the first power island 140.1 can transfer the context information for the SSD 102 from the memory buffer 112 to, e.g., the page buffer 128.1 of the NAND flash memory 126.1 within the NAND flash package 108.1 on the second power island 140.2. To that end, the memory arbiter 120 directs the context information from the memory buffer 112 to the page buffer 128.1 via the channel 122.1. Further, the ECC encoder 124.1 within the channel 122.1 encodes the context information to provide a desired level of ECC before the context information is stored in the page buffer 128.1. It is noted that the SSD controller 106 can transfer such context information for the SSD 102 from the memory buffer 112, other SRAM or DRAM internal or external to the SSD controller 106, one or more registers internal to the SSD controller 106, and/or any other suitable memory, register, or storage location internal or external to the SSD controller 106. It is further noted that such context information for the SSD 102 can include computed values that are generated as part of the transition to low power operation). Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONAH C KRIEGER whose telephone number is (571)272-3627. The examiner can normally be reached Monday - Friday 8 AM - 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rocio Del Mar Perez-Velez can be reached on (571)-270-5935. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /J.C.K./ Examiner, Art Unit 2133 /ROCIO DEL MAR PEREZ-VELEZ/ Supervisory Patent Examiner, Art Unit 2133
Prosecution Timeline

Jun 28, 2023
Application Filed
Sep 08, 2023
Response after Non-Final Action
Aug 30, 2024
Non-Final Rejection — §103
Nov 20, 2024
Interview Requested
Nov 26, 2024
Examiner Interview Summary
Nov 26, 2024
Applicant Interview (Telephonic)
Dec 05, 2024
Response Filed
Feb 13, 2025
Final Rejection — §103
Apr 18, 2025
Interview Requested
Apr 24, 2025
Applicant Interview (Telephonic)
May 02, 2025
Examiner Interview Summary
May 23, 2025
Request for Continued Examination
May 30, 2025
Response after Non-Final Action
Jul 26, 2025
Non-Final Rejection — §103
Oct 21, 2025
Interview Requested
Oct 24, 2025
Applicant Interview (Telephonic)
Oct 31, 2025
Response Filed
Nov 01, 2025
Examiner Interview Summary
Feb 03, 2026
Final Rejection — §103
Mar 26, 2026
Interview Requested
Apr 09, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572298
ADAPTIVE SCANS OF MEMORY DEVICES OF A MEMORY SUB-SYSTEM
2y 5m to grant Granted Mar 10, 2026
Patent 12566705
SYSTEM ON CHIP, A COMPUTING SYSTEM, AND A STASHING METHOD
2y 5m to grant Granted Mar 03, 2026
Patent 12566556
DATA SECURITY PROTECTION METHOD, DEVICE, SYSTEM, SERVER-SIDE, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 03, 2026
Patent 12554441
TRANSFERRING COMPRESSED DATA BETWEEN LOCATIONS
2y 5m to grant Granted Feb 17, 2026
Patent 12547582
Cloning a Managed Directory of a File System
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
86%
Grant Probability
95%
With Interview (+8.2%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 147 resolved cases by this examiner. Grant probability derived from career allow rate.
