Prosecution Insights
Last updated: April 19, 2026
Application No. 18/946,506

MEMORY CONTROLLER AND OPERATION METHOD THEREOF

Final Rejection §103
Filed: Nov 13, 2024
Examiner: PHAM, KAITLYN HUNG
Art Unit: 2133
Tech Center: 2100 — Computer Architecture & Software
Assignee: Seoul National University R&DB Foundation
OA Round: 2 (Final)
Grant Probability: 100% (Favorable)
OA Rounds: 3-4
To Grant: 1y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (1 granted / 1 resolved; +45.0% vs TC avg), above average
Interview Lift: +100.0% across resolved cases with interview
Avg Prosecution: 1y 5m (fast prosecutor)
Career history: 18 total applications across all art units, 17 currently pending

Statute-Specific Performance

§101: 16.0% (-24.0% vs TC avg)
§103: 52.0% (+12.0% vs TC avg)
§102: 10.0% (-30.0% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)
Tech Center average estimate shown for comparison • Based on career data from 1 resolved case

Office Action

§103
DETAILED ACTION

Claims 1-14 are presented for examination. This Office action is in response to the submission of the application on 6-JAN-2026.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, see page 8, filed 6-JAN-2026, with respect to the objection to the specification have been fully considered and are persuasive due to amendment. The objection to the specification has been withdrawn.

Applicant’s arguments, see pages 8-12, filed 6-JAN-2026, with respect to the rejections under 35 U.S.C. 103 have been fully considered and are persuasive due to the amendments. The rejections of claims 1-14 under 35 U.S.C. 103 have been withdrawn. However, upon further consideration, a new ground of rejection is made in view of the previously applied prior art references.

Regarding Applicant’s argument that the prior art references differ from the invention because the pruning step is a lossy compression technique, in contrast to lossless compression, the Examiner notes that there are no limitations in the claims which preclude a lossy compression technique from reading on the claimed invention. That is, there is no element that limits the invention to lossless compression techniques as argued, and therefore different compression techniques fall within the metes and bounds of the claims.

Regarding Applicant’s argument that the Li reference merely discloses dividing a dense matrix to achieve load balancing, the Examiner notes that the intended use of the reference does not preclude the structure of the prior art from being capable of performing the cited intended use of the instant invention. That is, the mere fact that Li discloses dividing the dense matrix to achieve load balancing does not mean it cannot divide the matrix when there are too many non-zero elements.
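Neither the claims nor the cited references are reproduced as code anywhere in the record; purely as a hypothetical sketch (the function name and the 2:4 defaults are illustrative, not taken from any reference), N:M structured-sparsity pruning of the kind being argued over keeps only the N largest-magnitude values in each group of M elements and zeroes the rest, which is why Applicant characterizes pruning as lossy:

```python
def prune_n_m(values, n=2, m=4):
    """Keep the n largest-magnitude elements in each group of m; zero the rest.

    Hypothetical illustration of N:M structured-sparsity pruning. It is a
    lossy step: the zeroed small-magnitude values cannot be recovered.
    """
    out = []
    for i in range(0, len(values), m):
        group = list(values[i:i + m])
        # Indices of the n largest-magnitude elements in this group.
        keep = sorted(range(len(group)), key=lambda j: abs(group[j]), reverse=True)[:n]
        out.extend(v if j in keep else 0 for j, v in enumerate(group))
    return out
```

For the input `[5, -1, 3, 0]` under a 2:4 rule, the `-1` is discarded and cannot be reconstructed from the pruned output, which is the lossy/lossless distinction at issue in the arguments above.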
Regarding Applicant’s argument that the prior art does not teach the conditional logic for dividing a matrix only when a threshold is exceeded, the Examiner respectfully disagrees. As Applicant admits on page 12, Li partitions every dense (bolded for emphasis) matrix into submatrices. The Examiner notes that dense matrices are defined in the art by their non-zero elements exceeding a certain proportion. Therefore, by teaching the division specifically of dense matrices, Li inherently teaches dividing when matrices contain non-zero elements exceeding some N. Further, as argued in the previous Office action, this teaching of Li is in combination with the teachings of the other references, which require and enforce a predetermined sparsity arrangement. One of ordinary skill in the art would recognize that, given one teaching that requires and enforces a sparsity arrangement by making matrices less dense, and another teaching that divides a matrix that is too dense into less-dense submatrices, a combined method of enforcing a sparsity arrangement by dividing a too-dense matrix into less-dense submatrices is obvious as the substitution of one technique for making matrices less dense for another technique that does the same. For these reasons, Applicant’s arguments that the prior art does not render the claimed concepts obvious are not persuasive; however, since the scope of the claims has changed, the rejections must be updated to address the amendments.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4 and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over PEDRAM et al., U.S. Pub. No. 20240095518 (hereinafter “Pedram”), in view of Zheng et al., U.S. Pub. No. 20230110401 (hereinafter “Zheng”), further in view of LI et al., U.S. Pub. No. 20180046914 (hereinafter “Li”).

Regarding claim 1:

Pedram teaches A memory controller comprising: ([0049], Pedram teaches a controller that controls data movement from memory.)

a compression control circuit configured to generate a compressed data by compressing the data based on an N:M sparsity rule and generate a metadata including metadata for data elements included in the compressed data; ([0033], Pedram teaches a compressor unit that compresses tensors during operations. Moreover, in [0034], Pedram teaches that the processing core can process structured sparsity arrangements to compute the output tensors that may be compressed, including various sparsities of a lower integer to a higher integer. Furthermore, in [0036], Pedram teaches that the memory can generate metadata associated with the compressed tensors. The compressed tensors, with processing based on structured sparsity arrangements of one integer to another, are interpreted to be the claimed generating of a compressed data by compressing data based on an N:M sparsity rule.
The metadata being generated for a compressed tensor is interpreted to be the claimed generating metadata including metadata for data elements included in the compressed data.)

a first write for writing the compressed data and a second write for writing the metadata; and ([0032], Pedram teaches that the memory may store the compressed tensors, and separately, that metadata associated with the compressed tensors may also be stored in the memory.)

N and M are natural numbers and M is greater than N. ([0034], Pedram teaches various structured sparsity arrangements, including 2:4, in which both numbers are natural numbers and the second number is greater than the first.)

Pedram does not appear to explicitly disclose data blocks; a host interface circuit configured to receive a host read request, a host write request, and a data block corresponding to the host write request from a host; a scheduler configured to schedule commands; a memory interface circuit configured to transmit a memory command output from the scheduler to a memory device; or when a number of non-zero elements among M elements included in a target data block is greater than N, the compression control circuit splits the target data block to generate one or more additional data blocks that include non-zero elements exceeding N, and compresses the one or more additional data blocks to generate one or more additional compressed data blocks.

Zheng further teaches data blocks being written and a host interface circuit configured to receive a host read request, a host write request, and a data block corresponding to the host write request from a host; ([0049], Zheng teaches a host interface within a storage controller that enables communication with the host system, to implement a storage interface or protocol. Furthermore, in [0037-0039], Zheng teaches that a storage controller may receive host I/Os from the host system for data access, including host write requests and read requests, and that a media access manager (in the storage controller) may receive from the host a request to write one or more blocks of data. Though not explicit, since the host interface enables communication with the host, it is obvious that when the storage controller receives the host I/Os, they would be received via the host interface, and the claim limitations are taught. The blocks of Zheng also correspond to the tensor data of Pedram.)

Zheng further teaches a scheduler configured to schedule write commands ([0037], Zheng teaches that host I/Os are received, and later in the same paragraph teaches queued, scheduled host I/Os for data access, including host write requests. While not explicitly stated, the components queueing and thereby creating scheduled host I/Os are interpreted as performing the function of the claimed scheduler.)

Zheng further teaches a memory interface circuit configured to transmit a memory command output from the scheduler to a memory device, wherein ([0037], Zheng teaches that the storage controller may perform media I/Os for storage media accesses corresponding to scheduled host I/Os. While not explicit, the media I/Os corresponding to the scheduled host I/Os are interpreted to be the claimed memory command output from the scheduler to the memory device, and as such, since the storage controller of Zheng generates those internal I/Os, the storage controller of Zheng meets the limitations of the claimed memory interface circuit.)

Pedram and Zheng are analogous art because they are from the same field of endeavor, memory management.
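The claimed pairing of a first write (compressed data) and a second write (metadata), as the Examiner reads it onto Pedram [0032] and [0036], can be sketched hypothetically. The function names, the position-list metadata format, and the dict-backed memory are illustrative assumptions, not disclosures of either reference:

```python
def compress_block(block, n=2, m=4):
    """Split an already N:M-sparse block into compressed values and metadata.

    Hypothetical sketch: for each group of m elements holding at most n
    non-zeros, store the non-zero values (the compressed data) and their
    in-group positions (the metadata).
    """
    data, meta = [], []
    for i in range(0, len(block), m):
        group = block[i:i + m]
        nz = [(j, v) for j, v in enumerate(group) if v != 0]
        assert len(nz) <= n, "block violates the N:M sparsity rule"
        data.extend(v for _, v in nz)
        meta.append([j for j, _ in nz])
    return data, meta


def write_compressed(mem, addr_data, addr_meta, block):
    """Issue the two separate writes of the claim mapping (hypothetical API)."""
    data, meta = compress_block(block)
    mem[addr_data] = data   # first write: compressed data
    mem[addr_meta] = meta   # second write: metadata
```

For the block `[5, 0, 3, 0, 0, 8, -7, 0]` under a 2:4 rule this yields compressed data `[5, 3, 8, -7]` and metadata `[[0, 2], [1, 2]]`, stored at two distinct addresses.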
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Pedram and Zheng to achieve the combined result of the memory controller with a host interface circuit that can process host read and host write requests with data blocks, and with a memory interface that transmits a memory command from a scheduler to the memory device, to also include a mechanism for generating compressed data blocks by compressing the data block based on an N:M sparsity rule and generating a metadata block including metadata for the data elements included in the compressed data block, with N and M being natural numbers and M being greater than N. One of ordinary skill in the art would have been motivated to make this modification in order to apply known compression systems used for matrices, which save storage size and reduce memory traffic, as discussed in Pedram [0026], to host data blocks in more general memory systems.

Pedram/Zheng do not appear to explicitly disclose when a number of non-zero elements among M elements included in a target data block is greater than N, the compression control circuit splits the target data block to generate one or more additional data blocks that include non-zero elements exceeding N, and compresses the one or more additional data blocks to generate one or more additional compressed data blocks.

However, Li teaches when a number of non-zero elements among M elements included in a target data block is greater than N, the compression control circuit splits the target data block to generate one or more additional data blocks that include non-zero elements exceeding N ([0191-0199], Li teaches that the dense matrices are compressed into sparse matrices by a process that first divides a dense matrix into a plurality of submatrices of similar size before compression, such that each submatrix contains a similar number of non-zero elements. As discussed previously, the combination of Pedram and Zheng teaches using and enforcing a predetermined structured sparsity arrangement defined by the ratio of non-zero elements to total elements, as certain codings used by compressor/decompressor units are suitable for a certain expected sparsity range. Li teaches that the dense matrices, which are matrices that have a high proportion of non-zero elements, are divided to achieve lower density. Therefore, combining the ideas yields the result of enforcing a structured sparsity arrangement where matrices that are denser than the arrangement defines are divided into sparser submatrices to match the expected sparsity of the compressor codings before compression, which is interpreted to be the claimed splitting of the target data block into one or more additional data blocks when a number of non-zero elements per M elements included in the data is greater than N.)

and compresses the one or more additional data blocks to generate one or more additional compressed data blocks ([0199-0203], Li teaches that after the division of the matrix, each submatrix is compressed and stored.)

Pedram/Zheng and Li are analogous art because they are from the same field of endeavor, data management. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Pedram/Zheng and Li, which generate compressed data blocks by compressing a data block obtained from a host write based on an N:M sparsity rule, to substitute Pedram's method of lowering density by simply pruning to achieve the N:M sparsity rule with the alternative method of lowering density by dividing the data into multiple sub-blocks and then compressing each sub-block to generate compressed sub-blocks.
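The substitution rationale articulated here (divide a too-dense block instead of pruning it) can be sketched hypothetically. Splitting by magnitude into a kept sub-block plus a residual sub-block is only one possible reading; the names and 2:4 defaults are illustrative and not taken from Li:

```python
def split_if_dense(block, n=2, m=4):
    """If any group of m elements holds more than n non-zeros, split the
    block into sub-blocks whose groups each satisfy the N:M rule.

    Hypothetical sketch of the combined teaching: one sub-block keeps the
    n largest-magnitude values per group; the remaining non-zeros go to an
    additional sub-block (recursively, if that is still too dense). Unlike
    pruning, no value is discarded across the resulting sub-blocks.
    """
    counts = [sum(1 for v in block[i:i + m] if v != 0)
              for i in range(0, len(block), m)]
    if max(counts) <= n:
        return [block]  # already sparse enough: no split needed
    kept, rest = [], []
    for i in range(0, len(block), m):
        group = list(block[i:i + m])
        order = sorted(range(len(group)), key=lambda j: abs(group[j]), reverse=True)
        keep = set(order[:n])
        kept.extend(v if j in keep else 0 for j, v in enumerate(group))
        rest.extend(0 if j in keep else v for j, v in enumerate(group))
    return [kept] + split_if_dense(rest, n, m)
```

For a fully dense group `[5, -1, 3, 2]` under a 2:4 rule, this produces `[5, 0, 3, 0]` and the additional sub-block `[0, -1, 0, 2]`, each individually satisfying 2:4 while together preserving every element.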
One of ordinary skill in the art would have been motivated to make this modification to maintain a level of sparsity as set for the implementation, as certain compressing/decompressing codings are mainly suitable for an expected range of sparsity, as discussed in Pedram [0033].

Regarding claim 2:

The combination of Pedram, Zheng, and Li teaches all limitations of claim 1, from which claim 2 depends. Pedram/Zheng/Li further teaches the compression control circuit includes a metadata buffer storing the metadata block. ([0036], Pedram teaches that the metadata associated with compressed tensors is output to a metadata buffer.)

Regarding claim 3:

The combination of Pedram, Zheng, and Li teaches all limitations of claim 2, from which claim 3 depends. Pedram/Zheng/Li further teaches during a read operation, the compression control circuit generates a first read command for reading the compressed data block in response to the host read request, and ([0033], Pedram teaches that the compressor/decompressor unit can decompress or compress tensors depending on the direction they flow (with respect to the processor and memory); compressed tensors flowing from the memory towards the NPU are decompressed. The compressed tensors flowing from memory are interpreted to be the claimed read operation, during which a compressed data block is read. As shown with respect to claim 1, the combination of Pedram and Zheng results in the compression concepts of Pedram being applied to general host I/O requests, and therefore, in [0037], Zheng teaches the storage controller generating internal I/Os corresponding to host I/Os, including read requests, and teaches generating the first read command for reading the data in response to the host read request. Finally, in [0060], Zheng teaches that host read requests result in accesses for blocks in the block pool, which contains the data previously written to the storage media.)

Pedram/Zheng/Li further teaches generates a second read command for reading the metadata block from the memory device when the metadata block does not exist in the metadata buffer, and ([0037], Pedram teaches that during a decompression, metadata associated with the compressed tensors is received by the metadata buffer. While not explicit, this occurs during the decompression, which occurs when compressed tensors are read from memory. Furthermore, in [0032], Pedram generally teaches that metadata associated with compressed tensors may be stored in memory. Therefore, when metadata is stored in memory and needs to be received by the metadata buffer during a decompression operation, it is obvious that the metadata is being read from the memory and that the metadata buffer does not contain the metadata until then.)

Pedram/Zheng/Li further teaches the scheduler schedules the first read command and the second read command. (As shown with respect to claim 1, Zheng teaches a scheduling of all host commands, including read commands, which would obviously apply to the readings of the previous limitations of claim 3.)

One of ordinary skill in the art would have been motivated to make these modifications for the same reasons as in claim 1.

Regarding claim 4:

The combination of Pedram, Zheng, and Li teaches all limitations of claim 3, from which claim 4 depends. Pedram/Zheng/Li further teaches the compression control circuit generates a data block corresponding to the host read request by referring to the compressed data block received by the first read command and the metadata block corresponding to the compressed data block ([0037], Pedram teaches that a decompressor unit would input the compressed tensor to a zero injector logic that injects zero elements based on the metadata in the metadata buffer. The metadata is associated with the compressed tensors received, and overall the result of the decompression is the output dense tensor. Combining the teachings of Pedram and Zheng, generating an output tensor during a decompression to read stored compressed tensors, by referring to the compressed tensor and the metadata corresponding to it, can be applied to data blocks and host data operations, and the claimed generating a data block corresponding to the host read request by referring to the compressed data block received by the first read command and the metadata block corresponding to the compressed data block is an obvious result.)

One of ordinary skill in the art would have been motivated to make these modifications for the same reasons as in claim 1.

Regarding claim 10:

Pedram teaches an operation method of a memory controller, the operation method comprising: ([0049], Pedram teaches a controller that controls data movement from memory.)

generating a compressed data by compressing the data based on an N:M sparsity rule; generating a metadata including metadata for data elements included in the compressed data block; ([0033], Pedram teaches a compressor unit that compresses tensors during operations. Moreover, in [0034], Pedram teaches that the processing core can process structured sparsity arrangements to compute the output tensors that may be compressed, including various sparsities of a lower integer to a higher integer. Furthermore, in [0036], Pedram teaches that the memory can generate metadata associated with the compressed tensors. The compressed tensors, with processing based on structured sparsity arrangements of one integer to another, are interpreted to be the claimed generating of a compressed data by compressing data based on an N:M sparsity rule. The metadata being generated for a compressed tensor is interpreted to be the claimed generating metadata including metadata for data elements included in the compressed data.)
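The zero-injector logic cited from Pedram [0037] can be sketched as the inverse of the compression step, hypothetically assuming metadata that lists the in-group positions of the stored non-zero values. This layout is an illustrative assumption, not Pedram's disclosed format:

```python
def decompress_block(data, meta, m=4):
    """Re-inject zeros using position metadata to rebuild the dense block.

    Hypothetical sketch of zero-injection: each metadata entry lists the
    in-group positions of the stored non-zero values, so each group of m
    output elements is zeros except at those positions.
    """
    out, k = [], 0
    for positions in meta:
        group = [0] * m
        for j in positions:
            group[j] = data[k]  # place the next compressed value
            k += 1
        out.extend(group)
    return out
```

Under this layout, compressed data `[5, 3, 8, -7]` with metadata `[[0, 2], [1, 2]]` reconstructs the dense block `[5, 0, 3, 0, 0, 8, -7, 0]`.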
generating a first write for writing the compressed data in a memory device; and generating a second write for writing the metadata in the memory device ([0032], Pedram teaches that the memory may store the compressed tensors, and separately, that metadata associated with the compressed tensors may also be stored in the memory.)

N and M are natural numbers and M is greater than N. ([0034], Pedram teaches various structured sparsity arrangements, including 2:4, in which both numbers are natural numbers and the second number is greater than the first.)

Pedram does not appear to explicitly disclose data blocks; a host interface circuit configured to receive a host read request, a host write request, and a data block corresponding to the host write request from a host; a scheduler configured to schedule commands; or a memory interface circuit configured to transmit a memory command output from the scheduler to a memory device.

Zheng further teaches data blocks being written and receiving a data block corresponding to a host write request from a host ([0037-0039], Zheng teaches that a storage controller may receive host I/Os from the host system for data access, including host write requests and read requests, and that a media access manager (in the storage controller) may receive from the host a request to write one or more blocks of data. The blocks of Zheng also correspond to the tensor data of Pedram.)

Zheng further teaches generating write commands ([0037], Zheng teaches that the storage controller may perform media I/Os for storage media accesses corresponding to scheduled host writes.)

Pedram and Zheng are analogous art because they are from the same field of endeavor, memory management.

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Pedram and Zheng to achieve the combined result of the memory controller that can process host read and host write requests with data blocks, generating compressed data blocks by compressing the data block based on an N:M sparsity rule, generating a metadata block including metadata for the data elements included in the compressed data block, and generating write commands for writing those blocks in the memory device, with N and M being natural numbers and M being greater than N. One of ordinary skill in the art would have been motivated to make this modification in order to apply known compression systems used for matrices, which save storage size and reduce memory traffic, as discussed in Pedram [0026], to host data blocks in more general memory systems.

Pedram/Zheng do not appear to explicitly disclose when a number of non-zero elements among M elements included in a target data block is greater than N, splitting the target data block to generate one or more additional data blocks that include non-zero elements exceeding N, and compressing the one or more additional data blocks to generate one or more additional compressed data blocks.

However, Li teaches when a number of non-zero elements among M elements included in a target data block is greater than N, splitting the target data block to generate one or more additional data blocks that include non-zero elements exceeding N ([0191-0199], Li teaches that the dense matrices are compressed into sparse matrices by a process that first divides a dense matrix into a plurality of submatrices of similar size before compression, such that each submatrix contains a similar number of non-zero elements. As discussed previously, the combination of Pedram and Zheng teaches using and enforcing a predetermined structured sparsity arrangement defined by the ratio of non-zero elements to total elements, as certain codings used by compressor/decompressor units are suitable for a certain expected sparsity range. Li teaches that the dense matrices, which are matrices that have a high proportion of non-zero elements, are divided to achieve lower density. Therefore, combining the ideas yields the result of enforcing a structured sparsity arrangement where matrices that are denser than the arrangement defines are divided into sparser submatrices to match the expected sparsity of the compressor codings before compression, which is interpreted to be the claimed splitting of the target data block into one or more additional data blocks when a number of non-zero elements per M elements included in the data is greater than N.)

and compressing the one or more additional data blocks to generate one or more additional compressed data blocks ([0199-0203], Li teaches that after the division of the matrix, each submatrix is compressed and stored.)

Pedram/Zheng and Li are analogous art because they are from the same field of endeavor, data management. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Pedram/Zheng and Li, which generate compressed data blocks by compressing a data block obtained from a host write based on an N:M sparsity rule, to substitute Pedram's method of lowering density by simply pruning to achieve the N:M sparsity rule with the alternative method of lowering density by dividing the data into multiple sub-blocks and then compressing each sub-block to generate compressed sub-blocks.
One of ordinary skill in the art would have been motivated to make this modification to maintain a level of sparsity as set for the implementation, as certain compressing/decompressing codings are mainly suitable for an expected range of sparsity, as discussed in Pedram [0033].

Regarding claim 11:

The combination of Pedram, Zheng, and Li teaches all limitations of claim 10, from which claim 11 depends. Pedram/Zheng/Li further teaches the second write command is generated when all metadata included in the metadata block are valid. (As discussed with respect to claim 10, the combination of Pedram and Zheng teaches generating the second write command to write the metadata block into the memory in any situation where the metadata block is generated in the first place, which would include when all metadata included in the metadata block are valid. Therefore, the claimed second write command being generated when all metadata included in the metadata block are valid is taught.)

One of ordinary skill in the art would have been motivated to make this modification for the same reasons as in claim 10.

Regarding claim 12:

The combination of Pedram, Zheng, and Li teaches all limitations of claim 10, from which claim 12 depends. Pedram/Zheng/Li further teaches generating a first read command for reading a first compressed data block corresponding to a host read request ([0033], Pedram teaches that the compressor/decompressor unit can decompress or compress tensors depending on the direction they flow (with respect to the processor and memory); compressed tensors flowing from the memory towards the NPU are decompressed. The compressed tensors flowing from memory are interpreted to be the claimed reading, during which a compressed data block is read. As shown with respect to claim 10, the combination of Pedram and Zheng results in the compression concepts of Pedram being applied to general host I/O requests, and therefore, in [0037], Zheng teaches the storage controller generating internal I/Os corresponding to host I/Os, including read requests, and teaches generating the first read command for reading the data in response to the host read request. Finally, in [0060], Zheng teaches that host read requests result in accesses for blocks in the block pool, which contains the data previously written to the storage media.)

Pedram/Zheng/Li further teaches generating a second read command for reading a first metadata block corresponding to the first compressed data block ([0037], Pedram teaches that during a decompression, metadata associated with the compressed tensors is received by the metadata buffer. While not explicit, this occurs during the decompression, which occurs when compressed tensors are read from memory. Furthermore, in [0032], Pedram generally teaches that metadata associated with compressed tensors may be stored in memory, so reading the metadata, which is stored as a block, is taught.)

Pedram/Zheng/Li further teaches generating a data block corresponding to the host read request based on the first compressed data block and the first metadata block ([0037], Pedram teaches that a decompressor unit would input the compressed tensor to a zero injector logic that injects zero elements based on the metadata in the metadata buffer. The metadata is associated with the compressed tensors received, and overall the result of the decompression is the output dense tensor. Combining the teachings of Pedram and Zheng, generating an output tensor during a decompression to read stored compressed tensors, by referring to the compressed tensor and the metadata corresponding to it, can be applied to data blocks and host data operations, and the claimed generating a data block corresponding to the host read request by referring to the compressed data block received by the first read command and the metadata block corresponding to the compressed data block is an obvious result.)

One of ordinary skill in the art would have been motivated to make these modifications for the same reasons as in claim 10.

Claims 5-9 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over PEDRAM et al., U.S. Pub. No. 20240095518 (hereinafter “Pedram”), in view of Zheng et al., U.S. Pub. No. 20230110401 (hereinafter “Zheng”), in view of LI et al., U.S. Pub. No. 20180046914 (hereinafter “Li”), in view of Cheng et al., U.S. Pub. No. 20110320532 (hereinafter “Cheng”).

Regarding claim 5:

The combination of Pedram, Zheng, and Li teaches all limitations of claim 2, from which claim 5 depends. Pedram/Zheng/Li further teaches when the compression control circuit generates the compressed data block and the one or more additional compressed data blocks (as discussed with respect to claim 1, the combination of Pedram/Zheng/Li teaches the generation of a compressed data block and additional compressed data blocks).

Pedram/Zheng/Li do not appear to explicitly disclose the compression control circuit further includes a mapping table, and wherein the mapping table stores relationships between the compressed data block and the additional compressed data block. However, Cheng teaches a mapping, and wherein the mapping stores relationships between the data block and the additional data block.
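A mapping table of the kind the rejection reads onto Cheng ([0055], [0059-0060]) might be sketched as follows; the class and method names are hypothetical, and the dict-backed table is an illustrative assumption rather than Cheng's disclosed structure:

```python
class MappingTable:
    """Hypothetical sketch for the claim 5 reading of Cheng: records which
    additional compressed blocks were produced from the same original data
    block, so a later read can locate and fetch all of them.
    """

    def __init__(self):
        self._table = {}

    def record(self, block_id, additional_ids):
        """Store the relationship between a block and its additional blocks."""
        self._table[block_id] = list(additional_ids)

    def additional_blocks(self, block_id):
        """Return related additional blocks; empty if the block was never split."""
        return self._table.get(block_id, [])


# Usage: a split that produced two additional compressed blocks.
table = MappingTable()
table.record("blk-1", ["blk-1a", "blk-1b"])
```

On a subsequent read of `blk-1`, the lookup returns `blk-1a` and `blk-1b`, mirroring the claim 7 behavior of issuing further read commands when entries exist in the mapping table.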
([0059-0060], Cheng teaches that when a complete file is created as part writing data to an object storage server, the file is split into sub-data, and a mapping between the file and the sub-data blocks is established according to the identifier of the file. While not explicitly represented as a table, a table is an obvious form such a mapping can take, which in [0055], is a way that Cheng describes mappings.) Pedram/Zheng/Li and Cheng are analogous art because they are from the same field of endeavor, data management in memory systems. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Pedram/Zheng/Li and Cheng, to achieve the result of a system which generates a compressed data block and additional compressed data block from an input data block as part of a host write request, to also include a mapping table which stores relationships between compressed data blocks and their additional compressed data blocks when generated. One of ordinary skill in the art would have been motivated to make this modification in order to enable the retrieval of all associated sub-blocks that are part of the same original data when the original data is read as discussed in Cheng [0068-0074]. Regarding claim 6: The combination of Pedram, Zheng, Li, and Cheng teaches all limitations of claim 5, from which claim 6 depends. Pedram/Zheng/Li/Cheng further teaches the compression control circuit generates one or more additional metadata blocks respectively corresponding to the one or more additional compressed data blocks, and wherein the compression control circuit further generates a third write command for the one or more additional compressed data blocks, and a fourth write command for the one or more additional metadata blocks. 
(As shown with respect to claim 1, the memory controller of the combination of Pedram and Zheng teaches generating a metadata block corresponding to a compressed data block, and performing a write command for writing both the compressed data block and the metadata block. In combination with the teachings of Li to split the data of one operation into multiple compressed data units, it is obvious to extend these teachings to generating an additional metadata block for the additional compressed data units, and to the claimed generating of a third write command for the additional compressed data block and a fourth write command for the additional metadata block.) One of ordinary skill in the art would have been motivated to make this modification for the same reasons as in claim 1.

Regarding claim 7: The combination of Pedram, Zheng, Li, and Cheng teaches all limitations of claim 6, from which claim 7 depends. Pedram/Zheng/Li/Cheng further teaches that the compression control circuit generates a first read command for reading a first compressed data block corresponding to the host read request, and further generates a third read command for reading a first additional compressed data block corresponding to the first compressed data block when information for the first additional compressed data block exists in the mapping table. ([0068-0074], Cheng teaches that after receiving a read request for a file, the system searches for the mappings between the file and the sub-data blocks established when the file was written. As discussed with respect to claim 5, the combination of Pedram/Zheng/Li/Cheng teaches a system that generates a first compressed data block and any additional compressed data blocks, whose relationship is recorded in a mapping table.
Therefore, the additional teachings of Cheng, in combination with the previously discussed combination, result in the claimed generating of a first read command for reading a first compressed data block corresponding to the host read request, and further generating a third read command for reading a first additional compressed data block corresponding to the first compressed data block when information for the first additional compressed data block exists in the mapping table.) One of ordinary skill in the art would have been motivated to make this modification for the same reasons as in claim 5.

Regarding claim 8: The combination of Pedram, Zheng, Li, and Cheng teaches all limitations of claim 7, from which claim 8 depends. Pedram/Zheng/Li/Cheng further teaches that the compression control circuit further generates a fourth read command for reading a first additional metadata block, corresponding to the first additional compressed data block, from the memory device when the first additional metadata block does not exist in the metadata buffer. ([0037], Pedram teaches that during a decompression, metadata associated with the compressed tensors is received by the metadata buffer. While not explicit, this occurs during the decompression, which occurs when compressed tensors are read from memory. Furthermore, in [0032], Pedram generally teaches that metadata associated with compressed tensors may be stored in memory. Therefore, when metadata is stored in memory and needs to be received by the metadata buffer during a decompression operation, it is obvious that the metadata is being read from the memory and that the metadata buffer does not contain the metadata until then.
Further, as discussed with respect to claim 6, the combination of Pedram/Zheng/Li/Cheng teaches the situation where an additional compressed data block and additional metadata corresponding to it would be created, and so, to process a read of the first additional compressed data block, reading the additional metadata when the metadata buffer does not contain it is an obvious application.) One of ordinary skill in the art would have been motivated to make this modification for the same reasons as in claim 5.

Regarding claim 9: The combination of Pedram, Zheng, Li, and Cheng teaches all limitations of claim 7, from which claim 9 depends. Pedram/Zheng/Li/Cheng further teaches that the compression control circuit generates a data block corresponding to the host read request based on the first compressed data block, the first additional compressed data block, the first metadata block, and the first additional metadata block. ([0037], Pedram teaches that a decompressor unit would input the compressed tensor to zero injector logic that injects zero elements based on the metadata in the metadata buffer. The metadata is associated with the compressed tensors received, and the overall result of the decompression is the output dense tensor. Combining these teachings of Pedram with those of the combination of Pedram/Zheng/Li/Cheng, the generation of a decompressed output data block corresponding to the host read request, based on all the various compressed sub-blocks associated with the original data established during writing and all of their corresponding metadata, is taught.) One of ordinary skill in the art would have been motivated to make this modification for the same reasons as in claim 5.

Regarding claim 13: The combination of Pedram, Zheng, and Li teaches all limitations of claim 10, from which claim 13 depends.
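The zero-injection decompression attributed to Pedram [0037] above, rebuilding a dense output from stored non-zero elements plus metadata marking where zeros were removed, can be sketched as follows. This assumes the metadata takes the form of a per-position bitmask, which is an illustrative assumption; the reference's actual metadata encoding may differ.

```python
# Minimal sketch of metadata-driven zero injection, as characterized in the
# discussion of Pedram [0037]. Assumes metadata is a per-position bitmask
# (1 = non-zero element stored, 0 = a zero was pruned during compression);
# the actual metadata encoding in the reference may differ.

def inject_zeros(compressed, mask):
    """Rebuild dense data from stored non-zeros and the metadata mask.
    Assumes the mask contains exactly as many 1 bits as stored elements."""
    dense = []
    it = iter(compressed)
    for bit in mask:
        # Consume the next stored non-zero where the mask says one was
        # kept; otherwise re-insert the pruned zero.
        dense.append(next(it) if bit else 0)
    return dense

# Writing [5, 0, 0, 7] compresses to non-zeros [5, 7] with mask [1, 0, 0, 1];
# decompression restores the original dense data.
```

In the combination argued above, each compressed sub-block would carry its own mask, and the reassembled output would concatenate the densified sub-blocks recorded in the mapping table.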
Pedram/Zheng/Li further teaches generating an additional metadata block including metadata for data elements included in the additional compressed data block; generating a third write command for writing the additional compressed data block; and generating a fourth write command for writing the additional metadata block. (As shown with respect to claim 10, the memory controller of the combination of Pedram and Zheng teaches generating a metadata block corresponding to a compressed data block, and performing a write command for writing both the compressed data block and the metadata block. In combination with the teachings of Li to split the data of one operation into multiple compressed data units, it is obvious to extend these teachings to generating an additional metadata block for the additional compressed data units, and to the claimed generating of a third write command for the additional compressed data block and a fourth write command for the additional metadata block.) Pedram/Zheng/Li do not appear to explicitly disclose storing relationships between the compressed data block and the one or more additional compressed data blocks. However, Cheng teaches storing relationships between the data block and the one or more additional data blocks ([0059-0060], Cheng teaches that when a complete file is created as part of writing data to an object storage server, the file is split into sub-data blocks, and a mapping between the file and the sub-data blocks is established according to the identifier of the file.) Pedram/Zheng/Li and Cheng are analogous art because they are from the same field of endeavor, data management in memory systems.
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Pedram/Zheng/Li and Cheng so that a system which generates a compressed data block and additional compressed data blocks from an input data block, as part of a host write request, also includes a mapping table which stores relationships between compressed data blocks and their additional compressed data blocks when generated. One of ordinary skill in the art would have been motivated to make this modification in order to enable the retrieval of all associated sub-blocks that are part of the same original data when the original data is read, as discussed in Cheng [0068-0074].

Regarding claim 14: The combination of Pedram, Zheng, Li, and Cheng teaches all limitations of claim 13, from which claim 14 depends. Pedram/Zheng/Li/Cheng further teaches generating a first read command for reading a first compressed data block corresponding to a host read request; generating a third read command for reading a first additional compressed data block corresponding to the first compressed data block when information for the first additional compressed data block exists in the relationships; ([0068-0074], Cheng teaches that after receiving a read request for a file, the system searches for the mappings between the file and the sub-data blocks established when the file was written. As discussed with respect to claim 13, the combination of Pedram/Zheng/Li/Cheng teaches a system that generates a first compressed data block and any additional compressed data blocks, whose relationship is recorded in a mapping table.
Therefore, the additional teachings of Cheng, in combination with the previously discussed combination, result in the claimed generating of a first read command for reading a first compressed data block corresponding to the host read request, and further generating a third read command for reading a first additional compressed data block corresponding to the first compressed data block when information for the first additional compressed data block exists in the mapping table.) Pedram/Zheng/Li/Cheng further teaches generating a fourth read command for reading a first additional metadata block corresponding to the first additional compressed data block; ([0037], Pedram teaches that during a decompression, metadata associated with the compressed tensors is received by the metadata buffer. While not explicit, this occurs during the decompression, which occurs when compressed tensors are read from memory. Furthermore, in [0032], Pedram generally teaches that metadata associated with compressed tensors may be stored in memory, and so the reading of the metadata is taught. Further, as discussed with respect to claim 13, the combination of Pedram/Zheng/Li/Cheng teaches the situation where an additional compressed data block and additional metadata corresponding to it would be created, and so, to process a read of the first additional compressed data block, the reading of the additional metadata is an obvious application.) Pedram/Zheng/Li/Cheng further teaches generating a data block corresponding to the host read request based on the first compressed data block, the first additional compressed data block, a first metadata block corresponding to the first compressed data block, and the first additional metadata block. ([0037], Pedram teaches that a decompressor unit would input the compressed tensor to zero injector logic that injects zero elements based on the metadata in the metadata buffer.
The metadata is associated with the compressed tensors received, and the overall result of the decompression is the output dense tensor. Combining these teachings of Pedram with those of the combination of Pedram/Zheng/Li/Cheng, the generation of a decompressed output data block corresponding to the host read request, based on all the various compressed sub-blocks associated with the original data established during writing and all of their corresponding metadata, is taught.) One of ordinary skill in the art would have been motivated to make this modification for the same reasons as in claim 13.

Conclusion: Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAITLYN HUNG PHAM, whose telephone number is (571) 272-6333. The examiner can normally be reached Mon-Thurs 8:00-6:00 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rocio Del Mar Perez-Velez can be reached at 571-270-5935. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /K.H.P./Examiner, Art Unit 2133 /ROCIO DEL MAR PEREZ-VELEZ/Supervisory Patent Examiner, Art Unit 2133

Prosecution Timeline

Nov 13, 2024
Application Filed
Oct 25, 2025
Non-Final Rejection — §103
Dec 23, 2025
Applicant Interview (Telephonic)
Dec 23, 2025
Examiner Interview Summary
Jan 05, 2026
Response Filed
Feb 25, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554636
MEMORY SYSTEM AND METHOD OF CONTROLLING MEMORY SYSTEM
Granted Feb 17, 2026 (2y 5m to grant)


Prosecution Projections

3-4
Expected OA Rounds
100%
Grant Probability
99%
With Interview (+100.0%)
1y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
