The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Action is in response to communications filed 12/8/2025.
Claims 1, 2, 10 and 20 are amended.
Claims 1-20 are pending.
Claims 1-20 are rejected.
Response to Arguments
Applicant's arguments filed December 8, 2025 have been fully considered and are persuasive with respect to the prior art rejections.
As per the 103 rejection of claims 1, 10 and 20, Applicant argued that Sharon fails to disclose or suggest the feature "wherein the image includes the at least one compressed format file"; the examiner now relies on a newly cited reference, Kataoka, to disclose the claimed limitation. Regarding the 112 rejections, Applicant has provided multiple links describing commonly known language, but the claims remain rejected because it is not clearly described how a library shared by an application is "removed," how the "removing" is performed "offline," or how the "reducing the compressed data area before obtaining the image of the compressed data area based on the at least one compressed format file preloaded" recited in claim 6 is performed.
Claim Rejections - 35 U.S.C. 112
The following is a quotation of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), first paragraph:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 4 and 14 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Claim 4 recites "removing at least one library shared by the at least one application." The specification recites only the exact language of the claim; there is no disclosure of what the shared library actually is or how it is removed after preloading each of the at least one compressed format file, as no further details are provided in the specification.
Claims 5 and 15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Claim 5 recites that the "removing is performed offline." The specification recites only the exact language of the claim; there is no disclosure of what the term "offline" means in this context or how the removing of the library is performed offline, as no further details are provided in the specification.
Claims 6 and 16 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Claim 6 recites "reducing the compressed data area before obtaining the image of the compressed data area based on the at least one compressed format file preloaded." The specification recites only the exact language of the claim; there is no disclosure of how the compressed data area is reduced before obtaining said image, as no further details are provided in the specification.
Claims 7-9 and 17-19 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. These claims recite operations "performed offline." The specification recites only the exact language of the claims; there is no disclosure of what the term "offline" means in this context or how such operations are performed offline, as no further details are provided in the specification.
All dependent claims are rejected as having the same deficiencies as the claims they depend from.
Claim Rejections - 35 U.S.C. 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. (US PGPUB 2012/0260009, hereinafter "Lu") in view of Kataoka et al. (US PGPUB 2009/0299973, hereinafter "Kataoka").
As per independent claim 1, Lu discloses a memory operation method, comprising: using a compression algorithm to compress at least one application in a non-volatile memory into at least one compressed format file [(Paragraphs 0019 and 0021-0023; FIGs. 1 and 2 and related text) wherein Lu teaches that compression/decompression engine 17 is configured to compress and decompress data associated with operations internal to data storage device 10, such as read-modify-write cycles used to manage data stored in flash memory, to correspond to the claimed limitation]; preloading the at least one compressed format file from the non-volatile memory into a compressed data area in a volatile memory [(Paragraphs 0018 and 0021-0023; FIGs. 1 and 2 and related text) wherein Lu teaches that primary compression/decompression engine 16 is configured to compress data received from host device 14 via host interface 15, to store the compressed data in memory 12 via memory interface 20, and to decompress compressed data stored in memory 12 prior to the data being sent to host device 14 via host interface 15; compression input buffer 25 is configured to store data received from host device 14 via host interface 15 and may be a first-in/first-out (FIFO) buffer; compression engine core 26 is configured to compress the data stored in compression input buffer 25 and store the compressed data in compression output buffer 27, which, similar to compression input buffer 25, may be a FIFO buffer and is configured to store the compressed data until it is stored in memory 12; compression bypass buffer 24 is configured to store data received from host device 14 via host interface 15 that is intended to bypass compression sub-system 23 and therefore not be compressed prior to being stored in memory 12, and such data may include command and/or control information received from host device 14 and pre-compressed data, such as audio and video data compressed according to various industry standards, to correspond to the claimed limitation]; obtaining an image of the compressed data area based on the at least one compressed format file preloaded into the compressed data area, and writing back the image to the non-volatile memory [(Paragraphs 0018, 0021-0023, 0028-0029 and 0035; FIGs. 1 and 3 and related text) wherein Lu teaches that compression/decompression engine 17 may be used to compress and/or decompress data necessary to perform various tasks within data storage system 10; for example, when flash memory is used to implement storage medium 13, various housekeeping tasks, such as read-modify-write operations, garbage collection operations and wear-leveling algorithms, are performed to maintain the data stored within the flash memory; these housekeeping tasks may require compressed data stored in storage medium 13 to be temporarily decompressed in order to perform the housekeeping task and subsequently recompressed prior to being stored back in storage medium 13, and may be performed in the background of operations within data storage controller 11 without stopping the operation of primary compression/decompression engine 16, to correspond to the claimed limitation]; and decompressing the at least one compressed format file into the at least one application in the volatile memory [(Paragraphs 0018, 0021-0023 and 0028-0029; FIGs. 1 and 2 and related text) wherein Lu teaches that decompression input buffer 32 is configured to store compressed data transferred from memory 12 and may be a first-in/first-out (FIFO) buffer; decompression engine core 33 is configured to decompress the compressed data stored in decompression input buffer 32 and to store the decompressed data in decompression output buffer 34, which, similar to decompression input buffer 32, may be a FIFO buffer and is configured to store the decompressed data until it is transferred to host device 14 via host interface 15; decompression bypass buffer 31 is configured to store data transferred from memory 12 that is intended to bypass decompression sub-system 29 and therefore not be decompressed prior to being transferred to host device 14, and such data may include command and/or control information communicated to host device 14 and data that was initially received from host device 14 in a compressed format, such as audio and video data compressed according to various industry standards, to correspond to the claimed limitation].
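For orientation only, and not as part of the record, the flow recited in claim 1 can be sketched in Python, with zlib standing in for the claimed compression algorithm and plain dictionaries standing in for the non-volatile and volatile memories; all identifiers below (nonvolatile, compressed_data_area, image) are illustrative assumptions that appear in neither the claims nor the cited references:

```python
import zlib

# Illustrative sketch of the claimed method steps, not any reference's implementation.
application = b"example application binary contents" * 100

# Step 1: compress the application in "non-volatile memory" into a compressed format file.
nonvolatile = {"app.cmp": zlib.compress(application)}

# Step 2: preload the compressed format file into a compressed data area in "volatile memory".
compressed_data_area = dict(nonvolatile)

# Step 3: obtain an image of the compressed data area and write it back to non-volatile memory;
# note the image includes the compressed format file (the limitation mapped to Kataoka).
image = dict(compressed_data_area)
nonvolatile["area.img"] = image["app.cmp"]

# Step 4: decompress the compressed format file into the application in volatile memory.
volatile_app = zlib.decompress(compressed_data_area["app.cmp"])
assert volatile_app == application  # lossless round trip
```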
Lu does not appear to explicitly disclose wherein the image includes the at least one compressed format file.
However, Kataoka discloses wherein the image includes the at least one compressed format file [(Paragraphs 0007, 0087-0089, 0095-0097 and 0199 and related text) where Kataoka teaches combining the compressed files in descending order of access frequency after the sorting at the sorting such that a storage capacity of a cache area for a storage area that stores therein the compressed file group is not exceeded by a combined size of the compressed files combined; and writing, from the storage area into the cache area, the compressed files combined at the combining, the compressed files combined being written prior to a search of the compressed files combined to correspond to the claimed limitation].
Lu and Kataoka are analogous art because they are from the same field of endeavor of data storage management.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Lu and Kataoka before him or her, to modify the method of Lu to include the caching of compressed format files of Kataoka because it would enhance system performance.
The motivation for doing so would be the benefit that "increased speed of a full text search can be realized by causing the cache area to be a resident memory and increasing the efficiency of the server resources" (Kataoka, Paragraph 0199).
Therefore, it would have been obvious to combine Lu and Kataoka to obtain the invention as specified in the instant claim.
As for independent claims 10 and 20, the applicant is directed to the rejections to claim 1 set forth above, as they are rejected based on the same rationale.
Claims 2, 3 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Kataoka, as applied to claims 1 and 10 above, and further in view of Rostoker et al. (US PGPUB 2015/0178013, hereinafter "Rostoker").
As per dependent claim 2, Lu discloses the method of claim 1.
Lu does not appear to explicitly disclose wherein the non-volatile memory complies with the eMMC (Embedded MultiMediaCard) flash memory standard.
However, Rostoker discloses wherein the non-volatile memory complies with the eMMC (Embedded MultiMediaCard) flash memory standard [(Paragraph 0017; FIG. 1) wherein the data storage device 102 may be a memory card. The data storage device 102 may operate in compliance with a JEDEC industry specification, one or more other specifications, or a combination thereof. For example, the data storage device 102 may operate in compliance with an eMMC specification, in compliance with a USB specification, a UFS specification, an SD specification, or a combination thereof to correspond to the claimed limitation].
Lu and Rostoker are analogous art because they are from the same field of endeavor of data storage management.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Lu and Rostoker before him or her, to modify the method of Lu to include the flash storage standard of Rostoker because it would enhance efficiency.
The motivation for doing so would be that it "avoids latency and power usage associated with decompressing large quantities of unrelated data during a read operation" (Rostoker, Paragraph 0047).
Therefore, it would have been obvious to combine Lu and Rostoker to obtain the invention as specified in the instant claim.
As per dependent claim 3, Rostoker teaches wherein the compression algorithm is an LZ4 compression algorithm, an LZO compression algorithm or a zlib compression algorithm [(Paragraph 0077) where the processor may execute one or more instructions to perform data compression using Huffman coding, Arithmetic coding, Prediction with Partial string Matching (PPM) compression, Context-Tree Weighing (CTW) compression, Lempel-Ziv (LZO) coding, or another compression technique to correspond to the claimed limitation].
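For orientation only, and not as part of the record, a minimal sketch of the zlib algorithm recited in claim 3 follows; zlib is the only one of the three recited algorithms available in the Python standard library (LZ4 and LZO would require third-party bindings):

```python
import zlib

# Highly repetitive input, chosen so the lossless compression is clearly visible.
data = b"compressible payload " * 256

packed = zlib.compress(data, level=6)   # DEFLATE-based zlib stream, default-ish level
assert len(packed) < len(data)          # redundant input compresses well
assert zlib.decompress(packed) == data  # lossless round trip
```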
As per dependent claim 12, Rostoker teaches wherein the volatile memory is double data rate synchronous dynamic random access memory (DDR SDRAM) [(Paragraph 0081; FIG. 1) where the semiconductor memory devices include volatile memory devices, such as dynamic random access memory ("DRAM") or static random access memory ("SRAM") devices, non-volatile memory devices, such as resistive random access memory ("ReRAM"), electrically erasable programmable read only memory ("EEPROM"), flash memory (which can also be considered a subset of EEPROM), ferroelectric random access memory ("FRAM"), and other semiconductor elements capable of storing information. Each type of memory device may have different configurations. For example, flash memory devices may be configured in a NAND or a NOR configuration to correspond to the claimed limitation].
As for dependent claim 11, the applicant is directed to the rejections to claim 2 set forth above, as they are rejected based on the same rationale.
As for dependent claim 13, the applicant is directed to the rejections to claim 3 set forth above, as they are rejected based on the same rationale.
Claims 4, 6, 14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Kataoka, as applied to claims 1 and 10 above, and further in view of Gupta et al. (US PGPUB 2017/0228282, hereinafter "Gupta").
As per dependent claim 4, Lu discloses the method of claim 1.
Lu does not appear to explicitly disclose removing at least one library shared by the at least one application after preloading each of the at least one compressed format file of the at least one application into the compressed data area in the volatile memory.
However, Gupta discloses removing at least one library shared by the at least one application after preloading each of the at least one compressed format file of the at least one application into the compressed data area in the volatile memory [(Paragraphs 0028-0030; FIG. 1) wherein, during cache insertion of at least one data block 131 classified as dirty (i.e., no backup copy) into the data cache 120, the adaptive coding unit 250 encodes the at least one data block 131 with the high fault tolerance erasure code for increased data redundancy in the data cache 120. The cache insertion unit 220 inserts the at least one data block 131 encoded with the high fault tolerance erasure code into the data cache 120. In response to the cache capacity of the data cache 120 exceeding the pre-determined bound/threshold, the destaging unit 230 triggers a destaging of at least one data block 131 having the following properties: (1) the at least one data block 131 is dirty (i.e., no backup copy), (2) the at least one data block 131 is relatively cold (i.e., infrequently accessed), and (3) the at least one data block 131 is encoded with the high fault tolerance erasure code. In response to the destaging of the at least one data block 131, the adaptive coding unit 250 converts the at least one data block 131 to the low fault tolerance erasure code for decreased data redundancy in the data cache 120, as the at least one data block 131 becomes clean when destaged (i.e., at least one backup copy generated during the destaging). The decreased data redundancy helps reduce cache space usage of the data cache 120, where it would have been obvious to one of ordinary skill in the art to utilize the removal of the redundant data copy from the cache, as taught by Gupta, to correspond to the claimed limitation].
Lu and Gupta are analogous art because they are from the same field of endeavor of data storage management.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Lu and Gupta before him or her, to modify the method of Lu to include the cache destaging and redundancy reduction of Gupta because it would enhance efficiency.
The motivation for doing so would be to "provide increased recovery performance, increased reliability (i.e., increased data redundancy) and decreased storage overhead (i.e., increased storage efficiency)" (Gupta, Paragraph 0019).
Therefore, it would have been obvious to combine Lu and Gupta to obtain the invention as specified in the instant claim.
As per dependent claim 6, Gupta discloses reducing the compressed data area before obtaining the image of the compressed data area based on the at least one compressed format file preloaded into the compressed data area [(Paragraphs 0028-0030; FIG. 1) for the reasons discussed with respect to claim 4 above: destaging cold, dirty data blocks and converting them to the low fault tolerance erasure code decreases data redundancy and reduces cache space usage of the data cache 120, where it would have been obvious to one of ordinary skill in the art to utilize the removal of the redundant data copy from the cache, as taught by Gupta, to correspond to the claimed limitation].
As for dependent claim 14, the applicant is directed to the rejections to claim 4 set forth above, as they are rejected based on the same rationale.
As for dependent claim 16, the applicant is directed to the rejections to claim 6 set forth above, as they are rejected based on the same rationale.
Claims 5, 7, 15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Kataoka and Gupta, as applied to claims 4 and 14 above, and further in view of Sprouse et al. (US PGPUB 2014/0136759, hereinafter "Sprouse").
As per dependent claim 5, Lu discloses the method of claim 4.
Lu does not appear to explicitly disclose wherein the operation of removing the at least one library shared by the at least one application is performed offline.
However, Sprouse discloses wherein the operation of removing the at least one library shared by the at least one application is performed offline [(Paragraph 0127; FIG. 1) wherein the data in 3801 can be removed and the metadata in 3805 is modified to point to the new physical location. After the de-duplication, the data is normally fragmented, and garbage collection can be triggered to consolidate the data and remove the deleted data and keys. Whether done in-line or off-line, the sNAND 3803 provides a large memory space and good key matching speed due to the degree of parallelism discussed in the preceding sections. Generally, as off-line de-duplication can be done during garbage collection, it has no performance penalty; although the in-line process does have the tradeoff of a performance penalty, it can also save on primary drive capacity, reduce write amplification leading to increased endurance of flash drives, and reduce power consumption by performing fewer write operations. Off-line de-duplication can also provide more free capacity for "overprovisioning" and improving performance, where overprovisioning is part of the extra storage that can be used to compact data and other functions, where it would have been obvious to one of ordinary skill in the art to utilize the removal of data offline, as taught by Sprouse, to correspond to the claimed limitation].
Lu and Sprouse are analogous art because they are from the same field of endeavor of data storage management.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Lu and Sprouse before him or her, to modify the method of Lu to include the offline data removal of Sprouse because it would enhance efficiency.
The motivation for doing so would be the benefit of "improving efficiency of de-duplication operations of the sort that are valuable in cleaning up data bases" (Sprouse, Paragraph 0073).
Therefore, it would have been obvious to combine Lu and Sprouse to obtain the invention as specified in the instant claim.
As per dependent claim 7, Sprouse discloses wherein the operation of reducing the compressed data area is performed offline [(Paragraph 0127; FIG. 1) wherein, as discussed with respect to claim 5 above, Sprouse teaches that de-duplication and garbage collection may be performed off-line with no performance penalty, providing more free capacity for "overprovisioning" and improving performance, where it would have been obvious to one of ordinary skill in the art to perform the removal of redundant data offline to reduce the capacity used, as taught by Sprouse, in combination with the reduction of the compressed data area taught by Gupta, to correspond to the claimed limitation].
As for dependent claim 15, the applicant is directed to the rejections to claim 5 set forth above, as they are rejected based on the same rationale.
As for dependent claim 17, the applicant is directed to the rejections to claim 7 set forth above, as they are rejected based on the same rationale.
Claims 8, 9, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Kataoka, as applied to claims 1 and 10 above, and further in view of Okita et al. (US PGPUB 2013/0067147, hereinafter "Okita").
As per dependent claim 8, Lu discloses the method of claim 1.
Lu does not appear to explicitly disclose wherein the operation of loading the at least one compressed format file from the non-volatile memory into the compressed data area in the volatile memory is performed offline.
However, Okita discloses wherein the operation of loading the at least one compressed format file from the non-volatile memory into the compressed data area in the volatile memory is performed offline [(Paragraph 0023; FIG. 1) wherein the NAND flash memories 70, 71, . . . , 7n store the specified readout data in the read buffer 11 (step S201). In this case, the readout data stored in the read buffer 11 is stored asynchronously, irrespective of the issued order of the read commands and the order of the LBA. This is because the sequential property (continuity) of the data is lost as the write is carried out in parallel with respect to each channel of the NAND flash memories 70, 71, . . . , 7n when writing the data, and is also caused by individual differences in the read access times of the NAND flash memories 70, 71, . . . , 7n, where it would have been obvious to one of ordinary skill in the art to utilize the asynchronous data transfer from the non-volatile memory to the volatile memory, as taught by Okita, to correspond to the claimed limitation].
Lu and Okita are analogous art because they are from the same field of endeavor of data storage management.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Lu and Okita before him or her, to modify the method of Lu to include the asynchronous data transfer of Okita because it would enhance efficiency.
The motivation for doing so would be to improve read performance by storing readout data asynchronously, in parallel, across the channels of the NAND flash memories (Okita, Paragraph 0023).
Therefore, it would have been obvious to combine Lu and Okita to obtain the invention as specified in the instant claim.
As per dependent claim 9, Okita teaches wherein the operation of obtaining the image of the compressed data area based on the at least one compressed format file loaded into the compressed data area is performed offline [(Paragraph 0023; FIG. 1) wherein the NAND flash memories 70, 71, . . . , 7n store the specified readout data in the read buffer 11 (step S201). In this case, the readout data stored in the read buffer 11 is stored asynchronously, irrespective of the issued order of the read commands and the order of the LBA. This is because the sequential property (continuity) of the data is lost as the write is carried out in parallel with respect to each channel of the NAND flash memories 70, 71, . . . , 7n when writing the data, and is also caused by individual differences in the read access times of the NAND flash memories 70, 71, . . . , 7n, where it would have been obvious to one of ordinary skill in the art to utilize the asynchronous data transfer from the non-volatile memory to the volatile memory, as taught by Okita, to correspond to the claimed limitation].
As for dependent claim 18, the applicant is directed to the rejections to claim 8 set forth above, as they are rejected based on the same rationale.
As for dependent claim 19, the applicant is directed to the rejections to claim 9 set forth above, as they are rejected based on the same rationale.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mohamed Gebril, whose email address is mohamed.gebril@uspto.gov.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jared Rutz can be reached on 571-272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-270-2857.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMED M GEBRIL/Primary Examiner, Art Unit 2135