DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5 and 15-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Vishne et al. (hereinafter Vishne; U.S. Publication No. 2023/0289093).
Regarding claim 1, Vishne teaches:
A method for maintaining the integrity of a memory component, the method comprising:
receiving, by a memory controller, a plurality of memory requests including at least one write request (See [0034] “The controller 206 includes a host interface module (HIM) 210 configured to receive and send data between the host device 202 and the controller 206. The controller 206 further includes a command executer 212 coupled to the HIM 210. The command executer 212 may be configured to execute read and write commands received from the host device 202.”);
allocating a data block into a buffer cache to cache the at least one write request (See [0031] “the controller 108 may use volatile memory 112 as a cache. For instance, the controller 108 may store cached information in volatile memory 112 until cached information is written to non-volatile memory 110.”);
detecting whether sufficient time has elapsed beyond a predetermined threshold (See Figure 8, step 802 “Receive command to send to memory device at predetermined timer setting” See [0007] “set a value for a timer that predicts availability of the memory device; receive a first command; send the command for processing to the memory device; check status of the memory device; determine the memory device is available;”),
in response to sufficient time having elapsed beyond the predetermined threshold,
flagging a backend memory as being available (See Figure 8, step 806, in which the memory device is determined to not be busy (i.e., No) and is thus determined/flagged to be “available”. See [0044] “Once the memory device is not busy at 806, the command is processed at 810” See [0007] “set a value for a timer that predicts availability of the memory device; receive a first command; send the command for processing to the memory device; check status of the memory device; determine the memory device is available”); and
in response to the flagging, fetching the at least one write request to write data to the memory component (See [0044] “The command is to be sent to the memory device through the FIM over the data bus at a predetermined time set by a timer.” See [0044] “Once the memory device is not busy at 806, the command is processed at 810”).
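For illustration only, and forming no part of the rejection, the timer-gated flush scheme mapped to claim 1 above (caching write requests, flagging the backend as available once a predetermined timer elapses, then fetching the cached writes) may be sketched as follows. All class and method names are hypothetical and are not drawn from Vishne:

```python
import time

class MemoryController:
    """Hypothetical sketch of the cited timer-based flush scheme."""

    def __init__(self, threshold_s):
        self.threshold_s = threshold_s   # predetermined timer value
        self.write_queue = []            # cached write requests (cf. Vishne [0031])
        self.last_send = time.monotonic()
        self.backend_available = False

    def receive(self, request):
        # Allocate the incoming write request into the buffer cache.
        self.write_queue.append(request)

    def poll(self, now=None):
        # Flag the backend as available once the threshold elapses
        # (cf. Vishne Fig. 8, steps 802-810), then fetch cached writes.
        now = time.monotonic() if now is None else now
        if now - self.last_send >= self.threshold_s:
            self.backend_available = True
        if self.backend_available and self.write_queue:
            flushed = self.write_queue[:]
            self.write_queue.clear()
            self.last_send = now
            self.backend_available = False
            return flushed
        return []
```
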
Regarding claim 2, Vishne teaches:
The method of claim 1, further including determining whether unrelated requests have elapsed (See [0002] “The data storage device uses a controller having a flash interface module (FIM) to write the data to the memory device of the data storage device and deliver an indication to the host device that the write command was successful. To retrieve data, the host device sends a read command to the data storage device to read data from the memory device. The data storage device executes the read command and delivers an indication to the host device that the read was successful.” The system may determine whether previous/unrelated requests have completed/elapsed.).
Regarding claim 3, Vishne teaches:
The method of claim 1, further including determining whether sufficient time has elapsed beyond the predetermined threshold based on one or more measurements of past requests (See [0050] “adaptively adjusting timing of sending the commands based upon determining whether the memory means is busy. The adaptively adjusting timing comprises increasing an amount of time between sending commands upon determining that the memory means is busy and decreasing the amount of time between sending commands upon determining the memory means is not busy. The decreasing comprises decreasing the amount of time by a value equal to: (current amount of time between sending commands—mean of previous amounts of time between sending commands)/X, where “X” is a target for late predictions per every early prediction.” See Figure 8 in view of [0050], in which the predetermined timer is set based on previous amounts of time between sending previous commands.)
Regarding claim 4, Vishne teaches:
The method of claim 3, wherein the one or more measurements include a moving average of a request rate (See [0050] “adaptively adjusting timing of sending the commands based upon determining whether the memory means is busy. The adaptively adjusting timing comprises increasing an amount of time between sending commands upon determining that the memory means is busy and decreasing the amount of time between sending commands upon determining the memory means is not busy. The decreasing comprises decreasing the amount of time by a value equal to: (current amount of time between sending commands—mean of previous amounts of time between sending commands)/X, where “X” is a target for late predictions per every early prediction.” The “mean of previous amounts of time between sending commands” corresponds to the claimed “average request rate”.).
Regarding claim 5, Vishne teaches:
The method of claim 4, wherein the moving average of the request rate is computed within a variable time window (See [0050] “adaptively adjusting timing of sending the commands based upon determining whether the memory means is busy. The adaptively adjusting timing comprises increasing an amount of time between sending commands upon determining that the memory means is busy and decreasing the amount of time between sending commands upon determining the memory means is not busy. The decreasing comprises decreasing the amount of time by a value equal to: (current amount of time between sending commands—mean of previous amounts of time between sending commands)/X, where “X” is a target for late predictions per every early prediction.” The timing of sending commands may be adjusted based on a current amount of time between sending commands and a mean of previous amounts of time between sending commands. These timings are constantly being adjusted/changed, thereby creating a variable time window.).
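For illustration only, and forming no part of the rejection, the adaptive interval adjustment quoted from Vishne [0050] in the mappings for claims 3-5 may be expressed as follows. Vishne [0050] specifies only the decrease formula, (current − mean)/X; the symmetric increase step shown here is an assumption for illustration, as is every identifier:

```python
def adjust_interval(current, history, busy, x):
    """Sketch of the adaptive command-interval adjustment of Vishne [0050].

    Decrease the interval by (current - mean(history)) / X when the memory
    device is not busy, where X is the target number of late predictions
    per early prediction. The increase step when busy is an assumed
    placeholder; Vishne [0050] does not specify its magnitude.
    """
    mean = sum(history) / len(history)   # mean of previous intervals
    if busy:
        return current + (current - mean) / x   # assumed symmetric step
    return current - (current - mean) / x       # per the quoted formula
```

For example, with a current interval of 10 time units, a history averaging 8, and X = 2, a not-busy determination decreases the interval by (10 − 8)/2 = 1, yielding 9.
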
Claim 15 is rejected for the same reasons as claim 1. Claim 16 is rejected for the same reasons as claim 2. Claim 17 is rejected for the same reasons as claim 3. Claim 18 is rejected for the same reasons as claim 4. Claim 19 is rejected for the same reasons as claim 5.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6-7 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Vishne in view of Frey (U.S. Patent No. 6,725,392).
Regarding claim 6, Frey teaches:
The method of claim 1, further including reading old data when the backend memory is available (See Col. 24 lines 51-53 “Then this delta is sent to the data IOM and the new data is reconstructed in the cache of the data IOM by XORing the delta with the old data (read by using the old disk address).” See claim 5 of Frey “(a1) issuing a read command to read an old data block corresponding to the new data block if the old data block is not already in a first buffer in the first IOM, the old data block having a first location in a meta-data structure for the distributed file system that contains an old disk address for the old data block;”).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to combine the command processing method of Vishne with the RAID management system of Frey to enhance storage capabilities by implementing redundant storage drives for improved performance, data redundancy, and increased capacity.
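For illustration only, and forming no part of the rejection, the delta-based reconstruction quoted from Frey (col. 24) in the mapping for claim 6 rests on the standard XOR identity: the writer computes delta = old XOR new, and the receiver recovers new = old XOR delta. A minimal sketch, with all names hypothetical:

```python
def xor_blocks(a, b):
    # Bytewise XOR of two equal-length data blocks.
    return bytes(x ^ y for x, y in zip(a, b))

# Delta-based update per Frey (col. 24): the new data is reconstructed
# by XORing the delta with the old data (read via the old disk address).
old_block = b"\x0f\x0f"
new_block = b"\xf0\x0f"
delta = xor_blocks(old_block, new_block)
reconstructed = xor_blocks(old_block, delta)   # equals new_block
```
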
Regarding claim 7, Frey teaches:
The method of claim 6, further including computing and storing a partial parity in the buffer cache (See Col. 19 lines 21-23 “For partial writes and partial parity updates, a second cache buffer (requiring a new disk address only if it would be required for a full buffer update) must be used.”).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to combine the command processing method of Vishne with the RAID management system of Frey to avoid read-before-write of data by using partial parity updates, which would prevent data inconsistency and improve performance and scalability (See Col. 19, lines 47-50 of Frey).
Claim 20 is rejected for the same reasons as claim 6.
Claims 8-12 are rejected under 35 U.S.C. 103 as being unpatentable over Vishne in view of Li et al. (hereinafter Li; U.S. Publication No. 2019/0163409).
Regarding claim 8, Li teaches:
The method of claim 1, wherein the memory component is a RAID stripe (See Figures 1A-1E, which depict RAID stripes.).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to combine the command processing method of Vishne with the writing method for a RAID storage system of Li to improve read/write management for RAID system when in a degraded mode, by reducing the number of times of reading/writing the storage disk while ensuring data consistency during a write operation, thereby reducing the response time of the write operation with respect to the disk array group in the degraded mode. Such combination can improve an overall throughput while the disk array group is in the degraded mode (See [0035] of Li).
Regarding claim 9, Li teaches:
The method of claim 8, further including waiting until data or a parity cache entry is to be victimized (See [0007] “The method further comprises: in response to receiving from the disk array a first indication that the to-be-written data has been written into the at least one cache page, marking the at least one cache page as to-be-flushed.”).
Regarding claim 10, Li teaches:
The method of claim 8, further including initiating victimization when the RAID stripe has been written and the backend memory is available (See [0007] “The method further comprises: in response to receiving from the disk array a first indication that the to-be-written data has been written into the at least one cache page, marking the at least one cache page as to-be-flushed. In addition, the method further comprises: sending to the disk array a second indication that the to-be-written data can be flushed to the at least one disk array group, so that the to-be-written data in the at least one cache page marked as to-be-flushed is flushed into the at least one storage block.”).
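For illustration only, and forming no part of the rejection, the two-indication flush sequence quoted from Li [0007] in the mappings for claims 9-10 (mark a cache page as to-be-flushed upon the first indication, then flush only marked pages upon the second) may be sketched as follows; all names are hypothetical:

```python
class CachePage:
    """Hypothetical cache page with a to-be-flushed mark (cf. Li [0007])."""

    def __init__(self, data):
        self.data = data
        self.to_be_flushed = False

def on_first_indication(page):
    # First indication: disk array reports the data written into the
    # cache page, so mark the page as to-be-flushed.
    page.to_be_flushed = True

def flushable(pages):
    # Second indication: only pages marked to-be-flushed are flushed
    # (victimized) to the storage blocks.
    return [p for p in pages if p.to_be_flushed]
```
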
Regarding claim 11, Li teaches:
The method of claim 8, further including reading an original parity from the backend memory and computing a new stripe parity (See abstract “The method further comprises: determining new parity information associated with the new data based on the old data, the old parity information and the new data.”).
Regarding claim 12, Li teaches:
The method of claim 11, further including writing both the original parity and new data sectors to the backend memory (See abstract “this method further comprises: flushing the new data and the new parity information into the data block and the parity block in the at least one disk array group, respectively.” See [0059] “FIG. 7 illustrates a schematic diagram of a process 700 of the cache component 210 flushing a dirty page according to embodiments of the present disclosure.”).
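For illustration only, and forming no part of the rejection, the parity computation quoted from Li's abstract in the mappings for claims 11-12 (new parity determined from the old data, the old parity, and the new data) corresponds to the standard read-modify-write identity new_parity = old_parity XOR old_data XOR new_data. A minimal sketch, with all names hypothetical:

```python
def xor(a, b):
    # Bytewise XOR of two equal-length blocks.
    return bytes(x ^ y for x, y in zip(a, b))

def rmw_parity(old_data, old_parity, new_data):
    # Read-modify-write parity update (cf. Li, abstract): remove the old
    # data's contribution from the parity, then add the new data's.
    return xor(xor(old_parity, old_data), new_data)
```

As a check, for a three-block stripe with parity d0 XOR d1 XOR d2, updating d1 via rmw_parity yields the same result as recomputing the parity from scratch.
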
Claims 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Vishne in view of Li, and further in view of Jones (U.S. Patent No. 5,506,977).
Regarding claim 13, Jones teaches:
The method of claim 8, further including reading unwritten data sectors from the backend memory and computing a new stripe parity (See abstract “The present invention operates during writes to K blocks of a stripe where K is less than N, i.e., a partial stripe write. If K is greater than (N-1)/2, the N-K unwritten blocks are read in order to compute the new parity information before the actual write take place.”).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to combine the command processing method of Vishne and the writing method for a RAID storage system of Li with the stripe write method of Jones to minimize the number of reads required to compute new parity information when performing partial stripe write operations (See abstract of Jones).
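For illustration only, and forming no part of the rejection, the rule quoted from Jones' abstract in the mapping for claim 13 (for a partial write of K of N stripe blocks, read the N−K unwritten blocks when K > (N−1)/2) may be sketched as a read-count comparison. The alternative branch (read the K old data blocks plus the old parity) is an assumption consistent with standard RAID-5 read-modify-write practice, not a quotation from Jones:

```python
def blocks_to_read(n, k):
    """Reads needed to compute new parity for a K-of-N partial stripe write.

    Per the rule quoted from Jones' abstract: if K > (N-1)/2, read the
    N-K unwritten blocks (reconstruct write). Otherwise read the K old
    data blocks plus the old parity (read-modify-write; assumed here,
    consistent with standard RAID-5 practice).
    """
    if k > (n - 1) / 2:
        return n - k      # the unwritten blocks
    return k + 1          # old data blocks plus old parity
```

For example, with N = 5 and K = 3, the rule reads only the 2 unwritten blocks rather than the 4 blocks a read-modify-write would touch.
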
Regarding claim 14, Li teaches:
The method of claim 13, further including writing both the new parity and new data sectors to the backend memory (See abstract “this method further comprises: flushing the new data and the new parity information into the data block and the parity block in the at least one disk array group, respectively.”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL L WESTBROOK whose telephone number is (571)270-5028. The examiner can normally be reached Mon-Fri 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald Bragdon can be reached at (571) 272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL L WESTBROOK/Examiner, Art Unit 2139
/REGINALD G BRAGDON/Supervisory Patent Examiner, Art Unit 2139