Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-11 and 13-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3-5, 7-11, 13-14, 17-18, and 20 of copending Application No. 18/945,120 (US 2025/0272001) (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the reference application are an obvious variation of the claims in the instant application. The claims of the instant application and the claims of the reference application are compared in the table below.
Instant Application
Reference Application (18/945,120)
A method comprising:
detecting, by a storage system, a write command that initiates a data storage operation, wherein the data storage operation includes processing data via a data storage path from intake of the data into the storage system to storing the data in a storage device of the storage system, the data storage path comprising at least a first processing stage and a second processing stage;
generating, by the storage system and based on a first intermediate representation of the data produced by the first processing stage, a checksum;
verifying, by the storage system prior to the second processing stage producing a second intermediate representation of the data, the checksum; and
directing, by the storage system and based on the verifying the checksum, the second processing stage to produce the second intermediate representation of the data based on the first intermediate representation of the data.
The method of claim 1, wherein the verifying the checksum comprises:
generating an additional instance of the checksum based on the first intermediate representation; and
comparing the additional instance of the checksum to the checksum.
The method of claim 5, further comprising:
determining, based on the comparing, that the additional instance of the checksum is different from the checksum; and
directing, based on the determining that the additional instance of the checksum is different from the checksum, the first processing stage to generate an additional instance of the first intermediate representation of the data.
The method of claim 5, further comprising:
determining, based on the comparing, that the additional instance of the checksum is different from the checksum; and
applying, based on the determining that the additional instance of the checksum is different from the checksum, an error correcting algorithm to the first intermediate representation of the data.
The method of claim 1, further comprising:
detecting a read command that initiates a data retrieval operation, wherein the data retrieval operation includes processing the data via a data retrieval path from the storage device to output of the data from the storage system, the data retrieval path comprising at least a third processing stage that corresponds to the second processing stage of the data storage path and a fourth processing stage that corresponds to the first processing stage of the data storage path;
generating, based on the second intermediate representation of the data, an additional checksum;
verifying, prior to the third processing stage producing an additional instance of the first intermediate representation of the data, the additional checksum;
directing, based on the verifying the additional checksum, the third processing stage to produce the additional instance of the first intermediate representation of the data based on the second intermediate representation of the data;
generating, based on the additional instance of the first intermediate representation of the data, an additional instance of the checksum;
verifying, prior to the fourth processing stage producing an additional instance of the data, the additional instance of the checksum; and
providing, based on the verifying the additional instance of the checksum, the additional instance of the data as an output to the read command.
A method comprising:
detecting, by a storage system, a data access operation that processes data via a data path between a client of the storage system and a storage device of the storage system for either storing or reading of the data, the data path including at least a first processing stage and a second processing stage;
generating, by the storage system based on the data access operation, a first instance of a first checksum at a first time based on a first intermediate representation of the data produced by the first processing stage;
generating, by the storage system prior to the second processing stage producing a second intermediate representation of the data, a second instance of the first checksum at a second time subsequent to the first time and based on the first intermediate representation of the data;
modifying, by the storage system and based on the second checksum being different from the first checksum, the first intermediate representation of the data to generate a corrected first intermediate representation of the data;
generating, by the storage system and based on the corrected first intermediate representation of the data, a third checksum; and
directing, by the storage system and based on verifying that the third checksum matches the first checksum, the second processing stage to generate the second intermediate representation of the data based on the corrected first intermediate representation of the data.
The method of claim 1, further comprising: generating, based on the second intermediate representation and the third checksum, a fourth checksum; and verifying, the second intermediate representation using the fourth checksum to generate another instance of the third checksum.
The method of claim 1, wherein the processing the data comprises a transforming of the data.
The method of claim 1, wherein the processing the data comprises a transforming of the data.
The method of claim 2, wherein the transforming comprises at least one of compressing the data, merging the data, splitting the data, encrypting the data, or generating erasure codes for the data.
The method of claim 7, wherein the transforming comprises at least one of compressing the data, merging the data, splitting the data, encrypting the data, or generating erasure codes for the data.
The method of claim 1, wherein the processing the data comprises a transmission of the data from a first component of the storage system to a second component of the storage system.
The method of claim 1, wherein the processing the data comprises a transmission of the data from a first component of the storage system to a second component of the storage system.
The method of claim 1, wherein the generating the checksum is performed in conjunction with a generating of the first intermediate representation at the first processing stage.
The method of claim 1, wherein the generating the checksum is performed in conjunction with a generating of the first intermediate representation at the first processing stage.
The method of claim 1, wherein: the data storage path consists of a plurality of processing stages including the first processing stage and the second processing stage; and the method further comprises: generating at each processing stage of the plurality of stages, a respective checksum; and verifying, prior to proceeding to a subsequent processing stage, the respective checksum.
The method of claim 1, wherein the data path consists of a plurality of processing stages including the first processing stage and the second processing stage; and the method further comprises: generating at each processing stage of the plurality of stages, a respective checksum; and verifying, prior to proceeding to a subsequent processing stage, the respective checksum.
The method of claim 9, wherein the generating the respective checksum at each processing stage is performed in conjunction with processing the data at each processing stage.
The method of claim 3, wherein the generating the respective checksum at each processing stage is performed in conjunction with the processing the data at each processing stage.
A system comprising: a memory storing instructions; and one or more processors communicatively coupled to the memory and configured to execute the instructions to perform a process comprising: detecting a write command that initiates a data storage operation, wherein the data storage operation includes processing data via a data storage path from intake of the data into a storage system to storing the data in a storage device of the storage system, the data storage path comprising at least a first processing stage and a second processing stage; generating, based on a first intermediate representation of the data produced by the first processing stage, a checksum; verifying, prior to the second processing stage producing a second intermediate representation of the data, the checksum; and directing, based on the verifying the checksum, the second processing stage to produce the second intermediate representation of the data based on the first intermediate representation of the data.
The system of claim 13, the process further comprising: detecting a read command that initiates a data retrieval operation, wherein the data retrieval operation includes processing the data via a data retrieval path from the storage device to output of the data from the storage system, the data retrieval path comprising at least a third processing stage that corresponds to the second processing stage of the data storage path and a fourth processing stage that corresponds to the first processing stage of the data storage path; generating, based on the second intermediate representation of the data, an additional checksum; verifying, prior to the third processing stage producing an additional instance of the first intermediate representation of the data, the additional checksum; directing, based on the verifying the additional checksum, the third processing stage to produce the additional instance of the first intermediate representation of the data based on the second intermediate representation of the data; generating, based on the additional instance of the first intermediate representation of the data, an additional instance of the checksum; verifying, prior to the fourth processing stage producing an additional instance of the data, the additional instance of the checksum; and providing, based on the verifying the additional instance of the checksum, the additional instance of the data as an output to the read command.
A system comprising: a memory storing instructions; and one or more processors communicatively coupled to the memory and configured to execute the instructions to perform a process comprising: detecting a data access operation that processes data via a data path between a client of a storage system and a storage device of the storage system for either storing or reading of the data, the data path including at least a first processing stage and a second processing stage; generating, based on the data access operation, a first instance of a first checksum at a first time based on a first intermediate representation of the data produced by the first processing stage; generating, prior to the second processing stage producing a second intermediate representation of the data, a second instance of the first checksum at a second time subsequent to the first time and based on the first intermediate representation of the data; modifying, based on the second checksum being different from the first checksum, the first intermediate representation of the data to generate a corrected first intermediate representation of the data; generating, based on the corrected first intermediate representation of the data, a third checksum; and directing, based on verifying that the third checksum matches the first checksum, the second processing stage to generate the second intermediate representation of the data based on the corrected first intermediate representation of the data.
The system of claim 13, wherein the processing the data comprises at least one of compressing the data, merging the data, splitting the data, encrypting the data, or generating erasure codes for the data.
The system of claim 13, wherein the processing the data comprises a transmission of the data from a first component of the storage system to a second component of the storage system.
The system of claim 11, wherein the processing the data comprises at least one of compressing the data, merging the data, splitting the data, encrypting the data, generating erasure codes for the data, or transmitting the data from a first component of the storage system to a second component of the storage system.
The system of claim 13, wherein the generating the checksum is performed in conjunction with a generating of the first intermediate representation at the first processing stage.
The system of claim 13, wherein the generating the respective checksum at each processing stage is performed in conjunction with the processing the data at each processing stage.
The system of claim 16, wherein: the data storage path consists of a plurality of processing stages including the first processing stage and the second processing stage; and the process further comprises: generating at each processing stage of the plurality of stages, a respective checksum; and verifying, prior to proceeding to a subsequent processing stage, the respective checksum.
The system of claim 11, wherein the data path consists of a plurality of processing stages including the first processing stage and the second processing stage; and the process further comprises: generating at each processing stage of the plurality of stages, a respective checksum; and verifying, prior to proceeding to a subsequent processing stage, the respective checksum.
A computer program product embodied in a non-transitory computer readable storage medium and comprising computer instructions for: detecting a write command that initiates a data storage operation, wherein the data storage operation includes processing data via a data storage path from intake of the data into a storage system to storing the data in a storage device of the storage system, the data storage path comprising at least a first processing stage and a second processing stage; generating, based on a first intermediate representation of the data produced by the first processing stage, a checksum; verifying, prior to the second processing stage producing a second intermediate representation of the data, the checksum; and directing, based on the verifying the checksum, the second processing stage to produce the second intermediate representation of the data based on the first intermediate representation of the data.
The computer program product of claim 19, further comprising computer instructions for: detecting a read command that initiates a data retrieval operation, wherein the data retrieval operation includes processing the data via a data retrieval path from the storage device to output of the data from the storage system, the data retrieval path comprising at least a third processing stage that corresponds to the second processing stage of the data storage path and a fourth processing stage that corresponds to the first processing stage of the data storage path; generating, based on the second intermediate representation of the data, an additional checksum; verifying, prior to the third processing stage producing an additional instance of the first intermediate representation of the data, the additional checksum; directing, based on the verifying the additional checksum, the third processing stage to produce the additional instance of the first intermediate representation of the data based on the second intermediate representation of the data; generating, based on the additional instance of the first intermediate representation of the data, an additional instance of the checksum; verifying, prior to the fourth processing stage producing an additional instance of the data, the additional instance of the checksum; and providing, based on the verifying the additional instance of the checksum, the additional instance of the data as an output to the read command.
A computer program product embodied in a non-transitory computer readable storage medium and comprising computer instructions for: detecting a data access operation that processes data via a data path between a client of a storage system and a storage device of the storage system for either storing or reading of the data, the data path including at least a first processing stage and a second processing stage; generating, based on the data access operation, a first instance of a first checksum at a first time based on a first intermediate representation of the data produced by the first processing stage; generating, prior to the second processing stage producing a second intermediate representation of the data, a second instance of the first checksum at a second time subsequent to the first time and based on the first intermediate representation of the data; modifying, based on the second checksum being different from the first checksum, the first intermediate representation of the data to generate a corrected first intermediate representation of the data; generating, based on the corrected first intermediate representation of the data, a third checksum; and directing, based on verifying that the third checksum matches the first checksum, the second processing stage to generate the second intermediate representation of the data based on the corrected first intermediate representation of the data.
The computer program product of claim 18, further comprising computer instructions for: generating, based on the second intermediate representation and the third checksum, a fourth checksum; and verifying, the second intermediate representation using the fourth checksum to generate another instance of the third checksum.
Regarding claims 1, 5, 6, 7, and 11, claim 1 of the reference application teaches a method including detecting a data access operation, processing data via a data path comprising at least a first processing stage and a second processing stage, generating a checksum based on a first intermediate representation of the data produced by the first processing stage, verifying the checksum prior to the second processing stage, and directing the second processing stage to generate a second intermediate representation based on the verification.
The instant application further teaches generating additional instances of the checksum and correcting the intermediate representation upon detecting a mismatch. These additional limitations are taught by claim 5 of the reference application, which teaches generating an additional checksum instance based on an intermediate representation and verifying the intermediate representation using the generated checksum. Therefore, the subject matter of claims 1, 5, 6, 7, and 11 of the instant application is not patentably distinct from claims 1 and 5 of the reference application.
One having ordinary skill in the art would have been motivated to modify the method of claim 1 of the reference application to include the additional checksum generation and verification steps recited in the instant claims in order to further ensure data integrity across processing stages, as taught by claim 5 of the reference application.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
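For illustration only (not part of the claim comparison), the staged write-path checksum flow common to claim 1 of both applications can be sketched as follows. The stage functions and the SHA-256 checksum routine are hypothetical stand-ins; neither claim fixes a particular checksum algorithm or set of processing stages:

```python
import hashlib
from typing import Callable, List

def make_checksum(rep: bytes) -> str:
    # Hypothetical checksum routine; the claims recite only "a checksum".
    return hashlib.sha256(rep).hexdigest()

def write_path(data: bytes, stages: List[Callable[[bytes], bytes]]) -> bytes:
    rep = data
    recorded = make_checksum(rep)
    for stage in stages:
        # Verify the prior intermediate representation before this stage
        # produces the next one, tracking claim 1's "verifying ... prior
        # to the second processing stage producing a second intermediate
        # representation of the data".
        if make_checksum(rep) != recorded:
            raise ValueError("checksum mismatch before processing stage")
        rep = stage(rep)               # next intermediate representation
        recorded = make_checksum(rep)  # checksum generated alongside it
    return rep
```

A compression or encryption step would be a typical stage function in such a pipeline, consistent with the dependent claims reciting compressing, merging, splitting, encrypting, or erasure coding.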
Regarding claim 2, claim 7 of the reference application teaches the same limitations as claim 2 of the instant application.
Regarding claim 3, claim 8 of the reference application teaches the same limitations as claim 3 of the instant application.
Regarding claim 4, claim 9 of the reference application teaches the same limitations as claim 4 of the instant application.
Regarding claim 8, claim 10 of the reference application teaches the same limitations as claim 8 of the instant application.
Regarding claim 9, claim 3 of the reference application teaches the same limitations as claim 9 of the instant application, with the only difference being that the instant application recites “data storage path” whereas the reference application recites “data path.” Both claims require a plurality of processing stages in which a respective checksum is generated at each stage and verified prior to proceeding to a subsequent stage. The difference in terminology does not render the claims patentably distinct.
Regarding claim 10, claim 4 of the reference application teaches the same limitations as claim 10 of the instant application.
Regarding claims 13 and 18, claim 11 of the reference application teaches a system of detecting a data access operation, staged processing of data along a data path, generating checksums based on intermediate representations, verifying the checksums prior to producing subsequent representations, and providing verified data as output.
The instant application teaches corresponding system and computer program implementations of the same data path checksum verification framework. Therefore, the subject matter of claims 13 and 18 of the instant application is not patentably distinct from claim 11 of the reference application.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Regarding claims 14 and 15, claim 17 of the reference application teaches a system in which processing the data comprises at least one of compressing the data, merging the data, splitting the data, encrypting the data, generating erasure codes for the data, or transmitting the data from a first component of the storage system to a second component of the storage system.
Claims 14 and 15 teach the same limitations as claim 17 of the reference application. Therefore, the subject matter of claims 14 and 15 of the instant application is not patentably distinct from claim 17 of the reference application.
Regarding claim 16, claim 14 of the reference application teaches a system in which generating the respective checksum at each processing stage is performed in conjunction with processing the data at each processing stage. Claim 16 of the instant application recites a subset of this same functionality, specifically generating the checksum in conjunction with generating the first intermediate representation at the first processing stage.
Therefore, the subject matter of claim 16 of the instant application is not patentably distinct from claim 14 of the reference application.
Regarding claim 17, claim 13 of the reference application teaches the same limitations as claim 17 of the instant application, with the only difference being that the instant application recites “data storage path” whereas the reference application recites “data path.” Both claims require a plurality of processing stages in which a respective checksum is generated at each stage and verified prior to proceeding to a subsequent stage. The difference in terminology does not render the claims patentably distinct.
Regarding claim 19, claim 18 of the reference application teaches a computer program in a non-transitory computer readable storage medium for detecting a data access operation, processing data via a data path including first and second processing stages, generating a checksum based on a first intermediate representation, verifying the checksum prior to producing a subsequent intermediate representation, and directing subsequent processing based on the verification.
Claim 19 of the instant application teaches a computer program implementing a write-path subset of this same staged checksum verification framework. Therefore, the subject matter of claim 19 of the instant application is not patentably distinct from claim 18 of the reference application.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Regarding claim 20, claims 18 and 20 of the reference application teach a computer program implementing staged processing along a read path, including generating checksums based on intermediate representations, verifying the checksums prior to producing earlier-stage representations, and providing verified data as output.
Claim 20 of the instant application teaches a computer program implementing read path checksum verification using intermediate representations. Therefore, the subject matter of claim 20 of the instant application is not patentably distinct from claims 18 and 20 of the reference application.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
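For illustration only, the mismatch-and-correct flow recited in claim 1 of the reference application (a later second instance of the first checksum differs, the representation is corrected, and a third checksum is verified against the first before the second processing stage proceeds) can be sketched as follows. The `correct` callable is a hypothetical stand-in for whatever error-correction or regeneration step is used:

```python
import hashlib
from typing import Callable

def make_checksum(rep: bytes) -> str:
    # Hypothetical checksum routine; the claims do not fix an algorithm.
    return hashlib.sha256(rep).hexdigest()

def verify_or_correct(rep: bytes, first_checksum: str,
                      correct: Callable[[bytes], bytes]) -> bytes:
    # Second instance of the first checksum, generated at a later time
    # over the same intermediate representation.
    second = make_checksum(rep)
    if second == first_checksum:
        return rep
    # Mismatch: modify the representation to obtain a corrected copy,
    # then generate a third checksum and verify it matches the first
    # before the second processing stage is allowed to consume it.
    corrected = correct(rep)
    third = make_checksum(corrected)
    if third != first_checksum:
        raise ValueError("correction failed; third checksum mismatch")
    return corrected
```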
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Brinicombe et al. (US 2015/0301964), hereinafter Brinicombe, in view of Yang et al. (US 8,255,763), hereinafter Yang.
Regarding claim 1, Brinicombe teaches a method comprising:
detecting, by a storage system, a write command that initiates a data storage operation, wherein the data storage operation includes processing data via a data storage path from intake of the data into the storage system to storing the data in a storage device of the storage system (Brinicombe, Fig. 3B teaches a write ingest pipeline that handles write operations, where data flows through an ingest pipeline prior to reaching storage);
generating, by the storage system and based on a first intermediate representation of the data produced by the first processing stage, a checksum (Brinicombe, Fig. 4, step 406; para. [0042], lines 12-13, “During the write pipeline, checksums can be verified”; para. [0045], lines 22-24, “Example data processing steps can include: CRC generation; secure hash generation… checksum generation”; para. [0080], lines 1-7, “An example of using checksums for maintaining de-duplication database and/or parity fault location is now provided. Checksums can be used for several different purposes in various embodiments [e.g. de-duplication, read verification, etc.]. In a de-duplication example, a cryptographic hash [e.g. SHA-256] can be computed for every user data block for each write”);
verifying, by the storage system prior to the second processing stage producing a second intermediate representation of the data, the checksum (Brinicombe, claim 9, “The data-plane architecture of claim 8, wherein the write pipeline moves the data from the write/ingest memory to the write/emit memory, and wherein during the write pipeline checksums are verified and the data is encrypted”); and
directing, by the storage system and based on the verifying the checksum, the second processing stage to produce the second intermediate representation of the data based on the first intermediate representation of the data (Brinicombe, para. [0082] teaches verifying data using stored checksums maintained in a checksum database, where the checksum database serves as an authoritative source for comparison).
Brinicombe fails to explicitly teach the data storage path comprising at least a first processing stage and a second processing stage; and intermediate representations of data.
However, Yang, in an analogous art, teaches the data storage path comprising at least a first processing stage and a second processing stage, as well as intermediate representations of data (Yang, Fig. 4A teaches a multi-stage processing pipeline with sequential stages).
Brinicombe and Yang are both considered to be analogous to the claimed invention because both are in the same field of multi-stage memory systems.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Brinicombe to incorporate the teachings of Yang by including the functionality of having staged processing and intermediate representations of data.
The suggestion/motivation for doing so would be to ensure that subsequent processing stages operate correctly on verified data.
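For clarity of record, the verify-before-proceed flow mapped above can be illustrated with a minimal sketch (hypothetical function names; compression and length-prefix framing stand in for the stages and do not reproduce either reference's implementation): a first stage produces an intermediate representation together with a checksum, the checksum is verified, and only then does a second stage produce its representation.

```python
import zlib

def stage1(data: bytes) -> tuple[bytes, int]:
    # First processing stage: produce the first intermediate
    # representation and a checksum generated from it.
    first = zlib.compress(data)
    return first, zlib.crc32(first)

def stage2(first: bytes) -> bytes:
    # Second processing stage: produce the second intermediate
    # representation (length-prefix framing stands in for, e.g.,
    # encryption or RAID encoding).
    return len(first).to_bytes(4, "big") + first

def write_path(data: bytes) -> bytes:
    first, checksum = stage1(data)
    # Verify before the second stage produces its representation.
    if zlib.crc32(first) != checksum:
        raise IOError("first intermediate representation failed verification")
    return stage2(first)
```

In this sketch, a corrupted first intermediate representation is caught before the second stage runs on it, which is the behavior the combination is relied upon to teach.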
Regarding claim 2, the combination of Brinicombe in view of Yang teaches the method of claim 1, wherein the processing the data comprises a transforming of the data (Brinicombe, para. [0055], lines 7-9, “data processing steps can be limited to standard storage operations and systems [e.g. for RAID, compression, de-duplication, encryption, and the like]”).
Regarding claim 3, the combination of Brinicombe in view of Yang teaches the method of claim 2, wherein the transforming comprises at least one of compressing the data, merging the data, splitting the data, encrypting the data, or generating erasure codes for the data (Brinicombe, para. [0055], lines 7-9, “data processing steps can be limited to standard storage operations and systems [e.g. for RAID, compression, de-duplication, encryption, and the like]”).
Regarding claim 4, the combination of Brinicombe in view of Yang teaches the method of claim 1, wherein the processing the data comprises a transmission of the data from a first component of the storage system to a second component of the storage system (Brinicombe, para. [0053], lines 10-11, “movement of data between pipeline steps can be automated by built-in micro-sequencers to save embedded CPU load”; para. [0075], lines 10-12, “These reverse references can be used to allow for physical data movement within the storage array”).
Regarding claim 5, the combination of Brinicombe in view of Yang teaches the method of claim 1, wherein the verifying the checksum comprises: generating an additional instance of the checksum based on the first intermediate representation; and comparing the additional instance of the checksum to the checksum (Brinicombe, para. [0081], lines 1-8, “an additional smaller checksum can be computed [e.g. substantially simultaneously with hash message authentication code (HMAC or other cryptographic hash)]. This checksum can be held in memory. By holding the checksum in memory, the checksum can be available so every read computes the same checksum. A comparison can be performed in order to detect transient read errors for the storage devices”).
Regarding claim 6, the combination of Brinicombe in view of Yang teaches the method of claim 5, further comprising: determining, based on the comparing, that the additional instance of the checksum is different from the checksum; and directing, based on the determining that the additional instance of the checksum is different from the checksum, the first processing stage to generate an additional instance of the first intermediate representation of the data (Brinicombe, para. [0081], lines 8-10, “A comparison can be performed in order to detect transient read errors for the storage devices. A failure can result in the data being re-read from the array and/or reconstruction of the data using parity on the redundant”).
Regarding claim 7, the combination of Brinicombe in view of Yang teaches the method of claim 5, further comprising:
determining, based on the comparing, that the additional instance of the checksum is different from the checksum; and
applying, based on the determining that the additional instance of the checksum is different from the checksum, an error correcting algorithm to the first intermediate representation of the data (Brinicombe, para. [0081], lines 8-10, “A comparison can be performed in order to detect transient read errors for the storage devices. A failure can result in the data being re-read from the array and/or reconstruction of the data using parity on the redundant;” para. [0045], lines 37-43, “Example data encoding for redundancy implementations can include: mirroring [e.g. copying of data]: single parity [RAID-5], double parity [RAID-6] and triple parity encoding; generic M+N/(Cauchy)Reed-Solomon coding; and/or error correction codes such as Hamming codes, convolution codes, BCH codes, turbo codes, LDPC codes”).
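The two recovery paths mapped in claims 6 and 7 (regenerating the representation, or applying error correction) can be sketched as follows (hypothetical helper names and a CRC-style checksum are assumed; this is an illustration, not either reference's implementation):

```python
import zlib

def verify_or_recover(first: bytes, checksum: int, regenerate, max_retries: int = 2) -> bytes:
    # Recompute the checksum over the first intermediate representation
    # and compare it to the stored checksum; on mismatch, ask the first
    # stage to regenerate the representation. An error-correcting decode
    # could be substituted for regenerate() to model claim 7.
    for _ in range(max_retries + 1):
        if zlib.crc32(first) == checksum:
            return first
        first = regenerate()
    raise IOError("unrecoverable checksum mismatch")
```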
Regarding claim 8, the combination of Brinicombe in view of Yang teaches the method of claim 1, wherein the generating the checksum is performed in conjunction with a generating of the first intermediate representation at the first processing stage (Brinicombe, para. [0055], lines 7-9, “data processing steps can be limited to standard storage operations and systems [e.g. for RAID, compression, de-duplication, encryption, and the like]”; para. [0080], lines 5-7, “a cryptographic hash [e.g. SHA-256] can be computed for every user data block for each write”; the reference teaches a hash/checksum being computed during write processing, as well as processing and checksum generation occurring together in the pipeline).
Regarding claim 9, the combination of Brinicombe in view of Yang teaches the method of claim 1, wherein: the data storage path consists of a plurality of processing stages including the first processing stage and the second processing stage; and the method further comprises: generating at each processing stage of the plurality of stages, a respective checksum; and verifying, prior to proceeding to a subsequent processing stage, the respective checksum (Yang, Fig. 6B, block 662; the reference teaches multi-stage processing and verification before proceeding).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Brinicombe to incorporate the teachings of Yang by including the functionality of multi-stage processing and verification before proceeding.
The suggestion/motivation for doing so would be to ensure reliability of the processing stage before proceeding in the operation.
Regarding claim 10, the combination of Brinicombe in view of Yang teaches the method of claim 9, wherein the generating the respective checksum at each processing stage is performed in conjunction with processing the data at each processing stage (Yang, Fig. 6A teaches evaluating parity and reliability metrics before continuing the decoding process).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Brinicombe to incorporate the teachings of Yang by including the functionality of generating checksum at each stage, along with processing data.
The suggestion/motivation for doing so would be to ensure reliability of the processing stage before proceeding in the operation.
Regarding claim 11, the combination of Brinicombe in view of Yang teaches the method of claim 1, further comprising:
detecting a read command that initiates a data retrieval operation, wherein the data retrieval operation includes processing the data via a data retrieval path from the storage device to output of the data from the storage system (Brinicombe, Fig. 3B teaches a read pipeline that handles read operations, where data flows through the pipeline from the storage device toward output), the data retrieval path comprising at least a third processing stage that corresponds to the second processing stage of the data storage path and a fourth processing stage that corresponds to the first processing stage of the data storage path (Yang, Fig. 4A teaches a multi-stage processing pipeline with sequential stages);
generating, based on the second intermediate representation of the data, an additional checksum; verifying, prior to the third processing stage producing an additional instance of the first intermediate representation of the data, the additional checksum; directing, based on the verifying the additional checksum, the third processing stage to produce the additional instance of the first intermediate representation of the data based on the second intermediate representation of the data; generating, based on the additional instance of the first intermediate representation of the data, an additional instance of the checksum; verifying, prior to the fourth processing stage producing an additional instance of the data, the additional instance of the checksum (Brinicombe, para. [0081], lines 7-10, “A comparison can be performed in order to detect transient read errors for the storage devices. A failure can result in the data being re-read from the array and/or reconstruction of the data using parity on the redundant”); and providing, based on the verifying the additional instance of the checksum, the additional instance of the data as an output to the read command (Brinicombe, para. [0082], lines 1-4, “Multiple reads can be implemented to validate data. For example, when the system is running the checksum database can be used to allow the data for every read to be validated to catch transient and/or drive errors”; implies that validation occurs before data is provided to the requestor).
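The read-path mirroring mapped above can be illustrated with a minimal sketch (a hypothetical length-prefixed, compressed stored form is assumed for illustration; this is not Brinicombe's or Yang's actual format), in which a checksum is verified before each retrieval stage reverses the corresponding storage-path stage:

```python
import zlib

def read_path(stored: bytes, checksum: int) -> bytes:
    # Third stage (mirrors the second storage stage): recover the
    # first intermediate representation from the stored form.
    n = int.from_bytes(stored[:4], "big")
    first = stored[4:4 + n]
    # Verify before the fourth stage (mirroring the first storage
    # stage) reproduces the original data.
    if zlib.crc32(first) != checksum:
        raise IOError("read verification failed")
    return zlib.decompress(first)
```

A checksum mismatch at this point would trigger the re-read or parity reconstruction quoted from Brinicombe para. [0081] rather than releasing unverified data to the requestor.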
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Brinicombe to incorporate the teachings of Yang by including the functionality of having staged processing and intermediate representations of data.
The suggestion/motivation for doing so would be to ensure that subsequent processing stages operate correctly on verified data.
Regarding claim 12, the combination of Brinicombe in view of Yang teaches the method of claim 11, wherein: the checksum is stored in the storage system in response to the write command; and the verifying the additional instance of the checksum comprises comparing the additional instance of the checksum with the checksum (Brinicombe, para. [0081], lines 1-8, “an additional smaller checksum can be computed [e.g. substantially simultaneously with hash message authentication code (HMAC or other cryptographic hash)]. This checksum can be held in memory. By holding the checksum in memory, the checksum can be available so every read computes the same checksum. A comparison can be performed in order to detect transient read errors for the storage devices”).
Claim 13 is a system with limitations similar to the method of claim 1, and is rejected under the same rationale.
Claim 14 is a system with limitations similar to the method of claim 3, and is rejected under the same rationale.
Claim 15 is a system with limitations similar to the method of claim 4, and is rejected under the same rationale.
Claim 16 is a system with limitations similar to the method of claim 8, and is rejected under the same rationale.
Claim 17 is a system with limitations similar to the method of claim 9, and is rejected under the same rationale.
Claim 18 is a system with limitations similar to the method of claim 11, and is rejected under the same rationale.
Claim 19 is a computer program product with limitations similar to the method of claim 1, and is rejected under the same rationale.
Claim 20 is a computer program product with limitations similar to the method of claim 11, and is rejected under the same rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Beattie et al. (US 2016/0364289) teaches end-to-end error detection coding in a memory system that utilizes checksums for error verification.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRACE V BRADEN whose telephone number is (703)756-5381. The examiner can normally be reached Mon-Fri: 9AM-5:30 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Albert Decady can be reached at (571) 272-3819. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/G.V.B./Examiner, Art Unit 2112
/ALBERT DECADY/Supervisory Patent Examiner, Art Unit 2112