Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
EXAMINER’S NOTE: The claims have been reviewed and considered under the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG) issued January 7, 2019.
This is in response to Applicant’s claims filed on 21 October 2024. Claims 1-20 remain pending.
Information Disclosure Statement
The Information Disclosure Statements submitted on 21 October 2024 and 11 November 2024 have been considered by the Examiner.
Continued Prosecution Application
This application is a continuation of Serial No. 18/357,206, filed on 24 July 2023, now US Patent No. 12,124,716, issued on 22 October 2024; Serial No. 17/833,046, filed on 06 June 2022, now US Patent No. 11,709,603, issued on 25 July 2023; Serial No. 16/679,914, filed on 11 November 2019, now US Patent No. 11,354,049, issued on 07 June 2022; and Serial No. 15/581,369, filed on 28 April 2017, now US Patent No. 10,489,073.
Double Patenting
6. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Instant Application 18/921,156 vs. Issued US Patent No. 12,124,716

Claims 1-20 of the instant application are reproduced first below, followed by claims 1-20 of issued US Patent No. 12,124,716.
1. A method comprising: assigning a context, populated with an encryption key index, to an object for storage within a storage tier of a multi-tiered storage environment; storing the object, assigned the context, into the storage tier; and in response to receiving a request to access a data chunk within the object stored into the storage tier: identifying an encryption key for the object using the encryption key index; and utilizing the encryption key to decrypt the data chunk within the object to provide in response to the request.
2. The method of claim 1, comprising: in response to a determination that the context includes an unverified error indicator, designating the data chuck as being inconsistent.
3. The method of claim 1, comprising: in response to a determination that the context includes a pseudobad indicator that the data chunk had an error when being read from a storage location for generating the object, designating that the data chuck is inconsistent.
4. The method of claim 1, comprising: setting a wrecked indicator within the context of the object based upon forcefully corruption of the data chunk.
5. The method of claim 1, comprising: in response to a determination that the context indicates that an unverified RAID error occurred when the data chunk was being read from a storage location for generating the object, designating the data chuck as being inconsistent.
6. The method of claim 1, comprising: creating a checksum for both the data chunk and the context; populating the context with the checksum; and utilizing the checksum to verify the data chunk.
7. The method of claim 1, comprising: evaluating the context to identify a file block number for the data chunk; and utilizing the file block number to access the data chunk within the object.
8. The method of claim 1, comprising: creating the object to comprise a plurality of object pages; and populating data chunks into the plurality of object pages, wherein the data chunks correspond to data moved from a source storage tier to the storage tier.
9. The method of claim 1, comprising: creating the object to comprise a header; populating the header with a creation timestamp for the object; and utilizing the creation timestamp as part of verifying the data chunk.
10. A computing device comprising: a memory comprising machine executable code; and a processor coupled to the memory, the processor configured to execute the machine executable code to: assign a context, populated with a file block number for a data chunk, to an object for storage within a storage environment, wherein the data chunk is stored within the object; store the object, assigned the context, into the storage environment; and in response to receiving a request to access the data chunk within the object stored into the storage environment: evaluate the context to identify the file block number for the data chunk; and utilize the file block number to access the data chunk within the object to provide in response to the request.
11. The computing device of claim 10, wherein the machine executable code causes the computing device to: create the object to comprise a header; populate the header with creation timestamp for the object; and utilize the creation timestamp as part of verifying the data chunk.
12. The computing device of claim 10, wherein the machine executable code causes the computing device to: create the object to comprise a header; populate the header with a volume identifier of a volume storing data populated into object; and utilize the volume identifier as part of verifying the data chunk.
13. The computing device of claim 12, wherein the machine executable code causes the computing device to: create the object to comprise a header; populate the header with a hash of a name of the object and a volume identifier of a volume storing data populated into object; and utilize the hash as part of verifying the data chunk.
14. The computing device of claim 13, wherein the machine executable code causes the computing device to: execute an operation targeting the object to verify the hash.
15. The computing device of claim 10, wherein the machine executable code causes the computing device to: creating a checksum for both the data chunk and the context; populating the context with the checksum; and utilizing the checksum to verify the data chunk.
16. The computing device of claim 10, wherein the machine executable code causes the computing device to: create the object to comprise a header; populate the header with a buff tree universal identifier of a volume storing data populated into object; and utilize the buff tree universal identifier as part of verifying the data chunk.
17. The computing device of claim 10, wherein the machine executable code causes the computing device to: create the object to comprise a header; populate the header with an indicator as to whether the object is encrypted; and utilize the indicator to determine how to access the data chunk.
18. A non-transitory machine readable medium comprising instructions for performing a method, which when executed by a machine, causes the machine to: assigning a context, populated with an indicator that a data chunk had an error when being read from a storage location for generating an object, to the object for storage within a storage tier of a multi-tiered storage environment; storing the object, assigned the context, into the storage tier; and in response to receiving a request to access the data chunk within the object stored into the storage tier: evaluating the indicator to determine that the data chunk had the error; and designating the data chunk of the object as being inconsistent as a response to the request.
19. The non-transitory machine readable medium of claim 18, wherein the instructions cause the machine to: create the object to comprise a header; populate the header with a buff tree universal identifier of a volume storing data populated into object; and utilize the buff tree universal identifier as part of verifying the data chunk.
20. The non-transitory machine readable medium of claim 18, wherein the instructions cause the machine to: create the object to comprise a header; populate the header with an indicator as to whether the object is encrypted; and utilize the indicator to determine how to access the data chunk.
Claims of issued US Patent No. 12,124,716:
1. A method comprising: in response to determining that data of a first storage location is to be migrated to a second storage location, storing the data as data chunks within object pages of an object; populating the object with a header that comprises at least one of a version of the object, an indicator as to whether the object is encrypted, a creation timestamp for the object, a volume identifier of where the data was stored at the first storage location, and an identifier of a name of the object; storing the object within the second storage location; and reading the identifier within the header in order to determine that the object was successfully stored within the second storage location with non-corrupt data.
2. The method of claim 1, comprising: populating the object with a context that comprises an encryption key index; utilizing the encryption key index of the context to identify an encryption key; and decrypting the data chunks within the object using the encryption key.
3. The method of claim 1, comprising: populating the object with a context that comprises a pseudobad indicator; evaluating the pseudobad indicator to determine whether the data read from the first storage location had an error; and in response to determining that the data had the error, designating a data chunk of the object as being inconsistent.
4. The method of claim 3, comprising: populating the context within an indicator to indicate that a storage operating system marked the error that resulted in the pseudobad indicator.
5. The method of claim 1, comprising: populating the object with a context comprising an indicator to indicate that a RAID subsystem identified a disk error when reading the data from the first storage location for creating the object; and designating a data chunk of the object as being inconsistent based upon the disk error.
6. The method of claim 1, comprising: evaluating a context of the object to determine whether the context comprises an unverified error indicator; and in response to the context comprising the unverified error indicator, determining that a data chunk within the object is inconsistent due to an unverified RAID error occurring when the data was read from the first storage location for creating the object.
7. The method of claim 1, comprising: forcefully corrupting the data stored within the object; and in response to the data being corrupted, setting a wrecked indicator.
8. The method of claim 1, comprising: identifying a file block number for a data chunk; and populating the object with a context that comprises the file block number.
9. The method of claim 1, comprising: generating a checksum for a data chunk within the object; and populating the object with a context that comprises the checksum.
10. The method of claim 1, comprising: creating a context for the object; generating a checksum for the context and a data chunk within the object; and populating the context with the checksum.
11. A computing device comprising: a memory comprising machine executable code; and a processor coupled to the memory, the processor configured to execute the machine executable code to perform operations comprising: in response to determining that data of a first storage location is to be migrated to a second storage location, storing the data as data chunks within object pages of an object; populating the object with a header that comprises at least one of a version of the object, an indicator as to whether the object is encrypted, a creation timestamp for the object, a volume identifier of where the data was stored at the first storage location, and an identifier of a name of the object; storing the object within the second storage location; and reading the identifier within the header in order to determine that the object was successfully stored within the second storage location with non-corrupt data.
12. The computing device of claim 11, wherein the operations comprise: evaluating a property of a volume storing the data at the first storage location to determine that the data is backup data to be migrated; and utilizing a destination backup volume at the second storage location as a destination for storing the object based upon the destination backup volume having a backup property.
13. The computing device of claim 11, wherein the operations comprise: reading the data from the first storage location into a staging file; and destaging the data from the staging file into the object pages of the object as the data chunks.
14. The computing device of claim 11, wherein the operations comprise: in response to determining that the data is deduplicated, preserving deduplication of the data when the data is stored into the object pages of the object as the data chunks.
15. The computing device of claim 11, wherein the operations comprise: in response to determining that the data is compressed, preserving compression of the data when the data is stored into the object pages of the object as the data chunks.
16. The computing device of claim 11, wherein the operations comprise: assigning sequence numbers to objects stored within the second storage location, wherein a sequence number assigned to the object uniquely identifies the object.
17. A non-transitory machine readable medium comprising instructions for performing a method, which when executed by a machine, causes the machine to perform operations comprising: in response to determining that data of a first storage location is to be migrated to a second storage location, storing the data as data chunks within object pages of an object; populating the object with a header that comprises at least one of a version of the object, an indicator as to whether the object is encrypted, a creation timestamp for the object, a volume identifier of where the data was stored at the first storage location, and an identifier of a name of the object; storing the object within the second storage location; and reading the identifier within the header in order to determine that the object was successfully stored within the second storage location with non-corrupt data.
18. The non-transitory machine readable medium of claim 17, wherein the operations comprise: populating the object with a context that comprises an encryption key index; utilizing the encryption key index of the context to identify an encryption key; and decrypting the data chunks within the object using the encryption key.
19. The non-transitory machine readable medium of claim 17, wherein the operations comprise: populating the object with a context that comprises a pseudobad indicator; evaluating the pseudobad indicator to determine whether the data read from the first storage location had an error; and in response to determining that the data had the error, designating a data chunk of the object as being inconsistent.
20. The non-transitory machine readable medium of claim 19, wherein the operations comprise: populating the context within an indicator to indicate that a storage operating system marked the error that resulted in the pseudobad indicator.
7. Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,124,716. Although the claims at issue are not identical, they are not patentably distinct from each other because, in both instances, the claims are drawn to techniques for multi-tier write allocation for storing client data within multiple tiers of a storage system. The omission of “reading the identifier within the header in order to determine that the object was successfully stored within the second storage location with non-corrupt data” does not change the scope of the claims of the instant application relative to the issued patent. Similarly, in both instances, a first storage tier and a second storage tier are utilized for identifying data associated with a policy without the need to change how the file system references the data, because the file system was already configured to utilize the second storage tier location identifier.
Claim Objections
8. Claims 2-3 and 5 are objected to because of the following informalities: The word “chuck” appears to be a typographical error and should be amended to recite “chunk”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
9. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
10. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
11. Applicant has provided a submission in this file that the claimed invention and the subject matter disclosed in the prior art reference were owned by, or subject to an obligation of assignment to, the same entity as NetApp, Inc. not later than the effective filing date of the claimed invention, or the subject matter disclosed in the prior art reference was developed and the claimed invention was made by, or on behalf of one or more parties to a joint research agreement not later than the effective filing date of the claimed invention. However, although subject matter disclosed in the reference Katiyar et al. (Pub No. 2017/0024161) has been excepted as prior art under 35 U.S.C. 102(a)(2), it is still applicable as prior art under 35 U.S.C. 102(a)(1) that cannot be excepted under 35 U.S.C. 102(b)(2)(C).
Applicant may overcome this rejection under 35 U.S.C. 102(a)(1) by a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application, and is, therefore, not prior art as set forth in 35 U.S.C. 102(b)(1)(A). Alternatively, Applicant may rely on the exception under 35 U.S.C. 102(b)(1)(B) by providing evidence of a prior public disclosure via an affidavit or declaration under 37 CFR 1.130(b).
12. Claims 1 and 6-9 are rejected under 35 U.S.C. 103 as being unpatentable over Mattsson (Pub No. 2008/0082834) in view of Katiyar et al. (Pub No. 2017/0024161).
Referring to the rejection of claim 1, Mattsson discloses a method comprising:
assigning a context, populated with an encryption key index, to an object, and identifying an encryption key for the object using the encryption key index; (See Mattsson, para. 6, 8, 52-54, and 57, Figs. 7-12, i.e., metadata (context) is stored with data elements/records/fields, e.g., “storing metadata about the encrypted datum” [0006]. The metadata may include a key identifier/key index, and a key indicator may be stored in the same element as the encrypted string: “The data may be encrypted using an encryption key and the metadata may comprise a key identifier for the encryption key” [0008]; “the record 702 … contains a key indicator 706” that “provides an index to an encryption key” [0052-0054]; “it is desired to store three bytes of meta data containing a key index” [0057])
and utilizing the encryption key to decrypt the data chunk within the object to provide in response to the request. (See Mattsson, para. 3, 59, 63, and 68-69, i.e., in order to decrypt encrypted data, one must possess one or more pieces of information such as an encryption key, the encryption algorithm, and an initialization vector (IV). The original value from another field (the table's primary key, for example) must always be available at decryption operations if used to modify the encryption key. A method to ensure that the original value for the IV field is available at decryption operations is to store a new dedicated field for the IV value. FIG. 10 shows DTP encrypted columns and an optional DTP recovery column, with values encrypted with the same IV. Looking at these values, it would likely be possible to decrypt the values even without knowing any clear text: the clear text bytes are restricted to '0'-'9', and each byte is XOR-ed with a constant, K, so by looking at the variance it may be possible to determine which character is '0', which is '1', and so on. By including the metadata with the sensitive data, the receiving device will have some of the required information for decryption. Moreover, in embodiments where the sensitive data is compressed, no modifications are required to database tables, as the encrypted sensitive data and the corresponding metadata will fit into the same amount of space as originally allocated in the tables)
However, Mattsson fails to explicitly disclose for storage within a storage tier of a multi-tiered storage environment.
Katiyar et al. discloses a method and system for storing data at different storage tiers of a storage system.
Katiyar et al. discloses for storage within a storage tier of a multi-tiered storage environment; storing the object, assigned the context, into the storage tier; and in response to receiving a request to access a data chunk within the object stored into the storage tier. (See Katiyar et al., para. 25, 30-34, 100-110, and 119-121, i.e., files/blocks/objects, explicitly stating that “file” includes an object, and teaching storage within a storage tier of a multi-tiered storage environment (i.e., tier placement): multi-tier storage with performance and capacity tiers, monitoring and moving data between tiers, and using TVBN/chunk mapping to permit movement without pointer invalidation. “SSD tier … performance tier … cold data is moved to HDDs” [0030-0034]; “transferring the data from the first storage tier to the second storage tier” (Summary/Claim 1); “data is transferred … and the chunk ID map is updated … TVBN address of the block does not change” [0106-0110], [0120-0121])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mattsson’s method and system of storing encrypted data, such that the data contains sufficient information to decrypt the data, with Katiyar et al.’s method and system for storing data at different storage tiers of a storage system.
Motivation for such an implementation would be to enable data movement between different storage tiers without having to invalidate indirect block pointers at the Level 1 blocks. (See Katiyar et al., para. 88)
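By way of illustration, the technique mapped above for claim 1 (a context carrying an encryption key index, which is used to identify the key and decrypt a requested data chunk) can be sketched as follows. This sketch is not drawn from the references themselves; all names are hypothetical, and a simple XOR stands in for whatever cipher the reference employs.

```python
# Hypothetical sketch: an object carries a "context" holding an
# encryption key index; on an access request, the index is used to
# look up the encryption key and decrypt the requested data chunk.

KEY_STORE = {7: b"\x5a"}  # key index -> key material (hypothetical)

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is a stand-in for a real cipher; it is its own inverse.
    return bytes(b ^ key[0] for b in data)

def store_object(chunks, key_index):
    # Assign the context (with key index) and store encrypted chunks.
    key = KEY_STORE[key_index]
    return {"context": {"key_index": key_index},
            "chunks": [xor_cipher(c, key) for c in chunks]}

def read_chunk(obj, i):
    # Identify the encryption key via the context's key index,
    # then decrypt the requested chunk to provide in response.
    key = KEY_STORE[obj["context"]["key_index"]]
    return xor_cipher(obj["chunks"][i], key)

obj = store_object([b"hello", b"world"], key_index=7)
assert read_chunk(obj, 1) == b"world"
```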
Referring to the rejection of claim 6, (Mattsson modified by Katiyar et al.) discloses comprising: creating a checksum for both the data chunk and the context; populating the context with the checksum; and utilizing the checksum to verify the data chunk. (See Katiyar et al., para. 91-96, i.e., a chunk ID to virtual ID relationship is created when data is write allocated to a chunk. For example, when data is first write allocated to, say, chunk 1, a virtual ID is assigned to chunk 1. This relationship is updated when data is moved. The generation count is zero when a virtual ID is first used, and the count is incremented when the data is moved. The file system places the PVBN at a Level 1 block; the VVOL indirect stores the VVBN, and a TVBN reduces the redirection cost and the metadata cost of tiering data. A TVBN (i.e., context) may be 48 bits and may include a TVBN identifier (TVBN ID), the generation count (for example, a 4-bit value), a virtual ID of the chunk (a 24-bit value), and an offset (for example, a 19-bit value) of the PVBN of the chunk. The PVBN is obtained from the chunk ID and the offset value, and the virtual ID and the generation count are stored in a checksum context of each block so that the chunk ID data structure (may also be referred to as chunk ID map metafile) can be recreated in case there is any corruption (i.e., verifying the data chunk using the checksum))
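By way of illustration, the checksum-context mechanism mapped above for claim 6 (a checksum computed over both the data chunk and its context, stored in the context, and later used for verification) can be sketched as follows. This sketch is not taken from the references; all names are hypothetical, and CRC32 stands in for whatever checksum the reference employs.

```python
import zlib

def make_context(chunk: bytes, meta: dict) -> dict:
    # Create a checksum over both the data chunk and the context
    # metadata, then populate the context with that checksum.
    ctx = dict(meta)
    ctx["checksum"] = zlib.crc32(chunk + repr(sorted(meta.items())).encode())
    return ctx

def verify(chunk: bytes, ctx: dict) -> bool:
    # Utilize the stored checksum to verify the data chunk:
    # recompute over the chunk plus the non-checksum context fields.
    meta = {k: v for k, v in ctx.items() if k != "checksum"}
    return ctx["checksum"] == zlib.crc32(chunk + repr(sorted(meta.items())).encode())

ctx = make_context(b"data-chunk", {"fbn": 42})
assert verify(b"data-chunk", ctx)
```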
Referring to the rejection of claim 7, (Mattsson modified by Katiyar et al.) discloses comprising: evaluating the context to identify a file block number for the data chunk; and utilizing the file block number to access the data chunk within the object. (See Katiyar et al., para. 43 and 83, i.e., to facilitate access to storage space at the storage sub-system, the storage operating system implements a file system that logically organizes the information as a hierarchical structure of directories and files on the disks. Each “on-disk” file may be implemented as a set of disk blocks configured to store information, such as text, whereas a directory may be implemented as a specially formatted file in which other files and directories are stored. These data blocks are organized within a volume block number (vbn) space that is maintained by the file system. The file system may also assign each data block in the file a corresponding “file offset” or file block number (fbn). The file system typically assigns sequences of fbns on a per-file basis, whereas vbns are assigned over a larger volume address space. When operating in a VVOL, the VVBN identifies a file block number (fbn) location within the file, and the file system uses the indirect blocks of the hidden container file to translate the fbn into a physical VBN (PVBN) location within the physical volume, which block can then be retrieved from a storage device)
Referring to the rejection of claim 8, (Mattsson modified by Katiyar et al.) discloses comprising: creating the object to comprise a plurality of object pages; and populating data chunks into the plurality of object pages, wherein the data chunks correspond to data moved from a source storage tier to the storage tier. (See Katiyar et al., para. 89-91, i.e., the data structure is based on dividing the PVBN space of a storage device into a plurality of “chunks” (i.e., a plurality of object pages). As shown in FIG. 7A, the PVBN for SSD may be divided into chunks Chunk 1, 2, 3 and so forth. The PVBN of a hard drive (HDD) may be divided into Chunks X, Y, Z and so forth. In one aspect, a chunk is a set of contiguous PVBNs at the storage device. Chunks may be aligned from a start PVBN of each storage device of an aggregate. Each chunk is uniquely identified by the chunk identifier (shown as Chunk ID in Column 702C). The chunk ID may be derived from a first PVBN in the chunk. When the storage device PVBN chunk size is aligned with the chunk ID, then the chunk ID can be derived from the most significant bits of the PVBN. Column 702A stores the virtual ID of a chunk and column 702B stores a generation count (shown as Gen). The chunk ID to virtual ID relationship is created when data is write allocated to a chunk. For example, when data is first write allocated to, say, chunk 1, a virtual ID is assigned to chunk 1. This relationship is updated when data is moved. For example, if data is moved to chunk 2, then the virtual ID is assigned to chunk 2)
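By way of illustration, the chunk division and chunk ID mapping paraphrased above from Katiyar et al. (deriving a chunk ID from a PVBN, assigning a virtual ID at write allocation, and incrementing a generation count when data moves between chunks) can be sketched as follows. All names and sizes are hypothetical.

```python
CHUNK_SIZE = 1024  # hypothetical number of contiguous PVBNs per chunk

def chunk_id(pvbn: int) -> int:
    # The chunk ID is derived from the first PVBN in the chunk.
    return pvbn // CHUNK_SIZE

chunk_map = {}  # chunk ID -> (virtual ID, generation count)

def write_allocate(cid: int, vid: int):
    # Chunk ID to virtual ID relationship created at write allocation;
    # the generation count is zero when a virtual ID is first used.
    chunk_map[cid] = (vid, 0)

def move(old_cid: int, new_cid: int):
    # When data is moved to another chunk, the virtual ID follows it
    # and the generation count is incremented.
    vid, gen = chunk_map.pop(old_cid)
    chunk_map[new_cid] = (vid, gen + 1)

write_allocate(chunk_id(2048), vid=5)  # PVBN 2048 falls in chunk 2
move(2, 9)                             # data relocated to chunk 9
assert chunk_map[9] == (5, 1)
```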
Referring to the rejection of claim 9, (Mattsson modified by Katiyar et al.) discloses comprising: creating the object to comprise a header; populating the header with a creation timestamp for the object; and utilizing the creation timestamp as part of verifying the data chunk. (See Katiyar et al., para. 62, 65, and 84, i.e., the file system is a message-based system that provides logical volume management capabilities for use in access to the information stored at the storage devices. The WAFL file system has an on-disk format representation that is block-based using, e.g., 4 kilobyte (KB) blocks and using inodes to identify files and file attributes (such as creation time, access permissions, size, and block location). The information stored in the meta-data section of each inode describes a file and, as such, may include the file type (e.g., regular or directory), size of the file, time stamps (e.g., access and/or modification) for the file, and ownership, i.e., user identifier (UID) and group ID (GID), of the file. The storage label file is the analog of a RAID label and, as such, contains information about the state of the VVOL such as, e.g., the name of the VVOL, a universal unique identifier (UUID) and fsid of the VVOL, whether it is online, being created or being destroyed, etc.)
13. Claims 2-5 are rejected under 35 U.S.C. 103 as being unpatentable over Mattsson (Pub No. 2008/0082834) in view of Katiyar et al. (Pub No. 2017/0024161), as applied to claim 1 above, and further in view of Belluomini et al. (Pub No. 2009/0083504). The combination of Mattsson and Katiyar et al. fails to explicitly disclose an error indicator and designating a data chunk as being inconsistent.
Belluomini et al. discloses a method and system for checking the integrity of data in disk storage systems.
Referring to the rejection of claim 2, (Mattsson and Katiyar et al. modified by Belluomini et al.) discloses comprising: in response to a determination that the context includes an unverified error indicator, designating the data chunk as being inconsistent. (See Belluomini et al., [0083]–[0085]: If the AMD for the data read is inconsistent with the data… checker 134 may determine that the data or the appendix has been corrupted… If the VMD… is inconsistent… checker 134 may invoke error handler 160…)
The rationale for combining Mattsson and Katiyar et al. in view of Belluomini et al. is the same as claim 2.
Referring to the rejection of claim 3, (Mattsson and Katiyar et al. modified by Belluomini et al.) discloses in response to a determination that the context includes a pseudobad indicator that the data chunk had an error when being read from a storage location for generating the object, designating that the data chunk is inconsistent. (See Belluomini et al., para. 86 and 99, i.e., when an AMD error is detected (e.g., when the checker has determined that the AMD for the read data is corrupted without examining the VMD), then it is possible that either the data or the appendix or both are corrupted. If the error handler does not locate an error in the target data based on the detected VMD error, then it is determined that the VMD stored in the LLNVS device is corrupted, and an error is declared)
The rationale for combining Mattsson and Katiyar et al. in view of Belluomini et al. is the same as claim 2.
Referring to the rejection of claim 4, (Mattsson and Katiyar et al. modified by Belluomini et al.) discloses comprising: setting a wrecked indicator within the context of the object based upon forceful corruption of the data chunk. (See Belluomini et al., para. 86 and 99, i.e., when an AMD error is detected (e.g., when the checker has determined that the AMD for the read data is corrupted without examining the VMD), then it is possible that either the data or the appendix or both are corrupted. If the error handler does not locate an error in the target data based on the detected VMD error, then it is determined that the VMD stored in the LLNVS device is corrupted, and an error is declared)
The rationale for combining Mattsson and Katiyar et al. in view of Belluomini et al. is the same as claim 2.
Referring to the rejection of claim 5, (Mattsson and Katiyar et al. modified by Belluomini et al.) discloses comprising: in response to a determination that the context indicates that an unverified RAID error occurred when the data chunk was being read from a storage location for generating the object, designating the data chunk as being inconsistent. (See Belluomini et al., para. 85, 91, and 97-98, i.e., if the two copies of the VMD stored in the appendix and in the LLNVS devices are inconsistent, then the checker invokes the error handler to manage the error. The two error handler methods for VMD and AMD use multiple copies of the VMD stored in one or more LLNVS devices, as integrated with a RAID layer, to repair corrupt data stored on disk drives. If the checker determines that the AMD associated with the rebuilt data is inconsistent with that from the appendix, then the checker determines whether the VMD in the appendix is consistent with the VMD stored in the LLNVS devices; since the AMD for the rebuilt data is not a match, the logger may log an error and fail the IO operation requested by the host. If an error is located, then the error handler determines whether the error is associated with the target data; if so, the error handler requests that the RAID layer rebuild the target data, and if the rebuild attempt is not successful, then the logger logs an error and a failure is returned to the host in response to the submitted IO request)
The rationale for combining Mattsson and Katiyar et al. in view of Belluomini et al. is the same as claim 2.
Claim Rejections - 35 USC § 102
14. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
15. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
16. Applicant has provided evidence in this file showing that the claimed invention and the subject matter disclosed in the prior art reference were owned by, or subject to an obligation of assignment to, the same entity, NetApp, Inc., not later than the effective filing date of the claimed invention, or that the subject matter disclosed in the prior art reference was developed and the claimed invention was made by, or on behalf of, one or more parties to a joint research agreement in effect not later than the effective filing date of the claimed invention. However, although reference Katiyar et al. (Pub No. 2017/0024161) has been excepted as prior art under 35 U.S.C. 102(a)(2), it still qualifies as prior art under 35 U.S.C. 102(a)(1), which cannot be excepted under 35 U.S.C. 102(b)(2)(C).
Applicant may rely on the exception under 35 U.S.C. 102(b)(1)(A) to overcome this rejection under 35 U.S.C. 102(a)(1) by a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application, and is therefore not prior art under 35 U.S.C. 102(a)(1). Alternatively, applicant may rely on the exception under 35 U.S.C. 102(b)(1)(B) by providing evidence of a prior public disclosure via an affidavit or declaration under 37 CFR 1.130(b).
17. Claims 10-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Katiyar et al. (Pub No. 2017/0024161).
Referring to the rejection of claim 10, Katiyar et al. discloses a computing device comprising:
a memory comprising machine executable code; (See Katiyar et al., Fig. 1, a memory comprising machine executable code disclosed as the memory device, item 112)
and a processor coupled to the memory, the processor configured to execute the machine executable code to: (See Katiyar et al., Fig. 1, a processor coupled to the memory configured to execute the machine executable code disclosed as the processor, item 110)
assign a context, populated with a file block number for a data chunk, to an object for storage within a storage environment, wherein the data chunk is stored within the object; (See Katiyar et al., para. 43, i.e., to facilitate access to storage space at the storage sub-system, the storage operating system implements a file system that logically organizes the information as a hierarchical structure of directories and files on the disks. Each “on-disk” file may be implemented as set of disk blocks configured to store information, such as text, whereas a directory may be implemented as a specially formatted file in which other files and directories are stored. These data blocks are organized within a volume block number (vbn) space that is maintained by the file system. The file system may also assign each data block in the file a corresponding “file offset” or file block number (fbn). The file system typically assigns sequences of fbns on a per-file basis, whereas vbns are assigned over a larger volume address space. The file system organizes the data blocks within the vbn space as a logical volume. The file system typically consists of a contiguous range of vbns from zero to n, for a file system of size n−1 blocks)
store the object, assigned the context, into the storage environment; (See Katiyar et al., para. 45, i.e., The storage operating system may further implement a storage module, such as a RAID system, that manages the storage and retrieval of the information to and from storage devices in accordance with input/output (I/O) operations. The RAID system typically organizes the RAID groups into one large “physical” disk (i.e., a physical volume), such that the disk blocks are concatenated across all disks of all RAID groups. The logical volume maintained by the file system is then “disposed over” (spread over) the physical volume maintained by the RAID system)
and in response to receiving a request to access the data chunk within the object stored into the storage environment: evaluate the context to identify the file block number for the data chunk and utilize the file block number to access the data chunk within the object to provide in response to the request (See Katiyar et al., para. 46 and 83-84, i.e., when accessing a block of a file in response to servicing a client request, the file system specifies a vbn that is translated at the file system/RAID system boundary into a disk block number (dbn) location on a particular storage device (disk, dbn) within a RAID group of the physical volume. Each block in the vbn space and in the dbn space is typically fixed, e.g., 4 k bytes (kB), in size; accordingly, there is typically a one-to-one mapping between the information stored on the disks in the dbn space and the information organized by the file system in the vbn space. The (disk, dbn) location specified by the RAID system is further translated by a storage driver system of the storage operating system into a plurality of sectors (e.g., a 4 kB block with a RAID header translates to 8 or 9 disk sectors of 512 or 520 bytes) on the specified storage device. When operating in a VVOL, the VVBN identifies a file block number (fbn) location within the file and the file system uses the indirect blocks of the hidden container file to translate the fbn into a physical block number. The storage label file is a 4 kB file that contains metadata similar to that stored in a conventional RAID label. In other words, the storage label file is the analog of a RAID label and, as such, contains information about the state of the VVOL such as, e.g., the name of the VVOL, a universal unique identifier (UUID) and fsid of the VVOL, whether it is online, being created or being destroyed, etc.)
Referring to the rejection of claim 11, Katiyar et al. discloses wherein the machine executable code causes the computing device to: create the object to comprise a header; populate the header with a creation timestamp for the object; and utilize the creation timestamp as part of verifying the data chunk. (See Katiyar et al., para. 62, 65, and 84, i.e., the file system is a message-based system that provides logical volume management capabilities for use in access to the information stored at the storage devices. The WAFL file system has an on-disk format representation that is block-based using, e.g., 4 kilobyte (KB) blocks and using inodes to identify files and file attributes (such as creation time, access permissions, size and block location). The information stored in the meta-data section of each inode describes a file and, as such, may include the file type (e.g., regular or directory), size of the file, time stamps (e.g., access and/or modification) for the file and ownership, i.e., user identifier (UID) and group ID (GID), of the file. The storage label file is the analog of a RAID label and, as such, contains information about the state of the VVOL such as, e.g., the name of the VVOL, a universal unique identifier (UUID) and fsid of the VVOL, whether it is online, being created or being destroyed, etc.)
Referring to the rejection of claim 12, Katiyar et al. discloses wherein the machine executable code causes the computing device to: create the object to comprise a header; populate the header with a volume identifier of a volume storing data populated into the object; and utilize the volume identifier as part of verifying the data chunk. (See Katiyar et al., para. 69-71, 80, and 84, i.e., a RAID label includes “physical” information about the storage system, such as the volume name created to comprise the header; that information is loaded into the storage label file. The storage label file includes the name of the associated VVOL, the online/offline status of the VVOL, and other identity and state information of the associated VVOL, including whether it is in the process of being created or destroyed (i.e., verifying the data chunk). The storage label file is the analog of a RAID label and, as such, contains information about the state of the VVOL such as, e.g., the name of the VVOL, a universal unique identifier (UUID) and fsid of the VVOL (i.e., volume identifier), whether it is online, being created or being destroyed (i.e., verifying the data chunk). Katiyar et al. further discloses another variation of the volume identifier as being a buffer tree UUID, wherein the buffer tree is an internal representation of blocks for a data container (e.g., file A) loaded into the buffer cache and maintained by the file system. The data of file A are contained in data blocks and the locations of these blocks are stored in the indirect blocks of the file; due to the “write anywhere” nature of the file system, these blocks may be located anywhere at the storage devices)
Referring to the rejection of claim 13, Katiyar et al. discloses wherein the machine executable code causes the computing device to: create the object to comprise a header; populate the header with a hash of a name of the object and a volume identifier of a volume storing data populated into the object; and utilize the hash as part of verifying the data chunk. (See Katiyar et al., para. 43, 69-71, 80, 84, and 96, i.e., a RAID label includes “physical” information about the storage system, such as the volume name created to comprise the header; that information is loaded into the storage label file. The storage label file includes the name of the associated VVOL, the online/offline status of the VVOL, and other identity and state information of the associated VVOL, including whether it is in the process of being created or destroyed (i.e., verifying the data chunk). The storage label file is the analog of a RAID label and, as such, contains information about the state of the VVOL such as, e.g., the name of the VVOL, a universal unique identifier (UUID) and fsid of the VVOL (i.e., volume identifier), whether it is online, being created or being destroyed (i.e., verifying the data chunk). Katiyar et al. further discloses another variation of the volume identifier as being a buffer tree UUID, wherein the buffer tree is an internal representation of blocks for a data container (e.g., file A) loaded into the buffer cache and maintained by the file system. The data of file A are contained in data blocks and the locations of these blocks are stored in the indirect blocks of the file; due to the “write anywhere” nature of the file system, these blocks may be located anywhere at the storage devices.
The file system typically assigns sequences of fbns on a per-file basis, whereas vbns are assigned over a larger volume address space, and the virtual ID and the generation count are stored in a checksum (i.e., hash) context of each block so that the chunk ID data structure (which may also be referred to as the chunk ID map metafile) can be recreated in case there is any corruption (i.e., the name corresponds to a hash of the volume identifier and the sequence number))
Referring to the rejection of claim 14, Katiyar et al. discloses wherein the machine executable code causes the computing device to: execute an operation targeting the object to verify the hash. (See Katiyar et al., para. 84 and 120, i.e., an identifier of a name of the object (e.g., a hash of the name and the buffer tree UUID) can be read back after the object is moved into the remote object store (i.e., the cloud server disclosed as a capacity tier storage device) and used to verify the hash, wherein the selected source chunk is moved to a destination, for example, to a capacity tier storage device. Before the data is moved, the file system ensures that there are enough blocks at the destination to store the transferred chunk. After the entire chunk is transferred, the source chunk's virtual ID is mapped to the destination chunk at the data structure. The TVBN address of the block does not change after the move, which ensures that the indirect block pointers remain valid. Since the TVBN address does not change, no updates are needed to the allocation files. Thereafter, the process ends)
Referring to the rejection of claim 15, Katiyar et al. discloses wherein the machine executable code causes the computing device to: create a checksum for both the data chunk and the context; populate the context with the checksum; and utilize the checksum to verify the data chunk. (See Katiyar et al., para. 91-96, i.e., the chunk ID to virtual ID relationship is created when data is write allocated to a chunk. For example, when data is first write allocated to, say, chunk 1, a virtual ID is assigned to chunk 1. This relationship is updated when data is moved. The generation count is zero when a virtual ID is first used. The count is incremented when the data is moved. The file system places the PVBN at a Level 1 block. The VVOL indirect block stores the VVBN, and a TVBN reduces the redirection cost and the metadata cost of tiering data. A TVBN (i.e., context) may be 48 bits and may include a TVBN identifier (TVBN ID), the generation count (for example, a 4-bit value), a virtual ID of the chunk (a 24-bit value) and an offset (for example, a 19-bit value) of the PVBN of the chunk. The PVBN is obtained from the chunk ID and the offset value, and the virtual ID and the generation count are stored in a checksum context of each block so that the chunk ID data structure (which may also be referred to as the chunk ID map metafile) can be recreated in case there is any corruption (i.e., verifying the data chunk using the checksum))
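For illustration only (this sketch is not part of the record; the field ordering and packing below are assumptions, as the reference gives only the field widths), the 48-bit TVBN layout described in para. 91-96 of Katiyar et al., i.e., a TVBN identifier, a 4-bit generation count, a 24-bit virtual chunk ID, and a 19-bit offset, could be packed and unpacked as:

```python
# Hypothetical packing of the 48-bit TVBN described by Katiyar et al.:
# 1-bit TVBN ID | 4-bit generation count | 24-bit virtual ID | 19-bit offset.
# The field order is an assumption; only the widths come from the reference.
OFFSET_BITS, VID_BITS, GEN_BITS = 19, 24, 4

def pack_tvbn(gen: int, vid: int, offset: int, tvbn_id: int = 1) -> int:
    """Pack the fields into a single 48-bit TVBN value."""
    assert gen < (1 << GEN_BITS) and vid < (1 << VID_BITS) and offset < (1 << OFFSET_BITS)
    return (((tvbn_id << GEN_BITS | gen) << VID_BITS | vid) << OFFSET_BITS) | offset

def unpack_tvbn(tvbn: int) -> tuple[int, int, int]:
    """Return (generation count, virtual ID, offset) from a packed TVBN."""
    offset = tvbn & ((1 << OFFSET_BITS) - 1)
    vid = (tvbn >> OFFSET_BITS) & ((1 << VID_BITS) - 1)
    gen = (tvbn >> (OFFSET_BITS + VID_BITS)) & ((1 << GEN_BITS) - 1)
    return gen, vid, offset
```

Because the TVBN holds a virtual chunk ID rather than a physical location, moving a chunk updates only the chunk-ID map; the packed TVBN, and hence the indirect block pointers, remain valid.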
Referring to the rejection of claim 16, Katiyar et al. discloses wherein the machine executable code causes the computing device to: create the object to comprise a header; populate the header with a buff tree universal identifier of a volume storing data populated into the object; and utilize the buff tree universal identifier as part of verifying the data chunk. (See Katiyar et al., para. 69-71, 80, and 84, i.e., a RAID label includes “physical” information about the storage system, such as the volume name created to comprise the header; that information is loaded into the storage label file. The storage label file includes the name of the associated VVOL, the online/offline status of the VVOL, and other identity and state information of the associated VVOL, including whether it is in the process of being created or destroyed (i.e., verifying the data chunk). The storage label file is the analog of a RAID label and, as such, contains information about the state of the VVOL such as, e.g., the name of the VVOL, a universal unique identifier (UUID) and fsid of the VVOL (i.e., volume identifier), whether it is online, being created or being destroyed (i.e., verifying the data chunk). Katiyar et al. further discloses another variation of the volume identifier as being a buffer tree UUID, wherein the buffer tree is an internal representation of blocks for a data container (e.g., file A) loaded into the buffer cache and maintained by the file system. The data of file A are contained in data blocks and the locations of these blocks are stored in the indirect blocks of the file; due to the “write anywhere” nature of the file system, these blocks may be located anywhere at the storage devices)
Claim Rejections - 35 USC § 103
18. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
19. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
20. Applicant has provided a submission in this file that the claimed invention and the subject matter disclosed in the prior art reference were owned by, or subject to an obligation of assignment to, the same entity, NetApp, Inc., not later than the effective filing date of the claimed invention, or that the subject matter disclosed in the prior art reference was developed and the claimed invention was made by, or on behalf of, one or more parties to a joint research agreement not later than the effective filing date of the claimed invention. However, although the subject matter disclosed in the reference Katiyar et al. (Pub No. 2017/0024161) has been excepted as prior art under 35 U.S.C. 102(a)(2), it still qualifies as prior art under 35 U.S.C. 102(a)(1), which cannot be excepted under 35 U.S.C. 102(b)(2)(C).
Applicant may rely on the exception under 35 U.S.C. 102(b)(1)(A) to overcome this rejection under 35 U.S.C. 102(a)(1) by a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application, and is therefore not prior art under 35 U.S.C. 102(a)(1). Alternatively, applicant may rely on the exception under 35 U.S.C. 102(b)(1)(B) by providing evidence of a prior public disclosure via an affidavit or declaration under 37 CFR 1.130(b).
21. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Katiyar et al. (Pub No. 2017/0024161) in view of Mattsson (Pub No. 2008/0082834). Katiyar et al. discloses the invention as applied to claim 10 above; however, Katiyar et al. fails to explicitly disclose encryption.
Mattsson discloses a method and system of data storage for storing encrypted data such that the data contains sufficient information to decrypt the data.
Referring to the rejection of claim 17, (Katiyar et al. modified by Mattsson) discloses wherein the machine executable code causes the computing device to: create the object to comprise a header; populate the header with an indicator as to whether the object is encrypted; and utilize the indicator to determine how to access the data chunk. (See Mattsson, para. 53-54 and 60, i.e., a data record (row) contains one or more encrypted fields. The record also contains a key indicator. The key indicator provides an index to an encryption key used to encrypt the record. A key indicator identifies the encryption key which was used to encrypt data in a record/row. The key indicator is stored as part of the encrypted field, in this case appended to the encrypted string. An initialization vector used to encrypt data in a record/row is identified and stored or indicated in meta data. This feature would allow DTP and DTC fields to transparently include meta data in the storage format. The meta data can include recovery information, integrity check information, a key generation index, and rotating initialization vectors. A DTP recovery column contains fields with information used to validate the integrity and encryption status. These fields can contain information used to validate the integrity and encryption status for each encrypted field that is contained in this row, object, or record)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Katiyar et al.'s method and system for storing data at different storage tiers of a storage system with Mattsson's method and system of data storage for storing encrypted data such that the data contains sufficient information to decrypt the data.
The motivation for such an implementation would be to enable securely storing data by encrypting a clear-text datum, storing the encrypted datum, and storing metadata about the encrypted datum. (See Mattsson, para. 6)
22. Claims 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Belluomini et al. (Pub No. 2009/0083504) in view of Katiyar et al. (Pub No. 2017/0024161).
Referring to the rejection of claim 18, (Belluomini et al. modified by Katiyar et al.) discloses a non-transitory machine readable medium comprising instructions for performing a method, which when executed by a machine, cause the machine to:
assigning a context, populated with an indicator that a data chunk had an error when being read from a storage location for generating an object, (See Belluomini et al., para. 83, i.e., the host reads a data chunk and an associated appendix from disk drives (i.e., HDD tier), and also at least one copy of a VMD corresponding to the data chunk from one of the LLNVS devices (i.e., SSD tier). The checker examines the AMD to validate the integrity of the read data; if the AMD for the data read is inconsistent with the data, then the checker may determine that the data or the appendix has been corrupted and may invoke the error handler to manage the error)
evaluating the indicator to determine that the data chunk had the error; (See Belluomini et al., para. 86 and 99, i.e., when an AMD error is detected (e.g., when the checker has determined that the AMD for the read data is corrupted without examining the VMD), then it is possible that either the data or the appendix or both are corrupted. If the error handler does not locate an error in the target data based on the detected VMD error, then it is determined that the VMD stored in the LLNVS device is corrupted, and an error is declared)
and designating the data chunk of the object as being inconsistent as a response to the request. (See Belluomini et al., para. 85, 91, and 97-98, i.e., if the two copies of the VMD stored in the appendix and in the LLNVS devices are inconsistent, then the checker invokes the error handler to manage the error. The two error handler methods for VMD and AMD use multiple copies of the VMD stored in one or more LLNVS devices, as integrated with a RAID layer, to repair corrupt data stored on disk drives. If the checker determines that the AMD associated with the rebuilt data is inconsistent with that from the appendix, then the checker determines whether the VMD in the appendix is consistent with the VMD stored in the LLNVS devices; since the AMD for the rebuilt data is not a match, the logger may log an error and fail the IO operation requested by the host. If an error is located, then the error handler determines whether the error is associated with the target data; if so, the error handler requests that the RAID layer rebuild the target data, and if the rebuild attempt is not successful, then the logger logs an error and a failure is returned to the host in response to the submitted IO request)
However, Belluomini et al. fails to explicitly disclose the object being for storage within a storage tier of a multi-tiered storage environment.
Katiyar et al. discloses a method and system for storing data at different storage tiers of a storage system.
Katiyar et al. discloses assigning the context to the object for storage within a storage tier of a multi-tiered storage environment; storing the object, assigned the context, into the storage tier; and in response to receiving a request to access the data chunk within the object stored into the storage tier: (See Katiyar et al., para. 25, 30-34, 100-110, and 119-121, i.e., Katiyar et al. describes files/blocks/objects and explicitly states that a “file” includes an object, teaching “for storage within a storage tier of a multi-tiered storage environment” (i.e., tier placement): multi-tier storage, performance vs. capacity tiers, monitoring and moving data between tiers, and using TVBN/chunk mapping to permit movement without pointer invalidation (see [0030]–[0034], [0100]–[0107], [0119]–[0121], Claim 1). “SSD tier … performance tier … cold data is moved to HDDs” [0030]–[0034]; “transferring the data from the first storage tier to the second storage tier” (Summary/Claim 1); “data is transferred … and the chunk ID map is updated … TVBN address of the block does not change” [0106]–[0110], [0120]–[0121])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Belluomini et al.'s method and system for checking the integrity of data in disk storage systems with Katiyar et al.'s method and system for storing data at different storage tiers of a storage system.
The motivation for such an implementation would be to enable data movement between different storage tiers without having to invalidate indirect block pointers at the Level 1 blocks. (See Katiyar et al., para. 88)
Referring to the rejection of claim 19, (Belluomini et al. modified by Katiyar et al.) discloses wherein the instructions cause the machine to: create the object to comprise a header; populate the header with a buff tree universal identifier of a volume storing data populated into the object; and utilize the buff tree universal identifier as part of verifying the data chunk. (See Katiyar et al., para. 69-71, 80, and 84, i.e., a RAID label includes “physical” information about the storage system, such as the volume name created to comprise the header; that information is loaded into the storage label file. The storage label file includes the name of the associated VVOL, the online/offline status of the VVOL, and other identity and state information of the associated VVOL, including whether it is in the process of being created or destroyed (i.e., verifying the data chunk). The storage label file is the analog of a RAID label and, as such, contains information about the state of the VVOL such as, e.g., the name of the VVOL, a universal unique identifier (UUID) and fsid of the VVOL (i.e., volume identifier), whether it is online, being created or being destroyed (i.e., verifying the data chunk). Katiyar et al. further discloses another variation of the volume identifier as being a buffer tree UUID, wherein the buffer tree is an internal representation of blocks for a data container (e.g., file A) loaded into the buffer cache and maintained by the file system. The data of file A are contained in data blocks and the locations of these blocks are stored in the indirect blocks of the file; due to the “write anywhere” nature of the file system, these blocks may be located anywhere at the storage devices)
23. Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Belluomini et al. (Pub No. 2009/0083504) in view of Katiyar et al. (Pub No. 2017/0024161) as applied to claim 18 above, and further in view of Mattsson (Pub No. 2008/0082834). The combination of Belluomini et al. and Katiyar et al. fails to explicitly disclose encryption.
Mattsson discloses a method and system of data storage for storing encrypted data such that the data contains sufficient information to decrypt the data.
Referring to the rejection of claim 20, (Belluomini et al. and Katiyar et al. modified by Mattsson) discloses wherein the instructions cause the machine to: create the object to comprise a header; populate the header with an indicator as to whether the object is encrypted; and utilize the indicator to determine how to access the data chunk. (See Mattsson, para. 53-54 and 60, i.e., a data record (row) contains one or more encrypted fields. The record also contains a key indicator. The key indicator provides an index to an encryption key used to encrypt the record. A key indicator identifies the encryption key which was used to encrypt data in a record/row. The key indicator is stored as part of the encrypted field, in this case appended to the encrypted string. An initialization vector used to encrypt data in a record/row is identified and stored or indicated in meta data. This feature would allow DTP and DTC fields to transparently include meta data in the storage format. The meta data can include recovery information, integrity check information, a key generation index, and rotating initialization vectors. A DTP recovery column contains fields with information used to validate the integrity and encryption status. These fields can contain information used to validate the integrity and encryption status for each encrypted field that is contained in this row, object, or record)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Belluomini et al.’s method and system for checking the integrity of data in disk storage systems with Katiyar et al.’s method and system for storing data at different storage tiers of a storage system, as modified by Mattsson’s method and system of data storage for storing encrypted data such that the data contains sufficient information to decrypt the data.
Motivation for such an implementation would be to enable securely storing data by encrypting a clear-text datum, storing the encrypted datum, and storing metadata about the encrypted datum. (See Mattsson, para. 6)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to COURTNEY D FIELDS whose telephone number is (571)272-3871. The examiner can normally be reached M-F, 8am-4:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SHEWAYE GELAGAY can be reached at (571)272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/COURTNEY D FIELDS/Examiner, Art Unit 2436 February 19, 2026