DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 07 January 2026 has been entered.
Accordingly, claims 1, 3-14, and 16-21 are pending in this application. Claims 1, 4-5, 14, 16-17, and 20 are currently amended; claims 3, 6, 9, and 18-19 are previously presented; claims 7-8 and 10-13 are original; claims 2 and 15 are cancelled; and claim 21 is newly added.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
3. Claims 14 and 16-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter. Claim 14 recites “An apparatus comprising: a memory; and a processing device, operatively coupled to the memory”. According to the specification [Para. 143 of the publication], “The cloud-based storage system 318 depicted in FIG. 3C includes two cloud computing instances 320, 322 that each are used to support the execution of a storage controller application 324, 326. The cloud computing instances 320, 322 may be embodied, for example, as instances of cloud computing resources (e.g., virtual machines) that may be provided by the cloud computing environment 316 to support the execution of software applications such as the storage controller application 324, 326.”. Although the claims recite a memory and a processing device as part of the apparatus, the recited memory and processing device can be virtual according to the specification; as such, the apparatus can be virtual. Because the cloud computing instances are virtual machines, data can be migrated between different types of cloud computing instances, and the cloud computing instances are part of the cloud computing environment, the apparatus that performs the claimed migration may itself be entirely virtual. Therefore, according to the specification, the claimed apparatus may be software lacking any hardware, i.e., physical or tangible, form. A computer program per se (often referred to as "software per se"), when claimed as a product without any structural recitations, is directed to non-statutory subject matter. See MPEP § 2106.03(I). In light of Applicant's claims and specification, one of ordinary skill in the art could reasonably presume that the whole of claim 14 is intended to be implemented solely as software.
An invention implemented and instantiated solely as software is not a process or a composition of matter, and lacks the requisite structural elements to comprise a machine or article of manufacture. As such, Applicant's invention fails to fall within any of the four classes of statutory subject matter as described in 35 U.S.C. 101 and is rejected on that basis.
Claims 16-19 depend from claim 14, inherit its features, and do not cure the deficiencies previously set forth with respect to claim 14 above. As such, these claims are rejected under 35 U.S.C. § 101 for the same reasons set forth with respect to claim 14 above.
Double Patenting
4. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
5. Claims 1, 3-11, 14, and 16-21 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5-14, 16, and 19-20 of copending Application No. 17/487,778. The subject matter claimed in the instant application is fully disclosed in, and covered by, copending Application No. 17/487,778, and the two applications claim common subject matter, as shown in the following claim comparison:
Copending Application - 17/487,778
Instant Application - 17/487,208
1. (Currently Amended) A method comprising: initiating a migration of a dataset from a source storage system to a target storage system, wherein at least one of the source storage system and the target storage system is a cloud-based storage system, by mapping a volume in the target storage system to portions of data in the source storage system through metadata associated with the volume and providing a logical path to access portions of data in the source storage system; and providing, by the target storage system, read/write access to the dataset before completing migration of the dataset from the source storage system to the target storage system by using metadata associated with the volume to navigate the logical path to access the portions of data in the source storage system.
9. (Previously Presented) The method of claim 1, further comprising: migrating a portion of the dataset from the source storage system to the target storage system; and updating a mapping of the target storage system to the dataset to point to a location of the migrated portion in the target storage system.
1. (Currently Amended) A method comprising: initiating, by a storage system controller of a target storage system, a migration of a dataset from a source storage system to the target storage system by mapping a volume in the target storage system to the dataset stored in the source storage system using a metadata representation that identifies unmigrated data blocks of the dataset in the source storage system and migrated data blocks of the dataset in the target storage system; and during migration of the dataset from the source storage system to the target storage system, updating, by the storage system controller of the target storage system, the metadata representation of the volume such that access requests for portions of the dataset that are unmigrated are serviced from the source storage system and access requests for portions of the dataset that are migrated are serviced from the target storage system.
6. (Previously Presented) The method of claim 1, wherein the volume is created in response to a request to migrate the dataset from the source storage system to the target storage system.
3. (Previously Presented) The method of claim 1, wherein the volume is created in response to a request to migrate the dataset from the source storage system to the target storage system.
7. (Original) The method of claim 1, wherein the read/write access is provided before any portion of the dataset is copied from the source storage system to the target storage system.
4. (Currently Amended) The method of claim 1, wherein data services are provided before any portion of the dataset is copied from the source storage system to the target storage system.
8. (Previously Presented) The method of claim 1, further comprising: providing, by the target storage system, data services for the dataset before completing migration of the dataset from the source storage system to the target storage system, wherein the data services include at least one of snapshotting, cloning, data reduction, virtual copy, or replication.
5. (Currently Amended) The method of claim 1, wherein data services are provided during migration that include one or more features including at least one of snapshotting, cloning, data reduction, virtual copy, and replication.
9. (Previously Presented) The method of claim 1, further comprising: migrating a portion of the dataset from the source storage system to the target storage system; and updating a mapping of the target storage system to the dataset to point to a location of the migrated portion in the target storage system.
6. (Previously Presented) The method of claim 1, further comprising: migrating a portion of the dataset from the source storage system to the target storage system; and updating a mapping of the target storage system to the dataset to point to a location of the migrated portion in the target storage system.
10. (Original) The method of claim 6, wherein the dataset is copied from the source storage system to the target storage system without participation by a host.
7. (Original) The method of claim 6, wherein the dataset is copied from the source storage system to the target storage system without participation by a host.
11. (Original) The method of claim 6, wherein the dataset is encrypted, and wherein the target storage system includes one or more encryption keys for reading the dataset.
8. (Original) The method of claim 6, wherein the dataset is encrypted, and wherein the target storage system includes one or more encryption keys for reading the dataset.
12. (Previously Presented) The method of claim 1, further comprising: receiving, by the target storage system from a host, a request directed at least in part to an unmigrated portion of the dataset; and servicing, by the target storage system, the request.
9. (Previously Presented) The method of claim 1, further comprising: receiving, by the target storage system from a host, a request directed at least in part to an unmigrated portion of the dataset; and servicing, by the storage controller on the target storage system, the request.
13. (Original) The method of claim 9, wherein an update to the dataset is propagated to the source storage system.
10. (Original) The method of claim 9, wherein an update to the dataset is propagated to the source storage system.
14. (Original) The method of claim 9, wherein an update to the dataset is not propagated to the source storage system.
11. (Original) The method of claim 9, wherein an update to the dataset is not propagated to the source storage system.
12. (Original) The method of claim 1, further comprising: replicating migrated portions of the dataset to a cloud-based storage system.
13. (Original) The method of claim 1, wherein the target storage system and the source storage system are collocated, and wherein one of the target storage system and the source storage system is an on-premises storage system that implements a cloud infrastructure.
1. (Currently Amended) A method comprising: initiating a migration of a dataset from a source storage system to a target storage system, wherein at least one of the source storage system and the target storage system is a cloud-based storage system, by mapping a volume in the target storage system to portions of data in the source storage system through metadata associated with the volume and providing a logical path to access portions of data in the source storage system; and providing, by the target storage system, read/write access to the dataset before completing migration of the dataset from the source storage system to the target storage system by using metadata associated with the volume to navigate the logical path to access the portions of data in the source storage system.
9. (Previously Presented) The method of claim 1, further comprising: migrating a portion of the dataset from the source storage system to the target storage system; and updating a mapping of the target storage system to the dataset to point to a location of the migrated portion in the target storage system.
14. (Currently Amended) An apparatus comprising: a memory; and a processing device, operatively coupled to the memory, configured to: initiate, by a storage system controller of a target storage system, a migration of a dataset from a source storage system to the target storage system by mapping a volume in the target storage system to the dataset stored in the source storage system using a metadata representation that identifies data objects in the source storage system; and during migration of the dataset from the source storage system to the target storage system, update, by the storage system controller of the target storage system, the metadata representation of the volume such that access requests for portions of the dataset that are unmigrated are serviced from the source storage system and access requests for portions of the dataset that are migrated are serviced from the target storage system.
19. (Original) The apparatus of claim 16, wherein the read/write access is provided before any portion of the dataset is copied from the source storage system to the target storage system.
16. (Currently Amended) The apparatus of claim 14, wherein data services are provided before any portion of the dataset is copied from the source storage system to the target storage system.
8. (Previously Presented) The method of claim 1, further comprising: providing, by the target storage system, data services for the dataset before completing migration of the dataset from the source storage system to the target storage system, wherein the data services include at least one of snapshotting, cloning, data reduction, virtual copy, or replication.
17. (Currently Amended) The apparatus of claim 14, wherein data services are provided during migration include one or more features including at least one of snapshotting, cloning, data reduction, virtual copy, and replication.
9. (Previously Presented) The method of claim 1, further comprising: migrating a portion of the dataset from the source storage system to the target storage system; and updating a mapping of the target storage system to the dataset to point to a location of the migrated portion in the target storage system.
18. (Previously Presented) The apparatus of claim 14, the processor further configured to: migrate a portion of the dataset from the source storage system to the target storage system; and update a mapping of the target storage system to the dataset to point to a location of the migrated portion in the target storage system.
12. (Previously Presented) The method of claim 1, further comprising: receiving, by the target storage system from a host, a request directed at least in part to an unmigrated portion of the dataset; and servicing, by the target storage system, the request.
19. (Previously Presented) The apparatus of claim 14, the processor further configured to: receive, by the target storage system from a host, a request directed at least in part to an unmigrated portion of the dataset; and service, by the storage controller on the target storage system, the request.
1. (Currently Amended) A method comprising: initiating a migration of a dataset from a source storage system to a target storage system, wherein at least one of the source storage system and the target storage system is a cloud-based storage system, by mapping a volume in the target storage system to portions of data in the source storage system through metadata associated with the volume and providing a logical path to access portions of data in the source storage system; and providing, by the target storage system, read/write access to the dataset before completing migration of the dataset from the source storage system to the target storage system by using metadata associated with the volume to navigate the logical path to access the portions of data in the source storage system.
9. (Previously Presented) The method of claim 1, further comprising: migrating a portion of the dataset from the source storage system to the target storage system; and updating a mapping of the target storage system to the dataset to point to a location of the migrated portion in the target storage system.
20. (Currently Amended) A non-transitory computer readable storage medium storing instructions, which when executed, cause a processing device to: initiate, by a storage system controller of a target storage system, a migration of a dataset from a source storage system to the target storage system by mapping a volume in the target storage system to the dataset stored in the source storage system using a metadata representation that identifies data objects in the source storage system; and during migration of the dataset from the source storage system to the target storage system, update, by the storage system controller of the target storage system, the metadata representation of the volume such that access requests for portions of the data set that are unmigrated are serviced from the source storage system and access requests for portions of the dataset that are migrated are serviced from the target storage system.
1. (Currently Amended) A method comprising: initiating a migration of a dataset from a source storage system to a target storage system, wherein at least one of the source storage system and the target storage system is a cloud-based storage system, by mapping a volume in the target storage system to portions of data in the source storage system through metadata associated with the volume and providing a logical path to access portions of data in the source storage system; and providing, by the target storage system, read/write access to the dataset before completing migration of the dataset from the source storage system to the target storage system by using metadata associated with the volume to navigate the logical path to access the portions of data in the source storage system.
9. (Previously Presented) The method of claim 1, further comprising: migrating a portion of the dataset from the source storage system to the target storage system; and updating a mapping of the target storage system to the dataset to point to a location of the migrated portion in the target storage system.
21. (New) The method of claim 1, wherein the metadata representation for the volume refers to unmigrated data blocks of the dataset in the source storage system and to migrated data blocks of the dataset in the target storage system.
It is noted that it would have been obvious to a person of ordinary skill in the art at the time the invention was made to modify or to omit the additional elements of claims 1, 5-14, 16, and 19-20 of copending Application No. 17/487,778 to arrive at claims 1, 3-11, 14, and 16-21 of the instant application, because the person would have realized that the remaining elements would perform the same functions as before. "Omission of element and its function in combination is obvious expedient if the remaining elements perform same functions as before." See In re Karlson, 136 USPQ 184 (CCPA), decided Jan. 16, 1963, Appl. No. 6857, U.S. Court of Customs and Patent Appeals.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim Rejections - 35 USC § 103
6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
7. Claims 1, 14, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Butterworth et al. (US 2016/0092119 A1, previously cited), hereinafter Butterworth, in view of Balasubramanian et al. (US 2017/0371887 A1), hereinafter Balasubramanian.
As to claim 1, Butterworth discloses a method comprising: initiating, by a storage system controller of a target storage system, a migration of a dataset from a source storage system to the target storage system by mapping a volume in the target storage system to the dataset stored in the source storage system using a metadata representation that identifies unmigrated data blocks of the dataset in the source storage system and migrated data blocks of the dataset in the target storage system (Fig. 8-9, Para. 79, the progress of data migration may further be recorded so as to learn which data blocks in source storage system 410 have been migrated, which ones are being migrated and which ones have not been migrated. Specifically, in one embodiment of the present invention, the migrating data blocks in the source storage system to the target storage system via the virtual file system, i.e., a metadata representation, comprises: with respect to data blocks in the source storage system, on the basis of the progress of copying the data blocks from the source storage system to the target storage system, setting metadata that describes migration status of the data blocks, the metadata comprising at least one of "unmigrated," "under migration" and "migrated". Para. 61, “a virtual file system for reading data blocks in the source storage system may be built. Specifically, in this embodiment, virtual file system 526 is built in a target storage system 520 to directly read data blocks from source storage system 410, rather than data being delivered via a third-party migration controller.”. Para. 69, “In the virtual file system, each file/folder has it unique virtual path. The virtual file system achieves a mapping relationship from actual storage locations of files/folders to virtual paths, so that the target storage system may read data blocks in the source storage system.”. 
Thus, initiating, by a storage system controller of a target storage system, a migration of a dataset from a source storage system to the target storage system by mapping a volume in the target storage system to the dataset stored in the source storage system using a metadata representation that identifies unmigrated data blocks of the dataset in the source storage system and migrated data blocks of the dataset in the target storage system.).
Butterworth does not explicitly disclose during migration of the dataset from the source storage system to the target storage system, updating, by the storage system controller of the target storage system, the metadata representation of the volume such that access requests for portions of the dataset that are unmigrated are serviced from the source storage system and access requests for portions of the dataset that are migrated are serviced from the target storage system.
However, in the same field of endeavor, Balasubramanian discloses during migration of the dataset from the source storage system to the target storage system, updating, by the storage system controller of the target storage system, the metadata representation of the volume such that access requests for portions of the dataset that are unmigrated are serviced from the source storage system and access requests for portions of the dataset that are migrated are serviced from the target storage system (Fig. 3, Para. 49, “The metadata will be updated whenever the status of the file changes between premigrated, migrated, migrated with stubs, or resident.”. Para. 13, “managing the metadata of storage systems during data migration. Data may go through three stages in the course of migration: premigrated (e.g., when the data is on the source disk), migrated (e.g., when the data is on tape media), and resident (e.g., when the data is on the destination disk). Reading data from a source disk to a destination disk may include reading a full file of data from the source disk for purposes of writing the file of data to the destination disk, and then reading the requested data from the destination disk.”. Para. 39, “The metadata may include the information needed to access the data, such as the locations, permissions, whether data is premigrated/migrated/resident, etc. Changing the metadata may include changing the status of the data to reflect the location of the file. For example, where the file was migrated from the source disk to the tape media, the metadata may be changed by updating the status of the file to "migrated" or "migrated with stubs." The metadata is changed in response to the portion of data being migrated.”. Para. 41, “The migration controller may determine the location of the file by referencing the metadata on the file. The metadata on the file may be on either the source disk or the destination disk.”. 
Thus, during migration of the dataset from the source storage system to the target storage system, updating, by the storage system controller of the target storage system, the metadata representation of the volume such that access requests for portions of the dataset that are unmigrated are serviced from the source storage system and access requests for portions of the dataset that are migrated are serviced from the target storage system.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Butterworth by updating metadata whenever the status of a file changes between premigrated and migrated, such that the requested data can be obtained directly from the location of the data rather than by recalling the data, as disclosed by Balasubramanian (Para. 19). The metadata may include the information needed to access the data, such as the locations, permissions, and whether data is premigrated/migrated/resident (Para. 39). One of ordinary skill in the art would have been motivated to make this modification because using the updated metadata provides a faster migration process, as suggested by Balasubramanian (Para. 12).
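For illustration of the mechanism relied upon in the rejection of claim 1 (per-block migration status recorded in a metadata representation, with access requests routed to the source system for unmigrated blocks and to the target system for migrated blocks), a minimal sketch follows. All class and variable names are hypothetical; this is an illustrative sketch of the combined teaching, not language from either reference or from the claims.

```python
# Hypothetical sketch: a volume's metadata maps each block to a migration
# status; reads are serviced from the source storage system while a block is
# unmigrated and from the target storage system once it has been migrated.

UNMIGRATED, MIGRATED = "unmigrated", "migrated"

class VolumeMetadata:
    """Per-block migration status for a volume under migration."""
    def __init__(self, num_blocks):
        self.status = {b: UNMIGRATED for b in range(num_blocks)}

    def mark_migrated(self, block):
        # Update the metadata representation as migration progresses.
        self.status[block] = MIGRATED

class TargetController:
    """Target-side controller that services access requests during migration."""
    def __init__(self, source, target, metadata):
        self.source, self.target, self.metadata = source, target, metadata

    def migrate_block(self, block):
        # Copy the block to the target, then update the metadata.
        self.target[block] = self.source[block]
        self.metadata.mark_migrated(block)

    def read(self, block):
        # Route the request according to the recorded migration status.
        if self.metadata.status[block] == MIGRATED:
            return self.target[block]
        return self.source[block]

source = {0: "a", 1: "b"}
target = {}
ctrl = TargetController(source, target, VolumeMetadata(2))
before = ctrl.read(0)   # serviced from the source system (block unmigrated)
ctrl.migrate_block(0)
after = ctrl.read(0)    # serviced from the target system (block migrated)
```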
As to claim 14, Butterworth discloses an apparatus comprising: a memory; and a processing device, operatively coupled to the memory (Para. 44-45), configured to: initiate, by a storage system controller of a target storage system, a migration of a dataset from a source storage system to the target storage system by mapping a volume in the target storage system to the dataset stored in the source storage system using a metadata representation that identifies data objects in the source storage system (Fig. 8-9, Para. 79, the progress of data migration may further be recorded so as to learn which data blocks in source storage system 410 have been migrated, which ones are being migrated and which ones have not been migrated. Specifically, in one embodiment of the present invention, the migrating data blocks in the source storage system to the target storage system via the virtual file system, i.e., a metadata representation, comprises: with respect to data blocks in the source storage system, on the basis of the progress of copying the data blocks from the source storage system to the target storage system, setting metadata that describes migration status of the data blocks, the metadata comprising at least one of "unmigrated," "under migration" and "migrated". Para. 61, “a virtual file system for reading data blocks in the source storage system may be built. Specifically, in this embodiment, virtual file system 526 is built in a target storage system 520 to directly read data blocks from source storage system 410, rather than data being delivered via a third-party migration controller.”. Para. 69, “In the virtual file system, each file/folder has it unique virtual path. The virtual file system achieves a mapping relationship from actual storage locations of files/folders to virtual paths, so that the target storage system may read data blocks in the source storage system.”. 
Thus, initiate, by a storage system controller of a target storage system, a migration of a dataset from a source storage system to the target storage system by mapping a volume in the target storage system to the dataset stored in the source storage system using a metadata representation that identifies data objects in the source storage system.).
Butterworth does not explicitly disclose during migration of the dataset from the source storage system to the target storage system, update, by the storage system controller of the target storage system, the metadata representation of the volume such that access requests for portions of the dataset that are unmigrated are serviced from the source storage system and access requests for portions of the dataset that are migrated are serviced from the target storage system.
However, in the same field of endeavor, Balasubramanian discloses during migration of the dataset from the source storage system to the target storage system, update, by the storage system controller of the target storage system, the metadata representation of the volume such that access requests for portions of the dataset that are unmigrated are serviced from the source storage system and access requests for portions of the dataset that are migrated are serviced from the target storage system (Fig. 3, Para. 49, “The metadata will be updated whenever the status of the file changes between premigrated, migrated, migrated with stubs, or resident.”. Para. 13, “managing the metadata of storage systems during data migration. Data may go through three stages in the course of migration: premigrated (e.g., when the data is on the source disk), migrated (e.g., when the data is on tape media), and resident (e.g., when the data is on the destination disk). Reading data from a source disk to a destination disk may include reading a full file of data from the source disk for purposes of writing the file of data to the destination disk, and then reading the requested data from the destination disk.”. Para. 39, “The metadata may include the information needed to access the data, such as the locations, permissions, whether data is premigrated/migrated/resident, etc. Changing the metadata may include changing the status of the data to reflect the location of the file. For example, where the file was migrated from the source disk to the tape media, the metadata may be changed by updating the status of the file to "migrated" or "migrated with stubs." The metadata is changed in response to the portion of data being migrated.”. Para. 41, “The migration controller may determine the location of the file by referencing the metadata on the file. The metadata on the file may be on either the source disk or the destination disk.”. 
Thus, during migration of the dataset from the source storage system to the target storage system, update, by the storage system controller of the target storage system, the metadata representation of the volume such that access requests for portions of the dataset that are unmigrated are serviced from the source storage system and access requests for portions of the dataset that are migrated are serviced from the target storage system.).
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Butterworth by updating metadata whenever the status of the file changes between premigrated and migrated, such that the requested data can be obtained directly from the location of the data rather than recalling the data, as disclosed by Balasubramanian (Para. 19). The metadata may include the information needed to access the data, such as the locations, permissions, and whether data is premigrated/migrated/resident (Para. 39). One of ordinary skill in the art would have been motivated to make this modification because using the updated metadata provides a faster migration process, as suggested by Balasubramanian (Para. 12).
As to claim 20, Butterworth discloses a non-transitory computer readable storage medium storing instructions, which when executed (Para. 44-45), cause a processing device to: initiate, by a storage system controller of a target storage system, a migration of a dataset from a source storage system to the target storage system by mapping a volume in the target storage system to the dataset stored in the source storage system using a metadata representation that identifies data objects in the source storage system (Fig. 8-9, Para. 79, the progress of data migration may further be recorded so as to learn which data blocks in source storage system 410 have been migrated, which ones are being migrated and which ones have not been migrated. Specifically, in one embodiment of the present invention, the migrating data blocks in the source storage system to the target storage system via the virtual file system, i.e., a metadata representation, comprises: with respect to data blocks in the source storage system, on the basis of the progress of copying the data blocks from the source storage system to the target storage system, setting metadata that describes migration status of the data blocks, the metadata comprising at least one of "unmigrated," "under migration" and "migrated". Para. 61, “a virtual file system for reading data blocks in the source storage system may be built. Specifically, in this embodiment, virtual file system 526 is built in a target storage system 520 to directly read data blocks from source storage system 410, rather than data being delivered via a third-party migration controller.”. Para. 69, “In the virtual file system, each file/folder has it unique virtual path. The virtual file system achieves a mapping relationship from actual storage locations of files/folders to virtual paths, so that the target storage system may read data blocks in the source storage system.”. 
Thus, initiate, by a storage system controller of a target storage system, a migration of a dataset from a source storage system to the target storage system by mapping a volume in the target storage system to the dataset stored in the source storage system using a metadata representation that identifies data objects in the source storage system.).
Butterworth does not explicitly disclose during migration of the dataset from the source storage system to the target storage system, update, by the storage system controller of the target storage system, the metadata representation of the volume such that access requests for portions of the dataset that are unmigrated are serviced from the source storage system and access requests for portions of the dataset that are migrated are serviced from the target storage system.
However, in the same field of endeavor, Balasubramanian discloses during migration of the dataset from the source storage system to the target storage system, update, by the storage system controller of the target storage system, the metadata representation of the volume such that access requests for portions of the dataset that are unmigrated are serviced from the source storage system and access requests for portions of the dataset that are migrated are serviced from the target storage system (Fig. 3, Para. 49, “The metadata will be updated whenever the status of the file changes between premigrated, migrated, migrated with stubs, or resident.”. Para. 13, “managing the metadata of storage systems during data migration. Data may go through three stages in the course of migration: premigrated (e.g., when the data is on the source disk), migrated (e.g., when the data is on tape media), and resident (e.g., when the data is on the destination disk). Reading data from a source disk to a destination disk may include reading a full file of data from the source disk for purposes of writing the file of data to the destination disk, and then reading the requested data from the destination disk.”. Para. 39, “The metadata may include the information needed to access the data, such as the locations, permissions, whether data is premigrated/migrated/resident, etc. Changing the metadata may include changing the status of the data to reflect the location of the file. For example, where the file was migrated from the source disk to the tape media, the metadata may be changed by updating the status of the file to "migrated" or "migrated with stubs." The metadata is changed in response to the portion of data being migrated.”. Para. 41, “The migration controller may determine the location of the file by referencing the metadata on the file. The metadata on the file may be on either the source disk or the destination disk.”. 
Thus, during migration of the dataset from the source storage system to the target storage system, update, by the storage system controller of the target storage system, the metadata representation of the volume such that access requests for portions of the dataset that are unmigrated are serviced from the source storage system and access requests for portions of the dataset that are migrated are serviced from the target storage system.).
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Butterworth by updating metadata whenever the status of the file changes between premigrated and migrated, such that the requested data can be obtained directly from the location of the data rather than recalling the data, as disclosed by Balasubramanian (Para. 19). The metadata may include the information needed to access the data, such as the locations, permissions, and whether data is premigrated/migrated/resident (Para. 39). One of ordinary skill in the art would have been motivated to make this modification because using the updated metadata provides a faster migration process, as suggested by Balasubramanian (Para. 12).
As to claim 21, the claim is rejected for the same reasons as claim 1 above. In addition, Butterworth discloses wherein the metadata representation for the volume refers to unmigrated data blocks of the dataset in the source storage system and to migrated data blocks of the dataset in the target storage system (Fig. 8-9, Para. 79, “the progress of data migration may further be recorded so as to learn which data blocks in source storage system 410 have been migrated, which ones are being migrated and which ones have not been migrated. Specifically, in one embodiment of the present invention, the migrating data blocks in the source storage system to the target storage system via the virtual file system comprises: with respect to data blocks in the source storage system, on the basis of the progress of copying the data blocks from the source storage system to the target storage system, setting metadata that describes migration status of the data blocks, the metadata comprising at least one of "unmigrated," "under migration" and "migrated."”. Thus, the metadata representation for the volume refers to unmigrated data blocks of the dataset in the source storage system and to migrated data blocks of the dataset in the target storage system.).
8. Claims 4-5, 9, 11, 13, 16-17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Butterworth and Balasubramanian as applied above, and further in view of ZHAO et al. (previously presented) (US 2016/0269488 A1) hereinafter ZHAO.
As to claims 4 and 16, the claims are rejected for the same reasons as claims 1 and 14 above. The combination of Butterworth and Balasubramanian does not explicitly disclose wherein data services are provided before any portion of the dataset is copied from the source storage system to the target storage system.
However, in the same field of endeavor, ZHAO discloses wherein data services are provided before any portion of the dataset is copied from the source storage system to the target storage system (Para. 61, The target device 104 can be an offsite device, such as a device that is included in, or associated with, an external computing environment. For example, the target device 104 can be maintained by a commercial data source that is configured to provide various services and/or applications, i.e. data services are provided, by application of a cloud computing environment. Para. 64, When a decision is made to transfer data from the source device 102 to the target device 104, a link 112 is established between the source device 102 and the target device 104. For example, an operator (e.g., information technology personnel or someone with the proper authority) of the enterprise can contract with a third party entity, wherein the third party entity provides the cloud computing services, i.e. data services. Therefore, the target device such as the target storage system provides data services including read/write access for the dataset before completing migration of the dataset from the source storage system to the target storage system. Thus, the data services are provided before any portion of the dataset is copied from the source storage system to the target storage system.).
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of ZHAO into the combined method of Butterworth and Balasubramanian by providing the cloud computing services for transferring data from the source device to the target device, as suggested by ZHAO (Para. 64). The target device may function as a client of the source device during the process of migration. The target device may keep its services running during migration and, therefore, requests from at least one client-side device may be processed (ZHAO, Para. 72). One of ordinary skill in the art would have been motivated to make this modification in order to provide a transparent and responsive data migration experience by utilizing program data that include data transfer status and location information of the data segment, as suggested by ZHAO (Para. 150).
As to claims 5 and 17, the claims are rejected for the same reasons as claims 1 and 14 above. In addition, ZHAO discloses wherein data services are provided during migration that include one or more features including at least one of snapshotting, cloning, data reduction, virtual copy, and replication (Para. 61, The target device 104 can be an offsite device, such as a device that is included in, or associated with, an external computing environment. For example, the target device 104 can be maintained by a commercial data source that is configured to provide various services and/or applications, i.e. providing, by the storage controller on the target storage system, data services, by application of a cloud computing environment. Para. 27, “a cloud service can be replicated without shutdown and, thus, during the replication, the user will not notice the change of server. The disclosed aspects provide a one-time migration and copies data from the old system (e.g., source) to the new system (e.g., target).”. Thus, data services are provided during migration that include one or more features including at least one of snapshotting, cloning, data reduction, virtual copy, and replication.).
As to claims 9 and 19, the claims are rejected for the same reasons as claims 1 and 14 above. In addition, ZHAO discloses further comprising: receiving, by the target storage system from a host, a request directed at least in part to an unmigrated portion of the dataset (Para. 86, Through the session 416 established with the client device 402 and the link 420 with the host service 408, the target device 406 may function as an intermediary for the client device 402 and the host service 408. Thus, the target device 406 may receive requests from the client device 402, i.e. a host, and may process such requests, while data is being migrated from the source device 404. Para. 72, during the process of migration, the target device 304 may function as a client of the source device 302. The target device 304 may keep its services running during migration and, therefore, requests from at least one client-side device 306 may be processed. For example, if a response to the request includes data not yet migrated, i.e. an unmigrated portion of the dataset, to the target device 304, at least a portion of the requested data may be obtained from the source device 302.); and
servicing, by the storage controller on the target storage system, the request (Para. 26, “During the process of migration, source machines may function as background devices of target machines. Further, during the data migration process, the target machines may function as clients of the source machines. For example, the target machines may keep their services running during migration and, if requested, data may be obtained from the source machines to service the client request.”. Para. 72, “during the process of migration, the target device 304 may function as a client of the source device 302. The target device 304 may keep its services running during migration and, therefore, requests from at least one client-side device 306 may be processed. For example, if a response to the request includes data not yet migrated to the target device 304, at least a portion of the requested data may be obtained from the source device 302.”. Thus, the request being serviced by the target storage system.).
As to claim 11, the claim is rejected for the same reasons as claim 9 above. In addition, ZHAO discloses wherein an update to the dataset is not propagated to the source storage system (Para. 27, “The disclosed aspects provide a one-time migration and copies data from the old system (e.g., source) to the new system (e.g., target). In such a manner, the disclosed aspects operate similar to a "do not migrate" process, however, there is no synchronization back to the old system with the aspects disclosed herein.”. Thus, an update to the dataset is not propagated to the source storage system.).
As to claim 13, the claim is rejected for the same reasons as claim 1 above. In addition, ZHAO discloses wherein the target storage system and the source storage system are collocated (Para. 60, “FIG. 1 illustrates a system 100 for data migration from a source device 102 to a target device 104 according to an example conventional system. The source device is the device from which data is to be transferred and the target device is the device to which the data is transferred.”.), and wherein one of the target storage system and the source storage system is an on-premises storage system that implements a cloud infrastructure (Para. 61, “The target device 104 can be an offsite device, such as a device that is included in, or associated with, an external computing environment. For example, the target device 104 can be maintained by a commercial data source that is configured to provide various services and/or applications by application of a cloud computing environment. Thus, the target device 104 can be controlled and maintained by a third party, wherein the data is maintained and provided in a secured configuration. As illustrated, the target device 104 can be included, at least partially in a cloud computing environment 108.”. Para. 78, “the source device 404 may contain at least some of the data, applications, and/or services that are used by the client device 402. The source device 404 may be located on-site or within the premises or control of an enterprise (e.g., company, network, and so on).”. Para. 104, “The source device may be, for example a server of an enterprise (e.g., a company). The target device may be located offsite, or "in the cloud".”. Thus, one of the target storage system and the source storage system is an on-premises storage system that implements a cloud infrastructure.).
9. Claims 3, 6-7, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Butterworth and Balasubramanian as applied above, and further in view of Hardy et al. (previously presented) (US 2020/0073955 A1) hereinafter Hardy.
As to claim 3, the claim is rejected for the same reasons as claim 1 above. Butterworth and Balasubramanian do not explicitly disclose wherein the volume is created in response to a request to migrate the dataset from the source storage system to the target storage system.
However, in the same field of endeavor, Hardy discloses wherein the volume is created in response to a request to migrate the dataset from the source storage system to the target storage system (Para. 31, “identifying a request to migrate data associated with a volume from a source storage pool to a destination storage pool. Additionally, the method includes allocating one or more rank extents within the destination storage pool. Further, the method includes populating empty volume extents of the volume with corresponding offset locations within the allocated one or more rank extents within the destination storage pool. Also, the method includes transferring the data associated with the volume from one or more rank extents within the source storage pool to one or more offset locations within the allocated one or more rank extents of the destination storage pool.”. Para. 58, “FIG. 5, method 500 may initiate with operation 502, where a request to migrate data associated with a volume from a source storage pool to a destination storage pool is identified. In one embodiment, the volume includes a storage volume that organizes and presents a logical representation of the data in a contiguous manner to one or more hosts.”. Thus, the volume is created in response to a request to migrate the dataset from the source storage system to the target storage system.).
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Hardy into the combined method of Butterworth and Balasubramanian by mapping a volume in the destination storage system to the dataset in the source storage system for migrating the dataset of Butterworth in response to a request, as suggested by Hardy (Para. 84). Data is migrated from one or more ranks of the source storage pool to one or more ranks of the destination storage pool, according to the correspondence between the logical volume extents of the volume and the physical offset locations within the rank extents of the destination storage pool. One of ordinary skill in the art would have been motivated to make this modification in order to reduce an amount of time and resources of one or more systems performing the data migration, which improves a performance of the one or more systems, as suggested by Hardy (Para. 27).
As to claims 6 and 18, the claims are rejected for the same reasons as claims 1 and 14 above. In addition, Hardy discloses further comprising: migrating a portion of the dataset from the source storage system to the target storage system (Para. 26, “the method includes migrating data from one or more ranks of the source storage pool to one or more ranks of the destination storage pool, according to the correspondence between the logical volume extents of the volume and the physical offset locations within the rank extents of the destination storage pool.”.); and updating a mapping of the target storage system to the dataset to point to a location of the migrated portion in the target storage system (Para. 76, the previously allocated volume extent is updated to identify the offset location, i.e., updating a mapping, within the allocated rank extent of the destination storage pool where the data associated with the previously allocated volume extent was migrated. In yet another example, the stored data is migrated from the rank extent in the source storage pool to an offset location within an allocated rank extent of the destination storage pool.).
As to claim 7, the claim is rejected for the same reasons as claim 6 above. In addition, ZHAO discloses wherein the dataset is copied from the source storage system to the target storage system without participation by a host (Para. 147, during the process of migration, source machines may function as background devices of target machines. Further, during the data migration process, the target machines may function as clients of the source machines. For example, the target machines may keep their services running during migration and, if necessary, data can be obtained from the source machines to service a client request. Therefore, client devices, i.e., a host, are not aware that data migration is occurring or that data migration has occurred. Thus, the dataset is copied from the source storage system to the target storage system without participation by a host.).
10. Claims 8, 10, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Butterworth, Balasubramanian, and Hardy as applied above, and further in view of Murali et al. (previously presented) (US 9,582,524 B1) hereinafter Murali.
As to claim 8, the claim is rejected for the same reasons as claim 6 above. The combination of Butterworth, Balasubramanian, and Hardy does not explicitly disclose wherein the dataset is encrypted, and wherein the target storage system includes one or more encryption keys for reading the dataset.
However, in the same field of endeavor, Murali discloses wherein the dataset is encrypted, and wherein the target storage system includes one or more encryption keys for reading the dataset (Col. 1 line 54-62, “the data may be data that is stored in an encrypted form, for example through use of a Hardware Security Module (HSM). Such embodiments may enable a rotation (e.g., change) of an encryption key or algorithm such that the original data in Table A is encrypted using a first encryption key, a first set of encryption keys, and/or a first encryption algorithm, and the migrated data in Table B is encrypted using a second encryption key, second set of encryption keys, and/or second encryption algorithm.”. Thus, the dataset is encrypted, and wherein the target storage system includes one or more encryption keys for reading the dataset.).
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Murali into the combined method of Butterworth, Balasubramanian, and Hardy by including the encryption key for the dataset of ZHAO for reading the data using the provided encryption key, as disclosed by Murali (Col. 1 line 55-62). The data services of Murali may be used in the environment of ZHAO in order to migrate data safely from the source system to the target system in encrypted form. One of ordinary skill in the art would have been motivated to make this modification in order to protect sensitive data, such as financial transaction data, by providing the data in encrypted form, as suggested by Murali (Col. 1 line 55-67; Col. 2 line 1-7).
As to claim 10, the claim is rejected for the same reasons as claim 9 above. In addition, Murali discloses wherein an update to the dataset is propagated to the source storage system (Col. 10 line 24-31, “After migration of the first data portion is completed, at 414 one or more other indices (e.g., other than the primary key index) may be created for the second table, in one or more regions. At 416 replication between regions may be enabled for the second table, such that changes (e.g., row inserts, deletes, and/or updates) may be propagated in the corresponding second table in one or more other regions for which replication is enabled.”. Col. 10 line 35-44, “the status table may be updated to indicate that one or more data writing processes are to write to both the first and second tables, and that one or more data reading processes are to read from the first table (but not the second table). By having writing processes write to both the first and second table, embodiments may ensure that the first table continues to store up-to-date data in case the migration fails and the system is to be rolled back (e.g., revert to using the original, unmigrated first table).”. Thus, an update to the dataset is propagated to the source storage system.).
As to claim 12, the claim is rejected for the same reasons as claim 1 above. In addition, Murali discloses further comprising: replicating migrated portions of the dataset to a cloud-based storage system (Col. 2 line 55-58, “migration may include an infrastructure transformation such as migrating data stored on a local database to storage on a cloud service.”. Col. 6 line 56-63, “cloud service 222 may host one or more of data migration server device(s) 204, data replication server device(s) 206, and/or data warehouses 208, 210, and 212. In such cases, the data migration and/or data replication services described herein may be provided to processes and/or users as a service in the cloud, via an Application Programming Interface (API) 224 or other intermediary software or hardware.”. Thus, replicating migrated portions of the dataset to a cloud-based storage system.).
Response to Arguments
11. Applicant’s arguments filed on 01/07/2026, with respect to claims 1, 3-14, and 16-21, have been considered but are moot because of the new ground of rejection necessitated by the amendment to the claims. For Examiner's response, see discussion below:
Applicant's arguments, see pages 8-11, with respect to the rejections of claims 1, 3-14 and 16-21 under 35 USC §103 have been considered but are moot in view of the new ground(s) of rejection necessitated by applicant's amendments as set forth in the respective rejections of claims 1, 3-14 and 16-21 under 35 USC §103 above in view of the newly found reference.
Conclusion
12. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Chakravarty et al. (US 2007/0208788 A1) teaches a migration engine that moves one or more of the data blocks between the first tier storage device and the second tier storage device based on a migration parameter of the data block.
13. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD SOLAIMAN BHUYAN whose telephone number is (571)272-7843. The examiner can normally be reached on Monday - Friday 9:00am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Charles Rones, can be reached on 571-272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMAD S BHUYAN/Examiner, Art Unit 2168 /CHARLES RONES/Supervisory Patent Examiner, Art Unit 2168