DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to the Amendment filed on 6/26/2025, wherein claims 1-20 are pending.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kavaipatti Anantharamakrishnan et al. (US PGPUB 2022/0092024) (hereafter as Anantharamakrishnan), in view of Unknown Author (“Dell EMC PowerMax eNAS File Auto Recovery with SRDF/S”, dl.dell.com/content/docu88921, Feb 2020) (hereafter as “eNAS Recovery”), in view of Chen (US PGPUB 2019/0034422), further in view of Antony et al. (US PGPUB 2017/0242764).
As for claim 1, Anantharamakrishnan teaches a method, comprising:
Migrating a source storage virtual machine (Vserver) [Fig 1A – Hypervisor 103B hosting storage proxy/storage service software] of a source cluster [rack/data center/site] to a destination cluster [rack/data center/site] of a networked storage environment (paragraph 102, “redundancy ….at data center level…during ….site failures……storage proxies 106 install as a high availability active/passive pair…if one storage proxy 106 instance is lost or interrupted, operations failover seamlessly to the passive instance to maintain availability…” teaching migration of the functionality of a storage proxy (i.e., a storage virtual machine hosted inside a hypervisor) to another copy of the storage proxy, where failover may be performed at any granularity, including the rack level or the data center level, at the failover rack/data center, etc.; either granularity can be understood as the claimed cluster),
wherein migrating the source Vserver further includes maintaining, by the processor, a state of a migrate operation [state] for migrating a plurality of source storage volumes [virtual disk being migrated] managed by the source Vserver [storage proxy/storage service software] of a source cluster [source storage cluster] to a plurality of destination storage volumes of the destination cluster [destination storage cluster] of a networked storage environment (paragraph 56, “…data migration…virtual disk being migrated….state information from all nodes involved in the migration, i.e., storage container nodes and replicas. Data migration to the destination storage cluster…node…updating its current state …as the migration progresses…decides the outcome of the migration by reviewing the state…” teaches that a state view containing migration-related states is stored and used as the basis for determining the status of the migration operation for a data volume between a source storage cluster and a destination storage cluster. Paragraph 65, “storage cluster 110…storage proxy …and storage service software …are packaged and deployed as VMs on a compute host …with a hypervisor …installed…” and paragraph 72, “…storage proxy…implemented as a virtual machine….software container…provide storage access to any physical host or VM…” and paragraphs 36-37, “…storage service…deployed to …hosted clouds, and/or to public cloud computing environments….comprise both computing and storages that collectively provide storage service…” teach that the storage proxy and/or storage service can be hosted in virtual machines, which, individually or in combination, can be understood as providing the claimed management function of the Vserver that manages the data at the source cluster);
retrying a task associated with the migration operation, and upon successful execution, continuing the migration operation (paragraph 59, “…in case of recoverable errors/failures, replica nodes perform smart retries…” teaches retrying a task at the replica node for migration tasks performed at the replica node. While the prior art does not explicitly teach continuing the migration operation upon successful execution, it teaches both retrying failed operations when recoverable and, when there are irrecoverable failures, aborting (stopping) the migration operation. Thus, it would be obvious to a person of ordinary skill in the art before the effective filing date of the application to recognize that the system would continue operation subsequent to a successful retry of a task related to the migration because doing so allows for efficient handling of recoverable errors/failures to reduce migration failures).
checking, by the processor, the state of the migrate operation and in response to the state of the migrate operation, continuing the migrate operation or restarting the migration operation (paragraph 57, “….gets the latest state information from all nodes involved in the migration…updating its current state…as the migration progresses…in case of recoverable errors/failures, replica node perform smart retries…” Here, the Examiner notes that smart retries are understood as a form of restarting the migration operation. In addition, the Examiner notes that when there are no errors/failures, it would be obvious to a person of ordinary skill in the art before the effective filing date of the application to recognize the system would not perform smart retries because doing so allows for efficient handling of errors/failures only when errors/failures occur. In addition, see Fig. 12, showing that when there is no failure (i.e., success), the system goes to the end without retrying or aborting the migration operation).
Anantharamakrishnan teaches failover using a duplicate storage proxy/storage service. Thus, it would have been obvious to a person of ordinary skill in the art that the duplicate storage proxy/storage service failed over to would have applied the configuration of the source Vserver to a destination Vserver. However, in the interest of compact prosecution, the Examiner notes Anantharamakrishnan does not explicitly teach that migrating the source Vserver includes applying configuration of the source Vserver to a destination Vserver of a destination cluster.
However, eNAS Recovery teaches a known method of Vserver [VDM] migration from a source cluster [source eNAS system] to a destination cluster [destination eNAS system] (Pg. 16, “…failover or move a Virtual Data Mover (VDM) from a source eNAS system to a destination eNAS system…”), including that migrating the source Vserver includes applying configuration of the source Vserver to a destination Vserver (Pg. 52, “…Data Mover configurations: DNS, NIS, NTP, local passwd and group, usermapper client FTP/SFTP, LADP, HTTP, CEPP, CAVA, Server Parameters, Netgroup, NSSwitch , Hosts…Use ‘migrate_system_conf’ to migrate those configurations that are needed…”). This known technique is applicable to the system of Anantharamakrishnan as they share characteristics and capabilities, namely, they are directed to storage proxy/management server migrations for failover.
One of ordinary skill in the art before the effective filing date of the application would have recognized that applying the known technique of eNAS Recovery would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of eNAS Recovery to the teachings of Anantharamakrishnan would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such storage proxy/management server migration management features into similar systems. Further, applying to Anantharamakrishnan's migration of a source Vserver to a destination Vserver the teaching that migrating the source Vserver includes applying configuration of the source Vserver to a destination Vserver would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow improved failover handling of the storage servers themselves (eNAS Recovery, Pg. 16, “…the failover or move leverages….invokes zero data loss in the event of an unplanned operation…”).
While Anantharamakrishnan teaches smart retrying of a task associated with the migration operation and continuing the migration operation, Anantharamakrishnan and eNAS Recovery do not explicitly teach retrying, by the processor, a task associated with the migrate operation experiencing intermittent failure for a certain number of times.
However, Chen teaches a known method of data migration management including retrying, by the processor, a task associated with the migrate operation experiencing intermittent failure for a certain number of times [retry threshold], and upon successful execution, continuing the migration operation (paragraph 68, “…if the migration fails…step 120 is performed again or repeatedly, until the number of retries reaches a retry threshold or the migration succeeds in the retry threshold.”). This known technique is applicable to the system of Anantharamakrishnan and eNAS Recovery as they share characteristics and capabilities, namely, they are directed to data migration management with retry of failed migration operations.
One of ordinary skill in the art before the effective filing date of the application would have recognized that applying the known technique of Chen would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Chen to the teachings of Anantharamakrishnan and eNAS Recovery would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such data migration management features into similar systems. Further, applying retrying of failed migration tasks for a certain number of times to Anantharamakrishnan and eNAS Recovery, with their smart retrying of failed migration tasks, would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow more efficient handling and examination of failed migration tasks within a definable time frame and a more user-customizable retry process (Chen, paragraphs 71 and 117).
While Anantharamakrishnan, eNAS Recovery, and Chen each teach the ability to retry/restart a migration process at the source cluster or the destination cluster executing the process, Anantharamakrishnan, eNAS Recovery, and Chen do not explicitly teach restarting the process at a healthy node in response to detecting an unhealthy node.
However, Antony teaches a known method of a VM-hosted task/workload with associated data in datastores (see, e.g., paragraph 29) including restarting, by the processor, a process at a healthy node [healthy host/destination host] of the source cluster or the destination cluster to continue the operation, in response to detecting an unhealthy node at the source cluster or the destination cluster executing the process (paragraph 60, “…determined that the network partition …restarting VMs … on a healthy host within the cluster…VMs 214 and 216 are terminated, and …the terminated VMs are restarted on the selected destination host…”). This known technique is applicable to the system of Anantharamakrishnan, eNAS Recovery, and Chen as they share characteristics and capabilities, namely, they are directed to virtual machine based workload failover handling in a cluster.
One of ordinary skill in the art before the effective filing date of the application would have recognized that applying the known technique of Antony would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Antony to the teachings of Anantharamakrishnan, eNAS Recovery, and Chen would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such VM-hosted workload failover features into similar systems. Further, applying restarting a process at a healthy node to continue the process in response to detecting an unhealthy node within the cluster to Anantharamakrishnan, eNAS Recovery, and Chen, with their restarting of migration tasks running inside a VM on a node within the source cluster or the destination cluster, would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow improved workload execution availability in response to failure detection (Antony, paragraphs 3-5).
As for claim 2, Anantharamakrishnan also teaches determining, by the processor, an inter-cluster failure between the source cluster and the destination cluster occurring while the migrate operation is at a point of no return (PONR) (paragraph 144, “…network failure that prevents further data transfer…metadata is not successfully received at the destination….cause the present migration to abort at the block 2516 ….picked up in a later migration operation….” Here, an inter-cluster failure is understood as a network failure that prevents data transfer from the source cluster to the destination cluster. Under the BRI, the PONR is understood as any state that requires an abort of the current migration process, and not merely a retry.); and
restarting, by the processor, the source Vserver at the source cluster and the migrate operation (paragraph 144, “…network failure that prevents further data transfer…metadata is not successfully received at the destination….cause the present migration to abort at the block 2516 ….picked up in a later migration operation….” Aborting the present migration operation and picking it up (i.e., performing it) in a later migration operation is understood as a form of restarting the process).
As for claim 3, Anantharamakrishnan also teaches the process is an orchestrator thread executed at the destination cluster to manage a plurality of phases of the migrate operation (paragraph 132 and Fig. 6, “…destination…kernel-to-kernel logic 551…operating in the OS 151 of the data service node at the destination storage cluster that hosts the receiving data storage subsystem 150…barrier logic 432 allows metadata migration operations….” in view of paragraph 55, “kernel-to kernel copies of payload data between source and destination storage nodes… “ teaches that the destination contains multiple processes that execute to manage different phases of the migration operation).
As for claim 4, Anantharamakrishnan also teaches undoing, by the processor, any tasks executed during a setup phase of the migrate operation, in response to a failure condition occurring during the setup phase (Fig. 7 – step 2002 “…provision source and destination persistent volumes…” to step 2008 “Vdisk provisioned for migration?” are understood as setup-phase migration operations performed before data migration. In view of paragraph 59, “…recoverable errors/failures, replica nodes perform smart retries…” teaches redoing (i.e., performing a task again to replace a task that had errors or failed), which is applicable to any of the migration tasks. Here, while the prior art does not explicitly recite “undoing any tasks executed…,” it teaches performing retries of migration tasks that have failed. Thus, it would be obvious to a person of ordinary skill in the art before the effective filing date of the application that retrying a task would effectively have the system utilize the retried task in place of the failed task execution; by not using the result of a failed task (i.e., setting up the destination vdisk in Fig. 7 step 2002), the previously failed setup task result, if any, is not utilized, and thus is functionally undone, because doing so allows for performing a migration task correctly following an error or failure of a previous performance of a migration task); and
restarting, by the processor, the migrate operation (paragraph 59, “….smart retry…”).
In addition, Chen also teaches undoing, by the processor, any tasks executed during a setup phase of the migrate operation, in response to a failure condition occurring during the setup phase (paragraphs 50-57 teach various steps performed by the migration module before data migration/transfer, and these are considered the setup phase. In view of paragraphs 68/175, “…if the migration fails, step 120 is performed again…” and “…if it is determined that the migration fails, the module can execute the migration module again…” teach that when migration fails, regardless of the step, the migration process/module executes again. Thus, it would be obvious to a person of ordinary skill in the art before the effective filing date of the application to recognize, by executing step 120 (i.e., initiating the migration task again), that the failed step could be a step before the actual migration, and when the migration module/migration is performed again from the beginning, those steps are constructively “undone,” because doing so allows an attempt to perform a failed task correctly and improves the success rate of data migration tasks); and
restarting, by the processor, the migration operation (paragraph 175, “…if its determined that the migration fails….execute the migration module again….” which implicitly starts from the beginning of the migration module and thus functionally undoes the previous execution).
As for claim 5, Anantharamakrishnan also teaches undoing, by the processor, any tasks executed during a transfer phase and a setup phase of the migrate operation, in response to a failure condition occurring during the transfer phase (Fig. 7 – step 2010 “migrate data from source vDisk to one or more corresponding destination Vdisks…” is understood as the transfer phase of the data migration. In view of paragraph 59, “…recoverable errors/failures, replica nodes perform smart retries…” teaches redoing (i.e., performing a task again to replace a task that had errors or failed), which is applicable to any of the migration tasks. Here, while the prior art does not explicitly recite “undoing any tasks executed…,” it teaches retrying the migration again. Thus, it would be obvious to a person of ordinary skill in the art before the effective filing date of the application that retrying a task would effectively have the system utilize the retried tasks in place of the failed tasks; by not using the results of failed tasks, the previously failed task results, if any, are not utilized, and thus are functionally undone, because doing so allows for performing a migration task correctly following an error or failure of a previous performance of a migration task); and
restarting, by the processor, the migrate operation (paragraph 59, “….smart retry…”).
In addition, Chen also teaches undoing, by the processor, any tasks executed during a transfer phase and a setup phase of the migrate operation, in response to a failure condition occurring during the transfer phase (paragraphs 58-61 teach various steps performed by the migration module for data migration/transfer. In view of paragraphs 68/175, “…if the migration fails, step 120 is performed again…” and “…if it is determined that the migration fails, the module can execute the migration module again…” teach that when migration fails, regardless of the step, the migration process/module executes again. Thus, it would be obvious to a person of ordinary skill in the art before the effective filing date of the application to recognize, by executing step 120 (i.e., initiating the migration task again), that the failed step could be a step before the actual migration, and when the migration module/migration is performed again from the beginning, those failed steps are constructively “undone” because their results are not used, and doing so allows an attempt to perform a failed task correctly and improves the success rate of data migration tasks); and
restarting, by the processor, the migrate operation (paragraph 175, “…if its determined that the migration fails….execute the migration module again….” which implicitly starts from the beginning of the migration module and thus functionally undoes the previous execution).
As for claim 6, Anantharamakrishnan also teaches undoing, by the processor, any tasks executed during a cut-over pre-commit phase, a transfer phase and a setup phase of the migrate operation, in response to a failure condition occurring during the cut-over pre-commit phase (Fig. 7 – step 2012 and paragraph 141, “…successful receipt of these files is reported to the barrier logic…after all the payload data …have been successfully received….migrate the associated metadata from the source…metadata is now written…to the destination owner metadata node…”. Here, under the broadest reasonable interpretation of the claims, the cut-over commit phase is understood as when the migration cycle ends and the migrated data is confirmed and ready for use at the destination, and “pre-commit” is understood as actions taken subsequent to transfer but before actual use at the destination. Here, the metadata operations and the reporting of the receipt of files to the barrier logic can both be understood as tasks that are part of the cut-over pre-commit phase. In view of paragraph 59, “…recoverable errors/failures, replica nodes perform smart retries…” teaches redoing (i.e., performing a task again to replace a task that had errors or failed), which is applicable to any of the migration tasks. Here, while the prior art does not explicitly recite “undoing any tasks executed…,” it teaches retrying the migration again. Thus, it would be obvious to a person of ordinary skill in the art before the effective filing date of the application that retrying a task would effectively have the system utilize the retried tasks in place of the failed tasks; by not using the results of failed tasks, the previously failed task results, if any, are not utilized, and thus are functionally undone, because doing so allows for performing a migration task correctly following an error or failure of a previous performance of a migration task); and
restarting, by the processor, the migrate operation (paragraph 59, “….smart retry…”).
In addition, Chen also teaches undoing, by the processor, any tasks executed during a cut-over pre-commit phase, a transfer phase and a setup phase of the migrate operation, in response to a failure condition occurring during the cut-over pre-commit phase (paragraphs 89-93, “signature of the first system to sign the data…signature authentication is called …the second system to perform signature verification on the data…if signature verification succeeds, …if the signature verification fails…it indicates that the migration fails, and step 120 can be performed again…” teach signature verification as a step subsequent to transfer but before the data at the second/destination system can be used (i.e., cut-over); as the verification step, it therefore pre-commits the use of the second system); and
restarting, by the processor, the migrate operation (paragraph 175, “…if its determined that the migration fails….execute the migration module again….” which implicitly starts from the beginning of the migration module and thus functionally undoes the previous execution).
As for claim 7, Anantharamakrishnan also teaches retrying, by the processor, the task associated with the migrate operation, in response to a network error detected at the source cluster, the destination cluster or both the source and the destination cluster (paragraph 144, “…a network failure that prevents further data transfers…” is understood as relating to the source cluster, the destination cluster, or both, in view of paragraph 56, “….in case of recoverable errors/failures, …perform smart retries…”).
As for claims 8-14, they contain similar limitations as claims 1-7 respectively above. Thus, they are rejected under the same rationales.
As for claims 15-20, they contain similar limitations as claims 1-2, 4-6 respectively above. Thus, they are rejected under the same rationales.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEVIN X LU whose telephone number is (571)270-1233. The examiner can normally be reached M-F 10am-6pm.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lewis Bullock, can be reached at 571-272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KEVIN X LU/Examiner, Art Unit 2199
/JACOB D DASCOMB/Primary Examiner, Art Unit 2199