Prosecution Insights
Last updated: April 19, 2026
Application No. 18/236,969

ADAPTIVE MIGRATION ESTIMATION FOR A GROUP OF VIRTUAL COMPUTING INSTANCES

Non-Final OA: §§ 101, 102, 103
Filed: Aug 23, 2023
Examiner: ROTARU, OCTAVIAN
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: VMware, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 28% (At Risk)
Predicted OA Rounds: 1-2
Estimated Time to Grant: 4y 2m
Grant Probability with Interview: 67%

Examiner Intelligence

Career Allow Rate: 28% (grants only 28% of cases; 116 granted / 409 resolved; -23.6% vs TC average)
Interview Lift: strong, +38.9% for resolved cases with an interview
Typical Timeline: 4y 2m average prosecution; 48 applications currently pending
Career History: 457 total applications across all art units
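The headline percentages above follow directly from the raw counts. A quick check (Python; the Tech Center average is back-computed from the stated delta, assuming it is expressed in percentage points, which is an assumption, not a sourced figure):

```python
# Recompute the dashboard's headline figures from the raw counts above.
granted, resolved = 116, 409

career_allow_rate = granted / resolved * 100
print(f"Career allow rate: {career_allow_rate:.1f}%")  # 28.4%, shown as 28%

# The "-23.6% vs TC avg" delta, read as percentage points, implies:
implied_tc_avg = career_allow_rate + 23.6
print(f"Implied TC 3600 average allow rate: {implied_tc_avg:.1f}%")
```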

Statute-Specific Performance

§101: 39.2% (-0.8% vs TC avg)
§103: 10.9% (-29.1% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§112: 29.9% (-10.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 409 resolved cases.

Office Action

Rejections under §§ 101, 102, and 103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

DETAILED ACTION

The following NON-FINAL Office action is in response to application 18/236,969, filed 08/23/2023.

Status of Claims

Claims 1-20 are currently pending and have been rejected as follows.

Priority

Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea; here, an abstract idea) without significantly more. The claims recite, describe, or set forth computer-aided mental processes as tested per MPEP 2106.04(a)(2) III C. Specifically, as summarized in the preamble of each of independent Claims 1, 8, and 15, the character of the claims as a whole is "predicting durations for virtual computing instance migrations between computing environments". Yet MPEP 2106.04(a)(2) III C #2 is clear that a computer environment upon which a mental process is performed (here, the "receiving" of a request and the ensuing "calculating") still recites the abstract idea.
Thus, recitations such as "virtual computing instances" (Claims 1-6, 8-13, 15-20) would at most represent, along with the associated "traits of virtual machines in the group" (dependent Claims 6, 13, 20), such a computer environment upon which the abstract or computer-aided mental processes of repeated calculation, that is, "calculating" the "initial" and "revised" "estimated migration durations" (Claims 1-6, 8-13, 15-20), are performed.

For example, "receiving a request for a migration duration prediction of a group of virtual computing instances from a source computing environment to a destination computing environment" (independent Claims 1, 8, 15) represents an abstract example of computer-aided observation or notification of a request. Similarly, "calculating initial estimated migration durations for the virtual computing instances of the group based on total available resources and a number of active virtual computing instances being migrated; and calculating revised estimated migration durations for at least one of the virtual computing instances of the group selected for migration based on the total available resources and a number of current active virtual computing instances being migrated when migration of at least one of the virtual computing instances of the group is predicted to complete before other virtual computing instances of the group, wherein the revised estimated migration durations are associated with the migration duration prediction for the group of virtual computing instances from the source computing environment to the destination computer environment" (independent Claims 1, 8, 15) represents, along with "calculating the initial estimated migration durations for the virtual computing instances of the group based on the total available resources divided by the number of virtual computing instances in the group" (dependent Claims 2, 9, 16), "calculating the revised estimated migration durations for at least one of the virtual computing instances of the group selected for migration based on the total available resources divided by the number of current active virtual computing instances being migrated" (dependent Claims 3, 10, 17), "calculating the revised estimated migration durations is iteratively executed as long as a number of the virtual computing instances of the group selected for migration is greater than one" (dependent Claims 4, 11, 18), and "calculating the initial estimated migration durations for the virtual computing instances of the group using traits of the virtual computing instances in the group and system traits of the source and destination computing environments" (dependent Claims 5, 12, 19), cognitive or computer-aided mental processes of evaluation and judgment, using what appear to be equally abstract mathematical relationships expressed in words.

Yet MPEP 2106.04(a)(2) III C #1, #2, and #3 are clear that computer-aided mental processes such as observation, evaluation, and judgment, as previously disclosed by MPEP 2106.04(a)(2) III ¶2, constitute, along with the mathematical relationships expressed in words of MPEP 2106.04(a)(2) I(A), elements integral to the abstract exception. Thus, by such tests, the algorithmic evaluations and judgments of Claims 1-5, 8-12, and 15-19, as identified above, would also set forth the abstract exception. In a similar vein, MPEP 2106.04(a)(2) III C #3 states that using a computer as a tool to perform a mental process also does not preclude the claims from reciting the abstract idea. It would then follow that recitations of "wherein calculating the initial and revised estimated migration durations includes employing machine learning models" (dependent Claims 7, 14) would also represent the use of a tool to perform the abstract calculating, which, as tested per MPEP 2106.04(a)(2) III C #3, would not preclude the claims from reciting, describing, or setting forth the abstract exception.
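For orientation, the claimed calculation that the Office Action characterizes as a computer-aided mental process can be sketched as follows. This is a hypothetical illustration only (the function name, variable names, and units of GB and GB/s are invented for the sketch; the application publishes no source code):

```python
def estimate_group_migration(sizes_gb, total_bandwidth_gb_s):
    """Predict per-VM migration finish times (seconds) for a group.

    Initial estimates split the total available resources across all
    active instances (claims 2/9/16); estimates are revised each time
    one instance is predicted to complete before the others (claims
    1/8/15), iterating while more than one remains (claims 4/11/18).
    """
    remaining = dict(enumerate(sizes_gb))  # VM id -> data left to move
    elapsed = 0.0
    finish_times = {}
    while remaining:
        # Equal share of total resources per active migration.
        share = total_bandwidth_gb_s / len(remaining)
        # The instance predicted to finish first triggers a revision.
        first = min(remaining, key=remaining.get)
        dt = remaining[first] / share
        elapsed += dt
        for vm in remaining:
            remaining[vm] -= share * dt
        finish_times[first] = elapsed
        del remaining[first]
    return finish_times

# Two VMs (2 GB and 4 GB) sharing 1 GB/s: the first finishes at 4 s,
# after which the survivor gets the full bandwidth and finishes at 6 s.
print(estimate_group_migration([2.0, 4.0], 1.0))  # {0: 4.0, 1: 6.0}
```

The sketch shows why the examiner reads the claim as repeated division of total resources by the count of active migrations, revised as that count shrinks.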
Finally, the "instructed one or more processors" of Claims 8 and 15-19 can be argued to be generic computer aids in the performance of the abstract steps above. Yet, as stated by MPEP 2106.04(a)(2) III C #1, performing a mental process on a generic computer does not preclude the claims from reciting, describing, or setting forth the abstract exception. Out of an abundance of caution, the Examiner further scrutinizes the computerized environment at the subsequent steps below. For now, given the preponderance of legal evidence above, the claims recite, or at a minimum describe or set forth, the abstract exception (Step 2A, Prong One).

Step 2A, Prong Two. This judicial exception is not integrated into a practical application because the purported additional, computer-based elements, individually or in combination, are found to merely apply the above abstract idea [MPEP 2106.05(f)] and/or narrow it to a field of use or technological environment [MPEP 2106.05(h)]. Specifically, the level of computerization, even when tested beyond mere computer environment or computer aids, as identified above, and as additional computer-based elements, would still not integrate the abstract idea into a practical application. For example, the "virtual computing instances" (Claims 1-6, 8-13, 15-20) could be argued to represent a technological environment to which the combination of collecting information, analyzing it, and displaying certain results of the collection and analysis is narrowed [MPEP 2106.05(h) vi], which would not integrate the abstract exception into a practical application, as tested per MPEP 2106.05(h). Similarly, MPEP 2106.05(f)(2) states that the use of computer components to perform economic tasks, and tasks to receive, store, and transmit data, represents mere invocation of computers or other machinery as a tool to perform a process, which does not integrate the abstract idea into a practical application.

Here, the "instructed one or more processors" of Claims 8 and 15-19 and the "machine learning models" (dependent Claims 7, 14) can be argued to be computer aids upon which the abstract idea is performed. Even if such computer elements were now tested as additional elements, per MPEP 2106.05(f) they would still represent general-purpose computer components applying an allocation or business method with its underlying mathematical (here "machine learning") algorithm (here "model") [MPEP 2106.05(f)(2)(i)], monitoring audit log data executed on a general-purpose computer [MPEP 2106.05(f)(2) iii], followed by tailoring information and providing it to the user on such a generic computer [MPEP 2106.05(f)(2) v]. Such attempts, as tested per MPEP 2106.05(f), would not integrate the abstract exception into a practical application.

This judicial exception is not integrated into a practical application because, as shown above, the additional computer-based elements merely apply the already recited abstract idea [MPEP 2106.05(f)] and/or narrow the abstract idea to a field of use or technological environment [MPEP 2106.05(h)]. The Examiner follows MPEP 2106.05(d) II and carries over the MPEP 2106.05(f), (h) findings above as sufficient evidence that the additional computer elements also do not provide significantly more, without relying on the conventionality test of MPEP 2106.05(d). Yet, assuming arguendo that additional evidence were required at Step 2B to demonstrate that the above combination of additional elements is also well-understood, routine, and conventional, the Examiner would also point to MPEP 2106.05(d) II, which finds the performing of repetitive calculations to be well-understood, routine, and conventional when computer implemented.
It then follows that the computerized repeated calculations, that is, "calculating" the "initial" and "revised" "estimated migration durations" as preponderantly recited throughout Claims 1-6, 8-13, and 15-20, could also be argued to be conventional. If still necessary, the Examiner would also rely on MPEP 2106.05(d) ¶2 and MPEP 2106.05(d) I.2.A, showing the conventionality of the additional computer-based elements read in light of the Original Specification:

- Original Specification ¶ [0026], 2nd-3rd sentences, reciting at a high level: "One skilled in the relevant art will recognize, in light of the description herein, that the invention can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the invention."

- Original Specification ¶ [0031]: "in Fig. 1, each private cloud computing environment 102 of the cloud system 100 includes one or more host computer systems ('hosts') 110. The hosts may be constructed on a server grade hardware platform 112, such as an x86 architecture platform. As shown, the hardware platform of each host may include conventional components of a computing device, such as one or more processors (e.g., CPUs) 114, system memory 116, a network interface 118, storage system 120, and other I/O devices such as, for example, a mouse and a keyboard (not shown). The processor 114 is configured to execute instructions, for example, executable instructions that perform one or more operations described herein and may be stored in the memory 116 and the storage system 120. The memory 116 is volatile memory used for retrieving programs and processing data. The memory 116 may include, for example, one or more random access memory (RAM) modules. The network interface 118 enables the host 110 to communicate with another device via a communication medium, such as a network 122 within the private cloud computing environment. The network interface 118 may be one or more network adapters, also referred to as a Network Interface Card (NIC). The storage system 120 represents local storage devices (e.g., one or more hard disks, flash memory modules, solid state disks and optical disks) and/or a storage interface that enables the host to communicate with one or more network data storage systems. An example of a storage interface is a host bus adapter (HBA) that couples the host to one or more storage arrays, such as a storage area network (SAN) or a network-attached storage (NAS), as well as other network data storage systems. The storage system 120 is used to store information, such as executable instructions, cryptographic keys, virtual disks, configurations and other data, which can be retrieved by the host."

- Original Specification ¶ [00107], reciting at a high level: "Furthermore, embodiments of at least portions of the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device."

- Original Specification ¶ [00108], reciting at a high level: "The computer-useable or computer-readable medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device), or a propagation medium.
Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disc, and an optical disc. Current examples of optical discs include a compact disc with read only memory (CD-ROM), a compact disc with read/write (CD-R/W), a digital video disc (DVD), and a Blu-ray disc."

- Original Specification ¶ [00109], reciting at a high level: "In the above description, specific details of various embodiments are provided. However, some embodiments may be practiced with less than all of these specific details. In other instances, certain methods, procedures, components, structures, and/or functions are described in no more detail than to enable the various embodiments of the invention, for the sake of brevity and clarity."

- Original Specification ¶ [00110], reciting at a high level: "Although specific embodiments of the invention have been described and illustrated, the invention is not to be limited to the specific forms or arrangements of parts so described and illustrated. The scope of the invention is to be defined by the claims appended hereto and their equivalents."

In conclusion, although Claims 1-20 are directed to statutory categories ("method" or process, "non-transitory" "medium" or article, "system" or machine), they still recite, or at least describe or set forth, the abstract idea (Step 2A, Prong One), with their additional computer elements neither integrating the abstract idea into a practical application (Step 2A, Prong Two) nor providing significantly more than the abstract idea itself (Step 2B). Thus, Claims 1-20 are ineligible.

Claim Rejections - 35 USC § 102

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: "A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention."

Claims 1-6, 8-13, and 15-20 are rejected under 35 U.S.C. 102(a)(1) based upon a public use or sale or other public availability of the invention as disclosed by: Atsuji Sekiguchi, US 2011/0161491 A1 ("Sekiguchi").

Claims 1, 8, 15

Sekiguchi teaches "A computer-implemented method for predicting durations for virtual computing instance migrations between computing environments, the method comprising:" / "A non-transitory computer-readable storage medium containing program instructions for predicting durations for virtual computing instance migrations between computing environments, wherein execution of the program instructions by one or more processors causes the one or more processors to perform steps comprising" / "A system comprising: memory; and one or more processors configured to" (Sekiguchi ¶ [0137]-¶ [0140]);

- "receiving a request for a migration duration prediction of a group of virtual computing instances from a source computing environment to a destination computing environment" (Sekiguchi ¶ [0059], 1st sentence: "comparing section 113 notifies time estimation section 115 of a necessary time estimation request, receives an estimated necessary time report, compares the new deployment plan and the current deployment plan, and determines which deployment plan is better." Similarly, ¶ [0067], 1st and 4th sentences: "Upon receiving a progress report request from time estimation section 115, the overall monitoring section 116 returns a progress report to the time estimation section 115." "In Fig. 8, the progress report includes target VM 801, source PM 802, destination PM 803, start time 804, transfer data 805, transferred data 806, and changed data 807."
¶ [0069]: "When a new deployment plan is executed, the time estimation section 115 estimates, for each VM, a migration time, which is the time required for the VM to migrate from a server on which the VM is currently running to another server. Specifically, upon receiving a necessary time estimation request, the time estimation section 115 notifies the overall monitoring section 116 of the progress report request. Upon receiving the progress report from the overall monitoring section 116, the time estimation section 115 estimates, for each VM, the necessary time on the basis of the amount of transfer data, the transfer throughput, and the amount of changed data of the memory, and notifies the comparing section 113 of an expected necessary time table.")

- "in response to the request, calculating initial estimated migration durations for the virtual computing instances of the group based on total available resources and a number of active virtual computing instances being migrated" (Sekiguchi ¶ [0055]: "planning section 112 refers to the resource status table in the initial state in Fig. 5, makes a deployment plan, and creates a deployment plan table. For example, in the resource status table in the initial state, the total amount of VM capacities of all VMs is 2+2+3+2+3+2+2=16, and the capacity of each PM is 9, so planning section 112 makes a new deployment plan in which the VMs are packed into 2 PMs to reduce power consumption. From among many packing manners, the planning section selects, to make a deployment plan with a shorter deployment time, which is the time for completing all the migrations, a manner of selecting VMs requiring short migration time to decrease the total number of migrations and migrating many VMs in parallel. For example, in Fig. 5, it may be effective, first, to migrate VM 31 to another PM, and next, to migrate VM 11 and VM 12, or VM 41 and VM 42, to another PM."

Sekiguchi ¶ [0069], 2nd-3rd sentences: "Specifically, upon receiving a necessary time estimation request, the time estimation section 115 notifies the overall monitoring section 116 of the progress report request. Upon receiving the progress report from the overall monitoring section 116, the time estimation section 115 estimates, for each VM, the necessary time on the basis of the amount of transfer data, the transfer throughput, and the amount of changed data of the memory, and notifies the comparing section 113 of an expected necessary time table."

¶ [0070], 5th sentence: "Before execution of the entire deployment plan has been completed, time estimation section 115 recalculates the necessary time, when a predetermined trigger occurs, to estimate the finish time of the entire deployment in the case in which the current deployment plan is executed without change."

Sekiguchi ¶ [0072]: "The method for estimating the necessary time on the basis of the amount of transfer data, the transfer throughput, and the amount of changed data of the memory will be specifically discussed. Here, a case will be discussed in which the memory capacity of the destination VM is 2 GB, the memory change ratio is 25%, and the transfer throughput is 1 Gbps. The memory change ratio is a ratio of the amount of data changed during the transfer to the amount of transferred data. Since all 2 GB of memory is transferred at the first transfer, the time estimation section 115 estimates that the 1st transfer time required for the 1st transfer is 2 [GB]/1 [Gbps] = 16 [sec], and estimates that the amount of changed data during the 1st transfer is 2 [GB]×25 [%] = 512 [MB]."

Sekiguchi ¶ [0073]: "Regarding the 2nd transfer, since the amount of changed data during the 1st transfer is 512 [MB], the time estimation section 115 determines that the amount of transfer data at the 2nd transfer is 512 [MB]. The time estimation section 115 estimates that the 2nd transfer time required for the 2nd transfer is 512 [MB]/1 [Gbps] = 4 [sec], and estimates that the amount of changed data during the 2nd transfer is 512 [MB]×25 [%] = 128 [MB]."

Sekiguchi ¶ [0074]: "Regarding the 3rd transfer, since the amount of changed data during the 2nd transfer is 128 [MB], the time estimation section 115 determines that the amount of transfer data at the 3rd transfer is 128 [MB]. The time estimation section 115 estimates that the 3rd transfer time required for the 3rd transfer is 128 [MB]/1 [Gbps] = 1 [sec], and estimates that the amount of changed data during the 3rd transfer is 128 [MB]×25 [%] = 32 [MB]."

Sekiguchi ¶ [0075], 1st, 2nd, and 4th sentences: "Regarding the 4th transfer, since the amount of changed data during the 3rd transfer is 32 [MB], the time estimation section 115 determines that the amount of transfer data at the 4th transfer is 32 [MB]. The time estimation section 115 estimates that the 4th transfer time required for the 4th transfer is 32 [MB]/1 [Gbps] = 0.25 [sec], and estimates that the amount of changed data during the 4th transfer is 32 [MB]×25 [%] = 8 [MB]. In the above example, the necessary time is the sum of the 1st to 4th transfer times, which is 16+4+1+0.25 = 21.25 [sec]."

Sekiguchi ¶ [0076]: "A generalized case of necessary time calculation will be discussed. First, it is assumed that the amount of transfer data is Mr1, the transfer throughput is tp, and the memory change ratio is r at the present moment. In this case, the 1st transfer time t1 is t1 = Mr1/tp. The amount of changed data Mr2 at this time is Mr2 = Mr1×r. At the 2nd transfer, the amount of transfer data is Mr2, and the second transfer time t2 is t2 = Mr2/tp. The amount of changed data Mr3 at this time is Mr3 = Mr2×r = Mr1×r^2."

¶ [0077], 1st-4th sentences: "The i-th transfer time ti is ti = Mri/tp, and the amount of changed data is Mri = Mr(i-1)×r = Mr(i-2)×r×r = … = Mr1×r^(i-1), so that ti is calculated by expression (1). The necessary time T is the sum of the transfer times t1…ti, so that the necessary time T is calculated by expression (2). When n approaches infinity, the necessary time T is calculated by expression (3) below. In this way, by using the necessary time T = initial amount of transfer data / {transfer throughput×(1-memory change ratio)}, the necessary time may be estimated considering repetitive copies."

¶ [0084]: "After updating the resource status table, the migration control apparatus 110 estimates the necessary time when the execution of the current deployment plan is continued. As a result, the migration control apparatus 110 estimates that the migration time of the VM 31 and the VM 42 is 10 minutes and 2 minutes, respectively, and estimates that the execution of the entire deployment plan will be completed in the total time of 12 minutes.") "and"

- "calculating revised estimated migration durations for at least one of the virtual computing instances of the group selected for migration based on the total available resources and a number of current active virtual computing instances being migrated when migration of at least one of the virtual computing instances of the group is predicted to complete before other virtual computing instances of the group" (Sekiguchi ¶ [0013], last sentence: "The plan execution section performs the new migration when it has been determined that the new migration will be completed earlier." ¶ [0058], 1st sentence: "comparing section 113 compares the estimated deployment time and a deployment time in the current deployment plan, and determines which is earlier between a finish time of the deployment according to the new deployment plan and a finish time of the deployment according to the current deployment plan."
¶ [0059], 4th sentence: "The comparing section 113 calculates, for both deployment plans, the finish time of the entire deployment on the basis of the time for completion of the entire deployment, and determines that the deployment plan whose finish time of the entire deployment is earlier is better." ¶ [0061], 1st sentence: "When the finish time of the deployment according to the new deployment plan is earlier than the finish time of the deployment according to the current deployment plan, the plan execution section 114 executes the new deployment plan."),

"wherein the revised estimated migration durations are associated with the migration duration prediction for the group of virtual computing instances from the source computing environment to the destination computer environment" (Sekiguchi ¶ [0055]: "planning section 112 refers to the resource status table in the initial state in Fig. 5, makes a deployment plan, and creates a deployment plan table. For example, in the resource status table in the initial state, the total amount of VM capacities of all VMs is 2+2+3+2+3+2+2=16, and the capacity of each PM is 9, so planning section 112 makes a new deployment plan in which the VMs are packed into two PMs to reduce power consumption. From among many packing manners, the planning section selects, to make a deployment plan with a shorter deployment time, which is the time for completing all the migrations, a manner of selecting VMs requiring short migration time to decrease the total number of migrations and migrating many VMs in parallel. For example, in the example of Fig. 5, it may be effective, first, to migrate VM 31 to another PM, and next, to migrate VM 11 and VM 12, or VM 41 and VM 42, to another PM."

¶ [0056]: "Fig. 4 illustrates a deployment plan table that includes target VM 401, source PM 402, destination PM 403, condition 404, prerequisite 405, and status after completion 406. The target VM 401 indicates an ID for identifying a VM to be migrated. The source PM 402 indicates an ID for identifying a PM from which the VM is migrated. The destination PM 403 indicates an ID for identifying a PM to which the VM is migrated. Condition 404 indicates a condition for migrating the VM. For example, when migration condition 404 includes server 1 and server 3, it is indicated that the VM may be migrated when no migration related to server 1 or 3 is performed."

¶ [0057]: "Prerequisite 405 indicates a condition that must be satisfied when the VM is migrated. The status after completion 406 indicates a status established after migration has been completed. Before a target VM is migrated, migration condition 404 and prerequisite 405 need to be satisfied. For example, with respect to VM 42, migration condition 404 includes server 1 and server 4, and prerequisite 405 includes termination of VM 41. Thus VM 42 may be migrated when no migration related to server 1 or 4 is executed and VM 41 has been terminated."

mid-¶ [0066]: "In Fig. 15, the monitor list includes target VM 1501, source PM 1502, destination PM 1503, start time 1504, transfer data 1505, transferred data 1506, and changed data 1507. The target VM 1501 indicates an ID for identifying a VM to be monitored. The source PM 1502 indicates an ID for identifying a PM from which the VM is migrated. The destination PM 1503 indicates an ID for identifying a PM to which the VM is migrated. The start time 1504 indicates the time when the migration starts. The transfer data 1505 indicates the amount of data to be transferred. The transferred data 1506 indicates the amount of data that has already been transferred. The changed data 1507 indicates the amount of data, out of the transfer data, that has been changed. The configuration of the monitor list is similar to that of the overall monitor list. While the overall monitor list includes information related to every target VM, the monitor list includes information related to only the target VMs running on each PM. The initial value of transfer data 1505 is the same as the VM capacity of the target VM. In each migration, a plurality of copies are performed until the migration is completed. The transfer data 1505 is updated every time each of the copies is started, and the start time 1504 is also changed to the current time. Upon receiving a progress report request from the overall monitoring section 116, the monitoring section aggregates progress data of each target VM and updates the transferred data 1506 and the changed data 1507 in the monitor list at that time." Additional details at ¶ [0071]-¶ [0080].)

Claims 2, 9, 16

Sekiguchi teaches all the limitations in claims 1, 8, and 15 above. Further, Sekiguchi teaches "wherein calculating the initial estimated migration durations includes calculating the initial estimated migration durations for the virtual computing instances of the group based on the total available resources divided by the number of virtual computing instances in the group" (Sekiguchi ¶ [0011], 1st sentence: "Resources such as a central processing unit (CPU) and network bandwidth are used in live migration, so that a necessary time from the start to the end of live migration depends on the amount of memory data to be copied, an available CPU usage rate, network bandwidth, and so on." ¶ [0065], 2nd sentence: "overall monitoring section 116 dynamically obtains and aggregates an amount of transferred data, an amount of changed data, and an elapsed time of the VM during the live migration." ¶ [0075], 4th sentence: "the necessary time is the sum of the first transfer time to the fourth transfer time, which is 16+4+1+0.25 = 21.25 [sec]." Then at ¶ [0076]: "A generalized case of necessary time calculation will be discussed. First, it is assumed that the amount of transfer data is Mr1, the transfer throughput is tp, and the memory change ratio is r at the present moment. In this case, the 1st transfer time t1 is t1 = Mr1/tp. The amount of changed data Mr2 at this time is Mr2 = Mr1×r.
At 2nd transfer, the amount of transfer data is Mr2, and the 2nd transfer time t2 is t2=Mr2/tp. The amount of changed data Mr3 at this time is Mr3=Mr2×r=Mr1×r2. ¶ [0077] Similarly, the i-th transfer time ti is ti=Mri/tp, and the amount of changed data is Mri=Mri-1×r=Mri-2×r×r=…=Mr1×ri-1, so that ti is calculated by the expression (1). The necessary time T is sum of the transfer times t1…ti, so that the necessary time T is calculated by expression (2). When n approaches infinity, the necessary time T is calculated by expression (3) below. In this way, by using the necessary time T=initial amount of transfer data / {transfer throughput×(1−memory change ratio)}, the necessary time may be estimated considering repetitive copies. The transfer throughput and the memory change ratio are not predetermined values, but they change dynamically during a live migration. Here, the transfer throughput is a value obtained by dividing the amount of transferred data by the elapsed time, and the memory change ratio is a value obtained by dividing the amount of changed data by the amount of transfer data. PNG media_image1.png 324 716 media_image1.png Greyscale ) Claims 3,10,17 Sekiguchi teaches all the limitations in claims 1,8,15 above. Further, Sekiguchi teaches calculating the revised estimated migration durations includes - calculating the revised estimated migration durations for at least one of the virtual computing instances of the group selected for migration based on the total available resources divided by the number of current active virtual computing instances being migrated (Sekiguchi ¶ [0077] the i-th transfer time ti is ti=Mri/tp and amount of changed data is Mri=Mri-1×r=Mri-2 × r × r =…=Mr1×ri-1, so that ti is calculated by the expression (1). The necessary time T is sum of transfer times t1…ti, so that the necessary time T is calculated by expression (2). When n approaches infinity, the necessary time T is calculated by expression (3) below. 
In this way, by using the necessary time T=initial amount of transfer data / {transfer throughput×(1−memory change ratio)}, the necessary time may be estimated considering repetitive copies. The transfer throughput and the memory change ratio are not predetermined values, but they change [or are revised] dynamically during a live migration. Here, the transfer throughput is a value obtained by dividing the amount of transferred data by the elapsed time, and the memory change ratio is a value obtained by dividing the amount of changed data by the amount of transfer data). Claims 4,11,18 Sekiguchi teaches all the limitations in claims 1,8,15 above. Sekiguchi teaches "wherein calculating the revised estimated migration durations is iteratively executed as long as a number of the virtual computing instances of the group selected for migration is greater than one" (Sekiguchi Fig. 19 step S403 -> step S404 and ¶ [0120] In operation S403, the migration control apparatus 110 adds "1" to i. ¶ [0121] In operation S404, the migration control apparatus 110 determines whether the value of i is smaller than the total number L of target VMs. When the value of i is smaller than the total number L of target VMs ("Yes" in operation S404), the migration control apparatus 110 returns the process to operation S402. When the value of i is not smaller than the total number L of target VMs ("No" in operation S404), [interpreted as L is not larger than i=1] the migration control apparatus 110 terminates the time estimation. Similarly, Fig. 20 step S504 -> S505, ¶ [0126] In operation S504, the migration control apparatus 110 adds 1 to i. ¶ [0127] In operation S505, the migration control apparatus 110 determines whether the value of i is smaller than the total number L of target VMs. 
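The per-VM iteration the examiner maps from Sekiguchi Figs. 19-20 (operations S402-S404 and S503-S505), namely estimating a time for each of the L target VMs and incrementing i until i is no longer smaller than L, can be sketched as follows. The function names and the per-VM estimator stub are illustrative assumptions, not code from the reference:

```python
# Sketch of the loop in Sekiguchi Figs. 19-20 (ops S402-S404 / S503-S505):
# estimate the necessary time for each of the L target VMs in turn.

def estimate_necessary_time(vm):
    # Illustrative stand-in for the time estimation section's per-VM estimate:
    # T = initial transfer data / (throughput * (1 - memory change ratio)),
    # with GB converted to Gb before dividing by Gbps.
    return vm["transfer_gb"] * 8 / (vm["tp_gbps"] * (1 - vm["change_ratio"]))

def estimate_all(target_vms):
    L = len(target_vms)
    estimates = []
    i = 0
    while i < L:                                   # S404/S505: continue while i < L
        estimates.append(estimate_necessary_time(target_vms[i]))  # S402/S503
        i += 1                                     # S403/S504: add "1" to i
    return estimates                               # i not smaller than L: terminate
```

The loop terminates exactly when i is not smaller than L, matching the "No" branch of operations S404 and S505.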
¶ [0128] When the value of i is smaller than the total number L of target VMs (“Yes” in operation S505) [interpreted as L is not larger than i=1], the migration control apparatus 110 returns the process to operation S503) Claims 5,12,19 Sekiguchi teaches all the limitations in claims 1,8,15 above. Further, Sekiguchi teaches “wherein calculating the initial estimated migration durations includes calculating the initial estimated migration durations for the virtual computing instances of the group using traits of the virtual computing instances in the group and system traits of the source and destination computing environments”. (Sekiguchi ¶ [0055] the planning section 112 refers to the resource status table in the initial state in Fig. 5, makes a deployment plan, and creates a deployment plan table. For example, in the resource status table in the initial state, the total amount of VM capacities of all VMs is 2+2+3+2+3+2+2=16, and the capacity of each PM is 9, so that the planning section 112 makes a new deployment plan in which the VMs are packed into two PMs to reduce power consumption. From among many packing manners, the planning section selects, to make a deployment plan with a shorter deployment time, which is the time for completing all the migrations, a manner of selecting VMs requiring a short migration time to decrease the total number of migrations and migrating many VMs in parallel. For example, in the example of Fig. 5, it may be effective to first migrate VM 31 to another PM, and next migrate VM 11 and VM 12, or VM 41 and VM 42, to another PM. Sekiguchi ¶ [0056] Fig. 4 illustrates a deployment plan table that includes target VM 401, source PM 402, destination PM 403, condition 404, prerequisite 405, and status after completion 406. The target VM 401 indicates an ID for identifying a VM to be migrated. The source PM 402 indicates an ID for identifying a PM from which the VM is migrated. 
The destination PM 403 indicates an ID for identifying a PM to which the VM is migrated. The condition 404 indicates a condition for migrating the VM. For example, when the migration condition 404 includes server 1, server 3, it is indicated that the VM may be migrated when no migration related to the server 1 or the server 3 is performed. Sekiguchi ¶ [0057] The prerequisite 405 indicates a condition that must be satisfied when the VM is migrated. The status after completion 406 indicates a status established after migration has been completed. Before a target VM is migrated, the migration condition 404 and the prerequisite 405 need to be satisfied. For example, with respect to VM 42, the migration condition 404 includes server 1, server 4, and the prerequisite 405 includes termination of VM 41. Thus, VM 42 may be migrated when no migration related to the server 1 or server 4 is executed and the VM 41 has been terminated. Sekiguchi mid-¶ [0066] As illustrated in Fig. 15, the monitor list includes target VM 1501, source PM 1502, destination PM 1503, start time 1504, transfer data 1505, transferred data 1506, and changed data 1507. The target VM 1501 indicates an ID for identifying a VM to be monitored. The source PM 1502 indicates an ID for identifying a PM from which the VM is migrated. The destination PM 1503 indicates an ID for identifying a PM to which the VM is migrated. The start time 1504 indicates the time when the migration starts. The transfer data 1505 indicates the amount of data to be transferred. The transferred data 1506 indicates the amount of data that has already been transferred. The changed data 1507 indicates the amount of data, out of the transfer data, that has been changed. The configuration of the monitor list is similar to that of the overall monitor list. While the overall monitor list includes information related to every target VM, the monitor list includes information related to only the target VMs running on each PM. 
The initial value of the transfer data 1505 is the same as the VM capacity of the target VM. In each migration, a plurality of copies are performed until the migration is completed. The transfer data 1505 is updated every time each of the plurality of copies is started, and the start time 1504 is also changed to the current time. Upon receiving a progress report request from the overall monitoring section 116, the monitoring section aggregates progress data of each target VM and updates the transferred data 1506 and the changed data 1507 in the monitor list at the time. Sekiguchi ¶ [0069] 2nd-3rd sentences: upon receiving a necessary time estimation request, the time estimation section 115 notifies the overall monitoring section 116 of the progress report request. Upon receiving the progress report from the overall monitoring section 116, the time estimation section 115 estimates, for each VM, the necessary time on the basis of the amount of transfer data, the transfer throughput, and the amount of changed data of the memory, and notifies the comparing section 113 of an expected necessary time table. ¶ [0070] 5th sentence: before execution of the entire deployment plan has been completed, the time estimation section 115 recalculates the necessary time, when a predetermined trigger occurs, to estimate the finish time of the entire deployment in a case in which the current deployment plan is executed without change. Sekiguchi ¶ [0072] The method for estimating the necessary time on the basis of the amount of transfer data, the transfer throughput, and the amount of changed data of the memory will be specifically discussed. Here, a case will be discussed in which the memory capacity of the destination VM is 2 GB, the memory change ratio is 25%, and the transfer throughput is 1 Gbps. The memory change ratio is a ratio of the amount of data changed during the transfer to the amount of transferred data. 
Since all 2 GB of memory is transferred at the 1st transfer, the time estimation section 115 estimates that the 1st transfer time required for the 1st transfer is 2 [GB]/1 [Gbps]=16 [sec], and estimates that the amount of changed data during the 1st transfer is 2 [GB]×25 [%]=512 [MB]. Sekiguchi ¶ [0073] Regarding the 2nd transfer, since the amount of changed data during the 1st transfer is 512 [MB], the time estimation section 115 determines that the amount of transfer data at the 2nd transfer is 512 [MB]. The time estimation section 115 estimates that the 2nd transfer time required for the 2nd transfer is 512 [MB]/1 [Gbps]=4 [sec], and estimates that the amount of changed data during the 2nd transfer is 512 [MB]×25 [%]=128 [MB]. Sekiguchi ¶ [0074] Regarding the 3rd transfer, since the amount of changed data during the 2nd transfer is 128 [MB], the time estimation section 115 determines that the amount of transfer data at the 3rd transfer is 128 [MB]. The time estimation section 115 estimates that the 3rd transfer time required for the 3rd transfer is 128 [MB]/1 [Gbps]=1 [sec], and estimates that the amount of changed data during the 3rd transfer is 128 [MB]×25 [%]=32 [MB]. Sekiguchi ¶ [0075] 1st, 2nd, 4th sentences: Regarding the 4th transfer, since the amount of changed data during the 3rd transfer is 32 [MB], the time estimation section 115 determines that the amount of transfer data at the 4th transfer is 32 [MB]. The time estimation section 115 estimates that the 4th transfer time required for the 4th transfer is 32 [MB]/1 [Gbps]=0.25 [sec], and estimates that the amount of changed data during the 4th transfer is 32 [MB]×25 [%]=8 [MB]. In the above example, the necessary time is the sum of the 1st to 4th transfer times, which is 16+4+1+0.25=21.25 [sec]. Sekiguchi ¶ [0076] A generalized case of necessary time calculation will be discussed. First, it is assumed that the amount of transfer data is Mr1, the transfer throughput is tp, and the memory change ratio is r at the present moment. 
In this case, the first transfer time t1 is t1=Mr1/tp. The amount of changed data Mr2 at this time is Mr2=Mr1×r. At the second transfer, the amount of transfer data is Mr2, and the second transfer time t2 is t2=Mr2/tp. The amount of changed data Mr3 at this time is Mr3=Mr2×r=Mr1×r^2. ¶ [0077] 1st-4th sentences: the i-th transfer time ti is ti=Mri/tp, and the amount of changed data is Mri=Mr(i-1)×r=Mr(i-2)×r×r=…=Mr1×r^(i-1), so that ti is calculated by the expression (1). The necessary time T is the sum of the transfer times t1…ti, so that the necessary time T is calculated by expression (2). When n approaches infinity, the necessary time T is calculated by expression (3) below. In this way, by using the necessary time T=initial amount of transfer data / {transfer throughput×(1−memory change ratio)}, the necessary time may be estimated considering repetitive copies. ¶ [0084] After updating the resource status table, the migration control apparatus 110 estimates the necessary time when the execution of the current deployment plan is continued. As a result, the migration control apparatus 110 estimates that the migration times of the VM 31 and the VM 42 are 10 minutes and 2 minutes, respectively, and estimates that the execution of the entire deployment plan will be completed in the total time of 12 minutes). Claims 6,13,20. Sekiguchi teaches all the limitations in claims 5,12,19 above. Further, Sekiguchi teaches “wherein the traits of the virtual computing instances in the group includes traits of virtual machines in the group” (Sekiguchi ¶ [0013] provided is a migration control apparatus controlling migration of a virtual machine. The migration control apparatus includes a monitoring section, a planning section, a time estimation section, a comparing section, and a plan execution section. The monitoring section monitors a status of a current migration of the virtual machine running on a 1st physical machine. The current migration is performed in accordance with a current migration plan. 
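The necessary-time calculation quoted from Sekiguchi ¶¶ [0072]-[0077] can be checked with a short sketch. The function names and the GB-to-Gb unit handling are assumptions for illustration; the closed form is expression (3) as restated in the text, T = initial amount of transfer data / {transfer throughput×(1−memory change ratio)}:

```python
# Sketch of Sekiguchi's necessary-time estimate; names and units are illustrative.

def necessary_time_iterative(transfer_gb, throughput_gbps, change_ratio, rounds=4):
    """Sum the per-copy transfer times over a fixed number of copy rounds."""
    bits = transfer_gb * 8                 # GB -> Gb
    total = 0.0
    for _ in range(rounds):
        total += bits / throughput_gbps    # t_i = Mr_i / tp
        bits *= change_ratio               # Mr_(i+1) = Mr_i * r
    return total

def necessary_time_closed_form(transfer_gb, throughput_gbps, change_ratio):
    """Limit as the number of copies approaches infinity (expression (3))."""
    return (transfer_gb * 8) / (throughput_gbps * (1 - change_ratio))

# Worked example from ¶¶ [0072]-[0075]: 2 GB memory, 25% change ratio, 1 Gbps.
print(necessary_time_iterative(2, 1, 0.25))    # 16 + 4 + 1 + 0.25 = 21.25 sec
print(necessary_time_closed_form(2, 1, 0.25))  # ≈ 21.33 sec
```

The infinite-series limit (about 21.33 sec) exceeds the four-round partial sum (21.25 sec) by the geometric tail, which is why the closed form is a safe upper estimate for the repetitive copies.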
The planning section makes, on the basis of the status of the current migration, a new migration plan migrating the virtual machine from the 1st physical machine to a 2nd physical machine. The time estimation section estimates a 1st migration time required to perform a new migration in accordance with the new migration plan. The comparing section compares the 1st migration time with the 2nd migration time to determine which of the new and current migrations will be completed earlier. The 2nd migration time is an estimated time required to complete the current migration. The plan execution section performs the new migration when it has been determined that the new migration will be completed earlier. Additional details at ¶¶ [0055]-[0057], mid-[0066], [0069] 2nd-3rd sentences, [0072]-[0074], [0075] 1st, 2nd, 4th sentences, [0076], [0077] 1st-4th sentences). Rejections under 35 U.S.C. § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. 
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 7,14 are rejected under 35 U.S.C. 103 as being unpatentable over Sekiguchi as applied to claims 1,9 above, and in view of Cui et al., US 2020/0026538 A1, hereinafter Cui. Claims 7,14. Sekiguchi teaches all the limitations in claims 1,9 above. Sekiguchi does not explicitly teach: “wherein calculating the initial and revised estimated migration durations includes employing machine learning models” as claimed. Cui, however, in the analogous art of virtual machine migration, teaches/suggests: “wherein calculating the initial and revised estimated migration durations includes employing machine learning models” (Cui mid-¶ [0014] As used herein, failures of a VM transfer include, e.g., the VM transfer timing out when a destination host does not receive transferred data for more than a predefined period of time (e.g., 120 seconds). 
The machine learning model(s) may be trained using supervised learning techniques to optimize relationships between various factors, and the trained machine learning model(s) may each be a two-class classifier that takes feature vectors as inputs and outputs labels indicating success or failure of corresponding VM transfers, such as the labels l1=0 and l2=1, where the VM transfer is successful if l=0 and the VM transfer fails if l=1. When failure of the VM transfer is predicted, remediation action(s) may be taken, such as notifying a source VM transfer engine in a hypervisor running in a source host computer 106S to slow down [or duration] (also sometimes referred to as back off) a data transfer rate of the VM transfer, thereby preventing the transfer from failing, as discussed below. In another embodiment, the machine learning model(s) may be trained to predict a probability of success, also referred to herein as the predicted success rate, as opposed to simply success or failure. Similarly, see Cui ¶ [0025] Feature preparation at 404 includes preparing the initial training data for machine learning. Each of the VM transfer performance metrics described above, namely disk input/output (I/O) read rate, data insertion rate, compression ratio, compression throughput, network latency, network throughput, packet loss, etc., is considered a feature. The problem then becomes: given a pattern p represented by a set of d features, each of which is a performance metric collected at a point in time during a VM transfer, i.e., p → x = {x1, x2, …, xd}, should remediation action(s) be taken to prevent failure of the VM transfer? In order to solve this problem, the features from representative VM transfers may be prepared and used to train machine learning models to predict the success or failure of VM transfers. 
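The two-class formulation Cui describes, a classifier over a feature vector x = {x1, …, xd} of transfer metrics that outputs label 0 for predicted success and 1 for predicted failure, might be sketched as follows. The logistic form, the weights, and the threshold are invented purely for illustration; Cui does not specify a model family or values:

```python
import math

# Illustrative two-class classifier in the shape Cui ¶¶ [0014], [0025] describes.
# Feature order here is a hypothetical choice of three of Cui's metrics:
# network latency, network throughput, packet loss. Weights/bias are invented.
WEIGHTS = [0.8, -1.2, 0.5]
BIAS = -0.3

def predicted_failure_probability(x):
    """Logistic score, loosely analogous to Cui's predicted success/failure rate."""
    z = BIAS + sum(w * xi for w, xi in zip(WEIGHTS, x))
    return 1.0 / (1.0 + math.exp(-z))

def classify(x, threshold=0.5):
    """Return label l: 0 = transfer predicted successful, 1 = failure predicted."""
    return 1 if predicted_failure_probability(x) >= threshold else 0

# When l == 1, a remediation action could follow, e.g. backing off the
# data transfer rate of the VM transfer, as Cui suggests.
```

A high-latency, low-throughput, lossy sample such as [1.0, 0.1, 0.9] scores above the threshold and yields label 1, while a low-latency, high-throughput sample such as [0.1, 2.0, 0.0] yields label 0.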
Predictions made by such machine learning model(s) may then be used to determine an action to take, such as slowing down [duration] the data transfer rate of a VM transfer to prevent a predicted failure of the transfer. Similarly, Cui ¶ [0031] At step 530, source control module 134 inputs the prepared fe

Prosecution Timeline

Aug 23, 2023
Application Filed
Nov 21, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602627
SOLVING SUPPLY NETWORKS WITH DISCRETE DECISIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12555059
System and Method of Assigning Customer Service Tickets
2y 5m to grant Granted Feb 17, 2026
Patent 12547962
GENERATIVE DIFFUSION MACHINE LEARNING FOR RESERVOIR SIMULATION MODEL HISTORY MATCHING
2y 5m to grant Granted Feb 10, 2026
Patent 12450534
HETEROGENEOUS GRAPH ATTENTION NETWORKS FOR SCALABLE MULTI-ROBOT SCHEDULING
2y 5m to grant Granted Oct 21, 2025
Patent 12406213
SYSTEM AND METHOD FOR GENERATING FINANCING STRUCTURES USING CLUSTERING
2y 5m to grant Granted Sep 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
28%
Grant Probability
67%
With Interview (+38.9%)
4y 2m
Median Time to Grant
Low
PTA Risk
Based on 409 resolved cases by this examiner. Grant probability derived from career allow rate.
