Prosecution Insights
Last updated: April 19, 2026
Application No. 18/343,250

VIRTUAL MACHINE MIGRATION METHOD, APPARATUS, AND SYSTEM

Status: Non-Final OA (§103)
Filed: Jun 28, 2023
Examiner: MUDRICK, TIMOTHY A
Art Unit: 2198
Tech Center: 2100 — Computer Architecture & Software
Assignee: Huawei Cloud Computing Technologies Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 84% — above average (447 granted / 532 resolved; +29.0% vs TC avg)
Interview Lift: +13.1% (moderate; resolved cases with interview vs. without)
Typical Timeline: 2y 11m average prosecution; 32 applications currently pending
Career History: 564 total applications across all art units

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 48.0% (+8.0% vs TC avg)
§102: 29.4% (-10.6% vs TC avg)
§112: 8.4% (-31.6% vs TC avg)

Tech Center averages are estimates; based on career data from 532 resolved cases.
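The statute figures above reduce to simple arithmetic: subtracting each reported delta from the examiner's rate recovers the baseline it was measured against, and every row implies the same ~40% baseline. A minimal sketch, assuming the convention delta = examiner rate minus TC average:

```python
# Statute-specific allowance figures as shown on this page (percent).
examiner_rate = {"101": 9.8, "103": 48.0, "102": 29.4, "112": 8.4}
delta_vs_tc   = {"101": -30.2, "103": 8.0, "102": -10.6, "112": -31.6}

# Assumed convention: delta = examiner_rate - tc_average, so the
# baseline each delta was measured against is recovered by subtraction.
implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}

print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```

The uniform implied baseline suggests the page compares all four statutes against a single Tech Center average rather than per-statute averages.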

Office Action

Non-Final Rejection under 35 U.S.C. § 103 (mailed Feb 11, 2026)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

The instant application, having Application No. 18/343,250 and filed on 6/28/2023, is presented for examination.

Examiner Notes

Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Drawings

The applicant's drawings submitted are acceptable for examination purposes.

Authorization for Internet Communications

The examiner encourages Applicant to submit an authorization to communicate with the examiner via the Internet by making the following statement (from MPEP 502.03): "Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail. I understand that a copy of these communications will be made of record in the application file." Please note that the above statement can only be submitted via Central Fax, regular postal mail, or EFS-Web.

Information Disclosure Statement

As required by MPEP 609, the applicant's submissions of the Information Disclosure Statements dated 9/25/2023 and 7/31/2024 are acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending.
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-9 and 11-16 are rejected under 35 U.S.C. 103 as being unpatentable over Goggin (US 2012/0042034) in view of Reuther (US 2011/0302577).

As per claim 1, Goggin discloses a virtual machine migration method for migrating a source virtual machine running on a source server to a destination server, wherein the method comprises: sending, by a first front-end apparatus through a first internal channel, device state information of the source virtual machine to a first back-end apparatus (Paragraph 4: "The VM's running the application also involves the VF inserting one or more IO responses received from the physical storage to an IO response queue associated with the VM. State information that is indicative of the request queue and of the response queue is provided within a memory region of the VF. In the course of migration, de-queuing of requests from the request queue is suspended. While the de-queuing is suspended, state information is transferred from the VF memory region to a memory region associated with the virtualization intermediary within the source computing machine. Subsequently, the state information is transferred from the second memory region within the source computing machine to the destination machine."), wherein the first front-end apparatus is disposed in the source server (Paragraph 54: "Commonly assigned U.S. Pat. No. 7,680,919, invented by M. Nelson, issued Mar. 16, 2010, entitled, Virtual Machine Migration, describes a 'farm' of host machines, referred to as 'servers', each of which may host one or more virtual machines (VMs), as well as mechanisms for migrating a VM from one server (the source server) to another (the destination server) while the VM is still running."), the first back-end apparatus is disposed in a first offloading card inserted in the source server (Paragraph 54: "SR IOV storage adapter"), and the first internal channel is disposed between the first offloading card and the source server (Paragraph 54: "The general configuration of the server farm 400 includes a plurality of user machines 402-1 and 402-2 . . . 402-n that access a farm of host machines (i.e. servers) 300, 404-1, . . . 404-r via a network 406. The server labeled 300 in FIG. 4 corresponds to the identically labeled system 300 of FIG. 3A in that it comprises VM 304 that is coupled for direct access to the SR IOV storage adapter 306."); sending the device state information to a second back-end apparatus through an external channel, wherein the second back-end apparatus is disposed in a second offloading card inserted in the destination server (Paragraph 55: "As indicated by arrow 403, VM 304 which runs on source machine 300 is relocated to destination machine 404-2 where it is instantiated as VM 304'. In some embodiments, VMs can be migrated only between machines that share storage where the VMs' disks reside. In the example server farm 400 of FIG. 4, in order to allow for inter-server migration, the host machines 300 and 404-2 to 404-r, therefore, either share an external, common storage system or can access each other's internal storage. This assumption eliminates the need to migrate entire disks. One way to arrange this is for all of the servers in the farm 400 to be connected via a system such as Fibrechannel. This is illustrated in FIG. 4 as the channel 408.").

Goggin does not expressly disclose, but Reuther discloses, sending dirty memory page address information (Paragraph 41: "In response, virtualization module 402 can execute and set a bit in tracking table 424 corresponding to the third page of guest physical memory 420 to identify that the page is now 'dirty,' e.g., changed from what has been sent to target computer system 408. As one of ordinary skill in the art can appreciate, guest operating system 412 may attempt to write to multiple pages while pages are being sent from source virtual machine 406 to target virtual machine 404. This technique significantly reduces the performance of virtual machine 406 due to the fact that each page that guest operating system 412 attempts to access results in running virtualization module 402 instructions to change the page from read-only to writable."); reading, by the first back-end apparatus through the first internal channel, a dirty memory page from a memory of the source server according to the dirty memory page address information (Paragraph 48: "While each iteration is shown as a cycle of remapping as read-only; copying dirty pages; clearing tracking 424; and sending pages, nothing in the disclosure limits virtualization module 402 from operating in such a manner. Instead, this process is used for illustration purposes. Therefore, in an exemplary embodiment virtualization module 402 can be configured to operate on batches of pages or individual pages. In this exemplary embodiment, virtualization module 402 can be configured to remap a batch of pages as read-only; copy the batch of dirty pages; clear tracking 424; and send the batch to target computer system 408. In another exemplary embodiment, when a group of pages is mapped from read-only to writable, virtualization module 402 can be configured to start remapping these pages as read-only and copying them before continuing on to other pages."); and sending the dirty memory page and the dirty memory page address information to a second back-end apparatus through an external channel (Paragraph 48).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Goggin to include the teachings of Reuther, because doing so keeps track of which memory pages are still valid and which need to be updated. By following the well-known conventions of clean and dirty memory pages as illustrated by Reuther, the combination benefits by ensuring that local virtual machines can have their own copy of data while still maintaining the ability to synchronize with other hosts.

As per claim 2, Goggin further discloses: sending, by the second back-end apparatus through a second internal channel, the device state information to a second front-end apparatus, wherein the second internal channel is disposed between the second offloading card and the destination server, and the second front-end apparatus is disposed in the destination server (Fig. 4 and paragraph 50); and setting, by the second front-end apparatus, a device state of a destination virtual machine according to the device state information (Fig. 4 and paragraph 50). Goggin does not expressly disclose, but Reuther discloses, setting, by the second back-end apparatus through the second internal channel, the dirty memory page in a memory of the destination server according to the dirty memory page address information (Paragraph 48).

As per claim 3, Goggin further discloses wherein the external channel comprises a first data link for transmitting the device state information (Fig. 4, 408). Goggin does not expressly disclose, but Reuther discloses, a second data link for transmitting the dirty memory page and the dirty memory page address information (Fig. 4, Communication Channel 418).

As per claim 5, Goggin further discloses wherein the first data link and the second data link are implemented through a transmission control protocol (TCP) link or a user datagram protocol (UDP) link (Paragraph 16).

As per claim 6, one of ordinary skill would have known to use a VSOCK link to implement an internal channel, as it is a substitution of one well-known communication channel as shown by Goggin for another.

As per claims 7-9, 11 and 12, they are system claims having similar limitations as cited in claims 1-3 and 5-7 and are rejected under the same rationale. As per claims 13-16, they are card claims having similar limitations as cited in claims 1-3 and 5-7 and are rejected under the same rationale.

Claims 4 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Goggin in view of Reuther in further view of Tarasuk-Levin (US 2015/0381589).

As per claim 4, Goggin does not expressly disclose, but Tarasuk-Levin discloses, wherein the method further comprises: compressing and encrypting, by the first back-end apparatus, the dirty memory page and the device state information of the source virtual machine (Abstract: "The encryption of the memory blocks of the VM is performed optionally before a request for live migration is received or after said request. The more resource intensive decryption of the memory blocks of the VM is performed by the destination host in a resource efficient manner, reducing the downtime apparent to users. Some examples contemplate decrypting memory blocks of the transmitted VM on-demand and opportunistically, according to a pre-determined rate, or in accordance with parameters established by a user."); and decompressing and decrypting, by the second back-end apparatus, the dirty memory page and the device state information of the source virtual machine (Abstract). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Goggin as modified to include the teachings of Tarasuk-Levin, because it ensures that data is secured during transit. In this way, the combination benefits by ensuring that third parties cannot read the contents of the messages while in transit.

As per claim 10, it is a system claim having similar limitations as cited in claim 4 and is thus rejected under the same rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Wilson (US 9,361,145) discloses a DMA-capable device of a virtualization host that stores a DMA write record, indicating a portion of host memory targeted by a DMA write operation, in a write buffer accessible from a virtualization management component of the host. The virtualization management component uses the DMA write record to identify a portion of memory to be copied to a target location to save a representation of a state of a particular virtual machine instantiated at the host.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIMOTHY A MUDRICK, whose telephone number is (571) 270-3374. The examiner can normally be reached 9am-5pm Central Time. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Vital, can be reached at (571) 272-4215. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TIMOTHY A MUDRICK/
Primary Examiner, Art Unit 2198
2/11/2026
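The dirty-page mechanism the examiner cites from Reuther (remap pages read-only, copy the dirty batch, clear the tracking table, send, repeat) is the classic iterative pre-copy loop used in live migration. A toy simulation of that cycle, purely illustrative and not code from any cited reference:

```python
def precopy_migrate(num_pages, write_rounds):
    """Simulate iterative pre-copy. write_rounds[i] is the set of page
    indices the guest dirties while round i's batch is being sent."""
    dirty = set(range(num_pages))   # first pass treats every page as dirty
    sent_batches = []
    for writes in write_rounds + [set()]:  # one final quiescent round
        batch = sorted(dirty)       # pages remapped read-only and copied...
        sent_batches.append(batch)  # ...then sent to the target host
        dirty = set(writes)         # tracking cleared; guest writes re-dirty pages
    return sent_batches

# 4-page VM; the guest dirties page 2 during the first round.
# Round 1 sends all four pages; the final round resends only page 2,
# after which the VM can be switched over to the destination.
rounds = precopy_migrate(4, [{2}])
```

Each successive batch shrinks as long as the guest dirties pages more slowly than they are sent, which is why the final switchover pause can be short.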
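Claim 4's pipeline (compress and encrypt on the source card, decrypt and decompress on the destination card) has a fixed ordering: compression must come before encryption, because well-encrypted data is incompressible. A toy round-trip illustrating that ordering; the XOR keystream is a deliberately insecure stand-in for a real cipher, and zlib stands in for whatever compressor an offloading card would actually use:

```python
import hashlib
import zlib

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream from iterated SHA-256. NOT cryptographically sound;
    # it only makes the compress-then-encrypt ordering observable.
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def pack(page: bytes, key: bytes) -> bytes:
    """Source back-end: compress the dirty page, then 'encrypt' it."""
    compressed = zlib.compress(page)
    ks = _keystream(key, len(compressed))
    return bytes(a ^ b for a, b in zip(compressed, ks))

def unpack(blob: bytes, key: bytes) -> bytes:
    """Destination back-end: 'decrypt', then decompress."""
    ks = _keystream(key, len(blob))
    return zlib.decompress(bytes(a ^ b for a, b in zip(blob, ks)))
```

Packing a zero-filled 4 KiB page yields a blob far smaller than the page, which is the payoff of compressing before encrypting; encrypting first would leave the compressor nothing to exploit.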

Prosecution Timeline

Jun 28, 2023: Application Filed
Aug 15, 2023: Response after Non-Final Action
Feb 11, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602243: METHOD AND SYSTEM FOR MIGRATABLE COMPOSED PER-LCS SECURE ENCLAVES (granted Apr 14, 2026; 2y 5m to grant)
Patent 12591463: DATA TRANSMISSION METHOD AND DATA TRANSMISSION SERVER (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585501: MACHINE-LEARNING (ML)-BASED RESOURCE UTILIZATION PREDICTION AND MANAGEMENT ENGINE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12578971: Container Storage Interface Filter Driver-based Use of a Non-Containerized-Based Storage System with Containerized Applications (granted Mar 17, 2026; 2y 5m to grant)
Patent 12561174: FRAMEWORK FOR EFFECTIVE STRESS TESTING AND APPLICATION PARAMETER PREDICTION (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 97% (+13.1%)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 532 resolved cases by this examiner. Grant probability derived from career allow rate.
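The projection figures above are recoverable from the page's own numbers; a minimal sketch, assuming the interview lift is simply added in percentage points to the career allow rate:

```python
# Figures taken from this page.
granted, resolved = 447, 532
grant_probability = round(granted / resolved * 100)   # 84

interview_lift = 13.1  # percentage points, per the Examiner Intelligence card
with_interview = round(granted / resolved * 100 + interview_lift)  # 97
```

The additive-lift convention is an assumption; the page does not state how the 97% is compounded, but simple addition reproduces it exactly.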
