DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed on 12/26/2025 has been entered. Claims 1-4, 9-10, 13, 16-19, 23-24, 26-29, and 32-33 remain pending in this application. Applicant’s amendment to claim 24 has overcome the 35 U.S.C. 112(b) insufficient antecedent basis rejection previously set forth in the Non-Final Office Action mailed on 10/02/2025. Therefore, Examiner withdraws the 35 U.S.C. 112(b) insufficient antecedent basis rejection of claims 24 and 26-29.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 9-10, 13, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Gong et al. (CN 112148421 A, hereinafter Gong) in view of Jiang et al. (US Pub. No. 2020/0249987 A1, hereinafter Jiang), and further in view of Nelogal et al. (US Pat. No. 11,100,033, hereinafter Nelogal).
As per claim 1, Gong teaches a system for realizing live migration, the system connected to a kernel space (Pg. 9, “As shown in the figure, the system architecture 600 may include a host 610 and one or more endpoint devices (Endpoint, EP) (for example, endpoint device 621, endpoint device 622), wherein the host 610 can run user space 611 and kernel space 612.”), the system comprising: a hardware that is virtualized to a plurality of pieces of virtual hardware (Pg. 17, “Virtual machine pass-through technology refers to supporting virtual machines to bypass the hypervisor layer and directly access physical I/O devices, so that virtual machines can obtain performance close to physical machines. SR-IOV pass-through technology is a hardware-based virtualization solution. Through SR-IOV technology, virtual machines can be directly connected to physical network cards, and multiple virtual machines can efficiently share physical network cards.” Pg. 20, “Exemplarily, the pass-through device may be a virtual device (supporting Linux Endpoint Framework) based on application specific integrated circuit (ASIC) or field programmable gate array (FPGA) or other virtual devices with computing resources and storage resource equipment, etc.…”); and a read only memory (ROM) (Pg. 11, “The memory may include volatile memory (volatile memory), such as random access memory (random access memory, RAM); the memory may also include non-volatile memory (non-volatile memory), such as read-only memory (read-only memory, ROM)…”); a physical function (Pg. 17, “As shown in FIG. 4, it includes VM 410, VM 420, host operating system 430 and endpoint device (endpoint, EP) 440. EP 440 can be configured with 1 PF and 4 VFs.”) configured to: receive a live migration activation request from the kernel space (Pg. 20, “The kernel space 612 may include a VFIO2 module and physical functions (PF) drivers. Among them, VFIO2 can be used to provide an interface for accessing hardware devices to the user space. For example, VFIO2 is used to provide a unified abstract interface for direct hot migration to the user space 611 to shield the underlying hardware differences…In the SR-IOV mode, the PF driver in the system architecture 600 can realize the transfer of the hot migration instruction to the endpoint device, thereby realizing the hot migration of the virtual function through the device.”), wherein the virtual hardware is one of the plurality of virtual hardware (Pg. 20, “Exemplarily, the pass-through device may be a virtual device (supporting Linux Endpoint Framework) based on application specific integrated circuit (ASIC) or field programmable gate array (FPGA) or other virtual devices with computing resources and storage resource equipment, etc.…”); send a to-be-migrated instruction to the kernel space, wherein the to-be-migrated instruction records to-be-migrated data, wherein the to-be-migrated data is related to the specific virtual hardware (Pg. 27, “For example, the source-side virtual machine may send a first instruction through the PF driver to instruct the source-end pass-through device to start the memory mark dirty mode, and the PF driver notifies all source-end pass-through devices of the first instruction through the source-end pass-through device driver. 
Among them, the memory mark dirty mode (also known as the dirty mode) refers to the source-end pass-through device writing data to the memory of the source-end virtual machine through DMA, and the source-end pass-through device marks the memory address where the data is located.”); take out the to-be-migrated data from the specific virtual hardware and send the to-be-migrated data to the kernel space (Pg. 21, “It should be understood that the source-end pass-through device can continuously write data to the memory of the source-end virtual machine through DMA. Before the source-end pass-through device stops running, the dirty page information is continuously updated information.”); and send an end signal to the kernel space after the to-be-migrated data is sent (Pg. 29, “Optionally, after step 907, the source-end virtual machine may send a shutdown instruction to the source-end pass-through device through the PF driver, and the shutdown instruction is used to instruct the source-end pass-through device to stop running. For example, the stop command above can instruct the source through device to stop receiving or sending data on the data plane. Specifically, the source-end virtual machine may send a pre-stop instruction to the PF driver, and the PF driver may send the pre-stop instruction to the source-end pass-through device through the aforementioned PF channel.”).
Gong fails to teach the migration request specifying specific virtual hardware and leaving a specific virtual function unused while maintaining operation of other virtual functions.
However, Jiang teaches wherein the live migration activation request specifies specific virtual hardware (¶ [0015], “For example, when a respective one of the virtual machines 121a-121n is being executed on a source computing device 105 associated with a GPU 115 and a migration request is initiated, the GPU 115 is instructed, in response the migration request, to identify and preempt the respective one of the virtual functions 119a-119n executing during the time interval 123a in which the migration request occurred, and save the context associated with the preempted virtual function 119a.” ¶ [0025], “Beginning with block 403, when the migration system 200 (FIG. 2) is invoked to perform a live migration of a virtual machine 121a-121n (FIG. 1) associated with a corresponding virtual function 119a-119n (FIG. 1) at a GPU 115 (FIG. 1) running an engine execution, the GPU is configured to obtain a migration request from a client over a network or local management utility.”) and leave a specific virtual function unused while maintaining operation of other virtual functions, wherein the specific virtual function corresponds to the specific virtual hardware (¶ [0017], “Accordingly, various embodiments of the present disclosure provide migration of states associated with virtual functions 119a-119n from a source GPU 115 to a destination GPU without the requirement of saving entire all contexts associated with each of the virtual functions 119a-119n to memory before migration, thereby increasing migration speed and reducing migration overhead associated with the migration of virtual machines 121a-121n from one host computing device 105 to another.” ¶ [0019]-[0020], “The virtual function 119a-119n are assigned to a corresponding virtual machines 121a-121n. In the illustrated embodiment, the virtual function 119a is assigned to the virtual machine 121a, the virtual function 119b is assigned to the virtual machine 121b, and the virtual function 119n is assigned to the virtual machine 121n. The virtual functions 119a-119n then serve as the GPU and provide GPU functionality to the corresponding virtual machines 121a-121n…Upon receipt of a migration request, the source GPU is configured to extract a set of information corresponding to the state of the preempted virtual function 119a. For example, when a virtual function 119a is executing and migration is started, the GPU is instructed to preempt the virtual function 119a and save the context of the virtual function 119a at the point of execution corresponding to where the command is paused or interrupted, the status associated with the virtual function 119a, the status of the preempted command, and information associated with resuming an execution of the interrupted command (i.e., information critical for the engine to restart).”).
Gong and Jiang are considered to be analogous to the claimed invention because they are in the same field of migrating virtual machines and/or virtual resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the migration request of Gong to specify specific virtual hardware as taught by Jiang to arrive at the claimed invention. This modification would have been reasonable under MPEP § 2143 as both references migrate data associated with virtual hardware.
Jiang also teaches take out the to-be-migrated data in batches from the specific virtual hardware (¶ [0020]-[0021], “Saved information also includes metadata that was saved into cache 219 and system memory related to the command buffer 217, register data 221, information in the system memory 223 and subsequent engine execution information (i.e., information relating to the execution of subsequent commands or instructions for continued execution after resuming the preempted virtual function). For example, the saved information can be associated with the interrupted command and a subsequent command. This information is transferred into a memory such as a cache. Once the data required for resuming the interrupted command associated with the virtual function 119a at the source computing device 201 is saved and the migration is initiated, the host driver instructs the GPU to extract all of the saved information and transfer only the data required to re-initialize the virtual function 119a to the destination machine 205. The destination machine 205 is associated with a corresponding physical function 204. The extracted data is then restored iteratively into the destination machine 205. The destination machine 205 performs an initialization to initialize a virtual function 119t at the destination machine 205 to be in the same state as the source machine 201 to be executable.”).
Gong and Jiang fail to teach a read only memory that stores firmware, wherein the firmware comprises a physical function.
However, Nelogal teaches a read only memory (ROM) that stores firmware, wherein the firmware comprises: a physical function (Col. 9, lines 27-37, “If it is determined that I/O requests for the boot virtual function have not been received, method 400 may proceed to 410, where the firmware implements handling for configuring the physical function of the controller.” Col. 10, lines 11-21, “A physical function may implement firmware from a common option ROM, such as an expansion ROM, that can find storage resources behind each virtual function present on the controller.”).
Gong, Jiang, and Nelogal are all considered to be analogous to the claimed invention because they are all in the same field of migrating virtual machines and/or virtual resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the ROM of Gong and Jiang with the ROM and firmware of Nelogal to arrive at the claimed invention. This substitution would have been reasonable under MPEP § 2143 as all the references deal with virtual machines and virtual resources.
As per claim 9, Gong, Jiang, and Nelogal teach the system of claim 1. Jiang also teaches wherein the to-be-migrated data comprises one of drive program information, firmware information, hardware information, context information and state information (¶ [0020], “Upon receipt of a migration request, the source GPU is configured to extract a set of information corresponding to the state of the preempted virtual function 119a. For example, when a virtual function 119a is executing and migration is started, the GPU is instructed to preempt the virtual function 119a and save the context of the virtual function 119a at the point of execution corresponding to where the command is paused or interrupted, the status associated with the virtual function 119a, the status of the preempted command, and information associated with resuming an execution of the interrupted command (i.e., information critical for the engine to restart). Saved information also includes metadata that was saved into cache 219 and system memory related to the command buffer 217, register data 221, information in the system memory 223 and subsequent engine execution information (i.e., information relating to the execution of subsequent commands or instructions for continued execution after resuming the preempted virtual function).”).
As per claim 10, Gong, Jiang, and Nelogal teach the system of claim 1. Gong also teaches wherein the kernel space is connected to a user space that carries a user virtual machine, wherein the user virtual machine initializes the live migration activation request (Pg. 19-20, “The host 610 may include a user space 611 and a kernel space 612. The user space 611 may include virtual operations. The system simulator Qemu, where Qemu is a virtualized simulator realized by pure software, through Qemu enables guestOS to interact with the hard disk, network card, CPU, CD-ROM, audio device, USB and other devices on the physical host. In the system architecture shown in FIG. 6, Qemu can receive the hot migration instruction sent by the user and send the hot migration instruction from the user space of the physical host to the kernel space 612.”).
As per claim 13, Gong, Jiang, and Nelogal teach the system of claim 10. Gong also teaches wherein the kernel space carries a physical function drive program (Pg. 20, “The kernel space 612 may include a VFIO2 module and physical functions (PF) drivers.”), and the user space includes a user end kernel space, wherein the user end kernel space carries a virtual function drive program (Pg. 17, “The Guest OS inside the virtual machine can load the corresponding VF driver to access the VF device.”), the physical function drive program receives the live migration activation request from the user virtual machine (Pg. 21, “In other words, the source virtual machine can send a migration instruction to the PF driver, and the PF driver transparently transmits the acquired migration instruction to the source-side pass-through device.”) and sends the live migration activation request to the virtual function drive program, and the virtual function drive program stops executing tasks from the user space temporarily (Pg. 29, “Among them, sufficient convergence can mean that when there is very little dirty page data to be copied, the source virtual machine can stop running and copy the remaining small part of the dirty page data to the destination virtual machine at one time…Optionally, after step 907, the source-end virtual machine may send a shutdown instruction to the source-end pass-through device through the PF driver, and the shutdown instruction is used to instruct the source-end pass-through device to stop running.”).
As per claim 16, Gong, Jiang, and Nelogal teach the system of claim 13. Gong also teaches wherein when execution of tasks from the user space is stopped temporarily, the virtual function drive program notifies the physical function drive program, and then the physical function drive program sends the live migration activation request to the physical function (Pg. 21, “In other words, the source virtual machine can send a migration instruction to the PF driver, and the PF driver transparently transmits the acquired migration instruction to the source-side pass-through device.” Pg. 25, “In the embodiment of the present application, the migration instruction and the dirty page information may be transmitted through the PF driver. Wherein, the pass-through device configured by the virtual machine may be multiple virtual pass-through devices running in one EP, that is, it may be different VF devices virtual in the EP, and the VF information management module 810 may be used to manage different VF devices. For example, through a VF, the corresponding PF device can be determined…”).
Claim(s) 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Gong, Jiang, and Nelogal as applied to claim 1 above, and further in view of Pershin et al. (US Pub. No. 2015/0378759 A1 hereinafter Pershin).
As per claim 2, Gong, Jiang, and Nelogal teach the system of claim 1. Gong teaches wherein the hardware comprises a computation apparatus, the specific virtual hardware is a specific virtual computation apparatus (Pg. 20, “Exemplarily, the pass-through device may be a virtual device (supporting Linux Endpoint Framework) based on application specific integrated circuit (ASIC) or field programmable gate array (FPGA) or other virtual devices with computing resources and storage resource equipment, etc.…”). Jiang teaches an intelligence processing apparatus configured to perform a convolution computation of a neural network (¶ [0019], “A hypervisor launches one or more virtual machines 121a-121n for execution on a physical resource such as the GPU 115 that supports the physical function 203. The virtual function 119a-119n are assigned to a corresponding virtual machines 121a-121n. In the illustrated embodiment, the virtual function 119a is assigned to the virtual machine 121a, the virtual function 119b is assigned to the virtual machine 121b, and the virtual function 119n is assigned to the virtual machine 121n. The virtual functions 119a-119n then serve as the GPU and provide GPU functionality to the corresponding virtual machines 121a-121n. The virtualized GPU is therefore shared across many virtual machines 121a-121n.” Examiner’s Note: One of ordinary skill in the art would recognize that a GPU is capable of performing a convolution computation of a neural network.). Nelogal teaches wherein the shared storage unit is virtualized to a plurality of virtual shared storage units (Col. 8, lines 23-39, “In addition, storage controller 122 may allocate virtual function 310 to hypervisor 106 and allocate selected storage resources 124 to virtual function 310, in accordance with SR-IOV, as shown in FIG. 3. Storage resources 124 allocated to virtual function 310 may thus be used by hypervisor 106. For example, some server platforms support physical storage resources in the rear thereof for booting purposes, and such storage resources may be allocated to virtual function 310.” See also Fig. 3.).
Although Gong, Jiang, and Nelogal teach a shared storage unit virtualized to a plurality of virtual shared storage units, they fail to explicitly show the relationship between the shared storage unit, the virtual computation apparatus, and the to-be-migrated data.
However, Pershin teaches a shared storage unit configured to temporarily store a computation intermediate value of the convolution computation, wherein the shared storage unit is virtualized to a plurality of virtual shared storage units, and the specific virtual computation apparatus corresponds to one virtual shared storage unit, wherein the to-be-migrated data includes the computation intermediate value stored in the virtual shared storage unit (¶ [0023], “In turn, the source SRM 112 may instruct the source VM manager 116 to migrate each of the VM(s), e.g., in a sequence. The source VM manager 116 then instructs the source host 114 to migrate the VM to the destination host 134. The VM migration engine 120 of the source host 114 then interacts with the VM migration engine 140 at the destination host 134 to transfer data for the VM—e.g., data stored in virtual memory, data stored in a virtual disk, or both—from the source host 114 to the destination host 134.” ¶ [0029], “Each VM also includes virtual storage for storing data related to the VM. The virtual storage can include, e.g., virtual memory 243 and a virtual disk 244. The guest OS 242 and/or the user applications 241 can store data to and access data from the virtual memory 243 and the virtual disk 244. The hypervisor 220 can map the VM's virtual storage to hardware storage (e.g., hardware memory 216, local storage unit, and/or a shared storage unit 250). For example, when the guest OS 242 writes data to virtual memory 243, the hypervisor 220 can store the data in a corresponding location in hardware memory 216 based on the mapping.”).
Gong, Jiang, Nelogal, and Pershin are all considered to be analogous to the claimed invention because they are all in the same field of migrating virtual machines and/or virtual resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gong, Jiang, and Nelogal with the teachings of Pershin to show the relationship between the shared storage unit, virtual computation apparatus, and the to-be-migrated data.
As per claim 3, Gong, Jiang, Nelogal, and Pershin teach the system of claim 2. Jiang teaches the specific virtual computation apparatus (¶ [0019], “A hypervisor launches one or more virtual machines 121a-121n for execution on a physical resource such as the GPU 115 that supports the physical function 203. The virtual function 119a-119n are assigned to a corresponding virtual machines 121a-121n. In the illustrated embodiment, the virtual function 119a is assigned to the virtual machine 121a, the virtual function 119b is assigned to the virtual machine 121b, and the virtual function 119n is assigned to the virtual machine 121n. The virtual functions 119a-119n then serve as the GPU and provide GPU functionality to the corresponding virtual machines 121a-121n. The virtualized GPU is therefore shared across many virtual machines 121a-121n.”). Pershin teaches a storage unit core that is virtualized to a plurality of storage unit cores, and the virtual machine is configured with one virtual storage unit core, wherein the to-be-migrated data includes data stored in the virtual storage unit core (¶ [0023], “In turn, the source SRM 112 may instruct the source VM manager 116 to migrate each of the VM(s), e.g., in a sequence. The source VM manager 116 then instructs the source host 114 to migrate the VM to the destination host 134. The VM migration engine 120 of the source host 114 then interacts with the VM migration engine 140 at the destination host 134 to transfer data for the VM—e.g., data stored in virtual memory, data stored in a virtual disk, or both—from the source host 114 to the destination host 134.” ¶ [0029], “Each VM also includes virtual storage for storing data related to the VM. The virtual storage can include, e.g., virtual memory 243 and a virtual disk 244. The guest OS 242 and/or the user applications 241 can store data to and access data from the virtual memory 243 and the virtual disk 244. The hypervisor 220 can map the VM's virtual storage to hardware storage (e.g., hardware memory 216, local storage unit, and/or a shared storage unit 250). For example, when the guest OS 242 writes data to virtual memory 243, the hypervisor 220 can store the data in a corresponding location in hardware memory 216 based on the mapping.”).
As per claim 4, Gong, Jiang, and Nelogal teach the system of claim 1. Nelogal teaches wherein the hardware comprises a storage apparatus that is virtualized to a plurality of virtual storage apparatuses (Col. 8, lines 23-39, “In addition, storage controller 122 may allocate virtual function 310 to hypervisor 106 and allocate selected storage resources 124 to virtual function 310, in accordance with SR-IOV, as shown in FIG. 3. Storage resources 124 allocated to virtual function 310 may thus be used by hypervisor 106. For example, some server platforms support physical storage resources in the rear thereof for booting purposes, and such storage resources may be allocated to virtual function 310.” See also Fig. 3.).
Gong, Jiang, and Nelogal fail to explicitly teach that the to-be-migrated data is stored in the virtual storage apparatus.
However, Pershin teaches the specific virtual hardware is a specific virtual storage apparatus, and the to-be-migrated data includes data stored in the specific virtual storage apparatus (¶ [0023], “In turn, the source SRM 112 may instruct the source VM manager 116 to migrate each of the VM(s), e.g., in a sequence. The source VM manager 116 then instructs the source host 114 to migrate the VM to the destination host 134. The VM migration engine 120 of the source host 114 then interacts with the VM migration engine 140 at the destination host 134 to transfer data for the VM—e.g., data stored in virtual memory, data stored in a virtual disk, or both—from the source host 114 to the destination host 134.” ¶ [0029], “Each VM also includes virtual storage for storing data related to the VM. The virtual storage can include, e.g., virtual memory 243 and a virtual disk 244. The guest OS 242 and/or the user applications 241 can store data to and access data from the virtual memory 243 and the virtual disk 244. The hypervisor 220 can map the VM's virtual storage to hardware storage (e.g., hardware memory 216, local storage unit, and/or a shared storage unit 250). For example, when the guest OS 242 writes data to virtual memory 243, the hypervisor 220 can store the data in a corresponding location in hardware memory 216 based on the mapping.”).
Refer to claim 2 for motivation to combine.
Claim(s) 17 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang in view of Dong (US Pub. No. 2012/0254862 A1), and further in view of Nelogal.
As per claim 17, Jiang teaches a system for realizing live migration, the system connected to a kernel space (¶ [0018], “The migration system 200 represents a live migration of a respective one of the virtual functions 119a-119n executing in corresponding one of the plurality of virtual machines 121a-121n in accordance some embodiments of the GPU 115 shown in FIG. 1.”), the system comprising: a hardware that is virtualized to a plurality of pieces of virtual hardware (¶ [0019], “A hypervisor launches one or more virtual machines 121a-121n for execution on a physical resource such as the GPU 115 that supports the physical function 203. The virtual function 119a-119n are assigned to a corresponding virtual machines 121a-121n. In the illustrated embodiment, the virtual function 119a is assigned to the virtual machine 121a, the virtual function 119b is assigned to the virtual machine 121b, and the virtual function 119n is assigned to the virtual machine 121n. The virtual functions 119a-119n then serve as the GPU and provide GPU functionality to the corresponding virtual machines 121a-121n. The virtualized GPU is therefore shared across many virtual machines 121a-121n.”), wherein the to-be-migrated data is corresponded to a specific virtual hardware, wherein the specific virtual hardware is one of the plurality of virtual hardware (¶ [0020], “The migration system 200 can detect and extract the command stop point associated with a preempted command. Upon receipt of a migration request, the source GPU is configured to extract a set of information corresponding to the state of the preempted virtual function 119a. For example, when a virtual function 119a is executing and migration is started, the GPU is instructed to preempt the virtual function 119a and save the context of the virtual function 119a at the point of execution corresponding to where the command is paused or interrupted, the status associated with the virtual function 119a, the status of the preempted command, and information associated with resuming an execution of the interrupted command (i.e., information critical for the engine to restart).”); and a plurality of virtual functions managed by the physical function, wherein the specific virtual hardware corresponds to one virtual function of the plurality of virtual functions (¶ [0011]-[0013], “The virtual environment implemented on the GPU 115 also provides virtual functions 119a-119n to other virtual components implemented on a physical machine. A single physical function implemented in the GPU 115 is used to support one or more virtual functions…The physical function allocates the virtual functions 119a-119n to different virtual components in the physical machine on a time-sliced basis…In some embodiments, each of the virtual functions 119a-119n shares one or more physical resources of a source computing device 105 with the physical function and other virtual functions 119a-119n.” ¶ [0019], “The source machine 201 implements a hypervisor (not shown) for the physical function 203. Some embodiments of the physical function 203 support multiple virtual functions 119a-119n. A hypervisor launches one or more virtual machines 121a-121n for execution on a physical resource such as the GPU 115 that supports the physical function 203. The virtual function 119a-119n are assigned to a corresponding virtual machines 121a-121n.
In the illustrated embodiment, the virtual function 119a is assigned to the virtual machine 121a, the virtual function 119b is assigned to the virtual machine 121b, and the virtual function 119n is assigned to the virtual machine 121n. The virtual functions 119a-119n then serve as the GPU and provide GPU functionality to the corresponding virtual machines 121a-121n.”), and leave the one virtual function unused, while maintaining operation of other virtual functions (¶ [0017], “Accordingly, various embodiments of the present disclosure provide migration of states associated with virtual functions 119a-119n from a source GPU 115 to a destination GPU without the requirement of saving entire all contexts associated with each of the virtual functions 119a-119n to memory before migration, thereby increasing migration speed and reducing migration overhead associated with the migration of virtual machines 121a-121n from one host computing device 105 to another.” ¶ [0019]-[0020], “The virtual function 119a-119n are assigned to a corresponding virtual machines 121a-121n. In the illustrated embodiment, the virtual function 119a is assigned to the virtual machine 121a, the virtual function 119b is assigned to the virtual machine 121b, and the virtual function 119n is assigned to the virtual machine 121n. The virtual functions 119a-119n then serve as the GPU and provide GPU functionality to the corresponding virtual machines 121a-121n…Upon receipt of a migration request, the source GPU is configured to extract a set of information corresponding to the state of the preempted virtual function 119a. For example, when a virtual function 119a is executing and migration is started, the GPU is instructed to preempt the virtual function 119a and save the context of the virtual function 119a at the point of execution corresponding to where the command is paused or interrupted, the status associated with the virtual function 119a, the status of the preempted command, and information associated with resuming an execution of the interrupted command (i.e., information critical for the engine to restart).”). Jiang also teaches a read only memory (ROM) (¶ [0026], “Such storage media can include, but is not limited to, optical media (e.g., compact disc (CD), digital versatile disc (DVD), Blu-Ray disc), magnetic media (e.g., floppy disc, magnetic tape, or magnetic hard drive), volatile memory (e.g., random access memory (RAM) or cache), non-volatile memory (e.g., read-only memory (ROM) or Flash memory)…”).
Jiang fails to explicitly teach the physical function receiving to-be-migrated data and sending the to-be-migrated data to the virtual hardware through the virtual function.
However, Dong teaches a physical function configured to receive to-be-migrated data from the kernel space (¶ [0041], “In block 660, the virtual function state (VF state) may be saved by accessing the state information stored in the S_States…In one embodiment, the software states or the shadow states (S_States) of the VF 222 may be shared with the PFD 248. In other embodiment, the software states or the shadow states (S-States) of the VF 222 may be read using the PFD/VFD communication channel.” ¶ [0031], “In one embodiment, the PFD 248 may directly access the PF resources provided in the computing platform 110-K and the PFD 248 may configure and manage the virtual functions VF 222 and 223 through trap and emulating accesses from VFDs 258-1 to 258-K.”) and wherein the physical function is further configured to: send to-be-migrated data to the specific virtual hardware through the virtual function, and send an end signal to the kernel space after the to-be-migrated data is sent (¶ [0043], “In block 720, the MM 246 and PFD 248 may restore the VCPU and device states of the VF 222 in the target platform 120-1 or a target VM 250-K, for example, by restoring the guest memory contents, which may include the S-States as well. In one embodiment, the hardware device states may be restored. In one embodiment, the PFD 248 may directly write the visible states VS 227 of the VF 222 however, the invisible states IS 228 may be readily available from the S_States created by the self-emulation layer 249.”).
Jiang and Dong are considered to be analogous to the claimed invention because they are in the same field of migrating virtual machines and/or virtual resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the physical function of Jiang with the functionality of the physical function of Dong to arrive at the claimed invention. This modification would have been reasonable under MPEP § 2143 because both references migrate data associated with virtual hardware and virtual functions.
Jiang and Dong fail to teach a read only memory storing firmware comprising a physical function.
However, Nelogal teaches a read only memory (ROM) that stores firmware, wherein the firmware comprises: a physical function (Col. 9, lines 27-37, “If it is determined that I/O requests for the boot virtual function have not been received, method 400 may proceed to 410, where the firmware implements handling for configuring the physical function of the controller.” Col. 10, lines 11-21, “A physical function may implement firmware from a common option ROM, such as an expansion ROM, that can find storage resources behind each virtual function present on the controller.”).
Jiang, Dong, and Nelogal are all considered to be analogous to the claimed invention because they are all in the same field of migrating virtual machines and/or virtual resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the ROM of Jiang with the ROM and firmware of Nelogal to arrive at the claimed invention. This substitution would have been reasonable under MPEP § 2143 as all the references deal with virtual machines and virtual resources.
As per claim 23, Jiang, Dong, and Nelogal teach the system of claim 17. Jiang also teaches wherein to-be-migrated data includes one of drive program information, firmware information, hardware information, context and state information (¶ [0020], “Upon receipt of a migration request, the source GPU is configured to extract a set of information corresponding to the state of the preempted virtual function 119a. For example, when a virtual function 119a is executing and migration is started, the GPU is instructed to preempt the virtual function 119a and save the context of the virtual function 119a at the point of execution corresponding to where the command is paused or interrupted, the status associated with the virtual function 119a, the status of the preempted command, and information associated with resuming an execution of the interrupted command (i.e., information critical for the engine to restart). Saved information also includes metadata that was saved into cache 219 and system memory related to the command buffer 217, register data 221, information in the system memory 223 and subsequent engine execution information (i.e., information relating to the execution of subsequent commands or instructions for continued execution after resuming the preempted virtual function).”).
Claim(s) 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang, Dong, and Nelogal as applied to claim 17 above, and further in view of Pershin.
As per claim 18, Jiang, Dong, and Nelogal teach the system of claim 17. Jiang teaches wherein the hardware comprises a computation apparatus, the specific virtual hardware is a specific virtual computation apparatus, and the computation apparatus comprises: an intelligence processing apparatus configured to perform a convolution computation of a neural network (¶ [0019], “A hypervisor launches one or more virtual machines 121a-121n for execution on a physical resource such as the GPU 115 that supports the physical function 203. The virtual function 119a-119n are assigned to a corresponding virtual machines 121a-121n. In the illustrated embodiment, the virtual function 119a is assigned to the virtual machine 121a, the virtual function 119b is assigned to the virtual machine 121b, and the virtual function 119n is assigned to the virtual machine 121n. The virtual functions 119a-119n then serve as the GPU and provide GPU functionality to the corresponding virtual machines 121a-121n. The virtualized GPU is therefore shared across many virtual machines 121a-121n.” Examiner’s Note: One of ordinary skill in the art would recognize that a GPU is capable of performing a convolution computation of a neural network.). Nelogal teaches wherein the shared storage unit is virtualized to a plurality of virtual shared storage units (Col. 8, lines 23-39, “In addition, storage controller 122 may allocate virtual function 310 to hypervisor 106 and allocate selected storage resources 124 to virtual function 310, in accordance with SR-IOV, as shown in FIG. 3. Storage resources 124 allocated to virtual function 310 may thus be used by hypervisor 106. For example, some server platforms support physical storage resources in the rear thereof for booting purposes, and such storage resources may be allocated to virtual function 310.” See also Fig. 3.).
Although Jiang, Dong, and Nelogal teach a shared storage unit virtualized to a plurality of virtual shared storage units, they fail to explicitly show the relationship between the shared storage unit, the virtual computation apparatus, and the to-be-migrated data.
However, Pershin teaches a shared storage unit configured to temporarily store a computation intermediate value of the convolution computation, wherein the shared storage unit is virtualized to a plurality of virtual shared storage units, and the specific virtual computation apparatus corresponds to one virtual shared storage unit, wherein the to-be-migrated data includes the computation intermediate value stored in the virtual shared storage unit (¶ [0023], “In turn, the source SRM 112 may instruct the source VM manager 116 to migrate each of the VM(s), e.g., in a sequence. The source VM manager 116 then instructs the source host 114 to migrate the VM to the destination host 134. The VM migration engine 120 of the source host 114 then interacts with the VM migration engine 140 at the destination host 134 to transfer data for the VM—e.g., data stored in virtual memory, data stored in a virtual disk, or both—from the source host 114 to the destination host 134.” ¶ [0029], “Each VM also includes virtual storage for storing data related to the VM. The virtual storage can include, e.g., virtual memory 243 and a virtual disk 244. The guest OS 242 and/or the user applications 241 can store data to and access data from the virtual memory 243 and the virtual disk 244. The hypervisor 220 can map the VM's virtual storage to hardware storage (e.g., hardware memory 216, local storage unit, and/or a shared storage unit 250). For example, when the guest OS 242 writes data to virtual memory 243, the hypervisor 220 can store the data in a corresponding location in hardware memory 216 based on the mapping.”).
Jiang, Dong, Nelogal, and Pershin are all considered to be analogous to the claimed invention because they are all in the same field of migrating virtual machines and/or virtual resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jiang, Dong, and Nelogal with the teachings of Pershin to show the relationship between the shared storage unit, virtual computation apparatus, and the to-be-migrated data.
As per claim 19, Jiang, Dong, Nelogal, and Pershin teach the system of claim 18. Jiang teaches the specific virtual computation apparatus (¶ [0019], “A hypervisor launches one or more virtual machines 121a-121n for execution on a physical resource such as the GPU 115 that supports the physical function 203. The virtual function 119a-119n are assigned to a corresponding virtual machines 121a-121n. In the illustrated embodiment, the virtual function 119a is assigned to the virtual machine 121a, the virtual function 119b is assigned to the virtual machine 121b, and the virtual function 119n is assigned to the virtual machine 121n. The virtual functions 119a-119n then serve as the GPU and provide GPU functionality to the corresponding virtual machines 121a-121n. The virtualized GPU is therefore shared across many virtual machines 121a-121n.”). Pershin teaches a storage unit core that is virtualized to a plurality of storage unit cores, and the virtual machine is configured with one virtual storage unit core, wherein the virtual function stores corresponding data in the to-be-migrated data to the virtual storage unit core (¶ [0023], “In turn, the source SRM 112 may instruct the source VM manager 116 to migrate each of the VM(s), e.g., in a sequence. The source VM manager 116 then instructs the source host 114 to migrate the VM to the destination host 134. The VM migration engine 120 of the source host 114 then interacts with the VM migration engine 140 at the destination host 134 to transfer data for the VM—e.g., data stored in virtual memory, data stored in a virtual disk, or both—from the source host 114 to the destination host 134.” ¶ [0029], “Each VM also includes virtual storage for storing data related to the VM. The virtual storage can include, e.g., virtual memory 243 and a virtual disk 244. The guest OS 242 and/or the user applications 241 can store data to and access data from the virtual memory 243 and the virtual disk 244. The hypervisor 220 can map the VM's virtual storage to hardware storage (e.g., hardware memory 216, local storage unit, and/or a shared storage unit 250). For example, when the guest OS 242 writes data to virtual memory 243, the hypervisor 220 can store the data in a corresponding location in hardware memory 216 based on the mapping.”).
Claim(s) 24 and 26-29 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang, Dong, and Nelogal as applied to claim 17 above, and further in view of Gong.
As per claim 24, Jiang, Dong, and Nelogal teach the system of claim 17.
Jiang, Dong, and Nelogal fail to teach the relationship between user and kernel space as claimed.
However, Gong teaches wherein the kernel space is connected to a user space that carries a user virtual machine, wherein the user virtual machine receives the to-be-migrated data from off-chip and initializes the live migration activation request (Pg. 19-20, “The host 610 may include a user space 611 and a kernel space 612. The user space 611 may include virtual operations. The system simulator Qemu, where Qemu is a virtualized simulator realized by pure software, through Qemu enables guestOS to interact with the hard disk, network card, CPU, CD-ROM, audio device, USB and other devices on the physical host. In the system architecture shown in FIG. 6, Qemu can receive the hot migration instruction sent by the user and send the hot migration instruction from the user space of the physical host to the kernel space 612.”).
Jiang, Dong, Nelogal, and Gong are all considered to be analogous to the claimed invention because they are all in the same field of migrating virtual machines and/or virtual resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Jiang, Dong, and Nelogal with Gong to show the relationship between user space and kernel space when initiating a live migration.
As per claim 26, Jiang, Dong, Nelogal, and Gong teach the system of claim 24. Dong teaches the physical function drive program configured to receive the to-be-migrated data from the user virtual machine and send the to-be-migrated data to the physical function in response to the live migration activation request (¶ [0041], “In block 660, the virtual function state (VF state) may be saved by accessing the state information stored in the S_States…In one embodiment, the software states or the shadow states (S_States) of the VF 222 may be shared with the PFD 248. In other embodiment, the software states or the shadow states (S-States) of the VF 222 may be read using the PFD/VFD communication channel.” ¶ [0031], “In one embodiment, the PFD 248 may directly access the PF resources provided in the computing platform 110-K and the PFD 248 may configure and manage the virtual functions VF 222 and 223 through trap and emulating accesses from VFDs 258-1 to 258-K.”). Gong teaches wherein the kernel space carries a physical function drive program (Pg. 20, “The kernel space 612 may include a VFIO2 module and physical functions (PF) drivers.”).
As per claim 27, Jiang, Dong, Nelogal, and Gong teach the system of claim 26. Dong also teaches wherein the physical function drive program sends an end signal to the user virtual machine (¶ [0040], “In block 620, the virtual CPUs (VCPUs) associated with the VM 250-1 may be paused. In one embodiment, the PFD 248 may pause the VCPU. In block 640, the PFD 248 may be invoked to pause the virtual function VF 222, which may be associated with the VM 250-1.”).
As per claim 28, Jiang, Dong, Nelogal, and Gong teach the system of claim 27. Dong teaches the user virtual machine notifies the virtual function drive program that the live migration has been done in response to the end signal; the virtual function drive program receives tasks from the user space; the task controls the specific virtual hardware (¶ [0043]-[0044], “In block 720, the MM 246 and PFD 248 may restore the VCPU and device states of the VF 222 in the target platform 120-1 or a target VM 250-K, for example, by restoring the guest memory contents, which may include the S-States as well. In one embodiment, the hardware device states may be restored. In one embodiment, the PFD 248 may directly write the visible states VS 227 of the VF 222 however, the invisible states IS 228 may be readily available from the S_States created by the self-emulation layer 249. In one embodiment, the self-emulation layer 249 may present the in-memory S-States to the up-level VF driver in the target VM 250-K or a target VM within the computing platform 120-1 to maintain state continuity for the VF driver in the target VM 250-K or a target VM within the computing platform 120-1. In one embodiment, the hardware device states may be different from that of the S-States in the computing platform 120-1 as the in-memory S-States are presented to the up-level VF driver such as VFD 258-K in the target VM 250-K. In one embodiment, a self-convergence technique may be used to quickly converge or synchronize the S-States with the device states (invisible states) in the VF 223 of the target VM 250-K. The self-convergence technique is described below. In block 750, the VCPU may be resumed to continue execution in the target VM 250-K after the migration of the VM 250-1 is completed.”). Gong teaches wherein the user space includes a user end kernel space that carries a virtual function drive program (Pg. 17, “The Guest OS inside the virtual machine can load the corresponding VF driver to access the VF device.”).
As per claim 29, Jiang, Dong, Nelogal, and Gong teach the system of claim 28. Gong also teaches wherein the user virtual machine changes state of base address register in response to the end signal, and the base address register points to the specific virtual hardware (Pg. 20, “Qemu can include virtual function input/output (VFIO) and virtual base address register (vBar). The VFIO in Qemu is used to call various interfaces provided by the kernel space VFIO2 to complete the pass-through The presentation and function of the device; the vbar in Qemu includes the area mapped to the device specific region allocated by the VFIO2 module of the kernel space 612 for each pass-through device, for users to send migration instructions and information queries to the pass-through device.” Pg. 22, “The state information of the device at the time when it stops running. The state information includes the information of the register and the memory descriptor of each virtual pass-through device. It should be noted that the register information may refer to the state information of the register when the source-end pass-through device stops running, for example, it may be the index of the register's receive queue, send queue, and control queue in the source-end pass-through device.”).
Claim(s) 32-33 are rejected under 35 U.S.C. 103 as being unpatentable over Gong in view of Jiang.
As per claim 32, Gong teaches a method for implementing a live migration storage path in a system comprising hardware virtualized to a plurality of pieces of virtual hardware (Pg. 17, “Virtual machine pass-through technology refers to supporting virtual machines to bypass the hypervisor layer and directly access physical I/O devices, so that virtual machines can obtain performance close to physical machines. SR-IOV pass-through technology is a hardware-based virtualization solution. Through SR-IOV technology, virtual machines can be directly connected to physical network cards, and multiple virtual machines can efficiently share physical network cards.” Pg. 20, “Exemplarily, the pass-through device may be a virtual device (supporting Linux Endpoint Framework) based on application specific integrated circuit (ASIC) or field programmable gate array (FPGA) or other virtual devices with computing resources and storage resource equipment, etc.…”), the method comprising: receiving a live migration request (Pg. 20, “The kernel space 612 may include a VFIO2 module and physical functions (PF) drivers. Among them, VFIO2 can be used to provide an interface for accessing hardware devices to the user space. For example, VFIO2 is used to provide a unified abstract interface for direct hot migration to the user space 611 to shield the underlying hardware differences…In the SR-IOV mode, the PF driver in the system architecture 600 can realize the transfer of the hot migration instruction to the endpoint device, thereby realizing the hot migration of the virtual function through the device.”), wherein the virtual hardware is one of the plurality of virtual hardware (Pg. 20, “Exemplarily, the pass-through device may be a virtual device (supporting Linux Endpoint Framework) based on application specific integrated circuit (ASIC) or field programmable gate array (FPGA) or other virtual devices with computing resources and storage resource equipment, etc.…”), and sending to-be-migrated data from the specific virtual hardware (Pg. 21, “It should be understood that the source-end pass-through device can continuously write data to the memory of the source-end virtual machine through DMA. Before the source-end pass-through device stops running, the dirty page information is continuously updated information.”); and sending an end signal after the to-be-migrated data is sent (Pg. 29, “Optionally, after step 907, the source-end virtual machine may send a shutdown instruction to the source-end pass-through device through the PF driver, and the shutdown instruction is used to instruct the source-end pass-through device to stop running. For example, the stop command above can instruct the source through device to stop receiving or sending data on the data plane. Specifically, the source-end virtual machine may send a pre-stop instruction to the PF driver, and the PF driver may send the pre-stop instruction to the source-end pass-through device through the aforementioned PF channel.”).
Gong fails to teach the migration request specifying virtual hardware and leaving a specific virtual function unused.
However, Jiang teaches receiving a live migration request, which specifies specific virtual hardware (¶ [0015], “For example, when a respective one of the virtual machines 121a-121n is being executed on a source computing device 105 associated with a GPU 115 and a migration request is initiated, the GPU 115 is instructed, in response the migration request, to identify and preempt the respective one of the virtual functions 119a-119n executing during the time interval 123a in which the migration request occurred, and save the context associated with the preempted virtual function 119a.” ¶ [0025], “Beginning with block 403, when the migration system 200 (FIG. 2) is invoked to perform a live migration of a virtual machine 121a-121n (FIG. 1) associated with a corresponding virtual function 119a-119n (FIG. 1) at a GPU 115 (FIG. 1) running an engine execution, the GPU is configured to obtain a migration request from a client over a network or local management utility.”) and leaving a specific virtual function unused, wherein the specific virtual function is corresponded to the specific virtual hardware (¶ [0017], “Accordingly, various embodiments of the present disclosure provide migration of states associated with virtual functions 119a-119n from a source GPU 115 to a destination GPU without the requirement of saving entire all contexts associated with each of the virtual functions 119a-119n to memory before migration, thereby increasing migration speed and reducing migration overhead associated with the migration of virtual machines 121a-121n from one host computing device 105 to another.” ¶ [0019]-[0020], “The virtual function 119a-119n are assigned to a corresponding virtual machines 121a-121n. In the illustrated embodiment, the virtual function 119a is assigned to the virtual machine 121a, the virtual function 119b is assigned to the virtual machine 121b, and the virtual function 119n is assigned to the virtual machine 121n. The virtual functions 119a-119n then serve as the GPU and provide GPU functionality to the corresponding virtual machines 121a-121n…Upon receipt of a migration request, the source GPU is configured to extract a set of information corresponding to the state of the preempted virtual function 119a. For example, when a virtual function 119a is executing and migration is started, the GPU is instructed to preempt the virtual function 119a and save the context of the virtual function 119a at the point of execution corresponding to where the command is paused or interrupted, the status associated with the virtual function 119a, the status of the preempted command, and information associated with resuming an execution of the interrupted command (i.e., information critical for the engine to restart).”).
Gong and Jiang are considered to be analogous to the claimed invention because they are in the same field of migrating virtual machines and/or virtual resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the migration request of Gong to specify specific virtual hardware as taught by Jiang to arrive at the claimed invention. This modification would have been reasonable under MPEP § 2143 as both references migrate data associated with virtual hardware.
Jiang also teaches sending the to-be-migrated data in batches from the specific virtual hardware (¶ [0020]-[0021], “Saved information also includes metadata that was saved into cache 219 and system memory related to the command buffer 217, register data 221, information in the system memory 223 and subsequent engine execution information (i.e., information relating to the execution of subsequent commands or instructions for continued execution after resuming the preempted virtual function). For example, the saved information can be associated with the interrupted command and a subsequent command. This information is transferred into a memory such as a cache. Once the data required for resuming the interrupted command associated with the virtual function 119a at the source computing device 201 is saved and the migration is initiated, the host driver instructs the GPU to extract all of the saved information and transfer only the data required to re-initialize the virtual function 119a to the destination machine 205. The destination machine 205 is associated with a corresponding physical function 204. The extracted data is then restored iteratively into the destination machine 205. The destination machine 205 performs an initialization to initialize a virtual function 119t at the destination machine 205 to be in the same state as the source machine 201 to be executable.”).
As per claim 33, Gong and Jiang teach the method of claim 32. Gong teaches wherein the hardware is one of a computation apparatus, a storage apparatus, a video encoding and decoding apparatus and a JPEG encoding and decoding apparatus of an artificial intelligence on-chip system (Pg. 20, “Exemplarily, the pass-through device may be a virtual device (supporting Linux Endpoint Framework) based on application specific integrated circuit (ASIC) or field programmable gate array (FPGA) or other virtual devices with computing resources and storage resource equipment, etc.…”).
Response to Arguments
Applicant’s arguments filed 12/26/2025 have been fully considered but they are not persuasive. Applicant argues that none of the references used to reject the independent claims teach the limitation of “sending data in batches.” However, Examiner directs Applicant to various paragraphs of the Jiang reference that teach “iteratively” restoring the virtual function on the destination machine (See Jiang para. 0017, 0021, and 0025). Specifically, para. 0021 and 0025 of Jiang explain that “The extracted data [from the virtual function] is then restored iteratively into the destination machine…”; such data can include cache data, register data, data in system memory, etc. Furthermore, claim 17 of Jiang also highlights this iterative transfer of data, stating “wherein the transferring further includes: iteratively transferring data associated with a command buffer, data associated with internal SRAM, a set of register data, and data associated with system memory.” One of ordinary skill in the art would recognize that an iterative restoration of data would consist of portions of data being restored one portion after another. In other words, one iteration of restored data is equivalent to one batch of to-be-migrated data as claimed in the instant application. Additionally, Applicant has amended independent claims 1 and 17 with the limitation of “leave a specific virtual function unused…”; however, Jiang teaches this limitation, as indicated in the rejection above, in para. 0017, which states “…present disclosure provide migration of states associated with virtual functions…without the requirement of saving entire all contexts associated with each of the virtual functions…”
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN ROBERT DAKITA EWALD whose telephone number is (703)756-1845. The examiner can normally be reached Monday-Friday: 9:00-5:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lewis Bullock, can be reached at (571)272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197
(toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.D.E./Examiner, Art Unit 2199
/LEWIS A BULLOCK JR/Supervisory Patent Examiner, Art Unit 2199