Prosecution Insights
Last updated: April 19, 2026
Application No. 18/466,800

MULTI-OS HETEROGENEOUS VIRTUAL MACHINE

Non-Final OA: §101, §103
Filed: Sep 13, 2023
Examiner: ANYA, CHARLES E
Art Unit: 2194
Tech Center: 2100 — Computer Architecture & Software
Assignee: Hewlett Packard Enterprise Development LP
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 2m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (727 granted / 891 resolved), +26.6% vs TC avg (above average)
Interview Lift: +33.5% (strong), comparing allowance across resolved cases with vs. without an interview
Typical Timeline: 3y 2m average prosecution; 41 applications currently pending
Career History: 932 total applications across all art units
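The headline figures above are simple ratios of the raw counts shown. As an illustrative sanity check (the function and variable names below are ours, not part of any analytics tool), the career allow rate can be recomputed from the granted/resolved counts, and the displayed delta can be used to back out the approximate Tech Center baseline:

```python
def allowance_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage, rounded to one decimal place."""
    return round(100.0 * granted / resolved, 1)

# 727 granted out of 891 resolved cases -> ~81.6%, displayed as 82%.
career_rate = allowance_rate(727, 891)

# The +26.6% delta vs the Tech Center average implies a baseline of roughly
# 55%; this is an estimate derived from the displayed numbers only.
implied_tc_average = round(career_rate - 26.6, 1)
```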

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 61.1% (+21.1% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Deltas measured against a Tech Center average estimate • Based on career data from 891 resolved cases
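A quick consistency check on the panel above: subtracting each displayed delta from the corresponding statute rate recovers the baseline the deltas are measured against, and every row backs out the same 40% Tech Center average estimate. The dictionary layout below is ours, for illustration only:

```python
# (rate %, delta vs TC average %) exactly as displayed in the panel above.
panel = {
    "§101": (11.2, -28.8),
    "§103": (61.1, +21.1),
    "§102": (6.8, -33.2),
    "§112": (10.4, -29.6),
}

# Baseline implied by each row: examiner rate minus displayed delta.
implied_baseline = {s: round(rate - delta, 1) for s, (rate, delta) in panel.items()}
# All four statutes recover the same 40.0% reference line.
```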

Office Action

§101, §103
DETAILED ACTION

Claims 1-20 are pending in this application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The following guidelines illustrate the preferred layout for the specification of a utility application. These guidelines are suggested for the applicant’s use.

Arrangement of the Specification

As provided in 37 CFR 1.77(b), the specification of a utility application should include the following sections in order. Each of the lettered items should appear in upper case, without underlining or bold type, as a section heading. If no text follows the section heading, the phrase “Not Applicable” should follow the section heading:
(a) TITLE OF THE INVENTION.
(b) CROSS-REFERENCE TO RELATED APPLICATIONS.
(c) STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT.
(d) THE NAMES OF THE PARTIES TO A JOINT RESEARCH AGREEMENT.
(e) INCORPORATION-BY-REFERENCE OF MATERIAL SUBMITTED ON A COMPACT DISC OR AS A TEXT FILE VIA THE OFFICE ELECTRONIC FILING SYSTEM (EFS-WEB).
(f) STATEMENT REGARDING PRIOR DISCLOSURES BY THE INVENTOR OR A JOINT INVENTOR.
(g) BACKGROUND OF THE INVENTION. (1) Field of the Invention. (2) Description of Related Art including information disclosed under 37 CFR 1.97 and 1.98.
(h) BRIEF SUMMARY OF THE INVENTION.
(i) BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S).
(j) DETAILED DESCRIPTION OF THE INVENTION.
(k) CLAIM OR CLAIMS (commencing on a separate sheet).
(l) ABSTRACT OF THE DISCLOSURE (commencing on a separate sheet).
(m) SEQUENCE LISTING. (See MPEP § 2422.03 and 37 CFR 1.821-1.825. A “Sequence Listing” is required on paper if the application discloses a nucleotide or amino acid sequence as defined in 37 CFR 1.821(a) and if the required “Sequence Listing” is not submitted as an electronic document either on compact disc or as a text file via the Office electronic filing system (EFS-Web).)
In this application the Abstract filed on 09/13/23 is not on a separate sheet. The Abstract here also includes the title; the separate sheet should contain only the Abstract.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 11-19 are directed to non-statutory subject matter. Claim 11 is directed to a “non-transitory computer-readable storage medium”. The “non-transitory computer-readable storage medium” is not defined in the specification in a way that excludes non-statutory embodiments. For instance, the “non-transitory computer-readable storage medium” as disclosed in paragraphs 0043/0059 does not exclude a carrier wave, a transmission medium, and the like, and is therefore directed to non-statutory subject matter. Claims 12-19 are rejected for the same reason as claim 11 above. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 11, 12, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2022/0279057 A1 to Connor et al. (hereinafter referred to as Connor’057) in view of U.S. Pub. No. 2018/0181421 A1 to Connor et al.
(hereinafter referred to as Connor’421). As to claim 1, Connor’057 teaches a method comprising: executing, within a virtual machine (VM) (virtual machines (VMs)) operating on a host device, a first operating system (OS) (operating system (OS)) running a client application (applications) (“…Processors 102 can execute an operating system (OS), driver, and/or processes. In some examples, an OS can include Linux®, Windows® Server, FreeBSD®, Android®, MacOS®, iOS®, or any other operating system. One or more of processors 102 can execute processes 104. Processes 104 can include one or more of: applications, virtual machines (VMs), microVMs, containers, microservices, serverless applications, and so forth…A VM can be software that runs an operating system and one or more applications. A VM can be defined by specification, configuration files, virtual disk file, non-volatile random access memory (NVRAM) setting file, and the log file and is backed by the physical resources of a host computing platform. A VM can include an operating system (OS) or application environment that is installed on software, which imitates dedicated hardware. The end user has the same experience on a virtual machine as they would have on dedicated hardware. Specialized software, called a hypervisor, emulates the client or server's CPU, memory, hard disk, network, and other hardware resources completely, enabling virtual machines to share the resources. 
The hypervisor can emulate multiple virtual hardware platforms that are isolated from another, allowing virtual machines to run Linux®, Windows® Server, VMware ESXi, and other operating systems on the same underlying physical host…” paragraphs 0013/0014); receiving, by the first OS, a transaction command from the client application, wherein the transaction command is associated with a remote memory access (RDMA application program interface (API) semantics (e.g., RDMA Verbs)) (“…At least to provide support for reliable transport protocol APIs utilized by an application and utilize a CSP's reliable transport protocol, translation of communications based on a reliable transport protocol utilized by an application to or from CSP's proprietary reliable transport protocol can occur. For example, applications could utilize RDMA application program interface (API) semantics (e.g., RDMA Verbs), whereas traffic could be transmitted by a cloud native reliable transport such as a CSP's proprietary packet transmission protocol. A translator circuitry can receive or intercept RDMA Verbs API commands from an application, driver, or OS and translate the commands to a format consistent with commands of a CSP's reliable transport protocol. For example, the translation circuitry can convert RoCE format API semantics (or other API semantics) to utilize different proprietary protocols. For example, the translation circuitry can indicate receipt of packets on a proprietary protocol using RoCE format API semantics (or other API semantics). 
Accordingly, datacenter applications written to utilize RDMA or other reliable transport protocol can utilize RDMA APIs and do not need to be re-written to utilize cloud native datacenter reliable transport protocols…” paragraphs 0011/0019); converting (translator circuitry/RDMA Intercept 130), by the first OS, the transaction command to one or more network packets (“…At least to provide support for reliable transport protocol APIs utilized by an application and utilize a CSP's reliable transport protocol, translation of communications based on a reliable transport protocol utilized by an application to or from CSP's proprietary reliable transport protocol can occur. For example, applications could utilize RDMA application program interface (API) semantics (e.g., RDMA Verbs), whereas traffic could be transmitted by a cloud native reliable transport such as a CSP's proprietary packet transmission protocol. A translator circuitry can receive or intercept RDMA Verbs API commands from an application, driver, or OS and translate the commands to a format consistent with commands of a CSP's reliable transport protocol. For example, the translation circuitry can convert RoCE format API semantics (or other API semantics) to utilize different proprietary protocols. For example, the translation circuitry can indicate receipt of packets on a proprietary protocol using RoCE format API semantics (or other API semantics). Accordingly, datacenter applications written to utilize RDMA or other reliable transport protocol can utilize RDMA APIs and do not need to be re-written to utilize cloud native datacenter reliable transport protocols…In some examples, RDMA intercept 130 can translate a RoCE transmit request to a packet consistent with reliable transport protocol 140. For example, where RoCE operations are the same as operations of reliable transport protocol 140, RDMA intercept 130 can apply features specified by RoCE and available from reliable transport protocol 140. 
For reliable transport protocol 140 operations that RoCE does not provide, RDMA intercept 130 can select available reliable transport protocol 140 operations. For example, where path selection or congestion management of reliable transport protocol 140 differ from those of RoCE, RDMA intercept 130 can select path selection or congestion management available in reliable transport protocol 140…” paragraphs 0011/0019/0020); and sending, by the host device, the one or more network packets to a corresponding destination based on the network protocol stack (“…At 404, network interface device can transmit one or more packets to the destination receiver device in accordance with the second reliable transport protocol, which differs from the protocol utilized by the sender process…” paragraph 0041). Connor’057 is silent with reference to a second OS running a network protocol stack and providing, by the first OS, a description of the one or more network packets to the second OS via a shared guest physical memory of the VM. Connor’421 teaches a second OS (Second Virtual Machine 116/ receiving VM/VM2) running a network protocol stack (“…Packet transfers, unlike software copies in the protocol stack, may be designed to be sent to peripheral devices via DMA operations. The stack may be designed for packet transfer processes to be asynchronous. The transmitting VM may thus continue to do productive work while the packet is queued and transferred. Similarly, a receiving VM may be available for tasks during the transfer and may become aware of the received packet only after the transfer is complete. 
Advantageously, the CPU, which may be used for other operations, may not be kept busy copying the packet and thus be available for the other operations…” paragraph 0022) and providing, by the first OS (First Virtual Machine 114/VM1), a description (descriptors,… the descriptors can be used to define the data and control for the packet and elements such address, length, and required processing) of the one or more network packets to the second OS (Second Virtual Machine 116/VM2) via a shared guest physical memory of the VM (queue) (“…In block 210, a request to transmit a packet from a first virtual machine (VM1) to a second virtual machine (VM2) is received. A transmission (TX) packet for transmission is provided to the first virtual machine VM1 and a virtual network interface controller (vNIC) driver of VM1…In block 220, the vNIC driver of VM1 (VM<b>1</b>-vNIC) queues the TX packet to be transmitted. In some examples, the protocol stack can send a scatter-gather list to the vNIC driver with instructions for processing. For example, the processing may include a TCP checksum offload. In some examples, the vNIC driver can read the processing instructions and prepare descriptors for each element of the scatter-gather list. For example, the descriptors can be used to define the data and control for the packet and elements such address, length, and required processing. In some examples, after the descriptors are complete, the descriptors can be enqueued for transmission. For example, in the case of a physical NIC, the descriptors can be used for DMA operations. In case of vNIC to vSwitch environments, however, the descriptors can be used to inform the vSwitch of the packet location and control information…In block 230, a virtual switch (vSwitch) driver reads a transmission (TX) queue of VM1. In some examples, the vSwitch driver can monitor traffic that is within the network. 
The vSwitch driver can then detect that the TX packet that has been queued up in memory and recognizes that the packet has another destination within the system…In block 240, the vSwitch driver recognizes and determines the destination of the packet, which is another VM on the computer system, VM2. For example, the vSwitch driver may perform some discovery, read the VM1 transmission (TX) queue, and determine that the packet that is stored in VM1 memory is to be copied to VM2 memory…” paragraphs 0022/0026-0029).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Connor’057 with the teaching of Connor’421 because the teaching of Connor’421 would improve the system of Connor’057 by providing a queuing mechanism for inter-process communication.

As to claim 2, Connor’057 teaches the method of claim 1, wherein the remote memory access is based on Remote Direct Memory Access (RDMA), and wherein the transaction command includes an RDMA transaction (RDMA application program interface (API) semantics (e.g., RDMA Verbs)).

As to claim 11, see the rejection of claim 1, except for a non-transitory computer-readable storage medium, a processor of a host device and operating a virtual machine (VM) on a virtual machine manager (VMM) (hypervisor) running on the host device. Connor’057 teaches a non-transitory computer-readable storage medium (hard disk), a processor of a host device (Processors 102) and operating a virtual machine (VM) on a virtual machine manager (VMM) (hypervisor) running on the host device (“…Processors 102 can execute an operating system (OS), driver, and/or processes. In some examples, an OS can include Linux®, Windows® Server, FreeBSD®, Android®, MacOS®, iOS®, or any other operating system. One or more of processors 102 can execute processes 104.
Processes 104 can include one or more of: applications, virtual machines (VMs), microVMs, containers, microservices, serverless applications, and so forth…A VM can be software that runs an operating system and one or more applications. A VM can be defined by specification, configuration files, virtual disk file, non-volatile random access memory (NVRAM) setting file, and the log file and is backed by the physical resources of a host computing platform. A VM can include an operating system (OS) or application environment that is installed on software, which imitates dedicated hardware. The end user has the same experience on a virtual machine as they would have on dedicated hardware. Specialized software, called a hypervisor, emulates the client or server's CPU, memory, hard disk, network, and other hardware resources completely, enabling virtual machines to share the resources. The hypervisor can emulate multiple virtual hardware platforms that are isolated from another, allowing virtual machines to run Linux®, Windows® Server, VMware ESXi, and other operating systems on the same underlying physical host…” paragraphs 0013/0014). As to claim 12, see the rejection of claim 2 above. As to claim 20, Connor’057 teaches a computer system, comprising: a processor (Processors 102); a memory device (hard disk); a set of ports (Port Number 4791); and a network interface controller (NIC) (Network Interface Device 120); control circuitry comprising: a virtualization logic block is to run a virtual machine (VM) on the computer system (hypervisor); an operating system (OS) logic block operating system (OS) is to execute, within the VM (virtual machines (VMs)), a first OS running a client application (applications) (“…Processors 102 can execute an operating system (OS), driver, and/or processes. In some examples, an OS can include Linux®, Windows® Server, FreeBSD®, Android®, MacOS®, iOS®, or any other operating system. One or more of processors 102 can execute processes 104. 
Processes 104 can include one or more of: applications, virtual machines (VMs), microVMs, containers, microservices, serverless applications, and so forth…A VM can be software that runs an operating system and one or more applications. A VM can be defined by specification, configuration files, virtual disk file, non-volatile random access memory (NVRAM) setting file, and the log file and is backed by the physical resources of a host computing platform. A VM can include an operating system (OS) or application environment that is installed on software, which imitates dedicated hardware. The end user has the same experience on a virtual machine as they would have on dedicated hardware. Specialized software, called a hypervisor, emulates the client or server's CPU, memory, hard disk, network, and other hardware resources completely, enabling virtual machines to share the resources. The hypervisor can emulate multiple virtual hardware platforms that are isolated from another, allowing virtual machines to run Linux®, Windows® Server, VMware ESXi, and other operating systems on the same underlying physical host…” paragraphs 0013/0014); and a transaction logic block is to: receive, at the first OS, a transaction command from the client application, wherein the transaction command is associated with a remote memory access (RDMA application program interface (API) semantics (e.g., RDMA Verbs)) (“…At least to provide support for reliable transport protocol APIs utilized by an application and utilize a CSP's reliable transport protocol, translation of communications based on a reliable transport protocol utilized by an application to or from CSP's proprietary reliable transport protocol can occur. For example, applications could utilize RDMA application program interface (API) semantics (e.g., RDMA Verbs), whereas traffic could be transmitted by a cloud native reliable transport such as a CSP's proprietary packet transmission protocol. 
A translator circuitry can receive or intercept RDMA Verbs API commands from an application, driver, or OS and translate the commands to a format consistent with commands of a CSP's reliable transport protocol. For example, the translation circuitry can convert RoCE format API semantics (or other API semantics) to utilize different proprietary protocols. For example, the translation circuitry can indicate receipt of packets on a proprietary protocol using RoCE format API semantics (or other API semantics). Accordingly, datacenter applications written to utilize RDMA or other reliable transport protocol can utilize RDMA APIs and do not need to be re-written to utilize cloud native datacenter reliable transport protocols…” paragraphs 0011/0019); convert, at the first OS, the transaction command to one or more network packets (“…At least to provide support for reliable transport protocol APIs utilized by an application and utilize a CSP's reliable transport protocol, translation of communications based on a reliable transport protocol utilized by an application to or from CSP's proprietary reliable transport protocol can occur. For example, applications could utilize RDMA application program interface (API) semantics (e.g., RDMA Verbs), whereas traffic could be transmitted by a cloud native reliable transport such as a CSP's proprietary packet transmission protocol. A translator circuitry can receive or intercept RDMA Verbs API commands from an application, driver, or OS and translate the commands to a format consistent with commands of a CSP's reliable transport protocol. For example, the translation circuitry can convert RoCE format API semantics (or other API semantics) to utilize different proprietary protocols. For example, the translation circuitry can indicate receipt of packets on a proprietary protocol using RoCE format API semantics (or other API semantics). 
Accordingly, datacenter applications written to utilize RDMA or other reliable transport protocol can utilize RDMA APIs and do not need to be re-written to utilize cloud native datacenter reliable transport protocols…In some examples, RDMA intercept 130 can translate a RoCE transmit request to a packet consistent with reliable transport protocol 140. For example, where RoCE operations are the same as operations of reliable transport protocol 140, RDMA intercept 130 can apply features specified by RoCE and available from reliable transport protocol 140. For reliable transport protocol 140 operations that RoCE does not provide, RDMA intercept 130 can select available reliable transport protocol 140 operations. For example, where path selection or congestion management of reliable transport protocol 140 differ from those of RoCE, RDMA intercept 130 can select path selection or congestion management available in reliable transport protocol 140…” paragraphs 0011/0019/0020); and wherein the NIC is to send the one or more network packets to a corresponding destination based on the network protocol stack (“…At 404, network interface device can transmit one or more packets to the destination receiver device in accordance with the second reliable transport protocol, which differs from the protocol utilized by the sender process…” paragraph 0041). Connor’057 is silent with reference to a second OS running a network protocol stack and provide, from the first OS, a description of the one or more network packets to the second OS via a shared guest physical memory of the VM. Connor’421 teaches a second OS (Second Virtual Machine 116/ receiving VM/VM2) running a network protocol stack (“…Packet transfers, unlike software copies in the protocol stack, may be designed to be sent to peripheral devices via DMA operations. The stack may be designed for packet transfer processes to be asynchronous. 
The transmitting VM may thus continue to do productive work while the packet is queued and transferred. Similarly, a receiving VM may be available for tasks during the transfer and may become aware of the received packet only after the transfer is complete. Advantageously, the CPU, which may be used for other operations, may not be kept busy copying the packet and thus be available for the other operations…” paragraph 0022) and provide, from the first OS (First Virtual Machine 114/VM1), a description of the one or more network packets (descriptors,…the descriptors can be used to define the data and control for the packet and elements such address, length, and required processing) to the second OS (Second Virtual Machine 116/VM2) via a shared guest physical memory of the VM (queue) (“…In block 210, a request to transmit a packet from a first virtual machine (VM1) to a second virtual machine (VM2) is received. A transmission (TX) packet for transmission is provided to the first virtual machine VM1 and a virtual network interface controller (vNIC) driver of VM1…In block 220, the vNIC driver of VM1 (VM<b>1</b>-vNIC) queues the TX packet to be transmitted. In some examples, the protocol stack can send a scatter-gather list to the vNIC driver with instructions for processing. For example, the processing may include a TCP checksum offload. In some examples, the vNIC driver can read the processing instructions and prepare descriptors for each element of the scatter-gather list. For example, the descriptors can be used to define the data and control for the packet and elements such address, length, and required processing. In some examples, after the descriptors are complete, the descriptors can be enqueued for transmission. For example, in the case of a physical NIC, the descriptors can be used for DMA operations. 
In case of vNIC to vSwitch environments, however, the descriptors can be used to inform the vSwitch of the packet location and control information…In block 230, a virtual switch (vSwitch) driver reads a transmission (TX) queue of VM1. In some examples, the vSwitch driver can monitor traffic that is within the network. The vSwitch driver can then detect that the TX packet that has been queued up in memory and recognizes that the packet has another destination within the system…In block 240, the vSwitch driver recognizes and determines the destination of the packet, which is another VM on the computer system, VM2. For example, the vSwitch driver may perform some discovery, read the VM1 transmission (TX) queue, and determine that the packet that is stored in VM1 memory is to be copied to VM2 memory…” paragraphs 0022/0026-0029).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Connor’057 with the teaching of Connor’421 because the teaching of Connor’421 would improve the system of Connor’057 by providing a queuing mechanism for inter-process communication.

Claims 3, 4, 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2022/0279057 A1 to Connor et al. (hereinafter referred to as Connor’057) in view of U.S. Pub. No. 2018/0181421 A1 to Connor et al. (hereinafter referred to as Connor’421) as applied to claims 1 and 11 above, and further in view of U.S. Pub. No. 2016/0048464 A1 to Nakajima et al.

As to claim 3, Connor’057 as modified by Connor’421 teaches the method of claim 1, however it is silent with reference to wherein a guest physical memory presented to the VM is partitioned into a plurality of segments comprising a first memory segment used by the first OS, a second memory segment used by the second OS, and the shared guest physical memory accessible by the first OS and the second OS.
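For readability, the memory arrangement recited in claim 3 and the descriptor hand-off recited in claims 1 and 6 can be sketched as follows. This is an editorial illustration of the claim language only, not code from the application or from the cited Connor references, and every name in it is invented:

```python
from collections import deque
from dataclasses import dataclass, field


@dataclass
class PacketDescriptor:
    """Per-packet description, loosely mirroring the descriptor fields
    Connor'421 mentions: address, length, and required processing."""
    address: int
    length: int
    processing: str = "none"


@dataclass
class GuestPhysicalMemory:
    """Guest physical memory partitioned as in claim 3: one private
    segment per OS, plus a shared region both OSes can access."""
    first_os_segment: bytearray
    second_os_segment: bytearray
    shared_queue: deque = field(default_factory=deque)  # shared region


def first_os_publish(mem: GuestPhysicalMemory, desc: PacketDescriptor) -> None:
    # First OS enqueues a packet description into the shared region (claim 6).
    mem.shared_queue.append(desc)


def second_os_consume(mem: GuestPhysicalMemory) -> PacketDescriptor:
    # Second OS (running the network protocol stack) dequeues the description.
    return mem.shared_queue.popleft()


gpm = GuestPhysicalMemory(bytearray(4096), bytearray(4096))
first_os_publish(gpm, PacketDescriptor(address=0x1000, length=1500))
desc = second_os_consume(gpm)
```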
Nakajima teaches wherein a guest physical memory presented to the VM is partitioned into a plurality of segments comprising a first memory segment (memory buffers) used by the first OS (Source Virtual Machine 206), a second memory segment (memory buffers) used by the second OS (Target Virtual Machine 204), and the shared guest physical memory accessible by the first OS and the second OS (“…The buffer ownership module 222 of the target virtual machine 204 and/or the source virtual machine 206 is configured to coordinate the transfer of ownership of memory buffers from the source virtual machine 206 to the target virtual machine 204, using the SVCS 216 established by the VMM 202. In particular, ownership of memory buffers within the secure view 228 may be transferred from the source virtual machine 206 to the target virtual machine 204, and the buffers may be processed by the target virtual machine 204 after it receives ownership of those buffers. When the secure view 228 has been filled beyond a predefined capacity, the source virtual machine 206 may reclaim buffers that have already been processed by the target virtual machine 204, and the VMM 202 may clear the secure view 228 and invalidate the EPT 134… Referring now to FIG. 7, the diagram 700 illustrates one potential embodiment of a secure view control structure (SVCS) 216 used to transfer ownership of memory buffers between the source virtual machine 206 and the target virtual machine 204. As shown, the SVCS 216 is associated with an illustrative secure view 228 that includes several buffers 404a through 404e. Those buffers 404 may be shared memory segments produced by the source virtual machine 206. For example, each buffer 404 may include a receive queue, a transmit queue, or any other I/O buffer of the source virtual machine 206…” paragraphs 0032/0062).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Connor’057 and Connor’421 with the teaching of Nakajima because the teaching of Nakajima would improve the system of Connor’057 and Connor’421 by providing a locking or latching mechanism for controlling computing resource use.

As to claim 4, Connor’057 as modified by Connor’421 teaches the method of claim 1, however it is silent with reference to wherein the VM provides a plurality of virtual processing units, wherein the first OS executes on a first subset of the plurality of virtual processing units, and wherein the second OS executes on a second subset of the plurality of virtual processing units. Nakajima teaches wherein the VM provides a plurality of virtual processing units, wherein the first OS executes on a first subset of the plurality of virtual processing units (operating systems executing in the source virtual machine 206) (“…In block 610, the computing device 100 switches to the secure view 228. After switching to the secure view 228, the shared memory segments may be accessible in the virtual address space of one or more guest applications 220 or operating systems executing in the source virtual machine 206…” paragraph 0059), and wherein the second OS executes on a second subset of the plurality of virtual processing units (operating systems the target virtual machine 204) (“…In block 514, the computing device 100 accesses the shared memory segment. For example, applications 220 and/or operating systems the target virtual machine 204 may read data from or write data to the shared memory segment…” paragraph 0054).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Connor’057 and Connor’421 with the teaching of Nakajima because the teaching of Nakajima would improve the system of Connor’057 and Connor’421 by providing a virtualized execution of operating systems.

As to claim 13, see the rejection of claim 3 above. As to claim 14, see the rejection of claim 4 above.

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2022/0279057 A1 to Connor et al. (hereinafter referred to as Connor’057) in view of U.S. Pub. No. 2018/0181421 A1 to Connor et al. (hereinafter referred to as Connor’421) and further in view of U.S. Pub. No. 2016/0048464 A1 to Nakajima et al. as applied to claims 4 and 14 above, and further in view of C.N. No. 1497469 A to Shultz et al.

As to claim 5, Connor’057 as modified by Connor’421 and Nakajima teaches the method of claim 4, however it is silent with reference to wherein providing, by the first OS, the one or more network packets further comprises: issuing, by the first OS, an inter-processor interrupt; and obtaining, by the second OS, the one or more network packets via the shared guest physical memory based on the interrupt. Shultz teaches wherein providing, by the first OS, the one or more network packets (message/data) further comprises: issuing, by the first OS (Virtual Machine 12), an inter-processor interrupt ("wake-up" interrupt (step 88)); and obtaining, by the second OS (Virtual Machine 14), the one or more network packets via the shared guest physical memory (Queue 26b) based on the interrupt (“…FIG. 3 shows virtual machine (e.g., virtual machine 12) operates when it wants to another virtual machine (e.g., virtual machine 14) sends a message/data. In step 80, the virtual machine 12 calls its write function 32a to the data written into the shared memory 21.
As hereinbefore explained, each virtual machine can directly access the shared memory by providing the appropriate address. Thus, the write function 32a of virtual machine 12 supplies the address to be written and the data to be stored, writing the data into the shared memory. Next, the work queue management function ("WQMF") 81a of virtual machine 12 adds a work item to the work queue 26b of virtual machine 14 (step 82). Because the work queue is in shared memory, this does not require invoking CP. Next, WQMF 81a checks table 24 to determine whether virtual machine 14 is currently idle (decision 84). If it is not idle, virtual machine 12 does nothing further to finish the communication, and at no point in the communication process is CP invoked (termination step 86). According to the present invention, virtual machine 12 omits the interrupt to virtual machine 14 because of the overhead involved in interrupting a virtual machine. As explained above with reference to FIG. 2, when virtual machine 14 finishes its current work item, it will automatically invoke the scheduler to check its work queue for another work item (decision 48 and step 50). At that time, it will see the work item from virtual machine 12. Referring again to decision 84, if virtual machine 14 is idle, then according to the invention virtual machine 12 sends a "wake-up" interrupt to virtual machine 14 (step 88). This requires invoking CP. The wake-up interrupt alerts virtual machine 14 that there is a work item in its queue 26b. Virtual machine 12 sends the interrupt to finish its part of the data communication. The "wake-up" interrupt automatically causes virtual machine 14 to start the dispatcher 22b (decision 48 of FIG. 2) to check its work queue for work items. The scheduler 22b then, per FIG. 2, checks its work queue 26b (step 50 and decision 52) and then reads the data (step 54)…”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Connor’057, Connor’421 and Nakajima with the teaching of Shultz because the teaching of Shultz would improve the system of Connor’057, Connor’421 and Nakajima by providing a signal that temporarily halts a CPU's current task to service an event needing immediate attention.

As to claim 15, see the rejection of claim 5 above.

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2022/0279057 A1 to Connor et al. (hereinafter referred to as Connor’057) in view of U.S. Pub. No. 2018/0181421 A1 to Connor et al. (hereinafter referred to as Connor’421) and further in view of U.S. Pub. No. 2016/0048464 A1 to Nakajima et al. and further in view of CN 1497469 A to Shultz et al. as applied to claims 5 and 15 above, and further in view of U.S. Pub. No. 2016/0380848 A1 to Ramey.

As to claim 6, Connor’057 as modified by Connor’421, Nakajima and Shultz teaches the method of claim 5; however, it is silent with respect to enqueueing, by the first OS, the description of the one or more network packets in a queue in the shared guest physical memory; and dequeuing, by the second OS, the description from the queue.

Ramey teaches enqueueing, by the first OS, the description (address pointers) of the one or more network packets in a queue in the shared guest physical memory (the method includes using receive queues within the vNICs for the service chain VMs to store receive address pointers for packet data within the shared memory to be processed by the service chain VMs) (“…In additional embodiments, the method includes providing a virtual network interface controller (vNIC) for each of the plurality of VMs and using the vNICs to communicate the pointers.
In further embodiments, the method includes using receive queues within the vNICs for the service chain VMs to store receive address pointers for packet data within the shared memory to be processed by the service chain VMs, and using transmit queues within the vNICs for the service chain VMs to store transmit address pointers for packet data within the shared memory that has been processed by the service chain VMs. In still further embodiments, the method includes controlling the predetermined order with the packet manager VM by controlling storage of the receive address pointers using the packet manager VM…” paragraph 0011); and dequeuing, by the second OS, the description (address pointers) from the queue (“… a shared memory included within the packet manager VM configured to store packet data for packets being processed by the service chain VMs where the service chain VMs are configured to use address pointers to access the packet data within the shared memory without copying the packet data to memory associated with the service chain VMs, and where the packet manager VM is further configured to provide processed packet data to another destination…” paragraph 0016).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Connor’057, Connor’421, Nakajima and Shultz with the teaching of Ramey because the teaching of Ramey would improve the system of Connor’057, Connor’421, Nakajima and Shultz by providing pointers: objects, known in many programming languages, that store a memory address referencing a location in memory, where obtaining the value stored at that location is known as dereferencing the pointer.

As to claim 16, see the rejection of claim 6 above.

Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2022/0279057 A1 to Connor et al. (hereinafter referred to as Connor’057) in view of U.S. Pub. No. 2018/0181421 A1 to Connor et al.
(hereinafter referred to as Connor’421) as applied to claims 1 and 11 above, and further in view of U.S. Pub. No. 2016/0188527 A1 to Cherian et al.

As to claim 7, Connor’057 as modified by Connor’421 teaches the method of claim 1; however, it is silent with respect to running, by the first OS, a virtual network interface controller (NIC) operable based on the remote memory access; and receiving, by the virtual NIC, the transaction command from the client application by emulating the remote memory access.

Cherian teaches running, by the first OS (VM), a virtual network interface controller (NIC) (VNIC functionality) operable based on the remote memory access (RDMA guest device 160-162) (“…As described below, the RDMA guest device (or RDMA paravirtualized device) 160-162 provides VNIC functionality as well as interfacing with the RDMA stack 175…The VNIC functionality in a VM is responsible for exchanging packets between the VM and the network virtualization layer of the host virtualization software 115 through an associated VNIC emulator (not shown)…” paragraphs 0038/0039); and receiving, by the virtual NIC (VNIC functionality), the transaction command from the client application (to send and receive data to and from the VMs) by emulating the remote memory access (“…The VNIC functionality in a VM is responsible for exchanging packets between the VM and the network virtualization layer of the host virtualization software 115 through an associated VNIC emulator (not shown). Each VNIC emulator interacts with VNIC drivers in the VMs to send and receive data to and from the VMs. In some embodiments, the virtual NICs are software abstractions of physical NICs implemented by virtual NIC emulators. For instance, the code for requesting and obtaining a connection ID resides in components of VNIC emulators in some embodiments. In other words, the VNIC state is implemented and maintained by each VNIC emulator in some embodiments.
Virtual devices such as VNICs are software abstractions that are convenient to discuss as though part of VMs, but are actually implemented by virtualization software using emulators. The state of each VM, however, includes the state of its virtual devices, which is controlled and maintained by the underlying virtualization software…” paragraph 0039).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Connor’057 and Connor’421 with the teaching of Cherian because the teaching of Cherian would improve the system of Connor’057 and Connor’421 by providing software that mimics another system's hardware by interpreting its machine code instruction by instruction, enabling one computer system (the host system) to run software written for a different system (the guest system).

As to claim 17, see the rejection of claim 7 above.

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2022/0279057 A1 to Connor et al. (hereinafter referred to as Connor’057) in view of U.S. Pub. No. 2018/0181421 A1 to Connor et al. (hereinafter referred to as Connor’421) and further in view of U.S. Pub. No. 2016/0188527 A1 to Cherian et al. as applied to claims 7 and 17 above, and further in view of U.S. Pub. No. 2008/0005441 A1 to Droux et al.

As to claim 8, Connor’057 as modified by Connor’421 teaches the method of claim 7; however, it is silent with respect to determining, by the virtual NIC, an address of the corresponding destination based on the transaction command; and generating, by the virtual NIC, the one or more network packets by incorporating the address.
Droux teaches determining, by the virtual NIC (Sending VNIC), an address of the corresponding destination based on the transaction command (figure 8, ST202-ST222); and generating, by the virtual NIC, the one or more network packets by incorporating the address (figure 8, ST204/ST224-ST228).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Connor’057 and Connor’421 with the teaching of Droux because the teaching of Droux would improve the system of Connor’057 and Connor’421 by providing a method for sending a packet from a VNIC on a virtual switch to a packet destination not associated with the virtual switch (Droux paragraph 0071).

Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2022/0279057 A1 to Connor et al. (hereinafter referred to as Connor’057) in view of U.S. Pub. No. 2018/0181421 A1 to Connor et al. (hereinafter referred to as Connor’421) as applied to claims 1 and 11 above, and further in view of U.S. Pat. No. 7,630,398 B2 to Blum.

As to claim 9, Connor’057 as modified by Connor’421 teaches the method of claim 1; however, it is silent with respect to providing, using a NIC driver running on the second OS, the one or more network packets from the network protocol stack to a physical NIC of the host device, wherein the physical NIC transmits the one or more network packets.
Blum teaches providing, using a NIC driver running on the second OS (driver), the one or more network packets from the network protocol stack to a physical NIC of the host device (second NIC), wherein the physical NIC transmits the one or more network packets (“…A method for bridging a first network and a second network, comprising: receiving a network packet from a first network interface card (NIC) of a first computer system coupled to a second computer system, the network packet received from the second computer system; invoking a protocol application programming interface (API) operation, the protocol API operation operable to deliver the network packet to an application program executing on the first computer system; and translating the protocol API operation into a miniport API operation associated with a second NIC of the first computer system coupled to a third computer system to deliver the network packet to the second NIC and thereafter to the third computer system without first delivering the network packet to the application program, wherein the translating includes using functions provided in a network driver interface specification (NDIS), the NDIS to provide drivers associated with the first and second NICs and a driver associated with the protocol API with a standardized interface with which to communicate, and the translating is performed by a subnet independent (SI) bridge software program (SI bridge)…” Claim 1).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Connor’057 and Connor’421 with the teaching of Blum because the teaching of Blum would improve the system of Connor’057 and Connor’421 by providing a NIC driver for communicating with a NIC card.

As to claim 19, see the rejection of claim 9 above.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2022/0279057 A1 to Connor et al.
(hereinafter referred to as Connor’057) in view of U.S. Pub. No. 2018/0181421 A1 to Connor et al. (hereinafter referred to as Connor’421) as applied to claims 7 and 17 above, and further in view of U.S. Pub. No. 2021/0226892 A1 to FU et al.

As to claim 10, Connor’057 as modified by Connor’421 teaches the method of claim 1; however, it is silent with respect to wherein the corresponding destination includes a virtual NIC operable based on the remote memory access.

FU teaches wherein the corresponding destination includes a virtual NIC operable based on the remote memory access (RNIC 220/RNIC 240) (“…FIG. 2 is a schematic diagram of a system architecture according to an embodiment of this application. Three VMs are deployed on a compute node 210 shown in FIG. 2: a VM 211, a VM 212, and a VM 213. Three vRNICs are deployed on an RNIC 220: a vRNIC 221, a vRNIC 222, and a vRNIC 223. Two VMs are deployed on a compute node 230: a VM 231 and a VM 232. Two vRNICs are deployed on an RNIC 240: a vRNIC 241 and a vRNIC 242. The RNIC 220 is an RNIC of the compute node 210, and the RNIC 240 is an RNIC of the compute node 230. In other words, the compute node 210 is a host of the RNIC 220, and the compute node 230 is a host of the RNIC 240…destination remote direct memory access (RDMA) network interface card (RNIC)…” paragraph 0058, Claim 9).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Connor’057 and Connor’421 with the teaching of FU because the teaching of FU would improve the system of Connor’057 and Connor’421 by providing a software-based network adapter that allows virtual machines (VMs) or instances to connect to a network, acting like a physical NIC but existing in a virtualized environment.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

U.S. Pub. No. 2022/0318184 A1 to Zhu et al.
and directed to a method for intercepting a command from an application in a container to establish an RDMA connection with a remote container on a virtual network.

U.S. Pub. No. 2015/0370586 A1 to Cooper et al., directed to methods, software, and apparatus for implementing local service chaining (LSC) with virtual machines (VMs) or virtualized containers in Software Defined Networking (SDN).

U.S. Pub. No. 2008/0148281 A1 to Magro et al., directed to a method for determining whether a message (306) is placed in a send buffer.

U.S. Pub. No. 2022/0092021 A1 to Cherian et al., directed to a remote direct memory access method for receiving a remote direct memory access (RDMA) request that a virtual machine (VM) sends directly to a physical network interface controller (PNIC) of a host computer.

U.S. Pub. No. 2011/0314469 A1 to Qian et al., directed to a computing system sharing a physical NIC device among multiple virtual machines.

U.S. Pub. No. 2022/0204542 A1 to Bernat et al., directed to a system and method for shared memory for intelligent network interface cards.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES E ANYA, whose telephone number is (571) 272-3757. The examiner can normally be reached Mon-Fri, 9:00 am-6:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KEVIN YOUNG, can be reached at 571-270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLES E ANYA/
Primary Examiner, Art Unit 2194
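A mechanism recurs across the references cited above: Shultz's shared-memory work queue with a conditional "wake-up" interrupt, and Ramey's queues of address pointers that let a consumer read packet data without copying it. The sketch below models that pattern in miniature. It is illustrative only; every class and name is hypothetical and none of it is code from the cited references.

```python
from collections import deque

# Hypothetical model: a producer OS enqueues descriptors (addresses, not
# copies) of packet data held in shared guest memory, and raises a
# "wake-up" interrupt only when the consumer OS is idle; a busy consumer
# drains the queue on its own, avoiding interrupt overhead.

class SharedGuestMemory:
    def __init__(self):
        self.pages = {}       # address -> packet payload
        self.queue = deque()  # descriptors: addresses only

class ConsumerOS:
    def __init__(self, shared):
        self.shared = shared
        self.idle = True
        self.received = []

    def wake_up(self):
        # Models servicing the inter-processor "wake-up" interrupt.
        self.idle = False
        while self.shared.queue:
            addr = self.shared.queue.popleft()             # dequeue descriptor
            self.received.append(self.shared.pages[addr])  # read via the pointer
        self.idle = True

class ProducerOS:
    def __init__(self, shared, consumer):
        self.shared = shared
        self.consumer = consumer

    def send(self, addr, payload):
        self.shared.pages[addr] = payload  # payload stays in shared memory
        self.shared.queue.append(addr)     # enqueue only the descriptor
        if self.consumer.idle:
            self.consumer.wake_up()        # interrupt only an idle consumer

shared = SharedGuestMemory()
consumer = ConsumerOS(shared)
producer = ProducerOS(shared, consumer)
producer.send(0x1000, b"packet-1")
```

The design choice the cited art emphasizes is visible in `send`: the data crosses between operating systems by reference through shared memory, and the interrupt is issued only when the consumer would otherwise not notice the new work item.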

Prosecution Timeline

Sep 13, 2023
Application Filed
Jan 07, 2026
Non-Final Rejection — §101, §103
Mar 23, 2026
Applicant Interview (Telephonic)
Mar 23, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591471
KNOWLEDGE GRAPH REPRESENTATION OF CHANGES BETWEEN DIFFERENT VERSIONS OF APPLICATION PROGRAMMING INTERFACES
2y 5m to grant Granted Mar 31, 2026
Patent 12591455
PARAMETER-BASED ADAPTIVE SCHEDULING OF JOBS
2y 5m to grant Granted Mar 31, 2026
Patent 12585510
METHOD AND SYSTEM FOR AUTOMATED EVENT MANAGEMENT
2y 5m to grant Granted Mar 24, 2026
Patent 12579014
METHOD AND A SYSTEM FOR PROCESSING USER EVENTS
2y 5m to grant Granted Mar 17, 2026
Patent 12572393
CONTAINER CROSS-CLUSTER CAPACITY SCALING
2y 5m to grant Granted Mar 10, 2026
Based on the examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+33.5%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 891 resolved cases by this examiner. Grant probability derived from career allow rate.
