Prosecution Insights
Last updated: April 19, 2026
Application No. 18/240,085

CONTEXT SWITCH REDUCTION FOR VIRTUAL MACHINE EXITS

Office Action: Non-Final (§101, §102, §103)
Filed: Aug 30, 2023
Examiner: AYERS, MICHAEL W
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: Red Hat Inc.
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (200 granted / 287 resolved; +14.7% vs TC avg; above average)
Interview Lift: +56.2% (resolved cases with interview)
Avg Prosecution: 3y 4m (37 applications currently pending)
Total Applications: 324 (across all art units)

Statute-Specific Performance

§101: 14.8% (-25.2% vs TC avg)
§102: 2.9% (-37.1% vs TC avg)
§103: 47.3% (+7.3% vs TC avg)
§112: 25.6% (-14.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 287 resolved cases.

Office Action

DETAILED ACTION

This office action is in response to claims filed 30 August 2023. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter.

Regarding claims 18-20, each recites “a non-volatile computer-readable memory”. The specification provides examples as to the meaning of this term, but does not describe whether this term covers transitory media, non-transitory media, or both. Thus, applying the broadest reasonable interpretation in light of the specification and taking into account the meaning of the words in their ordinary usage as they would be understood by one of ordinary skill (MPEP 2111), the claim as a whole covers a computer program product that is stored in memory that includes transitory media. A transitory medium does not fall into any of the four categories of invention (process, machine, manufacture, or composition of matter). The claims may be amended to read “non-transitory computer-readable memory”, thus excluding that portion of the scope covering transitory signals.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 10, and 16-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by ANG et al., Pub. No. US 2020/0028785 A1 (hereafter ANG).

Regarding claim 1, ANG teaches: A method, comprising: executing a supervisor; executing a first virtual machine on the supervisor ([0022] Hypervisors 109A and 109B (i.e., “supervisors”) are software layers or components that support the execution of multiple VMs, such as VM1A-VM1B);

detecting a first exit from the first virtual machine ([0027] In an embodiment, an approach includes offloading the fast path processing from a VM (i.e., “first VM”) onto an offload destination, and allowing the less frequent slow path processing to be still performed by the VM. [0073] In step 604, the hypervisor determines whether the network function packet processing can be offloaded (i.e., a determination that packet processing will be offloaded to the hypervisor causes the packet processing to “exit” the virtual machine));

responsive to detecting the first exit, loading a userspace context without loading a supervisor context ([0051] Hypervisor 109A offloads network function packet processing module 210 onto user space 304 (i.e., user space “context”), and saves it as an offloaded network function packet processing module 215B, or an offload 215B, in user space 304.
Offload 215B is downloaded to user space 304 to facilitate network function packet processing of a packet 201B along a path 410B, and to produce a resulting, processed packet 216B (i.e., processing the packet in user space does not use, or “load”, other offload destinations including kernel space));

executing a second virtual machine on the supervisor ([0022] Hypervisors 109A and 109B are software layers or components that support the execution of multiple VMs, such as VM1A-VM1B (i.e., at least a “second VM”));

detecting a second exit from the second virtual machine ([0027] In an embodiment, an approach includes offloading the fast path processing from a VM onto an offload destination, and allowing the less frequent slow path processing to be still performed by the VM. [0073] In step 604, the hypervisor determines whether the network function packet processing can be offloaded (i.e., a determination that packet processing will be offloaded to the hypervisor causes the packet processing to “exit” the virtual machine. A decision to offload processing of a second packet of a data flow (see [0027]) represents a “second exit”));

and responsive to detecting the second exit, loading the supervisor context ([0050] Hypervisor 109A offloads network function packet processing module 210 onto kernel space 302 (i.e., “supervisor context”), and saves it as an offloaded network function packet processing module 215A, also called an offload 215A, in kernel space 302. Offload 215A is downloaded to kernel space 302 to facilitate network function packet processing of a packet 201A along a path 410A, and to produce a resulting, processed packet 216A).

Regarding claim 2, ANG further teaches: the first virtual machine and the second virtual machine are a same virtual machine (FIGS. 4A and 4B illustrate the same VM (VM1 107A) offloading packets to either the kernel space 302, in 4A, or the user space 304, in 4B).
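The claims 1-4 mapping above turns on a dispatcher that inspects an exit status and loads only the userspace context when no supervisor context switch is required. A minimal sketch of that flow follows; it is not code from ANG or the application, and the exit-reason names and sets are invented for illustration:

```python
# Hypothetical sketch of the claimed exit handling (claims 1-4): on a
# VM exit, inspect the exit status; if the exit does not require a
# supervisor context switch, load the userspace context without loading
# the supervisor context. All exit-reason names here are invented.

# Exits assumed serviceable entirely in userspace (fast path).
USERSPACE_EXITS = {"mmio_read", "io_port_poll"}

def handle_vm_exit(exit_status):
    """Return which context is loaded to service this exit."""
    if exit_status in USERSPACE_EXITS:
        # No supervisor context switch required: load only the
        # userspace context, skipping the supervisor context.
        return "userspace"
    # Slow path (e.g., page fault, timer interrupt): load the
    # supervisor context instead.
    return "supervisor"

print(handle_vm_exit("io_port_poll"))  # userspace
print(handle_vm_exit("page_fault"))    # supervisor
```

The point of contention in claims 3-4 is only the inspection step (the `if` above) and the two resulting context loads, which the examiner maps onto ANG's offload-destination decision.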
Regarding claim 3, ANG further teaches: inspecting a first exit status to determine whether a supervisor context switch is required for the first exit ([0074] If, in step 606, the hypervisor determines that the network function packet processing can be offloaded, then, in step 610, the hypervisor determines an offload destination. [0005] The offload itself and the offload's destination are usually determined based on availability of PNICs, capacities of the available PNICs, hardware capabilities of the PNIC and capabilities of the hypervisor (i.e., any of the factors used to determine the offload’s destination may be considered “first exit status”)); and responsive to the first exit not requiring the supervisor context switch, loading the userspace context without loading the supervisor context ([0075] An offload destination is a hypervisor kernel space (612), a hypervisor user space (618), and/or a PNIC (614). The offload destination may be selected based on the factors and considerations described above. [0051] Hypervisor 109A offloads network function packet processing module 210 onto user space 304 (i.e., user space “context”), and saves it as an offloaded network function packet processing module 215B, or an offload 215B, in user space 304. Offload 215B is downloaded to user space 304 to facilitate network function packet processing of a packet 201B along a path 410B, and to produce a resulting, processed packet 216B (i.e., processing the packet in user space does not use, or “load”, other offload destinations including kernel space)).

Regarding claim 4, ANG further teaches: inspecting a second exit status to determine whether a supervisor context switch is required for the second exit ([0074] If, in step 606, the hypervisor determines that the network function packet processing can be offloaded, then, in step 610, the hypervisor determines an offload destination. [0005] The offload itself and the offload's destination are usually determined based on availability of PNICs, capacities of the available PNICs, hardware capabilities of the PNIC and capabilities of the hypervisor (i.e., any of the factors used to determine the offload’s destination may be considered “second exit status”)); and responsive to the second exit requiring the supervisor context switch, loading the supervisor context ([0075] An offload destination is a hypervisor kernel space (612), a hypervisor user space (618), and/or a PNIC (614). The offload destination may be selected based on the factors and considerations described above. [0050] Hypervisor 109A offloads network function packet processing module 210 onto kernel space 302 (i.e., “supervisor context”), and saves it as an offloaded network function packet processing module 215A, also called an offload 215A, in kernel space 302. Offload 215A is downloaded to kernel space 302 to facilitate network function packet processing of a packet 201A along a path 410A, and to produce a resulting, processed packet 216A).

Regarding claim 10, it comprises limitations similar to those of claims 1, 3, and 4, and is therefore rejected for similar rationale.

Regarding claim 16, ANG further teaches: checking whether an exit status is of the first type; and responsive to determining that the exit status is not of the first type, determining that the exit status is of the second type ([0074] If, in step 606, the hypervisor determines that the network function packet processing can be offloaded, then, in step 610, the hypervisor determines an offload destination.
[0005] The offload itself and the offload's destination are usually determined based on availability of PNICs, capacities of the available PNICs, hardware capabilities of the PNIC and capabilities of the hypervisor (i.e., any of the factors used to determine the offload’s destination may be considered “first exit status” used to determine whether packet processing is of a type requiring offload to a kernel space (first type of exit status, which is not a second type) or of a type requiring offload to a user space (second type of exit status, which is not a first type))).

Regarding claim 17, ANG further teaches: checking whether an exit status is of the second type; and responsive to determining that the exit status is not of the second type, determining that the exit status is of the first type ([0074] If, in step 606, the hypervisor determines that the network function packet processing can be offloaded, then, in step 610, the hypervisor determines an offload destination. [0005] The offload itself and the offload's destination are usually determined based on availability of PNICs, capacities of the available PNICs, hardware capabilities of the PNIC and capabilities of the hypervisor (i.e., any of the factors used to determine the offload’s destination may be considered “second exit status” used to determine whether packet processing is of a type requiring offload to a kernel space (first type of exit status, which is not a second type) or of a type requiring offload to a user space (second type of exit status, which is not a first type))).

Regarding claim 18, it comprises limitations similar to claims 1, 3, and 4, and is therefore rejected for similar rationale.

Regarding claims 19 and 20, they comprise limitations similar to claims 16 and 17, respectively, and are therefore rejected for similar rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over ANG, as applied to claims 1 and 10 above, and in further view of LEMAY et al., Pub. No. US 2024/0220423 A1 (hereafter LEMAY).

Regarding claim 5, while ANG discusses executing exits from VMs, ANG does not explicitly teach: the second exit status is a page fault or timer interrupt. However, in analogous art that similarly executes VM exits, LEMAY teaches: the second exit status is a page fault or timer interrupt ([0050] An event may be received to be handled by the kernel at block 412. The processor is the first entity to receive the event. Events don't necessarily result in switching processes. Sometimes the same process will be resumed after the event has been processed. In other cases, processing the event results in a process switch event. [0051] A process switch event may include a system call (syscall), an interrupt, or an exception. Control of execution is switched to kernel 117 at block 414 to handle the process switch event. [0056] FIG. 7 illustrates transferring control 700 from a compartment to the kernel 117 according to an implementation.
When usermode execution exits to the OS 116 (e.g., due to issuing a syscall, generating an exception, or being interrupted), VMM 120 detects this event and switches to the appropriate EPT view for supervisor execution (i.e., interrupts cause a VM exit to a supervisor kernel mode)).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined LEMAY’s teaching of exiting a VM to a supervisor kernel mode due to an interrupt, with ANG’s teaching of exiting a VM to a supervisor kernel mode, to realize, with a reasonable expectation of success, a system that exits a VM to a supervisor kernel mode, as in ANG, based on receiving an operation that requires the kernel to be used, such as an interrupt, as in LEMAY. A person having ordinary skill would have been motivated to make this combination to enable events requiring supervisory interaction, such as interrupts, syscalls, and exceptions, to be properly processed while maintaining memory security (LEMAY [0001]).

Regarding claim 11, it comprises limitations similar to claim 5, and is therefore rejected for similar rationale.

Claims 6, 7, 12, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over ANG, as applied to claims 1 and 10 above, and in further view of XIAO et al., Pub. No. US 2021/0334125 A1 (hereafter XIAO).

Regarding claim 6, while ANG discusses causing virtual machine exits, ANG does not explicitly teach: saving a first guest context of the first virtual machine to storage; and saving a second guest context of the second virtual machine to storage.
However, in analogous art that similarly teaches causing virtual machine exits, XIAO teaches: saving a first guest context of the first virtual machine to storage; and saving a second guest context of the second virtual machine to storage ([0006] The method may include suspending, by the guest operating system, running of the application, and causing an exit to the virtual machine monitor, then storing, by the virtual machine monitor, resumption information of the application, where the resumption information may include a context of the application or a storage address of the context, and next, restoring, by the virtual machine monitor, the context of the application based on the resumption information, and causing an entry to the guest operating system, so that running of the application may be resumed).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined XIAO’s teaching of saving application context when causing virtual machine exits, with ANG’s teaching of causing virtual machine exits to perform packet processing tasks, to realize, with a reasonable expectation of success, a system that causes virtual machine exits to perform packet processing tasks, as in ANG, which involves saving application context, as in XIAO. A person having ordinary skill would have been motivated to make this combination to ensure that applications may be resumed with a relatively faster resumption speed (XIAO [0004]).
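The XIAO-style resumption flow cited above (exit, store resumption information, restore context, re-enter the guest) can be sketched as follows. The dictionary-based "storage", field names, and function names are illustrative assumptions, not XIAO's implementation:

```python
# Minimal sketch of saving guest context to storage on a VM exit and
# restoring it on re-entry (the XIAO resumption flow). The storage and
# register names below are invented for this illustration.

storage = {}  # resumption information, keyed by VM id

def on_vm_exit(vm_id, guest_context):
    # Store resumption information: the guest context itself.
    # (XIAO alternatively stores the context's storage address.)
    storage[vm_id] = dict(guest_context)

def on_vm_entry(vm_id):
    # Restore the saved context so the guest resumes where it left off.
    return storage[vm_id]

# Two VMs exit; each one's context is saved independently.
on_vm_exit("vm1", {"rip": 0x1000, "rsp": 0x7FF0})
on_vm_exit("vm2", {"rip": 0x2000, "rsp": 0x8FF0})
assert on_vm_entry("vm1")["rip"] == 0x1000
```

Claim 7's "loading the first guest context or the second guest context into memory" corresponds to the `on_vm_entry` restore step in this sketch.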
Regarding claim 7, XIAO further teaches: loading the first guest context or the second guest context into memory ([0006] The method may include suspending, by the guest operating system, running of the application, and causing an exit to the virtual machine monitor, then storing (i.e., in “memory”), by the virtual machine monitor, resumption information of the application, where the resumption information may include a context of the application or a storage address of the context, and next, restoring, by the virtual machine monitor, the context of the application based on the resumption information, and causing an entry to the guest operating system, so that running of the application may be resumed).

Regarding claims 12 and 13, they comprise limitations similar to claims 6-7, and are therefore rejected for similar rationale.

Claims 8, 9, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over ANG, as applied to claims 1 and 10 above, and in further view of TADOKORO et al., Pub. No. US 2021/0294528 A1 (hereafter TADOKORO).

Regarding claim 8, while ANG discusses switching between user space and kernel space, ANG does not explicitly teach: saving the userspace context to storage. However, in analogous art that similarly discusses switching between user space and kernel space, TADOKORO teaches: saving the userspace context to storage ([0069] The user space thread 103 includes a plurality of coroutines (coroutines (1) to (3) in the example of FIG. 2). A coroutine (also described as a small execution row) is a type of programming structure. The coroutine allows processing execution to be suspended and resumed during execution. The host 2 can execute only one coroutine simultaneously. By using a function called context switch, the host 2 can switch from one coroutine to another coroutine. The coroutine is a light thread in which a context switch is possible in a few ns. During a context switch, the user space thread 103 saves, in a stack, a memory device (register) (i.e., “userspace context” saved in “memory”) inside the host CPU 20, and switches a stack pointer).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined TADOKORO’s teaching of saving userspace context in memory, with ANG’s teaching of switching between userspace and kernel, to realize, with a reasonable expectation of success, a system that switches between userspace and kernel, as in ANG, and in doing so, saves userspace context, as in TADOKORO. A person having ordinary skill would have been motivated to make this combination to enable user space coroutines to properly resume after switching, using stored context.

Regarding claim 9, TADOKORO further teaches: the userspace context is stored on a userspace stack ([0069] The user space thread 103 includes a plurality of coroutines (coroutines (1) to (3) in the example of FIG. 2). A coroutine (also described as a small execution row) is a type of programming structure. The coroutine allows processing execution to be suspended and resumed during execution. The host 2 can execute only one coroutine simultaneously. By using a function called context switch, the host 2 can switch from one coroutine to another coroutine. The coroutine is a light thread in which a context switch is possible in a few ns. During a context switch, the user space thread 103 saves, in a stack, a memory device (register) (i.e., “userspace context” saved in “memory”) inside the host CPU 20, and switches a stack pointer).

Regarding claims 14-15, they comprise limitations similar to claims 8-9, and are therefore rejected for similar rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL W AYERS, whose telephone number is (571) 272-6420. The examiner can normally be reached M-F, 8:30 AM - 5 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL W AYERS/
Primary Examiner, Art Unit 2195
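The TADOKORO coroutine mechanism mapped to claims 8-9 (push register state onto a userspace stack, then switch the stack pointer) can be caricatured in a few lines. This is a pure-Python stand-in: register state is modeled as a dict rather than hardware registers, and all names are invented for the sketch:

```python
# Illustrative coroutine-style userspace context switch (claims 8-9
# mapping): the running coroutine's "registers" are saved on its own
# userspace stack, and execution resumes from the target coroutine's
# previously saved state. Structures here are assumptions, not
# TADOKORO's implementation.

class Coroutine:
    def __init__(self, name):
        self.name = name
        self.stack = []  # userspace stack holding saved contexts

def context_switch(current, current_regs, target):
    # Save the userspace context on the current coroutine's stack.
    current.stack.append(dict(current_regs))
    # "Switch the stack pointer": resume the target's saved context,
    # if it has one; otherwise start it with an empty context.
    return target.stack.pop() if target.stack else {}

a, b = Coroutine("a"), Coroutine("b")
b.stack.append({"pc": 42})              # b was suspended earlier
resumed = context_switch(a, {"pc": 7}, b)
assert resumed == {"pc": 42}            # b resumes where it stopped
```

Because the save lands on a per-coroutine userspace stack, this also mirrors claim 9's limitation that the userspace context is stored on a userspace stack.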

Prosecution Timeline

Aug 30, 2023: Application Filed
Dec 09, 2025: Non-Final Rejection (§101, §102, §103) — current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547446: Computing Device Control of a Job Execution Environment Based on Performance Regret of Thread Lifecycle Policies (granted Feb 10, 2026; 2y 5m to grant)
Patent 12498950: Signal Processing Device and Display Apparatus for Vehicle Using Shared Memory to Transmit Ethernet and Controller Area Network Data Between Virtual Machines (granted Dec 16, 2025; 2y 5m to grant)
Patent 12493497: Detection and Handling of Excessive Resource Usage in a Distributed Computing Environment (granted Dec 09, 2025; 2y 5m to grant)
Patent 12461768: Configuring Metric Collection Based on Application Information (granted Nov 04, 2025; 2y 5m to grant)
Patent 12423149: Lock-Free Work-Stealing Thread Scheduler (granted Sep 23, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
Grant Probability With Interview: 99% (+56.2%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 287 resolved cases by this examiner. Grant probability derived from career allow rate.
