Prosecution Insights
Last updated: April 19, 2026
Application No. 18/088,955

JOB SUBMISSION ALIGNMENT WITH WORLD SWITCH

Final Rejection §103

Filed: Dec 27, 2022
Examiner: NGUYEN, VAN H
Art Unit: 2199
Tech Center: 2100 — Computer Architecture & Software
Assignee: ATI Technologies ULC
OA Round: 2 (Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 89% (759 granted / 851 resolved), +34.2% vs TC avg, above average
Interview Lift: +18.4% across resolved cases with interview
Typical Timeline: 3y 4m average prosecution, 18 applications currently pending
Career History: 869 total applications across all art units
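The headline figures above are simple ratios of the raw counts shown. A minimal sketch of that arithmetic, noting that the 55% Tech Center baseline is back-computed from the reported +34.2% delta rather than taken from any published source:

```python
# Sketch of how the Examiner Intelligence headline stats reduce to the raw
# counts shown above. The TC baseline is an assumption implied by the delta.
granted, resolved = 759, 851
allow_rate = granted / resolved
tc_avg = allow_rate - 0.342  # implied baseline, roughly 55%

print(f"Career allow rate: {allow_rate:.1%}")    # ~89.2%, displayed as 89%
print(f"vs TC avg: {allow_rate - tc_avg:+.1%}")  # +34.2%
```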

Statute-Specific Performance

Statute    Rate     vs TC avg
§101       23.1%    -16.9%
§103       24.0%    -16.0%
§102       27.2%    -12.8%
§112       10.9%    -29.1%

Tech Center average is an estimate • Based on career data from 851 resolved cases
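Because each statute row reports both this examiner's rate and its delta against the Tech Center average, the baseline each row implies can be recovered by simple subtraction. A quick check using the table values (variable names are illustrative):

```python
# Recover the Tech Center baseline implied by each statute row: the row
# shows the examiner's rate and a delta vs the TC average, so the baseline
# is rate - delta. All four rows imply the same ~40% estimate.
rows = {
    "§101": (23.1, -16.9),
    "§103": (24.0, -16.0),
    "§102": (27.2, -12.8),
    "§112": (10.9, -29.1),
}
implied = {s: round(rate - delta, 1) for s, (rate, delta) in rows.items()}
print(implied)  # every statute implies a 40.0% TC baseline
```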

Office Action

§103
DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the amendment filed 12/08/2025. Claims 1-20 are pending in this application.

Claim Rejections - 35 USC § 103

2. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kovacevic (US 20200409732) in view of Cheng et al. (US 20180113731). It is noted that any citations to specific pages, columns, paragraphs, lines, or figures in the prior art references and any interpretation of the reference should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP 2123.
As to claim 1: Kovacevic teaches a method (claim 13: a method) comprising:

assigning, at a host executing at a parallel processor, a first time slice to a first virtual function of a plurality of virtual functions ([0016]: Processing units such as graphics processing units (GPUs) support virtualization that allows multiple virtual machines to use the hardware resources of the GPU…The virtual environment implemented on the GPU provides virtual functions to other virtual components implemented on a physical machine. A single physical function implemented in the GPU is used to support one or more virtual functions. The physical function allocates the virtual functions to different virtual machines on the physical machine on a time-sliced basis. For example, the physical function allocates a first virtual function to a first virtual machine in a first time interval and a second virtual function to a second virtual machine in a second, subsequent time interval; [0112]: The hypervisor schedules the time slices to the running VM-VFs on the GPU. The selection of a guest VM to run subsequent to a currently executing guest VM, i.e. a GPU switch, is achieved either by hypervisor or by a GPU scheduling switch. When a virtual function obtains its time slice on the GPU, the corresponding guest VM owns the GPU resource and the graphics driver which is running within this guest VM behaves as if it owns the GPU solely. The guest VM responds to all command submission and register accesses during its allocated time slice); and

sending a signal from a kernel mode driver to a user mode driver for the first virtual function, wherein the signal indicates when an application executing at the first virtual function is to start generating rendering jobs at a central processing unit (CPU) for a next frame ([0019]: A scheduler in the GPU schedules the guest VM to execute the virtual function at a scheduled time…A world switch is performed at the scheduled time to switch contexts from a context defined for a previously executing guest VM to a context for the current guest VM, e.g., as defined in the context registers in the subset of the registers for the current guest VM…After the world switch is complete, the current guest VM begins executing the virtual function to perform hardware acceleration operations on the frames in the frame buffer registers. As discussed herein, examples of the hardware acceleration operations include multimedia decoding, multimedia encoding, video decoding, video encoding, audio decoding, audio encoding, and the like. The scheduler schedules the guest VM for a time interval and the guest VM has exclusive access to the virtual function and the subset of registers during the time interval; [0113]: In processing units that do not contain a Multimedia Scheduler (MMSCH), programming of multimedia engines and their lifecycle control is accomplished by the main x64 or x86 CPU. In such mode, video encode and/or video decode firmware loading and initialization is accomplished by the virtual function driver, at the time when it is initially loaded. At run time, each loaded virtual function instance has its own firmware image and performs firmware and register context restore, retrieval of only one job from its own queue, encodes a full frame and performs context save. When the virtual function instance reaches the idle time, it notifies the hypervisor that the hypervisor may load the next virtual function; [0151]: The second portion 1300 illustrates messages exchanged between a video BIOS (VBIOS), a hypervisor (HV), a kernel mode driver topology translation layer for a physical function (TTL-PF), a multimedia UMD for a virtual function, a kernel mode driver TTL for the virtual function (TTL-VF), and a kernel mode driver (KMD) for the virtual function; [0152]: During normal runtime operation, a multimedia application (e.g., the UMD) in a selected time interval submits an encode or decode job request to TTL-VF (via the message 1305), which notifies an appropriate node to submit and execute the requested job by transmitting the message 1310 to the KMD; [0154]: Upon completion of one submitted job for a virtual function, the TTL-VF signals the multimedia scheduler that a job has been executed on the virtual function. The multimedia scheduler deactivates the virtual function. The multimedia scheduler then performs a world switch to a next active virtual function).

Kovacevic, however, does not explicitly teach the following additional limitations:

Cheng teaches sending a signal from a kernel mode driver to a user mode driver for the first virtual function prior to a world switch between the first time slice and a second time slice assigned to a second virtual function (FIGS. 1-4, [0006]: after a new virtual function finishes initializing, a GPU scheduler triggers world switches between all already active VFs (e.g., previously initialized VFs) which have already finished initialization such that each VF is allocated GPU time to handle any accumulated commands. Allowing the previously initialized VFs to process accumulated commands and perform already scheduled computing tasks helps to prevent the VFs from being labeled as inactive or hung, thereby avoiding unnecessary resets of the VFs; [0032]: the VFs 314 is improperly identified as being hung even with additional time periods allocated before TDR if the system 300 has more than one new VF to be initialized. The guest OS of VMs 302 will eventually trigger TDR cycles if the VF drivers 310 cannot process accumulated commands before allocated time periods run out. Unnecessary TDR cycles are avoided by giving the already initialized VFs GPU time to perform its computing tasks. For example, after a new virtual function finishes initializing, the GPU scheduler 318 triggers world switches between all already active VFs 314 which have already finished initialization such that each VF 314 are allocated GPU time to handle any accumulated commands before reporting the last completed fence instruction ID back to the guest OS. During the world switches, the hypervisor 308 uses PF configuration space registers to switch the GPU from one VF (e.g., VF(1)) to another (e.g., VF(2))).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Kovacevic with Cheng because it would have provided the enhanced capability for avoiding unnecessary resets of the already initialized VFs.

As to claim 2: Kovacevic teaches assigning the first time slice is based on a number of the plurality of virtual functions and a target frame rate of the application executing at the first virtual function ([0112], [0145-0147] and [0154]).

As to claim 3: Kovacevic teaches calculating a delay between consecutive time slices assigned to the first virtual function; and sending the signal based on the delay ([0119] and [0125-0127]).
As to claim 4: Kovacevic teaches sending the signal is at a predetermined offset from the world switch between a first time slice assigned to the first virtual function and a second time slice assigned to a second virtual function ([0019] and [0123-0125]).

As to claim 5: Kovacevic teaches the predetermined offset is based on a history of job preparation durations for previous frames submitted by an application executing at the first virtual function to the parallel processor ([0144], [0148], and [0150]).

As to claim 6: Kovacevic teaches the job preparation durations are measured by a job start latency, the job start latency comprising a duration from a first time of a start of work at a central processing unit (CPU) for a frame to a second time when work for the frame is ready to be sent to the parallel processor ([0021], [0037], and [0084-0086]).

As to claim 7: Kovacevic teaches a number of previous frames included in the history of job preparation durations is set by a user ([0086] and [0150]).

As to claim 8: Kovacevic teaches the predetermined offset is further based on a bias reflecting a variation in job preparation durations between frames submitted by the application ([0026] and [0056]).

As to claim 9: Kovacevic teaches a method (claim 13: a method), comprising:

setting a world switch ([0019]: A world switch) between a first time slice assigned to a first virtual function of a plurality of virtual functions and a second time slice assigned to a second virtual function of the plurality of virtual functions based on a target frame rate for applications executing at the first virtual function and the second virtual function and a number of the plurality of virtual functions ([0019]: A world switch is performed at the scheduled time to switch contexts from a context defined for a previously executing guest VM to a context for the current guest VM; [0112]: The hypervisor schedules the time slices to the running VM-VFs on the GPU. The selection of a guest VM to run subsequent to a currently executing guest VM, i.e. a GPU switch, is achieved either by hypervisor or by a GPU scheduling switch. When a virtual function obtains its time slice on the GPU, the corresponding guest VM owns the GPU resource and the graphics driver which is running within this guest VM behaves as if it owns the GPU solely. The guest VM responds to all command submission and register accesses during its allocated time slice; [0118]: Multimedia World Switch means switching between a currently running multimedia VF instance to the next multimedia VF instance. Multimedia World Switch is accomplished with several command exchanges between MMSCH firmware and UVD/VCE/VCN firmware of the currently running and next to run multimedia firmware instance; [0123]: gpu_context_switch (fcn_id, nxt_fcn_id)—the MMSCH waits for the MM engine to finish processing a job on function VFID=fcn_id and switches to process the job on the next function specified by the nxt_fcn_id argument; [0124]: gpu_enable_hw_autoscheduling (active_functions)—this command notifies the MMSCH to perform a world switch between the VM functions which are listed in the register array. During the MM engine world switch, each function in the list remains active for the time slice specified by register; [0145]: When an application on a guest OS/VM running on a virtual function loads a multimedia driver for either decode or encode use case, the loaded multimedia driver becomes aware of the current encode or decode profile and sends a request to a TTL layer of a KMD driver (in message 1206). This request can be formulated as either: [0146] 1) A current resolution of decode or encode operation indicating horizontal and vertical size and refresh rate of source (say 720p24, 1080p30, etc.) or [0147] 2) A total number of macroblocks in encoded frames or in compressed bitstream content that needs to be decoded; [0154]: Upon completion of one submitted job for a virtual function, the TTL-VF signals the multimedia scheduler that a job has been executed on the virtual function. The multimedia scheduler deactivates the virtual function. The multimedia scheduler then performs a world switch to a next active virtual function. Some embodiments of the multimedia scheduler use a round robin scheduler to activate and serve virtual functions. Other embodiments of the multimedia scheduler use dynamic priority-based scheduling where priorities are evaluated based on a type of a queue used by the corresponding virtual function. In yet other embodiments, the multimedia scheduler implements a rate monotonic scheduler serving guest VMs that have decode or encode jobs of lower resolutions (e.g., shorter job intervals) than the guest VMs that are using the priority based queue system, e.g., a time critical queue for an encode job for a Skype application with a minimal latency, or a real time queue for an encode job for a wireless display session, a general purpose encode queue for a non-real time video transcoding, or a general purpose decode queue); and

aligning submission of a job from the first virtual function to a parallel processor with a start of the first time slice ([0019]: A scheduler in the GPU schedules the guest VM to execute the virtual function at a scheduled time…After the world switch is complete, the current guest VM begins executing the virtual function to perform hardware acceleration operations on the frames in the frame buffer registers…The scheduler schedules the guest VM for a time interval and the guest VM has exclusive access to the virtual function and the subset of registers during the time interval; [0152]: During normal runtime operation, a multimedia application (e.g., the UMD) in a selected time interval submits an encode or decode job request to TTL-VF (via the message 1305), which notifies an appropriate node to submit and execute the requested job by transmitting the message 1310 to the KMD; [0154]: Upon completion of one submitted job for a virtual function, the TTL-VF signals the multimedia scheduler that a job has been executed on the virtual function. The multimedia scheduler deactivates the virtual function. The multimedia scheduler then performs a world switch to a next active virtual function).

Kovacevic, however, does not explicitly teach the following additional limitations:

Cheng teaches generation of the job at a central processing unit (CPU) precedes the start of the first time slice ([0007-0008]: The system 100 comprises multiple virtual machines (VMs) 102 that are configured in memory 104 on a host system. Resources from physical devices of the host system are shared with the VMs 102. The resources can include, for example, a graphics processor resource from GPU 106, a central processing unit resource from a CPU, a memory resource from memory, a network interface resource from network interface controller, or the like. The VMs 102 use the resources for performing operations on various data (e.g., video data, image data, textual data, audio data, display data, peripheral device data, etc.)…The hypervisor 108 controls interactions between the VMs 102 and the various physical hardware devices, such as the GPU 106. The hypervisor 108 includes software components for managing hardware resources and software components for virtualizing or emulating physical devices to provide virtual devices, such as virtual disks, virtual processors, virtual network interfaces, or a virtual GPU as further described herein for each virtual machine 102).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Kovacevic with Cheng because it would have provided the enhanced capability for avoiding unnecessary resets of the already initialized VFs.

As to claim 10: Kovacevic teaches aligning comprises: sending a signal indicating when an application executing at the first virtual function is to begin generating rendering jobs for a next frame ([0019], [0151-0152], and [0154]).

As to claim 11: Kovacevic teaches calculating a delay between consecutive time slices assigned to the first virtual function; and sending the signal based on the delay ([0119] and [0125-0127]).

As to claim 12: Kovacevic teaches sending the signal is at a predetermined offset from the world switch ([0019] and [0123-0125]).

As to claim 13: Kovacevic teaches the offset is based on a history of job preparation durations for previous frames submitted by the application executing at the first virtual function to the parallel processor ([0144], [0148], and [0150]).

As to claim 14: Kovacevic teaches the job preparation durations are measured by a job start latency, the job start latency comprising a duration from a first time of a start of work at a central processing unit (CPU) for a frame to a second time when work for the frame is ready to be sent to the parallel processor ([0021], [0037], [0084-0086], and [0113]).

As to claim 15: Kovacevic teaches a number of previous frames included in the history of job preparation durations is set by a user ([0086] and [0150]).
As to claim 16: Kovacevic teaches the predetermined offset is further based on a bias reflecting a variation in job preparation durations between frames submitted by the application ([0026] and [0056]).

As to claims 17-19: Note the rejection of claims 1, 2, and 4 above, respectively. Claims 17-19 are the same as claims 1, 2, and 4, except claims 17-19 are method claims and claims 1, 2, and 4 are device claims.

As to claim 20: Kovacevic teaches the predetermined offset is based on a history of job preparation durations at a central processing unit for previous frames submitted by the application to the parallel processor ([0144], [0148], and [0150]) and a bias reflecting a variation in job preparation durations between frames submitted by the application ([0026] and [0056]).

Response to Arguments

3. Applicant's arguments filed 12/08/2025 have been fully considered but are deemed to be moot in view of the new ground(s) of rejection necessitated by Applicant's amendments. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Contact Information

4. Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAN H. NGUYEN, whose telephone number is (571) 272-3765. The examiner can normally be reached Monday through Friday from 9:00 AM to 5:30 PM.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, LEWIS BULLOCK, can be reached at (571) 272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center or Private PAIR to authorized users only. Should you have questions about access to Patent Center or the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/VAN H NGUYEN/
Primary Examiner, Art Unit 2199
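For readers less familiar with the claimed mechanism, the offset scheme the examiner maps onto claims 4-8 (a signal sent a predetermined offset before the world switch, where the offset is derived from a history of CPU-side job preparation durations plus a bias for frame-to-frame variation) can be sketched roughly as follows. All names, the default history length, and the choice of one standard deviation as the bias are illustrative assumptions, not the application's actual implementation.

```python
from collections import deque
from statistics import mean, pstdev

class JobStartPredictor:
    """Loose sketch: decide when the KMD should signal the UMD to start
    CPU-side job generation so the job is ready at the next time slice."""

    def __init__(self, history_len=8):
        # History length is user-configurable, per claim 7.
        self.history = deque(maxlen=history_len)

    def record(self, prep_duration_ms):
        """Record one job start latency: CPU work start -> job ready for GPU."""
        self.history.append(prep_duration_ms)

    def signal_time(self, world_switch_ms):
        # Offset = average prep duration plus a variation bias
        # (one population standard deviation here, as an assumption).
        bias = pstdev(self.history) if len(self.history) > 1 else 0.0
        offset = mean(self.history) + bias
        return world_switch_ms - offset

p = JobStartPredictor()
for d in (4.0, 5.0, 6.0):
    p.record(d)
print(p.signal_time(100.0))  # signal fires ~5.8 ms before the world switch
```

The point of the bias term is that signaling at the bare average would make roughly half of all frames miss their slice; padding by the observed variation trades a little idle CPU time for far fewer missed world switches.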

Prosecution Timeline

Dec 27, 2022
Application Filed
Aug 19, 2025
Non-Final Rejection — §103
Dec 08, 2025
Response Filed
Mar 20, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602262: SHARED RESOURCE POOL WITH PERIODIC REBALANCING IN A MULTI-CORE SYSTEM
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12591467: SYSTEM AND METHOD FOR HALTING PROCESSING CORES IN A MULTICORE SYSTEM
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12591456: METHOD AND APPARATUS FOR CONTROLLING HARDWARE ACCELERATOR
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12591468: DYNAMIC MANAGEMENT OF FEATURES FOR PROCESSES EXECUTABLE ON AN INFORMATION HANDLING SYSTEM
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12585496: METHOD, APPARATUS AND COMPUTER PROGRAM FOR ACTIVATING A SCHEDULING CONFIGURATION
Granted Mar 24, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 89%
With Interview: 99% (+18.4%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate
Based on 851 resolved cases by this examiner. Grant probability derived from career allow rate.
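How the tool combines the career allow rate with the interview lift into the with-interview figure is not stated. One plausible additive model, with an assumed 99% cap, reproduces the numbers shown; both the additive form and the cap are assumptions, not the tool's documented method:

```python
# Sketch: combine the career allow rate with the interview lift, capped at
# 99% as the page reports. The additive model and the cap are assumptions.
allow_rate = 759 / 851          # ~89.2% career allow rate
interview_lift = 0.184          # +18.4 points among interviewed cases
with_interview = min(allow_rate + interview_lift, 0.99)
print(f"{with_interview:.0%}")  # matches the displayed 99%
```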
