Prosecution Insights
Last updated: April 19, 2026
Application No. 17/564,166

WORKLOAD AWARE VIRTUAL PROCESSING UNITS

Status: Non-Final OA (§103)
Filed: Dec 28, 2021
Examiner: ONAT, UMUT
Art Unit: 2194
Tech Center: 2100 — Computer Architecture & Software
Assignee: Advanced Micro Devices, Inc.
OA Round: 5 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79%, above average (415 granted / 523 resolved; +24.3% vs TC avg)
Interview Lift: +28.7% across resolved cases with interview
Avg Prosecution: 3y 0m (35 applications currently pending)
Total Applications: 558, across all art units

Statute-Specific Performance

§101: 14.3% (-25.7% vs TC avg)
§103: 42.1% (+2.1% vs TC avg)
§102: 15.6% (-24.4% vs TC avg)
§112: 18.5% (-21.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 523 resolved cases.
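The per-statute deltas above are simple differences against a Tech Center baseline of roughly 40%, which can be back-computed from the displayed figures (e.g. 14.3% less 25.7% shortfall implies a 40% average). A minimal sketch, assuming that inferred baseline:

```python
# Sketch: reproducing the "vs TC avg" deltas in the table above.
# The 40% Tech Center baseline is not stated directly; it is inferred
# from the displayed deltas (e.g. 14.3% - 40.0% = -25.7%).
tc_avg = 40.0  # inferred Tech Center average, in percent (assumption)

rates = {"§101": 14.3, "§103": 42.1, "§102": 15.6, "§112": 18.5}
for statute, rate in rates.items():
    delta = rate - tc_avg
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```

Each printed delta matches the table, which is a useful consistency check on the dashboard's per-statute figures.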

Office Action

§103
DETAILED ACTION

Claims 1, 2, 8, and 10-13 are amended. Claim 9 is cancelled. Claims 1-8 and 10-21 are pending in the application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Examiner’s Notes

The Examiner cites particular sections in the references as applied to the claims below for the convenience of the applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant(s) fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/12/2026 has been entered.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1, 3-8, 10, 12, and 14-21 are rejected under 35 U.S.C. 103 as being unpatentable over Kwon et al. (US 2018/0210530 A1; from IDS filed on 11/12/2022; hereinafter Kwon) in view of Bieswanger et al. (US 2010/0037038 A1; from IDS filed on ; hereinafter Bieswanger), Maciesowicz et al. (US 2010/0321395 A1; hereinafter Maciesowicz), and Garg et al. (US 2021/0011773 A1; hereinafter Garg). With respect to claim 1, Kwon teaches: A method comprising: in response to identifying a first workload to be executed at a processing unit (see e.g. Kwon, Fig. 
1: “Graphics Processing Unit 102”; paragraph 17: “identify processing workloads at the GPU 102… monitors the graphics pipeline 114 for its utilization (e.g., how busy it is) and generates a busy percentage metric representing, for example, a percentage level of activity of the CUs 122”; and paragraph 25: “average activity percentage metric (which may be expressed in percentage terms ranging from 0-100%) represents, for example, a percentage level of activity of compute units in the GPU”): configuring the processing unit to operate in a first power mode (see e.g. Kwon, paragraph 22: “performs dynamic power level management by periodically determining a new GPU setting (i.e., power level) based on the past behavior of the processing system 100 and a current and/or upcoming workload”; and paragraph 26: “After the calculated average activity percentage metric reaches the pre-determined activity threshold (i.e., indicative of a higher level of processing activity at the GPU), the firmware 204 determines whether to adjust the power level of the GPU”), in which a first subset of processing elements (see e.g. Kwon, paragraph 11: “active… compute units”) of the processing unit (see e.g. Kwon, Fig. 1: “Graphics Processing Unit 102”, “Compute Units 122”; paragraph 25: “a percentage level of activity of compute units in the GPU”; and paragraph 26: “After the calculated average activity percentage metric reaches the pre-determined activity threshold (i.e., indicative of a higher level of processing activity at the GPU), the firmware 204 determines whether to adjust the power level of the GPU”) operate in a low-power mode (see e.g. 
Kwon, paragraph 27: “adjusting the power level of the GPU by decreasing the operating frequency and/or voltage supplied to the GPU”; and paragraph 34: “a power savings oriented setting in which power levels are allowed to decrease”), and Note that, activity percentage of the GPU in power savings oriented setting indicates a percentage of the CUs 122 are active in the power saving mode (i.e. operating in low power mode). Kwon does not but Bieswanger teaches: one or more additional processing elements (see e.g. Bieswanger, paragraph 40: “a smaller set of cores”; paragraph 51: “active processor cores”; and Fig. 2B: “C1 241-C5 245”) operate in a higher-power mode than the processing elements (see e.g. Bieswanger, paragraph 40: “cores with little or no work”; paragraph 51: “cores 246, 247, and 248 may be executing no instructions”; and Fig. 2B: “C6 246-C8 248”) of the first subset (see e.g. Bieswanger, paragraph 40: “dynamic core pool management, virtual machine manager 114 may detect that the total load on cores 142, 143, 152, and 153 is sufficiently low, consolidate the execution of instructions onto a smaller set of cores, and switch the cores with little or no work from high power states into low power states”; paragraph 51: “in the transition from FIG. 2A to FIG. 2B, virtual machine manager 230 may shift the loads from cores 246, 247, and 248 over to cores 241-245. As a result of shifting the loads and remapping the virtual processing units, cores 246, 247, and 248 may be executing no instructions. Consequently, virtual machine manager 230 may conserve power by switching cores 246, 247, and 248 from high power states to low power states”; and Fig. 2B); and wherein the first identifier specifies (see e.g. Bieswanger, paragraph 51: “remapping the virtual processing units”; and paragraph 48: “mapping of virtual processors and virtual processing units to different processor cores”) a set of available resources (see e.g. Bieswanger, Fig. 
2B: “C1 241-C5 245”; and paragraph 43: “Processor 250 has cores 241 and 242; processor 252 has cores 243 and 244; processor 254 has cores 245”) that includes only the one or more additional processing elements that operate in the higher-power mode (see e.g. Bieswanger, paragraph 51: “active processor cores… shift the loads from cores 246, 247, and 248 over to cores 241-245. As a result of shifting the loads and remapping the virtual processing units, cores 246, 247, and 248 may be executing no instructions. Consequently, virtual machine manager 230 may conserve power by switching cores 246, 247, and 248 from high power states to low power states”; and Fig. 2B). Note that, the mappings between the VPUs and physical processors (see e.g. Bieswanger, paragraphs 44-45) inherently discloses VPU identifiers (i.e. a first identifier) and physical processor identifiers for the mapping. Kwon and Bieswanger are analogous art because they are in the same field of endeavor: power management associated with processing elements (e.g. compute units, processor cores, etc.). Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Kwon with the teachings of Bieswanger. The motivation/suggestion would be to improve power management associated with the processing units (see e.g. Bieswanger, paragraph 40). Furthermore, Kwon does not but Maciesowicz teaches: exposing the processing unit in the first power mode (see e.g. Maciesowicz, paragraph 66: “low-power GPU 30b”) to a device driver (see e.g. Maciesowicz, Fig. 12: “200”; paragraph 65: “driver 200”) as a first virtual processing unit (see e.g. Maciesowicz, Fig. 12: “Virtual GPU 202”; paragraph 65: “driver 200 of the present embodiment may incorporate and provide the functions of the virtual frame buffer drivers (e.g., element 156 of FIG. 8). Further, due to the daisy-chained configuration, as shown in FIG. 
11, each of virtual display devices 182, 184, and 186 may be driven by the single virtual GPU 202, which may interpret and route function calls to GPU 30”; and paragraph 66: “If less intensive graphics processing application is being run, virtual GPU 202 may route function calls to low-power GPU 30b”) Kwon and Maciesowicz are analogous art because they are in the same field of endeavor: power management associated with processing units, such as GPUs. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Kwon with the teachings of Maciesowicz. The motivation/suggestion would be to improve resource utilization by implementing virtualized resources. Even further, Kwon does not but Garg teaches: using a first identifier that corresponds to the first virtual processing unit (see e.g. Garg, paragraph 21: “an identifier or name of each virtual machine 118”) and is different from a second identifier that corresponds to the processing unit (see e.g. Garg, paragraph 21: “an identifier or location of a GPU 115”), Kwon and Garg are analogous art because they are in the same field of endeavor: managing and allocating virtual devices, such as virtual GPUs. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Kwon with the teachings of Garg. The motivation/suggestion would be to improve robustness of resource allocations. With respect to claim 3, Kwon as modified teaches: The method of claim 1, further comprising: identifying the first workload based on metadata (see e.g. Kwon, paragraph 10: “inputs such as identification of the particular processing workloads”; and paragraph 8: “data representative of the current computing environment (e.g., type of workload”) provided by an application associated with the first workload (see e.g. 
Kwon, paragraph 10: “different workloads for different applications”; and paragraph 8: “type of workload requested”). With respect to claim 4, Kwon as modified teaches: The method of claim 3, wherein the metadata indicates at least one of a number of draw calls, a number of thread dispatches (see e.g. Kwon, paragraph 11: “expected future processing workload is identified based on a number of threads scheduled for execution at the GPU”), a number of graphics primitives, a number of workgroups, and a number of shader instructions to be executed at the processing unit. With respect to claim 5, Kwon as modified teaches: The method of claim 3, wherein identifying the first workload comprises identifying the first workload (see e.g. Kwon, paragraph 8: “particular workloads that the GPU is currently processing. In each measurement cycle, the GPU driver takes input from both measured hardware performance metrics (e.g., average utilization, temperature, and power that accumulated in the previous cycle)”; and paragraph 10: “consideration of metrics including run-time hardware performance and inputs such as identification of the particular processing workloads”) based on an average of the metadata (see e.g. Kwon, paragraph 23: “an average busy percentage metric… an average temperature metric… an average accumulated power consumed”) provided by the application over time (see e.g. Kwon, paragraph : “in each measurement cycle, the hardware signals from the GPU performance module 118 are used to calculate one or more of an average busy percentage metric from the utilization monitor 124, an average temperature metric from the one or more temperature sensors 126, and an average accumulated power consumed during the measurement cycle from the one or more power sensors 128. Other inputs such as calculated performance measurements (e.g., FPS, throughput, submissions per unit time)”; and paragraph 10: “different workloads for different applications”). 
With respect to claim 6, Kwon as modified teaches: The method of claim 1, further comprising: identifying the first workload based on a stored profile (see e.g. Kwon, paragraph 9: “previous measurement cycle”; and paragraph 22: “past behavior of the processing system 100”) of an application associated with the first workload (see e.g. Kwon, paragraph 10: “different workloads for different applications”; paragraph 9: “identifying a first performance metric associated with processing workloads at the processing system for a consecutive number of measurement cycles. The consecutive number of measurement cycles includes… at least one previous measurement cycle”; paragraph 17: “monitoring of performance characteristics at the graphics pipelines 114 and at the scheduler 116 to identify processing workloads at the GPU 102”; and paragraph 22: “based on the past behavior of the processing system 100 and a current and/or upcoming workload”). With respect to claim 7, Kwon as modified teaches: The method of claim 1, further comprising: identifying the first workload based on a runtime profile (see e.g. Kwon, paragraph 10: “consideration of metrics including run-time hardware performance and inputs such as identification of the particular processing workloads”; and paragraph 36: “an identification of the type of workloads/use cases being processed by the GPU 102 (e.g., low-activity workloads such as general compute functions or high-activity workloads such as analytics, visualization, 3D image rendering, artificial intelligence processing, etc.)”) of an application associated with the first workload (see e.g. 
Kwon, paragraph 10: “different workloads for different applications”; and paragraph 36: “an identification of the type of workloads/use cases being processed by the GPU 102 (e.g., low-activity workloads such as general compute functions or high-activity workloads such as analytics, visualization, 3D image rendering, artificial intelligence processing, etc.)”; paragraph 9: “a current measurement cycle”). With respect to claim 8, Kwon as modified teaches: The method of claim 1, further comprising: selecting the first subset of processing elements based on a software request (see e.g. Kwon, paragraph 16: “scheduler 116 buffers each received request until one or more of the CUs 122 is available to execute the thread. When one or more of the CUs 122 is available to execute a thread, the scheduler 116 initiates execution of the thread by, for example, providing an address of an initial instruction of the thread to a fetch stage of the one or more of the CU (e.g., CU 122(1))”) received from the device driver (see e.g. Kwon, paragraph 14: “GPU driver 110 supplies graphics workloads to the graphics pipeline 114 for processing”). With respect to claim 10, Kwon as modified teaches: A method, comprising: setting a processing unit (see e.g. Kwon, Fig. 1: “Graphics Processing Unit 102”) to a first configuration (see e.g. Kwon, paragraph 22: “performs dynamic power level management by periodically determining a new GPU setting (i.e., power level) based on the past behavior of the processing system 100 and a current and/or upcoming workload”; and paragraph 26: “After the calculated average activity percentage metric reaches the pre-determined activity threshold (i.e., indicative of a higher level of processing activity at the GPU), the firmware 204 determines whether to adjust the power level of the GPU”) based on a first workload to be executed at the processing unit (see e.g. 
Kwon, paragraph 17: “identify processing workloads at the GPU 102… monitors the graphics pipeline 114 for its utilization (e.g., how busy it is) and generates a busy percentage metric representing, for example, a percentage level of activity of the CUs 122”; and paragraph 25: “average activity percentage metric (which may be expressed in percentage terms ranging from 0-100%) represents, for example, a percentage level of activity of compute units in the GPU”), the first configuration associated with a first subset of processing elements (see e.g. Kwon, paragraph 11: “active… compute units”) of the processing unit (see e.g. Kwon, Fig. 1: “Graphics Processing Unit 102”, “Compute Units 122”; paragraph 25: “a percentage level of activity of compute units in the GPU”; and paragraph 26: “After the calculated average activity percentage metric reaches the pre-determined activity threshold (i.e., indicative of a higher level of processing activity at the GPU), the firmware 204 determines whether to adjust the power level of the GPU”) operating in a low-power mode (see e.g. Kwon, paragraph 27: “adjusting the power level of the GPU by decreasing the operating frequency and/or voltage supplied to the GPU”; and paragraph 34: “a power savings oriented setting in which power levels are allowed to decrease”) and Note that, activity percentage of the GPU in power savings oriented setting indicates a percentage of the CUs 122 are active in the power saving mode (i.e. operating in low power mode). Kwon does not but Bieswanger teaches: one or more additional processing elements (see e.g. Bieswanger, paragraph 40: “a smaller set of cores”; paragraph 51: “active processor cores”; and Fig. 2B: “C1 241-C5 245”) of the processing unit operating in a higher-power mode than the processing elements (see e.g. Bieswanger, paragraph 40: “cores with little or no work”; paragraph 51: “cores 246, 247, and 248 may be executing no instructions”; and Fig. 
2B: “C6 246-C8 248”) of the first subset (see e.g. Bieswanger, paragraph 40: “dynamic core pool management, virtual machine manager 114 may detect that the total load on cores 142, 143, 152, and 153 is sufficiently low, consolidate the execution of instructions onto a smaller set of cores, and switch the cores with little or no work from high power states into low power states”; paragraph 51: “in the transition from FIG. 2A to FIG. 2B, virtual machine manager 230 may shift the loads from cores 246, 247, and 248 over to cores 241-245. As a result of shifting the loads and remapping the virtual processing units, cores 246, 247, and 248 may be executing no instructions. Consequently, virtual machine manager 230 may conserve power by switching cores 246, 247, and 248 from high power states to low power states”; and Fig. 2B); and wherein the first identifier specifies (see e.g. Bieswanger, paragraph 51: “remapping the virtual processing units”; and paragraph 48: “mapping of virtual processors and virtual processing units to different processor cores”) a set of available resources (see e.g. Bieswanger, Fig. 2B: “C1 241-C5 245”; and paragraph 43: “Processor 250 has cores 241 and 242; processor 252 has cores 243 and 244; processor 254 has cores 245”) that includes only the one or more additional processing elements that operate in the higher-power mode (see e.g. Bieswanger, paragraph 51: “active processor cores… shift the loads from cores 246, 247, and 248 over to cores 241-245. As a result of shifting the loads and remapping the virtual processing units, cores 246, 247, and 248 may be executing no instructions. Consequently, virtual machine manager 230 may conserve power by switching cores 246, 247, and 248 from high power states to low power states”; and Fig. 2B). Note that, the mappings between the VPUs and physical processors (see e.g. Bieswanger, paragraphs 44-45) inherently discloses VPU identifiers (i.e. 
first identifiers) and physical processor identifiers for the mapping. Kwon and Bieswanger are analogous art because they are in the same field of endeavor: power management associated with processing elements (e.g. compute units, processor cores, etc.). Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Kwon with the teachings of Bieswanger. The motivation/suggestion would be to improve power management associated with the processing units (see e.g. Bieswanger, paragraph 40). Furthermore, Kwon does not but Maciesowicz teaches: exposing, to a device driver (see e.g. Maciesowicz, paragraph 65: “reference frame buffer driver 200”), the processing unit in the first configuration (see e.g. Maciesowicz, paragraph 66: “If less intensive graphics processing application is being run, virtual GPU 202 may route function calls to low-power GPU 30b”) as a first virtual processing unit (see e.g. Maciesowicz, paragraph 66: “virtual GPU 202”; and paragraph 65: “reference driver 200 of the present embodiment may incorporate and provide the functions of the virtual frame buffer drivers (e.g., element 156 of FIG. 8). Further, due to the daisy-chained configuration, as shown in FIG. 11, each of virtual display devices 182, 184, and 186 may be driven by the single virtual GPU 202”) Kwon and Maciesowicz are analogous art because they are in the same field of endeavor: power management associated with processing units, such as GPUs. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Kwon with the teachings of Maciesowicz. The motivation/suggestion would be to improve resource utilization by implementing virtualized resources. Even further, Kwon does not but Garg teaches: using a first identifier that corresponds to the first virtual processing unit (see e.g. 
Garg, paragraph 21: “an identifier or name of each virtual machine 118”) and is different from a second identifier that corresponds to the processing unit (see e.g. Garg, paragraph 21: “an identifier or location of a GPU 115”), Kwon and Garg are analogous art because they are in the same field of endeavor: managing and allocating virtual devices, such as virtual GPUs. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Kwon with the teachings of Garg. The motivation/suggestion would be to improve robustness of resource allocations. With respect to claims 12 and 14-19: Claims 12 and 14-19 are directed to a processing unit comprising a set of processing elements, a power control module, and a scheduler configured to implement active functions corresponding to the method disclosed in claims 1 and 3-8; please see the rejections directed to claims 1 and 3-8 above which also cover the limitations recited in claims 12 and 14-19. Note that, Kwon also discloses a Graphic Processing Unit 102 comprising a set of compute units 122, a power and clock controller 120, and a scheduler 116 (see e.g. Kwon, Fig. 1) to implement the method disclosed in claims 1 and 3-8. With respect to claim 20, Kwon as modified teaches: The processing unit of claim 12, wherein the scheduling circuit is configured to: select the first subset of processing elements (see e.g. Kwon, paragraph 16: “scheduler 116 buffers each received request until one or more of the CUs 122 is available to execute the thread. When one or more of the CUs 122 is available to execute a thread, the scheduler 116 initiates execution of the thread by, for example, providing an address of an initial instruction of the thread to a fetch stage of the one or more of the CU (e.g., CU 122(1))”) Kwon does not but Maciesowicz teaches: from a set of programmable virtual processing unit profiles (see e.g. 
Maciesowicz, paragraph 42: “selection of a particular display profile”; and paragraph 45: “selected display profile may be used to configure a generic virtual display port based upon the selection. For instance, if a selected display profile corresponds to a VGA-type display, a reference frame buffer driver, a virtual frame buffer driver, and the virtual display interface are configured to simulate a VGA port connected to a simulated VGA display device”). Kwon and Maciesowicz are analogous art because they are in the same field of endeavor: power management associated with processing units, such as GPUs. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Kwon with the teachings of Maciesowicz. The motivation/suggestion would be to improve resource utilization by implementing virtualized resources. With respect to claim 21, Kwon as modified teaches: The method of claim 1, Kwon does not but Garg teaches: wherein the first identifier is a device ID corresponding to the first virtual processing unit (see e.g. Garg, paragraph 21: “an identifier or name of each virtual machine 118”) and the second identifier is a device ID corresponding to the processing unit (see e.g. Garg, paragraph 21: “an identifier or location of a GPU 115”). Kwon and Garg are analogous art because they are in the same field of endeavor: managing and allocating virtual devices, such as virtual GPUs. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Kwon with the teachings of Garg. The motivation/suggestion would be to improve robustness of resource allocations.

Response to Arguments

Applicant's arguments filed 12/15/2025 have been fully considered but they are not persuasive. 
In detail: (i) Regarding Applicant’s arguments with respect to claim 1, note that at least Bieswanger discloses mapping only the active cores (i.e. processing elements that are in high-power mode) to virtual processing units (VPUs). Specifically, in Fig. 2B, processing cores C1-C5 are mapped to VPUs 232-240 (see e.g. Bieswanger, paragraphs 44, 51; Fig. 2B), whereas inactive processing cores C6-C8 (i.e. processing elements in low-power mode) are not mapped to any VPUs (see e.g. Bieswanger, paragraph 51). Further note that such a mapping between the VPUs and the processing cores inherently discloses VPU identification information for establishing such mapping. As such, at least Bieswanger teaches the limitation “wherein the first identifier specifies a set of available resources that includes only the one or more additional processing elements that operate in the higher-power mode” as recited in claim 1. For more details, please see the corresponding rejection above.

Allowable Subject Matter

Claims 2, 11, and 13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: The prior art does not explicitly disclose “exposing the processing unit in the second power mode to the device driver as a second virtual processing unit using a third identifier that corresponds to the second virtual processing unit and is different from the first identifier and the second identifier, wherein the second virtual processing unit is different from the first virtual processing unit, wherein the third identifier specifies a set of available resources that includes only the one or more additional processing elements that operate in the higher-power mode” as recited in claims 2, 11, and 13. 
CONCLUSION The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Wu et al. (US 2021/0089423 A1) discloses graphics cores that are configured between different performance levels at runtime to accommodate different virtual machines, the changes in performance levels resulting in different power usage levels for the graphics cores (see paragraphs 29-30). Contact Information Any inquiry concerning this communication or earlier communications from the examiner should be directed to Umut Onat whose telephone number is (571)270-1735. The examiner can normally be reached M-Th 9:00-7:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kevin L Young can be reached on (571) 270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /UMUT ONAT/Primary Examiner, Art Unit 2194

Prosecution Timeline

Dec 28, 2021
Application Filed
Apr 20, 2024
Non-Final Rejection — §103
Jun 21, 2024
Interview Requested
Jun 27, 2024
Applicant Interview (Telephonic)
Jun 27, 2024
Examiner Interview Summary
Jul 19, 2024
Response Filed
Oct 21, 2024
Final Rejection — §103
Feb 18, 2025
Response after Non-Final Action
Apr 15, 2025
Request for Continued Examination
Apr 20, 2025
Response after Non-Final Action
May 02, 2025
Non-Final Rejection — §103
Jul 09, 2025
Response Filed
Oct 08, 2025
Final Rejection — §103
Nov 13, 2025
Interview Requested
Nov 20, 2025
Examiner Interview Summary
Nov 20, 2025
Applicant Interview (Telephonic)
Dec 15, 2025
Response after Non-Final Action
Jan 12, 2026
Request for Continued Examination
Jan 23, 2026
Response after Non-Final Action
Feb 06, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602271
NON-BLOCKING RING EXCHANGE ALGORITHM
2y 5m to grant Granted Apr 14, 2026
Patent 12572397
REAL-TIME EVENT DATA REPORTING ON EDGE COMPUTING DEVICES
2y 5m to grant Granted Mar 10, 2026
Patent 12572645
SYSTEMS AND METHODS FOR MANAGING SETTINGS BASED UPON USER PERSONA USING HETEROGENEOUS COMPUTING PLATFORMS
2y 5m to grant Granted Mar 10, 2026
Patent 12566647
System And Method for Implementing Micro-Application Environments
2y 5m to grant Granted Mar 03, 2026
Patent 12547481
SYSTEMS, METHODS, AND DEVICES FOR ACCESSING A COMPUTATIONAL DEVICE KERNEL
2y 5m to grant Granted Feb 10, 2026
Based on the examiner's 5 most recent grants in similar technology.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 79%
With Interview: 99% (+28.7%)
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 523 resolved cases by this examiner. Grant probability derived from career allow rate.
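The headline probabilities in this panel follow from the examiner's career data cited above. A hedged sketch of the arithmetic; the additive interview lift and the 99% display cap are assumptions about the dashboard's formula, not a documented method:

```python
# Sketch: deriving the displayed probabilities from the career data above.
# Additive lift and the 99% cap are assumptions, not the vendor's formula.
granted, resolved = 415, 523      # "415 granted / 523 resolved"
interview_lift = 0.287            # "+28.7% Interview Lift"

allow_rate = granted / resolved   # ~0.793, displayed as 79%
with_interview = min(allow_rate + interview_lift, 0.99)  # capped at 99%

print(f"{allow_rate:.0%}")        # 79%
print(f"{with_interview:.0%}")    # 99%
```

Note that the uncapped sum exceeds 100%, so some cap or diminishing-returns adjustment must be in play for the displayed 99% figure.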
