Prosecution Insights
Last updated: April 19, 2026
Application No. 18/070,084

APPLICATION PROGRAMMING INTERFACE TO CAUSE PERFORMANCE OF ACCELERATOR OPERATIONS

Final Rejection — §102, §103
Filed: Nov 28, 2022
Examiner: ONAT, UMUT
Art Unit: 2194
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 2 (Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Grants 79% — above average
Career Allow Rate: 79% (415 granted / 523 resolved; +24.3% vs TC avg)
Interview Lift: +28.7% (strong) on resolved cases with interview
Typical Timeline: 3y 0m avg prosecution; 35 currently pending
Career History: 558 total applications across all art units
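The headline allow rate above is simple arithmetic on the career figures shown; a quick sketch using only those numbers reproduces it:

```python
# Career allow rate from the figures shown above.
granted, resolved = 415, 523
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 79.3%, displayed as 79%
```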

Statute-Specific Performance

§101: 14.3% (-25.7% vs TC avg)
§103: 42.1% (+2.1% vs TC avg)
§102: 15.6% (-24.4% vs TC avg)
§112: 18.5% (-21.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 523 resolved cases
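A quick consistency check on this table: each per-statute delta is taken against the same Tech Center baseline, and recomputing that baseline from the numbers shown confirms every row implies the same 40.0% TC average:

```python
# Allowance rate after each rejection type, with delta vs the TC average,
# exactly as listed above: statute -> (rate %, delta %).
stats = {"101": (14.3, -25.7), "103": (42.1, 2.1),
         "102": (15.6, -24.4), "112": (18.5, -21.5)}
for statute, (rate, delta) in stats.items():
    implied_tc_avg = rate - delta
    print(f"§{statute}: implied TC average = {implied_tc_avg:.1f}%")
# Every statute implies the same 40.0% baseline.
```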

Office Action

§102, §103
DETAILED ACTION

Claims 1-20 are amended. Claims 1-20 are pending in the application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Examiner’s Notes

The Examiner cites particular sections in the references as applied to the claims below for the convenience of the applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant(s) fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

Specification

Amendments to paragraphs [0155], [0167], [0169], [0359], and [0383] are fully considered and are satisfactory to overcome the objections directed to the specification in the previous Office Action. Amendments to claims 1 and 8 are fully considered and are satisfactory to overcome the rejections under 35 U.S.C. 112(b) directed to claims 1-14 in the previous Office Action. Amendments to claims 1, 8, and 15 are fully considered and are satisfactory to overcome the rejections under 35 U.S.C. 101 directed to claims 1-20 in the previous Office Action. 
Claim Objections Claims 2-7 and 17 are objected to because of the following informalities: Claims 2-7: “The processor” (line 1) should have been –The one or more processors—. Claim 17: “the stream” (line 3) should have been –a stream—. Appropriate corrections are required. Applicant is advised to review the entire claims for further needed corrections. Claim Rejections - 35 USC § 102 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claims 1-3, 5-6, 8-10, 12-17, 19, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Pavlidakis et al. (“Arax: A runtime framework for decoupling applications from heterogeneous accelerators”; Nov. 7, 2022; from IDS filed on 06/05/2024; hereinafter Pavlidakis). With respect to claim 1, Pavlidakis teaches: A processor (see e.g. page 3, column 1, paragraph 4: “GPUs, FPGAs, and CPUs”) comprising: circuitry (see e.g. page 3, column 1, paragraph 4: “GPUs, FPGAs, and CPUs”) to, in response to an invocation of an application programming interface (API) (see e.g. page 3, column 1, paragraph 6: “use the Arax API to access available accelerators” and paragraph 4: “demonstrate and evaluate Arax in an accelerator rich server environment, using GPUs, FPGAs, and CPUs”), submit one or more first operations (see e.g. page 3, column 1, paragraph 6: “application tasks”) into a queue (see e.g. page 3, column 2, paragraph 4: “Task queues: Applications issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues”) to be performed by a first type of accelerator (see e.g. 
page 3, column 1, paragraph 6: “use the Arax API to access available accelerators… assigns dynamically and asynchronously application tasks to accelerators”; page 1, column 1, paragraph 2: “considering physical details, including the number and type of accelerators”) within a heterogenous processor (see e.g. page 2, column 1, paragraph 1: “heterogeneous setups with multiple accelerators”; and page 3, column 1, paragraph 1: “a mechanism for spatial sharing of heterogeneous accelerators”) and to submit one or more second operations (see e.g. page 3, column 1, paragraph 6: “application tasks”) into the queue to be performed by a second type of accelerator (see e.g. page 4, column 2, paragraph 3: “Individual tasks from the same task queue can be assigned to different accelerators”; page 1, column 1, paragraph 2: “considering physical details, including the number and type of accelerators”; and page 5, column 1, paragraph 5: “support different accelerator types… Arax supports NVIDIA GPUs using CUDA, Intel Altera FPGAs using OpenCL, and AMD GPUs using ROCm”) within the heterogeneous processor (see e.g. page 2, column 1, paragraph 1: “heterogeneous setups with multiple accelerators”). Pavlidakis discloses utilizing an Arax API (i.e. invoking the functions of the Arax API) to assign application tasks from a task queue to different accelerators within a heterogenous processing environment. Note that, GPUs, FPGAs, and CPUs inherently disclose corresponding GPU, FPGA, and CPU circuitry. With respect to claim 2, Pavlidakis teaches: The processor of claim 1, wherein the API is to cause the circuitry to indicate the queue to which the one or more first operations and the one or more second operations are to be submitted (see e.g. 
page 3, column 1, paragraph 6: “use the Arax API to access available accelerators… assigns dynamically and asynchronously application tasks to accelerators”; page 3, column 2, paragraph 4: “Task queues: Applications issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues”; and page 3, column 2, paragraph 4: “Task queues. Applications issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues”; and page 4, column 2, paragraph 3: “Individual tasks from the same task queue can be assigned to different accelerators”), wherein the queue is to be performed, at least in part, by the first type of accelerator and the second type of accelerator within the heterogeneous processor (see e.g. page 3, column 2, paragraph 4: “issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues… assigning them to one or more accelerators at runtime (§2.2), while ensuring that asynchronous tasks will be executed in-order”; and page 4, column 2, paragraph 3: “Individual tasks from the same task queue can be assigned to different accelerators”). With respect to claim 3, Pavlidakis teaches: The processor of claim 1, wherein the API is to cause the circuitry to indicate, to a parallel computing environment (see e.g. page 4, column 1, paragraph 2: “spatial sharing mechanism of Arax is based on streams/command queues and host-threads (Arax accelerator threads). In particular, to execute kernels in parallel, the server spawns multiple threads per physical accelerator”; and page 10, column 2, paragraph 1: “Arax allows applications to execute in parallel in the FPGA”), the queue (see e.g. page 2, column 2, paragraph 2: “issue tasks to GPU streams and FPGA command queues”; and page 3, column 1, paragraph 5: “Applications use the Arax API to access available accelerators, regardless of their types. 
Applications create task queues and issue tasks… assigns dynamically and asynchronously application tasks to accelerators, managing accelerator streams and command queues”), and the one or more first operations and the one or more second operations are to be submitted, in response to the API, to the queue (see e.g. page 3, column 2, paragraph 4: “Applications issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues”; and page 3, column 1, paragraph 5: “Applications use the Arax API to access available accelerators, regardless of their types. Applications create task queues and issue tasks… assigns dynamically and asynchronously application tasks to accelerators, managing accelerator streams and command queues”). With respect to claim 5, Pavlidakis teaches: The processor of claim 1, wherein the API is to receive, as input, a list of the one or more first operations (see e.g. page 3, column 2, paragraph 4: “issue tasks to task queues”) to be performed by the first type of accelerator within the heterogenous processor in response to one or more first instructions (see e.g. page 3, column 2, paragraph 4: “Applications issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues… Arax is responsible for assigning them to one or more accelerators at runtime (§2.2), while ensuring that asynchronous tasks will be executed in-order”, paragraph 2: “A compute task is an accelerator kernel, while a transfer task is a data transfer between the host and the accelerator. 
Both tasks are executed”; and page 5, column 1, paragraph 5: “support different accelerator types… Arax supports NVIDIA GPUs using CUDA, Intel Altera FPGAs using OpenCL, and AMD GPUs using ROCm”), and the API is further to receive, as input, a list of the one or more second operations to be performed by the second type of accelerator within the heterogeneous processor in response to one or more second instructions (see e.g. page 3, column 2, paragraph 4: “Applications issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues… Arax is responsible for assigning them to one or more accelerators at runtime (§2.2), while ensuring that asynchronous tasks will be executed in-order”, paragraph 2: “A compute task is an accelerator kernel, while a transfer task is a data transfer between the host and the accelerator. Both tasks are executed”; and page 4, column 2, paragraph 3: “Individual tasks from the same task queue can be assigned to different accelerators”). With respect to claim 6, Pavlidakis teaches: The processor of claim 1, wherein the queue comprises a stream of instructions (see e.g. page 3, column 2, paragraph 4: “Applications issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues… Each task queue holds tasks with dependencies” and paragraph 2: “Tasks”), and one or more first instructions are to be submitted, in response to the API, to the stream of instructions to be performed (see e.g. 
page 2, column 2, paragraph 2: “issue tasks to GPU streams and FPGA command queues”; page 3, column 1, paragraph 5: “Applications create task queues and issue tasks… assigns dynamically and asynchronously application tasks to accelerators, managing accelerator streams and command queues”; and page 3, column 2, paragraph 4: “Applications issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues”), at least in part, by the first type of accelerator within the heterogenous processor (see e.g. page 3, column 1, paragraph 6: “use the Arax API to access available accelerators… assigns dynamically and asynchronously application tasks to accelerators”), and one or more second instructions are to be submitted, in response to the API, to the stream of instructions to be performed (see e.g. page 2, column 2, paragraph 2: “issue tasks to GPU streams and FPGA command queues”; page 3, column 2, paragraph 4: “Applications issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues”), at least in part, by the second type of accelerator within the heterogenous processor (see e.g. page 4, column 2, paragraph 3: “Individual tasks from the same task queue can be assigned to different accelerators”; and page 5, column 1, paragraph 5: “support different accelerator types… Arax supports NVIDIA GPUs using CUDA, Intel Altera FPGAs using OpenCL, and AMD GPUs using ROCm”). With respect to claim 8: Claim 8 is directed to a system comprising one or more processors implementing active functions corresponding to the functions implemented by the processor disclosed in claim 1; please see the rejection directed to claim 1 above which also covers the limitations recited in claim 8. With respect to claim 9, Pavlidakis teaches: The system of claim 8, wherein the API is to cause the one or more processors to indicate one or more first portions of the queue (see e.g. 
page 2, column 2, paragraph 2: “GPU streams and FPGA command queues”; and page 3, column 2, paragraph 4: task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues”) comprising the one or more first operations to be performed by the first type of accelerator within the heterogeneous processor (see e.g. page 3, column 1, paragraph 5: “assigns dynamically and asynchronously application tasks to accelerators”; page 3, column 2, paragraph 4: “issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues… assigning them to one or more accelerators at runtime (§2.2), while ensuring that asynchronous tasks will be executed in-order”; and page 5, column 1, paragraph 5: “support different accelerator types… Arax supports NVIDIA GPUs using CUDA, Intel Altera FPGAs using OpenCL, and AMD GPUs using ROCm”) and one or more second portions of the queue comprising the one or more second operations to be performed by the second type of accelerator within the heterogeneous processor (see e.g. page 4, column 2, paragraph 3: “Individual tasks from the same task queue can be assigned to different accelerators”; page 5, column 1, paragraph 4: “After the selection of the physical accelerator, the thread of that accelerator gets a task from the task queue” and paragraph 5: “support different accelerator types… Arax supports NVIDIA GPUs using CUDA, Intel Altera FPGAs using OpenCL, and AMD GPUs using ROCm”). With respect to claims 10, 12, and 13: Claims 10, 12, and 13 are directed to a system comprising one or more processors implementing active functions corresponding to the functions implemented by the one or more processors disclosed in claims 3, 5, and 6, respectively; please see the rejections directed to claims 3, 5, and 6 above which also cover the limitations recited in claims 10, 12, and 13. 
With respect to claim 14, Pavlidakis teaches: The system of claim 8, wherein the API is to cause the one or more processors to indicate one or more first portions of the queue (see e.g. page 2, column 2, paragraph 2: “GPU streams and FPGA command queues”; and page 3, column 2, paragraph 4: task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues”) to be performed by the first type of accelerator within the heterogeneous processor (see e.g. page 3, column 1, paragraph 5: “assigns dynamically and asynchronously application tasks to accelerators”; page 3, column 2, paragraph 4: “issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues… assigning them to one or more accelerators at runtime (§2.2), while ensuring that asynchronous tasks will be executed in-order”; and page 5, column 1, paragraph 5: “support different accelerator types… Arax supports NVIDIA GPUs using CUDA, Intel Altera FPGAs using OpenCL, and AMD GPUs using ROCm”), and the API is further to cause the one or more processors to indicate one or more second portions of the queue to be performed by the second type of accelerator within the heterogeneous processor (see e.g. page 4, column 2, paragraph 3: “Individual tasks from the same task queue can be assigned to different accelerators”; page 5, column 1, paragraph 4: “After the selection of the physical accelerator, the thread of that accelerator gets a task from the task queue” and paragraph 5: “support different accelerator types… Arax supports NVIDIA GPUs using CUDA, Intel Altera FPGAs using OpenCL, and AMD GPUs using ROCm”). With respect to claim 15, Pavlidakis teaches: A method comprising: receiving an invocation of application programming interface ("API") (see e.g. page 3, column 1, paragraph 6: “use the Arax API to access available accelerators”), the invocation including parameters (see e.g. 
page 5, column 1, paragraph 5: “When receiving a compute task, the accelerator thread uses the kernel name—passed as a task parameter— to find the appropriate kernel program and loads it to the physical accelerator for execution”) indicating at least a queue of operations (see e.g. page 3, column 2, paragraph 4: “Task queues: Applications issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues”), one or more first operations (see e.g. page 3, column 2, paragraph 2: “Tasks. A task can be either a compute or a transfer task… compute task takes the kernel name and its corresponding arguments as parameters, i.e., inputs, outputs, and arguments required from a kernel… parameters for a transfer task include the task buffers provided by Arax and any data from the application address space”), one or more second operations (see e.g. page 3, column 2, paragraph 2: “Tasks. A task can be either a compute or a transfer task… compute task takes the kernel name and its corresponding arguments as parameters, i.e., inputs, outputs, and arguments required from a kernel… parameters for a transfer task include the task buffers provided by Arax and any data from the application address space”), a first type of accelerator (see e.g. page 3, column 1, paragraph 6: “use the Arax API to access available accelerators… assigns dynamically and asynchronously application tasks to accelerators”; page 1, column 1, paragraph 2: “considering physical details, including the number and type of accelerators”) within a heterogenous processor (see e.g. page 2, column 1, paragraph 1: “heterogeneous setups with multiple accelerators”; and page 3, column 1, paragraph 1: “a mechanism for spatial sharing of heterogeneous accelerators”), and a second type of accelerator within the heterogenous processor (see e.g. 
page 4, column 2, paragraph 3: “Individual tasks from the same task queue can be assigned to different accelerators”; page 1, column 1, paragraph 2: “considering physical details, including the number and type of accelerators”; and page 5, column 1, paragraph 5: “support different accelerator types… Arax supports NVIDIA GPUs using CUDA, Intel Altera FPGAs using OpenCL, and AMD GPUs using ROCm”); and in response to the API invocation, submitting the one or more first operations into the queue to be performed by the first type of accelerator (see e.g. page 3, column 1, paragraph 6: “use the Arax API to access available accelerators… assigns dynamically and asynchronously application tasks to accelerators”; and page 3, column 2, paragraph 4: “Task queues: Applications issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues”), and submitting the one or more second operations into the queue to be performed by the second type of accelerator (see e.g. page 3, column 1, paragraph 6: “use the Arax API to access available accelerators… assigns dynamically and asynchronously application tasks to accelerators”; and page 4, column 2, paragraph 3: “Individual tasks from the same task queue can be assigned to different accelerators”). With respect to claim 16: Claim 16 is directed to a method corresponding to the functions implemented by the one or more processors disclosed in claim 9; please see the rejection directed to claim 9 above which also covers the limitations recited in claim 16. With respect to claim 17, Pavlidakis teaches: The method of claim 15, further comprising indicating, in response to the API invocation, the queue (see e.g. page 2, column 2, paragraph 2: “GPU streams and FPGA command queues”; and page 3, column 2, paragraph 4: task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues”) to a parallel computing environment (see e.g. 
page 4, column 1, paragraph 2: “spatial sharing mechanism of Arax is based on streams/command queues and host-threads (Arax accelerator threads). In particular, to execute kernels in parallel, the server spawns multiple threads per physical accelerator”; and page 10, column 2, paragraph 1: “Arax allows applications to execute in parallel in the FPGA”), wherein the stream is to be performed, in part, by the first type of accelerator within the heterogeneous processor (see e.g. page 3, column 1, paragraph 5: “assigns dynamically and asynchronously application tasks to accelerators”; and page 3, column 2, paragraph 4: “issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues… assigning them to one or more accelerators at runtime (§2.2), while ensuring that asynchronous tasks will be executed in-order”) and, in part, by the second type of accelerator within the heterogeneous processor (see e.g. page 4, column 2, paragraph 3: “Individual tasks from the same task queue can be assigned to different accelerators”; and page 5, column 1, paragraph 5: “support different accelerator types… Arax supports NVIDIA GPUs using CUDA, Intel Altera FPGAs using OpenCL, and AMD GPUs using ROCm”). With respect to claims 19 and 20: Claims 19 and 20 are directed to a method corresponding to the functions implemented by the processor disclosed in claims 5 and 6, respectively; please see the rejection directed to claims 5 and 6 above which also cover the limitations recited in claims 19 and 20. Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Pavlidakis in view of Cook et al. (US 2017/0308504 A1; hereinafter Cook). With respect to claim 4, Pavlidakis teaches: The processor of claim 1, the queue includes a stream of one or more first instructions (see e.g. 
Pavlidakis, page 3, column 2, paragraph 4: “Applications issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues… Each task queue holds tasks with dependencies” and paragraph 2: “Tasks”) to be performed by the first type of accelerator (see e.g. Pavlidakis, page 2, column 2, paragraph 2: “issue tasks to GPU streams and FPGA command queues”; page 3, column 1, paragraph 5: “assigns dynamically and asynchronously application tasks to accelerators”; and page 3, column 2, paragraph 4: “Applications issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues”) and one or more second instructions to be performed by the second type of accelerator within the heterogenous processor (see e.g. Pavlidakis, page 4, column 2, paragraph 3: “Individual tasks from the same task queue can be assigned to different accelerators”; and page 3, column 2, paragraph 4: “Applications issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues”), and … the stream (see e.g. Pavlidakis, page 2, column 2, paragraph 2: “GPU streams and FPGA command queues”; and page 3, column 2, paragraph 4: task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues”) to store the one or more first instructions and the one or more second instructions (see e.g. Pavlidakis, page 2, column 2, paragraph 2: “issue tasks to GPU streams and FPGA command queues”; page 3, column 1, paragraph 5: “Applications use the Arax API to access available accelerators, regardless of their types. Applications create task queues and issue tasks… assigns dynamically and asynchronously application tasks to accelerators, managing accelerator streams and command queues”; and page 3, column 2, paragraph 4: “Task queues. 
Applications issue tasks to task queues, similar to existing programming models, e.g., CUDA/ROCm streams and OpenCL command queues”). Pavlidakis does not but Cook teaches: the API (see e.g. Cook, paragraph 45: “output API layer”) is to receive, as input, an identifier indicating (see e.g. Cook, paragraph 45: “each tuple of data and append/tag 312 an attribute that identifies which parallel stream it was from (e.g., stream 1, stream 2, stream 3)… process the tuples as they arrive, then emit them to the output API layer. The output API layer (e.g., via OP process 10) may route the tuples back to their correct parallel stream using the stream identifying attribute”). Pavlidakis and Cook are analogous art because they are in the same field of endeavor: managing stream processing associated with accelerators. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Pavlidakis with the teachings of Cook. The motivation/suggestion would be to improve the stream processing accuracy and reliability. With respect to claim 11: Claim 11 is directed to a system comprising one or more processors implementing active functions corresponding to the functions implemented by the processor disclosed in claim 4; please see the rejection directed to claim 4 above which also covers the limitations recited in claim 11. With respect to claim 18: Claim 18 is directed to a method corresponding to the functions implemented by the processor disclosed in claim 4; please see the rejection directed to claim 4 above which also covers the limitations recited in claim 18. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Pavlidakis in view of McClure (US 2020/0341812 A1). With respect to claim 7, Pavlidakis teaches: The processor of claim 1, wherein the first type of accelerator is a Graphics Processing Unit (GPU) (see e.g. 
Pavlidakis, page 5, column 1, paragraph 5: “support different accelerator types… Arax supports NVIDIA GPUs”) and Pavlidakis does not but McClure teaches: the second type of accelerator is an application-specific accelerator (see e.g. McClure, paragraph 18: “application specific integrated circuit (ASIC) may be able to be used a compute accelerator 203”). Pavlidakis and McClure are analogous art because they are in the same field of endeavor: operational management of hardware accelerators. Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to modify Pavlidakis with the teachings of McClure. The motivation/suggestion would be to increase the supported types of accelerators; thus improving the overall system flexibility. Response to Arguments Applicant's arguments filed 08/21/2025 have been fully considered but they are not persuasive. In detail: (i) Regarding claim 1, Applicant argues that Pavlidakis fails to teach the limitations “submit one or more first operations into a queue to be performed by a first type of accelerator within a heterogeneous processor and to submit one or more second operations into the queue to be performed by a second type of accelerator within the heterogeneous processor” as recited (Remarks, pages 13-14). However, note that Pavlidakis discloses an Arax API to access and operate different types of accelerators, such as NVIDIA GPUs, Intel Altera FPGAs, AMD GPUs, etc. within a heterogeneous computing environment and to assign tasks to these accelerators from a task queue. More specifically, Pavlidakis discloses applications submitting tasks into a task queue (see e.g. page 3, column 2, paragraph 4: “Task queues: Applications issue tasks to task queues”). These tasks are then distributed to accelerators to be performed by the accelerators (see e.g. 
page 3, column 1, paragraph 6: “use the Arax API to access available accelerators… assigns dynamically and asynchronously application tasks to accelerators”). Pavlidakis further discloses utilizing different types of accelerators, such as NVIDIA GPUs using CUDA, Intel Altera FPGAs using OpenCL, AMD GPUs, etc., and distributing tasks from a task queue to different accelerators (see e.g. page 4, column 2, paragraph 3: “Individual tasks from the same task queue can be assigned to different accelerators”; and page 5, column 1, paragraph 5: “support different accelerator types… Arax supports NVIDIA GPUs using CUDA, Intel Altera FPGAs using OpenCL, and AMD GPUs using ROCm”). That is, Pavlidakis discloses submitting tasks into a task queue to be performed by different types of accelerators. Therefore, Pavlidakis teaches the limitations “submit one or more first operations into a queue to be performed by a first type of accelerator within a heterogeneous processor and to submit one or more second operations into the queue to be performed by a second type of accelerator within the heterogeneous processor” as recited in claim 1, and the Examiner maintains the rejection directed to claim 1. For more details, please see the corresponding rejection above. (ii) Applicant’s arguments with respect to claims 2-20 are fully considered; however, in view of the above discussion (i), they are not found to be persuasive. Consequently, the Examiner maintains the rejections directed to claims 2-20. For details, please see the corresponding rejections above. CONCLUSION The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Krishnamurthy et al. (US 2009/0217275 A1) discloses distributing works stored in queues to different accelerators (see paragraph 31). Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). 
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Contact Information Any inquiry concerning this communication or earlier communications from the examiner should be directed to Umut Onat whose telephone number is (571)270-1735. The examiner can normally be reached M-Th 9:00-7:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kevin L Young can be reached at (571) 270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /UMUT ONAT/Primary Examiner, Art Unit 2194
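The claim-1 limitation this rejection turns on — one API invocation submitting operations for two different accelerator types into a single shared queue — can be illustrated with a short sketch. All names here (`submit_ops`, `AcceleratorType`, `Operation`) are hypothetical illustrations of the claimed pattern, not the applicant's implementation and not the Arax API:

```python
from collections import deque
from dataclasses import dataclass
from enum import Enum

class AcceleratorType(Enum):
    GPU = "gpu"    # e.g. a first type of accelerator
    FPGA = "fpga"  # e.g. a second type of accelerator

@dataclass
class Operation:
    name: str
    target: AcceleratorType  # which accelerator type should run this op

def submit_ops(queue: deque, first_ops, second_ops) -> None:
    """One API invocation places both groups of operations into the same
    queue; a runtime would later dispatch each entry to an accelerator
    of its target type within the heterogeneous processor."""
    queue.extend(first_ops)
    queue.extend(second_ops)

# Usage: both accelerator types draw work from the single shared queue.
q = deque()
submit_ops(q,
           [Operation("matmul", AcceleratorType.GPU)],
           [Operation("fft", AcceleratorType.FPGA)])
gpu_work = [op for op in q if op.target is AcceleratorType.GPU]
fpga_work = [op for op in q if op.target is AcceleratorType.FPGA]
print(len(q), len(gpu_work), len(fpga_work))  # 2 1 1
```

The dispute in the Response to Arguments is whether Pavlidakis's task queues, which Arax drains onto dynamically chosen accelerators, anticipate this single-queue, multi-target pattern.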

Prosecution Timeline

Nov 28, 2022
Application Filed
May 17, 2025
Non-Final Rejection — §102, §103
Jun 16, 2025
Interview Requested
Jun 25, 2025
Applicant Interview (Telephonic)
Jun 26, 2025
Examiner Interview Summary
Aug 21, 2025
Response Filed
Nov 24, 2025
Final Rejection — §102, §103
Feb 19, 2026
Applicant Interview (Telephonic)
Feb 19, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602271
NON-BLOCKING RING EXCHANGE ALGORITHM
2y 5m to grant Granted Apr 14, 2026
Patent 12572397
REAL-TIME EVENT DATA REPORTING ON EDGE COMPUTING DEVICES
2y 5m to grant Granted Mar 10, 2026
Patent 12572645
SYSTEMS AND METHODS FOR MANAGING SETTINGS BASED UPON USER PERSONA USING HETEROGENEOUS COMPUTING PLATFORMS
2y 5m to grant Granted Mar 10, 2026
Patent 12566647
System And Method for Implementing Micro-Application Environments
2y 5m to grant Granted Mar 03, 2026
Patent 12547481
SYSTEMS, METHODS, AND DEVICES FOR ACCESSING A COMPUTATIONAL DEVICE KERNEL
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 99% (+28.7%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 523 resolved cases by this examiner. Grant probability derived from career allow rate.
