Prosecution Insights
Last updated: April 19, 2026
Application No. 17/809,300

Apparatus, Device, Method and Computer Program for Controlling the Execution of a Computer Program by a Computer System

Final Rejection — §103

Filed: Jun 28, 2022
Examiner: NAHRA, SELENA SABAH
Art Unit: 2192
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 2 (Final)

Grant Probability: 75% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (12 granted / 16 resolved) — above average, +20.0% vs TC avg
Interview Lift: +66.7% on resolved cases with interview (strong)
Typical Timeline: 3y 1m avg prosecution; 12 applications currently pending
Career History: 28 total applications across all art units

Statute-Specific Performance

§101: 22.0% (-18.0% vs TC avg)
§103: 42.4% (+2.4% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)

Tech Center averages are estimates; based on career data from 16 resolved cases.
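As a sanity check on the figures above, the dashboard numbers are simple ratios; the snippet below recomputes them from the raw counts shown in this report (nothing here comes from outside the report):

```python
# Recompute the dashboard figures from the raw counts shown above.
granted, resolved = 12, 16          # "12 granted / 16 resolved"
career_allow_rate = granted / resolved * 100
print(f"Career allow rate: {career_allow_rate:.0f}%")   # 75%

# "+20.0% vs TC avg" implies a Tech Center career average of 55%.
implied_tc_career_avg = career_allow_rate - 20.0
print(f"Implied TC career average: {implied_tc_career_avg:.0f}%")  # 55%

# Statute-specific rates and their deltas vs the TC average estimate.
statute_rates = {"§101": 22.0, "§103": 42.4, "§102": 13.6, "§112": 19.5}
statute_deltas = {"§101": -18.0, "§103": +2.4, "§102": -26.4, "§112": -20.5}
for statute, rate in statute_rates.items():
    implied_avg = rate - statute_deltas[statute]
    print(f"{statute}: {rate}% (implied TC avg estimate {implied_avg:.1f}%)")
```

Note that all four statute-specific deltas imply the same Tech Center average estimate (40.0%), consistent with a single per-TC baseline behind the chart.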

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

In view of Applicant's amendments, the objection to claims is withdrawn. The amendment integrates the abstract idea into a practical application by imposing a meaningful limit on the abstract idea. In view of Applicant's amendments, the rejection under 35 U.S.C. § 101 is withdrawn.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 20, and 23-25 are rejected under 35 U.S.C. 103 as being unpatentable over Potkonjak (U.S. Patent Application Publication No. US 20160232127 A1) in view of Seo (U.S. Patent Application Publication No. US 20190180028 A1).
With regard to claim 1, Potkonjak discloses: An apparatus for controlling execution of a computer program by a computer system comprising two or more different Processing Units (XPUs) (“For instance, CMP 200 may include a graphene-containing processor core 201 and two or more other processor cores 202 that include either no graphene-containing computing elements or relatively fewer graphene-containing computing elements than graphene-containing processor core 201.”, para [0027], fig 2), the apparatus comprising interface circuitry, machine-readable instructions and processing circuitry to execute the machine-readable instructions to (“FIG. 8 is a block diagram illustrating an example computing device 800 that is arranged for managing programmable logic circuits in a chip multiprocessor, in accordance with at least some embodiments of the present disclosure. In a very basic configuration 802, computing device 800 typically includes one or more processors 804 and a system memory 806. A memory bus 808 may be used for communicating between processor 804 and system memory 806.”, para [0082], “Computing device 800 may also include an interface bus 840 for facilitating communication from various interface devices (e.g., output devices 842, peripheral interfaces 844, and communication devices 846) to basic configuration 802 via bus/interface controller 830.”, para [0087], “However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and or firmware would be well 
within the skill of one of skill in the art in light of this disclosure.”, para [0092], fig 8): obtain the computer program, wherein at least a portion of the computer program is based on one or more compute kernels to be executed by the two or more different XPUs (“As noted above, CMP 300 can use a different group of processors to execute specific portions of a software application. Thus, a software application being executed by CMP 300 may be divided into distinct portions, such as blocks of code, where each block can be executed by a group of processors selected to perform with relatively optimal time delay and/or leakage energy.”, para [0045], “For instance, CMP 200 may include a graphene-containing processor core 201 and two or more other processor cores 202 that include either no graphene-containing computing elements or relatively fewer graphene-containing computing elements than graphene-containing processor core 201.”, para [0027], “For example, one such group of the other processor cores 202 in CMP 200 may comprise graphics processing units (GPUs).”, para [0032], “graphene-containing processor core” 201, fig 2, “other processor core” 202, fig 2); determine, for each XPU, an energy-related metric for executing the one or more compute kernels on the respective XPU (“In block 601, the task manager or other instruction-scheduling entity associated with the CMP determines at least one of a time cost, an energy cost, a thermal cost and/or other cost(s) for each of the multiple processor groups to execute a first block of instructions from the software application.”, para [0072], “In block 603, the task manager or other instruction-scheduling entity determines at least one of a time cost, an energy cost, a thermal cost and/or other cost(s) for each of the multiple processor groups to execute a second block of instructions from the application.”, para [0074], fig 6, fig 2) assign the execution of the one or more compute kernels to the two or more different XPUs 
based on the respective energy-related metric (“In block 602, the task manager or other instruction-scheduling entity selects a first of the multiple processor groups to execute the first block of instructions. Generally, the selection is based on at least one of the determined time, energy cost, thermal cost, and/or other cost(s) determined in block 602.”, para [0072], “In block 604, the task manager or other instruction-scheduling entity selects a second of the multiple processor groups to execute the second block of instructions”, para [0075], fig 6, fig 2). Potkonjak does not disclose; however, Seo discloses: execute the computer program in a sandboxed evaluation environment (“A security environment 110 may be implemented on the processor 101. The processor 101 may cause a program to be executed inside the security environment 110 which cannot access external resources. That is, the security environment 110 may be implemented in a sandbox manner.”, para [0110]); determine, (“As another type of dynamic analysis, an executable code of an encryption algorithm may be executed in a security environment, and data related to power consumption of a CPU during execution of the executable code may be extracted as the feature data.”, para [0069], “That is, the security environment 110 may be implemented in a sandbox manner.”, para [0110]). Both the systems of Potkonjak and Seo deal with executing computer programs. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Potkonjak in view of Seo to improve security.

With regard to claim 2, Potkonjak as modified discloses the apparatus of claim 1.
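For orientation only: the claim 1 scheme mapped above — determine an energy-related metric for each compute kernel on each XPU, then assign each kernel to an XPU based on that metric — can be sketched as below. The kernel names, XPU names, and cost values are hypothetical illustrations, not taken from Potkonjak or Seo:

```python
# Minimal sketch of energy-metric-based kernel assignment across
# heterogeneous processing units (XPUs). All figures are hypothetical,
# e.g. as they might be measured in a sandboxed evaluation run.
energy_cost = {
    "matmul_kernel": {"CPU": 9.0, "GPU": 2.5, "FPGA": 4.0},
    "decode_kernel": {"CPU": 1.2, "GPU": 3.0, "FPGA": 0.8},
}

def assign_kernels(costs):
    """Assign each kernel to the XPU with the lowest energy-related metric."""
    return {kernel: min(xpu_costs, key=xpu_costs.get)
            for kernel, xpu_costs in costs.items()}

assignment = assign_kernels(energy_cost)
print(assignment)  # {'matmul_kernel': 'GPU', 'decode_kernel': 'FPGA'}
```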
Potkonjak further discloses: wherein the energy-related metric comprises at least one of an estimated power consumption and an estimated thermal impact of the execution of the respective compute kernel on the respective XPU (“In block 601, the task manager or other instruction-scheduling entity associated with the CMP determines at least one of a time cost, an energy cost, a thermal cost and/or other cost(s) for each of the multiple processor groups to execute a first block of instructions from the software application.”, para [0072], fig 6, fig 2). With regard to claim 3, Potkonjak as modified discloses the apparatus of claim 1. Potkonjak further discloses: wherein the machine-readable instructions comprise instructions to assign the execution of the one or more compute kernels such that an energy-related goal is achieved (“Dynamic programming process 400 facilitates the execution of a software program using the various groups of processors in CMP 300 in a way that satisfies one or more specified operational constraints. For example, in executing a software program, dynamic programming process 400 can be used to minimize or otherwise reduce energy cost or time delay associated with executing the software program. Alternatively or additionally, dynamic programming process 400 can be used to minimize energy cost for executing the software application while completing the execution of the software program in less than a specified maximum time period.”, para [0047], fig 4). With regard to claim 4, Potkonjak as modified discloses the apparatus of claim 3. Potkonjak further discloses: wherein the energy-related goal is pre-defined, or wherein the energy-related goal is defined by a service-level agreement associated with execution of the computer program (“Dynamic programming process 400 facilitates the execution of a software program using the various groups of processors in CMP 300 in a way that satisfies one or more specified operational constraints. 
For example, in executing a software program, dynamic programming process 400 can be used to minimize or otherwise reduce energy cost or time delay associated with executing the software program. Alternatively or additionally, dynamic programming process 400 can be used to minimize energy cost for executing the software application while completing the execution of the software program in less than a specified maximum time period.”, para [0047], fig 4). With regard to claim 20, Potkonjak as modified discloses the apparatus of claim 1. Potkonjak further discloses: wherein the two or more XPUs selected from the group consisting of a Central Processing Unit, CPU, a Graphics Processing Unit, GPU, a Field-Programmable Gate Array, FPGA, an Artificial Intelligence, Al, accelerator, and a communication processing offloading unit (“For instance, CMP 200 may include a graphene-containing processor core 201 and two or more other processor cores 202 that include either no graphene-containing computing elements or relatively fewer graphene-containing computing elements than graphene-containing processor core 201.”, para [0027], “For example, one such group of the other processor cores 202 in CMP 200 may comprise graphics processing units (GPUs).”, para [0032], “graphene-containing processor core” 201, fig 2, “other processor core” 202, fig 2, “Processor 804 may include programmable logic circuits, such as, without limitation, FPGA, patchable ASIC, CPLD, and others. Processor 804 may be similar to CMP 200 or in FIG. 2 or CMP 300 in FIG. 3.”, para [0083], fig 8). With regard to claim 23, Potkonjak as modified discloses the apparatus according to claim 1. 
Potkonjak further discloses: wherein the energy-related metric is based on the one or more compute kernels being active and based on the one or more compute kernels being idle (“In block 601, the task manager or other instruction-scheduling entity associated with the CMP determines at least one of a time cost, an energy cost, a thermal cost and/or other cost(s) for each of the multiple processor groups to execute a first block of instructions from the software application.”, para [0072], “In block 603, the task manager or other instruction-scheduling entity determines at least one of a time cost, an energy cost, a thermal cost and/or other cost(s) for each of the multiple processor groups to execute a second block of instructions from the application.”, para [0074], fig 6, fig 2).

With regard to claim 24, Potkonjak discloses: A method for controlling execution of a computer program by a computer system comprising two or more different Processing Units (XPUs) (“In accordance with at least some embodiments of the present disclosure, a method to schedule instructions to be processed by a chip multiprocessor that includes graphene-containing computing elements arranged in multiple processor groups comprises determining at least one of a time cost, an energy cost, and a thermal cost for one or more of the multiple processor groups to execute a first block of instructions from an application and determining at least one of a time cost, an energy cost, and a thermal cost for one or more of the multiple processor groups to execute a second block of instructions from the application.”, para [0004], “For instance, CMP 200 may include a graphene-containing processor core 201 and two or more other processor cores 202 that include either no graphene-containing computing elements or relatively fewer graphene-containing computing elements than graphene-containing processor core 201.”, para [0027], “For example, one such group of the other processor cores 202 in CMP 200 may
comprise graphics processing units (GPUs).”, para [0032], “graphene-containing processor core” 201, fig 2, “other processor core” 202, fig 2), the method comprising: obtain the computer program, wherein at least a portion of the computer program is based on one or more compute kernels to be executed by the two or more different XPUs (“As noted above, CMP 300 can use a different group of processors to execute specific portions of a software application. Thus, a software application being executed by CMP 300 may be divided into distinct portions, such as blocks of code, where each block can be executed by a group of processors selected to perform with relatively optimal time delay and/or leakage energy.”, para [0045], “For instance, CMP 200 may include a graphene-containing processor core 201 and two or more other processor cores 202 that include either no graphene-containing computing elements or relatively fewer graphene-containing computing elements than graphene-containing processor core 201.”, para [0027], “For example, one such group of the other processor cores 202 in CMP 200 may comprise graphics processing units (GPUs).”, para [0032], “graphene-containing processor core” 201, fig 2, “other processor core” 202, fig 2); determine, for each XPU, an energy-related metric for executing the one or more compute kernels on the respective XPU (“In block 601, the task manager or other instruction-scheduling entity associated with the CMP determines at least one of a time cost, an energy cost, a thermal cost and/or other cost(s) for each of the multiple processor groups to execute a first block of instructions from the software application.”, para [0072], “In block 603, the task manager or other instruction-scheduling entity determines at least one of a time cost, an energy cost, a thermal cost and/or other cost(s) for each of the multiple processor groups to execute a second block of instructions from the application.”, para [0074], fig 6, fig 2) assign the execution 
of the one or more compute kernels to the two or more different XPUs based on the respective energy-related metric (“In block 602, the task manager or other instruction-scheduling entity selects a first of the multiple processor groups to execute the first block of instructions. Generally, the selection is based on at least one of the determined time, energy cost, thermal cost, and/or other cost(s) determined in block 602.”, para [0072], “In block 604, the task manager or other instruction-scheduling entity selects a second of the multiple processor groups to execute the second block of instructions”, para [0075], fig 6, fig 2). Potkonjak does not disclose; however, Seo discloses: execute the computer program in a sandboxed evaluation environment (“A security environment 110 may be implemented on the processor 101. The processor 101 may cause a program to be executed inside the security environment 110 which cannot access external resources. That is, the security environment 110 may be implemented in a sandbox manner.”, para [0110]); determine, (“As another type of dynamic analysis, an executable code of an encryption algorithm may be executed in a security environment, and data related to power consumption of a CPU during execution of the executable code may be extracted as the feature data.”, para [0069], “That is, the security environment 110 may be implemented in a sandbox manner.”, para [0110]). Both the systems of Potkonjak and Seo deal with executing computer programs. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Potkonjak in view of Seo to improve security.

With regard to claim 25, Potkonjak as modified discloses the method of claim 24.
Potkonjak further discloses: A non-transitory machine-readable storage medium including program code (“non-transitory computer readable medium 708”, para [0081], fig 7, “Example computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.”, para [0085]).

Claims 6 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Potkonjak in view of Seo as applied to claim 1 above, and further in view of Liu et al. (U.S. Patent Application Publication No. US 20080313640 A1, hereinafter “Liu”).

With regard to claim 6, Potkonjak as modified discloses the apparatus according to claim 1. Potkonjak as modified does not disclose; however, Liu discloses: wherein the machine-readable instructions comprise instructions to determine a task graph of the computer program, with the one or more compute kernels being part of the task graph (“an application specification 116 representing application 114a as one or more task graphs (each comprising a series of asynchronous tasks)”, para [0029], fig 1, “Furthermore, each of the asynchronous components in a task graph can be referred to as a task.”, para [0026], fig. 5), and to determine the energy-related metric based on the task graph (“Task/resource mapping specification(s) 212 task-to-processor mapping specification in particular) includes a list containing each task 602 in an application and a list containing each discrete processor operating mode 604 for each processor available on a computing platform responsible for executing the application.
In this regard, each task can be mapped to each discrete operating mode for each processor to provide mappings 606 between each task and each discrete processor operating mode.”, para [0051], fig 6, “By virtue of the fact that the operating mode parameters (and thus processing speed and power consumption costs) for each discrete operating mode, and each transition between modes, have been ascertained (at block 306), each potential combination of task/resource mappings can be identified as having a certain processing speed and power consumption cost. This allows the combination of mappings (i.e., processing option) associated with the lowest overall power consumption cost to be identified and selected by utilizing an optimization algorithm, such as algorithm/solver 222 above for example.”, para [0042], fig 3, “With respect to one or more task/resource mapping specifications 212, each combination of individual mappings (e.g., without limitation the task-to-processor mapping specification and task communication event-to-bus specification) can be scrutinized, in light of the information provided by one or more resource specifications 108 and application specification 116 to identify and select the combination of mappings associated with the lowest overall power consumption cost for executing all of the tasks in application 114a.”, para [0053]). Both the systems of Potkonjak and Liu deal with multi-processor systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Potkonjak as modified in view of Liu “to reduce a computing platform's power consumption when executing an application within a given amount of time.” (Liu, para [0005]).

With regard to claim 18, Potkonjak as modified discloses the apparatus according to claim 1.
Potkonjak as modified does not disclose; however, Liu discloses: wherein the machine-readable instructions comprise instructions to discover capabilities of the two or more XPUs of the computer system (“At block 306, operating mode parameters for each discrete mode are ascertained. In this regard, recall that each of the discrete modes ascertained at block 304 are defined by specific operational settings--such as a certain computing/execution speed with processors and a certain communication transfer speed with buses. As such, each discrete mode is associated with certain operating mode parameters, such as a processing speed parameter and power consumption parameter.”, para [0039], fig 3, fig 1), and to determine the energy-related metric and/or to assign the execution based on the discovered capabilities (“Each task-to-processor mapping reflects a potential discrete operating mode (for each processor) that can be used to execute a particular task in the application.”, para [0041], “By virtue of the fact that the operating mode parameters (and thus processing speed and power consumption costs) for each discrete operating mode, and each transition between modes, have been ascertained (at block 306), each potential combination of task/resource mappings can be identified as having a certain processing speed and power consumption cost. This allows the combination of mappings (i.e., processing option) associated with the lowest overall power consumption cost to be identified and selected by utilizing an optimization algorithm, such as algorithm/solver 222 above for example.”, para [0042], fig 3). Both the systems of Potkonjak and Liu deal with multi-processor systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Potkonjak as modified in view of Liu “to reduce a computing platform's power consumption when executing an application within a given amount of time.” (Liu, para [0005]).
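Liu's selection step as quoted above — enumerate task-to-processor/operating-mode mappings, then pick the lowest-power combination that still finishes within a specified maximum time — might look like the following sketch. The task names, operating modes, and cost figures are hypothetical assumptions, not Liu's data:

```python
from itertools import product

# Hypothetical per-mode parameters: (execution time in ms, energy in mJ)
# for each task on each discrete processor operating mode.
mode_params = {
    "taskA": {("CPU", "fast"): (10, 50), ("CPU", "slow"): (25, 20), ("GPU", "fast"): (5, 80)},
    "taskB": {("CPU", "fast"): (8, 40),  ("CPU", "slow"): (20, 15), ("GPU", "fast"): (4, 60)},
}
DEADLINE_MS = 40  # "completing the execution ... in less than a specified maximum time period"

def best_mapping(params, deadline):
    """Exhaustively search task-to-mode mappings; keep the lowest-energy
    combination whose total (serialized) runtime meets the deadline."""
    tasks = list(params)
    best = None
    for choice in product(*(params[t] for t in tasks)):
        time = sum(params[t][m][0] for t, m in zip(tasks, choice))
        energy = sum(params[t][m][1] for t, m in zip(tasks, choice))
        if time <= deadline and (best is None or energy < best[0]):
            best = (energy, dict(zip(tasks, choice)))
    return best

print(best_mapping(mode_params, DEADLINE_MS))
```

A real system would replace the exhaustive search with an optimization algorithm (Liu's "algorithm/solver 222"), but the objective — minimum power under a time constraint — is the same.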
With regard to claim 19, Potkonjak as modified discloses the apparatus according to claim 18. Potkonjak as modified does not disclose; however, Liu further discloses: wherein the capabilities comprise one or more of a compute capability, a memory capability, and an interconnect capability of the respective XPU (“At block 306, operating mode parameters for each discrete mode are ascertained. In this regard, recall that each of the discrete modes ascertained at block 304 are defined by specific operational settings--such as a certain computing/execution speed with processors and a certain communication transfer speed with buses. As such, each discrete mode is associated with certain operating mode parameters, such as a processing speed parameter and power consumption parameter.”, para [0039], fig 3, fig 1). Both the systems of Potkonjak and Liu deal with multi-processor systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Potkonjak as modified in view of Liu “to reduce a computing platform's power consumption when executing an application within a given amount of time.” (Liu, para [0005]).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Potkonjak in view of Seo and Liu, as applied to claim 6, and further in view of Crutchfield et al. (U.S. Patent Application Publication No. US 20070294512 A1, hereinafter “Crutchfield”).

With regard to claim 7, Potkonjak as modified discloses the apparatus according to claim 6.
Potkonjak as modified does not disclose; however, Crutchfield discloses: wherein the assigning the execution of the one or more compute kernels comprises re-partitioning the task graph, so that at least one of the one or more compute kernels is split into two or more compute kernels, with the two or more compute kernels being assigned to the two or more XPUs (“In some embodiments, the operations comprising the failed kernel are split into two kernels using a process different from the original scheduling process, such as graph partitioning.”, para [0034], “A compiled program is also referred to as a "compute kernel" that includes executable binary code for one of the processing elements of the parallel-processing computer system. Therefore, a compiled program sequence corresponds to a sequence of compute kernels, which may correspond to one or more processing elements.”, para [0045]). Both the systems of Potkonjak and Crutchfield deal with task scheduling. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Potkonjak as modified in view of Crutchfield because it “enables the execution of the same program on any types of parallel-processing computer system” (Crutchfield, para [0012]).

Claims 8 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Potkonjak in view of Seo as applied to claim 1 above, and further in view of Crutchfield et al. (U.S. Patent Application Publication No. US 20070294512 A1, hereinafter “Crutchfield”).

With regard to claim 8, Potkonjak as modified discloses the apparatus according to claim 1.
Potkonjak as modified does not disclose; however, Crutchfield discloses: wherein the machine-readable instructions comprise instructions to generate or re-generate the one or more compute kernels based on the assignment of the execution of the one or more compute kernels to the two or more XPUs (“In some embodiments, the operations comprising the failed kernel are split into two kernels using a process different from the original scheduling process, such as graph partitioning.”, para [0034], “A compiled program is also referred to as a "compute kernel" that includes executable binary code for one of the processing elements of the parallel-processing computer system. Therefore, a compiled program sequence corresponds to a sequence of compute kernels, which may correspond to one or more processing elements.”, para [0045]). Both the systems of Potkonjak and Crutchfield deal with task scheduling. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Potkonjak as modified in view of Crutchfield because it “enables the execution of the same program on any types of parallel-processing computer system” (Crutchfield, para [0012]).

With regard to claim 11, Potkonjak as modified discloses the apparatus according to claim 8.
Potkonjak as modified does not disclose; however, Crutchfield further discloses: wherein the machine-readable instructions comprise instructions to generate or re-generate a task graph of the computer program based on the assignment of the execution of the one or more compute kernels to the two or more XPUs, and to generate or re-generate the one or more compute kernels based on the task graph (“In some embodiments, if an initial attempt by the ProgGen 600 to compile or assemble a program sequence for a processing element fails because of e.g., hardware limitations, the ProgGen 600 may re-generate the program sequence with a set of tighter constraints on code fusion and/or program size.”, para [0340], “In some embodiments, the operations comprising the failed kernel are split into two kernels using a process different from the original scheduling process, such as graph partitioning.”, para [0340], “A compiled program is also referred to as a "compute kernel" that includes executable binary code for one of the processing elements of the parallel-processing computer system. Therefore, a compiled program sequence corresponds to a sequence of compute kernels, which may correspond to one or more processing elements.”, para [0045]). Both the systems of Potkonjak and Crutchfield deal with task scheduling. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Potkonjak as modified in view of Crutchfield because it “enables the execution of the same program on any types of parallel-processing computer system” (Crutchfield, para [0012]).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Potkonjak in view of Seo and Crutchfield, as applied to claim 8, and further in view of Varbanescu et al. (“Heterogeneous computing with accelerators: an overview with examples,” 2016 Forum on Specification and Design Languages (FDL), Bremen, Germany, 2016, hereinafter “Varbanescu”).
With regard to claim 9, Potkonjak as modified discloses the apparatus according to claim 8. Potkonjak as modified does not disclose; however, Varbanescu discloses: wherein the machine-readable instructions comprise instructions to generate or re-generate the one or more compute kernels based on a monitoring of an execution of the computer program in a sandboxed environment or by the two or more XPUs (“Dynamic partitioning, on the other hand, starts with a best-effort initial partition and keeps adjusting it, at run-time, based on observed behavior and the underlying assumption that the application is repetitive in its behavior. An illustration of the two approaches is presented in Figure 2”, page 4, left column, first paragraph, “CPU”, fig 2, “GPU” fig 2). Both the systems of Potkonjak and Varbanescu deal with program partitioning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Potkonjak as modified in view of Varbanescu to improve processor utilization.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Potkonjak in view of Seo and Crutchfield, as applied to claim 8, and further in view of Weber et al. (U.S. Patent Application Publication No. US 20240256333 A1, hereinafter “Weber”).

With regard to claim 10, Potkonjak as modified discloses the apparatus according to claim 8. Potkonjak as modified does not disclose; however, Weber discloses: wherein the one or more compute kernels are generated and/or regenerated in advance of the assignment, or wherein the one or more compute kernels are generated and/or regenerated just-in-time after the assignment (“Additionally, the invention provides the ability to migrate jobs or parts of jobs between different accelerator hardware types at runtime.
To this end, a computational graph is used that splits jobs into atomic tasks where each task can be individually programmed and scheduled for execution on a set of heterogeneous machines and accelerators.”, para [0017]). Both the systems of Potkonjak and Weber deal with program partitioning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Potkonjak as modified in view of Weber to “improve the fine-grained usage and billing of compute resources and thus lower the barrier to adapt heterogeneous computing in the cloud sector” (Weber, para [0077]).

Claims 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Potkonjak in view of Seo and Crutchfield, as applied to claim 11, and further in view of Che et al. (U.S. Patent Application Publication No. US 20200249998 A1, hereinafter “Che”).

With regard to claim 12, Potkonjak as modified discloses the apparatus according to claim 11. Potkonjak as modified does not disclose; however, Che discloses: wherein the machine-readable instructions comprise instructions to generate or re-generate the task graph based on a static analysis of the computer program (“Graph generator 211 can compile a source code for a machine-learning model or neural network model to generate a computation graph representing the source code.”, para [0033], fig 3). Both the systems of Potkonjak and Che deal with task graphs. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Potkonjak as modified in view of Che to “allow efficient usage of resources of the computing system.” (Che, para [0015]).

With regard to claim 13, Potkonjak as modified discloses the apparatus according to claim 12.
Potkonjak as modified does not disclose; however, Che further discloses: wherein the machine-readable instructions comprise instructions to generate or re-generate the task graph based on a dynamic analysis of the computer program based on a real-world current data flow and/or a real-world past data flow (“In some embodiments, the graph optimizer 212 may refer to database 217 to optimize a computation graph. The database 217 may store various information including: 1) system and target device information, 2) operation profiling information per target device, and 3) subgraph profiling information per target device.”, para [0036], “The operation profiling information can be estimated by simulations or obtained by previous experiments on each of target devices.”, para [0036], fig 3). Both the systems of Potkonjak and Che deal with task graphs. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Potkonjak as modified in view of Che to “allow efficient usage of resources of the computing system.” (Che, para [0015]).

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Potkonjak in view of Seo as applied to claim 1, and further in view of Brill (U.S. Patent Application Publication No. US 20170308411 A1).

With regard to claim 14, Potkonjak as modified discloses the apparatus according to claim 1. Potkonjak as modified does not disclose; however, Brill discloses: wherein the machine-readable instructions comprise instructions to determine the energy-related metric by estimating the energy-related metric (“the scheduler 401 also uses one or more heuristic techniques to quickly estimate the costs to complete all remaining tasks in order to determine a total estimated cost for each assignment scenario.”, para [0056], “In still other embodiments, the cost of each assignment may be based on other or additional characteristics, such as memory usage or bandwidth.
In embodiments where multiple characteristics are used, the multiple characteristics may be weighted or prioritized according to greatest importance (e.g., execution time may be of greater importance than power consumed, thus execution time is given a greater weighting in determining an overall cost).”, para [0055]). Both the systems of Potkonjak and Brill deal with multi-processor systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Potkonjak as modified in view of Brill to “maximizes performance while minimizing data movement and power consumption” (Brill, para [0020]).

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Potkonjak in view of Seo as applied to claim 1 above, and further in view of Kavanagh et al. ("Energy-Aware Self-Adaptation for Application Execution on Heterogeneous Parallel Architectures," IEEE Transactions on Sustainable Computing, vol. 5, no. 1, pp. 81-94, hereinafter “Kavanagh”).

With regard to claim 17, Potkonjak as modified discloses the apparatus according to claim 1. Potkonjak as modified does not disclose; however, Kavanagh discloses: wherein the machine-readable instructions comprise instructions to generate synthetic data to be used by the computer program, and to determine the energy-related metric based on the synthetic data (“Application power consumption cannot directly be measured and is synthetic in nature, based upon attributing power consumption to an application dependent upon workload. Adaptation of applications based upon power consumption therefore requires a model to attribute this power.”, “The energy modeller (EM) considers the major power consumers such as CPUs and other accelerators. In order to do this it has various models that may be used to attribute power consumption to an application.
Two models have been specifically designed for physical hosts with accelerators, namely the CpuAndAcceleratorEnergyPredictor that utilises neural networks to apply a fit to the available calibration data and the CpuAndBiModalAcceleratorEnergyPredictor that determines power usage of an accelerator assuming an unutilised and heavily utilised state.”, page 6, left column, first and second paragraphs). Both the systems of Potkonjak and Kavanagh deal with multi-processor systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Potkonjak as modified in view of Kavanagh to improve task scheduling.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Potkonjak in view of Seo as applied to claim 1 above, and further in view of Chang et al. (U.S. Patent Application Publication No. US 20220300324 A1, hereinafter “Chang”).

With regard to claim 21, Potkonjak as modified discloses the apparatus according to claim 1. Potkonjak as modified does not disclose; however, Chang discloses: wherein the machine-readable instructions comprise instructions to provide a runtime environment for execution of the computer program, wherein the determination of the energy-related metric and the assignment of the execution is performed by the runtime environment (“For example, the monitor module 110 may gather the present temperature, the environmental temperature, the power consumption, the amount of workload, the operations executed on the processor, the execution time, powered on or off, active or idle, the operating frequency and voltage, and the like, of each processor 130 at runtime.”, para [0031], “Embodiments of the invention provide a runtime scheduling mechanism for a multiprocessor system to perform thermal-aware task scheduling”, para [0020], “Referring to FIG. 2 and FIG.
4, the thermal predictor module 220 applies an operator fp to the input to generate a time series of predicted temperature values TempP(i) for P1, where i is a running index representing time.”, para [0039]). Both the systems of Potkonjak and Chang deal with an energy-related metric. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Potkonjak as modified in view of Chang to “decrease leakage power, reduce cooling costs, and improve system performance and reliability.” (Chang, para [0020]).

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Potkonjak in view of Seo as applied to claim 1 above, and further in view of Che et al. (U.S. Patent Application Publication No. US 20200249998 A1, hereinafter “Che”).

With regard to claim 22, Potkonjak as modified discloses the apparatus according to claim 1. Potkonjak as modified does not disclose; however, Che discloses: wherein the assignment of the execution of the one or more compute kernels to the two or more different XPUs is limited by one or more policies related to one or more of a deprecated instruction or deprecated instruction set, a prohibited instruction or prohibited instruction set and code execution within an XPU by out-of-band fleet management (“In some embodiments, before taking an action, the task allocation optimizer 215 may refer to database 217 to check whether there is any constraints or preferences on task allocation from prior knowledge. A certain target device may be specialized in executing certain operations or a certain target device may not be proper to execute certain operations.”, para [0050]). Both the systems of Potkonjak and Che deal with task graphs. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Potkonjak as modified in view of Che to prevent execution faults.
Allowable Subject Matter

Claim 5 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Arguments

Applicant’s arguments with respect to claims 1-14 and 17-25 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Zhao (Chinese Patent Application Publication No. CN 1863192 A) discloses “3) making negotiation decisions according to the service parameter information and the online negotiation result information; if the judgment is true, then generating a service contract, if the generating is successful, entering step 4), if the failed generation, then cancelling the service contract negotiation and contract number and informs the client; if the judgment is not satisfied, then cancelling the negotiation and the contract number and informs the client; 4) the service contract stored in SLA database, establishment of connection service according to service request information to perform SLA, if successfully established, network management, storing the service information and informs the client; if establishment failure to network management, revocation service contract. negotiation and the contract number and informs the client.” (Zhao, page 3, first paragraph).

Abulkhair ("Automated Negotiation using Parallel Particle Swarm Optimization for Cloud Computing Applications," 2017 International Conference on Computer and Applications (ICCA)) discloses “The SLA negotiation, can involve a single objective (e.g., the cost) or multiple objectives (e.g., cost, response time, and deadline).
In single objective, the fitness function is based only on the single objective, while in the multiple objective case the utility is dependent on a set of objectives. In this paper, the SLA negotiation model is based on three objectives: data center current load, network bandwidth, and cost. Thus, the selection is based on minimizing current load, network bandwidth, and cost.” (Abulkhair, page 27, right column, first full paragraph).

Mahbub ("Proactive SLA Negotiation for Service Based Systems," 2010 6th World Congress on Services) discloses “The negotiation broker is the component that manages and executes the negotiation process on behalf of a service consumer (i.e., the composite service) or a service provider. Our architecture assumes that a separate instance of the negotiation broker is associated with each of the two sides (the service provider and consumer) that participate in the negotiation process. Negotiation brokers are responsible for negotiating and agreeing the guarantee terms of an SLA. The negotiation process can be either reactive or proactive. In proactive negotiation, the negotiation process is carried out according to a two-phase protocol that may result in a provisionally agreed but not activated SLA (see Pre-agreed SLA in Figure 1) or negotiation failure.” (Mahbub, page 520, right column, first full paragraph).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SELENA SABAH NAHRA whose telephone number is (571) 272-6115. The examiner can normally be reached Monday-Thursday, 7:00 AM - 5:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hyung Sough, can be reached at (571) 272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.S.N./
Examiner, Art Unit 2192

/S. Sough/
SPE, Art Unit 2192

Prosecution Timeline

Jun 28, 2022
Application Filed
Aug 10, 2022
Response after Non-Final Action
Sep 03, 2025
Non-Final Rejection — §103
Dec 04, 2025
Response Filed
Feb 10, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554531
IMPROVING PROCESSOR UTILIZATION
2y 5m to grant Granted Feb 17, 2026
Patent 12554550
Real Time Optimization Apparatus Using Quantum Non-Fungible Token Contract Ranking for Dynamic Code Evolution
2y 5m to grant Granted Feb 17, 2026
Patent 12536047
Dynamic Core Allocation Among Containers on a Host
2y 5m to grant Granted Jan 27, 2026
Patent 12530212
METHOD AND APPARATUS FOR ISOLATED EXECUTION OF COMPUTER CODE WITH A NATIVE CODE PORTION
2y 5m to grant Granted Jan 20, 2026
Patent 12436793
Virtual Machine Management
2y 5m to grant Granted Oct 07, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
75%
Grant Probability
99%
With Interview (+66.7%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
