DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to Applicant’s Amendment and Remarks filed on 12 November 2025.
Claims 1-20 are pending in this application.
Claim Objections
Claims 1, 8 and 15 are objected to because of the following informalities:
In claim 1, lines 24-25 and 28, the claim recites “the power usage information”. This should be amended to “the power usage” for consistency with the antecedent (see claim 1, lines 19-20, “storage of power usage”).
The same applies to claim 8 (lines 17 and 21) and claim 15 (lines 20 and 24).
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1, Statutory Category: Yes, claim 1 recites an apparatus and therefore falls within the statutory category of a machine.
Step 2A- Prong 1: Judicial Exception Recited: Yes, the claim recites: “assign the first requested task to a destination FPGA; and assign a second requested task to the destination FPGA based on the stored indication that the destination FPGA is already configured with the accelerator image, a priority of the second requested task, and the power usage information, frequencies of use of a plurality of accelerator images including the accelerator image of the destination FPGA, and a determination of whether the power usage information meets a power usage threshold”. As drafted, the claim as a whole recites steps that could be performed in the human mind, but for the recitation of generic computing components. The human mind can readily judge, evaluate, plan, and schedule: assigning the first task to an FPGA, and assigning the second task to the FPGA based on the stored indication that the FPGA is already configured with a matching accelerator image, a priority of the second requested task, the power usage, the frequencies of use of different accelerator images, and a determination of whether the power usage meets a power usage threshold. Therefore, but for the recitation of generic computing components, these steps fall within the “Mental Processes” grouping of abstract ideas, i.e., concepts that can be performed in the human mind (including an observation, evaluation, judgment, or opinion).
Therefore, yes, the claims do recite judicial exceptions.
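For illustration only, the recited assignment evaluation characterized above can be reduced to a short sketch. Every function name, data structure, and parameter below is hypothetical and offered solely to show that the recited evaluation is a sequence of judgments; none of it is claim language:

```python
# Hypothetical sketch of the recited assignment evaluation; all names and
# data structures are illustrative and are not claim language.

def assign_second_task(task, fpgas, image_use_freq, power_threshold,
                       min_priority=1):
    """Select a destination FPGA for the second requested task based on
    (1) a stored indication that an FPGA is already configured with the
    needed accelerator image, (2) the task's priority, (3) stored power
    usage measured against a threshold, and (4) frequencies of image use."""
    # Keep only FPGAs whose stored indication shows the matching image.
    candidates = [f for f in fpgas if f["configured_image"] == task["image"]]
    # A low-priority task is not assigned at all (hypothetical policy).
    if task["priority"] < min_priority:
        return None
    # Keep only FPGAs whose stored power usage meets the threshold.
    eligible = [f for f in candidates if f["power_usage"] <= power_threshold]
    if not eligible:
        return None
    # Prefer the FPGA whose configured image is used most frequently.
    eligible.sort(key=lambda f: image_use_freq[f["configured_image"]],
                  reverse=True)
    return eligible[0]["id"]
```

Each step of the sketch is an observation or evaluation of stored information, consistent with the characterization of the limitation as a mental process but for the generic computing components.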
Step 2A- Prong 2: Integrated into a Practical Application: No, this judicial exception is not integrated into a practical application. In particular, the claim recites the additional limitation of “network interface circuitry to obtain task parameters of a first requested task, the task parameters to indicate an accelerator image to be used by a field programmable gate array (FPGA) in performance of the first requested task”, which is insignificant pre-solution data gathering (see MPEP § 2106.05(g)). In addition, the limitation of “network interface circuitry, computer readable instructions; and at least one processor circuit to be programmed by the computer readable instructions to” amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a generic computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)). Moreover, the limitation of “cause reimaging of the destination FPGA with the accelerator image” merely applies the judicial exception or abstract idea (see MPEP § 2106.05(f)). The claim does not define any particular machine to “cause” this “reimaging” other than a generic machine such as the “processor circuit,” and provides no details whatsoever on how the claimed function occurs. Further, the limitation of “cause transmission of an identification of the destination FPGA to a requesting device, the requesting device to communicate with the destination FPGA to cause the destination FPGA to perform the first requested task” is insignificant extra-solution activity (i.e., transmitting data) (see MPEP § 2106.05(g)). Furthermore, “cause storage of an indication that the destination FPGA is configured with the accelerator image; cause storage of power usage by the destination FPGA during execution of the first requested task” are insignificant extra-solution activity, namely mere data storing (see MPEP § 2106.05(g)).
Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to the abstract idea.
Step 2B: Claim Provides an Inventive Concept: No. The additional element of “network interface circuitry, computer readable instructions; and at least one processor circuit to be programmed by the computer readable instructions to” amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a generic computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)). In addition, the limitation of “cause reimaging of the destination FPGA with the accelerator image” merely applies the judicial exception or abstract idea (see MPEP § 2106.05(f)). The claim does not define any particular machine to “cause” this “reimaging” other than a generic machine such as the “processor circuit,” and provides no details whatsoever on how the claimed function occurs. Moreover, “network interface circuitry to obtain task parameters of a first requested task, the task parameters to indicate an accelerator image to be used by a field programmable gate array (FPGA) in performance of the first requested task” is insignificant pre-solution data gathering (see MPEP § 2106.05(g)). Further, the limitation of “cause transmission of an identification of the destination FPGA to a requesting device, the requesting device to communicate with the destination FPGA to cause the destination FPGA to perform the first requested task” is insignificant extra-solution activity (i.e., transmitting data) (see MPEP § 2106.05(g)). Furthermore, “cause storage of an indication that the destination FPGA is configured with the accelerator image; cause storage of power usage by the destination FPGA during execution of the first requested task” are insignificant extra-solution activity, namely mere data storing (see MPEP § 2106.05(g)), and are well-understood, routine, conventional activity (see MPEP § 2106.05(d)).
Courts have identified “receiving and transmitting data, storing and retrieving information,” et cetera, as well-understood, routine, and conventional. These additional elements, alone and in combination, do not amount to significantly more than the exception itself or provide an inventive concept in Step 2B.
Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. Here, the “obtain” and “transmission” steps were considered extra-solution activity in Step 2A as insignificant data gathering and communication, and the “cause storage” steps were considered extra-solution activity in Step 2A as mere data storing (see MPEP § 2106.05(g)); all are well-understood, routine, conventional activity in the field. The “obtain” and “transmission” steps are for the purpose of communication and transmitting data, which the courts have recognized as conventional (receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016); see MPEP § 2106.05(d)(II)). The “cause storage” steps are for the purpose of merely storing data, which the courts have likewise recognized as conventional (storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93; see MPEP § 2106.05(d)(II)(iv)). Accordingly, the conclusion that the “obtain,” “transmission,” and “cause storage” steps are well-understood, routine, conventional activity is supported under Berkheimer Option 2.
For these reasons, there is no inventive concept in the claim, and thus the claim is ineligible.
Independent claims 8 and 15 are rejected for the same reasons as claim 1 above. Claim 8 further recites “A non-transitory computer readable medium comprising instruction which, when executed, cause at least one processor”. Claim 15 further recites “A method, comprising” and “executing an instruction with processor circuitry”. These additional elements merely recite instructions to implement an abstract idea on a generic computer, or merely use a generic computer or computer components as a tool to perform the abstract idea; thus they do not integrate the abstract idea into a practical application under Prong 2, nor do they amount to significantly more than the judicial exception under Step 2B (see MPEP § 2106.05(f)).
With respect to dependent claim 2, the claim elaborates that the task parameters include an indication of the accelerator image to be used in performance of the second requested task, and that the determination of the destination FPGA includes determining the destination FPGA based on an indication that an instance of the accelerator image is available in the destination FPGA. The limitation “task parameters include an indication of the accelerator image to be used” amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)). In addition, “wherein to determine the destination FPGA includes to determine the destination FPGA based on an indication that an instance of the accelerator image is available in the destination FPGA” is treated as part of the abstract idea and is analogous to a mental process, i.e., a concept that can be performed in the human mind. Further, the claim as a whole recites a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion).
With respect to dependent claim 3, the claim elaborates that the task parameters include an indication of an accelerator image to be used in performance of the second requested task, and that the determination of the destination FPGA includes determining the destination FPGA based on an indication that the destination FPGA has space available for the accelerator image. The limitation “wherein the task parameters include an indication of an accelerator image to be used in performance of the second requested task” amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)). In addition, “wherein to determine the destination FPGA includes to determine the destination FPGA based on an indication that the destination FPGA has space available for the accelerator image in the destination FPGA” is treated as part of the abstract idea and is analogous to a mental process, i.e., a concept that can be performed in the human mind. Further, the claim as a whole recites a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion).
With respect to dependent claim 4, the claim elaborates that the task parameters include an indication of an accelerator image to be used in performance of the second requested task, and that the determination of the destination FPGA includes determining the destination FPGA based on space available for the accelerator image in the destination FPGA after a defragmentation of the destination FPGA. The limitation reciting “an indication of an accelerator image” amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)). In addition, “wherein to determine the destination FPGA includes to determine the destination FPGA based on space available for the accelerator image in the destination FPGA after a defragmentation of the destination FPGA” is treated as part of the abstract idea and is analogous to a mental process, i.e., a concept that can be performed in the human mind. Further, the claim as a whole recites a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion).
With respect to dependent claim 5, the claim elaborates that the task parameters include an indication of an accelerator image to be used in performance of the second requested task; that the at least one processor circuit is to store a plurality of accelerator images, the plurality of accelerator images including the accelerator image to be used in performance of the second requested task; and that the network interface circuitry is further to send the accelerator image to the destination FPGA in response to receipt of the indication of the accelerator image to be used in performance of the second requested task. The limitations reciting “an indication of an accelerator image to be used” and “wherein the plurality of accelerator images includes the accelerator image to be used in performance of the second requested task” amount to adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)). In addition, “store a plurality of accelerator images” is insignificant extra-solution activity, namely mere data storing (see MPEP § 2106.05(g)). And “wherein the network interface circuitry is further to send the accelerator image to the destination FPGA in response to receive the indication” is insignificant extra-solution activity (i.e., transmitting data) (see MPEP § 2106.05(g)).
With respect to dependent claim 6, the claim elaborates that the assignment of the second requested task is based on FPGA usage information that includes at least one of (i) accelerator images deployed on each of a plurality of FPGAs, (ii) whether each accelerator image deployed on each of the plurality of FPGAs is permitted to be shared, (iii) how much free space is in each of the plurality of FPGAs, (iv) a power usage of each of the plurality of FPGAs, and (v) an indication of a last time of use of an accelerator image of at least one of the plurality of FPGAs. The limitation “wherein the assignment of the second requested task is based on FPGA usage information that includes…” is treated as part of the abstract idea and is analogous to a mental process, i.e., a concept that can be performed in the human mind. Further, the claim as a whole recites a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion).
With respect to dependent claim 7, the claim elaborates that the determination of the destination FPGA of a plurality of FPGAs includes determining the destination FPGA based on at least one of (i) accelerator images deployed on each of the plurality of FPGAs, (ii) whether each accelerator image deployed on each of the plurality of FPGAs is permitted to be shared, (iii) how much free space is in the at least one of the plurality of FPGAs, (iv) a power usage of each of the plurality of FPGAs, and (v) the indication of a last time of use of the accelerator image of at least one of the plurality of FPGAs. The limitation “determine the destination FPGA of a plurality of FPGAs includes to determine the destination FPGA based on” all of the recited information is treated as part of the abstract idea and is analogous to a mental process, i.e., a concept that can be performed in the human mind. Further, the claim as a whole recites a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion).
Dependent claims 9-12 recite the same features as claims 2-5, respectively; therefore, they are also rejected under the same rationale as applied above.
With respect to dependent claim 13, the claim elaborates that the assignment of the second requested task is based on FPGA usage information that includes how much free space is in each of a plurality of FPGAs. The limitation “assignment of the second requested task is based on FPGA usage information” is treated as part of the abstract idea and is analogous to a mental process, i.e., a concept that can be performed in the human mind. Further, the claim as a whole recites a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion).
With respect to dependent claim 14, the claim elaborates that the determination of the destination FPGA of a plurality of FPGAs includes determining the destination FPGA based on an indication of a last time of use of the accelerator image of at least one of the plurality of FPGAs. The limitation “determine the destination FPGA based an indication of a last time of use of the accelerator image of at least one of the plurality of FPGAs” is treated as part of the abstract idea and is analogous to a mental process, i.e., a concept that can be performed in the human mind. Further, the claim as a whole recites a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion).
Dependent claims 16-19 and 20 recite the same features as claims 2-5 and 14, respectively; therefore, they are also rejected under the same rationale as applied above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7-8, 13 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (US Pub. 2020/0293345 A1) in view of IZENBERG et al. (US Pub. 2017/0195173 A1) and further in view of Putnam et al. (US Pub. 2016/0299553 A1), OKADA et al. (US Pub. 2018/0260257 A1) and CHEN et al. (US Pub. 2019/0056942 A1).
Wu, IZENBERG, Putnam and OKADA were cited in the previous Office Action.
CHEN was cited in the IDS filed on 03/19/2024.
As per claim 1, Wu teaches the invention substantially as claimed, including “An apparatus” (Wu, Fig. 2):
network interface circuitry to obtain task parameters of a first requested task, the task parameters to indicate a target accelerator type to be used in performance of the first requested task (Wu, Fig. 2, 101, 102, 103, 104 and 105 (as a whole, as the network interface circuitry); [0072] lines 1-12, The acceleration management node 100 in this embodiment of the present application may be a management program running on a physical host. The physical host may include a processor, a memory, and an input/output (I/O) interface…the receiving unit 101 may be a software I/O interface, and the acceleration node may use various communications tools (for example, a communications tool Rabbit MQ) between software I/O interfaces to remotely invoke the software I/O interface for communication; [0073] lines 1-5, The obtaining unit 103 is configured to obtain an invocation request from a client 300. The invocation request is used to invoke an acceleration device to accelerate a service of the client 300, and the invocation request includes a target acceleration type and a target algorithm type (as obtaining task parameters of a first requested task which indicate a target accelerator type to be used in performance of the first requested task (i.e., the service of the client)));
computer readable instructions; and at least one processor circuit to be programmed by the computer readable instructions to (Wu, [0072] lines 1-5, The acceleration management node…running on a physical host. The physical host may include a processor, a memory, and an input/output (I/O) interface; Claim 20, lines 1-3, A non-transitory storage medium, storing instructions which, when executed by one or more processors of an acceleration management device):
assign the first requested task to a destination FPGA (Wu, Fig. 2, instruction unit 105 (see arrow assigned to acceleration node 200c); [0080] lines 1-15, after the client 300 requests the acceleration management node 100 to invoke an acceleration device, the acceleration management node 100 determines, by querying the acceleration device information obtained from the acceleration nodes 200a, 200b, and 200c, that a target acceleration device required for meeting the invocation request of the client 300 is located on the acceleration node 200c; [0076] lines 1-5, After obtaining the invocation request from the client 300, the allocation unit 104 searches acceleration device information of all acceleration nodes that is stored in a storage unit 102, for a target acceleration device that meets the target acceleration type and the target algorithm type required by the invocation request; [0088] lines 2-5, the allocation unit 104 determines a target acceleration device from the at least one candidate acceleration device; also see [0003] lines 2-4, some services (or functions) in the program may be allocated to a hardware acceleration device for execution);
assign a second requested task to the destination FPGA (Wu, [0003] lines 2-4, some services (or functions) in the program may be allocated to a hardware acceleration device for execution (as including a second requested task); see [0073] lines 1-5, The obtaining unit 103 is configured to obtain an invocation request from a client 300. The invocation request is used to invoke an acceleration device to accelerate a service of the client 300; [0088] lines 2-5, determines a target acceleration device from the at least one candidate acceleration device).
Wu fails to specifically teach cause transmission of an identification of the destination FPGA to a requesting device, the requesting device to communicate with the destination FPGA to cause the destination FPGA to perform the first requested task.
However, IZENBERG teaches cause transmission of an identification of the destination FPGA to a requesting device, the requesting device to communicate with the destination FPGA to cause the destination FPGA to perform the first requested task (IZENBERG, Fig. 5, 520 resource manager send 533 recommended FPGA-enabled CL category to the client 580 (as requesting device), 534 CL request send to the resource manager; Fig. 3, category FPGA-A; [0025] lines 1-9, the client may indicate a target application which is to be run using VCS resources…Based on the description of the application provided by the client, the resource manager may recommend a particular FPGA-enabled instance category (as identification of the destination FPGA) to the client. The client may then request an instance of the recommended category, and the resource manager may perform the necessary configuration operations to establish the instance on behalf of the client (as the requesting device to communicate with the destination FPGA via the resource manager to cause the destination FPGA to perform the first requested task); also see [0028] lines 5-7, select desired applications, and request FPGA-enabled compute instances on which the selected applications can be run).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Wu with IZENBERG because IZENBERG’s teaching of sending the recommendation/identification of the FPGA back to the client and allowing the client to request the target FPGA for performing the workload based on the recommendation would have provided Wu’s system with the advantage and capability of allowing the user to control the workload assignment based on the system recommendation, in order to improve system performance and user experience.
Wu and IZENBERG fail to specifically teach that, when assigning a second requested task, the assignment is based on a priority of the second requested task, the power usage information, and a determination of whether the power usage information meets a power usage threshold.
However, Putnam teaches that the assignment of a second requested task is based on a priority of the second requested task (Putnam, Fig. 2, workflow 270, 282 portion of the workflow is assigned to hardware accelerator 222; [0002] lines 29-34, the processing of at least a portion of the workflow from generalized central processing units to the targeted processing units of hardware accelerators, a determination that can be informed by current power consumption, anticipated power consumption, available power routing implementations, workload or job priority) and
the power usage information, and a determination of whether the power usage information meets a power usage threshold (Putnam, Fig. 2, 290 aggregate power consumption, 291 hardware accelerator power consumption (as power usage information), maximum (as power usage threshold); [0017] lines 1-6, exemplary hardware accelerator…Field Programmable Gate Arrays (FPGAs); [0034] lines 1-16, the utilization of a hardware accelerator, such as the exemplary hardware accelerator 222, can exceed the capacity of power provisioning devices...For example, as illustrated in FIG. 2, the execution of computer-executable instructions associated with the workflow 270, by the CPU 221, can result in a power consumption amount 292 that can be below a maximum rated power, such as one or more of the power provisioning devices 260. However, utilization of the hardware accelerator 222 to, for example, execute at least a portion 271 of the workflow 270, can result in an additional power consumption amount 291 such that the aggregate power consumption 290 can exceed a maximum rated power; also see [0035] lines 10-12, transfer 289 of the processing of a portion of the workflow 270, such as the exemplary portion 271, to a different hardware accelerator (as the assignment of the second requested task based on the FPGA usage information, including determining whether the power usage information meets a power usage threshold, i.e., exceeding the maximum; if exceeded, the portion can be assigned to a different accelerator)).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Wu and IZENBERG with Putnam because Putnam’s teaching of assigning the portion of the workload to a different accelerator based on a determination that the destination FPGA’s power consumption would exceed the maximum power level would have provided Wu and IZENBERG’s system with the advantage and capability of optimizing the resource/power utilization between different hardware accelerators, thereby improving system efficiency and performance.
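As an illustrative aid only, the check Putnam describes, aggregate power consumption measured against a maximum rating with a transfer to a different accelerator when the rating would be exceeded, might be sketched as follows; the function and parameter names are hypothetical and do not come from Putnam:

```python
# Hypothetical sketch of the power-threshold check Putnam describes
# ([0034]-[0035]); names are illustrative, not Putnam's.

def place_workflow_portion(cpu_power, accel_power, max_rated_power,
                           local_accel, alternate_accel):
    """Return the accelerator that should run the workflow portion.
    If the aggregate draw (CPU plus local accelerator) would exceed the
    maximum rated power, transfer the portion to a different accelerator."""
    aggregate = cpu_power + accel_power  # Putnam's aggregate consumption 290
    if aggregate > max_rated_power:
        return alternate_accel  # transfer, consistent with Putnam [0035]
    return local_accel
```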
Wu, IZENBERG and Putnam fail to specifically teach that the target accelerator type indicated in the task parameters is an accelerator image to be used by a field programmable gate array (FPGA), cause reimaging of the destination FPGA with the accelerator image, and that the assignment of the second requested task is also based on frequencies of use of a plurality of accelerator images including the accelerator image of the destination FPGA.
However, OKADA teaches the target accelerator type indicated in the task parameters being an accelerator image to be used by a field programmable gate array (FPGA), and cause reimaging of the destination FPGA with the accelerator image (OKADA, Fig. 10, FPGA#1, FPGA#2; [0066] lines 2-14, the usage request 510 may include an address of the FPGA configuration data (as accelerator image) managed by the request source application. In particular, for example, when the application #1 is the request source, the usage request 510 may include a configuration address (information indicating the address of the FPGA configuration data #1) corresponding to the application #1. When the FPGA management unit 103 receives such a usage request 510, if the FPGA 12 is not in use, the FPGA management unit 103 loads the FPGA configuration data #1 into the configuration data storage area 121 of the FPGA 12 from the configuration address (address of the memory 11) indicated by the received usage request 510; also see [0003] lines 1-3, load FPGA configuration data in the FPGA at the time of device power-on, and use it as dedicated hardware; [0036] lines 2-4, loading the FPGA configuration data into a configuration data storage area 121 in the FPGA 12, it is possible to flexibly modify the operation within the FPGA 12), and
that the assignment of the second requested task is also based on frequencies of use of a plurality of accelerator images including the accelerator image of the destination FPGA (OKADA, Fig. 10, FPGA#1, FPGA#2 (as a plurality of FPGAs), 121 configuration data; Fig. 12, 500 FPGA management information; [0075] lines 1-7, there are three kinds of FPGA configuration data 802 including FPGA configuration data #A for Join processing, FPGA configuration data #B for sequential scanning, and FPGA configuration data #C for aggregating calculation. Further, the memory 11 includes FPGA configuration information 801 indicating the information of the configuration data #A through #C; [0077] lines 1-5, as there are two FPGAs 12, both the FPGA usage status 501 and the FPGA device information 502 exist for each FPGA 12. That is, there are FPGA usage statuses #1 and #2 corresponding to FPGAs #1 and #2; [0104] lines 2-11, the FPGA management unit 104 may cache the FPGA configuration data 1102 once used, collect a usage frequency for each FPGA configuration data 1102, analyze the collected information, and, when the FPGA 12 is not in use, load the FPGA configuration data 1102 that has the highest likelihood of being used next into the FPGA 12 in advance (as based on frequencies of use of a plurality of accelerator images including the accelerator image of the destination FPGA). In this case, if the prediction is correct, as the FPGA configuration data 1102 of a usage target has already been loaded by the time the FPGA 12 is utilized, the loading time may be shortened).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Wu, IZENBERG and Putnam with OKADA because OKADA’s teaching of determining the usage frequency of each set of FPGA configuration data (as an image) and preloading the FPGA image with the highest likelihood of being used next into the FPGA in advance, for the assignment of tasks, would have provided Wu, IZENBERG and Putnam’s system with the advantage and capability of shortening the FPGA image loading time, thereby improving system efficiency and performance (see OKADA, [0104]).
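For illustration only, OKADA’s frequency-based preloading ([0104]) can be sketched as below; the class and method names are hypothetical and are not OKADA’s:

```python
from collections import Counter

# Hypothetical sketch of OKADA's usage-frequency preloading ([0104]);
# names are illustrative, not OKADA's.

class ConfigDataCache:
    """Collect a usage frequency for each set of FPGA configuration data
    and, when the FPGA is idle, choose the one most likely to be used
    next for preloading."""

    def __init__(self):
        self.use_counts = Counter()

    def record_use(self, config_id):
        # Collect a usage frequency for each configuration data set.
        self.use_counts[config_id] += 1

    def choose_preload(self):
        # The most frequently used configuration data so far is treated
        # as the one with the highest likelihood of being used next.
        if not self.use_counts:
            return None
        return self.use_counts.most_common(1)[0][0]
```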
Wu, IZENBERG, Putnam and OKADA fail to specifically teach cause storage of an indication that the destination FPGA is configured with the accelerator image; cause storage of power usage by the destination FPGA during execution of the first requested task; and, when assigning a second requested task, that it is based on the stored indication that the destination FPGA is already configured with the accelerator image.
However, CHEN teaches cause storage of an indication that the destination FPGA is configured with the accelerator image (CHEN, Fig. 6, 164 accelProfileID (see Fig. 5, 130 (as accelerator image)); [0037] lines 1-3, FIG. 5 is a schematic diagram of a data structure representing hardware accelerator configurations; [0068] lines 1-10, FIG. 6 depicts example records of a data structure 160 maintained by resource management service 126. Data structure 160 contains records for each processing resource (processor 110, accelerated processor 111, GPU 119) in each resource server 104 within computing system 100. As shown, each record has a resource ID field 162 with a value uniquely identifying the specific resource, i.e. the specific resource server 104 or processing resource therein. Each record further contains an accelerator profile field 164 with a value corresponding to a profile ID 132 of data structure 130);
cause storage of power usage by the destination FPGA during execution of the first requested task (CHEN, Fig. 6, 172 TDP; [0070] lines 1-4, Power consumption field 172 contains a value indicative of the amount of power used by the resource when under load. The power consumption may be a design rating, such as a thermal rating, or a measured consumption value; also see [0071] lines 6-12, indicating that the resource is busy with an existing workload (as during execution of the first requested task). In other embodiments, availability field 178 may also reflect the degree of utilization of resources. For example, the availability may be expressed as a percentage, with 100% indicating that the resource is fully idle and ready for a new workload), and
when assign a second requests task, it is based on the stored indication that the destination FPGA is already configured with the accelerator image (CHEN, [0074] lines 1-12, Based on the resource availability information received from RMS 126 and catalog 124, and the resource request information received from cloud infrastructure manager 122, ARO 128 analyzes function requirements and resource availability, then allocates types and quantities of resources to particular requests. Specifically, ARO 128 is configured to filter available resources based on parameters associated with the requests, rank the candidate resources, and match resources to workloads. ARO 128 is further configured to construct a provisioning request and send the provisioning request to cloud infrastructure manager 122 for provisioning of resources. [0075] lines 1-4, Each request defines a workload to be executed and is accompanied by a policy defining requirements or preferences for the execution of the workload; [0077] lines 1-10, Function category 182 and function type 184 correspond to function category 144 and function 142 of records 130 (FIG. 5). In the depicted example, policy 180 is for a compression function. In some examples, multiple functions may be specified. Thus, as shown, function type 184 may be an array specifying multiple types of functions with which the application is compatible; [0086] lines 1-9, ARO 128 filters the candidate accelerators and function implementations. Specifically, ARO 128 retains candidate accelerators and function implementations that match one another and eliminates unmatched accelerators and function implementations…“matching” accelerators and function implementations are those corresponding accelerators and function implementations that are compatible with the same accelerator profile. 
(as when assigning the second requested task, it is based on the stored indication that the destination FPGA is already configured with the accelerator image (i.e., “matching”))).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Wu, IZENBERG, Putnam and OKADA with CHEN because CHEN’s teaching of determining the destination/resource for a request based on matching the request requirements with a stored record would have provided Wu, IZENBERG, Putnam and OKADA’s system with the advantage and capability to easily determine an appropriate processing node/FPGA based on the matching accelerator profile, thereby improving system performance and resource utilization (see CHEN, [0056] “Execution of functions using appropriate accelerators generally provides improved efficiency or performance”).
As per claim 7, Wu, IZENBERG, Putnam, OKADA and CHEN teach the invention according to claim 1 above. CHEN further teaches wherein to determine the destination FPGA of a plurality of FPGAs includes to determine the destination FPGA based on at least one of (i) accelerator images deployed on each of the plurality of FPGAs, (ii) whether each accelerator image deployed on each of the plurality of FPGAs is permitted to be shared, (iii) how much free space is in the at least one of the plurality of FPGAs, (iv) a power usage of each of the plurality of FPGAs, and (v) the indication of a last time of use of the accelerator image of at least one of the plurality of FPGAs (Chen, [0074] lines 1-9, Based on the resource availability information received from RMS 126 and catalog 124, and the resource request information received from cloud infrastructure manager 122, ARO 128 analyzes function requirements and resource availability, then allocates types and quantities of resources to particular requests. Specifically, ARO 128 is configured to filter available resources based on parameters associated with the requests, rank the candidate resources, and match resources to workloads; also see [0068]; [0081] lines 1-3, Policy 180 further includes a set of rules 200 to selectively rank candidate resources for deployment of the function; [0070] lines 1-4, Power consumption field 172 contains a value indicative of the amount of power used by the resource when under load. The power consumption may be a design rating, such as a thermal rating, or a measured consumption value (as a power usage); also see [0081] lines 8-11, Rules 200-1, 200-2, 200-3 are applied in order, i.e. resources are to be sorted first according to performance, second according to TDP, and third according to the available function).
As per claim 8, it is a non-transitory computer readable medium claim of claim 1 above. Therefore, it is rejected for the same reason as claim 1 above (i.e., all the claimed limitations of claim 8 are included in claim 1).
As per claim 13, Wu, IZENBERG, Putnam, OKADA and CHEN teach the invention according to claim 1 above. CHEN further teaches wherein the assignment of the second requested task is based on FPGA usage information that includes how much free space is in each of a plurality of FPGAs (CHEN, [0071] lines 1-20, Availability field 176 contains values indicating the availability of the resource for new workloads…In other embodiments, availability field 178 may also reflect the degree of utilization of resources. For example, the availability may be expressed as a percentage, with 100% indicating that the resource is fully idle and ready for a new workload, and 0% indicating that the resource is fully utilized in execution of an existing workload. It should be understood that not all embodiments require all the fields of the data structure 160, and that in some embodiments only a subset of these fields is required. In some embodiments additional fields may be present in the data structure (as how much free space (i.e., a percentage))).
As per claim 15, it is a method claim of claim 1 above. Therefore, it is rejected for the same reason as claim 1 above (i.e., all the claimed limitations of claim 15 are included in claim 1).
Claims 2, 6, 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Wu, IZENBERG, Putnam, OKADA and CHEN, as applied to claims 1, 8 and 15 respectively above, and further in view of OOHIRA et al. (US Pub. 2019/0050248 A1).
OOHIRA was cited in the IDS filed on 03/19/2024.
As per claim 2, Wu, IZENBERG, Putnam, OKADA and CHEN teach the invention according to claim 1 above. OKADA further teaches wherein the task parameters include an indication of the accelerator image to be used in performance of the second requested task (OKADA, Fig. 2, application #2 (as second requested task); Fig. 10, FPGA#1, FPGA#2; [0064] lines 1-3, The application #1 and the application #2 issue a usage request 510 to the FPGA management unit; [0066] lines 2-14, the usage request 510 may include an address of the FPGA configuration data (as accelerator image) managed by the request source application. In particular, for example, when the application #1 is the request source, the usage request 510 may include a configuration address (information indicating the address of the FPGA configuration data #1) corresponding to the application #1. When the FPGA management unit 103 receives such a usage request 510, if the FPGA 12 is not in use, the FPGA management unit 103 loads the FPGA configuration data #1 into the configuration data storage area 121 of the FPGA 12 from the configuration address (address of the memory 11) indicated by the received usage request 510; also see [0003] lines 1-3, load FPGA configuration data in the FPGA at the time of device power-on, and use it as dedicated hardware; [0036] lines 2-4, loading the FPGA configuration data into a configuration data storage area 121 in the FPGA 12, it is possible to flexibly modify the operation within the FPGA 12).
Wu, IZENBERG, Putnam, OKADA and CHEN fail to explicitly teach wherein to determine the destination FPGA includes to determine the destination FPGA based on an indication that an instance of the accelerator image is available in the destination FPGA.
However, OOHIRA teaches wherein to determine the destination FPGA includes to determine the destination FPGA based on an indication that an instance of the accelerator image is available in the destination FPGA (OOHIRA, [0085] lines 1-4, the HWA resource information includes at least one among: usage state indicating whether or not the hardware accelerator is being used, loading ID indicating a program loaded to the hardware accelerator (as an instance of the accelerator image is available; see Fig. 6, HWA configuration information (as accelerator usage information), PM1-3…; loading ID, P1; [0113] lines 1-3, The program ID is ID information of a program in which a hardware accelerator is used within a program; Fig. 8, S204 Are programs the same, yes to S205 and yes to HWA is usable)).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Wu, IZENBERG, Putnam, OKADA and CHEN with OOHIRA because OOHIRA’s teaching of task parameters indicating different configurations/settings/programs/accelerator images would have provided Wu, IZENBERG, Putnam, OKADA and CHEN’s system with the advantage and capability to easily determine the hardware capability of different accelerators, thereby improving task scheduling performance and resource utilization.
As per claim 6, Wu, IZENBERG, Putnam, OKADA and CHEN teach the invention according to claim 1 above. Wu, IZENBERG, Putnam, OKADA and CHEN fail to specifically teach wherein the assignment of the second requested task is based on FPGA usage information that includes at least one of (i) accelerator images deployed on each of a plurality of FPGAs, (ii) whether each accelerator image deployed on each of the plurality of FPGAs is permitted to be shared, (iii) how much free space is in each of the plurality of FPGAs, (iv) a power usage of each of the plurality of FPGAs, and (v) an indication of a last time of use of an accelerator image of at least one of the plurality of FPGAs.
However, OOHIRA teaches wherein the assignment of the second requested task is based on FPGA usage information that includes at least one of (i) accelerator images deployed on each of a plurality of FPGAs, (ii) whether each accelerator image deployed on each of the plurality of FPGAs is permitted to be shared, (iii) how much free space is in each of the plurality of FPGAs, (iv) a power usage of each of the plurality of FPGAs, and (v) an indication of a last time of use of an accelerator image of at least one of the plurality of FPGAs (OOHIRA, Fig. 6, HWA configuration information, HWA resource information, loading ID associated with device ID; [0123] lines 1-3, The loading ID is ID information related to a program written to the hardware accelerator (a program executed in the hardware accelerator) (as accelerator images deployed on each of a plurality of FPGAs)).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Wu, IZENBERG, Putnam, OKADA and CHEN with OOHIRA because OOHIRA’s teaching of resource information for the hardware accelerator that includes the image/program/setting ID that is currently loaded would have provided Wu, IZENBERG, Putnam, OKADA and CHEN’s system with the advantage and capability to easily determine the hardware capability of different accelerators, thereby improving task scheduling performance and resource utilization.
As per claim 9, it is a non-transitory computer readable medium claim of claim 2 above. Therefore, it is rejected for the same reason as claim 2 above. In addition, OKADA further teaches wherein the assignment of the first task is based on task parameters that include an indication of an accelerator image to be used in performance of the second requested task (OKADA, Fig. 2, application #2 (as second requested task); Fig. 10, FPGA#1, FPGA#2; [0064] lines 1-3, The application #1 and the application #2 issue a usage request 510 to the FPGA management unit; also see Fig. 4, 1101 and 1101 FPGA configuration information and data for application 1 and application 2; [0066] lines 2-14, the usage request 510 may include an address of the FPGA configuration data (as accelerator image) managed by the request source application. In particular, for example, when the application #1 is the request source, the usage request 510 may include a configuration address (information indicating the address of the FPGA configuration data #1) corresponding to the application #1. When the FPGA management unit 103 receives such a usage request 510, if the FPGA 12 is not in use, the FPGA management unit 103 loads the FPGA configuration data #1 into the configuration data storage area 121 of the FPGA 12 from the configuration address (address of the memory 11) indicated by the received usage request 510; also see [0003] lines 1-3, load FPGA configuration data in the FPGA at the time of device power-on, and use it as dedicated hardware; [0036] lines 2-4, loading the FPGA configuration data into a configuration data storage area 121 in the FPGA 12, it is possible to flexibly modify the operation within the FPGA 12).
As per claim 16, it is a method claim of claim 2 above. Therefore, it is rejected for the same reason as claim 2 above. In addition, OKADA further teaches wherein the assignment of the first task is based on task parameters that include an indication of an accelerator image to be used in performance of the second requested task (OKADA, Fig. 2, application #2 (as second requested task); Fig. 10, FPGA#1, FPGA#2; [0064] lines 1-3, The application #1 and the application #2 issue a usage request 510 to the FPGA management unit; also see Fig. 4, 1101 and 1101 FPGA configuration information and data for application 1 and application 2; [0066] lines 2-14, the usage request 510 may include an address of the FPGA configuration data (as accelerator image) managed by the request source application. In particular, for example, when the application #1 is the request source, the usage request 510 may include a configuration address (information indicating the address of the FPGA configuration data #1 ) corresponding to the application #1. When the FPGA management unit 103 receives such a usage request 510, if the FPGA 12 is not in use, the FPGA management unit 103 loads the FPGA configuration data #1 into the configuration data storage area 121 of the FPGA 12 from the configuration address (address of the memory 11) indicated by the received usage request 510; also see [0003] lines 1-3, load FPGA configuration data in the FPGA at the time of device power-on, and use it as dedicated hardware; [0036] lines 2-4, loading the FPGA configuration data into a configuration data storage area 121 in the FPGA 12, it is possible to flexibly modify the operation within the FPGA 12).
Claims 3, 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Wu, IZENBERG, Putnam, OKADA and CHEN, as applied to claims 1, 8 and 15 respectively above, and further in view of Tanaka (US Pub. 2015/0373225 A1).
As per claim 3, Wu, IZENBERG, Putnam, OKADA and CHEN teach the invention according to claim 1 above. OKADA further teaches wherein the task parameters include an indication of an accelerator image to be used in performance of the second requested task (OKADA, Fig. 2, application #2 (as second requested task); Fig. 10, FPGA#1, FPGA#2; [0064] lines 1-3, The application #1 and the application #2 issue a usage request 510 to the FPGA management unit; also see Fig. 4, 1101 and 1101 FPGA configuration information and data for application 1 and application 2; [0066] lines 2-14, the usage request 510 may include an address of the FPGA configuration data (as accelerator image) managed by the request source application. In particular, for example, when the application #1 is the request source, the usage request 510 may include a configuration address (information indicating the address of the FPGA configuration data #1) corresponding to the application #1. When the FPGA management unit 103 receives such a usage request 510, if the FPGA 12 is not in use, the FPGA management unit 103 loads the FPGA configuration data #1 into the configuration data storage area 121 of the FPGA 12 from the configuration address (address of the memory 11) indicated by the received usage request 510; also see [0003] lines 1-3, load FPGA configuration data in the FPGA at the time of device power-on, and use it as dedicated hardware; [0036] lines 2-4, loading the FPGA configuration data into a configuration data storage area 121 in the FPGA 12, it is possible to flexibly modify the operation within the FPGA 12).
Wu, IZENBERG, Putnam, OKADA and CHEN fail to specifically teach wherein to determine the destination FPGA includes to determine the destination FPGA based on an indication that the destination FPGA has space available for the accelerator image in the destination FPGA.
However, Tanaka teaches wherein to determine the destination FPGA includes to determine the destination FPGA based on an indication that the destination FPGA has space available for the accelerator image in the destination FPGA (Tanaka, Fig. 6, S601, S602 and S603; [0058] lines 2-10, based on information from the reconfiguration management unit 205, the CPU 101 determines whether or not there is a free partial reconfiguration unit (a partial reconfiguration unit not configured with any circuit configuration, that is to say, a partial reconfiguration unit substantially not operating as a circuit) among the partial reconfiguration units of the FPGA 140. If the CPU 101 determined in step S601 that there is a free partial reconfiguration unit among the partial reconfiguration units of the FPGA 140, the procedure moves to step S602 (as determine the destination FPGA based on an indication that the destination FPGA has space available for the accelerator image in the destination FPGA, and then configuring)).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Wu, IZENBERG, Putnam, OKADA and CHEN with Tanaka because Tanaka’s teaching of determining reconfiguration based on the free space available for an accelerator image/configuration would have provided Wu, IZENBERG, Putnam, OKADA and CHEN’s system with the advantage and capability to perform dynamic partial reconfiguration, rather than rewriting the entirety of the configuration memory during dynamic reconfiguration, thereby improving system performance and efficiency (see Tanaka, [0006] “Using this dynamic partial reconfiguring technique enables implementing multiple logic circuits in one region of the FPGA, thus making it possible to realize a logic circuitry in which hardware resources are time-division multiplexed. As a result, various functions corresponding to various applications can be flexibly realized with few hardware resources, while maintaining high hardware operation performance”).
As per claim 10, it is a non-transitory computer readable medium claim of claim 3 above. Therefore, it is rejected for the same reason as claim 3 above. In addition, OKADA further teaches wherein the assignment of the first task is based on task parameters that include an indication of an accelerator image to be used in performance of the second requested task (OKADA, Fig. 2, application #2 (as second requested task); Fig. 10, FPGA#1, FPGA#2; [0064] lines 1-3, The application #1 and the application #2 issue a usage request 510 to the FPGA management unit; also see Fig. 4, 1101 and 1101 FPGA configuration information and data for application 1 and application 2; [0066] lines 2-14, the usage request 510 may include an address of the FPGA configuration data (as accelerator image) managed by the request source application. In particular, for example, when the application #1 is the request source, the usage request 510 may include a configuration address (information indicating the address of the FPGA configuration data #1) corresponding to the application #1. When the FPGA management unit 103 receives such a usage request 510, if the FPGA 12 is not in use, the FPGA management unit 103 loads the FPGA configuration data #1 into the configuration data storage area 121 of the FPGA 12 from the configuration address (address of the memory 11) indicated by the received usage request 510; also see [0003] lines 1-3, load FPGA configuration data in the FPGA at the time of device power-on, and use it as dedicated hardware; [0036] lines 2-4, loading the FPGA configuration data into a configuration data storage area 121 in the FPGA 12, it is possible to flexibly modify the operation within the FPGA 12).
As per claim 17, it is a method claim of claim 3 above. Therefore, it is rejected for the same reason as claim 3 above. In addition, OKADA further teaches wherein the assignment of the first task is based on task parameters that include an indication of an accelerator image to be used in performance of the second requested task (OKADA, Fig. 2, application #2 (as second requested task); Fig. 10, FPGA#1, FPGA#2; [0064] lines 1-3, The application #1 and the application #2 issue a usage request 510 to the FPGA management unit; also see Fig. 4, 1101 and 1101 FPGA configuration information and data for application 1 and application 2; [0066] lines 2-14, the usage request 510 may include an address of the FPGA configuration data (as accelerator image) managed by the request source application. In particular, for example, when the application #1 is the request source, the usage request 510 may include a configuration address (information indicating the address of the FPGA configuration data #1 ) corresponding to the application #1. When the FPGA management unit 103 receives such a usage request 510, if the FPGA 12 is not in use, the FPGA management unit 103 loads the FPGA configuration data #1 into the configuration data storage area 121 of the FPGA 12 from the configuration address (address of the memory 11) indicated by the received usage request 510; also see [0003] lines 1-3, load FPGA configuration data in the FPGA at the time of device power-on, and use it as dedicated hardware; [0036] lines 2-4, loading the FPGA configuration data into a configuration data storage area 121 in the FPGA 12, it is possible to flexibly modify the operation within the FPGA 12).
Claims 4, 11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wu, IZENBERG, Putnam, OKADA and CHEN, as applied to claims 1, 8 and 15 respectively above, and further in view of Tanaka (US Pub. 2015/0373225 A1) and Fender et al. (US Patent 9,698,794 B1).
Fender was cited in the previous Office Action.
As per claim 4, Wu, IZENBERG, Putnam, OKADA and CHEN teach the invention according to claim 1 above. OKADA further teaches wherein the task parameters include an indication of an accelerator image to be used in performance of the second requested task (OKADA, Fig. 2, application #2 (as second requested task); Fig. 10, FPGA#1, FPGA#2; [0064] lines 1-3, The application #1 and the application #2 issue a usage request 510 to the FPGA management unit; also see Fig. 4, 1101 and 1101 FPGA configuration information and data for application 1 and application 2; [0066] lines 2-14, the usage request 510 may include an address of the FPGA configuration data (as accelerator image) managed by the request source application. In particular, for example, when the application #1 is the request source, the usage request 510 may include a configuration address (information indicating the address of the FPGA configuration data #1 ) corresponding to the application #1. When the FPGA management unit 103 receives such a usage request 510, if the FPGA 12 is not in use, the FPGA management unit 103 loads the FPGA configuration data #1 into the configuration data storage area 121 of the FPGA 12 from the configuration address (address of the memory 11) indicated by the received usage request 510; also see [0003] lines 1-3, load FPGA configuration data in the FPGA at the time of device power-on, and use it as dedicated hardware; [0036] lines 2-4, loading the FPGA configuration data into a configuration data storage area 121 in the FPGA 12, it is possible to flexibly modify the operation within the FPGA 12).
Wu, IZENBERG, Putnam, OKADA and CHEN fail to specifically teach wherein to determine the destination FPGA includes to determine the destination FPGA based on space available for the accelerator image in the destination FPGA after a defragmentation of the destination FPGA.
However, Tanaka teaches wherein to determine the destination FPGA includes to determine the destination FPGA based on space available for the accelerator image in the destination FPGA (Tanaka, Fig. 6, S601, S602 and S603; [0058] lines 2-10, based on information from the reconfiguration management unit 205, the CPU 101 determines whether or not there is a free partial reconfiguration unit (a partial reconfiguration unit not configured with any circuit configuration, that is to say, a partial reconfiguration unit substantially not operating as a circuit) among the partial reconfiguration units of the FPGA 140. If the CPU 101 determined in step S601 that there is a free partial reconfiguration unit among the partial reconfiguration units of the FPGA 140, the procedure moves to step S602 (as determine the destination FPGA based on an indication that the destination FPGA has space available for the accelerator image in the destination FPGA, and then configuring)).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Wu, IZENBERG, Putnam, OKADA and CHEN with Tanaka because Tanaka’s teaching of determining reconfiguration based on the free space available for an accelerator image/configuration would have provided Wu, IZENBERG, Putnam, OKADA and CHEN’s system with the advantage and capability to perform dynamic partial reconfiguration, rather than rewriting the entirety of the configuration memory during dynamic reconfiguration, thereby improving system performance and efficiency (see Tanaka, [0006] “Using this dynamic partial reconfiguring technique enables implementing multiple logic circuits in one region of the FPGA, thus making it possible to realize a logic circuitry in which hardware resources are time-division multiplexed. As a result, various functions corresponding to various applications can be flexibly realized with few hardware resources, while maintaining high hardware operation performance”).
Wu, IZENBERG, Putnam, OKADA, CHEN and Tanaka fail to specifically teach that the space available is determined after a defragmentation of the destination FPGA.
However, Fender teaches that the space available is determined after a defragmentation of the destination FPGA (Fender, Fig. 3, defragment for FPGA; Col 2, lines 20-21, region defragmentation 300 of a running virtualized FPGA system using live region migration within the FPGA; Col 4, lines 13-20, region defragmentation 300 of a running virtualized FPGA system using live region migration within the FPGA. At 301, which is similar to the fragmented FPGA 206 in FIG. 2 after function A has been released, defragmentation can be implemented by migrating function B to a new region such that the original subregion associated with function B can be released to coalesce with other available subregions).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Wu, IZENBERG, Putnam, OKADA, CHEN and Tanaka with Fender because Fender’s teaching of space availability after a defragmentation process would have provided Wu, IZENBERG, Putnam, OKADA, CHEN and Tanaka’s system with the advantage and capability to perform a defragmentation process, thereby improving system resource utilization and efficiency.
As per claim 11, it is a non-transitory computer readable medium claim of claim 4 above. Therefore, it is rejected for the same reason as claim 4 above. In addition, OKADA further teaches wherein the assignment of the first task is based on task parameters that include an indication of an accelerator image to be used in performance of the second requested task (OKADA, Fig. 2, application #2 (as second requested task); Fig. 10, FPGA#1, FPGA#2; [0064] lines 1-3, The application #1 and the application #2 issue a usage request 510 to the FPGA management unit; also see Fig. 4, 1101 and 1101 FPGA configuration information and data for application 1 and application 2; [0066] lines 2-14, the usage request 510 may include an address of the FPGA configuration data (as accelerator image) managed by the request source application. In particular, for example, when the application #1 is the request source, the usage request 510 may include a configuration address (information indicating the address of the FPGA configuration data #1 ) corresponding to the application #1. When the FPGA management unit 103 receives such a usage request 510, if the FPGA 12 is not in use, the FPGA management unit 103 loads the FPGA configuration data #1 into the configuration data storage area 121 of the FPGA 12 from the configuration address (address of the memory 11) indicated by the received usage request 510; also see [0003] lines 1-3, load FPGA configuration data in the FPGA at the time of device power-on, and use it as dedicated hardware; [0036] lines 2-4, loading the FPGA configuration data into a configuration data storage area 121 in the FPGA 12, it is possible to flexibly modify the operation within the FPGA 12).
As per claim 18, it is a method claim of claim 4 above. Therefore, it is rejected for the same reason as claim 4 above. In addition, OKADA further teaches wherein the assignment of the first task is based on task parameters that include an indication of an accelerator image to be used in performance of the second requested task (OKADA, Fig. 2, application #2 (as second requested task); Fig. 10, FPGA#1, FPGA#2; [0064] lines 1-3, The application #1 and the application #2 issue a usage request 510 to the FPGA management unit; also see Fig. 4, 1101 and 1101 FPGA configuration information and data for application 1 and application 2; [0066] lines 2-14, the usage request 510 may include an address of the FPGA configuration data (as accelerator image) managed by the request source application. In particular, for example, when the application #1 is the request source, the usage request 510 may include a configuration address (information indicating the address of the FPGA configuration data #1 ) corresponding to the application #1. When the FPGA management unit 103 receives such a usage request 510, if the FPGA 12 is not in use, the FPGA management unit 103 loads the FPGA configuration data #1 into the configuration data storage area 121 of the FPGA 12 from the configuration address (address of the memory 11) indicated by the received usage request 510; also see [0003] lines 1-3, load FPGA configuration data in the FPGA at the time of device power-on, and use it as dedicated hardware; [0036] lines 2-4, loading the FPGA configuration data into a configuration data storage area 121 in the FPGA 12, it is possible to flexibly modify the operation within the FPGA 12).
Claims 5, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wu, IZENBERG, Putnam, OKADA and CHEN, as applied to claims 1, 8 and 15 respectively above, and further in view of KRUGLICK (US Pub. 2015/0339130 A1) and TSAI et al. (US Pub. 2017/0010821 A1).
KRUGLICK and TSAI were cited in the previous Office Action.
As per claim 5, Wu, IZENBERG, Putnam, OKADA and CHEN teach the invention according to claim 1 above. OKADA further teaches wherein the task parameters include an indication of an accelerator image to be used in performance of the second requested task (OKADA, Fig. 2, application #2 (as second requested task); Fig. 10, FPGA#1, FPGA#2; [0064] lines 1-3, The application #1 and the application #2 issue a usage request 510 to the FPGA management unit; also see Fig. 4, 1101 and 1101 FPGA configuration information and data for application 1 and application 2; [0066] lines 2-14, the usage request 510 may include an address of the FPGA configuration data (as accelerator image) managed by the request source application. In particular, for example, when the application #1 is the request source, the usage request 510 may include a configuration address (information indicating the address of the FPGA configuration data #1) corresponding to the application #1. When the FPGA management unit 103 receives such a usage request 510, if the FPGA 12 is not in use, the FPGA management unit 103 loads the FPGA configuration data #1 into the configuration data storage area 121 of the FPGA 12 from the configuration address (address of the memory 11) indicated by the received usage request 510; also see [0003] lines 1-3, load FPGA configuration data in the FPGA at the time of device power-on, and use it as dedicated hardware; [0036] lines 2-4, loading the FPGA configuration data into a configuration data storage area 121 in the FPGA 12, it is possible to flexibly modify the operation within the FPGA 12).
Wu, IZENBERG, Putnam, OKADA and CHEN fail to specifically teach wherein the at least one processor circuit is to store a plurality of accelerator images, wherein the plurality of accelerator images includes the accelerator image to be used in performance of the second requested task; and wherein the network interface circuitry is further to send the accelerator image to the destination FPGA in response to receive the indication of the accelerator image to be used in performance of the second requested task.
However, KRUGLICK teaches wherein the at least one processor circuit is to store a plurality of accelerator images, wherein the plurality of accelerator images includes the accelerator image to be used in performance of the second requested task (KRUGLICK, Fig. 1, 150, 151A-153A (as plurality of accelerator images stored); [0024] lines 1-3, field-programmable logic circuits 121-123 are programmed with hardware accelerator images 151A-153A; [0027] lines 5-12, hardware accelerator packages 151-158 is configured to program a suitable field-programmable logic circuit in CMP 100 with a specific hardware accelerator image, such as hardware accelerator images 151A, 152A, and 153A. Each of hardware accelerator images 151A, 152A, and 153A may be designed for running the computationally intensive software code of a particular software application or family of related applications).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Wu, IZENBERG, Putnam, OKADA and CHEN with KRUGLICK because KRUGLICK’s teaching of accelerator images for running software applications would have provided Wu, IZENBERG, Putnam, OKADA and CHEN’s system with the advantage and capability to designate a specific accelerator image for executing a particular workload, thereby improving system performance and efficiency.
Wu, IZENBERG, Putnam, OKADA, CHEN and KRUGLICK fail to specifically teach wherein the network interface circuitry is further to send the accelerator image to the destination FPGA in response to receive the indication of the accelerator image to be used in performance of the second requested task.
However, TSAI teaches wherein the network interface circuitry is further to send the accelerator image to the destination FPGA in response to receive the indication of the accelerator image to be used in performance of the second requested task (TSAI, Fig. 1, 131 (as accelerator images); [0017] lines 2-15, updating firmware of storage device, the method applied to a firmware updating process between a host device and at least one storage device, the host device comprising a processor and a host storage, the host storage storing at least one first firmware, each first firmware defining an operation behavior, respectively… transmitting (as send) the first firmware to the storage device via the host device; receiving the first firmware and loading the first firmware into the first storage unit via the controller of the storage device; and executing at least one operation action according to the operation behavior defined by the first firmware via the controller of the storage device; [0024] lines 18-22, host device 100 of the present invention may further provide various types of first firmware 131 having different purposes, and transmit those first firmware 131 to the storage device 200, so that the storage device 200 is able to execute various types of application processes; please note: FPGA was taught by Wu).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Wu, IZENBERG, Putnam, OKADA, CHEN and KRUGLICK with TSAI because TSAI’s teaching of sending the firmware (as accelerator image) to a destination based on the updating would have provided Wu, IZENBERG, Putnam, OKADA, CHEN and KRUGLICK’s system with the advantage and capability to process different types of application processes, thereby improving system performance and efficiency.
As per claim 12, it is a non-transitory computer readable medium claim of claim 5 above. Therefore, it is rejected for the same reason as claim 5 above. In addition, OKADA further teaches wherein the assignment of the first task is based on task parameters that include an indication of an accelerator image to be used in performance of the second requested task (OKADA, Fig. 2, application #2 (as second requested task); Fig. 10, FPGA#1, FPGA#2; [0064] lines 1-3, The application #1 and the application #2 issue a usage request 510 to the FPGA management unit; also see Fig. 4, 1101 and 1101 FPGA configuration information and data for application 1 and application 2; [0066] lines 2-14, the usage request 510 may include an address of the FPGA configuration data (as accelerator image) managed by the request source application. In particular, for example, when the application #1 is the request source, the usage request 510 may include a configuration address (information indicating the address of the FPGA configuration data #1 ) corresponding to the application #1. When the FPGA management unit 103 receives such a usage request 510, if the FPGA 12 is not in use, the FPGA management unit 103 loads the FPGA configuration data #1 into the configuration data storage area 121 of the FPGA 12 from the configuration address (address of the memory 11) indicated by the received usage request 510; also see [0003] lines 1-3, load FPGA configuration data in the FPGA at the time of device power-on, and use it as dedicated hardware; [0036] lines 2-4, loading the FPGA configuration data into a configuration data storage area 121 in the FPGA 12, it is possible to flexibly modify the operation within the FPGA 12).
As per claim 19, it is a method claim of claim 5 above. Therefore, it is rejected for the same reason as claim 5 above. In addition, OKADA further teaches wherein the assignment of the first task is based on task parameters that include an indication of an accelerator image to be used in performance of the second requested task (OKADA, Fig. 2, application #2 (as second requested task); Fig. 10, FPGA#1, FPGA#2; [0064] lines 1-3, The application #1 and the application #2 issue a usage request 510 to the FPGA management unit; also see Fig. 4, 1101 and 1101 FPGA configuration information and data for application 1 and application 2; [0066] lines 2-14, the usage request 510 may include an address of the FPGA configuration data (as accelerator image) managed by the request source application. In particular, for example, when the application #1 is the request source, the usage request 510 may include a configuration address (information indicating the address of the FPGA configuration data #1 ) corresponding to the application #1. When the FPGA management unit 103 receives such a usage request 510, if the FPGA 12 is not in use, the FPGA management unit 103 loads the FPGA configuration data #1 into the configuration data storage area 121 of the FPGA 12 from the configuration address (address of the memory 11) indicated by the received usage request 510; also see [0003] lines 1-3, load FPGA configuration data in the FPGA at the time of device power-on, and use it as dedicated hardware; [0036] lines 2-4, loading the FPGA configuration data into a configuration data storage area 121 in the FPGA 12, it is possible to flexibly modify the operation within the FPGA 12).
Claims 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wu, IZENBERG, Putnam, OKADA and CHEN, as applied to claims 8 and 15 respectively above, and further in view of ZHANG (US Pub. 2018/0196699 A1).
As per claim 14, Wu, IZENBERG, Putnam, OKADA and CHEN teach the invention according to claim 8 above. CHEN teaches wherein to determine the destination FPGA of a plurality of FPGAs includes to determine the destination FPGA based on use of the accelerator image of at least one of the plurality of FPGAs (CHEN, [0075] lines 1-4, Each request defines a workload to be executed and is accompanied by a policy defining requirements or preferences for the execution of the workload; [0077] lines 1-10, Function category 182 and function type 184 correspond to function category 144 and function 142 of records 130 (FIG. 5). In the depicted example, policy 180 is for a compression function. In some examples, multiple functions may be specified. Thus, as shown, function type 184 may be an array specifying multiple types of functions with which the application is compatible; [0086] lines 1-9, ARO 128 filters the candidate accelerators and function implementations. Specifically, ARO 128 retains candidate accelerators and function implementations that match one another and eliminates unmatched accelerators and function implementations…“matching” accelerators and function implementations are those corresponding accelerators and function implementations that are compatible with the same accelerator profile. (as when assigning the second requested task, it is based on the stored indication that the destination FPGA is already configured with the accelerator image (i.e., “matching”))).
Wu, IZENBERG, Putnam, OKADA and CHEN fail to specifically teach that the determination is based on an indication of a last time of use of the accelerator image.
However, ZHANG teaches that the determination is based on an indication of a last time of use of the accelerator image (ZHANG, [0040] lines 1-3, LRU: least recently used algorithm, which is used to select a corresponding operation engine for an operation task; [0066] lines 6-16, one or a plurality of operation engines OE are selected from the plurality of idle operation engines OE based on the LRU algorithm, and the one or a plurality of data processing requests are assigned to the one or the plurality of operation engines OE. If idle operation engines OE do not exist in the plurality of operation engines OE, an operation engine OE is selected from one or a plurality of operation engines OE about to enter the idle state based on the LRU algorithm, and the data processing request is assigned to the selected operation engine OE).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Wu, IZENBERG, Putnam, OKADA and CHEN with ZHANG because ZHANG’s teaching of determining based on the least recently used algorithm would have provided Wu, IZENBERG, Putnam, OKADA and CHEN’s system with the advantage and capability to improve the resource utilization rate and enhance the processing efficiency of a secure communication session in the system (see ZHANG, [0045]).
As per claim 20, it is a method claim of claim 14 above. Therefore, it is rejected for the same reason as claim 14 above. In addition, CHEN teaches wherein the assignment of the second task is based on use of the accelerator image of at least one of a plurality of FPGAs (CHEN, [0075] lines 1-4, Each request defines a workload to be executed and is accompanied by a policy defining requirements or preferences for the execution of the workload; [0077] lines 1-10, Function category 182 and function type 184 correspond to function category 144 and function 142 of records 130 (FIG. 5). In the depicted example, policy 180 is for a compression function. In some examples, multiple functions may be specified. Thus, as shown, function type 184 may be an array specifying multiple types of functions with which the application is compatible; [0086] lines 1-9, ARO 128 filters the candidate accelerators and function implementations. Specifically, ARO 128 retains candidate accelerators and function implementations that match one another and eliminates unmatched accelerators and function implementations…“matching” accelerators and function implementations are those corresponding accelerators and function implementations that are compatible with the same accelerator profile. (as when assigning the second requested task, it is based on the stored indication that the destination FPGA is already configured with the accelerator image (i.e., “matching”))).
Response to Arguments
Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
In the remarks, applicant argues in substance:
(a), The USPTO's August 4, 2025, memorandum on evaluating subject matter eligibility in AI-related technologies reminds Examiners not to expand the "mental process" grouping to encompass limitations that cannot practically be performed in the human mind.
(b), The present claim requires causing re-imaging of FPGAs, storing an indication that a destination FPGA is configured along, storing power usage by the destination FPGA during execution of the first requested task, and assigning later tasks based on the stored indication and the power usage information. These are electronic control operations executed by processors and FPGAs, not mental reasoning or paper-and-pencil steps.
(c), In Example 39, a neural-network training method was found patent-eligible at Prong One because steps such as receiving digital images, extracting facial features, and training the network involved computer processing that "cannot practically be performed in the human mind." Similarly, claim 1's causing of reimaging, storage of an indication that the destination FPGA is configured with an accelerator image, storage of power usage information, and assignment of subsequent tasks based on power usage information and stored indications of whether an accelerator image is already configured in a destination FPGA is beyond human mental capability. Consistent with Example 39, the claim therefore does not recite a mental process or any other abstract idea under Prong One.
(d), Director Squires vacated a Board panel's sua sponte §101 rejection of AI-related claims, and cautioned against overbroad reasoning that risks excluding technological improvements in artificial intelligence…the recited cloud resource manager measures FPGA power usage, stores configuration state, and reuses that configuration to improve performance and energy efficiency.
Examiner respectfully disagrees with Applicant’s arguments for the following reasons:
As to point (a), the instant application is related to assigning tasks to FPGAs, and it has nothing to do with AI-related technologies. Therefore, examiner respectfully disagrees with applicant’s argument that “evaluating subject matter eligibility in AI-related technologies reminds Examiners not to expand the "mental process" grouping to encompass limitations that cannot practically be performed in the human mind”.
As to point (b), in response to applicant’s argument that “The present claim requires causing re-imaging of FPGAs, storing an indication that a destination FPGA is configured along, storing power usage by the destination FPGA during execution of the first requested task, and assigning later tasks based on the stored indication and the power usage information. These are electronic control operations executed by processors and FPGAs, not mental reasoning or paper-and-pencil steps”, examiner respectfully disagrees.
Firstly, applicant mischaracterizes the 101 rejection. Applicant simply recites limitations (i.e., re-imaging of FPGAs, storing an indication that a destination FPGA is configured along, storing power usage) which were not actually evaluated under Step 2A- Prong 1 (i.e., judicial exceptions). In fact, examiner has correctly evaluated each limitation under Step 2A- Prong 1, Step 2A- Prong 2, and Step 2B. Here, examiner clearly evaluated the step of “assigning” based on the stored indication and the power usage information under Step 2A- Prong 1. That is, the human mind can easily judge/evaluate/plan/schedule/assign the first task to an FPGA, and assign/schedule the second task to the FPGA based on the stored indication that the FPGA is already configured with a matching accelerator image, a priority of the second requested task, the power usage, a determination of the frequency of use of different accelerator images, and a determination of whether the power usage information meets a power usage threshold. Therefore, but for the recitation of generic computing components, these steps may be Mental Processes that can be performed in the human mind (including an observation, evaluation, judgment, or opinion).
Secondly, examiner has clearly evaluated the limitation of “cause reimaging of the destination FPGA with the accelerator image” as merely applying the judicial exception or abstract idea (see MPEP 2106.05(f)). That is, the claim does not define any particular machine to “cause” this “reimaging,” other than a generic machine such as the “processor circuit,” and provides no details whatsoever on how the claimed function will occur. Further, the limitation of “cause transmission of an identification of the destination FPGA to a requesting device, the requesting device to communicate with the destination FPGA to cause the destination FPGA to perform the first requested task” is insignificant extra-solution activity (i.e., transmitting data) (see MPEP 2106.05(g)). Furthermore, “cause storage of an indication that the destination FPGA is configured with the accelerator image; cause storage of power usage by the destination FPGA during execution of the first requested task” are insignificant extra-solution activity and merely data storing (see MPEP § 2106.05(g)). These claim limitations, as cited in Step 2A- Prong 2, were also evaluated under Step 2B (i.e., Claim provides an Inventive Concept: No), and examiner has provided additional examples of Court cases supporting the conclusion that “obtain,” “transmission,” and “cause storage” are well-understood, routine, conventional activity under Berkheimer option 2.
Therefore, Applicant’s argument has not been found to be persuasive.
As to point (c), again, applicant’s argument is related to a specific AI example; however, the instant application is related to assigning tasks to FPGAs. Please refer to points (a) and (b) above.
As to point (d), please see points (a) through (c) above. In addition, in response to applicant’s argument that “the recited cloud resource manager measures FPGA power usage, stores configuration state, and reuses that configuration to improve performance and energy efficiency”, examiner respectfully disagrees.
MPEP 2106.05(a) discloses that “It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements. See the discussion of Diamond v. Diehr, 450 U.S. 175, 187 and 191-92, 209 USPQ 1, 10 (1981)) in subsection II, below. In addition, the improvement can be provided by the additional element(s) in combination with the recited judicial exception”. Here, the additional limitations are related to well-understood, routine, conventional activity as supported under Berkheimer option 2 (see point (b) above). And the claim merely recites a basic task assignment/scheduling concept of assigning tasks based on the stored information.
Further, the claimed technical solution (i.e., reusing that configuration to improve performance and energy efficiency) has NOT actually been integrated into the computing system (i.e., under Step 2A- Prong 2). That is, the claimed solution is no more than mentally performing task scheduling/assigning, and the computing system is NOT using the FPGA yet, so how could the performance and energy efficiency be improved if the claim is merely task planning/scheduling? For example, if you are preparing lunch for your kids and assigning foods between them based on some meal plan, do the children become full (i.e., achieving the technical solution of feeding your kids lunch) without the kids actually eating it? The same applies here. The task scheduling/planning/assigning is merely mental thinking; the resources/FPGA are not even utilized, so how does the claimed technical solution improve performance and energy efficiency without actually using it?
For the reasons above, Applicant’s argument has not been found to be persuasive, and therefore the rejections are maintained.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZUJIA XU whose telephone number is (571)272-0954. The examiner can normally be reached M-F 9:30-5:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee J Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZUJIA XU/Examiner, Art Unit 2195