Prosecution Insights
Last updated: April 19, 2026
Application No. 18/090,653

ACCELERATOR OR ACCELERATED FUNCTIONS AS A SERVICE USING NETWORKED PROCESSING UNITS

Non-Final OA: §101, §103, §112
Filed: Dec 29, 2022
Examiner: WOOD, WILLIAM C
Art Unit: 2193
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 1 (Non-Final)

Grant Probability: 74% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 74% (270 granted / 363 resolved); +19.4% vs TC avg — above average
Interview Lift: +21.4% (resolved cases with an interview vs. without)
Avg Prosecution: 2y 10m (typical timeline); 19 applications currently pending
Total Applications: 382 (career history, across all art units)
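The headline figures above are simple ratios, and can be reproduced directly from the counts shown on this dashboard (a quick sketch; the 19.4% delta is taken at face value from the card above):

```python
# Career statistics transcribed from the dashboard above.
granted = 270
resolved = 363

career_allow_rate = granted / resolved   # fraction of resolved cases allowed
print(f"{career_allow_rate:.1%}")        # ≈ 74.4%, displayed as 74%

# The card reports +19.4% vs the Tech Center average, which implies
# a TC-average allowance rate of roughly:
tc_average = career_allow_rate - 0.194
print(f"{tc_average:.1%}")               # ≈ 55.0%
```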

Statute-Specific Performance

§101: 19.9% (-20.1% vs TC avg)
§103: 54.4% (+14.4% vs TC avg)
§102: 7.2% (-32.8% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 363 resolved cases.
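The per-statute deltas can be sanity-checked against the reported rates: subtracting each delta from its rate recovers the Tech Center estimate, and every statute implies the same flat 40.0% figure, consistent with the dashboard using a single TC-average estimate (values transcribed from the table above):

```python
# (examiner rate %, delta vs TC avg %) per statute, from the table above.
stats = {
    "§101": (19.9, -20.1),
    "§103": (54.4, +14.4),
    "§102": (7.2, -32.8),
    "§112": (15.3, -24.7),
}

# Implied Tech Center average per statute: rate minus delta.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% TC estimate
```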

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This Office Action is sent in response to Applicant’s Communication received 12/29/2022 for application number 18/090,653. The Office hereby acknowledges receipt of the following and has placed them of record in the file: Specification, Drawings, Abstract, Oath/Declaration, claims.

3. Claims 1 – 25 are presented for examination.

Claim Objections

4. Claims 15 and 21 are objected to because of the following informalities: the claims recite a “distributed databased” but should recite --distributed database--. Appropriate correction is required.

Claim Rejections - 35 USC § 112

5. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

6. Claims 12, 18 and 24 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. The claims recite the limitation "optimizing the perf/watt or perf/$/watt metric for the workload.” There is insufficient antecedent basis for this limitation in the claim. Additionally, the terms “perf/watt” and “perf/$/watt” are not explicitly defined in the specification and are thus unclear. It is assumed for examination purposes that the terms refer to performance per watt and performance per dollar per watt metrics associated with a workload.

Claim Rejections - 35 USC § 101

7. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

8.
Claims 22 – 25 are directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because the recited machine-readable medium is not explicitly limited in the specification to exclude transitory signals.

9. Claims 1 – 25 are directed to an abstract idea without significantly more. Independent claim 1 recites a system for orchestrating acceleration functions in a network compute mesh, comprising: a memory device configured to store instructions; and a processor subsystem, which when configured by the instructions, is operable to: access a flowgraph, the flowgraph including data producer-consumer relationships between a plurality of tasks in a workload; identify available artifacts and resources to execute the artifacts to complete each of the plurality of tasks, wherein an artifact is an instance of a function to perform a task of the plurality of tasks; determine a configuration assigning artifacts and resources to each of the plurality of tasks in the flowgraph; and schedule, based on the configuration, the plurality of tasks to execute using the assigned artifacts and resources.

The limitations, as drafted, describe a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. Under Step 2A, Prong One, the abstract idea limitations are “identify available artifacts and resources to execute the artifacts to complete each of the plurality of tasks, wherein an artifact is an instance of a function to perform a task of the plurality of tasks” and “determine a configuration assigning artifacts and resources to each of the plurality of tasks in the flowgraph.”
Other limitations, including “access a flowgraph, the flowgraph including data producer-consumer relationships between a plurality of tasks in a workload” and “schedule, based on the configuration, the plurality of tasks to execute using the assigned artifacts and resources,” are considered pre-/post-solution activity for receiving flowgraph information and performing an action (scheduling tasks to execute), which amounts to insignificant extra-solution activity appended to the judicial exception. Thus, these claims are directed to an abstract idea under 35 U.S.C. § 101.

Other than “a system for orchestrating acceleration functions in a network compute mesh, comprising: a memory device configured to store instructions; and a processor subsystem,” there is nothing in the claim elements that precludes the steps from practically being performed in the mind. All of the non-abstract limitations are pre-/post-solution activity for getting, obtaining, manipulating, or displaying data without significantly more. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the components in the determining step are recited at a high level of generality (i.e., as a generic processor performing a generic computer function of receiving information, executing a function and making a decision) such that they amount to no more than mere instructions to apply the exception using a generic computer component.
Additionally, the steps of “access a flowgraph, the flowgraph including data producer-consumer relationships between a plurality of tasks in a workload” and “schedule, based on the configuration, the plurality of tasks to execute using the assigned artifacts and resources” are insignificant extra-solution activity (gathering and manipulating data) under Step 2A, Prong Two, and Step 2B. See Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015) (storing and retrieving information in memory), as noted in MPEP 2106.05(d)(II)(iv). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a computer to perform the noted steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.

Independent claims 16 and 22 are rejected on the same basis as independent claim 1. Additionally, dependent claims 2 – 15, 17 – 21 and 23 – 25 are similarly rejected as being directed to an abstract idea, since these claims either further detail the abstract idea by analyzing/processing the data or the additional elements are insignificant. More specifically, the dependent claims do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception.
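Set aside the eligibility framing for a moment: the quoted limitations describe a concrete pipeline (access a flowgraph, match each task to an artifact/resource pair, fix a configuration, schedule in producer-consumer order). A minimal sketch of that flow, using entirely hypothetical task, artifact, and resource names (nothing here comes from the application itself), with a perf-per-watt tie-break standing in for the optimization objectives recited in claims 12, 18 and 24:

```python
# Hypothetical sketch of the claimed flow: access a flowgraph, identify
# artifacts/resources per task, determine a configuration, then schedule.
from graphlib import TopologicalSorter

# Flowgraph: task -> set of producer tasks it consumes data from.
flowgraph = {"decode": set(), "transform": {"decode"}, "encode": {"transform"}}

# Available artifacts (function instances) with candidate resources and an
# illustrative perf-per-watt score for each (artifact, resource) pairing.
artifacts = {
    "decode":    [("decode_bin", "cpu0", 1.0), ("decode_bit", "fpga0", 3.5)],
    "transform": [("xform_bin", "gpu0", 2.8)],
    "encode":    [("encode_bin", "cpu1", 1.2), ("encode_cgra", "cgra0", 2.1)],
}

# Determine a configuration: pick the highest perf/watt pairing per task.
configuration = {
    task: max(options, key=lambda opt: opt[2])[:2]
    for task, options in artifacts.items()
}

# Schedule: run tasks in producer-consumer order with the assigned pairings.
schedule = [
    (task, *configuration[task])
    for task in TopologicalSorter(flowgraph).static_order()
]
print(schedule)
# [('decode', 'decode_bit', 'fpga0'), ('transform', 'xform_bin', 'gpu0'),
#  ('encode', 'encode_cgra', 'cgra0')]
```

The topological sort enforces the producer-consumer relationships; the scoring step is one of the several alternative objectives the dependent claims list.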
As per claim 2, “wherein the artifact comprises a bitstream to program a field-programmable gate array (FPGA)” recites generic computer components for applying the abstract idea.

As per claim 3, “wherein the artifact comprises an executable file to execute on a central processing unit (CPU)” recites generic computer components for applying the abstract idea.

As per claim 4, “wherein the artifact comprises a binary file to configure a coarse-grained reconfigurable array (CGRA)” recites generic computer components for applying the abstract idea.

As per claim 5, “wherein the resources comprise a central processing unit” recites generic computer components for applying the abstract idea.

As per claim 6, “wherein the resources comprise a network-accessible processing unit” recites generic computer components for applying the abstract idea.

As per claim 7, “wherein the resources comprise a graphics processing unit” recites generic computer components for applying the abstract idea.

As per claim 8, “wherein the resources comprise an application specific integrated circuit (ASIC)” recites generic computer components for applying the abstract idea.

As per claim 9, “wherein the resources comprise a field-programmable gate array (FPGA)” recites generic computer components for applying the abstract idea.

As per claim 10, “wherein the resources comprise a coarse-grained reconfigurable array (CGRA)” recites generic computer components for applying the abstract idea.

As per claims 11, 17 and 23, “wherein determining the configuration comprises: analyzing a service level objective (SLO); and assigning artifacts and resources to each of the plurality of tasks to satisfy the SLO” recites an additional mental process.
As per claims 12, 18 and 24, “wherein determining the configuration comprises performing one or more of: minimizing an amount of data movement between the plurality of tasks and a storage device, minimizing latency of workload execution, maximizing resource utilization for the workload, maximizing capacity of acceleration available resources, minimizing the power consumption of the workload, or optimizing the perf/watt or perf/$/watt metric for the workload” recites an additional mental process.

As per claims 13, 19 and 25, “wherein scheduling the plurality of tasks comprises: communicating from a first network-accessible processing unit to a second network-accessible processing unit via an application programming interface (API), to schedule a task of the plurality of tasks to execute using an artifact executing on a resource managed by the second network-accessible processing unit” recites generic computer components for applying the abstract idea.

As per claims 14 and 20, “wherein scheduling the plurality of tasks comprises: lending or transferring resources from the first network-accessible processing unit to the second network-accessible processing unit for use when executing the artifact” recites generic computer components for applying the abstract idea.

As per claims 15 and 21, “wherein a task of the plurality of tasks produces a data result, which is stored in a distributed databased accessible by at least one other task of the plurality of tasks” recites generic computer components for applying the abstract idea.

Claim Rejections - 35 USC § 103

10. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

11. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

12. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

13. Claims 1, 3, 5 – 8, 12, 16, 18, 22 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Jokinen et al. (U.S. Publication 2015/0268985) (Jokinen hereinafter) in view of Goyal et al. (U.S. Publication 2021/0097108) (Goyal hereinafter).

14.
As per claim 1, Jokinen teaches identify available artifacts and resources to execute the artifacts to complete each of the plurality of tasks, wherein an artifact is an instance of a function to perform a task of the plurality of tasks [“the queue manager 110 provides a frame descriptor to the work scheduler 112 that in turn defines a plurality of tasks to be performed under the direction of task manager 114 … The first task may include a job that is a software operation that the core 120a may perform on its own, The first task may also include a job that makes use of an accelerator such as accelerator 140a,” ¶ 0016; software operation mapped to artifact, accelerator mapped to resource];

determine a configuration assigning artifacts and resources to each of the plurality of tasks in the flowgraph [“The first task may also include a job that makes use of an accelerator such as accelerator 140a. In such case, the core 120a requests use of an accelerator from the task manger 114 and stores the context information for that stage of the task in a context storage buffer in the core 120a,” ¶ 0016; context information mapped to configuration];

and schedule, based on the configuration, the plurality of tasks to execute using the assigned artifacts and resources [“the queue manager 110 provides a frame descriptor to the work scheduler 112 that in turn defines a plurality of tasks to be performed under the direction of task manager 114,” ¶ 0016].
Jokinen does not explicitly disclose but Goyal discloses a system for orchestrating acceleration functions in a network compute mesh, comprising: a memory device configured to store instructions; and a processor subsystem, which when configured by the instructions, is operable to: access a flowgraph, the flowgraph including data producer-consumer relationships between a plurality of tasks in a workload [“Upon parsing the receiving data flow graphs, the control software constructs work units, e.g., in the form of one or more work unit stacks, and configure the DPUs to perform high-speed, chained operations on data flows streaming through the DPU,” ¶ 0109; chained operations suggest task relationships; “A stream is defined as an ordered, unidirectional sequence of computational objects (referred to herein as stream data units generally or, as a specific example, data packets of a packet flow) that can be of unbounded or undetermined length. In a simple example, a stream originates in a producer and terminates at a consumer, and is operated on sequentially,” ¶ 0089].

It would have been obvious to one of ordinary skill in the art, having the teachings of Jokinen and Goyal available before the effective filing date of the claimed invention, to modify the capability of managing and executing jobs on accelerators as taught by Jokinen to include the capability of data flow graph analysis as disclosed by Goyal, thereby providing a mechanism to enhance system efficiency and maintainability by coordinating the use of acceleration resources.

15. As per claim 3, Jokinen and Goyal teach the system of claim 1.
Goyal further teaches wherein the artifact comprises an executable file to execute on a central processing unit (CPU) [“Upon parsing the receiving data flow graphs, the control software constructs work units, e.g., in the form of one or more work unit stacks, and configure the DPUs to perform high-speed, chained operations on data flows streaming through the DPU using, for example, data plane software functions (e.g., library 126 of data plane 122) executable by internal processor cores 140 and/or hardware accelerators 146 of the DPU,” ¶ 0109]. It would have been obvious to one of ordinary skill in the art, having the teachings of Jokinen and Goyal available before the effective filing date of the claimed invention, to modify the capability of managing and executing jobs on accelerators as taught by Jokinen to include the capability of data flow graph analysis as disclosed by Goyal, thereby providing a mechanism to enhance system efficiency and maintainability by coordinating the use of acceleration resources.

16. As per claim 5, Jokinen and Goyal teach the system of claim 1. Jokinen further teaches wherein the resources comprise a central processing unit [“Referring to FIG. 1, a data processing system 100, such as an all in one processor data processing system, is shown. The data processing system 100 includes a queue manager 110, a work scheduler 112 coupled to the queue manager 110, a task manager 114 coupled to the work scheduler 112, at least one core 120 coupled to task manager 114, at least one accelerator 140 coupled to task manager 114, a platform interconnect 144 coupled to cores 120, a memory 146 coupled to platform interconnect 144, and an input/output processor (IOP) 142 coupled to memory 146,” ¶ 0013; core mapped to CPU].

17. As per claim 6, Jokinen and Goyal teach the system of claim 1.
Goyal further teaches wherein the resources comprise a network-accessible processing unit [“this disclosure describes various example implementations in which the DPUs ingest data from data sources and write the data in a distributed manner across storage (e.g., local and/or network storage) and in a format that allows efficient access,” ¶ 0006]. It would have been obvious to one of ordinary skill in the art, having the teachings of Jokinen and Goyal available before the effective filing date of the claimed invention, to modify the capability of managing and executing jobs on accelerators as taught by Jokinen to include the capability of data flow graph analysis as disclosed by Goyal, thereby providing a mechanism to enhance system efficiency and maintainability by coordinating the use of acceleration resources.

18. As per claim 7, Jokinen and Goyal teach the system of claim 1. Goyal further teaches wherein the resources comprise a graphics processing unit [“each highly programmable DPU 17 comprises a network interface (e.g., Ethernet) to connect to a network to send and receive stream data units (e.g., data packets), one or more host interfaces (e.g., Peripheral Component Interconnect-Express (PCI-e) to connect to one or more application processors (e.g., a CPU or a graphics processing unit (GPU)) or storage devices (e.g., solid state drives (SSDs)) to send and receive stream data units, and a multi-core processor with two or more of the processing cores executing a run-to-completion data plane operating system on which a software function is invoked for processing one or more of the stream data units, and with one or more of the processing cores executing a multi-tasking control plane operating system,” ¶ 0036].
It would have been obvious to one of ordinary skill in the art, having the teachings of Jokinen and Goyal available before the effective filing date of the claimed invention, to modify the capability of managing and executing jobs on accelerators as taught by Jokinen to include the capability of data flow graph analysis as disclosed by Goyal, thereby providing a mechanism to enhance system efficiency and maintainability by coordinating the use of acceleration resources.

19. As per claim 8, Jokinen and Goyal teach the system of claim 1. Goyal further teaches wherein the resources comprise an application specific integrated circuit (ASIC) [“each DPU 17 may be implemented as one or more application-specific integrated circuits (ASICs) or other hardware and software components, and may be incorporated within network appliances, compute nodes, storage nodes or other devices,” ¶ 0041]. It would have been obvious to one of ordinary skill in the art, having the teachings of Jokinen and Goyal available before the effective filing date of the claimed invention, to modify the capability of managing and executing jobs on accelerators as taught by Jokinen to include the capability of data flow graph analysis as disclosed by Goyal, thereby providing a mechanism to enhance system efficiency and maintainability by coordinating the use of acceleration resources.

20. As per claim 12, Jokinen and Goyal teach the system of claim 1. Jokinen further teaches wherein determining the configuration comprises performing one or more of: minimizing an amount of data movement between the plurality of tasks and a storage device, minimizing latency of workload execution [“Referring to FIG. 4, a flow chart of the operation 100 of a low latency data delivery data processing system 100 is shown. More specifically, the low latency data delivery operation begins with a task being assigned to a core 120 at Step 410.
Next at step 420, the core determines that a job of the task can be completed via an accelerator 140. Next, at step 430, the task manager 114 identifies an accelerator 140 for performing the job. Next at step 440, the accelerator 140 performs and completes the job. The accelerator 140 generates output data including status information at step 450. Next at step 460, the data is written to the workspace 121 via the interconnect 150. During step 470, the core that is awaiting the output data Snoops the workspace 121 via the snoop circuit 310 and determines that the job is complete based upon the Zero byte data beat status information. Next, at step 480, the core continues the executing the task using the output data stored in the workspace 121 from the accelerator 140,” ¶ 0023], maximizing resource utilization for the workload, maximizing capacity of acceleration available resources, minimizing the power consumption of the workload, or optimizing the perf/watt or perf/$/watt metric for the workload.

21. As per claim 16, it is a method claim having similar limitations as cited in claim 1. Thus, claim 16 is also rejected under the same rationale as cited in the rejection of claim 1 above.

22. As per claim 18, it is a method claim having similar limitations as cited in claim 12. Thus, claim 18 is also rejected under the same rationale as cited in the rejection of claim 12 above.

23. As per claim 22, it is a media claim having similar limitations as cited in claim 1. Thus, claim 22 is also rejected under the same rationale as cited in the rejection of claim 1 above.

24. As per claim 24, it is a media claim having similar limitations as cited in claim 12. Thus, claim 24 is also rejected under the same rationale as cited in the rejection of claim 12 above.

25. Claims 2 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Jokinen and Goyal in further view of Burger (U.S. Publication 2016/0380819) (Burger hereinafter).

26.
As per claim 2, Jokinen and Goyal teach the system of claim 1. Jokinen and Goyal do not explicitly disclose but Burger discloses wherein the artifact comprises a bitstream to program a field-programmable gate array (FPGA) [“An image file (e.g., an FPGA programming bitstream) is received over a network through a network interface at an acceleration component (e.g., a hardware accelerator, such as, a Field Programmable Gate Array (FPGA)),” ¶ 0027]. It would have been obvious to one of ordinary skill in the art, having the teachings of Jokinen, Goyal and Burger available before the effective filing date of the claimed invention, to modify the capability of managing and executing jobs on accelerators as disclosed by Jokinen and Goyal to include the capability of configuring acceleration components as taught by Burger, thereby providing a mechanism to enhance system efficiency and maintainability by remotely configuring available acceleration resources.

27. As per claim 9, Jokinen and Goyal teach the system of claim 1. Jokinen and Goyal do not explicitly disclose but Burger discloses wherein the resources comprise a field-programmable gate array (FPGA) [“An image file (e.g., an FPGA programming bitstream) is received over a network through a network interface at an acceleration component (e.g., a hardware accelerator, such as, a Field Programmable Gate Array (FPGA)),” ¶ 0027]. It would have been obvious to one of ordinary skill in the art, having the teachings of Jokinen, Goyal and Burger available before the effective filing date of the claimed invention, to modify the capability of managing and executing jobs on accelerators as disclosed by Jokinen and Goyal to include the capability of configuring acceleration components as taught by Burger, thereby providing a mechanism to enhance system efficiency and maintainability by remotely configuring available acceleration resources.

28. Claims 4 and 10 are rejected under 35 U.S.C.
103 as being unpatentable over Jokinen and Goyal in further view of Sheeley et al. (U.S. Publication 2023/0409233) (Sheeley hereinafter).

29. As per claim 4, Jokinen and Goyal teach the system of claim 1. Jokinen and Goyal do not explicitly disclose but Sheeley discloses wherein the artifact comprises a binary file to configure a coarse-grained reconfigurable array (CGRA) [“Application programs such as for artificial intelligence and machine learning may be translated to executable configuration files for coarse-grained reconfigurable architecture (CGRA) processors,” ¶ 0009]. It would have been obvious to one of ordinary skill in the art, having the teachings of Jokinen, Goyal and Sheeley available before the effective filing date of the claimed invention, to modify the capability of managing and executing jobs on accelerators as disclosed by Jokinen and Goyal to include the capability of managing coarse-grained array assignments as taught by Sheeley, thereby providing a mechanism to enhance system efficiency by remotely configuring available resources.

30. As per claim 10, Jokinen and Goyal teach the system of claim 1. Jokinen and Goyal do not explicitly disclose but Sheeley discloses wherein the resources comprise a coarse-grained reconfigurable array (CGRA) [“Application programs such as for artificial intelligence and machine learning may be translated to executable configuration files for coarse-grained reconfigurable architecture (CGRA) processors,” ¶ 0009].
It would have been obvious to one of ordinary skill in the art, having the teachings of Jokinen, Goyal and Sheeley available before the effective filing date of the claimed invention, to modify the capability of managing and executing jobs on accelerators as disclosed by Jokinen and Goyal to include the capability of managing coarse-grained array assignments as taught by Sheeley, thereby providing a mechanism to enhance system efficiency by remotely configuring available resources.

31. Claims 11, 17 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Jokinen and Goyal in further view of Chadwell et al. (U.S. Publication 2015/0081893) (Chadwell hereinafter).

32. As per claim 11, Jokinen and Goyal teach the system of claim 1. Jokinen and Goyal do not explicitly disclose but Chadwell discloses wherein determining the configuration comprises: analyzing a service level objective (SLO); and assigning artifacts and resources to each of the plurality of tasks to satisfy the SLO [“the technology enables analysis of storage level data flows at a higher, "logical" level to recommend a particular storage configuration, e.g., to satisfy "service level objectives." Storage operations transiting virtual data storage components can be mirrored or duplicated at a workload analyzer. In various embodiments, the workload analyzer can be a virtual data storage component that receives a duplicated copy of data storage operations, e.g., from a virtual storage appliance or from a different virtual data storage component. The workload analyzer can review contents of network traffic, e.g., data indicating, at a storage layer level, a source, a destination, a type of data, and/or volume of data. As an example, the workload analyzer can determine which application is sending or requesting data, which logical storage volumes are targeted, etc. The workload analyzer can then compare the actual workload to previously specified service level objectives.
The workload analyzer can then determine, e.g., based on statistics or simulations, what storage configuration changes can be made to satisfy the service level objectives,” ¶ 0030]. It would have been obvious to one of ordinary skill in the art, having the teachings of Jokinen, Goyal and Chadwell available before the effective filing date of the claimed invention, to modify the capability of managing and executing jobs on accelerators as disclosed by Jokinen and Goyal to include the capability of managing resources in light of Service Level Objectives as taught by Chadwell, thereby providing a mechanism to enhance system efficiency by remotely configuring and managing available resources according to pre-set objectives.

33. As per claim 17, it is a method claim having similar limitations as cited in claim 11. Thus, claim 17 is also rejected under the same rationale as cited in the rejection of claim 11 above.

34. As per claim 23, it is a media claim having similar limitations as cited in claim 11. Thus, claim 23 is also rejected under the same rationale as cited in the rejection of claim 11 above.

35. Claims 13, 19 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Jokinen and Goyal in further view of Kan et al. (U.S. Publication 2023/0004433) (Kan hereinafter).

36. As per claim 13, Jokinen and Goyal teach the system of claim 1.
Jokinen and Goyal do not explicitly disclose but Kan discloses wherein scheduling the plurality of tasks comprises: communicating from a first network-accessible processing unit to a second network-accessible processing unit via an application programming interface (API), to schedule a task of the plurality of tasks to execute using an artifact executing on a resource managed by the second network-accessible processing unit [“The CPU acceleration stack provides a host-side API for parallel task model division and scheduling as well as an underlying support, and includes a lightweight high-reliability protocol module, an RDC internal memory management module, and an FPGA accelerator driver module,” ¶ 0083]. It would have been obvious to one of ordinary skill in the art, having the teachings of Jokinen, Goyal and Kan available before the effective filing date of the claimed invention, to modify the capability of managing and executing jobs on accelerators as disclosed by Jokinen and Goyal to include the capability of managing accelerator device interfaces as taught by Kan, thereby providing a mechanism to enhance system efficiency and maintainability by facilitating communication via standard APIs.

37. As per claim 19, it is a method claim having similar limitations as cited in claim 13. Thus, claim 19 is also rejected under the same rationale as cited in the rejection of claim 13 above.

38. As per claim 25, it is a media claim having similar limitations as cited in claim 13. Thus, claim 25 is also rejected under the same rationale as cited in the rejection of claim 13 above.

39. Claims 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Jokinen and Goyal in further view of Kan and Cheng et al. (U.S. Patent 11,687,376) (Cheng hereinafter).

40. As per claim 14, Jokinen, Goyal and Kan teach the system of claim 13.
Jokinen, Goyal and Kan do not explicitly disclose but Cheng discloses wherein scheduling the plurality of tasks comprises: lending or transferring resources from the first network-accessible processing unit to the second network-accessible processing unit for use when executing the artifact [“time-share processing tasks within a group of processors, or across multiple groups of processors, to optimize throughput, minimize energy consumption and generated heat. Advantages of policy-based partitioning of DP accelerators into groups include fast partitioning of DP accelerators, flexible scheduling of processing tasks within, or across, groups, time-sharing of DP accelerators and time-sharing of groups,” col. 8, lines 17 – 24; time-sharing of DP accelerators suggests resource lending]. It would have been obvious to one of ordinary skill in the art, having the teachings of Jokinen, Goyal, Kan and Cheng available before the effective filing date of the claimed invention, to modify the capability of managing and executing jobs on accelerators as disclosed by Jokinen, Goyal and Kan to include the capability of sharing resources as taught by Cheng, thereby providing a mechanism to enhance system efficiency by leveraging available system resources.

41. As per claim 20, it is a method claim having similar limitations as cited in claim 14. Thus, claim 20 is also rejected under the same rationale as cited in the rejection of claim 14 above.

42. Claims 15 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Jokinen and Goyal in further view of Lee et al. (U.S. Publication 2020/0372013) (Lee hereinafter).

43. As per claim 15, Jokinen and Goyal teach the system of claim 1.
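The resource-lending limitation of claim 14 resembles a time-shared accelerator pool. A minimal Python sketch under that assumption follows; `ResourcePool` and its `lend` context manager are hypothetical, not drawn from Cheng or the application:

```python
# Hypothetical sketch of time-shared accelerator lending between processing units.
from contextlib import contextmanager

class ResourcePool:
    def __init__(self, accelerators):
        self.free = list(accelerators)

    @contextmanager
    def lend(self, borrower: str):
        """Temporarily transfer an accelerator to another unit, then reclaim it."""
        acc = self.free.pop()
        try:
            yield acc  # the borrower executes its artifact on the lent accelerator
        finally:
            self.free.append(acc)  # lending ends; the resource returns to the owner

pool = ResourcePool(["acc0", "acc1"])
with pool.lend("npu-2") as acc:
    assert acc == "acc1" and len(pool.free) == 1  # resource is on loan
assert len(pool.free) == 2  # resource reclaimed after use
print("ok")
```

The context manager mirrors the claimed lifecycle: the resource is transferred for the duration of the artifact's execution and returned afterward, which is how Cheng's time-sharing maps onto "lending."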
Jokinen and Goyal do not explicitly disclose but Lee discloses wherein a task of the plurality of tasks produces a data result, which is stored in a distributed databased accessible by at least one other task of the plurality of tasks [“each accelerator 100 may share the monitoring information or the load information with each other for efficient batch processing. In this case, each accelerator 100 may share information through P2P (peer-to-peer) communication, or share information through a shared database, to which the accelerator 100 has read/write permission,” ¶ 0041]. It would have been obvious to one of ordinary skill in the art, having the teachings of Jokinen, Goyal and Lee available before the effective filing date of the claimed invention, to modify the capability of managing and executing jobs on accelerators as disclosed by Jokinen and Goyal to include the capability of storing and sharing execution results as taught by Lee, thereby providing a mechanism to enhance system efficiency by providing system access via standard database capabilities.

44. As per claim 21, it is a method claim having similar limitations as cited in claim 15. Thus, claim 21 is also rejected under the same rationale as cited in the rejection of claim 15 above.

Conclusion

45. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM C WOOD whose telephone number is (571)272-5285. The examiner can normally be reached Monday - Friday, 8:00 am - 4:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chat C Do, can be reached at 571-272-3721.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WILLIAM C WOOD/
Examiner, Art Unit 2193

/Chat C Do/
Supervisory Patent Examiner, Art Unit 2193

Prosecution Timeline

Dec 29, 2022
Application Filed
Feb 15, 2023
Response after Non-Final Action
Feb 09, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572370
APPLICATION PLATFORM AND APPLICATION MANAGEMENT METHOD
2y 5m to grant Granted Mar 10, 2026
Patent 12536046
REVERSE LINKAGE OF AUXILIARY RESOURCES TO A RESOURCE LOCATION AND RESOURCE-RECEIVING ENTITY
2y 5m to grant Granted Jan 27, 2026
Patent 12536055
APPARATUS AND METHOD IN WHICH CONTROL FUNCTIONS AND SYNCHRONIZATION EVENTS ARE PERFORMED
2y 5m to grant Granted Jan 27, 2026
Patent 12511169
MESSAGE PARSING TO DETERMINE CROSS-APPLICATION DEPENDENCIES AMONG ACTIONS FROM DIFFERENT APPLICATIONS
2y 5m to grant Granted Dec 30, 2025
Patent 12487869
SYSTEMS AND METHODS FOR CALENDAR SYNCHRONIZATION WITH ENTERPRISE WEB APPLICATIONS
2y 5m to grant Granted Dec 02, 2025
Study what changed in these applications to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
96%
With Interview (+21.4%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 363 resolved cases by this examiner. Grant probability derived from career allow rate.
