Prosecution Insights
Last updated: April 19, 2026
Application No. 18/264,049

Allocating Computational Tasks to Computer Hardware

Non-Final OA: §101, §102, §112
Filed
Aug 02, 2023
Examiner
ROTARU, OCTAVIAN
Art Unit
3624
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Xonai Ltd.
OA Round
1 (Non-Final)
Grant Probability: 28% (At Risk)
OA Rounds: 1-2
To Grant: 4y 2m
With Interview: 67%

Examiner Intelligence

Grants only 28% of cases
Career Allow Rate: 28% (116 granted / 409 resolved; -23.6% vs TC avg)
Strong +39% interview lift
Interview Lift: +38.9% (resolved cases with interview)
Typical timeline: 4y 2m avg prosecution (48 currently pending)
Career history: 457 total applications across all art units

Statute-Specific Performance

§101: 39.2% (-0.8% vs TC avg)
§103: 10.9% (-29.1% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§112: 29.9% (-10.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 409 resolved cases

Office Action

§101 §102 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

DETAILED ACTION

The following NON-FINAL Office action is in response to Application No. 18/264,049, filed on 08/02/2023, and the preliminary amendment made by Applicant and dated 02/06/2024.

Status of Claims

Claims 15 and 16 have been canceled and Claims 17-22 have been newly added by Applicant. Claims 2-7 and 9-14 have been amended by Applicant. Claims 1-14 and 17-22 are currently pending and have been rejected as follows.

Priority

Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

IDS

The information disclosure statement filed on 08/02/2023 complies with the provisions of 37 CFR 1.97, 1.98 and MPEP § 609 and has been considered by the Examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(B) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3, 10, and 19 are rejected under 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention. Claims 3, 10, and 19 each recite, among others: “determining a subset of computational tasks in the preliminary set to be performed by the same one or more instances of hardware”; rendering each of said claims vague and indefinite because there is insufficient antecedent basis for “the same” as in “the same one or more instances of hardware” [bolded emphasis added]. Claims 3, 10, and 19 are recommended to be amended to each recite, among others: “determining a subset of computational tasks in the preliminary set to be performed by a same one or more instances of hardware”. Clarification and/or correction is/are required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-14 and 17-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea; here, an abstract idea) without significantly more. The claims recite, describe, or set forth computer-aided mental processes as tested per MPEP 2106.04(a)(2) III C. Specifically, as summarized in the preamble of each of independent Claims 1, 8, and 17, the character of the claims as a whole is “allocating computational tasks to computer hardware”. Yet MPEP 2106.04(a)(2) III C #2 is clear that using a computer environment upon which to perform a mental process (here “determining”, “allocating”, etc.) still recites the abstract idea.
Thus here, recitations such as “instances of available computer hardware capable of performing each computational task” (independent Claims 1, 8, 17), “wherein the one or more instances of available computer hardware capable of performing each computational task comprise one or more of a central processing unit, CPU, a graphics processing unit, GPU, a field-programmable gate array, FPGA and a tensor processing unit, TPU” (dependent Claims 2, 9, 18), etc., “wherein the instances of computer hardware to which the computational tasks are allocated comprise one or more of local and cloud-based hardware instances” (dependent Claims 6, 13, 21), and the intended use of “wherein the computational tasks are for training a machine learning algorithm” (dependent Claims 7, 14, 22) would at most represent such a computer environment upon which the abstract or computer-aided mental “determining”, “allocating”, etc. are being performed. For example, “constructing a graph comprising a plurality of nodes and edges, each node representing a respective computational task and each edge representing a data flow between computational tasks” (independent Claims 1, 8, 17) could have been practically performed by physical aids such as pen and paper, or even computer-aided CAD.
In a similar vein, “determining one or more instances of available computer hardware capable of performing each computational task” (independent Claims 1, 8, 17) and “determining a subset of computational tasks in the preliminary set to be performed by the same one or more instances of hardware” (dependent Claims 3, 10, 19) represent mental or computer-aided evaluation, while “allocating each computational task to one or more of the one or more instances of computer hardware determined for that computational task such that a data bandwidth between the one or more instances of computer hardware to which each computational task is allocated satisfies a data flow requirement between each computational task” (independent Claims 1, 8, 17), “wherein the instances of computer hardware to which the computational tasks are allocated are chosen to satisfy one or more additional performance parameters” (dependent Claims 4, 11, 20), and “wherein the one or more additional performance parameters include one or more of power efficiency and cost” (dependent Claims 5, 12) represent a cognitive or computer-aided judgment based on the prior evaluation. Yet MPEP 2106.04(a)(2) III is clear that the observation, evaluation, and subsequent judgment recite the abstract idea. It thus follows that here the graph observation, followed by the determination or evaluation and the subsequent allocating judgment, also recites, or at a minimum describes or sets forth, the abstract mental processes executed in a computer environment, as tested per MPEP 2106.04(a)(2) III C #2. In an abundance of caution, the Examiner will more granularly test such computerization in the subsequent steps below. For now, it is clear that, given the preponderance of legal evidence above, the character of the claims as a whole is undeniably abstract.
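For reference, the claim steps quoted above (construct a task graph, determine capable hardware per task, allocate so that inter-hardware bandwidth satisfies each data-flow requirement) can be sketched as a toy program. All task names, hardware types, and bandwidth figures below are hypothetical, and this is an illustrative reading of the claim language only, not the applicant's actual implementation:

```python
from itertools import product

# Hypothetical toy data: capable hardware per task, and the data-flow
# requirement on each edge of the task graph.
tasks = {"t1": {"CPU", "GPU"}, "t2": {"GPU"}, "t3": {"CPU", "FPGA"}}
flows = {("t1", "t2"): 10, ("t2", "t3"): 2}        # required bandwidth per edge
links = {("CPU", "GPU"): 16, ("CPU", "FPGA"): 8,
         ("GPU", "GPU"): 100, ("CPU", "CPU"): 50}  # available link bandwidth

def bandwidth(a, b):
    """Bandwidth between two hardware instances (symmetric lookup)."""
    return links.get((a, b), links.get((b, a), 0))

def allocate(tasks, flows):
    """Exhaustively try assignments of capable hardware to tasks; return one
    where every edge's data-flow requirement is met by the link bandwidth."""
    names = list(tasks)
    for combo in product(*(sorted(tasks[t]) for t in names)):
        alloc = dict(zip(names, combo))
        if all(bandwidth(alloc[u], alloc[v]) >= need
               for (u, v), need in flows.items()):
            return alloc
    return None  # no feasible allocation

print(allocate(tasks, flows))  # {'t1': 'CPU', 't2': 'GPU', 't3': 'CPU'}
```

The exhaustive search is only for clarity; a practical allocator would prune or optimize, but the feasibility check is the same.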
This judicial exception is not integrated into a practical application because, per Step 2A prong two, the additional computer-based elements, individually or in combination, are found to merely apply the above abstract idea [MPEP 2106.05(f)] and/or narrow it to a field of use or technological environment [MPEP 2106.05(h)]. Specifically, here, the level of computerization and automation, even when tested beyond mere computer aids and as additional computer-based elements, would not integrate the abstract exception into a practical application. For example, MPEP 2106.05(f)(2) states that use of computer components to perform economic tasks, and tasks to receive, store and transmit data, represents mere invocation of computers or other machinery merely as a tool to perform a process, which does not integrate the abstract idea into a practical application. Here, the claimed circuitry and operable computer, along with the “instances of computer hardware capable of performing each computational task”, can be argued as computer aids to perform the abstract idea and/or as a computer environment upon which the abstract idea is being performed. Even if such computer elements were now tested as additional elements per MPEP 2106.05(f),(h), they would still represent general-purpose computer components applying an allocation or business method with its underlying mathematical algorithm [MPEP 2106.05(f)(2)(i)], monitoring audit log data executed on a general-purpose computer [MPEP 2106.05(f)(2)(iii)], followed by tailoring information and providing it to the user on such a generic computer [MPEP 2106.05(f)(2)(v)]. They could also be argued as representative of a technological environment upon which the combination of collecting information, analyzing it, and displaying certain results of the collection and analysis is narrowed [MPEP 2106.05(h)(vi)], none of which integrates the abstract exception into a practical application.
This judicial exception is not integrated into a practical application because, as shown above, the additional computer-based elements merely apply the already recited abstract idea [MPEP 2106.05(f)] and/or provide a narrowing of the abstract idea to a field of use or technological environment [MPEP 2106.05(h)]. The Examiner follows MPEP 2106.05(d) II and carries over the MPEP 2106.05(f),(h) findings as a sufficient option for evidence that the additional computer elements also do not provide significantly more, without relying on the conventionality test of MPEP 2106.05(d). Yet, assuming arguendo that additional evidence were now required at Step 2B to demonstrate that the above combination of additional elements is well-understood, routine, and conventional, the Examiner would also point to MPEP 2106.05(d) I. 2. a. Specifically:

- Original Specification, mid-p. 13, reciting at a high level of generality: “processor 201 uses the condensed task graph 605 to construct a hardware graph 606. The hardware graph 606 is the same as the condensed task graph 606 except it further specifies the instance of hardware associate with each node of the condensed task graph 605 determined in accordance with the user's optimisation requirements (as exemplified in Figs. 4 and 5). That is, the hardware graph indicates which available hardware component performs each task of the condensed task graph 605. In constructing the hardware graph, the processor 201 looks up information associated with available hardware nodes (e.g. cloud and local clusters and cores) and available hardware edges (e.g. cloud and local memory bandwidth). This information is stored in the storage medium 203 or an external storage medium (not shown) accessible via communication interface 205, for example”.

- Original Specification, p. 19, last ¶: “Although the present disclosure has been described in connection with some embodiments, it is not intended to be limited to these embodiments.
Additionally, although a feature may appear to be described in connection with particular embodiments, one skilled in the art would recognize that various features of the described embodiments may be combined in any manner suitable to implement the present disclosure”.

In conclusion, Claims 1-14 and 17-22, although directed to statutory categories (“method” or process, “system” or machine, computer program or article), still recite, or at least describe or set forth, the abstract idea (Step 2A prong 1), with their additional computer elements not integrating the abstract idea into a practical application (Step 2A prong 2) or providing significantly more than the abstract idea itself (Step 2B). Thus, Claims 1-14 and 17-22 are ineligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-14 and 17-22 are rejected under 35 U.S.C.
102(a)(1) based upon a public use or sale or other public availability of the invention as disclosed by: Funabashi et al, US 20170005946 A1 hereinafter Funabashi Claims 1,8,17 Funabashi teaches “A computer-implemented method of allocating computational tasks to computer hardware, the method comprising”: / “An information processing apparatus for allocating computational tasks to computer hardware, the apparatus comprising circuitry configured to”: / “A computer program product comprising program code stored in a non-transitory computer readable medium operable on a computer” (Funabashi [0006], [0031]) “configured to” - “constructing a graph comprising a plurality of nodes and edges, each node representing a respective computational task and each edge representing a data flow between computational tasks”; (Funabashi ¶ [0045] 2nd-6th sentences: process group creation unit 103 divides a series of processes based on bandwidth desired for data communication between processes, and creates a process group including processes. In the embodiment, process group creation unit 103 divides series of processes between processes between which necessity bandwidth between processes is minimized, and creates a process group. The process group creation unit 103 creates a process group allocation management table (Fig.10A) for managing allocation of a process group on a node when creating the process group. The details of the process group allocation management table will be described below. 
The bandwidth desired for data communication between processes is bandwidth desired for data communication between nodes for performing each process in a case where 2 successive processes are allocated on different nodes), - “determining one or more instances of available computer hardware capable of performing each computational task” (Funabashi [0046] node determination unit 105, searches node capable of performing the entirety of processes included in the process group from a plurality of nodes on the network, and determines a node on which the process group is allocated, with reference to the resource allocation management table (Fig.6A). The node determination unit 105 allocates the process included in the process group on the determined node. ¶ [0047] information output unit 107 transmits allocation result (for example, node allocation information table) of each process on the node to process arrangement performance server 20. ¶ [0048] Hereinafter, processes performed in the process allocation server 10 will be described in detail according to flow charts of Figs.7,8,and 15 appropriately with reference to other drawings and associated ¶ [0057]-¶ [0067]. For example, ¶ [0058] 2nd - 5th sentences: In S105, the process group creation unit 103 searches a series of processes from last process, and specifies a process of the range which can be performed in the end node. For example, the process group creation unit 103 specifies a process of which the total amount of necessity calculation is equal to or less than the calculation resource of the end node (total amount of necessity calculation ≦ calculation resource of end node), including the last process, from a series of processes. In the network in Fig.9A, the calculation resource of the node 3 that is an end node is “4”. Accordingly, in Fig.9C, the process group creation unit 103 specifies that a process from “end” to “process 3” is a process of the range which can be performed in the end node. 
¶ [0059] last 4 sentences: In Fig.9C, in the process from “end” to “process 3”, the necessity bandwidth between “process 3” that is 1st process and “process 2” that is immediately therebefore is “2”, and is smaller than necessity bandwidth of “10” between “process 3” and “end”. Accordingly, the process group creation unit 103 divides a process between “process 2” and “process 3”, and creates a process group 2 including the last process (see Fig.9D). With this, the process having a large necessity bandwidth between processes in the range which can be performed in the end node is collectively allocated on the end node. At this time, in Fig.11A, the process group creation unit 103 adds “process group 2” to the process group allocation management table. Also, ¶ [0067] noting resource allocation management table illustrated in Fig.14 at a time when the allocation of a process on the start node and the end node is completed. The total amount of necessity calculation of “process 2” and “process 3” not allocated on nodes is “4”, and a plurality of nodes (node 2 and node 3) capable of performing the entirety of “process 2” and “process 3” exist when referring to Fig.14. In this case, the node determination unit 105 compares a communication cost in a case where “process 2” and “process 3” are allocated on node 2 and a communication cost in a case where “process 2” and “process 3” are allocated on the node 3, and allocates “process 2” and “process 3” on a node in which the communication cost is minimized (FIG. 7: step S14). Here, the communication cost between nodes of the node 1 on which a process is allocated and the node 2 is “2”, and the communication cost between nodes of node 4 on which a process is allocated and the node 2 is “17”. Accordingly, the total communication cost in a case of allocating “process 2” and “process 3” on the node 2 is “36” (2×1+17 ×2). 
Meanwhile, since the communication cost between nodes of the node 3 and the node 1 is “12”, and the communication cost between nodes of the node 3 and the node 4 is “7”, the total communication cost in a case of allocating “process 2” and “process 3” on the node 3 is “26” (12×1+7×2). Accordingly, the node determination unit 105 allocates “process 2” and “process 3” on the node 3 in which the total communication cost is minimized (see Fig.13F) ), and - “allocating each computational task to one or more of the one or more instances of computer hardware determined for that computational task such that a data bandwidth between the one or more instances of computer hardware to which each computational task is allocated satisfies a data flow requirement between each computational task” (Funabashi ¶ [0045] 6th sentence: The bandwidth desired for data communication between processes is a bandwidth desired for data communication between nodes for performing each process in a case where two successive processes are allocated on different nodes. For example see Funabashi ¶ [0065] Since the calculation resource of node 1 that is the start node is “3”, a process from “start” to “process 1” can be performed (Fig.8: step S103). Here, in a process from “start” to “process 1”, the necessity bandwidth between “start” and “process 1” is “10”, and is greater than the necessity bandwidth of “1” between “process 1” and “process 2”. In this case, the process group creation unit 103 divides a series of processes between “process 1” and “process 2”, and creates a process group 1 including “start” that is the first process (see Fig.8: step S103 and Fig.13C). Accordingly, the node determination unit 105 allocates the process group 1 on the node 1 that is the start node (see Fig.8: step S104 and Fig.13D). Funabashi ¶ [0066] Meanwhile, since the calculation resource of the node 4 that is an end node is “2”, a process from “end” to “process 4” can be performed (Fig.8: step S105). 
Here, the necessity bandwidth between “process 4” that is 1st process in a direction from “end” to “start” and “process 3” that is a process immediately before “process 4” in a direction from “start” to “end” is “2”, and is smaller than necessity bandwidth of “5” between processes in a process from “end” to “process 4”. In this case, the process group creation unit 103 divides a series of processes between “process 3” and “process 4”, and creates a process group 2 including “end” that is last process (see Fig.8: step S107, Fig.13D). Accordingly, the node determination unit 105 allocates the process group 2 on node 4 that is an end node (Fig.8: step S108, Fig.13E). Funabashi ¶ [0067] The resource allocation management table is configured in Fig.14 at a time when the allocation of a process on the start node and the end node is completed. The total amount of necessity calculation of “process 2” and “process 3” not allocated on nodes is “4”, and a plurality of nodes (node 2 and node 3) capable of performing the entirety of “process 2” and “process 3” exist when referring to Fig.14. In this case, the node determination unit 105 compares a communication cost in a case where “process 2” and “process 3” are allocated on the node 2 and a communication cost in a case where “process 2” and “process 3” are allocated on the node 3, and allocates “process 2” and “process 3” on a node in which the communication cost is minimized (Fig.7: step S14). Here, the communication cost between nodes of the node 1 on which a process is allocated and the node 2 is “2”, and the communication cost between nodes of the node 4 on which a process is allocated and the node 2 is “17”. Accordingly, the total communication cost in a case of allocating “process 2” and “process 3” on the node 2 is “36” (2×1+17 ×2). 
Meanwhile, since the communication cost between nodes of the node 3 and the node 1 is “12”, and the communication cost between nodes of the node 3 and the node 4 is “7”, the total communication cost in a case of allocating “process 2” and “process 3” on the node 3 is “26” (12×1+7×2). Accordingly, the node determination unit 105 allocates “process 2” and “process 3” on the node 3 in which the total communication cost is minimized (see Fig.13F). Claims 2,9,18 Funabashi teaches all the limitations in claims 1,8,17. Furthermore, Funabashi teaches “the one or more instances of available computer hardware capable of performing each computational task comprise one or more of a central processing unit, CPU” (Funabashi ¶ [0003] 1st sentence, [0006] noting processing devices), “a graphics processing unit, GPU, a field-programmable gate array, FPGA and a tensor processing unit, TPU”. Claims 3,10,19 Funabashi teaches all the limitations in claims 1,8,17. Further Funabashi teaches - “obtaining source code”; (Funabashi ¶ [0038] 1st-5th sentences: First, the series process configuration information will be described based on Fig.4A. The series process configuration information, in Fig.4A, includes fields such as a “process name”, a “necessity calculation amount”, and “output side necessity bandwidth”. The name of the process included in a series of processes is stored in the field of “process name”. The information of the amount of calculation desired for performing each process is stored in the field of “necessity calculation amount”. Information of a bandwidth desired for data communication between a process and a process immediately after the process is stored in the field of “output side necessity bandwidth”. ¶ [0039] 5th-8th sentences: The network configuration information in Fig.5A, includes fields such as “node name”, “calculation resource”, and “coupling link”. The name of a node existing on a network is stored in the field of “node name”. 
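The communication-cost comparison quoted above from Funabashi ¶ [0067] (a total of 36 for node 2 versus 26 for node 3) is simple to verify. In the sketch below, the link costs and traffic weights are taken directly from that worked example, while the variable and node names are merely illustrative:

```python
# Link communication costs from Funabashi's Fig. 14 example (¶ [0067]).
link_cost = {("node1", "node2"): 2, ("node4", "node2"): 17,
             ("node1", "node3"): 12, ("node3", "node4"): 7}

def cost(a, b):
    """Symmetric link-cost lookup."""
    return link_cost.get((a, b), link_cost.get((b, a)))

def total_cost(candidate):
    # Traffic weight 1 toward already-placed node 1 and weight 2 toward
    # node 4, mirroring 2x1 + 17x2 = 36 and 12x1 + 7x2 = 26.
    return cost("node1", candidate) * 1 + cost(candidate, "node4") * 2

best = min(["node2", "node3"], key=total_cost)
print(total_cost("node2"), total_cost("node3"), best)  # 36 26 node3
```

As in the reference, the minimum-cost candidate (node 3) receives processes 2 and 3.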
Information of the amount of calculation which can be performed in each node is stored in the “calculation resource” field. Information (name of link in Fig.5A) of a link coupled to each node is stored in “coupling link” field. ¶ [0040] 1st-3rd sentences: in Fig.5B, the communication cost information includes fields such as “name of link”, and “communication cost”. The name of a link existing on a network is stored in the field of “name of link”. Information of the communication cost in a case of communicating through a link is stored in the field of “communication cost”. Funabashi ¶ [0043] 5th-7th sentences: in Fig.6A, resource allocation management table includes fields: “node name”, “calculation resource”, “allocated process”, “remaining calculation resource”. The name of a node is stored in the “node name” field. The information of the amount of calculation which can be performed on the node is stored in the “calculation resource” field. Funabashi ¶ [0044] 3rd-4th sentences: in Fig.6B, the node allocation information table includes fields such as a “process name”, and “allocation node”. The name of a process included in a series of processes is stored in the field of “process name”. ¶ [0056] 6th-7th sentences: The information of a process included in a process group is stored in the “process name” field. The information of a node on which a process group is allocated is stored in the “allocation node” field) - “parsing the source code to determine a preliminary set of computational tasks”; (Funabashi ¶ [0043] 9th-10th sentences: value obtained by subtracting amount of necessity calculation for the allocated process from calculation resource of a node is stored in the “remaining calculation resource” field. That is, the information of the remaining amount of calculation which can be performed on an allocated node is stored in the field of “remaining calculation resource”. 
Funabashi ¶ [0045] 2nd-3rd sentences: The process group creation unit 103 divides a series of processes based on a bandwidth desired for data communication between processes (necessity bandwidth), and creates a process group including one or more processes. the process group creation unit 103 divides a series of processes between processes between which a necessity bandwidth between processes is minimized, and creates a process group. Also see Funabashi ¶ [0055] 3rd-6th sentences: if the necessity bandwidth between processes in the process of the range which can be performed in the start node is smaller than the necessity bandwidth in the last process and the process immediately thereafter, the process of the range which can be performed in the start node is divided between processes between which the necessity bandwidth between processes is minimized. If the necessity bandwidth between the last process in the process of the range which can be performed in the start node and the process immediately thereafter is smaller than the necessity bandwidth between processes in the process of the range which can be performed in the start node, a series of processes is divided between the last process in the process of the range which can be performed in the start node and the process immediately thereafter. In Fig.9B, in the process from “start” to “process 2”, a necessity bandwidth of “1” between “process 1” and “process 2” is smaller than a necessity bandwidth “2” between “process 2” that is the last process of the specified process and “process 3” that is the process immediately thereafter. Thus, process group creation unit 103 divides the process from “start” to “process 2” between “process 1” and “process 2” between which the necessity bandwidth is minimized, and creates a process group 1 including “start” that is the first process (Fig. 9C). 
Funabashi ¶ [0057] 3rd - 6th sentences: At this time, in Fig.10B, the node determination unit 105 stores “node 1” in the field of “allocation node” corresponding to the process group 1 in the process group allocation management table. In addition, node determination unit 105 updates the resource allocation management table (Fig.6A). For example, as illustrated in Fig.10C, the node determination unit 105 stores “start” and “process 1” that are processes included in the process group 1, in the field of “allocated process” corresponding to the node 1. In addition, the node determination unit 105 stores “3” obtained by subtracting the amount of necessity calculation “2” of “process 1” from the calculation resource of “5”, in the field of “remaining calculation resource” corresponding to the node 1. Another example at Funabashi ¶ [0059] 3rd - 6th sentences: In a case where the necessity bandwidth between processes in the process of the range which can be performed in the end node is smaller than the necessity bandwidth between 1st process in the process of the range which can be performed in the end node and the process immediately therebefore, the process group creation unit 103 divides the process of the range which can be performed in the end node between processes between which the necessity bandwidth between processes is minimized. Meanwhile, in a case where the necessity bandwidth between the first process in the process of the range which can be performed in the end node and the process immediately therebefore is smaller than necessity bandwidth between processes in the process of the range which can be performed in the end node, the process group creation unit 103 divides a series of processes between 1st process in the process of the range which can be performed in the end node and the process immediately therebefore. 
In Fig.9C, in the process from “end” to “process 3”, the necessity bandwidth between “process 3” that is the first process and “process 2” that is the process immediately therebefore is “2” and is smaller than the necessity bandwidth of “10” between “process 3” and “end”. Accordingly, the process group creation unit 103 divides a process between “process 2” and “process 3” and creates a process group 2 including the last process (Fig. 9D). Funabashi ¶ [0065] 2nd-4th sentences in a process from “start” to “process 1”, the necessity bandwidth between “start” and “process 1” is “10”, and is greater than the necessity bandwidth of “1” between “process 1” and “process 2”. In this case, the process group creation unit 103 divides a series of processes between “process 1” and “process 2”,and creates a process group 1 including “start” that is 1st process (Fig.8, S103 and Fig.13C). Accordingly, the node determination unit 105 allocates the process group 1 on the node 1 that is the start node (Fig.8,S104, Fig.13D). Funabashi ¶ [0066] 2nd-3rd sentences: the necessity bandwidth between “process 4” that is 1st process in a direction from “end” to “start” and “process 3” that is a process immediately before “process 4” in a direction from “start” to “end” is “2”, and is smaller than the necessity bandwidth of “5” between processes in a process from “end” to “process 4”. In this case, the process group creation unit 103 divides a series of processes between “process 3” and “process 4”, and creates a process group 2 including “end” that is the last process (Fig.8 S107, Fig.13D). 
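Funabashi's splitting rule discussed above (divide the chain of processes at the edge whose necessity bandwidth is smallest, so heavily communicating neighbours stay in one group on one node) can be sketched with the Fig. 9 bandwidths. The list structure and process names are illustrative:

```python
# Chain of processes with the Fig. 9 necessity bandwidths: bw[i] is the
# required bandwidth between chain[i] and chain[i+1].
chain = ["start", "process1", "process2", "process3", "end"]
bw = [10, 1, 2, 10]

def split_at_min(chain, bw):
    """Cut the chain at the edge with the smallest necessity bandwidth."""
    cut = min(range(len(bw)), key=bw.__getitem__)
    return chain[:cut + 1], chain[cut + 1:]

head, tail = split_at_min(chain, bw)
print(head, tail)  # ['start', 'process1'] ['process2', 'process3', 'end']
```

This reproduces Fig. 9C's process group 1 ("start" and "process 1"); applying the same rule from the other end yields the end-node group of Fig. 9D.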
Funabashi ¶ [0072] 1st-2nd sentences, ¶ [0080] 1st-2nd sentences, ¶ [0084] 1st sentence, ¶ [0086] 1st sentence, ¶ [0087] 1st sentence, ¶ [0088] 1st sentence, ¶ [0090] 1st,3rd sentences, ¶ [0091] 5th-6th sentences for similar examples) - “determining a subset of computational tasks in the preliminary set to be performed by the same one or more instances of hardware”; (Funabashi ¶ [0057] 6th-8th sentences: In addition,node determination unit 105 stores “3” obtained by subtracting the amount of necessity calculation “2” of “process 1” from the calculation resource of “5”, in the field of “remaining calculation resource” corresponding to node 1. Further, node determination unit 105 updates the node allocation information table (Fig.6B). For example, in Fig.10D, “node 1” is stored in the field of “allocation node” corresponding to each of “start” and “process 1”. ¶ [0085] 1st-3rd sentences: process group creation unit 103 causes a process having a large bandwidth desired for data communication between processes to be included in the same process group with priority. With this, since the process having the large necessity bandwidth between processes is collectively allocated on one node, the amount of communication data between nodes for performing each process is reduced. Accordingly, a network load is reduced in a case where a series of processes is performed in a distributed manner in a plurality of nodes. Similarly, ¶ [0086] 2nd sentence, ¶ [0088 last sentence, ¶ [0090] last sentence) “and” - “representing the subset of computational tasks as a single node of the graph” (Funabashi ¶ [0042] 2nd sentence: The process group creation unit 103 receives a request for allocating each process included in a series of processes on a node, from the process arrangement performance server 20. Figs. 
9B-F & ¶ [0051], Figs. 13B-F & ¶ [0064]-¶ [0067], Funabashi ¶ [0085] 1st-3rd sentences: the process group creation unit 103 causes a process having a large bandwidth desired for data communication between processes to be included in the same process group with priority. With this, since the process having the large necessity bandwidth between processes is collectively allocated on one node, the amount of communication data between nodes for performing each process is reduced. Thus, a network load is reduced in a case where a series of processes is performed in a distributed manner in a plurality of nodes. Funabashi ¶ [0086] noting another example where the process group creation unit 103 divides a series of processes between processes between which the bandwidth desired for data communication between processes is minimized. With this, since a process having a large necessity bandwidth between processes is collectively allocated on one node and the necessity bandwidth between processes allocated on other nodes is small, the amount of communication data between nodes performing each process is reduced. Accordingly, a network load is reduced in a case where a series of processes is performed in a distributed manner in a plurality of nodes. Funabashi ¶ [0087] another example where the process group creation unit 103 divides the process group between which the bandwidth desired for data communication between processes included in the process group is minimized, and creates a new process group, in a case where there is no node which can perform the entirety of processes included in a process group after division (Fig. 15 S213, Fig. 16E). With this, the process having a large necessity bandwidth between processes is collectively allocated on one node of a range which can be performed in a node on a network.
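The remaining-resource bookkeeping quoted from ¶ [0057] — subtract a process's necessity calculation amount from the node's calculation resource and record the allocation node — amounts to a simple table update. A minimal sketch; the `allocate` helper, the dictionary layout, and the zero demand for “start” are assumptions for illustration, not anything in the record:

```python
# Minimal sketch (assumed helper, not from the record) of the node
# allocation bookkeeping in ¶ [0057]: deduct the process's necessity
# calculation amount from the node and record the allocation node.

def allocate(remaining, allocation, process, node, demand):
    if remaining[node] < demand:
        raise ValueError("insufficient remaining calculation resource")
    remaining[node] -= demand       # update "remaining calculation resource"
    allocation[process] = node      # update the "allocation node" field

remaining = {"node 1": 5}           # node 1 starts with calculation resource 5
allocation = {}
allocate(remaining, allocation, "start", "node 1", demand=0)
allocate(remaining, allocation, "process 1", "node 1", demand=2)
# remaining["node 1"] is now 3, as in ¶ [0057]; both "start" and
# "process 1" map to "node 1", as in Fig. 10D
```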
Similarly, ¶ [0088] last sentence: a series of processes starts at the start node and completes at the end node, and the process having the large necessity bandwidth between processes is collectively allocated on one node. Similarly, ¶ [0090] 4th sentence, Figs. 19A-C, ¶ [0091] 8th sentence).

Claims 4,11,20
Funabashi teaches all the limitations in claims 1,8,17. Further, Funabashi teaches “wherein the instances of computer hardware to which the computational tasks are allocated are chosen to satisfy one or more additional performance parameters” (Funabashi ¶ [0003] 2nd-3rd sentences: By the distributed performance, it is possible to perform a series of processes which may not be performed in one node, or to reduce a performance time desired for performing a series of processes in one node. In addition, since a start node and an end node are fixed in the performance of a series of processes, there is a case where the series of processes is performed in a distributed manner in a plurality of nodes. ¶ [0032] 2nd-4th sentences: The process arrangement performance server 20 requests allocation of each process on the nodes 50-1 to 50-n to the process allocation server 10 in order to perform a series of processes in a distributed manner on the nodes 50-1 to 50-n on the network 80 when receiving a performance request of a series of processes from the client terminal 30).

Claims 5,12
Funabashi teaches all the limitations in claims 4,11. Further, Funabashi teaches “wherein the one or more additional performance parameters include one or more of power efficiency and cost” (Funabashi teaches many examples, starting at Funabashi ¶ [0063] 1st sentence: When proceeding to step S14, the node determination unit 105 allocates a process that is not allocated, on a node in which a communication cost from the start node to the end node is minimized.
Funabashi ¶ [0067] 3rd-7th sentences: noting another example where the node determination unit 105 compares a communication cost in a case where “process 2” and “process 3” are allocated on the node 2 and a communication cost in a case where “process 2” and “process 3” are allocated on the node 3, and allocates “process 2” and “process 3” on a node in which the communication cost is minimized (Fig. 7: step S14). Here, the communication cost between nodes of the node 1 on which a process is allocated and the node 2 is “2”, and the communication cost between nodes of the node 4 on which a process is allocated and the node 2 is “17”. Accordingly, the total communication cost in a case of allocating “process 2” and “process 3” on the node 2 is “36” (2×1+17×2). Meanwhile, since the communication cost between nodes of the node 3 and the node 1 is “12”, and the communication cost between nodes of the node 3 and the node 4 is “7”, the total communication cost in a case of allocating “process 2” and “process 3” on the node 3 is “26” (12×1+7×2). Accordingly, the node determination unit 105 allocates “process 2” and “process 3” on the node 3 in which the total communication cost is minimized (Fig. 13F). Similarly, Funabashi ¶ [0075] 1st, 3rd sentences: When proceeding to S211, the node determination unit 105 allocates a process group on a node in which the communication cost with a node on which a process is allocated is minimized, among nodes on which allocation can be performed. In this case, the node determination unit 105 allocates the process group 3 on the node 2 in which a communication cost with the node (start node: node 1) on which a process is allocated is minimized. Funabashi ¶ [0082] 2nd sentence: the node determination unit 105 allocates the process group 4-1 on the node 2 in which a communication cost with the node 2 that is a node on which “process 1” is allocated is minimized (see FIG. 16F).
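The cost arithmetic in ¶ [0067] can be checked directly: the total communication cost of a candidate placement is the sum of each inter-node link cost multiplied by the number of process-to-process transfers crossing that link, and the candidate with the minimum total is chosen. A small worked check; the `total_cost` helper and the dictionary layout are assumptions for illustration, while the numbers come from the quoted passage:

```python
# Worked check of the ¶ [0067] arithmetic: total communication cost of a
# candidate node = sum of (inter-node link cost) x (transfers on that link).

def total_cost(links):
    return sum(cost * transfers for cost, transfers in links)

candidates = {
    "node 2": total_cost([(2, 1), (17, 2)]),   # 2x1 + 17x2 = 36
    "node 3": total_cost([(12, 1), (7, 2)]),   # 12x1 + 7x2 = 26
}
best = min(candidates, key=candidates.get)
# best is "node 3", the node the determination unit 105 picks in Fig. 13F
```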
Similarly, ¶ [0089]: the node determination unit 105 determines a node on which a process group is allocated based on a communication cost with the node on which a process is allocated, in a case where a plurality of nodes which can perform the entirety of processes included in the process group exist (Fig. 7: S14 and Fig. 15: step S211). For example, the node determination unit 105 allocates a process group on a node in which the communication cost with the node on which a process is allocated is minimized. With this, in a case where a series of processes is performed in a distributed manner in a plurality of nodes, a network load is reduced, and a communication cost is reduced.

Claims 6,13,21
Funabashi teaches all the limitations in claims 1,8,17. Further, Funabashi teaches “wherein the instances of computer hardware to which the computational tasks are allocated comprise one or more of local” (Funabashi ¶ [0031] 3rd sentence: nodes 50-1 to 50-n are coupled to one another through… local area network) “and cloud-based hardware instances” (Funabashi ¶ [0029] provides technology for reducing a network load when a series of processes is performed in a distributed manner in a plurality of nodes. Specifically, per ¶ [0032] 4th sentence: The process arrangement performance server 20 requests allocation of each process on nodes 50-1 to 50-n to the process allocation server 10 to perform a series of processes in a distributed manner on nodes 50-1 to 50-n on the network 80 when receiving a performance request of a series of processes from the client terminal 30).

Claims 7,14,22
Funabashi teaches all the limitations in claims 1,8,17.
Further, Funabashi teaches “wherein the computational tasks are for training a machine learning algorithm”[7] (Funabashi ¶ [0081] 2nd-3rd sentences: the process group creation unit 103 and the node determination unit 105 repeat the processes of step S207 to step S213, until the determination of step S215 is positive, that is, until the entirety of process groups is allocated on a node. That is, in step S207 to step S215, the division of a series of processes is repeated until a process group which can be processed in a node is configured).

Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OCTAVIAN ROTARU, whose telephone number is (571) 270-7950. The examiner can normally be reached from 9AM to 6PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PATRICIA H MUNSON, can be reached at telephone number (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/OCTAVIAN ROTARU/
Primary Examiner, Art Unit 3624
November 19th, 2025

[1] USPTO’s training entitled Focus on Computer/Software-related Claims, dated May 2015, at slides 16-17, 20-21, which cites MPEP 2111.04; thus the expression “wherein the computational tasks are for training a machine learning algorithm” appears to be an example of intended use or intended result, which per the USPTO training above and MPEP 2111.04 could also be argued to carry limited to no patentable weight.
[2] Affinity Labs of Texas, LLC v. DIRECTV, LLC, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016).
[3] Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 223, 110 USPQ2d 1976, 1983 (2014); Gottschalk v. Benson, 409 U.S. 63, 64, 175 USPQ 673, 674 (1972); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).
[4] FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016).
[5] Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1370-71, 115 USPQ2d 1636, 1642 (Fed. Cir. 2015).
[6] Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016).
[7] See note [1]: the same USPTO training and MPEP 2111.04 citation applies to the expression “wherein the computational tasks are for training a machine learning algorithm”.

Prosecution Timeline

Aug 02, 2023
Application Filed
Nov 19, 2025
Non-Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602627
SOLVING SUPPLY NETWORKS WITH DISCRETE DECISIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12555059
System and Method of Assigning Customer Service Tickets
2y 5m to grant Granted Feb 17, 2026
Patent 12547962
GENERATIVE DIFFUSION MACHINE LEARNING FOR RESERVOIR SIMULATION MODEL HISTORY MATCHING
2y 5m to grant Granted Feb 10, 2026
Patent 12450534
HETEROGENEOUS GRAPH ATTENTION NETWORKS FOR SCALABLE MULTI-ROBOT SCHEDULING
2y 5m to grant Granted Oct 21, 2025
Patent 12406213
SYSTEM AND METHOD FOR GENERATING FINANCING STRUCTURES USING CLUSTERING
2y 5m to grant Granted Sep 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
28%
Grant Probability
67%
With Interview (+38.9%)
4y 2m
Median Time to Grant
Low
PTA Risk
Based on 409 resolved cases by this examiner. Grant probability derived from career allow rate.
