DETAILED ACTION
This action is in response to the reply received 01/02/2026. After consideration of applicant's amendments and/or remarks:
Applicant cancels claims 2, 10, and 13.
Examiner maintains the rejection of claims 1, 3-9, 11-12, and 14 under 35 U.S.C. § 101.
Examiner maintains the rejections of claims 1, 3-9, 11-12, and 14 under 35 U.S.C. § 103.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-9, 11-12, and 14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1, 9, and 12
[Step 1] Claims 1, 9, and 12 recite a method, system, and/or medium for performing a process comprising steps of (1) identifying a first computing node and a second computing node, (2) acquiring information related to a task, (3) dynamically allocating the task to one or both of the first and second computing nodes, and (4) displaying a user interface comprising a task queue for each node.
[Step 2A – Prong One] The process recited in claims 1, 9, and 12 is directed to an abstract idea. The recited limitations (1)-(4) are a process that, under its broadest reasonable interpretation, covers mental processes including "observations, evaluations, judgments, and opinions" that "can be performed in the human mind, or by a human using a pen and paper." See MPEP 2106.04(a)(2)(III). The steps of (1)-(4) are drawn to a mental process because they amount to a claim directed to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” and the steps themselves “are recited at a high level of generality such that they could practically be performed in the human mind.” See MPEP 2106.04(a)(2)(III)(A). The step of (1) is accomplished by a human collecting information regarding two computer systems via reading comprehension. The step of (2) is accomplished by a human collecting information regarding a task via reading comprehension. The step of (3) is accomplished by a human arbitrarily applying any analysis to assign the task to one or both computers. The claim recites this ‘dynamic task allocation’ limitation at a high level of generality: there is no recitation as to how the ‘dynamic task allocation’ is performed other than “based on the idle resource information,” so the broadest reasonable interpretation includes any mental process of ‘dynamic allocation based on idle resources’ (e.g., any arbitrary human assignment of a task to idle resources over busy resources). This is accomplished by a human collecting information regarding whether a computer is busy or idle via reading comprehension or visual analysis. If one computer is idle and the other is busy, then the user could mentally choose to assign the task to the idle computer. The step of (4) is a step of displaying certain results of the collection and analysis.
[Step 2A – Prong Two] Claims 1, 9, and 12 do not recite additional elements that integrate the judicial exception into a practical application. The additional elements recited include computer hardware (e.g., processor and memory); these additional elements are merely instructions to implement an abstract idea on a computer. MPEP 2106.04(d). Further, the claim limitations attempt to cover any solution for ‘dynamic allocation,’ without providing a particular solution or way to achieve a desired outcome. See MPEP 2106.05(f)(1). The additional elements that state that the computing nodes are local or cloud based are merely intended uses; there is no recitation of any functions performed differently based on the type of system. These limitations are merely an attempt to link the use of a judicial exception to a particular technological environment or field of use. Accordingly, the identified limitations fail to integrate the judicial exception into a practical application. See MPEP 2106.05(h).
[Step 2B] Claims 1, 9, and 12 do not recite a combination of elements that amount to significantly more than the judicial exception itself. The broadest reasonable interpretation of the process comprising limitations (1)-(4) is a mental process. The additional elements recited are merely instructions to implement the mental process on a computer. Accordingly, these limitations are not enough to qualify as "significantly more" when recited with a judicial exception (i.e., the mental process). MPEP 2106.05(f). Further, these claim limitations fail to improve the functioning of the computer itself, because "a claim whose entire scope can be performed mentally, cannot be said to improve computer technology." See MPEP 2106.05(a)(I).
Claims 3, 11, and 14
The additional step of generating access right information does not integrate the judicial exception into a practical application, nor does it amount to significantly more than the judicial exception, because it only amounts to insignificant extra-solution activity of authorized access output (i.e., checking file permissions). See MPEP 2106.05(g). This additional element does not recite significantly more than a judicial exception, because it is recognized as well-understood, routine, and conventional activity of storing and retrieving information in memory. See MPEP 2106.05(d)(II)(iv).
Claims 4-6
The additional steps of collecting access right information and allocating based on the idle computing resource information do not integrate the judicial exception into a practical application, nor do they amount to significantly more than the judicial exception, because the steps are drawn to an abstract idea as a mental process. This is accomplished by a human collecting information regarding whether they have an account at a computer resource provider. If the user has an account at a first computer resource provider and not at a second computer resource provider, then the user could mentally choose to assign the task to the computer resource provider at which they have an account.
Claims 7-8
The additional step of displaying node information does not integrate the judicial exception into a practical application, nor does it amount to significantly more than the judicial exception, because the step is drawn to an abstract idea as a mental process. This step is a step of displaying certain results of the collection and analysis. Examiner notes that the claimed user interface does not have any claimed user functionality other than displaying information. Accordingly, the user interface is analogous to writing information on paper for ‘display.’
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7-9, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Allen, U.S. Patent No. 11,372,689 B1, in view of Khanna et al., U.S. Patent No. 8,296,419 B1, further in view of Hu et al., U.S. PG-Publication No. 2024/0086249 A1.
Claim 1
Allen discloses a method for [executing workload jobs] performed by a computing device. Allen discloses methods for “cloud bursting capabilities,” wherein “[j]obs and workloads can be seamlessly executed on-premises or on any particular public cloud.” The method provides “user interface tools … for simplified and efficient management of resources, workloads, and conditions across cloud environments.” Allen, 2:3-20.
Allen discloses the method comprising: identifying a first computing node and a second computing node for [the workload jobs]. Allen illustrates an exemplary cloud computing environment 200. Id. at 9:28-29, FIG. 2. Environment 200 comprises clouds 202-210 for providing software services, platform services, and/or infrastructure services by hosting, managing, and providing resources and/or devices for cloud consumers (clouds 202-210 → second computing node). Id. at 9:28-10:13. Environment 200 further comprises “an on-premises site 212” including “a private cloud, branch, network, and/or data center” (on-premises site 212 → first computing node). Id. at 10:14-24. Workload management services “can provision infrastructure resources and/or schedule jobs on the clouds 202-210 for the on-premises site 212,” in order to “provide overflow computing services or resources to the on-premises site 212.” Id. at 10:25-32.
Allen discloses acquiring information related to a first task for [the workload jobs], based on a user input. On-premises site 212 includes “a workload queue 258, which can hold pending and/or processing jobs or requests.” Environment 200 comprises “management services 250” that can “manage, orchestrate and schedule resources and jobs or workloads in the workload queue 258.” Management services 250 include “a workload manager 252 and a resource manager 254.” The workload manager 252 “can manage jobs or workloads submitted for the on-premises site 212” and “identify specific requirements for jobs or workloads … such as resource and/or job workload requirements” (i.e., information related to a workload job). Id. at 11:21-46; see also 26:55-57 (“workload screen includes a create job button 814 that allows a user to create one or more jobs” from a user input).
Allen discloses dynamically allocating the first task to at least one of the first computing node or the second computing node. The resource manager 254 “provisions resources for the jobs of workloads submitted,” and “can reserve, allocate, and/or provision resources dynamically.” Resource manager 254 coordinates with workload manager 252 to “ensure that jobs or workloads … in the workload queue 258 are scheduled and processed by the necessary resources according to the specific requirements … corresponding to such jobs or workloads.” Id. at 11:47-58. Allen illustrates a configuration 400 “for cloud bursting onto multiple clouds 202-210” using “cloud bursting parameters and triggers 402 for sending a cloud bursting request 404 to multi-cloud bursting service 340A in order to burst to one or more of the clouds 202-210.” The parameters and triggers are rules or conditions based on “a backlog threshold, a policy violation threshold, and SLA violation threshold, a capacity threshold, a job or workload request threshold.” In one embodiment, “the trigger may be based on … the likelihoods that the on-premises site 212 will satisfy one or more requirements associated with a job or workload.” Id. at 17:25-18:28. Accordingly, the method can use threshold requirements to dynamically allocate a job either locally (i.e., on-premises site 212; the first computing node) or in a cloud (i.e., clouds 202-210; the second computing node).
Allen discloses displaying a first user interface related to a task queue for [the workload jobs] in an environment where the first computing node and the second computing node exist. Allen illustrates example views 830, 850, and 860 “of graphical user interfaces available to the on-premises site 212 for viewing, monitoring, managing, and configuring jobs or workloads, templates, nodes, clusters, configurations, files, queues, statuses, etc., for the on-premises site 212 and any cloud provider … nodes provisioned from the clouds 202-210 and jobs or workloads processed by such nodes” (i.e., an environment where the first node and the second node exist). Id. at 25:36-43. The user interface view 830 (FIG. 8B) corresponding to a “workload tab” 804 “shows a table of workloads on the on-premises site 212.” Id. at 26:33-40. The user interface view 850 (FIG. 8C) illustrates different jobs (832B) in the workload screen having an eligible status 842C. Depending on “the triggers associated with the jobs 832B and/or on-premises site 212, this can trigger provisioning and bursting onto one or more clouds 202-210 … in order to speed up the processing of these jobs and reduce the queue.” A user may also use the interface to “manually initiate a provisioning/bursting process to one or more clouds (202-210) to reduce the queue or increase performance.” Id. at 26:56-27:17. The user interface view 860 (FIG. 8D) corresponding to a “nodes tab” 808 depicts the nodes for processing jobs, including “a combination of on-premises nodes 262” (i.e., first area displaying local nodes) “and cloud nodes 412 from cloud 202” (i.e., second area displaying cloud nodes). Id. at 27:31-37.
Allen discloses wherein the first computing node includes one or more local computing resources of the user, wherein the second computing node includes one or more cloud computing resources. Environment 200 comprises clouds 202-210 for providing software services, platform services, and/or infrastructure services by hosting, managing, and providing resources and/or devices for cloud consumers (clouds 202-210 → second computing node). Id. at 9:28-10:13. Environment 200 further comprises “an on-premises site 212” including “a private cloud, branch, network, and/or data center” (on-premises site 212 → first computing node). Id. at 10:14-24. Workload management services “can provision infrastructure resources and/or schedule jobs on the clouds 202-210 for the on-premises site 212,” in order to “provide overflow computing services or resources to the on-premises site 212.” Id. at 10:25-32.
Allen discloses wherein the first user interface includes: a first area for displaying … [information corresponding] to each of the one or more local computing resources included in the first computing node; and a second area for displaying … [information corresponding] to each of the one or more cloud computing resources included in the second computing node. The user interface view 860 (FIG. 8D) corresponding to a “nodes tab” 808 depicts the nodes for processing jobs, including “a combination of on-premises nodes 262” (i.e., first area displaying local nodes) “and cloud nodes 412 from cloud 202” (i.e., second area displaying cloud nodes). Id. at 27:31-37.
Allen does not expressly disclose wherein the first user interface includes: a first area for displaying a task queue allocated to each of the one or more local computing resources included in the first computing node; and a second area for displaying a task queue allocated to each of the one or more cloud computing resources included in the second computing node.
Khanna discloses wherein the first user interface includes: a first area for displaying a task queue allocated to each of the one or more local computing resources included in the first computing node; and a second area for displaying a task queue allocated to each of the one or more cloud computing resources included in the second computing node. Khanna discloses a “Distributed Program Execution Service System Manager module … that supports an embodiment of a distributed program execution (‘DPE’) service for executing multiple programs on behalf of multiple customers … of the service.” The DPE service “may provide various computing nodes … and other external computing resources … for use in executing programs for users in a distributed manner.” Khanna, 2:42-67. The DPE service “may provide a GUI that a remote user may interactively use to view status information related to ongoing distributed program execution … and/or to make a distributed program execution modification request,” such as “including to modify various previously specified configuration parameters for an distributed program execution.” Id. at 6:55-7:17. Figure 2A illustrates a “GUI screen 285” comprising status information 210 “that corresponds to the ongoing distributed execution of an example program.” Id. at 13:9-54. The status information 210 “includes various state information … such as to track the status of execution of execution jobs on the multiple computing nodes used.” Each line or entry of status information 210 “corresponds to the performance of a particular operation for a particular execution job on a particular computing nodes, with information being tracked that in this example includes an identification 210a of the computing node, of the execution job 210b, of the operation 210c, of the status of performance 210f … and optionally of various other information” (i.e., displaying a particular queue of tasks allocated to a particular computing node). Id. at 13:55-14:43.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the user interface for allocating jobs to local or cloud resources of Allen to incorporate the user interface for monitoring jobs allocated to nodes in a distributed computing environment as taught by Khanna. One of ordinary skill in the art would be motivated to integrate user interface for monitoring jobs allocated to nodes in a distributed computing environment into Allen, with a reasonable expectation of success, in order to improve performance by enabling a user to modify the distributed program execution “if the … execution is using more computing resources than … otherwise expected,” “if one or more bottleneck exist,” or “if an insufficient quantity of computing nodes of the cluster are available to perform execution.” See Khanna, 2:15-42.
Allen-Khanna does not expressly disclose that the workload jobs are specifically for training an artificial neural network model. Further, Allen-Khanna does not disclose: identifying information related to a task previously allocated to the first computing node or the second computing node; generating idle computing resource information of the first computing node or the second computing node, based on the information related to the task previously allocated to the first computing node or the second computing node; and dynamically allocating a task based on the idle computing resource information.
Hu discloses a method for training an artificial neural network model performed by a computing device. Hu discloses methods “for elastic allocation of resources for deep learning jobs,” that optimize “overall estimated time to completion (ETC) for all deep learning jobs … using a node-based resource allocator to allocate computing resources (e.g., nodes) to a particular deep learning job to meet the ETC for the deep learning job.” The method also provides “an improved user interface enabling users of the elastic training system to specify a range of resources to elastically allocate to the user’s training job, and/or informing users of training time saved through the use of elastic resource allocation.” Hu, ¶ 24. The method is implemented using an “elastic training module 200 configured to provide a training service that trains a deep learning model 214.” Id. at ¶ 64, FIG. 2.
Hu discloses wherein the dynamically allocating of the first task to at least one of the first computing node or the second computing node includes: identifying information related to a task previously allocated to the first computing node or the second computing node. Hu discloses that “user interface 202 is also used to communicate the results of a completed training job, and/or the current progress of an ongoing or queued training job.” Id. at ¶ 66; see also ¶ 102 (when job 624 completes “its single node is freed up for allocation to another ongoing job”).
Hu discloses generating idle computing resource information of the first computing node or the second computing node, based on the information related to the task previously allocated to the first computing node or the second computing node; and dynamically allocating the first task to at least one of the first computing node or the second computing node, based on the idle computing resource information. Hu discloses that an elastic training system uses a “greedy allocator,” wherein if there are “still idle nodes and training jobs in the job queue,” the allocator “allocates as many nodes as possible to the training job at the front of the job queue.” Id. at ¶ 13. It is called a “greedy allocator,” because “it always tries to utilize the system’s computing resources to their fullest extent, i.e. leave no nodes idle.” Id. at ¶ 17. For example, when “job 2 642 completes … its single node is freed up for allocation to another job” (i.e., the node is idle). Then, job 4 644 “increases its node allocation from 2 to 3” (i.e., dynamically allocating job 4 644 to an identified idle computing node). Id. at ¶ 102.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method for allocating jobs to local or cloud resources of Allen-Khanna to incorporate allocating deep learning training jobs to computing resources as taught by Hu. One of ordinary skill in the art would be motivated to integrate the allocation of deep learning training jobs to computing resources into Allen-Khanna, with a reasonable expectation of success, in order to “(1) improve efficient utilization of computing resources, (2) speed up the overall training time required to complete a given set of training jobs, (3) reduce queueing delay, and (4) improve the user experience when submitting a job profile to the system.” Hu, ¶ 10.
Claim 7
Hu discloses wherein the first area and the second area include one or more third areas for displaying tasks for training an artificial neural network model. Hu discloses that the user interface 202 is “used to communicate the results of a completed training job, and/or the current progress of an ongoing or queued training job.” Hu, ¶ 66. User interface 202 “may generate and send … one or more additional types of UI screens (not shown) while the job is being managed.” These UI screens “may indicate the status of the training job (e.g., position in the job queue 203, estimated time remaining in the job queue 204, ongoing), the ETC of the training job … and/or a total time saved by using the elastic training module 200.” Id. at ¶ 85; see also ¶ 123 (user interface 202 may include “UI screens displaying job progress”).
Claim 8
Allen discloses displaying a second user interface related to selecting one of the first computing node or the second computing node to which the first task is allocated, based on an input of the user identified while displaying the first user interface. The user interface view 830 (FIG. 8B) corresponding to a “workload tab” 804 “shows a table of workloads on the on-premises site 212.” Id. at 26:33-40. The user interface view 850 (FIG. 8C) illustrates different jobs (832B) in the workload screen having an eligible status 842C. Both interface views 830 and 850 (i.e., the first user interface) contain a selectable “nodes tab” 808 for receiving a user input.
Allen discloses wherein the second user interface includes a fourth area for displaying a selection area corresponding to the one or more local computing resources included in the first computing node and the one or more cloud computing resources included in the second computing node. Selecting the “nodes tab” displays user interface view 860 (FIG. 8D) for displaying the nodes for processing jobs, including “a combination of on-premises nodes 262” (i.e., first area displaying local nodes) “and cloud nodes 412 from cloud 202” (i.e., second area displaying cloud nodes). Id. at 27:31-37. Interface view 860 includes “columns 810N” comprising “a column detailing the number of cores available for each node (262A-E)” and “a column detailing the CPU utilization for each node (262A-E)” (number of cores and CPU utilization → computing resources included in the node). Id. at 26:21-32, FIG. 8D.
Claim 9
Claim 9 is rejected utilizing the aforementioned rationale for Claim 1; the claim is directed to a medium storing instructions corresponding to the method.
Claim 12
Claim 12 is rejected utilizing the aforementioned rationale for Claim 1; the claim is directed to a system performing the method.
Claims 3-6, 11, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Allen, U.S. Patent No. 11,372,689 B1, in view of Khanna et al., U.S. Patent No. 8,296,419 B1, further in view of Hu et al., U.S. PG-Publication No. 2024/0086249 A1, further in view of Malvankar et al., U.S. PG-Publication No. 2024/0220329 A1.
Claim 3
Malvankar discloses wherein the dynamically allocating of the first task to at least one of the first computing node or the second computing node, based on the idle computing resource information includes: generating access right information related to the first computing node and the second computing node, based on account information of the user; and dynamically allocating the first task to at least one of the first computing node or the second computing node, based on the idle computing resource information and the access right information. Malvankar discloses “methods of using machine learning techniques … to scale computing clusters to accommodate a request for a computational workload or task to be performed on a computing platform (e.g., a cloud infrastructure, such a public, private, or hybrid cloud infrastructure).” Malvankar, ¶ 19. The method can “apply a policy (e.g., learned from training data) that allocates computing resources between multiple competing computational workloads or tasks (associated with other user accounts) and competing computing resources.” Id. at ¶ 24.
Using the policy, “computing resources may be downscaled from certain computational workloads or tasks associated with a lower priority user account than those computational workloads or tasks associated with a higher priority user account.” The policy may “select high-priority user accounts … to receive computing resources over low-priority user accounts.” Id. at ¶¶ 26-27. A user account may “include an account priority that may be ranked on any scale, such as a numerical scale … or as low through high” (account priority → access right information). The priority may be based on “a user paying for a subscription for access to the cloud infrastructure (e.g., a thousand dollar a month subscription provides a high priority, a one hundred dollar a month subscription provides a medium priority, a free subscription provides a low priority, and the like).” Id. at ¶ 55. The generated policy may require “that downscaling of clusters associated with low-priority user accounts is more likely than downscaling of clusters associated with high-priority user accounts; that upscaling speed for a particular submitted request is quicker for high-priority user accounts than low-priority user accounts; and so on.” Id. at ¶ 72.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method for allocating jobs to local or cloud resources of Allen-Khanna-Hu to incorporate allocating cloud resources based on user account priority as taught by Malvankar. One of ordinary skill in the art would be motivated to integrate allocating cloud resources based on user account priority into Allen-Khanna-Hu, with a reasonable expectation of success, in order to improve performance using an “optimization objective” that “may scale other clusters associated with applications, computational workloads, or tasks associated with other user accounts based on performance as an objective, without significantly impacting current application, computational workload, or task execution.” Malvankar, ¶ 25.
Claim 4
Malvankar discloses wherein the generating of the access right information related to the first computing node and the second computing node, based on the account information includes:
generating the access right information related to the first computing node and the second computing node, based on pricing plan information of the user. A user account may “include an account priority that may be ranked on any scale, such as a numerical scale … or as low through high” (account priority → access right information). The priority may be based on “a user paying for a subscription for access to the cloud infrastructure (e.g., a thousand dollar a month subscription provides a high priority, a one hundred dollar a month subscription provides a medium priority, a free subscription provides a low priority, and the like).” Malvankar, ¶ 55.
Claim 5
Allen discloses wherein the dynamically allocating of the first task to at least one of the first computing node or the second computing node, based on the idle computing resource information and the access right information includes: when an idle computing resource of the first computing node is equal to or less than a predetermined threshold and the access right information includes an access right of the second computing node, allocating the first task to the second computing node. Allen illustrates a configuration 400 “for cloud bursting onto multiple clouds 202-210” using “cloud bursting parameters and triggers 402 for sending a cloud bursting request 404 to multi-cloud bursting service 340A in order to burst to one or more of the clouds 202-210.” The parameters and triggers are rules or conditions based on “a backlog threshold, a policy violation threshold, and SLA violation threshold, a capacity threshold, a job or workload request threshold.” For example, “a trigger for generating and sending the cloud bursting request 404 can be that a backlog of jobs or workloads received or in the workload queue 258 exceed a threshold.” Id. at 17:25-18:28. The bursting parameters and triggers 402 can include “a threshold availability at the on-premises site 212” (i.e., idle computing resource of the first computing node is equal to or less than a predetermined threshold).
Claim 6
Khanna discloses wherein the information related to the first task includes information related to one or more sub tasks included in the first task, and wherein the dynamically allocating of the first task to at least one of the first computing node or the second computing node, based on the idle computing resource information and the access right information includes: when the sub tasks are multiple, and the access right information includes the access right of the second computing node, allocating the first task to the second computing node. The status information 210 “includes various state information … such as to track the status of execution of execution jobs on the multiple computing nodes used.” Each line or entry of status information 210 “corresponds to the performance of a particular operation for a particular execution job on a particular computing nodes, with information being tracked that in this example includes an identification 210a of the computing node, of the execution job 210b, of the operation 210c, of the status of performance 210f … and optionally of various other information.” The other status information may include “information about dependencies or other inter-relationship between operations (e.g., operation B cannot be executed until after operation A is completed, operations C and D are to be executed simultaneously, etc.)” (i.e., information related to one or more sub tasks). Khanna, 13:55-14:43. For example, the execution of “job J-A includes operations to be performed” (task and sub-tasks → job and operations). The operations can be queued when waiting for output data from other operations. Id. at 14:55-15:18.
Claim 11
Claim 11 is rejected utilizing the aforementioned rationale for Claim 3; the claim is directed to a medium storing instructions corresponding to the method.
Claim 14
Claim 14 is rejected utilizing the aforementioned rationale for Claim 3; the claim is directed to a system performing the method.
Response to Arguments
Applicant's arguments filed 01/02/2026 have been fully considered but they are not persuasive.
Applicant Arguments Regarding 35 USC §101 Claim Rejections
Applicant argues that the claims “clearly go beyond any mental process,” because the disclosure “facilitates a user who is not an expert in the field of machine learning to easily design and train an artificial neural network model.” Rem. 8-9.
Examiner disagrees.
There are no actual ‘designing’ or ‘training’ limitations recited in the claims. The claims acquire idle resource information and dynamically allocate a task related to training a neural network based on the acquired idle information. The allocated task can be a neural network training task, but the claim is directed to “dynamically allocating” a task for training (i.e., the claim recites allocating a training task, but the actual performance of said training task is not required by the claim); and no improvement in the actual training (or designing) of said neural network is recited.
Applicant argues that at least the step of “dynamically allocating the first task to at least one of the first computing node or the second computing node” is not a mere mental process but a specific computing step that is not practically performed in the human mind. Rem. 9.
Examiner disagrees.
A human can use reading comprehension (or visual analysis) of data representative of two computer systems. The data indicate that computer A is busy and computer B is idle. The human can mentally decide that the most efficient way to proceed is to assign a computing task to idle computer B. This is a mental process.
Applicant argues that the claims are statutory because they have “a remarkable effect in that a user who is not an expert … may easily design the training of the artificial neural network” because “tasks for training ... are allocated through a UI/UX of the front end that is continuously updated, so that the user may easily grasp which computing resource is currently available, based on a pricing plan that the user is currently using, and it is possible to dynamically allocate tasks to the on-premises or the cloud environment.” Rem. 9-10.
Examiner disagrees.
To the extent Applicant alleges an ‘improvement’ in Step 2A Prong Two, MPEP 2106.04(d)(1) states that “if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement.” The present claims recite no limitations requiring continuous updating. Dependent claim 4 discusses using a pricing plan, but it cannot be considered an improvement: the pricing plan only determines access rights, and determining access rights is mere data gathering (i.e., insignificant extra-solution activity) that does not meaningfully limit the claim. A qualifying improvement must be reflected in claim limitations that specify how the technology is improved, not merely in a conclusory assertion of better outcomes from applying an abstract idea.
Further, to the extent Applicant alleges an ‘improvement’ in Step 2B, MPEP 2106.05(a) states that “the claim must be evaluated to ensure the claim itself reflects the disclosed improvement in technology.” Determining whether a claim improves technology requires consideration of “the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome.” The alleged improvement is written in the claim as “dynamically allocating the first task … based on the idle computing resource information.” This alleged improvement is not a technological improvement as claimed, but rather an improvement of a decision/selection concept, which is an observation, evaluation, or judgment. The claim fails to recite actual technological improvements for performing the dynamic allocation task.
The MPEP defines the “mental processes” abstract idea grouping as concepts performed in the human mind and explains that examples include observations, evaluations, judgments, and opinions. When a limitation can practically be performed in the human mind, with or without a physical aid such as pen and paper, it falls within the mental processes grouping and the claim recites an abstract idea. MPEP 2106.04(a)(2).
Under the broadest reasonable interpretation, the limitations allocating tasks based on the ‘idle computing resource information’ encompass a human performing the step mentally: a person can observe which computer is idle and choose to run the task on that one. Here, a human can decide to allocate a task to an idle machine practically in the human mind. The limitations regarding local and cloud computing resources are not part of the dynamic allocation process. The claim fails to recite any limitations regarding a technological improvement for deciding whether to use local or cloud resources, because allocating merely “based on the idle computing resource information” is a mental process.
Under the broadest reasonable interpretation, the allocation step does not even consider whether the resources are local or cloud; it merely requires allocating the task to either local resources or cloud resources “based on the idle computing resource information.” Here, a human can mentally decide to allocate a task to an idle local device or an idle cloud device in any arbitrary manner (e.g., random guess allocation).
Accordingly, the alleged improvements are not technological improvements as claimed; the rejections under 35 USC §101 are maintained.
Applicant Arguments Regarding 35 USC §103 Claim Rejections
Applicant argues that “Hu is solving a different problem than that of the current application,” since Hu is “focused solely on cloud computing.” Thus, Hu “lacks proper basis to combine” and there is “no motivation to combine the teaching of Hu” since “Hu is solely teaching resource allocation for machine learning jobs in a cloud-based environment.” Rem. 10-11.
Examiner disagrees.
Applicant argues that since Hu only discloses cloud-based workloads, it cannot be combined with the distributed execution of jobs between local and external computing resources as taught by Allen-Khanna. However, Examiner relies on Allen-Khanna to teach methods of allocating workloads to either local or cloud systems, and merely relies on Hu to teach that there are neural network training workloads that are allocated based on idle computing resource information.
MPEP 2144 states that the “reason or motivation to modify the reference may often suggest what the inventor has done, but for a different purpose or to solve a different problem,” and it is “not necessary that the prior art suggest the combination to achieve the same advantage or result discovered by applicant.”
Hu discloses methods “for elastic allocation of resources for deep learning jobs,” that optimize “overall estimated time to completion (ETC) for all deep learning jobs … using a node-based resource allocator to allocate computing resources (e.g., nodes) to a particular deep learning job to meet the ETC for the deep learning job.” The method also provides “an improved user interface enabling users of the elastic training system to specify a range of resources to elastically allocate to the user’s training job, and/or informing users of training time saved through the use of elastic resource allocation.” Hu, ¶ 24. The method is implemented using an “elastic training module 200 configured to provide a training service that trains a deep learning model 214.” Id. at ¶ 64, FIG. 2.
Examiner is suggesting that the elastic allocation of resources for deep learning jobs for nodes in a cloud network could be modified for a different purpose: the elastic allocation of resources for deep learning jobs for computing nodes generally. Hu teaches a “greedy resource allocator” that “always tries to utilize the system’s computing resources to their fullest extent, i.e. leave no nodes idle.” One of ordinary skill in the art realizes that the same greedy resource allocator could be applied to any node, regardless of the node’s physical location (e.g., on-premises or off-premises).
The only requirements in the claimed allocation step are that (1) the task be allocated to either a local device or a cloud device (i.e., physical location) and (2) the allocation be based on idle computing resource information. The greedy resource allocator works to leave no nodes idle, and need not consider whether those nodes are on-premises or not. The claim recites no specific means for determining task allocation based on whether the resource is local or not; rather, the allocation is just based on “idle computing resource information.” One of ordinary skill in the art realizes that the greedy resource allocator could be modified to work with any group of nodes regardless of their physical location.
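For illustration only (hypothetical structure, not drawn from the Hu disclosure), a greedy allocator of the kind characterized above can be sketched so that the node’s physical location is present in the data but never consulted by the allocation logic:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    on_premises: bool  # physical location; never consulted by the allocator
    busy: bool

def greedy_allocate(job: str, nodes: list) -> str:
    """Assign the job to the first idle node found, leaving no node idle.
    Whether a node is on-premises or in the cloud plays no role here; the
    decision turns only on idle-resource (busy/idle) information."""
    for node in nodes:
        if not node.busy:
            node.busy = True
            return node.name
    raise RuntimeError("no idle nodes available")

nodes = [Node("local-1", True, True), Node("cloud-1", False, False)]
print(greedy_allocate("training-job", nodes))  # -> cloud-1
```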
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method for allocating jobs to local or cloud resources of Allen-Khanna to incorporate allocating deep training jobs to compute resources as taught by Hu. One of ordinary skill in the art would be motivated to integrate allocating deep training jobs to compute resources into Allen-Khanna, with a reasonable expectation of success, in order to “(1) improve efficient utilization of computing resources, (2) speed up the overall training time required to complete a given set of training jobs, (3) reduce queueing delay, and (4) improve the user experience when submitting a job profile to the system.” Hu, ¶ 10.
Applicant argues that, with regard to claims 3, 11, and 14, “none of the references disclose or suggest dynamically allocating the first task to at least one of the local resources or the cloud computing resources, based on the idle computing resource information and the access right information,” because “Malvankar is focused on optimizing application execution in the cloud, not on dynamically allocating tasks to local or cloud computing resources” and “Allen merely discloses spinning up to the cloud for overflow, and not dynamically allocating a first task to the local computing resources or the cloud computing resources.” Rem. 12.
Examiner disagrees.
Applicant’s characterization of the Allen reference is incorrect; Allen is certainly directed to task allocation to local or cloud computing resources. Allen teaches a method of cloud bursting capabilities, wherein “[j]obs and workloads can be seamlessly executed on-premises or on any particular public cloud.” Allen, 2:1-20. A local compute environment “can process overflow traffic, queued jobs, and/or workloads exceeding a threshold to the provision workload environment” (i.e., cloud). Id. at 3:44-47. A resource manager 254 “can reserve, allocate, and/or provision resources for the jobs or workloads submitted” and can “allocate … resources dynamically.” The resource manager 254 “can provision external resources, such as cloud resources, for overflow traffic and process the overflow traffic through the external resources provisioned.” Id. at 11:47-58.
Applicant’s characterization of the Malvankar reference is misleading; Malvankar is certainly directed to task allocation to local or cloud resources. Malvankar is directed to “a method of allocating computing resources between computing clusters according to a policy.” Malvankar, ¶ 3. The method can “scale computing clusters to accommodate a request for a computational workload or task to be performed on a computing platform (e.g., a cloud infrastructure, such [as] a public, private, or hybrid cloud infrastructure).” Id. at ¶ 19. One of ordinary skill in the art recognizes that a hybrid cloud environment includes private (e.g., local, on-premises) and remote nodes (e.g., public cloud, off-premises).
The policy may “select high-priority user accounts … to receive computing resources over low-priority user accounts.” Id. at ¶¶ 26-27. A user account may “include an account priority that may be ranked on any scale, such as a numerical scale … or as low through high” (account priority → access right information). The priority may be based on “a user paying for a subscription for access to the cloud infrastructure (e.g., a thousand dollar a month subscription provides a high priority, a one hundred dollar a month subscription provides a medium priority, a free subscription provides a low priority, and the like).” Id. at ¶ 55. Malvankar discloses allocating a task to a particular local or remote computer resource based on access rights—an “account priority” as designated by a user account.
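For illustration only (the tier names below are hypothetical stand-ins for Malvankar’s paid/free subscription examples, not language from the reference), the priority-based selection described above can be sketched as follows:

```python
# Hypothetical mapping from subscription tier to account priority, standing in
# for Malvankar's example (higher-priced subscription -> higher priority).
PRIORITY = {"premium": 3, "standard": 2, "free": 1}

def select_account(requests: dict) -> str:
    """Grant contested computing resources to the highest-priority account,
    i.e., select high-priority accounts over low-priority accounts."""
    return max(requests, key=lambda account: PRIORITY[requests[account]])

print(select_account({"alice": "free", "bob": "premium"}))  # -> bob
```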
Accordingly, the rejections under 35 USC § 103 are maintained.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK D MILLS whose telephone number is (571)270-3172. The examiner can normally be reached M-F 10-6 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KEVIN YOUNG can be reached at (571)270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FRANK D MILLS/Primary Examiner, Art Unit 2194 February 19, 2026