Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 1-20 are pending.
Examiner Notes
Examiner cites particular paragraphs and/or columns and lines in the references as applied to Applicant’s claims for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. The prompt development of a clear issue requires that the replies of the Applicant meet the objections to and rejections of the claims. Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Authorization for Internet Communications in a Patent Application
Applicant is encouraged to file an Authorization for Internet Communications in a Patent Application form (http://www.uspto.gov/sites/default/files/documents/sb0439.pdf) along with the response to this Office action to facilitate and expedite future communication between Applicant and the examiner. If the form is submitted, Applicant is requested to provide a contact email address in the signature block at the conclusion of the official reply.
Specification Objections
The Specification is objected to because the title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. Appropriate correction is required.
Claim Objections
Claims 6 and 15 are objected to because of minor informalities. Appropriate correction is required.
As per claim 6, in l. 2, “the matching” should be “a corresponding” and in l. 4, “the matching” should be “the corresponding”.
As per claim 15, it has similar limitations as claim 6 and is therefore objected to using the same rationale.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (an abstract idea) without significantly more.
Step 1: The claim is a process, machine, manufacture, or composition of matter:
Claim 1. A service processing method, performed by a management server, the service processing method comprising:
Step 2A Prong One: The claim recites an abstract idea because it includes limitations that can be considered mental processes (concepts performed in the human mind including an observation, evaluation, judgment, and/or opinion). If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the human mind or via pen and paper, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea:
determining a first computing power resource for executing an offline task (abstract idea mental process);
determining N edge servers configured to execute the offline task and on which cloud applications are running based on idle computing power resources of the N edge servers being greater than the first computing power resource, N being an integer greater than or equal to 1 (abstract idea mental process); and
scheduling the offline task to the N edge servers in a distributed mode while ensuring normal operation of the cloud applications, so that for each edge server in the N edge servers, the edge server executes the offline task using the idle computing power resource of the edge server.
Step 2A Prong Two: The abstract idea is not integrated into a practical application because the abstract idea is recited but for generically recited additional computer elements (i.e., data storage, processor, memory, computer readable medium, etc.) which do not add meaningful limitations to the abstract idea, amounting to simply implementing the abstract idea on a generic computer using generic computing hardware and/or software (e.g., generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h))). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The generic computing components are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using the recited generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea:
determining a first computing power resource (generic computing components) for executing an offline task;
determining N edge servers (generic computing components) configured to execute the offline task and on which cloud applications are running based on idle computing power resources of the N edge servers being greater than the first computing power resource, N being an integer greater than or equal to 1; and
scheduling the offline task to the N edge servers in a distributed mode while ensuring normal operation of the cloud applications, so that for each edge server in the N edge servers, the edge server executes the offline task using the idle computing power resource of the edge server.
Step 2B: The claim includes limitations which can be considered extra-solution activity (see MPEP 2106.05(g)) insufficient to amount to significantly more than the abstract idea because the additional limitations only perform at least one of collecting, gathering, displaying, generating, modifying, updating, storing, retrieving, sending, and receiving data/information, which are well-understood, routine, conventional computer functions as recognized by the court decisions listed in MPEP § 2106.05(d)(II). The claim further includes limitations that do not integrate the judicial exception into a practical application because they merely recite the words "apply it" (or an equivalent) with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). Therefore, the claim, and its limitations when considered separately and in combination, is directed to patent ineligible subject matter:
determining a first computing power resource for executing an offline task;
determining N edge servers configured to execute the offline task and on which cloud applications are running based on idle computing power resources of the N edge servers being greater than the first computing power resource, N being an integer greater than or equal to 1; and
scheduling the offline task to the N edge servers in a distributed mode while ensuring normal operation of the cloud applications, so that for each edge server in the N edge servers, the edge server executes the offline task using the idle computing power resource of the edge server (merely reciting the words "apply it" or an equivalent with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using the computer as a tool to perform the abstract idea).
Claim 2. The service processing method according to claim 1, wherein the scheduling comprises:
dividing the offline task into N subtasks based on the idle computing power resources of the N edge servers (abstract idea mental process); and
respectively allocating the N subtasks to the N edge servers, so that each edge server in the N edge servers executes a corresponding subtask (merely reciting the words "apply it" or an equivalent with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using the computer as a tool to perform the abstract idea).
Claim 3. The service processing method according to claim 1, wherein the cloud applications are deployed to M edge servers for execution, the M edge servers are allocated to P edge computing nodes, and each of the P edge computing nodes is deployed with one or more edge servers, M and P being integers greater than or equal to 1 (generic computing components); and wherein determining the N edge servers comprises:
selecting L edge computing nodes from the P edge computing nodes such that node idle computing power resources of the L edge computing nodes are greater than the first computing power resource, the node idle computing power resource being obtained according to the idle computing power resources of the edge servers deployed in the each edge computing node (abstract idea mental process);
determining at least one candidate edge server from edge servers comprised in the L edge computing nodes based on attribute information of each edge server comprised in the L edge computing nodes (abstract idea mental process); and
determining the N edge servers from the at least one candidate edge server according to the idle computing power resource of each edge server in the at least one candidate edge server and the first computing power resource (abstract idea mental process).
Claim 4. The service processing method according to claim 3, wherein the attribute information of the each edge server comprises a working state of each edge server, and the working state comprises an idle state or a busy state (abstract idea mental process); and wherein determining the at least one candidate edge server comprises:
determining an edge server with the working state being the idle state of the edge servers comprised in the L edge computing nodes as a candidate edge server (abstract idea mental process).
Claim 5. The service processing method according to claim 3, wherein the attribute information of the each edge server comprises a server type group to which the each edge server belongs, and the server type group comprises a default whitelist group and an ordinary group (abstract idea mental process); and wherein the determining at least one candidate edge server comprises:
determining an edge server with the server type group being the ordinary group of the edge servers comprised in the L edge computing nodes as the candidate edge server (abstract idea mental process).
Claim 6. The service processing method according to claim 2, further comprising:
monitoring execution of the matching subtask of each edge server (abstract idea mental process); and
reselecting, based on monitoring an exception in the execution of the matching subtask, a new edge server and executing the matching subtask of the new edge server (abstract idea mental process).
Claim 7. The service processing method according to claim 2, wherein a subtask corresponds to an execution duration threshold (abstract idea mental process), and wherein the method further comprises:
receiving timeout prompt information reported by any edge server based on the any edge server not being able to execute the matching subtask, the timeout prompt information indicating that a duration required for the any edge server to execute the matching subtask is greater than an execution duration threshold corresponding to the matching subtask and indicating that a new edge server needs to be reallocated to execute the matching subtask of the new edge server (extra-solution activity of receiving data/information).
Claim 8. The service processing method according to claim 1, wherein the first computing power resource comprises any one or more of the following: a graphics processing unit computing power resource, a central processing unit computing power resource, an internal memory, a network bandwidth, and a network throughput (generic computing components); and
wherein the graphics processing unit computing power resource comprises at least one of the following: floating-point operations per second of a graphics processing unit and operations per second of the graphics processing unit; and the central processing unit computing power resource comprises at least one of the following: floating-point operations per second of a central processing unit and operations per second of the central processing unit (generic computing components).
Claim 9. The service processing method according to claim 1, wherein the determining the first computing power resource comprises:
determining a computation complexity corresponding to a task type of the offline task based on a correspondence between the task type and the computation complexity (abstract idea mental process);
finding at least one matching historical offline task from historical offline tasks according to the determined computation complexity based on a computation complexity corresponding to the at least one matching historical offline task matching the determined computation complexity (abstract idea mental process); and
estimating a computing power resource required for the offline task based on the computing power resource for executing the at least one matching historical offline task, to obtain the first computing power resource required to execute the offline task (abstract idea mental process).
As per claim 10, it has similar limitations as claim 1 and is therefore rejected using the same rationale.
As per claim 11, it has similar limitations as claim 2 and is therefore rejected using the same rationale.
As per claim 12, it has similar limitations as claim 3 and is therefore rejected using the same rationale.
As per claim 13, it has similar limitations as claim 4 and is therefore rejected using the same rationale.
As per claim 14, it has similar limitations as claim 5 and is therefore rejected using the same rationale.
As per claim 15, it has similar limitations as claim 6 and is therefore rejected using the same rationale.
As per claim 16, it has similar limitations as claim 7 and is therefore rejected using the same rationale.
As per claim 17, it has similar limitations as claim 8 and is therefore rejected using the same rationale.
As per claim 18, it has similar limitations as claim 9 and is therefore rejected using the same rationale.
As per claim 19, it has similar limitations as claim 1 and is therefore rejected using the same rationale.
As per claim 20, it has similar limitations as claim 2 and is therefore rejected using the same rationale.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 10, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wilde et al. (US 2022/0291734) (hereinafter Wilde) in view of Liu et al. (US 2014/0165119) (hereinafter Liu) in view of Cai (US 2018/0121240).
As per claim 1, Wilde primarily teaches the invention as claimed including a service processing method, performed by a management server ([0019] global power dispatcher controls allocation of power within a domain for job scheduling), the service processing method comprising:
determining a first computing power resource for executing a task ([0032] job scheduler uses power as a schedulable resource and schedules jobs according to an available power budget);
determining N edge servers ([0029] nodes may be edge processing servers) configured to execute the task based on idle computing power resources of the N edge servers being greater than the first computing power resource, N being an integer greater than or equal to 1 ([0021] job scheduler may schedule jobs on a particular group of nodes that are idle or will be idle when processing of the jobs begins pursuant to a particular job scheduling policy; [0032] schedule jobs based on there being sufficient power available for the power budget for the jobs; [0020] groups of nodes execute various jobs; [0044] a job being concurrently executed by a plurality of nodes); and
scheduling the task to the N edge servers, so that for each edge server in the N edge servers, the edge server executes the task using the idle computing power resource of the edge server ([0021] job scheduler may schedule jobs on a particular group of nodes that are idle or will be idle when processing of the jobs begins pursuant to a particular job scheduling policy; [0032] schedule jobs based on there being sufficient power available for the power budget for the jobs; [0020] groups of nodes execute various jobs; [0044] a job being concurrently executed by a plurality of nodes).
Wilde does not explicitly teach:
an offline task;
a server on which cloud applications are running; and
scheduling the offline task to the N servers in a distributed mode while ensuring normal operation of the cloud applications.
However, Liu teaches:
an offline task ([0179] offline tasks);
a server on which cloud applications are running ([0250] video-on-demand system plays a program for user to start, stop, back, fast-forward, and/or pause a video and [0331]-[0332] cloud-on-demand video file can be watched and played on the client through any device); and
scheduling the offline task to the N servers ([0285] offline tasks are scheduled to one or more offline download servers in a cluster).
Liu and Wilde are both concerned with computer task scheduling and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wilde in view of Liu because it would provide an offline download solution that schedules offline tasks according to the load of each download server to improve utilization of the download servers, and that uses a network-side transcoding server to transcode multimedia so as to save the resources a client would consume in transcoding the multimedia and to improve the multimedia processing efficiency of the client.
Wilde in view of Liu do not explicitly teach:
scheduling the task to the N servers in a distributed mode while ensuring normal operation of the cloud applications.
However, Cai teaches:
scheduling the task to the N servers in a distributed mode ([0008] job scheduling in a distributed system) while ensuring normal operation of the cloud applications ([0137] to ensure the normal operation of the job and improve the self-repairing capability of the task failure, when any task instance in the second task fails to process the execution data, the task instance, corresponding to the execution data, of the first task is scheduled to execute again).
Cai and Wilde are both concerned with computer task scheduling and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wilde in view of Liu in view of Cai because it would provide a distributed system composed of a central node, control nodes, and computing nodes. The central node assigns the tasks and the control nodes schedule the tasks, which reduces the scheduling burden on the central node and improves scheduling efficiency. When scheduling the tasks, after the completion of at least one task instance of the first task in the job, execution of at least one task instance of the second task is scheduled and the execution data is processed. There is no need to wait for all task instances of the first task to finish executing before scheduling a task instance of the second task for execution to conduct data processing. This fully utilizes the cluster resources, improves the resource utilization rate and degree of task concurrency, and reduces the task execution time.
As per claim 10, it has similar limitations as claim 1 and is therefore rejected using the same rationale.
As per claim 19, it has similar limitations as claim 1 and is therefore rejected using the same rationale.
Claims 2, 11, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wilde in view of Liu in view of Cai in view of Sze et al. (US 2021/0360295) (hereinafter Sze).
As per claim 2, Wilde in view of Liu in view of Cai do not explicitly teach wherein the scheduling comprises: dividing the offline task into N subtasks based on the idle computing power resources of the N edge servers; and respectively allocating the N subtasks to the N edge servers, so that each edge server in the N edge servers executes a corresponding subtask.
However, Sze teaches wherein the scheduling comprises: dividing the offline task into N subtasks based on the idle computing power resources of the N edge servers; and respectively allocating the N subtasks to the N edge servers, so that each edge server in the N edge servers executes a corresponding subtask ([0122] split tasks into smaller portions and assign them to idle cloud resources for processing).
Sze and Wilde are both concerned with computer task scheduling and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wilde in view of Liu in view of Cai in view of Sze because it would provide control and management of the processing of data streams such that processing time and resources can be optimized, using processors configured to generate instruction sets for downstream processing of data streams (e.g., video streams). Accounting for inherent or actively introduced processing delays, transmission delays, etc., together with coordinated management and control of resources, may permit a greater range of processing options to be conducted within a given period of time by distributing and allocating activities across cost-efficient distributed resources (e.g., utilizing off-peak availability).
As per claim 11, it has similar limitations as claim 2 and is therefore rejected using the same rationale.
As per claim 20, it has similar limitations as claim 2 and is therefore rejected using the same rationale.
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Wilde in view of Liu in view of Cai in view of Sze in view of Yan et al. (US 2014/0189702) (hereinafter Yan).
As per claim 6, Wilde in view of Liu in view of Cai in view of Sze do not explicitly teach monitoring execution of the matching subtask of each edge server; and reselecting, based on monitoring an exception in the execution of the matching subtask, a new edge server and executing the matching subtask of the new edge server.
However, Yan teaches monitoring execution of the matching subtask of each edge server; and reselecting, based on monitoring an exception in the execution of the matching subtask, a new edge server and executing the matching subtask of the new edge server ([0074] monitor the status of instance resources for any failure associated with the assigned sub-task and reassign the sub-task to an alternate instance resource).
Yan and Wilde are both concerned with computer task scheduling and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wilde in view of Liu in view of Cai in view of Sze in view of Yan because it would provide for automatic model identification and creation through automatically provisioning computing resources from a heterogeneous set of computing resources for purposes of machine learning. This can be accomplished by taking a request from a user, selecting, from a database of models, a subset of models that meet the performance requirements specified in the user's request, and searching for a single best model or best combination of a series of models. The search process is performed by breaking the model space into individual job components consisting of one or more models, each model having multiple individual instances using that model. The division of the user's request into discrete units of work allows the system to leverage multiple computing resources in processing the request. The system leverages many different sources of computing resources, including cloud computing resources from various cloud providers as well as private clouds or internal computing resources, and different types of computing resources, such as those differing in underlying operating system and hardware architecture. The ability to leverage multiple sources and types of computing resources gives the system greater flexibility and computational capacity, and the combination of automation, flexibility, and capacity makes analysis of large search spaces feasible where, before, it was a manual, time-consuming process. The system also includes constraint features that allow a user to customize a request such that it is restricted in what type of computing resources it leverages, or how much computing resources it leverages.
As per claim 15, it has similar limitations as claim 6 and is therefore rejected using the same rationale.
Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Wilde in view of Liu in view of Cai in view of Sze in view of Siddiqui et al. (US 10,554,507).
As per claim 7, Wilde in view of Liu in view of Cai in view of Sze do not explicitly teach wherein a subtask corresponds to an execution duration threshold, and wherein the method further comprises: receiving timeout prompt information reported by any edge server based on the any edge server not being able to execute the matching subtask, the timeout prompt information indicating that a duration required for the any edge server to execute the matching subtask is greater than an execution duration threshold corresponding to the matching subtask and indicating that a new edge server needs to be reallocated to execute the matching subtask of the new edge server.
However, Siddiqui teaches wherein a subtask corresponds to an execution duration threshold, and wherein the method further comprises: receiving timeout prompt information reported by any edge server based on the any edge server not being able to execute the matching subtask, the timeout prompt information indicating that a duration required for the any edge server to execute the matching subtask is greater than an execution duration threshold corresponding to the matching subtask and indicating that a new edge server needs to be reallocated to execute the matching subtask of the new edge server (col. 37, ll. 1-21 reassign sensor to different cluster based on analysis of cluster operability by monitoring the number of timeouts that occur during the communication session between the sensor and the cluster and determining whether the number of timeouts exceeds a timeout threshold e.g., once or over a prescribed period of time, thereby signifying that the cluster is currently unable to adequately support the data submissions level provided by the sensor, resulting in a readjustment of one or more cluster/sensor pairings i.e., the sensor may be re-assigned to a different cluster).
Siddiqui and Wilde are both concerned with computer task scheduling and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wilde in view of Liu in view of Cai in view of Sze in view of Siddiqui because in the case of a notable discrepancy between aggregated data and statistical information (e.g., exceeding a set amount of discrepancy to avoid repeated investigation alerts) or a finding of non-compliance with the service performance level, a subscriber management system can be configured to send an alert to a prescribed network device associated with an administrator of the subscriber site to prompt an investigation as to the discrepancy or non-compliance. As a result, the subscriber management system is able to monitor, in real-time, the activity and health of a sensor and enforce compliance with service guarantees indicated by the service performance level assigned to the customer or the sensor to determine which cluster or clusters is best suited for supporting the sensor (e.g., clusters that are geographically close to the sensor may be preferred for reduced transmission latency or legal requirements such as privacy regulations) and/or best satisfy the service attributes applicable to the subscriber's information.
As per claim 16, it has similar limitations as claim 7 and is therefore rejected using the same rationale.
Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Wilde in view of Liu in view of Cai in view of Kazama et al. (US 2011/0231860) (hereinafter Kazama).
As per claim 3, Wilde further teaches the M edge servers are allocated to P edge computing nodes ([0029] nodes are contained in racks having chassis units which may be edge processing servers), and each of the P edge computing nodes is deployed with one or more edge servers, M and P being integers greater than or equal to 1 ([0029] nodes are contained in racks having chassis units which may be edge processing servers); and wherein determining the N edge servers comprises: selecting L edge computing nodes from the P edge computing nodes such that node idle computing power resources of the L edge computing nodes are greater than the first computing power resource ([0021] job scheduler may schedule jobs on a particular group of nodes that are idle or will be idle when processing of the jobs begins pursuant to a particular job scheduling policy; [0032] schedule jobs based on there being sufficient power available for the power budget for the jobs; [0020] groups of nodes execute various jobs; [0044] a job being concurrently executed by a plurality of nodes), the node idle computing power resource being obtained according to the idle computing power resources of the edge servers deployed in the each edge computing node ([0021] job scheduler may schedule jobs on a particular group of nodes that are idle or will be idle when processing of the jobs begins pursuant to a particular job scheduling policy; [0032] schedule jobs based on there being sufficient power available for the power budget for the jobs; [0020] groups of nodes execute various jobs; [0044] a job being concurrently executed by a plurality of nodes).
Wilde does not explicitly teach:
wherein the cloud applications are deployed to M edge servers for execution,
determining at least one candidate edge server from edge servers comprised in the L edge computing nodes based on attribute information of each edge server comprised in the L edge computing nodes; and
determining the N edge servers from the at least one candidate edge server according to the idle computing power resource of each edge server in the at least one candidate edge server and the first computing power resource.
However, Liu teaches wherein the cloud applications are deployed to M edge servers for execution ([0250] video-on-demand system plays a program for user to start, stop, back, fast-forward, and/or pause a video and [0331]-[0332] cloud-on-demand video file can be watched and played on the client through any device).
Liu and Wilde are both concerned with computer task scheduling and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wilde in view of Liu because it would provide an offline download solution that schedules offline tasks according to the load of each download server, thereby improving utilization of the download servers, and that uses a network-side transcoding server to transcode multimedia so as to save the resources a client would consume in transcoding the multimedia and to improve the multimedia processing efficiency of the client.
Wilde in view of Liu in view of Cai do not explicitly teach:
determining at least one candidate edge server from edge servers comprised in the L edge computing nodes based on attribute information of each edge server comprised in the L edge computing nodes; and
determining the N edge servers from the at least one candidate edge server according to the idle computing power resource of each edge server in the at least one candidate edge server and the first computing power resource.
However, Kazama teaches:
determining at least one candidate edge server from edge servers comprised in the L edge computing nodes based on attribute information of each edge server comprised in the L edge computing nodes ([0041] candidate selector selects the server achieving the lowest power consumption); and
determining the N edge servers from the at least one candidate edge server according to the idle computing power resource of each edge server in the at least one candidate edge server and the first computing power resource ([0041] candidate selector selects the server achieving the lowest power consumption and allocates the job to that server).
Kazama and Wilde are both concerned with computer task/job scheduling/allocation and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wilde in view of Liu in view of Cai in view of Kazama because it would provide a way to reduce power consumption and attain power saving, in that a computer system which is low in throughput and low in power consumption is selected for use only when a load is lower than a threshold value. This results in a task being scheduled across two processors of different power efficiencies so as to minimize the power consumption.
As per claim 12, it contains limitations similar to those of claim 3 and is therefore rejected using the same rationale.
Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Wilde in view of Liu in view of Cai in view of Kazama in view of Skjolsvold et al. (US 2015/0319230) (hereinafter Skjolsvold).
As per claim 4, Wilde in view of Liu in view of Cai in view of Kazama do not explicitly teach wherein the attribute information of the each edge server comprises a working state of each edge server, and the working state comprises an idle state or a busy state; and wherein determining the at least one candidate edge server comprises: determining an edge server with the working state being the idle state of the edge servers comprised in the L edge computing nodes as a candidate edge server.
However, Skjolsvold teaches wherein the attribute information of the each edge server comprises a working state of each edge server, and the working state comprises an idle state or a busy state; and wherein determining the at least one candidate edge server comprises: determining an edge server with the working state being the idle state of the edge servers comprised in the L edge computing nodes as a candidate edge server ([0189] candidates function can sort each server from busiest to idlest as quantified by a load metric, such as the server load metric, the candidate target server set may be limited to a number of those servers that have the lowest server load, and servers can be added to the candidate target server set based on server load, for example, based on having low server load).
Skjolsvold and Wilde are both concerned with computer task/workload scheduling/distribution and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wilde in view of Liu in view of Cai in view of Kazama in view of Skjolsvold because it would provide a way of determining, for a triggered optimization module, that a server is over-utilized on a dimension; selecting candidate operations for partitions assigned to the server; removing, for a higher-priority optimization module than the triggered optimization module, a candidate operation from the candidate operations that would diminish a modeled state of the scalable storage; determining an operation of the candidate operations that would improve the modeled state of the scalable storage with respect to a metric of the dimension on the server; and executing the operation on the scalable storage.
As per claim 13, it contains limitations similar to those of claim 4 and is therefore rejected using the same rationale.
Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Wilde in view of Liu in view of Cai in view of Kazama in view of Messick et al. (US 2004/0042489) (hereinafter Messick).
As per claim 5, Wilde in view of Liu in view of Cai in view of Kazama do not explicitly teach wherein the attribute information of the each edge server comprises a server type group to which the each edge server belongs, and the server type group comprises a default whitelist group and an ordinary group; and wherein the determining at least one candidate edge server comprises: determining an edge server with the server type group being the ordinary group of the edge servers comprised in the L edge computing nodes as the candidate edge server.
However, Messick teaches wherein the attribute information of the each edge server comprises a server type group to which the each edge server belongs ([0040] group servers into priority categories), and the server type group comprises a default whitelist group and an ordinary group ([0041] priority group servers and ordinary group servers); and wherein the determining at least one candidate edge server comprises: determining an edge server with the server type group being the ordinary group of the edge servers comprised in the L edge computing nodes as the candidate edge server ([0044] resource manager can identify which clients/servers are high priority clients and which are not i.e., which priority group a client belongs to based on a unique client identifier).
Messick and Wilde are both concerned with computer task scheduling and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wilde in view of Liu in view of Cai in view of Kazama in view of Messick because it would provide a way for a resource manager for a resource to give preferential access to the resource to any higher priority server group as compared to a lower priority server group. This will help optimize the operation of the network. Moreover, when a new client/server is added to the network, it is not necessary to specifically identify that client and its need for access to the resource. Rather, the new client can simply be added to an existing priority group and will then be given the same access to the resource as other clients in that group. This makes it easier to expand and manage the network as needed.
As per claim 14, it contains limitations similar to those of claim 5 and is therefore rejected using the same rationale.
Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Wilde in view of Liu in view of Cai in view of Otsuka et al. (US 2022/0261945) (hereinafter Otsuka).
As per claim 8, Wilde in view of Liu in view of Cai do not explicitly teach wherein the first computing power resource comprises any one or more of the following: a graphics processing unit computing power resource, a central processing unit computing power resource, an internal memory, a network bandwidth, and a network throughput; and wherein the graphics processing unit computing power resource comprises at least one of the following: floating-point operations per second of a graphics processing unit and operations per second of the graphics processing unit; and the central processing unit computing power resource comprises at least one of the following: floating-point operations per second of a central processing unit and operations per second of the central processing unit.
However, Otsuka teaches wherein the first computing power resource comprises any one or more of the following: a graphics processing unit computing power resource, a central processing unit computing power resource, an internal memory, a network bandwidth, and a network throughput; and wherein the graphics processing unit computing power resource comprises at least one of the following: floating-point operations per second of a graphics processing unit and operations per second of the graphics processing unit; and the central processing unit computing power resource comprises at least one of the following: floating-point operations per second of a central processing unit and operations per second of the central processing unit ([0044] floating point operations per second for both a GPU and a CPU).
Otsuka and Wilde are both concerned with computer task/job execution and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wilde in view of Liu in view of Cai in view of Otsuka because it would provide a way to efficiently switch to a process of generating data corresponding to different applications, since the amount of data regarding the different applications that needs to be saved decreases immediately after the completion of a process of generating one unit of data corresponding to the first application. That is, by performing, with a single GPU, the process of generating a plurality of pieces of data corresponding to a plurality of applications, it is possible to realize that generation efficiently while reducing the number of hardware resources and improving the system availability rate. It is therefore possible to reduce the amount of context data regarding the first application that needs to be saved and to realize an efficient context switch. In other words, by performing a context switch after the completion of drawing by the GPU, it becomes easy to interrupt and switch processing in the GPU, thus reducing the processing volume for the context switch.
As per claim 17, it contains limitations similar to those of claim 8 and is therefore rejected using the same rationale.
Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wilde in view of Liu in view of Cai in view of Ferdous et al. (US 2012/0030679) (hereinafter Ferdous) in view of Moroo (US 2018/0144272).
As per claim 9, Wilde in view of Liu in view of Cai do not explicitly teach wherein the determining the first computing power resource comprises:
determining a computation complexity corresponding to a task type of the offline task based on a correspondence between the task type and the computation complexity;
finding at least one matching historical offline task from historical offline tasks according to the determined computation complexity based on a computation complexity corresponding to the at least one matching historical offline task matching the determined computation complexity; and
estimating a computing power resource required for the offline task based on the computing power resource for executing the at least one matching historical offline task, to obtain the first computing power resource required to execute the offline task.
However, Ferdous teaches:
determining a computation complexity corresponding to a task type of the offline task based on a correspondence between the task type and the computation complexity ([0030] the goal of problem size determination is to provide a measure of job complexity that can be used to compare the incoming job with historical application-specific execution information represented by benchmarks, whereby the basic assumption underlying the use of problem size to evaluate benchmarks is that the data processing resources required to execute the current job will be similar to the processing requirements of actual or simulated test runs having the same or similar problem size);
finding at least one matching historical offline task from historical offline tasks according to the determined computation complexity based on a computation complexity corresponding to the at least one matching historical offline task matching the determined computation complexity ([0030] the goal of problem size determination is to provide a measure of job complexity that can be used to compare the incoming job with historical application-specific execution information represented by benchmarks, whereby the basic assumption underlying the use of problem size to evaluate benchmarks is that the data processing resources required to execute the current job will be similar to the processing requirements of actual or simulated test runs having the same or similar problem size).
Ferdous and Wilde are both concerned with computer task scheduling and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wilde in view of Liu in view of Cai in view of Ferdous because it would provide a way to determine which computer system is best equipped to execute an application and process/run a job. It may be beneficial to execute the application on the least powerful machine possible while still meeting applicable processing constraints. This matching of the application's execution needs to system capabilities can be done to reserve the more powerful systems for executions that the lower-powered machines would not be able to handle. Hence, maximum utilization of available resources may thus be achieved.
Wilde in view of Liu in view of Cai in view of Ferdous do not explicitly teach estimating a computing power resource required for the offline task based on the computing power resource for executing the at least one matching historical offline task, to obtain the first computing power resource required to execute the offline task.
However, Moroo teaches estimating a computing power resource required for the offline task based on the computing power resource for executing the at least one matching historical offline task, to obtain the first computing power resource required to execute the offline task ([0097] estimate the power consumption per computing node for jobs awaiting execution by referring to power consumption history table and [0113] estimate power consumption of the computing node for the job based on a similarity of the job and a past job from the power consumption history table).
Moroo and Wilde are both concerned with computer task scheduling and are therefore combinable/modifiable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wilde in view of Liu in view of Cai in view of Ferdous in view of Moroo because it would provide a way of determining whether file names partially match to find past jobs that are similar. As a result, the estimation precision for the power consumption of jobs is improved. To increase the throughput of the parallel processing system, it is preferable to schedule jobs so as to minimize the number of unused nodes.
As per claim 18, it contains limitations similar to those of claim 9 and is therefore rejected using the same rationale.
Citation of Relevant Prior Art
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure:
Bodas et al. (US 2016/0054780) disclose a power aware job scheduler and manager.
Potlapally et al. (US 9,557,792) disclose datacenter power management optimizations.
De Lind van Wijngaarden et al. (US 2012/0210150) disclose smart power management for mobile communication terminals.
Ghose (US 8,631,411) discloses energy aware processing load distribution.
Angaluri (US 2010/0235840) discloses power management using dynamic application scheduling.
Cooley et al. (US 9,052,904) disclose power-availability information for a power grid and power-usage information.
Armentrout et al. (US 6,463,457) disclose executing tasks using idle computational power.
Rawson et al. (US 5,692,204) disclose managing power states of hardware resources.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Adam Lee whose telephone number is (571) 270-3369. The examiner can normally be reached on M-TH 8AM-5PM.
If attempts to reach the above noted Examiner by telephone are unsuccessful, the Examiner’s supervisor, Pierre Vital, can be reached at the following telephone number: (571) 272-4215. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for published applications is available to the public, while status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/Adam Lee/
Primary Examiner, Art Unit 2198
January 29, 2026