DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Remarks
The present application, Application No. 18/125,99, filed on 03/24/2023, presents claims 1-20 for examination.
Priority
The present application claims the benefit of U.S. Provisional Patent Application No. 63/362,377, filed April 1, 2022, and U.S. Provisional Patent Application No. 63/362,943, filed April 13, 2022. Acknowledgment is made of the applicant's claim for domestic priority to the above U.S. provisional patent applications.
Examiner Notes
The examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 04/11/2023, 05/01/2023, 07/31/2023, 03/22/2024 and 03/02/2026 are acknowledged. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the cited references have been considered by the examiner.
Drawings
The drawings submitted by the applicant are acceptable for examination purposes.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 5, 12 and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Each of claims 5, 12 and 18 recites "…processing jobs with a smaller size and/or associated with a user…" in line 3. Per MPEP 2173.05(e), the term "and/or" creates uncertain scope because the claim may cover "jobs with a smaller size," "jobs associated with a user with fewer processing jobs…," or both. The scope of the claims is therefore not reasonably certain.
Furthermore, each of these claims recites comparative phrases (e.g., "a smaller size" and "fewer processing jobs") that lack a clear reference baseline, rendering the scope of the claims unclear. The phrases leave open "smaller than what" and "fewer than what or whom." Without an explicit comparison baseline, a POSITA cannot determine which jobs qualify as having "a smaller size," and thus cannot determine when the claimed ordering/scoring behavior is required. Similarly, the claims do not specify the comparison baseline for "fewer processing jobs," making it ambiguous which users are considered to have fewer processing jobs and thus receive lower scores. This ambiguity prevents a clear understanding of the required scoring and ordering behavior.
Accordingly, claims 5, 12 and 18 are also indefinite because the phrases “a smaller size” and “fewer processing jobs…” are comparative terms that do not identify a reference point for comparison, leaving the metes and bounds of the claims uncertain.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-5, 8-9, 11-12, 15, 17 and 18 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Karanasos et al. (US 2018/0300174 A1; Cited in IDS submitted on 07/31/2023) (hereinafter Karanasos) in view of James Paul Schneider (US 2009/0222831 A1) (hereinafter Schneider).
As per claim 1, Karanasos discloses A computer-implemented method for queuing processing jobs, the computer-implemented method comprising, by one or more hardware processors executing program instructions (e.g. Karanasos: [0012] [0036] discloses methods, systems and computer program products for managing task queues for tasks which are to be executed in a distributed computing environment. [Fig. 16] [0147-0150] further describes that embodiments may be implemented and practiced within a computing environment. Fig. 16 illustrates an example computing environment that facilitates and enables embodiments for efficient queue management for cluster scheduling. As depicted, embodiments may include computer hardware, such as one or more processors and system memory. Also see [0138-0139]. Also see claims 1, 11 and 16.): receiving a processing job associated with a user (e.g. Karanasos: [0012] [0036] discloses receiving a job/tasks. [0055] [0057] disclose a client submits jobs/tasks to a cluster. [0139] the method may include receiving a job at a cluster for execution.); scoring the processing job (e.g. Karanasos: [Claims 1, 6-7, 11, 16] discloses determining a priority of the task relative to other tasks in the queue. [0121] discloses a task prioritization algorithm that takes as input two tasks, a taskCMP(t1, t2) function (which may be one of a plurality of reordering strategies…), as well as a hard and a relative starvation threshold. [0132] The reordering strategies, including SRJF, LRTF, SJF and EJF, may be applied at the job level. SRJF would assign higher priority to jobs with the smallest remaining work, whereas LRTF would prioritize jobs with the least remaining number of tasks. SJF would be shortest job first, using available information about job durations. [0139] discloses a priority for the task relative to other tasks in the queue may be determined. 
Based on the priority of the task, an order of execution of all tasks in the queue may also be determined.); applying one or more bounds (e.g. Karanasos: [Abstract] discloses task queues may be bounded by length-based bounding or delay-based bounding. [0068-0069] discloses embodiments discussed herein may employ at least two mechanisms for bounding queue lengths: length-based bounding and delay-based bounding. In length-based queue bounding, all nodes may have a predefined queue length b, and an RM may place up to b tasks at the queue of each node. [0080] For delay-based bounding: This strategy relies on an estimated queue wait time that gets reported by each node…In particular, a maximum time, WTmax, may be specified that a task is allowed to wait in the queue. When a task t is to be placed at the queue of node n…only when WTn <= WTmax would t then be queued.); adding the processing job to a queue (e.g. Karanasos: [0012] discloses "the task may be placed into a queue such that the task will be run on the determined node." [0036] discloses the method determines queue size for one or more queues into which tasks are to be placed for execution in the distributed computing environment. The task may be placed into a queue… [0055] the tasks of the job may be added to the queue…The JM may then dispatch the tasks for execution…Until the task begins execution at the node, it may wait in a queue. [0139] [Fig. 15, step 1540] discloses placing the task into a queue. Also see [0040] [0051] [0064] [0069] [0108] [0116] [0194].); ordering the queue based on scores of processing jobs in the queue (e.g. Karanasos: [0012] [0036] discloses "Based on the priority of the task, an order of execution for all tasks in the queue may be determined. The tasks in the queue may then be ordered based on the determined order of execution." [0126] discloses Shortest Task First (STF) orders tasks based on increasing expected duration. [0127] Earliest Job First orders tasks based on the arrival time of the job. 
[Fig. 15, steps 1560, 1570] [0139] A priority of the task relative to other tasks in the queue may be determined. Based on the priority of the task, an order of execution for all tasks in the queue may also be determined. The tasks in the queue may then be ordered based on the determined order of execution. These passages show that Karanasos orders each queue based on computed priorities [scores] derived from job/task attributes (remaining work, remaining tasks, task duration, job arrival time, etc.). Also see [claims 1, 11 and 16].); and sampling processing jobs from the queue for dispatch for processing (e.g. Karanasos: [0054-0055] discloses when the JM receives resources, it may dispatch tasks to the associated nodes for execution. [0057] The JM, which may be acting as a task scheduler, uses a scheduling policy to select a node to which tasks will be dispatched. Thus, Karanasos samples tasks from the ordered queue for dispatch to actual processing on the node whenever resources become available.).
As discussed above, Karanasos implies "scoring the processing job" in the sense of assigning a priority to the job, but does not expressly disclose scoring the processing job; that is, Karanasos uses job/task priority but does not label this operation as scoring.
However, Schneider expressly discloses scoring the processing job (e.g. Schneider: [0012] discloses a scheduler receives a request to process one or more computation jobs. The scheduler generates a size metric [scoring] corresponding to a size of an executable image of each computation job. The scheduler distributes the plurality of jobs…, where the system configuration setting prioritizes a computation job with a smaller size metric higher than a computation job with a larger size metric. [0016] [0018] disclose the scheduler receives a request to process computation jobs. The scheduler generates a size metric corresponding to a size of an executable image of each computation job and a corresponding data set associated with each computation job. The scheduler then adjusts a priority of each computation job, prioritizing a computation job with a smaller size metric higher than a computation job with a larger size metric. The scheduler then schedules each job to be processed by the processor according to its priority. Also see [Abstract] [0019-0020] [Figs. 5 and 6]. Thus, Schneider explicitly teaches computing a job-level metric and using it to adjust job priority.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of computing a size metric of the job (i.e., scoring the job) and adjusting job priority based on that metric, as taught by Schneider, into Karanasos because Karanasos and Schneider are both directed to scheduling and queue management for computation jobs in distributed/clustered systems. Karanasos focuses on efficient queue management for cluster scheduling, including bounding queue lengths/delays and sophisticated job prioritization strategies. Schneider, on the other hand, specifically teaches computing a size metric for each job and using that metric to adjust job priority. A POSITA would have found it obvious to use Schneider's size-metric-based scoring as one of the inputs to Karanasos's priority determination, because both references address the same problem domain (scheduling jobs) and Schneider provides a well-motivated, well-understood heuristic (job size) that is directly relevant to Karanasos's stated goals (reducing job completion time and improving utilization for workloads). Moreover, combining these references would be a straightforward substitution of one known scoring heuristic (a size-based score) into another known scheduling framework (Karanasos's Yaq queue manager), yielding a predictable result: more informed prioritization of jobs based on their expected runtime and resource use, and thus improved performance. Karanasos explicitly supports pluggable prioritization strategies (paragraph [0121]), which further suggests to a POSITA that alternate or additional scoring metrics, such as Schneider's size metric, could be adopted without undue experimentation.
As per claim 2, the combination of Karanasos and Schneider discloses The computer-implemented method of claim 1 [See rejection to claim 1 above] further comprising, by the one or more hardware processors executing program instructions: updating the scoring of the processing jobs in the queue (e.g. Karanasos: teaches that task priorities are recomputed and updated as jobs progress and the queue state changes. For example, [0121-0124] states "A task prioritization algorithm is provided…starved tasks have higher priority than non-starved ones. If none of the tasks are starved, the tasks are compared with taskCmp(t1, t2)…SRJF gives highest priority to the job that has the least remaining work. The remaining work for a job is…computed based on task duration estimates. [0116] discloses updating remaining task durations when a task gets removed or added, and the same process repeats for all queued tasks. This implies that as tasks complete or estimates change, the priority for tasks/jobs is updated. Schneider: [0016] further discloses the scheduler generates a size metric for each computation job and adjusts a priority of each job.); updating application of bounds to processing jobs in the queue (e.g. Karanasos: [0080] For delay-based bounds, Karanasos updates per-node wait estimates as tasks are queued and run: This strategy relies on an estimated queue wait time that gets reported by each node…When a task t is to be placed at the queue…the last estimated queue wait time WTn reported by node n may be checked. Only when WTn <= WTmax would t then be queued...Upon queuing, the RM may use a simple formula to update WTn, taking into account t's task duration estimate, until a fresh value for WTn is received from node n. [0111-0118] Algorithm 2 iteratively updates the simulated wait time based on current running and queued tasks each time it is executed. Each worker node may estimate the expected queuing delay that a new task will incur if it is placed in that node's queue. 
Queue wait time estimates may then be periodically sent to an RM to help with a determination of task placement. [Claims 4 and 5] disclose determining queue size comprises dynamically computing a length-based bounding or a delay-based bounding. Also see [0192]. These passages show that the bound calculations are re-applied and updated as the queue contents change.); and updating ordering of the processing jobs in the queue (e.g. Karanasos: [Abstract] discloses tasks may be prioritized and task queues may be dynamically reordered based on task priorities. [0049] discloses queue management techniques, such as task prioritization through queue reordering, and queue sizing. [0119-0120] discloses a task prioritization algorithm enables reordering queued tasks that can improve job completion times. [0121] discloses a task prioritization algorithm may be one of a plurality of possible reordering strategies. [0129-0130] discloses during reordering it may be checked whether a task has waited too long in the queue. If so, the waiting task may be given higher priority to avoid starvation. Thus, Karanasos teaches dynamically updating priorities/scores, updating bounds, and re-ordering the queue based on updated priorities.).
As per claim 4, the combination of Karanasos and Schneider discloses The computer-implemented method of claim 1 [See rejection to claim 1 above], wherein the scoring is based on at least one of: a size of the processing job, or a number of processing jobs associated with the user and currently running (e.g. Karanasos: [0124] discloses SRJF prioritizes jobs with least remaining work: give highest priority to the tasks/jobs that have the least remaining work. The least remaining work is computed using remaining task duration based on task duration estimates. Schneider: [Abstract] [0016] disclose a scheduler generates a size metric corresponding to a size of an executable image for each computation job and adjusts a priority of each computation job. The priority is explicitly based on a job’s size metric, directly matching scoring based on a size of the processing job.).
As per claim 5, the combination of Karanasos and Schneider discloses The computer-implemented method of claim 1 [See rejection to claim 1 above], wherein the queue is ordered such that processing jobs with lower scores are positioned at a top of the queue, and wherein processing jobs with a smaller size and/or associated with a user with fewer processing jobs currently running are scored lower (e.g. Karanasos: [0121-0128] discloses strategies like SRJF/LRTF/STF/EJF give higher priority (effectively a lower score) to tasks/jobs that meet the chosen criteria and cause them to be executed earlier. [0139] discloses determining a priority for the task…determining an order of execution for all tasks in the queue and ordering the tasks in the queue based on the determined order of execution. [0130] during reordering it may be checked whether a task has waited too long in the queue. If so, the waiting task may be given higher priority to avoid starvation. Schneider: [Abstract] [0012] expressly discloses prioritizing a computation job with a smaller size metric higher than a computation job with a larger size metric. Thus, smaller jobs get a more favorable score (higher priority), consistent with "scored lower" and placement toward the top of the queue.).
As per claims 8-9 and 11-12, these are system claims having limitations similar to those cited in method claims 1-2 and 4-5, respectively. Thus, claims 8-9 and 11-12 are also rejected under the same rationale as set forth in the rejection of claims 1-2 and 4-5 above.
As per claims 15, 17 and 18, these are medium claims having limitations similar to those cited in method claims 1, 4 and 5, respectively. Thus, claims 15, 17 and 18 are also rejected under the same rationale as set forth in the rejection of claims 1, 4 and 5 above.
Claims 3, 10 and 16 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Karanasos in view of Schneider and further in view of Yuan et al. (US 2020/0233701 A1) (hereinafter Yuan).
As per claim 3, the combination of Karanasos and Schneider discloses The computer-implemented method of claim 1 [See rejection to claim 1 above], but does not disclose further comprising, by the one or more hardware processors executing program instructions: causing results of processing a processing job to be displayed to the user in a graphical user interface.
However, Yuan discloses the one or more hardware processors executing program instructions: causing results of processing a processing job to be displayed to the user in a graphical user interface (e.g. Yuan: [Fig. 1] [0019] discloses data may be provided via the user interface. The user interface may be used for providing output to users associated with user devices, such as validation results, data processing job performance results, data processing job status updates, and/or the like.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of using a GUI to display the results of a processing job, as taught by Yuan, into the combination of Karanasos and Schneider because adding Yuan provides a user interface for job submission and feedback. It would have been obvious to a POSITA to provide a user interface, like that of Yuan, on top of the queuing and scheduling mechanisms taught by Karanasos and Schneider, so that users can submit jobs and view the results and status of those jobs. Doing so would be a straightforward combination of known elements: using a conventional job-management UI (Yuan) with a known cluster scheduling backend (Karanasos and Schneider) yields the predictable benefit of improved usability and transparency for a multi-tenant job execution system that executes user-submitted jobs.
As per claim 10, this is a system claim having limitations similar to those cited in method claim 3. Thus, claim 10 is also rejected under the same rationale as set forth in the rejection of claim 3 above.
As per claim 16, this is a program product claim having limitations similar to those cited in method claim 3. Thus, claim 16 is also rejected under the same rationale as set forth in the rejection of claim 3 above.
Claims 6, 13 and 19 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Karanasos in view of Schneider and further in view of Proctor et al. (US 2008/0235693 A1) (hereinafter Proctor).
As per claim 6, the combination of Karanasos and Schneider discloses The computer-implemented method of claim 1 [See rejection to claim 1 above], wherein the one or more bounds comprise at least one of: an upper bound on a number of processing jobs the user can have running at once, or an upper bound on a number of processing jobs the user can have in the queue (e.g. Karanasos: [0068-0072] discloses determining the length of the queue is important and beneficial. In length-based queue bounding, a node has a predefined queue length b, and an RM may place up to b tasks at the queue. In length-based queue bounding, r is the maximum number of tasks that can run concurrently and b is the maximum number of tasks that can be queued. [0080] discloses the number of tasks that get queued may be dynamically adapted based on the current load of the node and the tasks that are currently running and queued. [0134] discloses sharing policies that impose fairness and/or capacity constraints. For instance, two users, A and B, could each be given an equal share of the cluster, or each could be given some particular capacity (user A takes 80% and user B takes 20% of the cluster). Weighted fair sharing in a distributed setting may also be imposed. Thus, Karanasos discloses determining a maximum number of tasks that can be executed concurrently on a node and a maximum number of tasks that can be queued, and further implies restricting the maximum number of tasks per user that can be executed at once or queued using queue-length-based bounding.).
The combination of Karanasos and Schneider does not, however, expressly disclose a per-user maximum task limit.
However, Proctor discloses a per-user maximum task limit in a priority queue (e.g. Proctor: [0043] discloses the task scheduler defines an available window key to describe a window that does not exceed a maximum task limit for the user. The maximum task limit allows the priority queue to assign a submitted task to a window that does not contain a predetermined max task capacity as to the submitting user. The maximum task limit can also be defined to allow each user a maximum of tasks of differing priority types to be contained within a window(s). For instance, 4 tasks per user (2 of high priority type, 1 of medium priority, and 1 of low priority). Further, when a window satisfies (e.g. exceeds) the maximum task limit for a user, then the next task submitted by that user must be queued at another window (as in the available window) that does not contain the maximum allowed tasks for that user. Proctor explicitly teaches a maximum task limit per user in a priority queue, which corresponds to a per-user bound on the number of tasks/jobs that user can have in a given window of the queue. Once that per-user limit is reached, additional tasks from that user are not placed into the queue.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of setting a maximum number of per-user tasks that can be placed in a queue or executed at a time, as taught by Proctor, into the combination of Karanasos and Schneider because it is a straightforward extension: in a Karanasos-style cluster scheduler, an upper bound on the number of jobs a user can have queued also naturally bounds how many of that user's tasks/jobs can be executed concurrently. Karanasos already recognizes that a shared cluster must support fairness and capacity policies among multiple users and proposes queue-level bounds to improve overall job completion times. Proctor, on the other hand, is directed specifically to fair scheduling in a priority queue and teaches that one effective way to enforce fairness is to impose a maximum task limit per user for a given window of the queue. This prevents any user from over-populating the high-priority portion of the queue and ensures a fair distribution of execution opportunities (See Proctor: [Abstract] [0004-0005]).
Furthermore, it would have been obvious to a POSITA to apply Proctor's per-user maximum task limit concept to the queuing system of Karanasos and Schneider, thereby treating per-user job-count limits as another type of "bound" on queued and running jobs. Doing so is a straightforward substitution of one well-known fairness mechanism (per-user job limits in a priority queue) into another well-known multi-user scheduling framework, with the predictable result of improving fairness and preventing resource monopolization by any single user.
As per claim 13, this is a system claim having similar limitations as cited in method claim 6. Thus, claim 13 is also rejected under the same rationale as cited in the rejection of rejected claim 6.
As per claim 19, this is a program product claim having similar limitations as cited in method claim 6. Thus, claim 19 is also rejected under the same rationale as cited in the rejection of rejected claim 6.
Allowable Subject Matter
Claims 7, 14 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, after addressing any claim objections/rejections detailed above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hiren Patel whose telephone number is (571) 270-3366. The examiner can normally be reached on Monday-Friday 9:30 AM to 6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
If attempts to reach the above noted Examiner by telephone are unsuccessful, the Examiner’s supervisor, April Y. Blair, can be reached at the following telephone number: (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center or Private PAIR to authorized users only. Should you have questions on access to Patent Center or the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
March 6, 2026
/HIREN P PATEL/Primary Examiner, Art Unit 2196