Prosecution Insights
Last updated: April 19, 2026
Application No. 18/532,637

SELF-MANAGING SCHEDULER FOR WORKLOADS IN AN INFORMATION PROCESSING SYSTEM

Non-Final OA — §102, §112
Filed
Dec 07, 2023
Examiner
HO, ANDY
Art Unit
2194
Tech Center
2100 — Computer Architecture & Software
Assignee
DELL PRODUCTS, L.P.
OA Round
1 (Non-Final)
Grant Probability: 91% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 91% (930 granted / 1017 resolved) — above average, +36.4% vs TC avg
Interview Lift: +7.6% — a moderate lift across resolved cases with interview
Typical Timeline: 2y 8m average prosecution; 16 applications currently pending
Career History: 1,033 total applications across all art units

Statute-Specific Performance

§101: 14.8% (-25.2% vs TC avg)
§103: 17.5% (-22.5% vs TC avg)
§102: 29.6% (-10.4% vs TC avg)
§112: 25.5% (-14.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 1017 resolved cases.
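As a sanity check on the dashboard arithmetic, the allow rate and the "vs TC avg" deltas can be recomputed directly. A minimal sketch, assuming each delta is a simple percentage-point difference (examiner rate minus Tech Center average); variable names like `tc_allow_avg` are illustrative, not from any API. Notably, the implied TC average works out to 40.0% for every statute, consistent with the note that the averages are estimates.

```python
# Reproduce the dashboard's headline figures from the raw counts shown above.
# Assumption: "vs TC avg" values are simple percentage-point differences
# (examiner rate - Tech Center average); names here are illustrative.

granted, resolved = 930, 1017
allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")    # ~91.4%, displayed as 91%

# Implied Tech Center allow-rate average behind "+36.4% vs TC avg":
tc_allow_avg = allow_rate - 36.4                   # ~55.0%

# Statute-specific rejection rates and their stated deltas vs the TC average.
statutes = {"101": (14.8, -25.2), "103": (17.5, -22.5),
            "102": (29.6, -10.4), "112": (25.5, -14.5)}
for statute, (rate, delta) in statutes.items():
    # implied average = rate - delta; comes out to 40.0% for every statute
    print(f"§{statute}: implied TC avg = {rate - delta:.1f}%")
```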

Office Action

§102 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

1. This action is in response to the application filed 12/7/2023.

2. Claims 1-20 have been examined and are pending in the application.

Claim Rejections - 35 USC § 112

The following is a quotation of the second paragraph of 35 U.S.C. 112:

    The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

3. Claims 2-9 are rejected under 35 U.S.C. 112, second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention. The claim language in the following claims is not clearly understood:

(i) As to claim 2, it is unclear whether "a scheduler" (line 1) refers to "a scheduler" (line 4 of claim 1). Correction is required.

(ii) As to claim 7, it is unclear whether "a result of the monitoring" (line 3) refers to "a result of the monitoring" (line 4 of claim 6). Correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

    A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

4. Claims 1-3, 10-17 and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Nandavar, U.S. Patent No. 11,561,848.

As to claim 1, Nandavar teaches an apparatus comprising: at least one processing platform (Fig. 7 and associated specifications) comprising at least one processor (the processing resource 702 may be a physical device, for example, one or more CPU, one or more semiconductor-based microprocessor, one or more GPU, ASIC, FPGA, other hardware devices capable of retrieving and executing the instructions 706-714 stored in the machine-readable medium 704, lines 40-45, column 13) coupled to at least one memory (the machine-readable medium 704 may be, for example, RAM, an EEPROM, a storage drive, a flash memory, a CD-ROM, and the like, lines 28-30, column 13), the at least one processing platform, when executing program code, is configured to: maintain, in a set of nodes (the controller nodes 604-1 and 604-2, Fig. 6 and associated specifications) managed by a manager node (policy lifecycle manager 602, Fig. 6 and associated specifications), a scheduler in at least one node (scheduler 606 and 608, Fig. 6 and associated specifications), wherein the scheduler is configured to self-manage execution of at least one workload (scheduler 606, 608 configured to schedule the execution of workloads 610, 612 in the target environment 436-1, 436, lines 51-53, column 12) by at least one execution unit instantiated on the node (workloads may include pods that are formed by grouping one or more containers, lines 32-33, column 4).

As to claim 2, Nandavar further teaches that, to maintain a scheduler in the node, the at least one processing platform is further configured to obtain one or more schedule configuration parameters for the at least one workload, wherein the one or more schedule configuration parameters for the at least one workload are obtained during a registration of the at least one workload at the node (the processing resource 114 may identify a workload profile of a first workload deployed at the target environment, which may be facilitated in the system 102. In some examples, the workload profile may be identified based on a workload specification 112 (labeled as WL_SPEC), which may be received from the workload creation node 104. The workload specification 112 may include information regarding characteristics of the workload 110, lines 47-55, column 5).

As to claim 3, Nandavar further teaches, at the node via the scheduler, initializing the at least one execution unit for the at least one workload at an execution unit initialization time determined from the one or more schedule configuration parameters (in the example where the first workload 110-A and the second workload 110-B are containers in a pod, the workloads may share data in a common directory, send read/write commands to a common memory, etc. In some examples, the execution of the first workload 110-A may be triggered, paused, or stopped, due to the execution of one or more threads or processes of the second workload 110-B, lines 35-42, column 9).

As to claim 10, Nandavar further teaches the at least one processing platform comprises a pod-based management platform (workloads may include pods that are formed by grouping one or more containers, lines 32-33, column 4), wherein the node is a worker node of a set of worker nodes (the target environment 436-1, 436-2 may be a Kubernetes cluster including a plurality of worker nodes 618, 620, lines 57-59, column 12), the at least one execution unit is a pod instantiated on the worker node (workloads may include pods that are formed by grouping one or more containers, lines 32-33, column 4), and the at least one workload is a task in a container executed by the pod (in the example where the first workload 110-A and the second workload 110-B are containers in a pod, the workloads may share data in a common directory, send read/write commands to a common memory, etc. In some examples, the execution of the first workload 110-A may be triggered, paused, or stopped, due to the execution of one or more threads or processes of the second workload 110-B, lines 35-42, column 9).

As to claim 11, Nandavar further teaches the scheduler is implemented as a sidecar container separate from the pod (monitoring workload logs using a sidecar executable 440. In some examples, the sidecar executable 440 may use the compiled logging policies including predefined log pattern, predefined log depth, and log levels. In some examples, the sidecar 440 may be a containerized application which may communicate with other containers (e.g., workloads 110-A, 110-B) in a pod to monitor workload logs, lines 43-50, column 9).

As to claim 12, Nandavar further teaches the sidecar container is configured to respectively self-manage execution of one or more other workloads on one or more other pods instantiated on the node (monitoring workload logs using a sidecar executable 440. In some examples, the sidecar executable 440 may use the compiled logging policies including predefined log pattern, predefined log depth, and log levels. In some examples, the sidecar 440 may be a containerized application which may communicate with other containers (e.g., workloads 110-A, 110-B) in a pod to monitor workload logs, lines 43-50, column 9).

As to claim 13, Nandavar further teaches the at least one workload comprises a microservice (the workload may include any piece of code that may be developed as a microservice, lines 22-23, column 4).

As to claims 14-16, note the discussions of claims 1-3 above, respectively.

As to claim 17, Nandavar further teaches maintaining the scheduler in the node further comprises calling the at least one execution unit to execute the at least one workload at a scheduled workload start time (in the example where the first workload 110-A and the second workload 110-B are containers in a pod, the workloads may share data in a common directory, send read/write commands to a common memory, etc. In some examples, the execution of the first workload 110-A may be triggered, paused, or stopped, due to the execution of one or more threads or processes of the second workload 110-B, lines 35-42, column 9).

As to claim 19, note the discussion of claim 10 above.

As to claim 20, note the discussion of claim 1 above.

Allowable Subject Matter

5. Claims 4-9 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

U.S. Patent No. 12,045,659 discloses efficiently maintaining a globally uniform-in-time execution schedule for a dynamically changing set of periodic workload instances.

U.S. Patent No. 11,726,816 discloses scheduling workloads using at least two schedulers that operate independently.

U.S. Patent No. 11,711,268 discloses executing a workload in an edge environment.

U.S. Publication No. 2024/0403139 discloses deploying workloads of containerized services to worker nodes in a network using network telemetry-aware scheduling.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andy Ho, whose telephone number is (571) 272-3762. A voice mail service is also available for this number. The examiner can normally be reached Monday – Friday, 8:30 am – 5:00 pm.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kevin Young, can be reached at (571) 270-3180. Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist, whose telephone number is 571-272-2100.

Any response to this action should be mailed to:

Commissioner for Patents
P.O. Box 1450
Alexandria, VA 22313-1450

Or fax to:

AFTER-FINAL faxes must be signed and sent to (571) 273-8300.
OFFICIAL faxes must be signed and sent to (571) 273-8300.
NON-OFFICIAL faxes should not be signed; please send to (571) 273-3762.

/Andy Ho/
Primary Examiner
Art Unit 2194

Prosecution Timeline

Dec 07, 2023
Application Filed
Mar 05, 2026
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602350
HOST ENDPOINT ADAPTIVE COMPUTE COMPOSABILITY
2y 5m to grant Granted Apr 14, 2026
Patent 12585494
METHOD AND SYSTEM FOR PERFORMING DOMAIN LEVEL SCHEDULING OF AN APPLICATION IN A DISTRIBUTED MULTI-TIERED COMPUTING ENVIRONMENT USING HEURISTIC SCHEDULING
2y 5m to grant Granted Mar 24, 2026
Patent 12585513
Data Management Method, Apparatus, and Device, Computer-Readable Storage Medium, and System
2y 5m to grant Granted Mar 24, 2026
Patent 12566628
SYSTEM AND METHOD FOR MANAGING A MIGRATION OF A PRODUCTION ENVIRONMENT EXECUTING LOGICAL DEVICES
2y 5m to grant Granted Mar 03, 2026
Patent 12554548
NODE ASSESSMENT IN HCI ENVIRONMENT
2y 5m to grant Granted Feb 17, 2026
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 91%
With Interview: 99% (+7.6%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 1017 resolved cases by this examiner. Grant probability derived from career allow rate.
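The projection figures can be reproduced from the examiner's career counts. A minimal sketch, assuming the interview lift is an additive percentage-point adjustment capped at 100% (an assumption; the tool's exact model isn't disclosed), with `grant_probability` as a hypothetical helper:

```python
# Sketch of how the projection panel's numbers combine. Assumption: the
# interview lift is added to the base probability in percentage points and
# capped at 100%; grant_probability is a hypothetical helper, not the
# tool's actual API.

def grant_probability(base: float, interview_lift: float = 0.0) -> float:
    """Combine a base grant probability with an additive lift, capped at 1.0."""
    return min(base + interview_lift, 1.0)

base = 930 / 1017                                 # career allow rate, ~0.914
with_interview = grant_probability(base, 0.076)   # +7.6% lift -> ~0.99
print(f"Base: {base:.0%}, with interview: {with_interview:.0%}")
```

The additive model matches the displayed figures: 91% + 7.6% rounds to the panel's 99% "With Interview" value.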
