Prosecution Insights
Last updated: April 19, 2026
Application No. 18/470,177

DYNAMIC ADAPTIVE SCHEDULING FOR ENERGY-EFFICIENT HETEROGENEOUS SYSTEMS-ON-CHIP AND RELATED ASPECTS

Non-Final Office Action: §102, §103, §112
Filed: Sep 19, 2023
Examiner: EWALD, JOHN ROBERT DAKITA
Art Unit: 2199
Tech Center: 2100 — Computer Architecture & Software
Assignee: BOARD OF REGENTS OF THE UNIVERSITY OF TEXAS SYSTEM
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% — above average (16 granted / 21 resolved; +21.2% vs TC avg)
Interview Lift: +55.6% in resolved cases with interview
Typical Timeline: 3y 5m avg prosecution (24 currently pending)
Career History: 45 total applications across all art units

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 56.6% (+16.6% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 21 resolved cases

Office Action

§102 §103 §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending in this application.

Information Disclosure Statement

The IDS filed on 1/27/2025 has been considered.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-4, 8, 11-12, and 17-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

As per claim 2, the claim recites “wherein the DAS system outperforms either the first OS scheduler or the second OS scheduler…” If the DAS is comprised of the first OS scheduler and the second OS scheduler as claimed in the independent claim, how is it possible for the DAS to outperform itself? The first OS scheduler and the second OS scheduler are the entities performing the scheduling of a given task; therefore, it would be impossible for the first and/or second OS scheduler to outperform itself.

As per claims 3 and 4, they recite the phrase “an average speedup of at least about 1.2x.” However, it is unclear what is being sped up by 1.2x. Execution time? Energy-delay product? Energy consumption? Something else?

There are several instances in the claims where relative terms of degree are used. 
The term “at least about” in claims 3 and 4 is a relative term which renders the claim indefinite. The term “less than about” in claims 8, 11, and 17 is a relative term which renders the claim indefinite. The terms “medium to low workload” and “heavy workload” are relative terms that render the claim indefinite. The term “more than about” in claims 12 and 18 is a relative term which renders the claim indefinite. The aforementioned terms are not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-9, 11-15, and 17-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Goksoy et al. (NPL Document - "DAS: Dynamic Adaptive Scheduling for Energy-Efficient Heterogeneous SoCs", hereinafter Goksoy).

As per claim 1, Goksoy teaches a dynamic adaptive scheduling (DAS) computing system, comprising: a first operating system (OS) scheduler; a second OS scheduler that is slower than the first OS scheduler (Section III(A), “Overview and Preliminaries”, “Unlike the current practice, which is limited to a single scheduler, DAS allows the OS to choose one scheduling policy π ∈ Π = {F, S}, where F and S refer to the fast and slow schedulers, respectively. 
Once the predecessors of a task are completed, the OS can call either a fast (π = F) or a slow scheduler (π = S) as a function of the system state and workload.”); and a runtime preselection classifier that is operably connected to the first scheduler and the second scheduler, which runtime preselection classifier is configured to effect selective use of the first scheduler or the second scheduler to perform a given scheduling task (Section III(B), “Zero-Delay DAS Preselection Classifier”, “The OS periodically refreshes the performance counters to reflect the current system state. Each time the features are refreshed, DAS preselection classifier updates its scheduler selection which will be used for the next ready task. This decision will always be up to date since it uses the features that reflect the most recent system state. This way, DAS determines which scheduler should be called even before a task is ready for scheduling.”).

As per claim 2, Goksoy teaches the system of claim 1. Goksoy also teaches wherein the DAS system outperforms either the first OS scheduler or the second OS scheduler individually when performing the given scheduling task in terms of one or more performance measures selected from the group consisting of: execution time, energy-delay product (EDP), and energy consumption (Section I, “Introduction”, “The following key observations enable us to design the DAS framework that outperforms both types of schedulers taken separately. First, the scheduling is not an ordinary process that may be called in the future with some probability. Instead, it will be called with 100% certainty and use a subset of available performance counters, i.e., features used for scheduling.” Section III(A), “Overview and Preliminaries”, “The goal of the fast scheduler F is to approach the theoretically minimum (i.e., zero) scheduling overhead by making decisions in a few cycles with a minimum number of operations. 
In contrast, the slow scheduler S aims to handle more complex scenarios when the task wait times dominate the execution times. The goal of DAS is to outperform both underlying schedulers in terms of execution time and EDP by dynamically switching between them as a function of system state and workload.”).

As per claim 3, Goksoy teaches the system of claim 1. Goksoy also teaches wherein the DAS computing system achieves an average speedup of at least about 1.2x and at least about 30% lower EDP relative to the first OS scheduler when a workload complexity increases (Section IV(C), “Performance and Scheduling Overhead Analysis”, “This section compares the DAS framework with LUT (fast), ETF (slow), and ETF-ideal schedulers. ETF-ideal is a version of the ETF scheduler which ignores the scheduling overhead…DAS achieves 1.28× speedup and 37% lower EDP than LUT (i.e., first OS scheduler), when the complexity increases. In summary, DAS consistently performs better than both of the underlying schedulers, successfully adapts to the workloads at runtime, and aptly chooses between LUT and ETF to achieve low execution time and EDP.”).

As per claim 4, Goksoy teaches the system of claim 1. Goksoy also teaches wherein the DAS computing system achieves an average speedup of at least about 1.2x and at least about 40% lower EDP relative to the second OS scheduler at a low data rate (Section IV(C), “Performance and Scheduling Overhead Analysis”, “This section compares the DAS framework with LUT (fast), ETF (slow), and ETF-ideal schedulers. ETF-ideal is a version of the ETF scheduler which ignores the scheduling overhead…At low data rates, DAS achieves 1.29× speedup and 45% lower EDP than ETF (i.e., second OS scheduler)…”).

As per claim 5, Goksoy teaches the system of claim 1. 
Goksoy also teaches wherein the runtime preselection classifier is configured to dynamically switch between use of the first OS scheduler and the second OS scheduler for the given scheduling task as a function of a state of system resources and/or workload characteristics (Section III(A), “Overview and Preliminaries”, “The OS collects a set of performance counters during the workload execution to enable two aspects for the DAS framework: 1) precise assessment of the system state and 2) desirable features for the classifier to dynamically switch between the fast (i.e., first) and slow (i.e., second) schedulers.”).

As per claim 6, Goksoy teaches the system of claim 1. Goksoy also teaches wherein the DAS computing system comprises a heterogeneous computing system (Section V, “Conclusion”, “In this letter, we presented a DAS framework that combines the benefits of fast and sophisticated schedulers for heterogeneous SoCs.”).

As per claim 7, Goksoy teaches the system of claim 1. Goksoy also teaches wherein the DAS computing system is implemented in a system that comprises scheduling algorithms comprising operating system kernels and a runtime software environment (Section III(B), “Zero-Delay DAS Preselection Classifier”, “The OS periodically refreshes the performance counters to reflect the current system state. Each time the features are refreshed, DAS preselection classifier updates its scheduler selection which will be used for the next ready task. This decision will always be up to date since it uses the features that reflect the most recent system state.” Section III(C), “Fast and Slow (Sophisticated) (F&S) Schedulers”, “The DAS framework can work with any choice of fast and slow scheduling algorithms. This work uses a LUT implementation as the fast scheduler since the goal of the fast scheduler is to achieve almost zero overhead. The LUT stores the most energy-efficient processor in the system for each known task in the target domain.”). 
As per claim 8, Goksoy teaches the system of claim 1. Goksoy also teaches wherein the DAS computing system achieves a scheduling overhead comprising less than about 5 nJ energy and less than about 10 ns runtime for a given medium to low workload and less than about 30 nJ energy and less than about 70 ns runtime for a given heavy workload (Section I, “Introduction”, “The major contributions of this work are as follows. 1) The DAS framework that dynamically combines two schedulers and outperforms each of them. 2) Low Scheduling Overhead: 4.2 nJ energy and 6 ns runtime for low to medium loads; 27.2 nJ energy and 65 ns runtime for heavy workloads.” See also “Conclusion”.).

As per claim 9, Goksoy teaches the system of claim 1. Goksoy also teaches wherein the DAS computing system comprises a processor and a memory communicatively coupled to the processor, the memory storing non-transitory computer executable instructions which, when executed by the processor, perform operations comprising: using the runtime preselection classifier to effect the selective use of the first scheduler or the second scheduler to perform the given scheduling task (Section III(B), “Zero-Delay DAS Preselection Classifier”, “At runtime, a background process periodically updates a preallocated local memory with a small subset of performance counters required by the classifier. After each update, the classifier determines whether the fast F or slow S scheduler should be used for the next available task.” Section III(C), “Fast and Slow (Sophisticated) (F&S) Schedulers”, “The DAS framework can work with any choice of fast and slow scheduling algorithms. This work uses a LUT implementation as the fast scheduler since the goal of the fast scheduler is to achieve almost zero overhead. The LUT stores the most energy-efficient processor in the system for each known task in the target domain. Unknown tasks are mapped to the next available CPU core.”).

As per claim 11, Goksoy teaches the system of claim 1. 
Goksoy also teaches wherein the first scheduler comprises a scheduling overhead having less than about 10 nJ energy and less than about 10 nanoseconds of runtime (Section III(C), “Fast and Slow (Sophisticated) (F&S) Scheduler”, “To profile the scheduling overhead, we developed an implementation using C with inline assembly code. Experiments show that our fast scheduler takes ~7.2 cycles (6 ns on Arm Cortex-A53 at 1.2 GHz) on average and incurs negligible (2.3 nJ) energy overhead.”).

As per claim 12, Goksoy teaches the system of claim 1. Goksoy also teaches wherein the second scheduler comprises a scheduling overhead having more than about 10 nJ energy and more than about 10 nanoseconds of runtime (See Table 1, Fig. 2, and Fig. 3. Section I, “Introduction”, “The major contributions of this work are as follows. 1) The DAS framework that dynamically combines two schedulers and outperforms each of them. 2) Low Scheduling Overhead: 4.2 nJ energy and 6 ns runtime for low to medium loads; 27.2 nJ energy and 65 ns runtime for heavy workloads.”).

As per claim 13, Goksoy teaches the system of claim 1. Goksoy also teaches wherein the DAS computing system comprises a heterogeneous systems-on-chip (SoCs), a high-performance computing system, and/or an embedded device (Section V, “Conclusion”, “In this letter, we presented a DAS framework that combines the benefits of fast and sophisticated schedulers for heterogeneous SoCs.”).

As per claim 14, Goksoy teaches the system of claim 13. Goksoy also teaches wherein the heterogeneous SoC comprises a domain-specific SoC (DSSoCs) (Section IV(A), “Simulation Environment: We use DS3 [8], an open-source DSSoC simulation framework, for the detailed evaluation of DAS. DS3 includes built-in scheduling algorithms, models for PEs, interconnect, and memory systems. The framework has been validated with Xilinx Zynq ZCU102 and Odroid-XU3. 
DSSoC Configuration: We construct a DSSoC configuration that comprises clusters of general-purpose cores and hardware accelerators. The application domains used in this study are wireless communications and radar systems. The DSSoC used in our experiments uses the Arm big.LITTLE architecture with four cores each.”).

As per claim 15, Goksoy teaches a method of scheduling a runtime task in a heterogeneous multi-core computing system, the method comprising using a runtime preselection classifier of the heterogeneous multi-core computing system to effect selective use of a first scheduler or a second scheduler that is slower than the first scheduler to perform a given scheduling task, thereby scheduling the runtime task in the heterogeneous multi-core computing system (Section III(A), “Overview and Preliminaries”, “Unlike the current practice, which is limited to a single scheduler, DAS allows the OS to choose one scheduling policy π ∈ Π = {F, S}, where F and S refer to the fast and slow schedulers, respectively. Once the predecessors of a task are completed, the OS can call either a fast (π = F) or a slow scheduler (π = S) as a function of the system state and workload.” Section III(B), “Zero-Delay DAS Preselection Classifier”, “The OS periodically refreshes the performance counters to reflect the current system state. Each time the features are refreshed, DAS preselection classifier updates its scheduler selection which will be used for the next ready task. This decision will always be up to date since it uses the features that reflect the most recent system state. This way, DAS determines which scheduler should be called even before a task is ready for scheduling.”).

As per claim 17, it is a method claim comprising similar limitations to claim 11, so it is rejected for similar reasons.

As per claim 18, it is a method claim comprising similar limitations to claim 12, so it is rejected for similar reasons. 
As per claim 19, Goksoy teaches the method of claim 15. Goksoy also teaches wherein the method comprises: generating an oracle; selecting one or more features; and, training a model for the runtime preselection classifier (Section III(B), “Zero-Delay DAS Preselection Classifier”, “Offline Classifier Design: The first step to designing the preselection classifier is generating the training data based on the domain applications known at design time. Each scenario in the training data consists of concurrent applications and their respective data rates (e.g., a combination of WiFi transmitter and receiver chains, at a specific upload and download speed).” See also Fig. 1, which is a flowchart describing the flow of the DAS framework: oracle generation, feature selection, and training a model for the classifier.).

As per claim 20, it is a computer readable media claim comprising similar limitations to claim 15, so it is rejected for similar reasons.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 10 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Goksoy as applied to claims 1 and 15 above, and further in view of Raj (US Pub. No. 2017/0220383 A1).

As per claim 10, Goksoy teaches the system of claim 1. 
Goksoy teaches wherein the runtime preselection classifier is configured to effect use of the first scheduler or the second scheduler to perform the given scheduling task based upon one or more workload characteristics (Section III(A), “Overview and Preliminaries”, “The OS collects a set of performance counters during the workload execution to enable two aspects for the DAS framework: 1) precise assessment of the system state and 2) desirable features for the classifier to dynamically switch between the fast and slow schedulers. Table I presents the performance counters collected by DAS. For a DSSoC with 19 PEs, it uses 62 counters.” See also Table 1.).

Although Goksoy teaches scheduling decisions based on task/workload characteristics, Goksoy fails to teach the workload characteristics being one or more of: a function of application arrival rate, a number of application instances being processed, and a number of scheduling tasks present in a ready queue. Accordingly, Raj teaches one or more workload characteristics that are selected from the group consisting of: a function of application arrival rate, a number of application instances being processed, and a number of scheduling tasks present in a ready queue (¶ [0082]-[0084], “According to some embodiments, in conjunction with managing processing times at the workload agents, the scheduler actively manages the job flow to individual workload agents based on the throughputs of the workload agents. The scheduler accomplishes this by maintaining a variable for each workload agent that represents the ability of the workload agent to process new jobs. For example, the scheduler may maintain a variable RcvdN for workload agent N. 
The variable RcvdN takes into account the incoming arrival rate of jobs to agent N as well as the throughput of agent N, and is updated based on the number of jobs that are submitted to agent N in each time interval…The scheduler then compares the calculated value of RcvdN to the number of jobs that are available to be scheduled for processing by Agent N, defined as NJobsN. If RcvdN is greater than or equal to the number of jobs that are to be scheduled for processing by Agent N, then all pending jobs (NJobsN) can be scheduled…”).

Goksoy and Raj are considered to be analogous to the claimed invention because they are in the same field of task scheduling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the workload characteristics of Goksoy to account for task arrival rate and total number of tasks being processed as taught in Raj to arrive at the claimed invention. The motivation to modify Goksoy with the teachings of Raj is that monitoring such workload characteristics allows the scheduler to manage job flow and schedule jobs according to the current job flow.

As per claim 16, it is a method claim comprising similar limitations to claim 10, so it is rejected for similar reasons.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN ROBERT DAKITA EWALD whose telephone number is (703)756-1845. The examiner can normally be reached Monday-Friday: 9:00-5:30 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lewis Bullock, can be reached at (571)272-3759. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.D.E./
Examiner, Art Unit 2199

/LEWIS A BULLOCK JR/
Supervisory Patent Examiner, Art Unit 2199
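For orientation, the mechanism the §102 rejection turns on — a runtime preselection classifier that picks between a near-zero-overhead lookup-table (LUT) scheduler and a slower, more sophisticated (ETF-style) scheduler based on recent performance counters — can be sketched roughly as follows. All names, thresholds, and data here are illustrative assumptions for this note, not the Goksoy et al. implementation:

```python
# Illustrative DAS-style two-scheduler selection (assumed names/thresholds).

def fast_lut_scheduler(task, lut, default_pe):
    # Near-zero overhead: one dictionary lookup per known task type;
    # unknown tasks fall back to a default processing element (PE).
    return lut.get(task["type"], default_pe)

def slow_etf_scheduler(task, pe_ready_times, exec_time):
    # ETF-style: pick the PE with the earliest estimated finish time.
    return min(pe_ready_times, key=lambda pe: pe_ready_times[pe] + exec_time[pe])

def preselect(counters, queue_threshold=4):
    # Classifier stub: under light load the fast scheduler suffices;
    # when the ready queue grows, task wait times dominate and the slow
    # scheduler's better decisions pay for their overhead.
    return "slow" if counters["ready_queue_len"] >= queue_threshold else "fast"

def schedule(task, counters, lut, pe_ready_times, exec_time):
    # The selection is precomputed from counters before the task is ready,
    # so switching schedulers adds no delay on the scheduling path itself.
    if preselect(counters) == "fast":
        return fast_lut_scheduler(task, lut, default_pe="cpu0")
    return slow_etf_scheduler(task, pe_ready_times, exec_time)

counters = {"ready_queue_len": 6}          # heavy load -> slow scheduler
lut = {"fft": "accel0"}
pe_ready = {"cpu0": 5.0, "accel0": 12.0}   # current PE availability times
exec_t = {"cpu0": 10.0, "accel0": 2.0}     # per-PE execution estimates
print(schedule({"type": "fft"}, counters, lut, pe_ready, exec_t))  # accel0 (12+2 < 5+10)
```

In the reference as quoted, the "classifier" is a trained model over dozens of performance counters rather than a single queue-length threshold; the threshold stub above merely stands in for that decision.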

Prosecution Timeline

Sep 19, 2023
Application Filed
Feb 06, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602267
DYNAMIC APPLICATION PROGRAMMING INTERFACE MODIFICATION TO ADDRESS HARDWARE DEPRECIATION
2y 5m to grant • Granted Apr 14, 2026
Patent 12572377
TRANSMITTING INTERRUPTS FROM A VIRTUAL MACHINE (VM) TO A DESTINATION PROCESSING UNIT WITHOUT TRIGGERING A VM EXIT
2y 5m to grant • Granted Mar 10, 2026
Patent 12547465
METHOD AND SYSTEM FOR VIRTUAL DESKTOP SERVICE MANAGER PLACEMENT BASED ON END-USER EXPERIENCE
2y 5m to grant • Granted Feb 10, 2026
Patent 12536041
SYSTEM AND METHOD FOR DETERMINING MEMORY RESOURCE CONFIGURATION FOR NETWORK NODES TO OPERATE IN A DISTRIBUTED COMPUTING NETWORK
2y 5m to grant • Granted Jan 27, 2026
Patent 12524281
C²MPI: A HARDWARE-AGNOSTIC MESSAGE PASSING INTERFACE FOR HETEROGENEOUS COMPUTING SYSTEMS
2y 5m to grant • Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 99% (+55.6%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 21 resolved cases by this examiner. Grant probability derived from career allow rate.
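A hypothetical reconstruction of how headline figures like these could be derived from resolved-case outcomes — the page does not disclose its exact formulas, so the methodology and the with/without-interview inputs below are assumptions chosen only to match the displayed values:

```python
# Assumed methodology, not the dashboard's actual formulas.

def allow_rate(granted, resolved):
    """Career allow rate: share of resolved cases that granted."""
    return granted / resolved

career = allow_rate(16, 21)     # 16 granted of 21 resolved (from the page)
print(f"{career:.1%}")          # 76.2%, displayed as 76%

# "Interview lift" plausibly reads as the gap in allow rate between resolved
# cases with vs. without an examiner interview; 0.99 and 0.434 are assumed
# inputs consistent with the displayed 99% and +55.6%.
lift = 0.99 - 0.434
print(f"+{lift:.1%}")           # +55.6%
```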
