DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Hereinafter, “it would have been obvious” should be read as “it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention”.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 4-7, 10-13, 15-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Eastep et al. (US 2016/0179162) in view of Kuhlmann et al. (US 2004/0163000).
In regards to claims 1, 12, 20: Eastep et al. teaches a computer-implemented method, comprising: determining a metric (the “objective function metric”) associated with power and energy management (a power budget) in a high performance computing (HPC) system ([0002] “High Performance Computing (HPC) systems may include a large number of nodes connected by a fabric for distributed computing. Moreover, an application is divided into tasks that run concurrently across the nodes in the HPC system”), the HPC system comprising a plurality of nodes (nodes) running a plurality of jobs (tasks; Abstract: “The policy is coordinated across the plurality of compute nodes to manage a job to one or more objective functions, where the job includes a plurality of tasks that are to run concurrently on the plurality of compute nodes. Other embodiments are also disclosed and claimed”), a respective node comprising one or more processing elements (cores; [0009] “There may be one or multiple tasks mapped to each node, and a single task may run across one or multiple cores”; [0015] “In an embodiment, the processor 102-1 may include one or more processor cores 106-1 through 106-M (referred to herein as “cores 106,” or more generally as “core 106”), a cache 108 (which may be a shared cache or a private cache in various embodiments), and/or a router 110. The processor cores 106 may be implemented on a single integrated circuit (IC) chip. This may also be implemented with multiple integrated circuits in the same package”), and the metric being based on a factor which is configurable ([0012] “More specifically, a new performance and power management framework is described for coordinating software and hardware policy across (e.g., all) nodes in a job, while managing the job to configurable objective functions (e.g. maximum performance within a job power cap, maximum efficiency within a job power cap, etc.). One use of the framework is to solve the load balancing problem described above”), an amount of energy consumed by the HPC system ([0068] “maximize performance while meeting a power cap, maximize energy efficiency while meeting a power cap”), and a runtime associated with the plurality of jobs ([0045] “The strengths of these opposing forces are systematically tuned at runtime. One embodiment measures how stable and predictable the relationship between the objective function metric and the policy is”; Abstract: “The policy is coordinated across the plurality of compute nodes to manage a job to one or more objective functions, where the job includes a plurality of tasks that are to run concurrently on the plurality of compute nodes”); calculating the metric at a predetermined time interval (an average or a median is a calculation of the samples; [0050] “The objective function may be sampled upon phase change events, e.g., sampled at fixed time intervals coarser than phase durations, or sampled at fixed time intervals finer than phase durations.” [0036] “Each child agent's performance may be an average or median (or other functions) of some number of samples”); identifying a global policy for providing power to the HPC system ([0038] “As described herein, HGPPM embodiments can tune different kinds of policies (beyond power budgets) and tune more than one type at once. In this mode, HGPPM techniques compose the policies into a joint policy”); and changing the global policy dynamically ([0039] “The outputs of the RL agent are new policy settings (e.g. a new setting for the number of threads per application process or a new subdivision of the node power budget among hardware components of the node”) by: configuring the factor in the metric to a value which corresponds to a new global policy ([0049] “As a result, the parent cannot set a new policy (e.g., a new power budget for the child) before the child is ready. This self-configuration strategy has the advantage of maximizing responsiveness of global policy optimization. There are many ways to determine when a good policy is reached in accordance with some embodiments. One canonical way is convergence testing: a good policy has been reached if the change in policy over the last k iterations has been less than epsilon. k and epsilon are free parameters that may be tuned according to offline manual procedures”); and setting, based on the configured factor, an assigned power corresponding to a minimum of the metric (if a new policy is tried, it is set; [0050] “At this level of the hierarchy, the agents can choose when to sample the objective function metric and try a new policy option”; maximizing energy efficiency is minimizing consumption while meeting performance requirements).
Eastep et al. does not state that the power is assigned “per processing element”. Kuhlmann et al. teaches this ([0044] “These requirements are specified in the NP element PM table along with dynamic utilization thresholds for each PM state change and the current power mode of each processing element. If the current mode is maxmode, step 622 indicates that the process is done (626). If there are still power-save modes available for testing, the last mode is updated to mode +1 to show that all power-save modes less than or equal to the mode that was entered are no longer under consideration. Step 606 is then entered again”). It would have been obvious to adjust the power factor per processing element because this would have given greater granularity in control of power efficiency.
Eastep et al. teaches instructions performing the claimed functions ([0008] “Further, various aspects of embodiments may be performed using various means, such as integrated semiconductor circuits (“hardware”), computer-readable instructions organized into one or more programs (“software”), or some combination of hardware and software”).
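For illustration only (not part of the rejection): the convergence test that Eastep [0049] describes — a good policy has been reached if the change in policy over the last k iterations has been less than epsilon — can be sketched as follows. This is a minimal sketch assuming scalar policy values (e.g., a node power budget in watts); the function and variable names are hypothetical, not from the reference.

```python
def policy_converged(policy_history, k, epsilon):
    """Convergence test per Eastep [0049]: a good policy has been reached
    if the change in policy over the last k iterations is less than epsilon."""
    if len(policy_history) < k + 1:
        return False  # not enough iterations observed yet
    recent = policy_history[-(k + 1):]
    # Largest change across the last k iterations
    max_change = max(abs(b - a) for a, b in zip(recent, recent[1:]))
    return max_change < epsilon

# Example: a hypothetical power-budget policy settling over successive iterations
history = [100.0, 80.0, 72.0, 70.5, 70.4, 70.4, 70.4]
print(policy_converged(history, k=3, epsilon=0.5))  # True
```

As the reference notes, k and epsilon are free parameters that may be tuned offline.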
Eastep et al. teaches each node having a processor but does not expressly mention that a processor executes the “computer-readable instructions”; however, for the instructions to perform the specified functions, a processor executing the instructions is inherent.
In regards to claims 2, 13: Eastep et al. teaches ([0036] “HGPPM via annotations made by the programmer or inferred automatically by analysis of performance counters or other means), the runtime of the application phases completed so far between milestones, the rate of instructions retired, the rate of main memory accesses, etc.”). A rate is defined as “a quantity, amount, or degree of something measured per unit of something else”; thus Eastep et al. teaches a number of instructions retired per unit time. “Completed so far” indicates a prior time interval. Official notice is taken that the second is a unit of time measurement. It would have been obvious to measure the rate of instructions retired in seconds because the second is a standard measure of time.
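For illustration only (not part of the rejection): the rate relied on above — instructions retired per unit of time, with the second as the unit — reduces to a simple difference of counter samples over an interval. A minimal sketch with hypothetical counter values:

```python
def instructions_retired_per_second(count_start, count_end, interval_seconds):
    """A rate is a quantity measured per unit of something else:
    instructions retired during the interval, divided by seconds."""
    return (count_end - count_start) / interval_seconds

# Hypothetical performance-counter samples taken one second apart
print(instructions_retired_per_second(1_000_000, 3_500_000, 1.0))  # 2500000.0
```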
In regards to claims 4, 15: Eastep et al. teaches (Abstract: “determination of a policy for power and performance management across the plurality of compute nodes”; [0012] “More specifically, a new performance and power management framework is described for coordinating software and hardware policy across (e.g., all) nodes in a job, while managing the job to configurable objective functions (e.g. maximum performance within a job power cap, maximum efficiency within a job power cap, etc.)”; [0025] “Moreover, HGPPM's hierarchical learning framework further improves scalability and increases responsiveness of load balancing for better application performance or efficiency”; [0035] “At each granularity, performance is compared dynamically, and power is steered from the computational elements that are ahead to the elements that are behind (with reference to reaching the next milestone in the sequence and reaching the barrier) to maximize or improve application performance”).
In regards to claims 5, 16: Eastep et al. teaches maximal application performance achieved by changing a factor, that factor being power steering ([0035] “At each granularity, performance is compared dynamically, and power is steered from the computational elements that are ahead to the elements that are behind (with reference to reaching the next milestone in the sequence and reaching the barrier) to maximize or improve application performance”).
In regards to claims 6, 17: Eastep et al. teaches changing a policy based on the external condition of a new global policy ([0025] “Examples of new policy and optimizations enabled by HGPPM embodiments include but are not limited to: (a) tuning applications for maximum performance or efficiency through a new policy knob controlling the number of cores that each application task may utilize; (b) tuning the processor for better performance or efficiency through a new policy knob controlling how aggressively the processor speculatively executes arithmetic operations or memory prefetch operations, etc. The design of new policies and optimizations is considered to be critical for meeting performance and efficiency challenges of Exascale systems, and HGPPM embodiments are the first performance and power management framework capable of orchestrating such optimizations. Moreover, HGPPM's hierarchical learning framework further improves scalability and increases responsiveness of load balancing for better application performance or efficiency”).
In regards to claim 7: Eastep et al. teaches “a new policy knob”; a knob is a single point of access.
In regards to claim 10: Eastep et al. teaches power management based upon an algorithm ([0021] “Holistic Global Performance and Power Management (HGPPM), which is at least partially based on a hierarchical machine learning algorithm in one embodiment”). Kuhlmann et al. teaches (Abstract: “The monitoring and control are implemented through the use of a power management state change algorithm”).
In regards to claim 11: Eastep et al. teaches enforcing a new policy specific to respective jobs (“job load balancing”) and steering power to each respective job in the nodes ([0035] “An embodiment maps the process of job load balancing to the abstractions of Reinforcement Learning by tasking each agent with learning the best division of its input power budget among its children and defining the objective function such that a) discrepancies in performance of the child agents are penalized and b) aggregate performance is rewarded, where aggregate performance is taken to be the minimum performance obtained by the child agents” … “At each granularity, performance is compared dynamically, and power is steered from the computational elements that are ahead to the elements that are behind (with reference to reaching the next milestone in the sequence and reaching the barrier) to maximize or improve application performance”).
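For illustration only (not part of the rejection): the load-balancing objective quoted from Eastep [0035] — penalize discrepancies among child agents, reward aggregate performance taken as the minimum child performance — can be sketched as below. The penalty weighting is a hypothetical choice; the reference does not give a formula.

```python
def objective(child_performances, discrepancy_weight=1.0):
    """Per Eastep [0035]: reward the minimum child performance (aggregate)
    and penalize the spread in performance among the children."""
    aggregate = min(child_performances)                 # reward term
    discrepancy = max(child_performances) - aggregate   # penalty term
    return aggregate - discrepancy_weight * discrepancy

# A balanced division of the power budget scores higher than a skewed one
print(objective([10.0, 10.0, 10.0]) > objective([14.0, 12.0, 6.0]))  # True
```

This captures why power is steered from elements that are ahead to elements that are behind: raising the slowest child raises the aggregate term and shrinks the penalty term at once.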
Claims 3 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Eastep et al. (US 2016/0179162) in view of Kuhlmann et al. (US 2004/0163000) as applied to claim 1 above, and further in view of Barsness et al. (US 2005/0198636).
In regards to claims 3, 14: Eastep et al. teaches ([0050] “The objective function may be sampled upon phase change events, e.g., sampled at fixed time intervals coarser than phase durations, or sampled at fixed time intervals finer than phase durations.”). There is no indication that this fixed time interval is based on historical data; it is based on phase durations, which may or may not be historical. Kuhlmann et al. teaches ([0010] “The medium determines when each element in the system should be run based on a historical pattern of system utilization”) but does not state that the time interval(s) are based on historical data. Barsness et al. teaches ([0046] “Each of the intervals is selected as a measuring unit that will serve as a marker to facilitate a determination of whether a batch job will complete its run in the time defined. Preferably, user selected parameters define these time intervals. For example, the servicing period can be divided into four (4) time segments, such as through a GUI by a system user. Alternatively, the time intervals may be selected automatically based on other criteria, such as historical data for particular types of files. The time intervals need not be equal.”). It would have been obvious to use historical data to set the time interval because this would have allowed adaptation based on usage.
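For illustration only (not part of the rejection): selecting a sampling interval automatically from historical data, as in the Barsness citation, could be sketched as below. The specific rule (a fraction of the median observed phase duration) is hypothetical; the references give no formula.

```python
def interval_from_history(past_phase_durations_s, fraction=0.5):
    """Pick a sampling interval from historical phase durations,
    e.g., a fraction of the median observed duration (hypothetical rule)."""
    ordered = sorted(past_phase_durations_s)
    median = ordered[len(ordered) // 2]
    return fraction * median

# Hypothetical history of observed phase durations, in seconds
print(interval_from_history([2.0, 4.0, 4.0, 6.0, 8.0]))  # 2.0
```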
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Eastep et al. (US 2016/0179162) in view of Kuhlmann et al. (US 2004/0163000) as applied to claim 1 above, and further in view of Bodas et al. (US 2016/00548779).
In regards to claims 8, 18: Eastep et al. teaches using a knob to set a new policy and configuring factors based on policies, but does not state that the policy is set by an administrative user. Bodas et al. teaches ([0023] “HPC System Power Manager 300 also receives administrative policies from HPC System Administrator 202”). It would have been obvious to allow an administrator to set policies because this is a purpose of an administrator.
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Eastep et al. (US 2016/0179162) in view of Kuhlmann et al. (US 2004/0163000) as applied to claim 1 above, and further in view of Tomi (US 2012/0102351).
In regards to claims 9, 19: Eastep et al. teaches calculating the amount of energy used per task/job and the presence of a display, but does not teach displaying the amount of energy/power being used by a job. Tomi teaches ([0087] “In a power consumption amount graph 85, the date time and the power consumption amount consumed by each job are displayed as a graph as information of jobs executed by the user A in the applicable month”). It would have been obvious to display the amount of power/energy consumed per job/task in a node because this would have provided a user or administrator with information for setting a new policy.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL R MYERS whose telephone number is (571)272-3639. The examiner can normally be reached via telework, Monday-Friday, starting between 7 and 8 a.m. and leaving between 4 and 5 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jaweed Abbaszadeh can be reached at 571-270-1640. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Paul R. MYERS/ Primary Examiner, Art Unit 2176