
Low-Power Techniques For Architecture And Compiler Optimization

Posted on: 2007-07-29
Degree: Doctor
Type: Dissertation
Country: China
Candidate: H Z Yi
Full Text: PDF
GTID: 1118360215970565
Subject: Computer Science and Technology

Abstract/Summary:
Power consumption has become an obstacle on the road to higher-performance computer systems. First, the continuing growth of power consumption has increased packaging and cooling costs. In addition, higher temperatures accelerate a large number of failure mechanisms in integrated circuits (ICs) and cause frequent failures of computer systems.

Embedded systems for mobile computing are developing rapidly, and a crucial parameter of mobile systems is battery life. Although IC performance has increased rapidly in recent years, battery technology has improved slowly, so effective low-power techniques are of great importance for battery-powered mobile systems. At the same time, the energy consumed by IT equipment has grown steadily year by year, and this large energy consumption makes power management necessary to improve energy efficiency. Effective low-power techniques are therefore imperative not only for mobile systems but also for high-performance systems.

Many novel low-power techniques have been proposed at different levels, including the circuit, logic, architecture, and software levels, in order of increasing abstraction. This thesis aims to reduce energy consumption through architecture design and compiler optimization. First, the architecture is the interface between software and hardware, and it significantly affects both low-power hardware design and software-directed power management; the thesis therefore investigates the energy efficiency of microprocessor architectures and analyzes parallel processing as an energy-efficient architectural technique. Second, new hardware mechanisms such as dynamic voltage scaling (DVS) and turning off unused system units (TOSU) have emerged and are used extensively by the software-directed work in this thesis.
In sum, the thesis consists of three parts: the first investigates the energy efficiency of microprocessor architecture; the second presents methods of energy optimization in real-time systems; the last presents methods of energy optimization in parallel systems. The main contributions of the thesis are as follows:

1. A model of the energy efficiency of microprocessor architecture is proposed. Because it eliminates the influence of process technology and supply voltage, the model can be used to evaluate the energy efficiency of different architecture designs. Analytical results for typical microprocessors show that the model is a reasonable metric of energy efficiency, and model analyses of multiple architecture techniques show that parallel processing and localizing the use of system units are the primary means of improving energy efficiency.

2. A dynamic voltage scaling method integrated with estimation of the reduced worst-case execution time (WCET) is proposed. In contrast to prior work, dynamic voltage scaling and WCET analysis are combined into a unified framework, realized in a simulation environment named RTLPower. Simulation results on embedded applications show that the new method can reduce energy by up to 50% compared with no power management.

3. Two methods for optimizing the placement of dynamic voltage scaling points, OPOT and OPTO, are proposed. OPOT is proven to be an optimal placement method in the absence of time overhead; OPTO is an optimizing placement method. Simulation results on embedded applications show that both methods reduce energy consumption effectively.

4. Two real-time voltage adjustment schemes are proposed. One is directed by the optimal frequency configuration of a fixed execution pattern; the other further considers the maximum frequency of the system. Compared with previous voltage schemes, the new schemes make use of slack time more efficiently, and simulation results on synthetic applications show that they obtain the largest energy reduction.

5. Compiler-directed energy-time tradeoff on DVS-enabled parallel systems is proposed. Unlike prior work, the new method uses compiler techniques to automatically form communication regions and computation regions, and the optimal frequency and voltage are assigned to each region by solving a 0-1 integer-programming problem. A performance/power parallel simulation environment, MIPSpar, is established; simulation results on MPI benchmark applications show that the method saves 20-40% of energy consumption with less than 5% performance degradation.

6. A compiler-directed technique for power-aware on/off control of network links is proposed. In contrast to previous history-based work, the new technique uses compiler techniques to automatically divide MPI applications into communication intervals and computation intervals, avoiding the time overhead of state switching. Simulation results on MPI applications show that the proposed method reduces the energy consumption of interconnection networks by 20-70%, with less than 1% increase in network latency and performance degradation.
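To illustrate the idea behind contribution 2 (not the thesis's RTLPower implementation), the sketch below shows the core of WCET-driven dynamic voltage scaling: when the remaining worst-case execution time shrinks at run time, slack appears before the deadline, and the processor can drop to the lowest frequency that still guarantees the deadline. The function name, frequency levels, and numbers are hypothetical.

```python
# Hypothetical sketch of WCET-driven DVS slack reclamation.
# Dynamic power scales roughly with f * V^2 and V scales with f, so energy
# per cycle falls superlinearly as frequency is lowered; the deadline is the
# only constraint on how far we can drop.

def scale_frequency(remaining_wcet_cycles, time_to_deadline, f_levels):
    """Pick the lowest frequency (Hz) that still meets the deadline.

    remaining_wcet_cycles: cycles left in the worst case
    time_to_deadline: seconds until the hard deadline
    f_levels: available discrete frequency levels, any order
    """
    feasible = [f for f in sorted(f_levels)
                if remaining_wcet_cycles / f <= time_to_deadline]
    # If even the fastest level cannot meet the deadline, run flat out.
    return feasible[0] if feasible else max(f_levels)

# 4e8 cycles remain, 1 s to the deadline: 500 MHz suffices, 200 MHz does not.
print(scale_frequency(4e8, 1.0, [2e8, 5e8, 1e9]))  # -> 500000000.0
```

A real scheme would re-estimate the remaining WCET at compiler-placed adjustment points (as in OPOT/OPTO) rather than at arbitrary times, since each voltage switch has its own time and energy cost.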
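Contribution 5 assigns one frequency/voltage level to each compiler-formed region by solving a 0-1 integer program. The toy sketch below conveys the structure of that selection problem; for clarity it enumerates the 0-1 choices exhaustively instead of calling an ILP solver, and all region sizes, frequency levels, and energy-per-cycle figures are invented for illustration.

```python
from itertools import product

# Hypothetical sketch of per-region frequency assignment: pick one level per
# region (the 0-1 decision variables) to minimize energy subject to a bound
# on total slowdown, solved here by brute force rather than an ILP solver.

def assign_frequencies(regions, levels, max_slowdown):
    """regions: cycle count per region; levels: list of (freq_hz, energy_per_cycle).

    Returns (choice, energy): the per-region (freq, epc) selection with minimum
    energy whose run time is at most max_slowdown times the all-fastest time.
    """
    f_max = max(f for f, _ in levels)
    base_time = sum(c / f_max for c in regions)
    best, best_energy = None, float("inf")
    for choice in product(levels, repeat=len(regions)):
        time = sum(c / f for c, (f, _) in zip(regions, choice))
        energy = sum(c * e for c, (_, e) in zip(regions, choice))
        if time <= max_slowdown * base_time and energy < best_energy:
            best, best_energy = choice, energy
    return best, best_energy

# Made-up example: a large computation region and a small communication region.
regions = [1_000_000_000, 100_000_000]      # cycles
levels = [(1e9, 1.0), (8e8, 0.7)]           # (frequency, energy per cycle)
choice, energy = assign_frequencies(regions, levels, max_slowdown=1.05)
```

In this example only the small communication region can afford the slower level within the 5% slowdown budget, which mirrors the thesis's observation that communication regions, where the processor mostly waits on the network, are the natural place to lower voltage.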
Keywords/Search Tags: Compiler, Low Power, Energy Optimization, Architecture, Dynamic Voltage Scaling, Real-Time, High-Performance, Interconnection, Network Links