Seventh International Workshop on
Software and Performance

WOSP 2008
Princeton, NJ, USA
June 23-26, 2008

 
  TUTORIALS

Tutorial 1:

"Transformations from software models to quality models: mechanisms, approaches, technologies, tools"
Vittorio Cortellessa, Antinisca di Marco and Luca Berardinelli

   Software quality is a multi-attribute property that comprises performance, reliability, and other characteristics. It is widely recognized that automation support is crucial to make quality validation an integrated activity along the software lifecycle. The ease of annotating software models with extra-functional parameters (e.g., the operational profile) and the automated translation of the annotated models into "ready-to-validate" models are the key challenges in this direction. Several methodologies have been introduced in the last few years to address these challenges. The tutorial introduces attendees to the main approaches and technologies for annotating software models and transforming them into models suited to validating software quality.
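
   As a toy illustration of transforming an annotated software model into a "ready-to-validate" quality model, the following Python sketch maps a set of design steps annotated with service demands and an operational profile onto a simple M/M/1 performance model. All names (Step, to_open_queueing_model) and numbers are illustrative assumptions, not taken from any specific tool or methodology covered in the tutorial.

    # Hypothetical sketch: an annotated design model reduced to a simple
    # open queueing (M/M/1) performance model.
    from dataclasses import dataclass

    @dataclass
    class Step:
        name: str
        service_demand: float   # annotation: mean CPU demand per invocation (s)
        probability: float      # annotation: operational profile (fraction of calls)

    def to_open_queueing_model(steps, arrival_rate):
        """Aggregate the annotated steps into an M/M/1 approximation."""
        demand = sum(s.service_demand * s.probability for s in steps)
        utilization = arrival_rate * demand
        if utilization >= 1.0:
            raise ValueError("the model predicts saturation")
        response_time = demand / (1.0 - utilization)   # M/M/1 response time
        return {"utilization": utilization, "response_time": response_time}

    # Usage: a design with two execution paths weighted by the operational profile.
    model = [Step("browse", 0.020, 0.8), Step("checkout", 0.150, 0.2)]
    print(to_open_queueing_model(model, arrival_rate=10.0))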

   The tutorial roadmap is as follows: (i) we introduce the topic of quality validation of software systems; (ii) we describe the main mechanisms and approaches used to annotate and transform models; (iii) we introduce the most recent technologies that support this goal and on which the existing tools are based; (iv) we then present some of the tools that have recently been built on top of these technologies; (v) we classify the transformation methodologies along different dimensions and parameters; (vi) finally, we give some ideas about future directions in this research area.

Tutorial 2:

"Software Performance and Power Management in Large Scale Servers"
Daniel Mosse and Alexandre Ferreira

   Today, being green is fashionable, but for computer scientists and professionals it has become a necessity. Newer designs aim not only for higher performance and lower cost, but also for lower energy consumption and better thermal characteristics. These last two characteristics have gained importance and today may even determine the success of a design. In almost all existing systems, from embedded systems to very large clusters, power and temperature are critical. Examples range from cellular phones that drain their batteries too quickly, to laptop computers that are too hot to use comfortably, to large clusters that require huge amounts of cooling and megawatts of power.

   Almost all current CPUs have power and thermal management mechanisms that allow systems to reduce energy consumption with low performance impact. Throttling and dynamic voltage scaling are well-studied and common solutions to reduce power. These mechanisms are also used to protect CPUs from excess temperature, by applying them whenever a temperature threshold is reached. More recently, it has been recognized that other components in a computer system (e.g., memory, disks, high-end graphics cards, networking elements) should also be studied with respect to power and thermal problems.
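
   As a minimal sketch of the temperature-threshold protection just described, the following Python loop lowers the CPU frequency when a measured temperature exceeds a limit and restores it once the CPU cools down. The frequency list, the threshold, and the read_cpu_temperature/set_cpu_frequency functions are placeholders for platform-specific interfaces, not an actual driver.

    import random
    import time

    FREQUENCIES_MHZ = [800, 1600, 2400]   # assumed available frequency steps
    TEMP_LIMIT_C = 85.0                   # assumed protection threshold

    def read_cpu_temperature():
        # Placeholder: real code would read a hardware sensor.
        return random.uniform(60.0, 95.0)

    def set_cpu_frequency(mhz):
        # Placeholder: real code would program the DVFS interface.
        print("setting CPU frequency to", mhz, "MHz")

    def thermal_guard(steps=5, period_s=1.0):
        level = len(FREQUENCIES_MHZ) - 1          # start at the highest frequency
        for _ in range(steps):
            temp = read_cpu_temperature()
            if temp > TEMP_LIMIT_C and level > 0:
                level -= 1                        # too hot: drop one frequency step
            elif temp < TEMP_LIMIT_C - 10 and level < len(FREQUENCIES_MHZ) - 1:
                level += 1                        # cool again: raise one step
            set_cpu_frequency(FREQUENCIES_MHZ[level])
            time.sleep(period_s)

    thermal_guard()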

   Most power management mechanisms imply a performance loss, so they should be used judiciously. Some are easy to control and largely transparent to the software layers (e.g., CPU frequency scaling), but others are not (e.g., CPU throttling). The hardware therefore provides the means, but the software running on that hardware has to provide the intelligence about when and how to use these mechanisms, keeping the performance impact as small as possible while making the energy savings as large as possible. Some systems are even designed with the restriction that not all components can be used at full capacity simultaneously (to avoid extreme power consumption), making resource allocation fundamental. An example is when the memory and the CPU share a power budget.
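
   The shared power budget mentioned above can be illustrated with a small allocation sketch: the CPU and the memory cannot both run at full power, so a fixed joint budget is split according to how memory-bound the current phase is. The budget and the power ranges are assumed values chosen only for illustration.

    TOTAL_BUDGET_W = 120.0        # assumed joint CPU+memory power budget
    CPU_RANGE_W = (30.0, 95.0)    # assumed min/max CPU power
    MEM_RANGE_W = (15.0, 60.0)    # assumed min/max memory power

    def split_budget(memory_boundedness):
        """memory_boundedness in [0, 1]: 0 = CPU-bound phase, 1 = memory-bound."""
        cpu_lo, cpu_hi = CPU_RANGE_W
        mem_lo, mem_hi = MEM_RANGE_W
        # Give memory a larger share of the headroom as the phase becomes
        # more memory-bound, keeping the total within the joint budget.
        headroom = TOTAL_BUDGET_W - cpu_lo - mem_lo
        mem = mem_lo + min(headroom * memory_boundedness, mem_hi - mem_lo)
        cpu = min(TOTAL_BUDGET_W - mem, cpu_hi)
        return cpu, mem

    print(split_budget(0.2))   # CPU-bound phase: most of the budget goes to the CPU
    print(split_budget(0.9))   # memory-bound phase: power shifts toward the memory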

   Software algorithms to optimize energy consumption have been developed and used in many applications. Some of these algorithms drive the DVS mechanisms based on measured system performance; for example, the Linux ondemand governor controls frequency and voltage settings based on the load measured on the CPU. However, the more effective and more recent algorithms use knowledge of application behavior to make good decisions towards power savings. Applications can themselves provide hints or directly request specific performance/power compromises to obtain the best operating point. An application that has memory-intensive and CPU-intensive portions can request that power be diverted from the CPU to the memory and vice versa, reducing power consumption without performance loss, or even with a performance gain.
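
   A simplified, ondemand-style policy can serve as a sketch of the load-driven approach described above: the frequency is raised to the maximum when the measured load exceeds a threshold and otherwise scaled roughly in proportion to the load. This is an approximation of the idea behind the Linux governor, not its actual implementation, and the frequencies and threshold are assumed values.

    FREQUENCIES_MHZ = [800, 1200, 1600, 2000, 2400]   # assumed available frequencies
    UP_THRESHOLD = 0.80                               # above this load, run at maximum speed

    def choose_frequency(load):
        """load in [0, 1]: fraction of time the CPU was busy in the last window."""
        if load >= UP_THRESHOLD:
            return FREQUENCIES_MHZ[-1]
        # Otherwise pick the lowest available frequency that covers a target
        # scaled in proportion to the load.
        target = FREQUENCIES_MHZ[-1] * load / UP_THRESHOLD
        candidates = [f for f in FREQUENCIES_MHZ if f >= target]
        return candidates[0] if candidates else FREQUENCIES_MHZ[-1]

    for load in (0.10, 0.45, 0.90):
        print("load", load, "->", choose_frequency(load), "MHz")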

   The objective of this tutorial is to show why power, energy, and temperature are also important components of high-performance systems, and how they can be managed as first-class resources. During the 3-hour tutorial, we will first present an overview of the primary hardware mechanisms available in a variety of systems. We will then describe the most common algorithms that are currently applied to manage these mechanisms. Finally, we will look at future trends in power/energy/temperature management and at the benefits that new management schemes will bring to a variety of systems and applications. Examples range from embedded devices to large clusters.