Every joule is precious: the case for revisiting operating system design for energy efficiency

Amin Vahdat, A. Lebeck, C. Ellis
{"title":"每一焦耳都是宝贵的:重新审视操作系统设计以提高能源效率的理由","authors":"Amin Vahdat, A. Lebeck, C. Ellis","doi":"10.1145/566726.566735","DOIUrl":null,"url":null,"abstract":"By some estimates, there will be close to one billion wireless devices capable of Internet connectivity within five years, surpassing the installed base of traditional wired compute devices. These devices will take the form of cellular phones, personal digital assistants (PDA's), embedded processors, and \"Internet appliances\". This proliferation of networked computing devices will enable a number of compelling applications, centering around ubiquitous access to global information services, just in time delivery of personalized content, and tight synchronization among compute devices/appliances in our everyday environment. However, one of the principal challenges of realizing this vision in the post-PC environment is the need to reduce the energy consumed in using these next-generation mobile and wireless devices, thereby extending the lifetime of the batteries that power them. While the processing power, memory, and network bandwidth of post-PC devices are increasing exponentially, their battery capacity is improving at a more modest pace. Thus, to ensure the utility of post-PC applications, it is important to develop low-level mechanisms and higher-level policies to maximize energy efficiency. In this paper, we propose the systematic re-examination of all aspects of operating system design and implementation from the point of view of energy efficiency rather than the more traditional OS metric of maximizing performance. In [7], we made the case for energy as a first-class OS-managed resource. We emphasized the benefits of higher-level control over energy usage policy and the application/OS interactions required to achieve them. This paper explores the implications that this major shift in focus can have upon the services, policies, mechanisms, and internal structure of the OS itself based on our initial experiences with rethinking system design for energy efficiency. Our ultimate goal is to design an operating system where major components cooperate to explicitly optimize for energy efficiency. A number of research efforts have recently investigated aspects of energy-efficient operating systems (a good overview is available at [16, 20]) and we intend to leverage existing \"best practice\" in our own work where such results exist. However, we are not aware of any systems that systematically revisit system structure with energy in mind. Further, our examination of operating system functionality reveals a number of opportunities that have received little attention in the literature. To illustrate this point, Table 1 presents major operating system functionality, along with possible techniques for improving power consumption characteristics. Several of the techniques are well studied, such as disk spindown policies or adaptively trading content fidelity for power [8]. For example, to reduce power consumption for MPEG playback, the system could adapt to a smaller frame rate and window size, consuming less bandwidth and computation. One of the primary objectives of operating systems is allocating resources among competing tasks, typically for fairness and performance. Adding energy efficiency to the equation raises a number of interesting issues. 
For example, competing processes/users may be scheduled to receive a fair share of battery resources rather than CPU resources (e.g., an application that makes heavy use of DISK I/O may be given lower priority relative to a compute-bound application when energy resources are low). Similarly, for tasks such as ad hoc routing, local battery resources are often consumed on behalf of remote processes. Fair allocation dictates that one battery is not drained in preference to others. Finally, for the communication subsystem, a number of efforts already investigate adaptively setting the polling rate for wireless networks (trading latency for energy). Our efforts to date have focused on the last four areas highlighted in Table 1. For memory allocation, our work explores how to exploit the ability of memory chips to transition among multiple power states. We also investigate metrics for picking energy-efficient routes in ad hoc networks, energy-efficient placement of distributed computation, and flexible RPC/name binding that accounts for power consumption. These last two points of resource allocation and remote communication highlight an interesting property for energy-aware OS design in the post-PC environment. Many tasks are distributed across multiple machines, potentially running on machines with widely varying CPU, memory, and power source characteristics. Thus, energy-aware OS design must closely cooperate with and track the characteristics of remote computers to balance the often conflicting goals of optimizing for energy and speed. The rest of this paper illustrates our approach with selected examples extracted from our recent efforts toward building an integrated hardware/software infrastructure that incorporates cooperative power management to support mobile and wireless applications. The instances we present in subsequent sections cover the resource management policies and mechanisms necessary to exploit low power modes of various (existing or proposed) hardware components, as well as power-aware communications and the essential role of the wide-area environment. We begin our discussion with the resources of a single machine and then extend it to the distributed context.","PeriodicalId":147728,"journal":{"name":"Proceedings of the 9th workshop on ACM SIGOPS European workshop: beyond the PC: new challenges for the operating system","volume":"76 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2000-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"185","resultStr":"{\"title\":\"Every joule is precious: the case for revisiting operating system design for energy efficiency\",\"authors\":\"Amin Vahdat, A. Lebeck, C. Ellis\",\"doi\":\"10.1145/566726.566735\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"By some estimates, there will be close to one billion wireless devices capable of Internet connectivity within five years, surpassing the installed base of traditional wired compute devices. These devices will take the form of cellular phones, personal digital assistants (PDA's), embedded processors, and \\\"Internet appliances\\\". This proliferation of networked computing devices will enable a number of compelling applications, centering around ubiquitous access to global information services, just in time delivery of personalized content, and tight synchronization among compute devices/appliances in our everyday environment. 
However, one of the principal challenges of realizing this vision in the post-PC environment is the need to reduce the energy consumed in using these next-generation mobile and wireless devices, thereby extending the lifetime of the batteries that power them. While the processing power, memory, and network bandwidth of post-PC devices are increasing exponentially, their battery capacity is improving at a more modest pace. Thus, to ensure the utility of post-PC applications, it is important to develop low-level mechanisms and higher-level policies to maximize energy efficiency. In this paper, we propose the systematic re-examination of all aspects of operating system design and implementation from the point of view of energy efficiency rather than the more traditional OS metric of maximizing performance. In [7], we made the case for energy as a first-class OS-managed resource. We emphasized the benefits of higher-level control over energy usage policy and the application/OS interactions required to achieve them. This paper explores the implications that this major shift in focus can have upon the services, policies, mechanisms, and internal structure of the OS itself based on our initial experiences with rethinking system design for energy efficiency. Our ultimate goal is to design an operating system where major components cooperate to explicitly optimize for energy efficiency. A number of research efforts have recently investigated aspects of energy-efficient operating systems (a good overview is available at [16, 20]) and we intend to leverage existing \\\"best practice\\\" in our own work where such results exist. However, we are not aware of any systems that systematically revisit system structure with energy in mind. Further, our examination of operating system functionality reveals a number of opportunities that have received little attention in the literature. To illustrate this point, Table 1 presents major operating system functionality, along with possible techniques for improving power consumption characteristics. Several of the techniques are well studied, such as disk spindown policies or adaptively trading content fidelity for power [8]. For example, to reduce power consumption for MPEG playback, the system could adapt to a smaller frame rate and window size, consuming less bandwidth and computation. One of the primary objectives of operating systems is allocating resources among competing tasks, typically for fairness and performance. Adding energy efficiency to the equation raises a number of interesting issues. For example, competing processes/users may be scheduled to receive a fair share of battery resources rather than CPU resources (e.g., an application that makes heavy use of DISK I/O may be given lower priority relative to a compute-bound application when energy resources are low). Similarly, for tasks such as ad hoc routing, local battery resources are often consumed on behalf of remote processes. Fair allocation dictates that one battery is not drained in preference to others. Finally, for the communication subsystem, a number of efforts already investigate adaptively setting the polling rate for wireless networks (trading latency for energy). Our efforts to date have focused on the last four areas highlighted in Table 1. For memory allocation, our work explores how to exploit the ability of memory chips to transition among multiple power states. 
We also investigate metrics for picking energy-efficient routes in ad hoc networks, energy-efficient placement of distributed computation, and flexible RPC/name binding that accounts for power consumption. These last two points of resource allocation and remote communication highlight an interesting property for energy-aware OS design in the post-PC environment. Many tasks are distributed across multiple machines, potentially running on machines with widely varying CPU, memory, and power source characteristics. Thus, energy-aware OS design must closely cooperate with and track the characteristics of remote computers to balance the often conflicting goals of optimizing for energy and speed. The rest of this paper illustrates our approach with selected examples extracted from our recent efforts toward building an integrated hardware/software infrastructure that incorporates cooperative power management to support mobile and wireless applications. The instances we present in subsequent sections cover the resource management policies and mechanisms necessary to exploit low power modes of various (existing or proposed) hardware components, as well as power-aware communications and the essential role of the wide-area environment. We begin our discussion with the resources of a single machine and then extend it to the distributed context.\",\"PeriodicalId\":147728,\"journal\":{\"name\":\"Proceedings of the 9th workshop on ACM SIGOPS European workshop: beyond the PC: new challenges for the operating system\",\"volume\":\"76 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2000-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"185\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 9th workshop on ACM SIGOPS European workshop: beyond the PC: new challenges for the operating system\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/566726.566735\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 9th workshop on ACM SIGOPS European workshop: beyond the PC: new challenges for the operating system","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/566726.566735","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 185

Abstract

By some estimates, there will be close to one billion wireless devices capable of Internet connectivity within five years, surpassing the installed base of traditional wired compute devices. These devices will take the form of cellular phones, personal digital assistants (PDAs), embedded processors, and "Internet appliances". This proliferation of networked computing devices will enable a number of compelling applications, centering on ubiquitous access to global information services, just-in-time delivery of personalized content, and tight synchronization among the compute devices and appliances in our everyday environment. However, one of the principal challenges in realizing this vision in the post-PC environment is the need to reduce the energy consumed by these next-generation mobile and wireless devices, thereby extending the lifetime of the batteries that power them. While the processing power, memory, and network bandwidth of post-PC devices are increasing exponentially, their battery capacity is improving at a far more modest pace. Thus, to ensure the utility of post-PC applications, it is important to develop low-level mechanisms and higher-level policies that maximize energy efficiency.

In this paper, we propose a systematic re-examination of all aspects of operating system design and implementation from the point of view of energy efficiency rather than the more traditional OS goal of maximizing performance. In [7], we made the case for energy as a first-class OS-managed resource, emphasizing the benefits of higher-level control over energy usage policy and the application/OS interactions required to achieve it. This paper explores the implications that this major shift in focus can have on the services, policies, mechanisms, and internal structure of the OS itself, based on our initial experiences with rethinking system design for energy efficiency. Our ultimate goal is to design an operating system whose major components cooperate to explicitly optimize for energy efficiency. A number of recent research efforts have investigated aspects of energy-efficient operating systems (good overviews are available in [16, 20]), and we intend to leverage existing "best practice" in our own work where such results exist. However, we are not aware of any system that systematically revisits system structure with energy in mind. Further, our examination of operating system functionality reveals a number of opportunities that have received little attention in the literature.

To illustrate this point, Table 1 presents major operating system functionality along with possible techniques for improving power consumption characteristics. Several of the techniques are well studied, such as disk spin-down policies or adaptively trading content fidelity for power [8]. For example, to reduce power consumption for MPEG playback, the system could adapt to a smaller frame rate and window size, consuming less bandwidth and computation. One of the primary objectives of an operating system is allocating resources among competing tasks, typically for fairness and performance. Adding energy efficiency to the equation raises a number of interesting issues. For example, competing processes or users may be scheduled to receive a fair share of battery resources rather than CPU resources (e.g., an application that makes heavy use of disk I/O may be given lower priority relative to a compute-bound application when energy resources are low).
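The energy-fair scheduling idea above can be made concrete with a minimal sketch. The following is purely illustrative and not the paper's implementation: the per-device energy costs, the `Task` structure, and the share-based selection rule are all hypothetical assumptions, intended only to show how a scheduler might account for work in joules rather than CPU time.

```python
# Minimal sketch of energy-fair scheduling (illustrative only; the
# per-device energy costs and class names are hypothetical, not taken
# from the paper).

from dataclasses import dataclass

# Assumed average energy cost per unit of work, in joules (made up for
# illustration): disk I/O is charged more heavily than pure computation.
ENERGY_COST = {"cpu": 0.5, "disk": 2.5, "net": 1.5}

@dataclass
class Task:
    name: str
    share: float                 # entitled fraction of the battery budget
    joules_used: float = 0.0     # energy charged to this task so far

    def charge(self, device: str, units: float) -> None:
        """Charge the task for energy consumed on its behalf."""
        self.joules_used += ENERGY_COST[device] * units

def pick_next(tasks: list[Task]) -> Task:
    """Run the task that is furthest below its entitled energy share,
    analogous to fair-share CPU scheduling but denominated in joules."""
    total = sum(t.joules_used for t in tasks) or 1.0
    return min(tasks, key=lambda t: (t.joules_used / total) - t.share)

if __name__ == "__main__":
    tasks = [Task("mpeg_player", share=0.5), Task("disk_indexer", share=0.5)]
    tasks[1].charge("disk", units=40)   # heavy disk I/O is charged dearly
    tasks[0].charge("cpu", units=20)
    print(pick_next(tasks).name)        # -> mpeg_player (still under its share)
```

Under this accounting, the disk-heavy task naturally loses priority as energy runs low, because its activity is charged at a higher per-unit energy cost than compute-bound work.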
Similarly, for tasks such as ad hoc routing, local battery resources are often consumed on behalf of remote processes; fair allocation dictates that one battery not be drained in preference to others. Finally, for the communication subsystem, a number of efforts have already investigated adaptively setting the polling rate for wireless networks, trading latency for energy.

Our efforts to date have focused on the last four areas highlighted in Table 1. For memory allocation, our work explores how to exploit the ability of memory chips to transition among multiple power states. We also investigate metrics for picking energy-efficient routes in ad hoc networks, energy-efficient placement of distributed computation, and flexible RPC/name binding that accounts for power consumption. These last two points, resource allocation and remote communication, highlight an interesting property of energy-aware OS design in the post-PC environment: many tasks are distributed across multiple machines, potentially running on machines with widely varying CPU, memory, and power-source characteristics. Thus, an energy-aware OS must closely cooperate with and track the characteristics of remote computers to balance the often conflicting goals of optimizing for energy and for speed.

The rest of this paper illustrates our approach with selected examples drawn from our recent efforts toward building an integrated hardware/software infrastructure that incorporates cooperative power management to support mobile and wireless applications. The instances we present in subsequent sections cover the resource management policies and mechanisms necessary to exploit low-power modes of various (existing or proposed) hardware components, as well as power-aware communication and the essential role of the wide-area environment. We begin our discussion with the resources of a single machine and then extend it to the distributed context.
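To make the route-metric and battery-fairness points above concrete, the sketch below contrasts two candidate metrics for picking routes in an ad hoc network: minimum total transmission energy versus max-min residual battery. The `Hop`/`Route` structures and the numbers are hypothetical; these are common alternatives from the literature rather than the specific metrics evaluated in the paper.

```python
# Illustrative sketch of two candidate route-selection metrics for an
# ad hoc network (hypothetical structures; not the paper's actual code).

from dataclasses import dataclass

@dataclass
class Hop:
    tx_energy: float          # joules to forward one packet over this hop
    residual_battery: float   # joules remaining at the transmitting node

Route = list[Hop]

def min_total_energy(routes: list[Route]) -> Route:
    """Pick the route with the lowest total per-packet transmission energy."""
    return min(routes, key=lambda r: sum(h.tx_energy for h in r))

def max_min_battery(routes: list[Route]) -> Route:
    """Pick the route whose most-depleted node has the most energy left,
    spreading drain across nodes instead of exhausting one battery."""
    return max(routes, key=lambda r: min(h.residual_battery for h in r))

if __name__ == "__main__":
    short_but_tired = [Hop(1.0, 5.0), Hop(1.0, 80.0)]
    longer_but_fresh = [Hop(1.0, 60.0), Hop(1.0, 60.0), Hop(1.0, 60.0)]
    routes = [short_but_tired, longer_but_fresh]
    assert min_total_energy(routes) is short_but_tired
    assert max_min_battery(routes) is longer_but_fresh
```

The example shows the tension the abstract alludes to: the globally cheapest route may repeatedly drain one nearly exhausted node, whereas the battery-fair metric accepts a slightly more expensive route to keep every node alive longer.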