To see the other types of publications on this topic, follow the link: Dynamic Voltage and Frequency Scaling (DVFS).

Journal articles on the topic 'Dynamic Voltage and Frequency Scaling (DVFS)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Dynamic Voltage and Frequency Scaling (DVFS).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Kocanda, Piotr, and Andrzej Kos. "Energy losses and DVFS effectiveness vs technology scaling." Microelectronics International 32, no. 3 (August 3, 2015): 158–63. http://dx.doi.org/10.1108/mi-01-2015-0008.

Full text
Abstract:
Purpose – This article aims to present a complete analysis of energy losses in complementary metal-oxide semiconductor (CMOS) circuits and the effectiveness of dynamic voltage and frequency scaling (DVFS) as a method of energy conservation in CMOS circuits across a variety of technologies. Energy efficiency in CMOS devices is an issue of the highest importance as technology scaling continues. Powerful tools for energy conservation exist in the form of dynamic voltage scaling (DVS) and dynamic frequency scaling (DFS). Design/methodology/approach – Using analytical equations and Spice models of various technologies, energy losses are calculated and the effectiveness of DVS and DFS is evaluated for every technology. Findings – Tests showed that a new technology dedicated to low static energy consumption can be as economical as older technologies. Dynamic voltage and frequency scaling is most effective when dynamic energy losses dominate in the circuit. When static energy losses are comparable to dynamic energy losses, the use of dynamic voltage and frequency scaling can even lead to increased energy consumption. Originality/value – This paper presents a complete analysis of energy losses in CMOS circuits and the effectiveness of the mentioned methods of energy conservation in CMOS circuits in six different technologies.
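As a rough illustration of the trade-off this abstract describes (a first-order CMOS energy sketch with made-up parameters, not the Spice-based analysis performed in the paper), the snippet below shows why lowering voltage and frequency saves energy when dynamic losses dominate, but can increase total energy when leakage is large, because the task simply runs longer at the lower operating point:

    # Illustrative only: first-order CMOS energy terms with hypothetical parameters.
    def task_energy(cycles, f_hz, v, c_eff, i_leak):
        t = cycles / f_hz                     # execution time stretches as f drops
        e_dynamic = c_eff * v**2 * f_hz * t   # = c_eff * v^2 * cycles
        e_static = v * i_leak * t             # leakage integrates over the longer runtime
        return e_dynamic + e_static

    cycles = 1e9
    fast, slow = (1.0e9, 1.1), (0.5e9, 0.9)   # (frequency in Hz, voltage in V), hypothetical

    for i_leak in (1e-3, 1.0):                # low-leakage vs. leakage-dominated device
        e_fast = task_energy(cycles, *fast, c_eff=1e-9, i_leak=i_leak)
        e_slow = task_energy(cycles, *slow, c_eff=1e-9, i_leak=i_leak)
        print(f"I_leak={i_leak} A: E(fast)={e_fast:.2f} J, E(slow)={e_slow:.2f} J")

With the small leakage current the scaled-down operating point wins; with the large one it loses, which is the per-technology effect the authors quantify.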
APA, Harvard, Vancouver, ISO, and other styles
2

Xu, Shen, Jun Song Li, and Jian Feng Jiang. "Dynamic Voltage and Frequency Scaling Under an Accurate System Energy Model." Advanced Materials Research 442 (January 2012): 321–25. http://dx.doi.org/10.4028/www.scientific.net/amr.442.321.

Full text
Abstract:
Dynamic voltage and frequency scaling (DVFS) is a technique used in modern battery-operated portable devices to set voltage and frequency levels that meet performance requirements while minimizing energy consumption. Most present work on DVFS policies is based on simplistic assumptions about the hardware characteristics. In this paper, we discuss the DVFS problem under an accurate system energy model derived from an application system, a portable media player (PMP) with a DVFS-capable PXA255 processor. We present an optimal DVFS algorithm based on all frequency combinations, at the cost of large computation, and a simplified algorithm based on two-frequency combinations that consumes slightly more energy than the former but requires less computation. The experimental results show that the two proposed algorithms reduce energy consumption effectively.
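The "two frequency combinations" idea can be sketched as follows (a generic simplification under a hypothetical frequency/power table, not the authors' measured PXA255 energy model): the ideal continuous frequency implied by the cycle budget and deadline is bracketed by the two nearest discrete levels, and the work is split between them so the deadline is met exactly.

    # Hypothetical discrete operating points: frequency in Hz -> power in W.
    LEVELS = {0.2e9: 0.10, 0.4e9: 0.28, 0.6e9: 0.60, 0.8e9: 1.10}

    def two_level_schedule(cycles, deadline_s):
        """Split 'cycles' between the two levels bracketing the ideal frequency."""
        f_ideal = cycles / deadline_s
        freqs = sorted(LEVELS)
        if f_ideal <= freqs[0]:
            return [(freqs[0], cycles / freqs[0])]
        if f_ideal > freqs[-1]:
            raise ValueError("deadline infeasible even at the highest level")
        f_lo = max(f for f in freqs if f < f_ideal)
        f_hi = min(f for f in freqs if f >= f_ideal)
        # Solve t_lo + t_hi = deadline and f_lo*t_lo + f_hi*t_hi = cycles.
        t_hi = (cycles - f_lo * deadline_s) / (f_hi - f_lo)
        t_lo = deadline_s - t_hi
        return [(f_lo, t_lo), (f_hi, t_hi)]

    schedule = two_level_schedule(cycles=0.5e9, deadline_s=1.0)
    energy = sum(LEVELS[f] * t for f, t in schedule)
    print(schedule, f"energy = {energy:.3f} J")

With a convex power-versus-frequency curve, splitting the time between the two adjacent levels is known to be energy-optimal among schedules that just meet the deadline, which is why the simplified two-level algorithm loses little against an exhaustive search over all combinations.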
APA, Harvard, Vancouver, ISO, and other styles
3

Liang, Wen Yew, Ming Feng Chang, Yen Lin Chen, and Jenq Haur Wang. "Performance Evaluation for Dynamic Voltage and Frequency Scaling Using Runtime Performance Counters." Applied Mechanics and Materials 284-287 (January 2013): 2575–79. http://dx.doi.org/10.4028/www.scientific.net/amm.284-287.2575.

Full text
Abstract:
Dynamic voltage and frequency scaling (DVFS) is an effective technique for reducing power consumption. However, system performance is not easy to evaluate under dynamic voltage and frequency scaling. Most studies use execution time as the indicator when measuring performance, but DVFS adjusts the processor speed over fixed-length periods, so execution time alone cannot be relied on to evaluate system performance. This study proposes a novel and simple method to evaluate system performance when DVFS is activated. Based on this performance evaluation method, the study also proposes a DVFS algorithm (P-DVFS) for a general-purpose operating system. The algorithm has been implemented on the Linux operating system and evaluated on a PXA270 development board. The results show that P-DVFS can accurately predict the suitable frequency, given the runtime statistics of a running program. In this way, the user can easily control energy consumption by specifying an allowable performance loss factor.
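A minimal sketch of the frequency-prediction step under a user-specified performance-loss factor (the time model here, in which on-chip cycles scale with frequency while memory stall time does not, is a common simplification and an assumption on my part, not necessarily the exact P-DVFS model):

    F_MAX = 1.0e9
    AVAILABLE_FREQS = [0.25e9, 0.5e9, 0.75e9, 1.0e9]   # hypothetical levels

    def pick_frequency(cpu_cycles, mem_stall_s, loss_factor):
        """Lowest frequency whose predicted time stays within (1 + loss_factor)
        of the time at full speed, assuming memory stalls are frequency-independent."""
        t_full = cpu_cycles / F_MAX + mem_stall_s
        budget = (1.0 + loss_factor) * t_full
        for f in sorted(AVAILABLE_FREQS):
            if cpu_cycles / f + mem_stall_s <= budget:
                return f
        return F_MAX

    # Memory-bound interval (counter values are assumed inputs): mostly stall time.
    print(pick_frequency(cpu_cycles=5e7, mem_stall_s=0.95, loss_factor=0.05))
    # CPU-bound interval: little stall time, so the budget forces a high frequency.
    print(pick_frequency(cpu_cycles=9e8, mem_stall_s=0.05, loss_factor=0.05))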
APA, Harvard, Vancouver, ISO, and other styles
4

Gendler, Alex, Ernest Knoll, and Yiannakis Sazeides. "I-DVFS: Instantaneous Frequency Switch During Dynamic Voltage and Frequency Scaling." IEEE Micro 41, no. 5 (September 1, 2021): 76–84. http://dx.doi.org/10.1109/mm.2021.3096655.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Florence, A. Paulin, V. Shanthi, and C. B. Sunil Simon. "Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud." Scientific World Journal 2016 (2016): 1–13. http://dx.doi.org/10.1155/2016/9328070.

Full text
Abstract:
Cloud computing is a new technology which supports resource sharing on a “Pay as you go” basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is a part of IaaS, and all computational requests must be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption, and the Dynamic Voltage and Frequency Scaling (DVFS) scheme is also used for this purpose. In this paper we devise a methodology which analyzes the behavior of a given cloud request and identifies the type of algorithm associated with it. Once the type of algorithm is identified, its time complexity is calculated using asymptotic notation. Using a best-fit strategy, the appropriate host is identified and the incoming job is allocated to the selected host. From the estimated time complexity, the required clock frequency of the host is determined, and the CPU frequency is scaled up or down accordingly using the DVFS scheme, enabling energy savings of up to 55% of total power consumption.
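As a toy sketch of the two steps described above (the host list, deadline, and instruction-count estimate are hypothetical, and the paper's full complexity-analysis and allocation machinery is not reproduced):

    import math

    # Hypothetical hosts: name -> (spare capacity in MIPS, available frequencies in GHz)
    HOSTS = {
        "h1": (2000, [0.8, 1.2, 1.6, 2.0]),
        "h2": (1200, [0.8, 1.2, 1.6]),
        "h3": (3500, [1.0, 2.0, 3.0]),
    }

    def best_fit_host(required_mips):
        """Smallest spare capacity that still fits the request (best fit)."""
        feasible = {h: cap for h, (cap, _) in HOSTS.items() if cap >= required_mips}
        return min(feasible, key=feasible.get)

    def required_frequency(estimated_instructions, deadline_s, cycles_per_instr=1.0):
        cycles = estimated_instructions * cycles_per_instr
        return cycles / deadline_s            # Hz needed to finish by the deadline

    # Example: an O(n log n) job on n = 1e6 elements, ~50 instructions per element-step.
    n = 1e6
    est_instr = 50 * n * math.log2(n)
    host = best_fit_host(required_mips=1500)
    f_needed = required_frequency(est_instr, deadline_s=1.0)
    f_set = min(f for f in HOSTS[host][1] if f * 1e9 >= f_needed)
    print(host, f"needs >= {f_needed/1e9:.2f} GHz, set to {f_set} GHz")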
APA, Harvard, Vancouver, ISO, and other styles
6

Jia, Fan, and Longbing Zhang. "Fine-Grained CPU Power Management Based on Digital Frequency Divider." Electronics 12, no. 2 (January 13, 2023): 407. http://dx.doi.org/10.3390/electronics12020407.

Full text
Abstract:
Dynamic voltage and frequency scaling (DVFS) is a widely used method to improve the energy efficiency of the CPU. Reducing the voltage and frequency during memory-intensive workloads can minimize power consumption without affecting performance, thereby improving overall energy efficiency. A finer-grained DVFS strategy leads to better energy efficiency. However, due to the limitation of voltage regulators, the implementation granularity of the current DVFS strategies is 100 μs or more. This paper proposes that managing the CPU’s power through a more fine-grained load-aware approach can improve CPU energy efficiency, even with limitations of the voltage regulators. This paper adds a more fine-grained dynamic frequency divider to the DVFS system. This mechanism can improve the processor’s energy efficiency in scenarios where DVFS does not take effect. This paper also proposes a DVFS management strategy based on finer-grained sampling. In order to improve the accuracy of performance estimation, we enhanced the state-of-the-art CRIT method to complete accurate memory time estimation in a shorter interval. The power management strategy was verified on the ChampSim and McPAT simulating platforms. In the SPEC CPU 2017 benchmark, this work saves an average of 16.36% energy consumption and improves energy efficiency by 13.57%. Compared with the state-of-the-art CRIT of 9.77% and 6.79%, this work improved energy consumption and efficiency by 6.20% and 6.35%, respectively. This method brings a 2.04% performance reduction, only a 0.16% drop in performance compared to CRIT.
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Yen-Lin, Ming-Feng Chang, Chao-Wei Yu, Xiu-Zhi Chen, and Wen-Yew Liang. "Learning-Directed Dynamic Voltage and Frequency Scaling Scheme with Adjustable Performance for Single-Core and Multi-Core Embedded and Mobile Systems." Sensors 18, no. 9 (September 12, 2018): 3068. http://dx.doi.org/10.3390/s18093068.

Full text
Abstract:
Dynamic voltage and frequency scaling (DVFS) is a well-known method for saving energy consumption. Several DVFS studies have applied learning-based methods to implement the DVFS prediction model instead of complicated mathematical models. This paper proposes a lightweight learning-directed DVFS method that involves using counter propagation networks to sense and classify the task behavior and predict the best voltage/frequency setting for the system. An intelligent adjustment mechanism for performance is also provided to users under various performance requirements. The comparative experimental results of the proposed algorithms and other competitive techniques are evaluated on the NVIDIA JETSON Tegra K1 multicore platform and Intel PXA270 embedded platforms. The results demonstrate that the learning-directed DVFS method can accurately predict the suitable central processing unit (CPU) frequency, given the runtime statistical information of a running program, and achieve an energy savings rate up to 42%. Through this method, users can easily achieve effective energy consumption and performance by specifying the factors of performance loss.
APA, Harvard, Vancouver, ISO, and other styles
8

Huang, Wei, Zhen Wang, Mianxiong Dong, and Zhuzhong Qian. "A Two-Tier Energy-Aware Resource Management for Virtualized Cloud Computing System." Scientific Programming 2016 (2016): 1–15. http://dx.doi.org/10.1155/2016/4386362.

Full text
Abstract:
The economic costs of electric power take the most significant part of the total cost of a data center; thus, energy conservation is an important issue in cloud computing systems. One well-known technique to reduce energy consumption is the consolidation of Virtual Machines (VMs). However, it may sacrifice some energy savings and Quality of Service (QoS) for dynamic workloads. Fortunately, Dynamic Voltage and Frequency Scaling (DVFS) is an efficient technique to save energy in dynamic environments. In this paper, combined with the DVFS technology, we propose a cooperative two-tier energy-aware management method including local DVFS control and global VM deployment. The DVFS controller adjusts the frequencies of the homogeneous processors in each server at run-time based on practical energy prediction. The Global Scheduler, in turn, assigns VMs to designated servers in cooperation with the local DVFS controllers. The final evaluation results demonstrate the effectiveness of our two-tier method in energy saving.
APA, Harvard, Vancouver, ISO, and other styles
9

Khriji, Sabrine, Rym Chéour, and Olfa Kanoun. "Dynamic Voltage and Frequency Scaling and Duty-Cycling for Ultra Low-Power Wireless Sensor Nodes." Electronics 11, no. 24 (December 7, 2022): 4071. http://dx.doi.org/10.3390/electronics11244071.

Full text
Abstract:
Energy efficiency presents a significant challenge to the reliability of Internet of Things (IoT) services. Wireless Sensor Networks (WSNs), an elementary technology of the IoT, have limited resources. Appropriate energy management techniques can increase energy efficiency under variable workload conditions. Therefore, this paper experimentally implements a hybrid energy management solution combining Dynamic Voltage and Frequency Scaling (DVFS) and duty-cycling. The DVFS technique is implemented as an effective power management scheme to optimize the operating conditions during data processing, while the duty-cycling method is applied to reduce the energy consumption of the transceiver. Hardware optimization is performed by selecting the low-power microcontroller MSP430, using experimental estimation and characterization. Another contribution is the evaluation of the energy-saving design by defining normalized power, the power consumed by the proposed model per unit of throughput, as a metric. Extensive simulations and real-world implementations indicate that normalized power can be significantly reduced while sustaining performance levels in high-data-rate IoT use cases.
APA, Harvard, Vancouver, ISO, and other styles
10

Zou, An, Huifeng Zhu, Jingwen Leng, Xin He, Vijay Janapa Reddi, Christopher D. Gill, and Xuan Zhang. "System-level Early-stage Modeling and Evaluation of IVR-assisted Processor Power Delivery System." ACM Transactions on Architecture and Code Optimization 18, no. 4 (December 31, 2021): 1–27. http://dx.doi.org/10.1145/3468145.

Full text
Abstract:
Despite being employed in numerous efforts to improve power delivery efficiency, the integrated voltage regulator (IVR) approach has yet to be evaluated rigorously and quantitatively in a full power delivery system (PDS) setting. To fulfill this need, we present a system-level modeling and design space exploration framework called Ivory for IVR-assisted power delivery systems. Using a novel modeling methodology, it can accurately estimate power delivery efficiency, static performance characteristics, and dynamic transient responses under different load variations and external voltage/frequency scaling conditions. We validate the model over a wide range of IVR topologies with silicon measurement and SPICE simulation. Finally, we present two case studies using architecture-level performance and power simulators. The first case study focuses on optimal PDS design for multi-core systems, which achieves 8.6% power efficiency improvement over a conventional off-chip voltage regulator module (VRM)-based PDS. The second case study explores the design tradeoffs for IVR-assisted PDSs in CPU and GPU systems with fast per-core dynamic voltage and frequency scaling (DVFS). We find 2 μs to be the optimal DVFS timescale, which not only reaps energy benefits (12.5% improvement in CPU and 50.0% improvement in GPU) but also avoids costly IVR overheads.
APA, Harvard, Vancouver, ISO, and other styles
11

Sabri, Sharizal Fadlie, Noor Azurati Ahmad, Shamsul Sahibuddin, and Rudzidatul Dziyauddin. "Dynamic frequency scheduling for CubeSat's on-board and data handling subsystem." Indonesian Journal of Electrical Engineering and Computer Science 22, no. 3 (June 1, 2021): 1672. http://dx.doi.org/10.11591/ijeecs.v22.i3.pp1672-1678.

Full text
Abstract:
CubeSat is a small-sized satellite that provides a cheaper option for manufacturers to have a fully operational satellite. Due to its size, a CubeSat can only generate limited power, and this restricts its functionality. This research aims to improve CubeSat power consumption by applying the dynamic voltage and frequency scaling (DVFS) technique to the on-board data handling subsystem (OBDH). DVFS finds the best operating frequency at which to execute all of the OBDH's tasks. This paper explains how we determined the task set representing all routine tasks performed by the OBDH during normal operation mode. We simulated the task set using two DVFS algorithms, static earliest deadline first (EDF) and cycle-conserving EDF (CC-EDF). The results show that both scheduling algorithms give similar results for our task set. However, when the scheduler is configured as non-preemptive, the simulator fails to schedule the critical task, which means the system fails to work as intended. Therefore, we conclude that we need to implement mixed-criticality scheduling to prevent critical tasks from being aborted by the system.
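The static EDF variant mentioned above can be sketched in a few lines (a generic Pillai–Shin-style formulation with a hypothetical task set and clock table, not the authors' OBDH task set):

    # Hypothetical periodic task set: (worst-case execution time at F_MAX, period), seconds.
    TASKS = [(0.002, 0.010), (0.001, 0.020), (0.005, 0.050)]
    F_MAX = 48e6                              # hypothetical MCU clock
    LEVELS = [12e6, 24e6, 32e6, 48e6]

    def static_edf_frequency(tasks):
        """Lowest level f such that the task set stays EDF-schedulable, i.e.
        sum(C_i / T_i) <= f / F_MAX (execution times scale as F_MAX / f)."""
        utilization = sum(c / t for c, t in tasks)
        for f in sorted(LEVELS):
            if utilization <= f / F_MAX:
                return f
        raise ValueError("task set not schedulable even at F_MAX")

    print(static_edf_frequency(TASKS) / 1e6, "MHz")

CC-EDF refines this by recomputing the utilization at run time from the actual, usually shorter, execution times of completed jobs, allowing temporarily lower frequencies.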
APA, Harvard, Vancouver, ISO, and other styles
12

Kim, Seyeon, Kyungmin Bin, Sangtae Ha, Kyunghan Lee, and Song Chong. "zTT." GetMobile: Mobile Computing and Communications 25, no. 4 (March 30, 2022): 30–34. http://dx.doi.org/10.1145/3529706.3529714.

Full text
Abstract:
With the advent of mobile processors integrating CPU and GPU, high-performance tasks, such as deep learning, gaming, and image processing are running on mobile devices. To fully exploit CPU and GPU's capability on mobile devices, we need to utilize their processing capability as much as possible. However, it is challenging due to the nature of mobile devices whose users are sensitive to battery consumption and device temperature. Many researchers have studied techniques enabling energy-efficient operations in mobile processors, mostly at managing the temperature and power consumption below predefined thresholds. DVFS (Dynamic Voltage and Frequency Scaling) is a technique that reduces heat generation and power consumption from the circuit by adjusting CPU or GPU voltage-frequency levels at runtime. To best utilize its benefits, many DVFS techniques have been developed for mobile processors. Still, it is challenging to implement a DVFS that performs ideally for mobile devices, and there are several reasons behind this difficulty.
APA, Harvard, Vancouver, ISO, and other styles
13

Li, Jingwei, Duanyu Teng, and Jinwei Lin. "A Two-stage Strategy to Optimize Energy Consumption for Latency-critical Workload under QoS Constraint." Information Technology And Control 49, no. 4 (December 19, 2020): 608–21. http://dx.doi.org/10.5755/j01.itc.49.4.25029.

Full text
Abstract:
Data centers incur huge energy costs. Saving energy while providing an efficient quality of service (QoS) is the goal pursued by data centers, but this is a challenging issue. To ensure the QoS of latency-critical applications, data centers usually schedule processors to run at higher frequencies, and continuous high-frequency operation causes great energy waste. Modern processors are equipped with dynamic voltage and frequency scaling (DVFS) technology, which allows the processor to run at any of the frequency levels it supports, so we focus on how to use DVFS to trade off between energy and QoS. In this paper, we propose a two-stage strategy based on DVFS to dynamically scale the CPU frequency during latency-critical workload execution, aimed at minimizing the energy consumption of latency-critical workloads under a QoS constraint. The two-stage strategy includes a static stage and a dynamic stage, which work together to determine the optimal frequency for running the workload. The static stage uses a well-designed heuristic algorithm to determine the frequency-load matches under the QoS constraint, while the dynamic stage leverages a threshold method to determine whether to adjust the preset frequency. We evaluate the two-stage strategy in terms of QoS and energy saving on the CloudSuite benchmark and compare the two metrics with the state-of-the-art Ondemand governor. Results show that our strategy is superior to Ondemand for energy saving, improving it by more than 13%.
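A minimal sketch of how such a two-stage policy can be wired together (the load buckets, frequency table, and thresholds are hypothetical, and the paper's heuristic for building the static table is not reproduced):

    # Static stage (offline): for each load bucket, the lowest frequency (GHz) found
    # to keep tail latency within the QoS target. Values are hypothetical.
    STATIC_TABLE = {0: 1.0, 1: 1.4, 2: 1.8, 3: 2.2}     # load bucket -> frequency
    LEVELS = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2]

    def dynamic_stage(preset_ghz, observed_p99_ms, qos_target_ms,
                      up_margin=1.0, down_margin=0.7):
        """Runtime threshold check: step the preset frequency up if the observed
        tail latency exceeds the QoS target, step it down if there is ample slack."""
        i = LEVELS.index(preset_ghz)
        if observed_p99_ms > up_margin * qos_target_ms and i + 1 < len(LEVELS):
            return LEVELS[i + 1]
        if observed_p99_ms < down_margin * qos_target_ms and i > 0:
            return LEVELS[i - 1]
        return preset_ghz

    preset = STATIC_TABLE[2]                  # current load falls in bucket 2
    print(dynamic_stage(preset, observed_p99_ms=10.5, qos_target_ms=10.0))
    print(dynamic_stage(preset, observed_p99_ms=5.0, qos_target_ms=10.0))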
APA, Harvard, Vancouver, ISO, and other styles
14

Chen, Lixia, Jian Li, Ruhui Ma, Haibing Guan, and Hans-Arno Jacobsen. "Balancing Power And Performance In HPC Clouds." Computer Journal 63, no. 6 (June 2020): 880–99. http://dx.doi.org/10.1093/comjnl/bxz150.

Full text
Abstract:
With energy consumption in high-performance computing clouds growing rapidly, energy saving has become an important topic. Virtualization provides opportunities to save energy by enabling one physical machine (PM) to host multiple virtual machines (VMs). Dynamic voltage and frequency scaling (DVFS) is another technology to reduce energy consumption. However, in heterogeneous cloud environments where DVFS may be applied at the chip level or the core level, it is a great challenge to combine these two technologies efficiently. On per-core DVFS servers, cloud managers should carefully determine VM placements to minimize performance interference. On full-chip DVFS servers, cloud managers further face the choice of whether to combine VMs with different characteristics to reduce performance interference or to combine VMs with similar characteristics to take better advantage of DVFS. This paper presents a novel mechanism combining a VM placement algorithm and a frequency scaling method. We formulate this VM placement problem as an integer programming (IP) problem to find appropriate placement configurations, and we utilize support vector machines to select suitable frequencies. We conduct detailed experiments and simulations, showing that our scheme effectively reduces energy consumption with modest impact on performance. Particularly, the total energy delay product is reduced by up to 60%.
APA, Harvard, Vancouver, ISO, and other styles
15

Renugadevi, T., K. Geetha, Natarajan Prabaharan, and Pierluigi Siano. "Carbon-Efficient Virtual Machine Placement Based on Dynamic Voltage Frequency Scaling in Geo-Distributed Cloud Data Centers." Applied Sciences 10, no. 8 (April 14, 2020): 2701. http://dx.doi.org/10.3390/app10082701.

Full text
Abstract:
The tremendous growth of big data analysis and the IoT (Internet of Things) has made cloud computing an integral part of society. The prominent problem associated with data centers is their growing energy consumption, which results in environmental pollution. Data centers can reduce their carbon emissions through efficient management of server power consumption for a given workload. Dynamic voltage frequency scaling (DVFS) can be applied to control the operating frequencies of the servers based on the workloads assigned to them, since power consumption grows roughly cubically with operating frequency. This research work proposes two DVFS-enabled host selection algorithms for virtual machine (VM) placement with a cluster selection strategy, namely the carbon and power-efficient optimal frequency (C-PEF) algorithm and the carbon-aware first-fit optimal frequency (C-FFF) algorithm. The main aims of the proposed algorithms are to balance the load among the servers and dynamically tune the cooling load based on the current workload. The cluster selection strategy is based on static and dynamic power usage effectiveness (PUE) values and the carbon footprint rate (CFR). The cluster selection is also extended to non-DVFS host selection policies, namely the carbon- and power-efficient (C-PE) algorithm, carbon-aware first-fit (C-FF) algorithm, and carbon-aware first-fit least-empty (C-FFLE) algorithm. The results show that C-FFF achieves 2% more power reduction than C-PEF and C-PE, and demonstrates itself as a power-efficient algorithm for CO2 reduction, retaining the same quality of service (QoS) as its counterparts with lower computational overheads.
APA, Harvard, Vancouver, ISO, and other styles
16

Kim, SeongKi, and Seok-Kyoo Kim. "DVFS Algorithms of GPU and Memory for Mobile GPGPU Applications: A case study." International Journal of Engineering & Technology 7, no. 3 (August 24, 2018): 1918. http://dx.doi.org/10.14419/ijet.v7i3.15296.

Full text
Abstract:
Although both OpenCL and RenderScript have allowed the General-Purpose Graphics Processing Unit (GPGPU) to be used even for mobile GPUs, it is still difficult for mobile applications to use the GPGPU for several reasons. One of the reasons is that mobile devices place restrictions on GPU performance through power-saving technologies such as Dynamic Voltage and Frequency Scaling (DVFS). DVFS tries to control the balance between performance and energy consumption based on the application’s requirements. This technology has been successful in many cases and is widely used; however, it significantly decreases the performance of GPGPU applications. In this paper, we propose novel DVFS algorithms for GPU and memory when the GPGPU applications run. The suggested algorithms decreased the energy consumption by more than 0.7 times without any algorithm changes, and improved the energy efficiency (performance per watt) by more than 3.42 times in comparison with the conventional interval-based algorithm.
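For context, the "conventional interval-based algorithm" used as the baseline is typically a utilization-driven governor of the following shape (a generic sketch with hypothetical thresholds and operating points, not the GPU/memory DVFS algorithms proposed in the paper):

    LEVELS_MHZ = [177, 266, 350, 420, 480, 543, 600]    # hypothetical GPU OPP table

    def interval_governor(current_mhz, busy_time_s, interval_s,
                          up_threshold=0.90, down_threshold=0.60):
        """At the end of each fixed interval, raise the clock if utilization is high
        and lower it if utilization is low; otherwise keep the current level."""
        utilization = busy_time_s / interval_s
        i = LEVELS_MHZ.index(current_mhz)
        if utilization > up_threshold and i + 1 < len(LEVELS_MHZ):
            return LEVELS_MHZ[i + 1]
        if utilization < down_threshold and i > 0:
            return LEVELS_MHZ[i - 1]
        return current_mhz

    # A GPGPU kernel that keeps the GPU only ~70% busy per interval never triggers a
    # step up, which is one way such governors end up throttling compute workloads.
    print(interval_governor(350, busy_time_s=0.070, interval_s=0.100))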
APA, Harvard, Vancouver, ISO, and other styles
17

Yang, Wei Ben, Chi Hsiung Wang, Hsiang Hsiung Chang, Ming Hao Hong, and Jsung Mo Shen. "Low-Power Fast-Settling Low-Dropout Regulator Using a Digitally Assisted Voltage Accelerator for DVFS Application." Applied Mechanics and Materials 284-287 (January 2013): 2526–30. http://dx.doi.org/10.4028/www.scientific.net/amm.284-287.2526.

Full text
Abstract:
This paper presents a low-power fast-settling low-dropout regulator (LDO) using a digitally assisted voltage accelerator. Using the selectable-voltage control technique and the digitally assisted voltage accelerator significantly improves the transition response time when the output voltage is switched. The proposed LDO regulator uses the selectable-voltage control technique to provide two selectable output voltages of 2.5 V and 1.8 V, and uses the digitally assisted voltage accelerator to reduce the settling time when the output voltage is switched. The simulation results show that the settling time of the proposed LDO regulator is significantly reduced from 4.2 ms to 15.5 μs. Moreover, the selectable-voltage control unit and the digitally assisted voltage accelerator of the proposed LDO regulator consume only 0.54 mW under a load current of 100 mA. Therefore, the proposed LDO regulator is suitable for low-power dynamic voltage and frequency-scaling applications.
APA, Harvard, Vancouver, ISO, and other styles
18

Peluso, Valentino, Roberto Giorgio Rizzo, and Andrea Calimera. "Performance Profiling of Embedded ConvNets under Thermal-Aware DVFS." Electronics 8, no. 12 (November 29, 2019): 1423. http://dx.doi.org/10.3390/electronics8121423.

Full text
Abstract:
Convolutional Neural Networks (ConvNets) can be shrunk to fit embedded CPUs adopted on mobile end-nodes, like smartphones or drones. The deployment onto such devices encompasses several algorithmic level optimizations, e.g., topology restructuring, pruning, and quantization, that reduce the complexity of the network, ensuring less resource usage and hence higher speed. Several studies revealed remarkable performance, paving the way towards real-time inference on low power cores. However, continuous execution at maximum speed is quite unrealistic due to a fast increase of the on-chip temperature. Indeed, proper thermal management is paramount to guarantee silicon reliability and a safe user experience. Power management schemes, like voltage lowering and frequency scaling, are common knobs to control the thermal stability. Obviously, this implies a performance degradation, often not considered during the training and optimization stages. The objective of this work is to present the performance assessment of embedded ConvNets under thermal management. Our study covers the behavior of two control policies, namely reactive and proactive, implemented through the Dynamic Voltage-Frequency Scaling (DVFS) mechanism available on commercial embedded CPUs. As benchmarks, we used four state-of-the-art ConvNets for computer vision flashed into the ARM Cortex-A15 CPU. With the collected results, we aim to show the existing temperature-performance trade-off and give a more realistic analysis of the maximum performance achievable. Moreover, we empirically demonstrate the strict relationship between the on-chip thermal behavior and the hyper-parameters of the ConvNet, revealing optimization margins for a thermal-aware design of neural network layers.
APA, Harvard, Vancouver, ISO, and other styles
19

Sheng, Duo, Hsueh-Ru Lin, and Li Tai. "Low-Process–Voltage–Temperature-Sensitivity Multi-Stage Timing Monitor for System-on-Chip Applications." Electronics 10, no. 13 (June 30, 2021): 1587. http://dx.doi.org/10.3390/electronics10131587.

Full text
Abstract:
High-performance and complex system-on-chip (SoC) designs require a high-throughput and stable timing monitor to reduce the impact of timing uncertainty and to implement the dynamic voltage and frequency scaling (DVFS) scheme for overall power reduction. This paper presents a multi-stage timing monitor, combining three timing-monitoring stages to achieve a high timing-monitoring resolution and a wide timing-monitoring range simultaneously. Additionally, because the proposed timing monitor has high immunity to process–voltage–temperature (PVT) variation, it provides more stable timing-monitoring results. The timing-monitoring resolution and range of the proposed monitor are 47 ps and 2.2 µs, respectively, and the maximum measurement error is 0.06%. Therefore, the proposed multi-stage timing monitor not only provides the timing information of the specified signals to maintain the functionality and performance of the SoC, but also makes the operation of the DVFS scheme more efficient and accurate in SoC design.
APA, Harvard, Vancouver, ISO, and other styles
20

Lee, Jinsu, and Eunji Lee. "Concerto: Dynamic Processor Scaling for Distributed Data Systems with Replication." Applied Sciences 11, no. 12 (June 21, 2021): 5731. http://dx.doi.org/10.3390/app11125731.

Full text
Abstract:
A surge of interest in data-intensive computing has led to a drastic increase in the demand for data centers. Given this growing popularity, data centers are becoming a primary contributor to the increased consumption of energy worldwide. To mitigate this problem, this paper revisits DVFS (Dynamic Voltage Frequency Scaling), a well-known technique to reduce the energy usage of processors, from the viewpoint of distributed systems. Distributed data systems typically adopt a replication facility to provide high availability and short latency. In this type of architecture, the replicas are maintained in an asynchronous manner, while the master synchronously serves user requests. Based on this relaxed constraint on the replica, we present a novel DVFS technique called Concerto, which intentionally scales down the frequency of processors operating for the replicas. This mechanism can achieve considerable energy savings without an increase in the user-perceived latency. We implemented Concerto on Redis 6.0.1, a commercial-level distributed key-value store, demonstrating that all associated performance issues were resolved. To prevent a delay in read queries assigned to the replicas, we offload the independent part of the read operation to the fast-running thread. We also empirically demonstrate that the decreased performance of the replica does not cause an increase in the replication lag because the inherent load imbalance between the master and replica hides the increased latency of the replica. Performance evaluations with micro and real-world benchmarks show that Redis saves 32% on average and up to 51% of energy with Concerto under various workloads, with minor performance losses in the replicas. Despite numerous studies of energy saving in data centers, to the best of our knowledge, Concerto is the first approach that considers clock-speed scaling at the aggregate level, exploiting heterogeneous performance constraints across data nodes.
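Concerto makes its decisions inside the data system, but the underlying knob it exploits is ordinary per-core clock scaling. As a rough illustration only (assuming a Linux host exposing the standard cpufreq sysfs interface; the core layout and frequency cap below are hypothetical, not the paper's mechanism), a replica's cores could be capped like this:

    from pathlib import Path

    def cap_core_frequency(core: int, max_khz: int) -> None:
        """Cap one core's clock via the Linux cpufreq sysfs interface (requires root).
        This only shows the knob; Concerto decides when and how far to scale based on
        the master/replica role inside the data system itself."""
        node = Path(f"/sys/devices/system/cpu/cpu{core}/cpufreq/scaling_max_freq")
        node.write_text(str(max_khz))

    # Hypothetical layout: cores 0-3 serve the master at full speed,
    # cores 4-7 serve an asynchronous replica and are capped lower.
    if __name__ == "__main__":
        for core in range(4, 8):
            cap_core_frequency(core, max_khz=1_200_000)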
APA, Harvard, Vancouver, ISO, and other styles
21

Choi, Jongmoo, Bumjong Jung, Yongjae Choi, and Seiil Son. "An Adaptive and Integrated Low-Power Framework for Multicore Mobile Computing." Mobile Information Systems 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/9642958.

Full text
Abstract:
Employing multicore processors in mobile computing devices such as smartphones and IoT (Internet of Things) devices is a double-edged sword. It provides the ample computing capability required by recent intelligent mobile services, including voice recognition, image processing, big data analysis, and deep learning. However, it consumes a great deal of power, creating thermal hot spots and putting pressure on the energy resources of a mobile device. In this paper, we propose a novel framework that integrates two well-known low-power techniques, DPM (Dynamic Power Management) and DVFS (Dynamic Voltage and Frequency Scaling), for energy efficiency in multicore mobile systems. The key feature of the proposed framework is adaptability. By monitoring online resource usage such as CPU utilization and power consumption, the framework can orchestrate diverse DPM and DVFS policies according to workload characteristics. Experiments on real implementations using three mobile devices have shown that it can reduce power consumption by 22% to 79% while negligibly affecting the performance of workloads.
APA, Harvard, Vancouver, ISO, and other styles
22

Chen, Jing, and Yu Yung Ke. "A Dynamic Power Management Mechanism for Embedded System with Micro-Kernel Operating System." Applied Mechanics and Materials 325-326 (June 2013): 916–21. http://dx.doi.org/10.4028/www.scientific.net/amm.325-326.916.

Full text
Abstract:
As embedded mobile devices prevail in our daily life, energy saving has emerged as a very important issue and power management techniques are desirable. This paper presents the design and implementation of a dynamic power management mechanism (DPMM) for embedded systems running micro-kernel operating systems. The DPMM is composed of a policy manager, a DVFS (Dynamic Voltage Frequency Scaling) controller, a DPM (Dynamic Power Management) server, resource management flags, and a DPM library. They are implemented to execute on an embedded system platform equipped with an XScale PXA270 processor and various I/O interfaces or devices, running the Zinix micro-kernel operating system. Testing results indicate that this DPMM effectively achieves power management in the system.
APA, Harvard, Vancouver, ISO, and other styles
23

Kim, Young-Joo, Jong-Soo Seok, YungJoon Jung, and Ok-Kyoon Ha. "Light-Weight and Versatile Monitor for a Self-Adaptive Software Framework for IoT Systems." Journal of Sensors 2016 (2016): 1–8. http://dx.doi.org/10.1155/2016/8085407.

Full text
Abstract:
Today, various Internet of Things (IoT) devices and applications are being developed. Such IoT devices have different hardware (HW) and software (SW) capabilities; therefore, most applications require customization when IoT devices are changed or new applications are created. However, the applications executed on these devices are not optimized for power and performance because IoT device systems do not provide suitable static and dynamic information about fast-changing system resources and applications. Therefore, this paper proposes a light-weight and versatile monitor for a self-adaptive software framework to automatically control system resources according to the system status. The monitor helps running applications guarantee low power consumption and high performance for an optimal environment. The proposed monitor has two components: a monitoring component, which provides real-time static and dynamic information about system resources and applications, and a controlling component, which supports real-time control of system resources. For the experimental verification, we created a video transport system based on IoT devices and measured the CPU utilization by dynamic voltage and frequency scaling (DVFS) for the monitor. The results demonstrate that, for up to 50 monitored processes, the monitor shows an average CPU utilization of approximately 4% in the three DVFS modes and demonstrates maximum optimization in the Performance mode of DVFS.
APA, Harvard, Vancouver, ISO, and other styles
24

Gu, Hao Lin, Wei Wei Shan, Yun Fan Yu, and Yin Chao Lu. "Design and CMOS Implementation of a Low Power System-on-Chip Integrated Circuit." Applied Mechanics and Materials 121-126 (October 2011): 755–59. http://dx.doi.org/10.4028/www.scientific.net/amm.121-126.755.

Full text
Abstract:
A low-power 32-bit microcontroller using different kinds of low-power techniques to adapt to the dynamically changing performance demands and power consumption constraints of battery-powered applications is designed and tested. Four power domains and six power modes are designed to fulfill low-power targets and meet different functional requirements. A variety of low-power methods, such as dynamic voltage and frequency scaling (DVFS), multiple supply voltages (MSV), and power gating (PG), is applied. A novel zero steady-state-current POR circuit, which performs excellently in the chip’s OFF mode, is also integrated. The SoC occupies 20 mm² in a 0.18 µm, 1.8 V nominal-supply CMOS process. Test results show that the microcontroller works normally at a frequency of 70 MHz and performs well in the different power modes, yet it consumes only 1.67 µA of leakage current in the OFF mode.
APA, Harvard, Vancouver, ISO, and other styles
25

Gao, Nan, Cheng Xu, Xin Peng, Haibo Luo, Wufei Wu, and Guoqi Xie. "Energy-Efficient Scheduling Optimization for Parallel Applications on Heterogeneous Distributed Systems." Journal of Circuits, Systems and Computers 29, no. 13 (March 25, 2020): 2050203. http://dx.doi.org/10.1142/s0218126620502035.

Full text
Abstract:
Designing energy-efficient scheduling algorithms on heterogeneous distributed systems is increasingly becoming the focus of research. State-of-the-art works have studied scheduling by combining dynamic voltage and frequency scaling (DVFS) technology and turning off the appropriate processors to reduce dynamic and static energy consumptions. However, the methods for turning off processors are ineffective. In this study, we propose a novel method to assign priorities to processors for facilitating effective selection of turned-on processors to decrease static energy consumption. An energy-efficient scheduling algorithm based on bisection (ESAB) is proposed on this basis, and this algorithm directly turns on the most energy-efficient processors depending on the idea of bisection to reduce static energy consumption while dynamic energy consumption is decreased by using DVFS technology. Experiments are performed on fast Fourier transform, Gaussian elimination, and randomly generated parallel applications. Results show that our ESAB algorithm makes a better trade-off between reducing energy consumption and low computation time of task assignment (CTTA) than existing algorithms under different scale conditions, deadline constraints, and degrees of parallelism and heterogeneity.
APA, Harvard, Vancouver, ISO, and other styles
26

Chen, Jing, Madhavan Manivannan, Mustafa Abduljabbar, and Miquel Pericàs. "ERASE: Energy Efficient Task Mapping and Resource Management for Work Stealing Runtimes." ACM Transactions on Architecture and Code Optimization 19, no. 2 (June 30, 2022): 1–29. http://dx.doi.org/10.1145/3510422.

Full text
Abstract:
Parallel applications often rely on work stealing schedulers in combination with fine-grained tasking to achieve high performance and scalability. However, reducing the total energy consumption in the context of work stealing runtimes is still challenging, particularly when using asymmetric architectures with different types of CPU cores. A common approach for energy savings involves dynamic voltage and frequency scaling (DVFS) wherein throttling is carried out based on factors like task parallelism, stealing relations, and task criticality. This article makes the following observations: (i) leveraging DVFS on a per-task basis is impractical when using fine-grained tasking and in environments with cluster/chip-level DVFS; (ii) task moldability, wherein a single task can execute on multiple threads/cores via work-sharing, can help to reduce energy consumption; and (iii) mismatch between tasks and assigned resources (i.e., core type and number of cores) can detrimentally impact energy consumption. In this article, we propose EneRgy Aware SchedulEr (ERASE), an intra-application task scheduler on top of work stealing runtimes that aims to reduce the total energy consumption of parallel applications. It achieves energy savings by guiding scheduling decisions based on per-task energy consumption predictions of different resource configurations. In addition, ERASE is capable of adapting to both given static frequency settings and externally controlled DVFS. Overall, ERASE achieves up to 31% energy savings and improves performance by 44% on average, compared to the state-of-the-art DVFS-based schedulers.
APA, Harvard, Vancouver, ISO, and other styles
27

BENOIT, ANNE, RAMI MELHEM, PAUL RENAUD-GOUD, and YVES ROBERT. "ASSESSING THE PERFORMANCE OF ENERGY-AWARE MAPPINGS." Parallel Processing Letters 23, no. 02 (June 2013): 1340003. http://dx.doi.org/10.1142/s0129626413400033.

Full text
Abstract:
We aim at mapping streaming applications that can be modeled by a series-parallel graph onto a 2-dimensional tiled chip multiprocessor (CMP) architecture. The objective of the mapping is to minimize the energy consumption, using dynamic voltage and frequency scaling (DVFS) techniques, while maintaining a given level of performance, reflected by the rate of processing the data streams. This mapping problem turns out to be NP-hard, and several heuristics are proposed. We assess their performance through comprehensive simulations using the StreamIt workflow suite and randomly generated series-parallel graphs, and various CMP grid sizes.
APA, Harvard, Vancouver, ISO, and other styles
28

Song, Minseok, and Jinhan Park. "A Dynamic Programming Solution for Energy-Optimal Video Playback on Mobile Devices." Mobile Information Systems 2016 (2016): 1–10. http://dx.doi.org/10.1155/2016/1042525.

Full text
Abstract:
Due to the development of mobile technology and the wide availability of smartphones, the Internet of Things (IoT) has started to handle high volumes of video data to facilitate multimedia-based services, which requires energy-efficient video playback. In video playback, frames have to be decoded and rendered at a high playback rate, increasing the computational cost on the CPU. To save CPU power, dynamic voltage and frequency scaling (DVFS) dynamically adjusts the operating voltage of the processor along with its frequency; an appropriate selection of frequency can achieve a balance between performance and power. We present a decoding model that allows buffering frames to let the CPU run at low frequency, and then propose an algorithm that determines the CPU frequency needed to decode each frame in a video, with the aim of minimizing power consumption while meeting buffer size and deadline constraints, using a dynamic programming technique. We finally extend this algorithm to optimize CPU frequencies over a short sequence of frames, producing a practical method of reducing the energy required for video decoding. Experimental results show a system-wide reduction in energy of 27%, compared with a processor running at full speed.
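A toy version of the per-frame dynamic program can illustrate the constraints involved (the cycle counts, frequency/power table, buffer size, and time discretization are all hypothetical, and the buffer constraint is handled in a simplified way; this is not the authors' formulation):

    import math

    # Hypothetical per-frame decode cycle counts and platform parameters.
    CYCLES = [4e6, 9e6, 6e6, 12e6, 5e6, 8e6]        # cycles per frame
    LEVELS = {0.2e9: 0.08, 0.4e9: 0.25, 0.6e9: 0.55, 0.8e9: 1.00}   # Hz -> W
    PERIOD = 1 / 30                                  # display period (30 fps)
    BUFFER = 2                                       # decoded frames the buffer can hold
    STEP = PERIOD / 20                               # time discretization for the DP

    def plan_frequencies():
        """Per-frame frequency choice minimizing decode energy, subject to each frame
        finishing by its display deadline and not overflowing the frame buffer."""
        dp = {0: (0.0, [])}          # discretized completion-time slot -> (energy, plan)
        for i, cycles in enumerate(CYCLES):
            deadline = (i + 1) * PERIOD
            earliest = max(0.0, (i + 1 - BUFFER) * PERIOD)   # simplified buffer constraint
            nxt = {}
            for slot_prev, (energy, plan) in dp.items():
                t_prev = slot_prev * STEP
                for f, power in LEVELS.items():
                    start = max(t_prev, earliest)
                    finish = start + cycles / f
                    if finish > deadline:
                        continue
                    slot = math.ceil(finish / STEP)
                    cand = (energy + power * cycles / f, plan + [f])
                    if slot not in nxt or cand[0] < nxt[slot][0]:
                        nxt[slot] = cand
            dp = nxt
        return min(dp.values())

    energy, freqs = plan_frequencies()
    print(f"{energy * 1000:.1f} mJ at", [f"{f/1e9:.1f} GHz" for f in freqs])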
APA, Harvard, Vancouver, ISO, and other styles
29

Dey, Somdip, Samuel Isuwa, Suman Saha, Amit Kumar Singh, and Klaus McDonald-Maier. "CPU-GPU-Memory DVFS for Power-Efficient MPSoC in Mobile Cyber Physical Systems." Future Internet 14, no. 3 (March 14, 2022): 91. http://dx.doi.org/10.3390/fi14030091.

Full text
Abstract:
Most modern mobile cyber-physical systems such as smartphones come equipped with multi-processor systems-on-chip (MPSoCs) with variant computing capacity both to cater to performance requirements and reduce power consumption when executing an application. In this paper, we propose a novel approach to dynamic voltage and frequency scaling (DVFS) on CPU, GPU and RAM in a mobile MPSoC, which caters to the performance requirements of the executing application while consuming low power. We evaluate our methodology on a real hardware platform, Odroid XU4, and the experimental results prove the approach to be 26% more power-efficient and 21% more thermal-efficient compared to the state-of-the-art system.
APA, Harvard, Vancouver, ISO, and other styles
30

Al-Tarawneh, Mutaz, Ziyad Ahmed Al Tarawneh, and Saif E. A. Alnawayseh. "A CPU-Guided Dynamic Voltage and Frequency Scaling (DVFS) of Off-Chip Buses in Homogenous Multicore Processors." International Review on Computers and Software (IRECOS) 10, no. 7 (July 31, 2015): 735. http://dx.doi.org/10.15866/irecos.v10i7.6742.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Kaur, Nirmal, Savina Bansal, and Rakesh Kumar Bansal. "Survey on energy efficient scheduling techniques on cloud computing." Multiagent and Grid Systems 17, no. 4 (March 7, 2022): 351–66. http://dx.doi.org/10.3233/mgs-220357.

Full text
Abstract:
With ever-growing technical advances, the performance of complex scientific and engineering applications has reached the petaflops and exaflops range. However, the massive power drawn by large-scale computing infrastructure has caused a commensurate rise in electricity consumption, escalating data center ownership costs and leaving a substantial carbon footprint. Judicious scheduling of complex applications with the objective of reducing overall makespan and energy consumption has become one of the biggest challenges in the realm of computing architectures. This paper presents a survey of energy-efficient scheduling algorithms based on dynamic voltage and frequency scaling (DVFS) and dynamic power management (DPM) techniques. The parameters considered are mainly the makespan, processor energy consumption (dynamic and static), and network (communication) energy consumption, wherever appropriate during task scheduling.
APA, Harvard, Vancouver, ISO, and other styles
32

Yang, Hoeseok, and Soonhoi Ha. "Power Optimization of Multimode Mobile Embedded Systems with Workload-Delay Dependency." Mobile Information Systems 2016 (2016): 1–10. http://dx.doi.org/10.1155/2016/2010837.

Full text
Abstract:
This paper proposes to take the relationship between delay and workload into account in the power optimization of microprocessors in mobile embedded systems. Since the components outside a device continuously change their values or properties, the workload to be handled by the system becomes dynamic and variable. In this paper, this variable workload is formulated as a staircase function of the delay taken in the previous iteration and applied to the power optimization of DVFS (dynamic voltage-frequency scaling). In doing so, a graph representation of all possible workload/mode changes during the lifetime of a device, the Workload Transition Graph (WTG), is proposed. The power optimization problem is then transformed into finding a cycle (closed walk) in the WTG which minimizes the average power consumption over it. From the obtained optimal cycle of the WTG, one can derive the optimal power management policy of the target device. It is shown that the proposed policy is valid for both continuous and discrete DVFS models. The effectiveness of the proposed power optimization policy is demonstrated with simulation results on synthetic and real-life examples.
APA, Harvard, Vancouver, ISO, and other styles
33

Qiu, Weiming, Yonghao Chen, Dihu Chen, Tao Su, and Simei Yang. "Run-Time Hierarchical Management of Mapping, Per-Cluster DVFS and Per-Core DPM for Energy Optimization." Electronics 11, no. 7 (March 30, 2022): 1094. http://dx.doi.org/10.3390/electronics11071094.

Full text
Abstract:
Heterogeneous cluster-based multi/many-core systems (e.g., ARM big.LITTLE, supporting dynamic voltage and frequency scaling (DVFS) at cluster level and dynamic power management (DPM) at core level) have attracted much attention to optimize energy on modern embedded systems. For concurrently executing applications on such a platform, this paper aims to study how to appropriately apply the three system configurations (mapping, DVFS, and DPM) to reduce both dynamic and static energy. To this end, this paper first formulates the dependence of the three system configurations on heterogeneous cluster-based systems as a 0–1 integrated linear programming (ILP) model, taking into account run-time configuration overheads (e.g., costs of DPM mode switching and task migration). Then, with the 0–1 ILP model, different run-time strategies (e.g., considering the three configurations in fully separate, partially separate, and holistic manners) are compared based on a hierarchical management structure and design-time prepared data. Experimental case studies offer insights into the effectiveness of different management strategies on different platform sizes (e.g., #cluster × #core, 2 × 4, 2 × 8, 4 × 4, 4 × 8), in terms of application migration, energy efficiency, resource efficiency, and complexity.
APA, Harvard, Vancouver, ISO, and other styles
34

Li, Tiefeng, Caiwen Ma, and WenHua Li. "The System Power Control Unit Based on the On-Chip Wireless Communication System." Scientific World Journal 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/939254.

Full text
Abstract:
Currently, an on-chip wireless communication system (OWCS) includes 2nd-generation (2G), 3rd-generation (3G), and long-term evolution (LTE) communication subsystems. To improve the power consumption of the OWCS, a typical architecture design of a system power control unit (SPCU) is given in this paper, which can not only make the 2G, 3G, and LTE subsystems enter sleep mode but can also wake them up from sleep mode via an interrupt. During the sleep period, either the real-time sleep timer or the global system for mobile communications (GSM) sleep timer can be used individually to wake the corresponding subsystem. In contrast to the single voltage supply of previous OWCS designs, a 2G, 3G, or LTE subsystem can be independently configured with three different voltages and frequencies in normal work mode. In the meantime, the voltage supply monitor, an important part of the SPCU, guards the voltage of the OWCS in real time. Finally, the SPCU may implement dynamic voltage and frequency scaling (DVFS) for a 2G, 3G, or LTE subsystem, which is accomplished automatically by the hardware.
APA, Harvard, Vancouver, ISO, and other styles
35

Jiang, Cong Feng, Ying Hui Zhao, and Jian Wan. "Towards Dynamic Voltage/Frequency Scaling for Power Reduction in Data Centers." Applied Mechanics and Materials 20-23 (January 2010): 1148–56. http://dx.doi.org/10.4028/www.scientific.net/amm.20-23.1148.

Full text
Abstract:
Higher power consumption in data centers results in more heat dissipation and higher cooling costs, and degrades system reliability. Conventional power reduction techniques such as dynamic voltage/frequency scaling (DVS/DFS) have disadvantages when they are ported to current data centers with virtualization deployments. In this paper, we give a short survey and discussion of some issues and aspects of DVS/DFS in data centers. This paper also presents a simple comparison of four power management schemes in virtualization environments.
APA, Harvard, Vancouver, ISO, and other styles
36

Park, Jurn-Gyu, Nikil Dutt, and Sung-Soo Lim. "An Interpretable Machine Learning Model Enhanced Integrated CPU-GPU DVFS Governor." ACM Transactions on Embedded Computing Systems 20, no. 6 (November 30, 2021): 1–28. http://dx.doi.org/10.1145/3470974.

Full text
Abstract:
Modern heterogeneous CPU-GPU-based mobile architectures, which execute intensive mobile gaming/graphics applications, use software governors to achieve high performance with energy-efficiency. However, existing governors typically utilize simple statistical or heuristic models, assuming linear relationships using a small unbalanced dataset of mobile games; and the limitations result in high prediction errors for dynamic and diverse gaming workloads on heterogeneous platforms. To overcome these limitations, we propose an interpretable machine learning (ML) model enhanced integrated CPU-GPU governor: (1) It builds tree-based piecewise linear models (i.e., model trees) offline considering both high accuracy (low error) and interpretable ML models based on mathematical formulas using a simulatability operation counts quantitative metric. And then (2) it deploys the selected models for online estimation into an integrated CPU-GPU Dynamic Voltage Frequency Scaling governor. Our experiments on a test set of 20 mobile games exhibiting diverse characteristics show that our governor achieved significant energy efficiency gains of over 10% (up to 38%) improvements on average in energy-per-frame with a surprising-but-modest 3% improvement in Frames-per-Second performance, compared to a typical state-of-the-art governor that employs simple linear regression models.
APA, Harvard, Vancouver, ISO, and other styles
37

Fanfakhri, Ahmed Badri Muslim, Ali Yakoob Yousif, and Esraa Alwan. "Multi-objective Optimization of Grid Computing for Performance, Energy and Cost." Kurdistan Journal of Applied Research 2, no. 3 (August 27, 2017): 74–79. http://dx.doi.org/10.24017/science.2017.3.31.

Full text
Abstract:
In this paper, a new multi-objective optimization algorithm is proposed. It simultaneously optimizes the execution time, the energy consumption, and the cost of the booked nodes in the grid architecture. The proposed algorithm selects the best frequencies based on a new optimization function that optimizes these three objectives while giving an equivalent trade-off for each one. Dynamic voltage and frequency scaling (DVFS) is used to reduce the energy consumption of the message-passing parallel iterative method executed over the grid. However, DVFS also reduces the computing power of each processor executing the parallel application; therefore, the performance of the application decreases and the cost paid for the booked nodes increases. The proposed multi-objective algorithm nevertheless gives the minimum energy consumption and minimum cost with maximum performance at the same time. The proposed algorithm is evaluated on the SimGrid/SMPI simulator while running the parallel iterative Jacobi method. The experiments show that it reduces energy consumption by up to 19.7% on average, while limiting the performance and cost degradations to 3.2% and 5.2%, respectively.
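A small sketch of the kind of frequency-selection loop such a trade-off function implies (the scoring function and the time, power, and cost models below are simple placeholders of my own, not the formulation used in the paper):

    FACTORS = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5]     # frequency relative to f_max
    NODES, COST_PER_NODE_HOUR = 16, 0.5           # hypothetical booking price

    def evaluate(s, t_comp=60.0, t_comm=40.0, p_dyn=80.0, p_static=20.0):
        """Toy models: compute time scales with 1/s, communication time does not;
        dynamic power scales cubically with the frequency factor."""
        time = t_comp / s + t_comm
        energy = NODES * (p_dyn * s**3 * (t_comp / s) + p_static * time)
        cost = NODES * (time / 3600.0) * COST_PER_NODE_HOUR
        return time, energy, cost

    def best_factor():
        t0, e0, c0 = evaluate(1.0)
        scores = {}
        for s in FACTORS:
            t, e, c = evaluate(s)
            # Placeholder equal-weight trade-off: normalized energy savings against the
            # average of the normalized time and cost degradations.
            scores[s] = (e0 - e) / e0 - ((t - t0) / t0 + (c - c0) / c0) / 2
        return max(scores, key=scores.get)

    print("selected scaling factor:", best_factor())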
APA, Harvard, Vancouver, ISO, and other styles
38

Ren, Yi Cheng, Junichi Suzuki, Shingo Omura, and Ryuichi Hosoya. "Leveraging Active-Guided Evolutionary Games for Adaptive and Stable Deployment of DVFS-Aware Cloud Applications." International Journal of Software Engineering and Knowledge Engineering 25, no. 05 (June 2015): 851–70. http://dx.doi.org/10.1142/s0218194015400239.

Full text
Abstract:
This paper proposes and evaluates a multi-objective evolutionary game theoretic framework for adaptive and stable application deployment in clouds that support dynamic voltage and frequency scaling (DVFS) for CPUs. The proposed algorithm, called AGEGT, aids cloud operators to adapt the resource allocation to applications and their locations according to the operational conditions in a cloud (e.g. workload and resource availability) with respect to multiple conflicting objectives such as response time, resource utilization and power consumption. In AGEGT, evolutionary multiobjective games are performed on application deployment strategies (i.e. solution candidates) with an aid of guided local search. AGEGT theoretically guarantees that each application performs an evolutionarily stable deployment strategy, which is an equilibrium solution under given operational conditions. Simulation results verify this theoretical analysis; applications seek equilibria to perform adaptive and evolutionarily stable deployment strategies. AGEGT allows applications to successfully leverage DVFS to balance their response time, resource utilization and power consumption. AGEGT gains performance improvement via guided local search and outperforms existing heuristics such as first-fit and best-fit algorithms (FFA and BFA) as well as NSGA-II.
APA, Harvard, Vancouver, ISO, and other styles
39

Yassa, Sonia, Rachid Chelouah, Hubert Kadima, and Bertrand Granado. "Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments." Scientific World Journal 2013 (2013): 1–13. http://dx.doi.org/10.1155/2013/350934.

Full text
Abstract:
We address the problem of scheduling workflow applications on heterogeneous computing systems such as cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds and to present a hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different voltage supply levels by sacrificing clock frequency; the use of multiple voltages involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
40

Rossi, Daniele, and Vasileios Tenentes. "Run-Time Thermal Management for Lifetime Optimization in Low-Power Designs." Electronics 11, no. 3 (January 29, 2022): 411. http://dx.doi.org/10.3390/electronics11030411.

Full text
Abstract:
In this paper, the magnitude of the temperature and stress variability of dynamic voltage and frequency scaling (DVFS) designs is analyzed, and their impact on the bias temperature instability (BTI) degradation and lifetime of DVFS designs is assessed. For this purpose, a design-time evaluation framework for BTI degradation was developed that considers the statistical workload and die temperature profiles of the DVFS operating modes. The analysis shows that, together with high stress variability, DVFS designs exhibit even higher temperature variability, depending on the workload and the utilized operating modes, and that the impact of temperature variability on lifetime can be up to 2× higher than that due to stress. To account for the detrimental effect of temperature variability on aging, a run-time thermal management system is proposed that honors the desired lifetime constraint by properly selecting the temperature constraints that govern the utilized operating modes. The proposed run-time system was applied to the Ethernet circuit, the largest benchmark from the IWLS 2005 suite, synthesized with a 32 nm CMOS technology. It estimates lifetime and performance, and their trade-off, with up to 35.8% and 26.3% higher accuracy, respectively, compared to a system that ignores temperature variability and accounts for the average temperature only. The proposed framework can be used to tune the run-time throttling policies of low-power designs, allowing designers to optimize lifetime-performance trade-offs according to the requirements mandated by specific applications and operating environments.
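
The paper's BTI model and run-time policy are not reproduced here; the sketch below only illustrates the underlying idea of translating a lifetime target into a temperature cap that then gates which DVFS operating modes may be used. The exponential aging-rate model, the reference values and the mode table are assumptions for illustration.

    import math

    # Candidate DVFS operating modes: (name, relative performance, steady-state temperature in C).
    MODES = [("turbo", 1.00, 95.0), ("nominal", 0.85, 80.0), ("eco", 0.60, 65.0)]

    def max_temperature_for_lifetime(target_years, base_lifetime_years=10.0,
                                     t_ref=65.0, accel_per_10c=2.0):
        """Highest temperature whose assumed aging rate still meets the target.
        Assumption: degradation accelerates by accel_per_10c for every 10 C
        above t_ref, and the lifetime at t_ref is base_lifetime_years."""
        max_rate = base_lifetime_years / target_years
        return t_ref + 10.0 * math.log(max_rate, accel_per_10c)

    def pick_mode(target_years):
        """Fastest mode whose steady-state temperature stays under the cap."""
        cap = max_temperature_for_lifetime(target_years)
        allowed = [m for m in MODES if m[2] <= cap]
        return max(allowed, key=lambda m: m[1]) if allowed else MODES[-1]

    print(pick_mode(target_years=7.0))   # -> ('eco', 0.6, 65.0) under these assumptions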
APA, Harvard, Vancouver, ISO, and other styles
41

Silva, Vitor Ramos Gomes da, Carlos Valderrama, Pierre Manneback, and Samuel Xavier-de-Souza. "Analytical Energy Model Parametrized by Workload, Clock Frequency and Number of Active Cores for Share-Memory High-Performance Computing Applications." Energies 15, no. 3 (February 7, 2022): 1213. http://dx.doi.org/10.3390/en15031213.

Full text
Abstract:
Energy consumption is crucial in high-performance computing (HPC), especially to enable the next exascale generation. Hence, modern systems implement various hardware and software features for power management. Nonetheless, due to numerous different implementations, we can always push the limits of software to achieve the most efficient use of our hardware. To be energy efficient, the software relies on dynamic voltage and frequency scaling (DVFS), as well as dynamic power management (DPM). Yet, none have privileged information on the hardware architecture and application behavior, which may lead to energy-inefficient software operation. This study proposes analytical modeling for architecture and application behavior that can be used to estimate energy-optimal software configurations and provide knowledgeable hints to improve DVFS and DPM techniques for single-node HPC applications. Additionally, model parameters, such as the level of parallelism and dynamic power, provide insights into how the modeled application consumes energy, which can be helpful for energy-efficient software development and operation. This novel analytical model takes the number of active cores, the operating frequencies, and the input size as inputs to provide energy consumption estimation. We present the modeling of 13 parallel applications employed to determine energy-optimal configurations for several different input sizes. The results show that up to 70% of energy could be saved in the best scenario compared to the default Linux choice and 14% on average. We also compare the proposed model with standard machine-learning modeling concerning training overhead and accuracy. The results show that our approach generates about 10 times less energy overhead for the same level of accuracy.
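
The abstract does not spell out the model's functional form, so the sketch below only illustrates the kind of parametrization it describes: an Amdahl-style runtime term in the number of cores, frequency and input size, combined with a static-plus-dynamic power term, searched exhaustively for the energy-optimal configuration. All coefficients and the model shape are assumptions, not the authors' fitted model.

    import itertools

    def exec_time(cores, freq, input_size, serial_frac=0.05, work_per_unit=1e-3):
        """Assumed Amdahl-style runtime: work grows with input size, the
        parallel part splits across cores, and everything scales with 1/f."""
        work = work_per_unit * input_size
        return (serial_frac + (1.0 - serial_frac) / cores) * work / freq

    def energy(cores, freq, input_size, p_static=10.0, p_dyn_coeff=2.0):
        """Assumed power model: constant static power plus per-core dynamic
        power growing with f^3 (from the V^2 * f dependence under DVFS)."""
        power = p_static + cores * p_dyn_coeff * freq ** 3
        return power * exec_time(cores, freq, input_size)

    def energy_optimal_config(core_counts, freqs, input_size):
        """Exhaustive search for the (cores, frequency) pair minimizing energy."""
        return min(itertools.product(core_counts, freqs),
                   key=lambda cfg: energy(cfg[0], cfg[1], input_size))

    print(energy_optimal_config(range(1, 9), [1.0, 1.5, 2.0, 2.5], input_size=1e6))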
APA, Harvard, Vancouver, ISO, and other styles
42

Mohammed, Mohammed Sultan, Ali A. M. Al-Kubati, Norlina Paraman, Ab Al-Hadi Ab Rahman, and M. N. Marsono. "DTaPO: Dynamic Thermal-Aware Performance Optimization for Dark Silicon Many-Core Systems." Electronics 9, no. 11 (November 23, 2020): 1980. http://dx.doi.org/10.3390/electronics9111980.

Full text
Abstract:
Future many-core systems need to handle high power density and chip temperature effectively. Some cores in many-core systems need to be turned off, or kept 'dark', to manage chip power and thermal density; this is known as the dark silicon problem. It prevents many-core systems from utilizing their large number of processing cores and gaining performance from them. This paper presents DTaPO, a dynamic thermal-aware performance optimization technique for dark silicon many-core systems that optimizes performance under a temperature constraint. The proposed technique utilizes both task migration and dynamic voltage and frequency scaling (DVFS) to optimize the performance of a many-core system while keeping the system temperature within a safe operating limit. Task migration puts hot cores into low-power states and moves tasks to cooler dark cores to aggressively reduce chip temperature while maintaining high overall system performance. To reduce the migration overhead due to cold starts, the source core (i.e., the active core) keeps its L2 cache content during the initial migration phase so that the destination core (i.e., the dark core) can access it and reduce the impact of cold-start misses. Moreover, the proposed technique limits task migration to cores that share the last-level cache (LLC). In the case of a major thermal violation when no cooler core is available, DVFS is used to reduce the hot cores' temperature gradually by lowering their frequency. Experimental results for different threshold temperatures show that DTaPO keeps the average system temperature below the thermal limit, reduces the execution-time penalty by up to 18% compared with using only DVFS for all thermal thresholds, and lowers the average peak temperature by up to 10.8°C. In addition, DTaPO improves system performance by up to 80% compared to optimal sprinting patterns (OSP) and reduces the temperature by up to 13.6°C.
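
A minimal sketch of the decision logic described above, under assumed data structures: on a thermal violation, the manager first looks for a cooler dark core behind the same last-level cache and migrates the task there; only if no such core exists does it fall back to gradually lowering the hot core's frequency. The dictionary layout, the frequency step and the floor value are illustrative assumptions.

    def manage_hot_core(hot_core, cores, thermal_limit, dvfs_step=0.2, freq_floor=0.4):
        """Illustrative DTaPO-style decision: migrate first, throttle as a fallback.
        `cores` maps core_id -> {"temp": C, "active": bool, "llc": cluster_id, "freq": GHz}."""
        if cores[hot_core]["temp"] <= thermal_limit:
            return ("no_action", hot_core)

        # Prefer a cooler dark core in the same LLC cluster to limit migration cost.
        candidates = [c for c, s in cores.items()
                      if not s["active"]
                      and s["llc"] == cores[hot_core]["llc"]
                      and s["temp"] < cores[hot_core]["temp"]]
        if candidates:
            coolest = min(candidates, key=lambda c: cores[c]["temp"])
            return ("migrate", coolest)

        # No cooler dark core available: reduce the hot core's frequency gradually.
        return ("throttle", max(cores[hot_core]["freq"] - dvfs_step, freq_floor))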
APA, Harvard, Vancouver, ISO, and other styles
43

Chen, Kuo Yi, Fuh Gwo Chen, and Jr Shian Chen. "A Cost-Effective Hardware Approach for Measuring Power Consumption of Modern Multi-Core Processors." Applied Mechanics and Materials 110-116 (October 2011): 4569–73. http://dx.doi.org/10.4028/www.scientific.net/amm.110-116.4569.

Full text
Abstract:
Multiple processor cores are now built within a single chip thanks to advanced VLSI technology. With decreasing prices, multi-core processors are widely deployed in both server and desktop systems. The workload of multi-threaded applications can be spread across different cores by multiple threads, so that application threads run concurrently and maximize the overall execution speed of the application. Moreover, following the green computing trend, most modern multi-core processors support dynamic frequency tuning; these power-level tuning techniques are based on Dynamic Voltage and Frequency Scaling (DVFS). To evaluate the performance of various power-saving approaches, an appropriate technique for measuring the power consumption of multi-core processors is important. However, most approaches estimate CPU power consumption only from CMOS power-consumption data and the CPU frequency; they capture only the dynamic power consumption of multi-core processors, while the static power consumption is not included. In this study, a hardware approach for measuring the power consumption of multi-core processors is proposed, so that the power consumption of a CPU can be measured precisely and the performance of CPU power-saving approaches can be evaluated properly.
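
The contrast drawn in the abstract can be illustrated with a short sketch: a model-based estimate covers only the dynamic part (a * C * V^2 * f), so a direct hardware measurement of the CPU supply rail is needed before the static (leakage) share can be separated out. The numeric values below are hypothetical.

    def dynamic_power(capacitance, voltage, frequency, activity=0.5):
        """Classic CMOS dynamic power estimate: P_dyn = a * C * V^2 * f."""
        return activity * capacitance * voltage ** 2 * frequency

    def static_power(measured_total_power, capacitance, voltage, frequency):
        """With a hardware measurement of total CPU power, the static (leakage)
        component is whatever the dynamic model cannot explain."""
        return measured_total_power - dynamic_power(capacitance, voltage, frequency)

    # Hypothetical reading of 42 W from a shunt/ADC on the CPU supply rail.
    print(static_power(measured_total_power=42.0,
                       capacitance=1.8e-8, voltage=1.1, frequency=2.4e9))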
APA, Harvard, Vancouver, ISO, and other styles
44

Rajput, Ravindra Kumar Singh, Dinesh Goyal, Anjali Pant, Gajanand Sharma, Varsha Arya, and Marjan Kuchaki Rafsanjani. "Cloud Data Centre Energy Utilization Estimation." International Journal of Cloud Applications and Computing 12, no. 1 (January 1, 2022): 1–16. http://dx.doi.org/10.4018/ijcac.311035.

Full text
Abstract:
Due to the growth of the internet and internet-based software applications, demand for cloud data centers has increased. Cloud data centers host thousands of servers working 24×7 for users, which results in enormous energy consumption for operating the data center. However, server utilization does not remain the same all the time, so, from the point of view of economic feasibility, energy management is an essential activity in cloud resource management. Well-known energy management techniques for cloud data centers include dynamic voltage and frequency scaling (DVFS), dynamic power management (DPM), and task-scheduling-based techniques. The present work is based on an analytical approach that integrates resource provisioning with sophisticated task scheduling; the authors estimate the energy utilization of cloud data centers using the iDR cloud simulator. The work is intended to optimize power consumption in the cloud data center.
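
As a rough illustration of the kind of estimate involved, the sketch below applies the widely used linear server power model (idle power plus a utilization-proportional part) to per-server utilization traces; the power figures, sampling interval and trace layout are assumptions, not the iDR simulator's model.

    def server_power(utilization, p_idle=100.0, p_max=250.0):
        """Linear server power model: idle power plus a utilization-proportional
        dynamic part (watts; assumed values)."""
        return p_idle + (p_max - p_idle) * utilization

    def datacenter_energy(utilization_traces, interval_s=300):
        """Total energy in joules over per-server utilization samples taken
        every `interval_s` seconds."""
        return sum(server_power(u) * interval_s
                   for trace in utilization_traces for u in trace)

    # Two servers, one hour of 5-minute samples each (hypothetical utilizations).
    traces = [[0.2] * 12, [0.8] * 12]
    print(datacenter_energy(traces) / 3.6e6, "kWh")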
APA, Harvard, Vancouver, ISO, and other styles
45

Haririan, Parham. "DVFS and Its Architectural Simulation Models for Improving Energy Efficiency of Complex Embedded Systems in Early Design Phase." Computers 9, no. 1 (January 7, 2020): 2. http://dx.doi.org/10.3390/computers9010002.

Full text
Abstract:
Dealing with resource constraints is an inevitable feature of embedded systems, with power and performance being the main concerns among others. Pre-silicon analysis of power and performance in today's complex embedded designs is a big challenge. Although RTL (Register-Transfer Level) models are more precise and reliable, system-level modeling enables the power and performance analysis of complex and dense designs in the early design phase. Virtual prototypes of systems prepared through architectural simulation provide a means of evaluating non-existing systems with more flexibility and minimum cost. Efficient interplay between power and performance is a key feature within virtual platforms. This article focuses on dynamic voltage and frequency scaling (DVFS), a well-known system-level low-power design technique, together with its more efficient implementations modeled through architectural simulation. With the advent of new computing paradigms and modern application domains with strict resource demands, DVFS and its efficient hardware-managed solutions become even more prominent, mainly because they can react faster to resource demands and thus reduce the induced overhead. To that end, they entail an effective collaboration between software and hardware. A case review at the end wraps up the discussed topics.
APA, Harvard, Vancouver, ISO, and other styles
46

Qin, Zhiwei, Juan Li, Wei Liu, and Xiao Yu. "Mobility-Aware and Energy-Efficient Task Offloading Strategy for Mobile Edge Workflows." Wuhan University Journal of Natural Sciences 27, no. 6 (December 2022): 476–88. http://dx.doi.org/10.1051/wujns/2022276476.

Full text
Abstract:
With the rapid growth of the Industrial Internet of Things (IIoT), Mobile Edge Computing (MEC) has become widely used in many emerging scenarios. In MEC, each workflow task can be executed locally or offloaded to the edge to help improve Quality of Service (QoS) and reduce energy consumption. However, most existing offloading strategies focus on independent applications and cannot be applied efficiently to workflow applications with a series of dependent tasks. To address this issue, this paper proposes an energy-efficient task offloading strategy for large-scale workflow applications in MEC. First, we formulate task offloading as an optimization problem whose goal is to minimize the utility cost, defined as the trade-off between energy consumption and total execution time. Then, a novel heuristic algorithm named Green DVFS-GA is proposed, which includes a task offloading step based on a genetic algorithm and a further step that reduces energy consumption using the Dynamic Voltage and Frequency Scaling (DVFS) technique. Experimental results show that the proposed strategy can significantly reduce energy consumption and achieves the best trade-off compared with other strategies.
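
A minimal sketch of the kind of utility cost such a strategy minimizes, under assumptions of my own: each task either runs locally or is offloaded, task dependencies are ignored for brevity, and the cost is a linear weighting of total energy and total time. The task tuple layout and the weight are illustrative, not the paper's formulation.

    def utility_cost(assignment, tasks, weight=0.5):
        """Weighted trade-off between total energy and total time for one
        offloading decision vector (1 = offload to the edge, 0 = run locally).
        `tasks` maps task index -> (local_time, local_energy, edge_time, tx_energy)."""
        total_time = total_energy = 0.0
        for task_id, offload in enumerate(assignment):
            local_t, local_e, edge_t, tx_e = tasks[task_id]
            if offload:
                total_time += edge_t      # executes remotely
                total_energy += tx_e      # the device only pays transmission energy
            else:
                total_time += local_t
                total_energy += local_e
        return weight * total_energy + (1.0 - weight) * total_time

    tasks = [(2.0, 3.0, 1.0, 0.5), (4.0, 6.0, 1.5, 0.8), (1.0, 1.5, 1.2, 0.6)]
    print(utility_cost([1, 1, 0], tasks))   # cost of one candidate offloading decision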
APA, Harvard, Vancouver, ISO, and other styles
47

Anselmi, Jonatha, Bruno Gaujal, and Louis-Sébastien Rebuffi. "Optimal Speed Profile of a DVFS Processor under Soft Deadlines." ACM SIGMETRICS Performance Evaluation Review 49, no. 3 (March 22, 2022): 71–72. http://dx.doi.org/10.1145/3529113.3529139.

Full text
Abstract:
Minimizing the energy consumption of embedded systems with real-time execution constraints is becoming more and more important. More functionality and better performance/cost trade-offs are expected from such systems because of the increased use of real-time applications and the fact that batteries are becoming standard power supplies. Dynamically changing the speed of the processor is a common and efficient way to reduce energy consumption, and remarkable gains can be obtained for cache-intensive and/or CPU-bound applications, where CPU energy consumption may dominate the overall energy consumption. In fact, this is the reason why modern processors are equipped with Dynamic Voltage and Frequency Scaling (DVFS) technology [7]. In the deterministic case where job sizes and arrival times are known, a vast literature has addressed the problem of designing both off-line and on-line algorithms that compute speed profiles minimizing energy consumption subject to hard real-time constraints (deadlines) on job execution times; e.g., [5]. In a stochastic environment where only statistical information about job sizes and arrival times is available, it turns out that combining hard deadlines and energy minimization via DVFS-based techniques is much more difficult; forcing hard deadlines requires being very conservative, i.e., considering the worst cases. As a matter of fact, existing approaches work with a finite number of jobs [6, 3].
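
As background for why speed profiles matter, the sketch below illustrates the standard deterministic argument: for a convex power function, a single job with known size and deadline is served most cheaply at the lowest constant speed that just meets the deadline, and any "rush then coast" profile costs more. The cubic power model is an assumption, and the paper's stochastic soft-deadline setting is not captured here.

    def power(speed, alpha=3.0):
        """Assumed convex power model P(s) = s^alpha (alpha around 3 under DVFS)."""
        return speed ** alpha

    def energy_constant_speed(work, deadline):
        """Energy at the slowest constant speed that exactly meets the deadline."""
        s = work / deadline
        return power(s) * deadline

    def energy_rush_then_coast(work, deadline, fast, split=0.5):
        """Energy when part of the job runs at a higher speed and the rest at
        whatever speed fills the remaining time; never cheaper for convex power."""
        t_fast = split * work / fast
        t_rest = deadline - t_fast
        return power(fast) * t_fast + power((1.0 - split) * work / t_rest) * t_rest

    print(energy_constant_speed(work=10.0, deadline=5.0))             # constant speed 2.0
    print(energy_rush_then_coast(work=10.0, deadline=5.0, fast=4.0))  # strictly higher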
APA, Harvard, Vancouver, ISO, and other styles
48

Fan, Kaijie, Biagio Cosenza, and Ben Juurlink. "Accurate Energy and Performance Prediction for Frequency-Scaled GPU Kernels." Computation 8, no. 2 (April 27, 2020): 37. http://dx.doi.org/10.3390/computation8020037.

Full text
Abstract:
Energy optimization is an increasingly important aspect of today’s high-performance computing applications. In particular, dynamic voltage and frequency scaling (DVFS) has become a widely adopted solution to balance performance and energy consumption, and hardware vendors provide management libraries that allow the programmer to change both memory and core frequencies manually to minimize energy consumption while maximizing performance. This article focuses on modeling the energy consumption and speedup of GPU applications while using different frequency configurations. The task is not straightforward, because of the large set of possible and uniformly distributed configurations and because of the multi-objective nature of the problem, which minimizes energy consumption and maximizes performance. This article proposes a machine learning-based method to predict the best core and memory frequency configurations on GPUs for an input OpenCL kernel. The method is based on two models for speedup and normalized energy predictions over the default frequency configuration. Those are later combined into a multi-objective approach that predicts a Pareto set of frequency configurations. Results show that our approach is very accurate at predicting extrema and the Pareto set, and finds frequency configurations that dominate the default configuration in either energy or performance.
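
The sketch below shows the final step such a method implies: once the two models have produced a predicted speedup and a normalized energy for every frequency configuration, the Pareto set is simply the subset of configurations not dominated in both objectives. The data layout and the example predictions are assumptions for illustration.

    def pareto_set(configs):
        """Configurations not dominated in (higher speedup, lower normalized energy).
        `configs` maps (core_freq, mem_freq) -> (predicted_speedup, predicted_energy)."""
        def dominates(a, b):
            (sa, ea), (sb, eb) = configs[a], configs[b]
            return sa >= sb and ea <= eb and (sa > sb or ea < eb)

        return [c for c in configs
                if not any(dominates(other, c) for other in configs if other != c)]

    predictions = {
        (1200, 3500): (1.00, 1.00),   # default configuration
        (1000, 3500): (0.92, 0.78),   # slower but much cheaper
        (1400, 2500): (1.05, 1.10),   # faster but more energy
        (900, 2500):  (0.70, 0.95),   # dominated: slower and barely cheaper
    }
    print(pareto_set(predictions))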
APA, Harvard, Vancouver, ISO, and other styles
49

Dabhade, Kiran Bhimrao, and C. M. Mankar. "An Optimization Framework of Adaptive Computing-plus-Communication for Multimedia Processing in Cloud: A Review." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 2872–76. http://dx.doi.org/10.22214/ijraset.2022.42887.

Full text
Abstract:
A clear trend in the evolution of network-based services is the ever-increasing amount of multimedia data involved. This trend towards big-data multimedia processing finds its natural placement alongside the adoption of the cloud computing paradigm, which looks like the most effective answer to the demands of the highly fluctuating workload that characterizes this kind of service. However, as cloud data centers become more and more powerful, energy consumption becomes a major challenge, both for environmental concerns and for economic reasons. An effective approach to improving energy efficiency in cloud data centers is to rely on traffic engineering techniques that dynamically adapt the number of active servers to the current workload. Towards this aim, we propose a joint computing-plus-communication optimization framework exploiting virtualization technologies. Our proposal specifically addresses the typical scenario of multimedia data processing with computationally intensive tasks and the exchange of a large volume of data. The proposed framework not only ensures users the required Quality of Service, but also achieves maximum energy saving and attains green cloud computing goals in a fully distributed fashion by utilizing DVFS-based CPU frequencies. Keywords: Energy efficiency, Multimedia data processing, Cloud resource management, Load balancing, Dynamic voltage and frequency scaling (DVFS), Traffic engineering
APA, Harvard, Vancouver, ISO, and other styles
50

Liu, Xing, Haiying Zhou, Jianwen Xiang, Shengwu Xiong, Kun Mean Hou, Christophe de Vaulx, Huan Wang, Tianhui Shen, and Qing Wang. "Energy and Delay Optimization of Heterogeneous Multicore Wireless Multimedia Sensor Nodes by Adaptive Genetic-Simulated Annealing Algorithm." Wireless Communications and Mobile Computing 2018 (2018): 1–13. http://dx.doi.org/10.1155/2018/7494829.

Full text
Abstract:
Energy efficiency and delay optimization are significant for the proliferation of wireless multimedia sensor networks (WMSN). In this article, an energy-efficient, delay-efficient, hardware and software co-optimization platform is investigated to minimize the energy cost while guaranteeing the deadlines of real-time WMSN tasks. First, a multicore reconfigurable WMSN hardware platform is designed and implemented. This platform uses both a heterogeneous multicore architecture and the dynamic voltage and frequency scaling (DVFS) technique, so that the nodes can adjust their hardware characteristics dynamically according to the software run-time context; consequently, the software can be executed more efficiently, with less energy cost and shorter execution time. Then, based on this hardware platform, an energy and delay multi-objective optimization algorithm and a DVFS adaptation algorithm are investigated. These algorithms aim to find the globally energy-optimal solution within an acceptable computation time and to strip the time redundancy from the task execution process. Thus, the energy efficiency of the WMSN node can be improved significantly even under strict execution-time constraints. Simulation and real-world experiments show that the proposed approaches can decrease the energy cost by more than 29% compared to a traditional single-core WMSN node. Moreover, the node can react quickly to time-sensitive events.
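
The sketch below shows only the simulated-annealing half of the idea on a toy DVFS problem: each task is assigned one of a few frequency levels, the cost mixes energy with a soft deadline penalty, and worse neighbours are accepted with a temperature-dependent probability. The genetic and adaptive parts of the paper's hybrid algorithm, and all numeric values, are not from the paper.

    import math
    import random

    FREQS = [0.6, 0.8, 1.0]          # available DVFS frequency levels (assumed, GHz)
    TASK_CYCLES = [2.0, 3.0, 1.5]    # per-task workload (assumed, Gcycles)
    DEADLINE = 8.0                   # overall soft deadline in seconds

    def cost(freq_idx):
        """Energy plus a penalty when the summed execution time misses the deadline."""
        time = sum(c / FREQS[i] for c, i in zip(TASK_CYCLES, freq_idx))
        energy = sum(c * FREQS[i] ** 2 for c, i in zip(TASK_CYCLES, freq_idx))
        return energy + 100.0 * max(0.0, time - DEADLINE)

    def neighbor(freq_idx):
        """Perturb one task's frequency level."""
        new = list(freq_idx)
        new[random.randrange(len(new))] = random.randrange(len(FREQS))
        return new

    def anneal(initial, t0=1.0, cooling=0.95, steps=500):
        """Plain simulated-annealing loop over frequency assignments."""
        current = best = initial
        temp = t0
        for _ in range(steps):
            candidate = neighbor(current)
            delta = cost(candidate) - cost(current)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                current = candidate
                if cost(current) < cost(best):
                    best = current
            temp *= cooling
        return best, cost(best)

    print(anneal([len(FREQS) - 1] * len(TASK_CYCLES)))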
APA, Harvard, Vancouver, ISO, and other styles