
Dissertations / Theses on the topic 'Energy efficiency in Data Centers'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Energy efficiency in Data Centers.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Baccour, Emna. "Network architectures and energy efficiency for high performance data centers." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCK009/document.

Full text
Abstract:
The growth of online services and the advent of big data have brought the Internet into every aspect of our lives: communication and information exchange (e.g., Gmail and Facebook), web search (e.g., Google), online shopping (e.g., Amazon), and video streaming (e.g., YouTube). All of these services are hosted in physical facilities called data centers, which store, manage, and provide fast access to the underlying data, and which may group together all of the equipment making up a company's information system (mainframes, servers, storage arrays, network and telecommunications equipment, etc.). This trend to migrate applications, computation, and storage into ever more robust systems has led to the emergence of mega data centers hosting tens of thousands of servers, raising challenges of scalability, performance, installation cost, reliability, energy consumption, heat dissipation, and maintenance. As a result, designing a data center network that interconnects this massive number of servers, and providing efficient and fault-tolerant routing services, has become an urgent need and is the challenge addressed in this thesis. Since this is an active research topic, many solutions have been proposed, such as new interconnection technologies and new routing algorithms for data centers. However, many of these solutions suffer from performance problems or can be quite costly, and comparatively little effort has been devoted to quality of service and power efficiency in data center networks.
To provide a novel solution that avoids the drawbacks of prior work while combining its advantages, we study the existing models, propose improvements, and develop new data center interconnection networks that aim to build a scalable, cost-effective, high-performance, and QoS-capable networking infrastructure, together with power-aware algorithms that make the network energy efficient. We particularly investigate the following issues: 1) defining the architectural and topological properties of the proposed data center networks, and evaluating their performance, construction cost, and robustness under faulty conditions; 2) designing routing, load-balancing, fault-tolerance, and power-efficient algorithms for these architectures, and examining their complexity and how well they satisfy the system requirements; 3) designing protocols and queue-management systems to integrate quality of service; and 4) comparing the proposed data centers and algorithms to existing solutions in realistic environments.
APA, Harvard, Vancouver, ISO, and other styles
2

Wu, Yongqiang. "Energy efficient virtual machine placement in data centers." Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/61408/1/Yongqiang_Wu_Thesis.pdf.

Full text
Abstract:
Electricity cost has become a major expense of running data centers, and server consolidation using virtualization has been adopted as an important technique for improving data center energy efficiency. In this research, a genetic algorithm and a simulated annealing algorithm are proposed for the static virtual machine placement problem, considering the energy consumed by both the servers and the communication network, and a trading algorithm is proposed for dynamic virtual machine placement. Experimental results show that the proposed methods are more energy efficient than existing solutions.
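The static placement problem described above is a bin-packing variant; the following minimal simulated-annealing sketch illustrates the general idea only (the server capacity, VM loads, and linear power model are invented here, not taken from the thesis):

```python
import math
import random

def placement_energy(assign, vm_load, idle_w=100.0, per_load_w=150.0):
    """Toy energy model: each active server draws idle power plus
    power proportional to its hosted CPU load."""
    host_load = {}
    for vm, host in enumerate(assign):
        host_load[host] = host_load.get(host, 0.0) + vm_load[vm]
    return sum(idle_w + per_load_w * load for load in host_load.values())

def feasible(assign, vm_load, cap):
    host_load = {}
    for vm, host in enumerate(assign):
        host_load[host] = host_load.get(host, 0.0) + vm_load[vm]
    return all(load <= cap for load in host_load.values())

def anneal_placement(vm_load, n_hosts, cap=1.0, steps=5000, t0=50.0, seed=1):
    """Simulated annealing over VM-to-host assignments."""
    rng = random.Random(seed)
    assign = [i % n_hosts for i in range(len(vm_load))]   # spread-out start
    cur_e = placement_energy(assign, vm_load)
    best, best_e = list(assign), cur_e
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9           # linear cooling
        cand = list(assign)
        cand[rng.randrange(len(vm_load))] = rng.randrange(n_hosts)
        if not feasible(cand, vm_load, cap):
            continue
        e = placement_energy(cand, vm_load)
        # Accept improvements always, worse moves with Boltzmann probability.
        if e < cur_e or rng.random() < math.exp((cur_e - e) / temp):
            assign, cur_e = cand, e
            if e < best_e:
                best, best_e = list(cand), e
    return best, best_e

best, best_e = anneal_placement([0.3, 0.3, 0.2, 0.2], n_hosts=4)
```

On this toy instance all four VMs fit on one host, so the search should consolidate them and report an energy well below the 550 W of the spread-out starting assignment.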
APA, Harvard, Vancouver, ISO, and other styles
3

Lindberg, Therese. "Modelling and Evaluation of Distributed Airflow Control in Data Centers." Thesis, Karlstads universitet, Institutionen för ingenjörsvetenskap och fysik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-36479.

Full text
Abstract:
In this work, a suggested method to reduce the energy consumption of the cooling system in a data center is modelled and evaluated. Different approaches to distributed airflow control are introduced, in which different amounts of airflow can be supplied to different parts of the data center (instead of an even airflow distribution). Two kinds of distributed airflow control are compared to a traditional approach without airflow control; the two control approaches differ in the type of server rack used, either traditional racks or a new kind of rack with vertically placed servers. A model capable of describing the power consumption of the data center cooling system under these different airflow-control approaches was constructed. Based on the model, MATLAB simulations of three different server workload scenarios were then carried out. Introducing distributed airflow control reduced the power consumption in all scenarios, and the control approach with the new kind of rack gave the largest reduction: the power consumption of the cooling system could be reduced to 60-69% of the initial consumption, depending on the workload scenario. Also examined were the effects of different parameters and process variables (quantities held fixed with the help of feedback loops) on the data center, as well as optimal set-point values.
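A large part of the saving such airflow control can deliver follows from the fan affinity laws, under which fan power scales roughly with the cube of the delivered airflow. The zone counts and flow fractions below are invented for illustration and are not the thesis's model or results:

```python
def fan_power(flow_fraction, rated_power_w):
    """Fan affinity law: power scales with the cube of the flow fraction."""
    return rated_power_w * flow_fraction ** 3

# Uniform supply: four zones all run at 80% flow to satisfy the hottest zone.
uniform = 4 * fan_power(0.80, 1000.0)

# Distributed control: the hot zone stays at 80%, three cool zones drop to 50%.
distributed = fan_power(0.80, 1000.0) + 3 * fan_power(0.50, 1000.0)

savings = 1.0 - distributed / uniform   # fraction of fan power saved
```

Because of the cube law, even a modest flow reduction in the cool zones cuts their fan power dramatically, which is why per-zone control beats a uniform supply sized for the worst zone.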
APA, Harvard, Vancouver, ISO, and other styles
4

Jawad, Muhammad. "Energy Efficient Data Centers for On-Demand Cloud Services." Diss., North Dakota State University, 2015. http://hdl.handle.net/10365/25198.

Full text
Abstract:
The primary objective of data centers (DCs) is to provide timely services to cloud customers. For timely service, DCs require an uninterruptible power supply at low cost. The DCs' power supply is directly linked to the stability and steady-state performance of the power system under faults and disturbances. Smart Grids (SGs), also known as next-generation power systems, utilize communication and information technology to optimize power generation, distribution, and consumption; it is therefore beneficial to run DCs in an SG environment. We present a thorough study of wide-area smart grid architecture, design, networking, and control, with the goal of becoming familiar with smart grid operation, monitoring, and control, and we analyze control mechanisms proposed in the past to study the behavior of the wide-area smart grid under symmetric and asymmetric grid-fault conditions. The study of the SG architecture was a first step toward designing power-management and energy-cost-reduction models for DCs running under SGs. First, we present a Power Management Model (PMM) for DCs to estimate energy consumption cost. The PMM is a comprehensive model that takes many important quantities into account, such as DC power consumption, battery-bank charging/discharging, backup generation during power outages, and power transactions between the main grid and the SG. Second, renewable energy, such as wind energy, is integrated with the SG to minimize DC energy consumption cost. Third, forecasting algorithms are introduced into the PMM to predict DC power consumption, wind energy generation, and main-grid power availability for the SG; the algorithms are employed for day-ahead and week-ahead prediction horizons and serve to manage power generation and consumption and to reduce energy prices. Fourth, we formulate a chargeback model for DC customers to calculate the cost of on-demand cloud services: the DC energy consumption cost estimated through the PMM is combined with the other operational and capital expenditures to calculate a per-server utilization cost. Finally, the effectiveness of the proposed models is evaluated on real-world data sets.
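As a rough illustration of the kind of accounting a power-management model performs, the toy settlement below offsets the DC load with wind generation first and prices the remainder at the grid tariff or, during outage hours, at a higher backup-generation cost. The structure, prices, and numbers are all assumptions, not the dissertation's PMM:

```python
def daily_energy_cost(dc_load_kw, wind_kw, grid_price, backup_price=0.35):
    """Toy hourly settlement: wind offsets the DC load first; the residual is
    bought from the grid when available (hourly price given) or produced by
    backup generators during outage hours (price is None)."""
    cost = 0.0
    for load, wind, price in zip(dc_load_kw, wind_kw, grid_price):
        residual = max(load - wind, 0.0)   # kWh still to be covered this hour
        cost += residual * (backup_price if price is None else price)
    return cost

# Three example hours: the second hour is a grid outage.
cost = daily_energy_cost([100.0, 120.0, 110.0],
                         [30.0, 0.0, 50.0],
                         [0.10, None, 0.12])
```

The forecasting step in the abstract feeds exactly this kind of computation: predicted load, wind, and grid availability decide how much energy must be bought and at what price.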
APA, Harvard, Vancouver, ISO, and other styles
5

Somani, Ankit. "Advanced thermal management strategies for energy-efficient data centers." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/26527.

Full text
Abstract:
Thesis (M.S.), Mechanical Engineering, Georgia Institute of Technology, 2009. Committee Chair: Joshi, Yogendra; Committee Member: Ghiaasiaan, Mostafa; Committee Member: Schwan, Karsten. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
6

Bergqvist, Sofia. "Energy Efficiency Improvements Using DC in Data Centres." Thesis, Uppsala universitet, Institutionen för fysik och astronomi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-155096.

Full text
Abstract:
The installed power usage in a data centre will often amount to several megawatts (MW). The total power consumption of the data centres in the world is comparable to that of air traffic. The high energy costs and carbon dioxide emissions resulting from the operation of a data centre call for alternative, more efficient solutions for the power supply design. One proposed solution to decrease the energy usage is to use a direct current power supply (DC UPS) for all the servers in the data centre and thereby reduce the number of conversions between AC and DC. The aim of this thesis was to determine whether such a DC solution brings reduced power consumption compared to a traditional setup and, if so, how big the savings are. Measurements were carried out on different equipment and the power consumption was then calculated. The conclusion was that up to 25% of electricity use can be saved when using a DC power supply compared to the traditional design. Other benefits that come with the DC technology are simplified design, improved reliability, and lowered investment costs. Moreover, the use of DC in data centres enables a more efficient integration of renewable energy technologies into the power supply design.
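The saving mechanism here is multiplicative: every conversion stage's efficiency multiplies into the chain, so removing a stage raises end-to-end efficiency. The stage efficiencies below are assumed round numbers chosen for illustration, not the measurements from this thesis:

```python
def chain_efficiency(*stages):
    """Overall efficiency of cascaded power-conversion stages."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Traditional AC path: double-conversion UPS (AC->DC->AC), PDU, server PSU (AC->DC).
ac_path = chain_efficiency(0.94, 0.95, 0.90)

# DC UPS path: one rectification stage, then a DC/DC stage in the server.
dc_path = chain_efficiency(0.96, 0.92)

# Fraction of input energy saved for the same IT load (input ~ 1/efficiency).
relative_saving = 1.0 - ac_path / dc_path
```

With these assumed figures the DC path saves roughly 9% of input energy; the thesis's measured savings of up to 25% additionally reflect partial-load behaviour and equipment differences that this two-line model ignores.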
APA, Harvard, Vancouver, ISO, and other styles
7

Feller, Eugen. "Autonomic and Energy-Efficient Management of Large-Scale Virtualized Data Centers." Phd thesis, Université Rennes 1, 2012. http://tel.archives-ouvertes.fr/tel-00785090.

Full text
Abstract:
Large-scale virtualized data centers require cloud providers to implement scalable, autonomic, and energy-efficient cloud management systems. To address these challenges, this thesis provides four main contributions. The first proposes Snooze, a novel Infrastructure-as-a-Service (IaaS) cloud management system designed to scale across many thousands of servers and virtual machines (VMs) while being easy to configure, highly available, and energy efficient. For scalability, Snooze performs distributed VM management based on a hierarchical architecture. To support ease of configuration and high availability, Snooze implements self-configuring and self-healing features. Finally, for energy efficiency, Snooze integrates a holistic energy management approach via VM resource (i.e., CPU, memory, network) utilization monitoring, underload/overload detection and mitigation, VM consolidation (implementing a modified version of the Sercon algorithm), and power management to transition idle servers into a power-saving mode. A highly modular Snooze prototype was developed and extensively evaluated on the Grid'5000 testbed using realistic applications. Results show that: (i) distributed VM management does not impact submission time; (ii) fault tolerance mechanisms do not impact application performance; and (iii) the system scales well with an increasing number of resources, making it suitable for managing large-scale data centers. We also show that the system is able to dynamically scale the data center's energy consumption with its utilization, allowing it to conserve substantial amounts of power with only limited impact on application performance. Snooze is open-source software under the GPLv2 license. The second contribution is a novel VM placement algorithm based on the Ant Colony Optimization (ACO) meta-heuristic. ACO is interesting for VM placement due to its polynomial worst-case time complexity, close-to-optimal solutions, and ease of parallelization.
Simulation results show that while the scalability of the current algorithm implementation is limited to a smaller number of servers and VMs, the algorithm outperforms the evaluated First-Fit Decreasing greedy approach in terms of the number of required servers and computes close to optimal solutions. In order to enable scalable VM consolidation, this thesis makes two further contributions: (i) an ACO-based consolidation algorithm; (ii) a fully decentralized consolidation system based on an unstructured peer-to-peer network. The key idea is to apply consolidation only in small, randomly formed neighbourhoods of servers. We evaluated our approach by emulation on the Grid'5000 testbed using two state-of-the-art consolidation algorithms (i.e. Sercon and V-MAN) and our ACO-based consolidation algorithm. Results show our system to be scalable as well as to achieve a data center utilization close to the one obtained by executing a centralized consolidation algorithm.
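For reference, the First-Fit Decreasing baseline mentioned above is a few lines of greedy bin packing; this sketch uses a single CPU dimension with made-up integer capacities, whereas real VM placement is multi-dimensional:

```python
def first_fit_decreasing(demands, capacity=10):
    """Sort items largest-first, place each on the first host with room,
    opening a new host when none fits. Returns the per-host loads."""
    hosts = []
    for d in sorted(demands, reverse=True):
        for i, load in enumerate(hosts):
            if load + d <= capacity:
                hosts[i] += d
                break
        else:
            hosts.append(d)   # no existing host fits: open a new one
    return hosts

# Six VMs (CPU units) that pack perfectly into two hosts of capacity 10.
loads = first_fit_decreasing([5, 5, 4, 3, 2, 1])
```

FFD is fast and simple but purely greedy, which is why meta-heuristics such as ACO can find packings using fewer servers on harder instances, at a higher computational cost.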
APA, Harvard, Vancouver, ISO, and other styles
8

Takouna, Ibrahim. "Energy-efficient and performance-aware virtual machine management for cloud data centers." Phd thesis, Universität Potsdam, 2014. http://opus.kobv.de/ubp/texte_eingeschraenkt_verlag/2014/7239/.

Full text
Abstract:
Virtualized cloud data centers provide on-demand resources, enable agile resource provisioning, and host heterogeneous applications with different resource requirements. These data centers consume enormous amounts of energy, increasing operational expenses, inducing high thermal loads inside the data centers, and raising carbon dioxide emissions. The increase in energy consumption can result from ineffective resource management that causes inefficient resource utilization. This dissertation presents detailed models and novel techniques and algorithms for virtual resource management in cloud data centers. The proposed techniques take into account Service Level Agreements (SLAs) and workload heterogeneity in terms of memory access demand and communication patterns of web applications and High Performance Computing (HPC) applications. To evaluate the proposed techniques, we use simulations and real workload traces of web and HPC applications, and we compare our techniques against other recently proposed techniques using several performance metrics. The major contributions of this dissertation are the following. (1) A proactive resource provisioning technique based on robust optimization, which increases the hosts' availability for hosting new VMs while minimizing idle energy consumption. This technique also mitigates undesirable changes in the power state of the hosts, enhancing host reliability by avoiding failures during power state changes; it exploits a range-based prediction algorithm to implement robust optimization, taking the uncertainty of demand into consideration. (2) An adaptive range-based prediction method for workloads with high short-term fluctuations. The range prediction is implemented in two variants, standard deviation and median absolute deviation, and the range is adjusted based on an adaptive confidence window to cope with workload fluctuations. (3) A robust VM consolidation technique for efficient energy and performance management, achieving an equilibrium in the energy-performance trade-off. Our technique reduces the number of VM migrations compared to recently proposed techniques, which also reduces the energy consumed by the network infrastructure, and it lowers SLA violations and the number of power state changes. (4) A generic model of the data center network to simulate communication delay and its impact on VM performance, as well as network energy consumption; in addition, a generic model of a server's memory bus, including latency and energy consumption models for different memory frequencies, which allows simulating memory delay and its influence on VM performance and memory energy consumption. (5) A communication-aware and energy-efficient consolidation technique for parallel applications, enabling dynamic discovery of communication patterns and rescheduling of VMs via migration based on the discovered patterns. A novel dynamic pattern discovery technique is implemented, based on signal processing of the VMs' network utilization instead of using information from the hosts' virtual switches or initiation from the VMs. The results show that our approach reduces the network's average utilization, saves energy by reducing the number of active switches, and provides better VM performance compared with CPU-based placement. (6) A memory-aware VM consolidation technique for independent VMs, which exploits the diversity of the VMs' memory access to balance the memory-bus utilization of hosts. The proposed technique, Memory-bus Load Balancing (MLB), reactively redistributes VMs according to their memory-bus utilization, using VM migration to improve the performance of the overall system. Furthermore, Dynamic Voltage and Frequency Scaling (DVFS) of the memory is combined with MLB to achieve better energy savings.
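The two range-prediction variants mentioned above (standard deviation vs. median absolute deviation) can be sketched as interval estimators around a point forecast. This is an illustrative reconstruction under my own simplifications, not the dissertation's adaptive algorithm; in particular, the confidence window here is a fixed factor k rather than adaptive:

```python
import statistics

def demand_range(history, k=2.0, robust=False):
    """Predict the next interval's demand as center +/- k * spread.
    robust=True uses the median absolute deviation (MAD), which is far less
    sensitive to short-lived spikes than the standard deviation."""
    if robust:
        center = statistics.median(history)
        spread = statistics.median(abs(x - center) for x in history)
    else:
        center = statistics.fmean(history)
        spread = statistics.pstdev(history)
    return center - k * spread, center + k * spread

# A workload trace with one spike: MAD keeps the predicted range tight.
trace = [30, 32, 31, 29, 300]
mad_lo, mad_hi = demand_range(trace, robust=True)
std_lo, std_hi = demand_range(trace)
```

On this trace the MAD-based range stays near the typical demand, while the standard-deviation range is blown wide open by the single spike, which is exactly the robustness property the abstract refers to.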
APA, Harvard, Vancouver, ISO, and other styles
9

Samadiani, Emad. "Energy efficient thermal management of data centers via open multi-scale design." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/37218.

Full text
Abstract:
Data centers are computing infrastructure facilities that house arrays of electronic racks containing high-power-dissipation data processing and storage equipment whose temperature must be maintained within allowable limits. In this research, the sustainable and reliable operation of the electronic equipment in data centers is shown to be possible through the Open Engineering Systems paradigm. A design approach is developed to bring adaptability and robustness, two main features of open systems, to multi-scale convective systems such as data centers. The presented approach is centered on the integration of three constructs: (a) Proper Orthogonal Decomposition (POD) based multi-scale modeling, (b) the compromise Decision Support Problem (cDSP), and (c) robust design, which respectively address the challenges of thermal-fluid modeling, multiple objectives, and inherent variability management. Two new POD-based reduced-order thermal modeling methods are presented to simulate the multi-parameter-dependent temperature field in multi-scale thermal/fluid systems such as data centers. The methods are verified to achieve an adaptable, robust, and energy-efficient thermal design of an air-cooled data center cell under an annual increase in power consumption over the next ten years. Also, a simpler reduced-order modeling approach centered on the POD technique with modal-coefficient interpolation is validated against experimental measurements in an operational data center facility.
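The core of POD-based reduced-order modeling is a singular value decomposition of a snapshot matrix: a handful of dominant modes can reproduce the full field. The synthetic one-mode "temperature field" below only illustrates the mechanics and is not the thesis's data-center model:

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """snapshots: (n_points, n_snapshots) matrix of observed fields.
    Returns the snapshot mean, the first n_modes POD basis vectors,
    and the modal coefficients of each snapshot."""
    mean = snapshots.mean(axis=1, keepdims=True)
    fluctuations = snapshots - mean
    # Thin SVD: columns of U are the POD modes, ordered by energy content.
    U, s, Vt = np.linalg.svd(fluctuations, full_matrices=False)
    modes = U[:, :n_modes]
    coeffs = modes.T @ fluctuations   # project snapshots onto the basis
    return mean, modes, coeffs

# Synthetic field: one dominant spatial mode, so rank-1 fluctuations.
x = np.linspace(0.0, 1.0, 50)
snaps = np.column_stack([20.0 + a * np.sin(np.pi * x) for a in (1.0, 2.0, 3.0)])
mean, modes, coeffs = pod_modes(snaps, n_modes=1)
recon = mean + modes @ coeffs     # reduced-order reconstruction
```

Because the synthetic fluctuations have rank one, a single mode reconstructs the snapshots exactly; for real temperature fields one keeps the few modes carrying most of the singular-value energy.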
APA, Harvard, Vancouver, ISO, and other styles
10

Ruiu, Pietro. "Energy Management in Large Data Center Networks." Doctoral thesis, Politecnico di Torino, 2018. http://hdl.handle.net/11583/2706336.

Full text
Abstract:
In the era of digitalization, one of the most challenging research topics concerns reducing the energy consumption of ICT equipment to counter global climate change. The ICT world is very sensitive to the problem of greenhouse gas (GHG) emissions and has for several years been implementing countermeasures to reduce wasted consumption and increase infrastructure efficiency: the total embodied emissions of end-use devices have significantly decreased, networks have become more energy efficient, and trends such as virtualization and dematerialization will continue to make equipment more efficient. One of the main contributors to GHG emissions is the data center industry, which provisions end users with the computing and communication resources needed to access the vast majority of services online and on a pay-as-you-go basis. Data centers require a tremendous amount of energy to operate; since the efficiency of cooling systems keeps improving, more research effort should be put into greening the IT system, which is becoming the major contributor to energy consumption. The network being a non-negligible contributor to energy consumption in data centers, several architectures have been designed with the goal of improving data center energy efficiency. These architectures, called Data Center Networks (DCNs), provide interconnections among the computing servers and between the servers and the Internet, according to specific layouts. In my PhD I have extensively investigated the energy efficiency of data centers, working on different projects that tackle the problem from different angles. The research can be divided into two main parts, with energy proportionality as the connecting theme. The main focus of the work is the trade-off between size and energy efficiency of data centers, with the aim of finding a relationship between scalability and energy proportionality.
In this regard, the energy consumption of different data center architectures has been analyzed, varying the dimension in terms of the number of servers and switches. Extensive simulation experiments, performed in small- and large-scale scenarios, unveil the ability of network-aware allocation policies to load the data center in an energy-proportional manner, and the robustness of classical two- and three-tier designs under network-oblivious allocation strategies. The concept of energy proportionality, applied to the whole DCN and used as an efficiency metric, is one of the main contributions of the work. Energy proportionality is a property defining the degree of proportionality between load and the energy spent to support that load: devices are energy proportional when any increase in load corresponds to a proportional increase in energy consumption. A peculiar feature of our analysis is the consideration of the whole data center, i.e., both computing and communication devices are taken into account. Our methodology consists of an asymptotic analysis of data center consumption as its size (in terms of servers) becomes very large. In our analysis, we investigate the impact of three different allocation policies on the energy proportionality of computing and networking equipment for different DCNs, including 2-tier, 3-tier, and Jupiter topologies. For the evaluation, the size of the DCNs varies to accommodate up to several thousands of computing servers, and the analysis is validated through simulations. We propose new metrics with the objective of characterizing energy proportionality in data centers in a holistic manner. The experiments unveil that, when consolidation policies are in place and regardless of the type of architecture, the size of the DCN plays a key role: larger DCNs containing thousands of servers are more energy proportional than small DCNs.
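One common way to quantify energy proportionality (used here purely for illustration; the thesis proposes its own holistic metrics) is the deviation of the measured load-power curve from the ideal proportional line P(u) = u * P_peak:

```python
def proportionality_gap(loads, powers):
    """Mean normalized deviation of measured power from the ideal proportional
    curve P_ideal(u) = u * P_peak (zero power at zero load).
    0 means perfectly proportional; larger values mean worse."""
    p_peak = max(powers)
    gaps = [abs(p - u * p_peak) / p_peak for u, p in zip(loads, powers)]
    return sum(gaps) / len(gaps)

# A server drawing 60% of peak power while idle is far from proportional.
typical = proportionality_gap([0.0, 0.5, 1.0], [60.0, 80.0, 100.0])
ideal = proportionality_gap([0.0, 0.5, 1.0], [0.0, 50.0, 100.0])
```

Applying such a gap measure to the aggregate load-power curve of an entire DCN, rather than to a single device, is in the spirit of the whole-data-center view the abstract describes.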
APA, Harvard, Vancouver, ISO, and other styles
11

Feller, Eugen. "Automatic and energy-efficient management of large scale virtualized data centers." Rennes 1, 2012. http://www.theses.fr/2012REN1S136.

Full text
Abstract:
Cette thèse propose Snooze, un système autonome et économique en énergie pour des clouds "Infrastructure-as-a-Service" (IaaS). Pour le passage à l’échelle, la facilité d’administration et la haute disponibilité, Snooze repose sur une architecture hiérarchique auto-configurable et auto-réparante. Pour la gestion de l’énergie, Snooze intègre la surveillance des ressources utilisées par les machines virtuelles (VM), la résolution des situations de sous-charge et de surcharge des serveurs, la gestion de leur alimentation électrique et le regroupement de VMs. Un prototype robuste du système Snooze a été développé et évalué avec des applications réalistes sur la plate-forme Grid’5000. Pour favoriser les périodes d’inactivité des serveurs dans un cloud IaaS, il faut placer les VMs judicieusement et les regrouper. Cette thèse propose un algorithme de placement de VMs fondé sur la méta-heuristique d’optimisation par colonies de fourmis (ACO). Des simulations ont montré que cet algorithme calcule des solutions proches de l’optimal, meilleures que celles de l’algorithme "First-Fit-Decreasing" au prix d’un moins bon passage à l’échelle. Pour le passage à l’échelle du regroupement de VMs, cette thèse apporte deux autres contributions : un algorithme de regroupement de VMs fondé sur l'ACO et un système de regroupement de VMs complètement décentralisé fondé sur un réseau pair-à-pair non structuré de serveurs. Les résultats d’émulation ont montré que notre système passe à l’échelle et qu’il permet d’atteindre un taux d’utilisation du centre de données proche de celui obtenu avec un système centralisé<br>Large-scale virtualized data centers now require cloud providers to implement scalable, autonomic, and energy-efficient cloud management systems. To address these challenges this thesis proposes Snooze, a novel highly available, easy to configure, and energy-efficient Infrastructure-as-a-Service (IaaS) cloud management system. 
For scalability and high availability, Snooze integrates a self-configuring and self-healing hierarchical architecture. To achieve energy efficiency, Snooze takes a holistic energy management approach comprising virtual machine (VM) resource utilization monitoring, server underload/overload mitigation, VM consolidation, and power management. A robust Snooze prototype was developed and extensively evaluated on the Grid'5000 testbed using realistic applications. The experiments proved Snooze to be scalable, highly available and energy-efficient. One way to favor server idle times in IaaS clouds is to perform energy-efficient VM placement and consolidation. This thesis proposes a novel VM placement algorithm based on the Ant Colony Optimization (ACO) meta-heuristic. Simulation results have shown that the proposed algorithm computes close-to-optimal solutions and outperforms the evaluated First-Fit Decreasing algorithm at the cost of decreased scalability. To enable scalable VM consolidation, this thesis makes two further contributions: (i) an ACO-based VM consolidation algorithm; (ii) a fully decentralized VM consolidation system based on an unstructured peer-to-peer network of servers. Emulation conducted on the Grid'5000 testbed has proven our system to be scalable and to achieve data center utilization close to that of a centralized system.
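For context on the First-Fit Decreasing baseline that the ACO placement algorithm is compared against, here is a minimal sketch of FFD applied to VM-to-server packing (the server capacity and VM demands are illustrative, not taken from the thesis):

```python
def first_fit_decreasing(vm_demands, capacity):
    """Pack VM CPU demands onto equal-capacity servers: sort demands in
    descending order, place each on the first server with room, and open
    a new server when none fits."""
    servers = []  # each entry: [remaining_capacity, [placed demands]]
    for demand in sorted(vm_demands, reverse=True):
        for server in servers:
            if server[0] >= demand:
                server[0] -= demand
                server[1].append(demand)
                break
        else:
            servers.append([capacity - demand, [demand]])
    return [placed for _, placed in servers]

# Six VMs packed onto servers with 100 units of CPU each:
layout = first_fit_decreasing([60, 40, 35, 30, 20, 15], capacity=100)
print(layout)  # [[60, 40], [35, 30, 20, 15]]
```

FFD is fast and simple, which is why it serves as the standard baseline; the thesis's point is that ACO finds tighter packings at a higher computational cost.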
APA, Harvard, Vancouver, ISO, and other styles
12

Penumetsa, Swetha. "A comparison of energy efficient adaptation algorithms in cloud data centers." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17374.

Full text
Abstract:
Context: In recent years, Cloud computing has gained wide attention in both industry and academia: Cloud services offer a pay-per-use model, and demand for reliability and computing capacity has grown immensely along with the number and scale of Cloud-based companies. However, the rise in Cloud computing users can have a negative impact on energy consumption, as Cloud data centers consume a huge share of overall energy. To minimize the energy consumption of virtualized data centers, researchers have proposed various energy-efficient resource management strategies. Dynamic Virtual Machine consolidation is one of the prominent techniques and an active research area in recent years, used to improve resource utilization and minimize the electric power consumption of a data center. The technique monitors data center utilization, identifies overloaded and underloaded hosts, migrates some or all of their Virtual Machines (VMs) to other suitable hosts using VM selection and VM placement policies, and switches underloaded hosts to sleep mode.   Objectives: The objective of this study is to define and implement new energy-aware heuristic algorithms that save energy in Cloud data centers, to identify the best-performing algorithms, and to compare the proposed heuristics against existing ones.   Methods: Initially, a literature review was conducted to identify the adaptive heuristic algorithms previously proposed for energy-aware VM consolidation and the metrics used to measure their performance.
Based on this knowledge, we propose 32 combinations of novel adaptive heuristics built from host overload detection (8) and VM selection (4) algorithms, together with one host underload detection algorithm and two adaptive heuristic VM placement algorithms, which help minimize both the energy consumption and the overall Service Level Agreement (SLA) violation of a Cloud data center. An experiment was then conducted to measure the performance of all proposed heuristic algorithms. We used the CloudSim simulation toolkit for modeling, simulation, and implementation of the proposed heuristics, and evaluated them using real workload traces of PlanetLab VMs.   Results: The results were measured using the following metrics: energy consumption of the data center (power model), Performance Degradation due to Migration (PDM), SLA violation Time per Active Host (SLATAH), SLA Violation (SLAV = PDM × SLATAH), and the combined Energy consumption and SLA Violation metric (ESV). For each of the four categories of VM consolidation, we compared the performance of the proposed heuristics with each other and identified the best algorithm in each category. We also compared the proposed heuristics with existing ones identified in the literature and report how many of the new algorithms work more efficiently than the existing ones. This comparative analysis was done using the T-test and Cohen's d effect size.
From the comparison of all proposed algorithms, we conclude that the Mean Absolute Deviation around the median (MADmedian) host overload detection algorithm combined with Maximum Requested RAM VM selection (MaxR) and Modified First Fit Decreasing VM placement (MFFD), and the Standard Deviation (STD) host overload detection algorithm combined with MaxR VM selection and Modified Last Fit Decreasing VM placement (MLFD), performed better than the other 31 combinations of proposed overload detection and VM selection heuristics with regard to ESV. Furthermore, from the comparative study between existing and proposed algorithms, 23 and 21 combinations of proposed host overload detection and VM selection algorithms, using MFFD and MLFD placement respectively, performed more efficiently than the existing (baseline) heuristics considered in this study.   Conclusions: This thesis presents novel heuristic algorithms for minimizing both energy consumption and SLA violation in virtualized data centers: 23 combinations of proposed host overload detection and VM selection algorithms using MFFD placement and 21 combinations using MLFD placement consume less energy with less SLA violation than the existing algorithms. The work gives scope for future research on improving resource utilization and minimizing data center power consumption, and can be extended by implementing it on other Cloud software platforms and by developing more efficient algorithms for all four categories of VM consolidation.
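The SLA metrics named in this abstract follow the standard definitions used in the CloudSim energy-aware consolidation literature; a minimal sketch of how they combine (the sample numbers are invented for illustration):

```python
def slatah(per_host_times):
    """SLA violation Time per Active Host: per host, the fraction of its
    active time spent at 100% CPU utilization, averaged over all hosts.
    per_host_times: list of (time_at_full_utilization, total_active_time)."""
    return sum(ts / ta for ts, ta in per_host_times) / len(per_host_times)

def pdm(per_vm_cpu):
    """Performance Degradation due to Migration: per VM, the CPU capacity
    lost to migrations as a fraction of its total demand, averaged over VMs.
    per_vm_cpu: list of (degradation, total_cpu_demand)."""
    return sum(cd / cr for cd, cr in per_vm_cpu) / len(per_vm_cpu)

# SLAV = PDM * SLATAH, and ESV = Energy * SLAV (lower is better for both).
s = slatah([(10.0, 100.0), (30.0, 100.0)])  # ~0.2
p = pdm([(1.0, 100.0), (3.0, 100.0)])       # ~0.02
slav = p * s
esv = 150.0 * slav  # assuming 150 kWh consumed over the run
```

Because ESV multiplies energy by SLA violation, an algorithm can only score well by improving both objectives at once, which is why the thesis uses it as the deciding metric.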
APA, Harvard, Vancouver, ISO, and other styles
13

Alharbi, Fares Abdi H. "Profile-based virtual machine management for more energy-efficient data centers." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/129871/8/Fares%20Abdi%20H%20Alharbi%20Thesis.pdf.

Full text
Abstract:
This research develops a resource management framework for improved energy efficiency in cloud data centers through energy-efficient virtual machine placement to physical machines as well as application assignment to virtual machines. The study investigates static virtual machine placement, dynamic virtual machine placement and application assignment using ant colony optimization to minimize the total energy consumption in data centers.
APA, Harvard, Vancouver, ISO, and other styles
14

Cheocherngngarn, Tosmate. "Cross-Layer Design for Energy Efficiency on Data Center Network." FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/730.

Full text
Abstract:
Energy-efficient infrastructure, or green IT (Information Technology), has recently become a hot-button issue for most corporations as they strive to eliminate every inefficiency from their enterprise IT systems and save capital and operational costs. Vendors of IT equipment now compete on the power efficiency of their devices, and as a result many new equipment models are indeed more energy efficient. Various studies have estimated the annual electricity consumed by networking devices in the U.S. in the range of 6-20 terawatt-hours, and an energy-efficient data center network architecture that lowers this energy consumption is highly desirable. First, we propose a fair bandwidth allocation algorithm that adopts the max-min fairness principle to decrease power consumption on packet switch fabric interconnects; high power dissipation in switches is fast turning into a key problem owing to increasing line speeds and decreasing chip sizes. This algorithm not only reduces the number of convergence iterations but also lowers processing power utilization on switch fabric interconnects. Second, we study the deployment strategy of multicast switches in hybrid mode in an energy-aware data center network, using the well-known fat-tree topology as a case study. The objective is to find the best location to deploy a multicast switch so as to achieve optimal bandwidth utilization while minimizing power consumption. We show that nearly 50% energy savings can easily be achieved after applying our proposed algorithm. Finally, although a number of energy optimization solutions exist for DCNs, they consider either the hosts or the network, but not both.
We propose a joint optimization scheme that simultaneously optimizes virtual machine (VM) placement and network flow routing to maximize energy savings. The simulation results demonstrate that our design outperforms existing host-only or network-only optimization solutions and closely approximates the ideal but NP-complete linear program. In sum, this study can guide future eco-friendly data center networks that deploy our algorithms on four major layers (with reference to the OSI model): the physical, data link, network and application layers.
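The max-min fairness principle mentioned in this abstract is classically realized by progressive filling; a minimal sketch of that idea (illustrative only, not the thesis's switch-fabric algorithm):

```python
def max_min_fair(demands, capacity):
    """Progressive filling: repeatedly split the leftover capacity equally
    among unsatisfied flows, capping each flow at its own demand, until
    capacity is exhausted or every flow is satisfied."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = float(capacity)
    while active and remaining > 1e-9:
        share = remaining / len(active)
        for i in list(active):
            grant = min(share, demands[i] - alloc[i])
            alloc[i] += grant
            remaining -= grant
            if demands[i] - alloc[i] <= 1e-9:
                active.remove(i)  # flow fully satisfied; drop from next round
    return alloc

# Three flows share a 12 Gb/s link; demands of 2, 8 and 10 Gb/s.
print(max_min_fair([2, 8, 10], 12))  # [2.0, 5.0, 5.0]
```

The small flow gets its full 2 Gb/s and the leftover is split evenly between the two larger flows, which is exactly the max-min property: no flow can gain without taking from a flow that already has less.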
APA, Harvard, Vancouver, ISO, and other styles
15

Shao, Zhuo. "Employing UPS for Safe and Efficient Power Oversubscription in Data Centers." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1366050387.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Soares, Maria José. "Data center - a importância de uma arquitectura." Master's thesis, Universidade de Évora, 2011. http://hdl.handle.net/10174/11604.

Full text
Abstract:
This work presents a study, in the form of an overview, addressing the importance of architecture in Data Centers. The main critical factors to consider in an architecture were identified, as well as the best practices to implement, in order to assess the value of certification by the certifying entity, the Uptime Institute. It also discusses the possible interest in extending this certification/qualification to human resources, as a guarantee of service quality and as a marketing strategy. To support this work, a case study was conducted covering a representative universe of seven Data Centers in Portugal, belonging to the public and private sectors, allowing the verification and comparison of good practices as well as the less positive aspects to consider within this area. The document closes with some reflections on what may be the trend in the evolution of Data Centers as far as quality is concerned.
APA, Harvard, Vancouver, ISO, and other styles
17

Douchet, Fabien. "Optimisation énergétique de data centers par utilisation de liquides pour le refroidissement des baies informatiques." Thesis, Lorient, 2015. http://www.theses.fr/2015LORIS386/document.

Full text
Abstract:
Data centers are facilities that house large numbers of computer equipment. More than 99% of the electrical power consumed by the electronic components is converted into heat, so to ensure correct operation it is necessary to keep the components under their recommended temperatures.
This is mainly achieved by air conditioning systems that consume a great deal of electrical power. In addition, the power density of computer racks is constantly increasing, so the limits of air as a coolant for electronic equipment are being reached. The studies conducted during this thesis concern the improvement of the energy efficiency of cooling systems for electronic racks by using liquids as heat transfer fluids. This approach gives higher heat exchange coefficients and larger cooling capacity, with more viable prospects for recovering heat from data centers. Four cooling solutions are evaluated. Experiments are conducted on several servers and on a computer rack. Extensive instrumentation highlights the cooling efficiency of the components and allows energy efficiency indicators of the studied systems to be identified. From the experimental results, two numerical models are developed using a nodal approach, with parameter identification carried out by an inverse method. These models can be duplicated at the scale of a data center room in order to quantify the potential gains of two liquid cooling solutions.
APA, Harvard, Vancouver, ISO, and other styles
18

Tesfatsion, Kostentinos Selome. "A Combined Frequency Scaling and Application Elasticity Approach for Energy-Efficient Virtualized Data Centers." Thesis, Umeå universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-85211.

Full text
Abstract:
At present, large-scale data centers are typically over-provisioned in order to handle peak load requirements. The resulting low utilization of resources contributes to huge power consumption in data centers. High power consumption manifests as high operational costs for data centers and a larger carbon footprint for the environment. Therefore, management solutions for large-scale data centers must be designed to take power consumption into account effectively. In this work, we combine three management techniques that can be used to control systems in an energy-efficient manner: changing the number of virtual machines, changing the number of cores, and scaling the CPU frequencies. The proposed system consists of a controller that combines feedback and feedforward information to determine a configuration that minimizes power consumption while meeting the performance target. The controller can also be configured to accomplish power minimization in a stable manner, without causing large oscillations in the resource allocations. Our experimental evaluation, based on the Sysbench benchmark combined with workload traces from production systems, shows that our approach achieves the lowest energy consumption among the three compared approaches while meeting the performance target.
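The controller described above selects a (VM count, core count, frequency) configuration that minimizes power subject to a performance target. A brute-force sketch of that selection step, with invented power and throughput models standing in for the thesis's learned ones:

```python
from itertools import product

def choose_config(target_rps, vm_counts, core_counts, freqs_ghz):
    """Pick the (VMs, cores, GHz) combination with the lowest modeled power
    that still meets the throughput target; None if nothing qualifies.
    The throughput and power models below are invented placeholders."""
    def throughput(v, c, f):        # requests/s the configuration can serve
        return 100.0 * v * c * f
    def power(v, c, f):             # watts; dynamic power grows roughly as f^3
        return v * (20.0 + 5.0 * c * f ** 3)
    feasible = [(power(v, c, f), (v, c, f))
                for v, c, f in product(vm_counts, core_counts, freqs_ghz)
                if throughput(v, c, f) >= target_rps]
    return min(feasible)[1] if feasible else None

best = choose_config(400.0, vm_counts=[1, 2], core_counts=[1, 2],
                     freqs_ghz=[1.0, 2.0])
print(best)  # (2, 2, 1.0): scaling out at low frequency wins here
```

Because dynamic power grows superlinearly with frequency, the cheapest feasible configuration is often "more resources at lower frequency", which is the intuition behind combining elasticity with frequency scaling.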
APA, Harvard, Vancouver, ISO, and other styles
19

Ghuman, Karanjot Singh. "Improving Energy Efficiency and Bandwidth Utilization in Data Center Networks Using Segment Routing." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/35846.

Full text
Abstract:
Energy efficiency has become one of the most crucial issues for today's Data Center Networks (DCNs). This thesis analyses the energy-saving capability of a data center network using a Segment Routing (SR) based model within a Software Defined Networking (SDN) architecture. Energy efficiency is measured in terms of the number of links turned off and how long the links remain in sleep mode. Apart from saving energy by turning off links, our work further manages the traffic efficiently within the available links by using a per-packet load balancing approach, aiming to avoid congestion within DCNs and to increase the sleeping time of inactive links. An algorithm for deciding the particular set of links to be turned off within a network is presented. With the introduction of the per-packet approach within the SR/SDN model, we save 21% of the energy within the DCN topology. Results show that the proposed per-packet SR model using Random Packet Spraying (RPS) saves more energy and provides better performance than the per-flow based SR model, which uses Equal-Cost Multi-Path (ECMP) routing for load balancing. However, certain problems also arise with the per-packet approach, such as out-of-order packets and longer end-to-end delay. To further solidify the effect of SR in saving energy within the DCN while avoiding these problems, we use a per-flow bandwidth reservation approach along with a proposed flow scheduling algorithm. The rate of every incoming flow can be deduced using the bandwidth reservation approach, which the flow scheduling algorithm then uses to increase the bandwidth utilization ratio of the links, ultimately managing the traffic more efficiently, increasing the sleeping time of links and leading to more energy savings. Results show that the energy savings are almost identical in the per-packet approach and the per-flow approach with bandwidth reservation.
However, the average sleeping time of links in the per-flow approach with bandwidth reservation decreases less severely than in the per-packet approach as the overall traffic load increases.
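The per-flow versus per-packet distinction above can be sketched in a few lines: ECMP hashes the flow identifier so all packets of a flow share one path, while RPS picks a path per packet (path names and the flow tuple are illustrative):

```python
import random

def ecmp_path(flow_id, paths):
    """Per-flow ECMP: hash the flow identifier, so every packet of a flow
    takes the same path (preserves packet ordering, may load links unevenly)."""
    return paths[hash(flow_id) % len(paths)]

def rps_path(paths, rng=random):
    """Random Packet Spraying: choose a path independently per packet,
    spreading load evenly at the cost of possible packet reordering."""
    return rng.choice(paths)

paths = ["spine-1", "spine-2", "spine-3"]
flow = ("10.0.0.1", "10.0.1.9", 443)  # (src, dst, dst_port), illustrative
# ECMP is stable for a given flow within a run; RPS varies per packet.
stable = all(ecmp_path(flow, paths) == ecmp_path(flow, paths)
             for _ in range(100))
```

The trade-off the thesis measures falls directly out of these two policies: spraying fills all links evenly (better consolidation and sleep opportunities) but sacrifices the in-order delivery that per-flow hashing guarantees.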
APA, Harvard, Vancouver, ISO, and other styles
20

Wilde, Torsten [Verfasser], Arndt [Akademischer Betreuer] Bode, Arndt [Gutachter] Bode, and Dieter [Gutachter] Kranzlmüller. "Assessing the Energy Efficiency of High Performance Computing (HPC) Data Centers / Torsten Wilde ; Gutachter: Arndt Bode, Dieter Kranzlmüller ; Betreuer: Arndt Bode." München : Universitätsbibliothek der TU München, 2018. http://d-nb.info/115646191X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Stewart, Jeremy M. (Jeremy Matthew). "Developing a low-cost, systematic approach to increase an existing data center's Energy Efficiency." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/59178.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division; in conjunction with the Leaders for Global Operations Program at MIT, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 87-91). Data centers consume approximately 1.5% of total US electricity and 0.8% of total world electricity, and this percentage will increase with the integration of technology into daily lives. In typical data centers, value-added IT equipment such as memory, servers, and networking accounts for less than one half of the electricity consumed, while support equipment consumes the remainder. The purpose of this thesis is to present the results of developing, testing, and implementing a low-cost, systematic approach for increasing the energy efficiency of data centers. The pilot process was developed using industry best practices and was piloted at a Raytheon site in Garland, TX. Because the approach is low-cost, there is an emphasis on increasing the energy efficiency of data centers' heat removal and lighting equipment. The result of implementing the low-cost systematic approach, consisting of both technical and behavioral modifications, was a 23% reduction in electricity consumption, leading to annual savings of over $53,000. The improvement to the heat removal equipment's energy efficiency was 54%. In addition to presenting the results of the pilot, recommendations for replicating the pilot's success are provided. Two major managerial techniques are described: creating an aligned incentive structure in both the Facilities and IT departments during the data center design phase, and empowering employees to make improvements during the use phase. Finally, a recommended rollout plan, which includes a structure for Data Center Energy Efficiency Rapid Results Teams, is provided. by Jeremy M. Stewart. S.M. M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
22

Milad, Muftah A. "UPS system : how current and future technologies can improve energy efficiency in data centres." Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/14664.

Full text
Abstract:
A data centre can consist of a large group of networked servers and associated power distribution, networking, and cooling equipment, which together consume enormous amounts of energy, comparable to a small city; this drives significant energy inefficiency problems in the data centre and high operational costs. The massive amount of computation power contained in these systems also gives rise to many interesting distributed systems and resource management problems. In recent years, research and technology in electrical engineering and computer science have made fast progress in various fields, one of the most important being energy consumption in data centres. As reported by Cho, Lim and Kim, a large data centre may consume nearly 30,000,000 kWh of power in a year and cost its operator around £3,000,000 for electricity alone; some UK sites consume more than this. In total, UK data centres require around 2-3 TWh per year. Energy is the largest single component of operating costs for data centres, varying from 25-60%. According to much research, one of the largest losses and causes of data centre energy inefficiency in power distribution is the uninterruptible power supply (UPS). A detailed study therefore characterized the efficiencies of various types of UPSs under a variety of operating conditions, proposed an efficiency label for UPSs, and investigated challenges related to data centre efficiency and how new technologies can be used to simplify deployment, improve resource efficiency, and save cost. Data centre energy consumption is an important and increasing concern for data centre managers and operators. Inefficient UPS systems can contribute to this concern, with 15 percent or more of the utility input going to electrical waste within the UPS itself.
For that reason, maximizing energy efficiency and reducing power consumption in a data centre have become important for saving costs and reducing the carbon footprint, and are necessary to reduce operational costs. This study attempts to answer the question of how future UPS topologies and technologies can improve the efficiency and reduce the cost of data centres. In order to study the impact of different UPS technologies and their operating efficiencies, a model of a medium-size data centre was developed, and load schedules and worked diagrams were created to examine and test in detail the components of each of the UPS system topologies. The electrical infrastructure is configured in '2N' and 'N+1' redundancy configurations for each UPS technology, where 'N' stands for the number of UPS modules required to supply power to the data centre. This work was done at RED Engineering Design, a company specializing in the design and construction of new Tier III and Tier IV data centres. The aim of this work is to provide data centre managers with a clearer understanding of the key factors and considerations involved in selecting the right UPS to meet present and future requirements.
APA, Harvard, Vancouver, ISO, and other styles
23

Brady, Gemma Ann. "Energy efficiency in data centres and the barriers to further improvements : an interdisciplinary investigation." Thesis, University of Leeds, 2016. http://etheses.whiterose.ac.uk/12359/.

Full text
Abstract:
Creation, storage and sharing of data throughout the world is rapidly increasing alongside rising demands for access to the internet, communications and digital services, leading to increasing levels of energy consumption in data centres. Steps have already been taken towards lower energy consumption, however there is still some way to go. To gain a better understanding of the barriers to further energy saving, a cross-section of industry representatives were interviewed. Generally, it was found that efforts are being made to reduce energy consumption, albeit to varying degrees. Those interviewed face various problems when attempting to improve their energy consumption, including financial difficulties, lack of communication, tenant/landlord type relationships and physical restrictions. The findings show that the data centre industry would benefit from better access to information, such as which technologies or management methods to invest in and how other facilities have reduced energy use, along with a greater awareness of the problem of energy consumption. Metrics commonly used in the industry are not necessarily helping facilities to reach higher levels of energy efficiency, and are not suited to their purpose. A case study was conducted to critically assess the Power Usage Effectiveness (PUE) metric, the most commonly used metric, using open-source information. The work highlights the fact that whilst the metric is valuable to the industry in terms of creating awareness and competition between companies regarding energy use, it does not give a complete representation of energy efficiency. Crucially, the metric also does not consider the energy use of the server, which forms the functional component of the data centre.
By taking a closer look at the fans within a server and by focussing on this hidden parameter within the PUE measurement, experimental work in this thesis has also considered one technological way in which a data centre may save energy. Barriers such as those found in the interviews may however restrict such potential energy saving interventions. Overall, this thesis has provided evidence of barriers that may be preventing further energy savings in data centres and provided recommendations for improvement. The industry would benefit from a change in the way that metrics are employed to assess energy efficiency, and new tools to encourage better choices of which technologies and methodologies to employ. The PUE metric is useful to assess supporting infrastructure energy use during design and operation. However when assessing overall impacts of IT energy use, businesses need more indicators such as life cycle carbon emissions to be integrated into the overall energy assessment.
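PUE, the metric critiqued in this abstract, is defined as total facility energy divided by the energy delivered to IT equipment; a one-line sketch (the figures are invented):

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy over the energy
    delivered to IT equipment.  1.0 is ideal; everything above it is
    overhead (cooling, power distribution, lighting).  Note it says
    nothing about how efficiently the IT load itself is used."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1.8 GWh/year to deliver 1.2 GWh/year to servers:
print(pue(1_800_000, 1_200_000))  # 1.5
```

The thesis's critique is visible in the formula itself: server fans, for example, sit inside the IT-equipment denominator, so wasting energy there can actually lower the reported PUE.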
APA, Harvard, Vancouver, ISO, and other styles
24

Jin, Hao. "Host and Network Optimizations for Performance Enhancement and Energy Efficiency in Data Center Networks." FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/735.

Full text
Abstract:
Modern data centers host hundreds of thousands of servers to achieve economies of scale. Such a huge number of servers creates challenges for the data center network (DCN) to provide proportionally large bandwidth. In addition, the deployment of virtual machines (VMs) in data centers raises the requirements for efficient resource allocation and fine-grained resource sharing. Further, the large number of servers and switches in the data center consumes significant amounts of energy. Even though servers have become more energy efficient through various energy saving techniques, the DCN still accounts for 20% to 50% of the energy consumed by the entire data center. The objective of this dissertation is to enhance DCN performance as well as its energy efficiency by conducting optimizations on both the host and network sides. First, as the DCN demands huge bisection bandwidth to interconnect all the servers, we propose a parallel packet switch (PPS) architecture that directly processes variable-length packets without segmentation-and-reassembly (SAR). The proposed PPS achieves large bandwidth by combining the switching capacities of multiple fabrics, and further improves switch throughput by avoiding the padding bits of SAR. Second, since certain resource demands of a VM are bursty and stochastic in nature, to satisfy both deterministic and stochastic demands in VM placement we propose the Max-Min Multidimensional Stochastic Bin Packing (M3SBP) algorithm. M3SBP calculates an equivalent deterministic value for the stochastic demands and maximizes the minimum resource utilization ratio of each server. Third, to provide the necessary traffic isolation for VMs that share the same physical network adapter, we propose the Flow-level Bandwidth Provisioning (FBP) algorithm. By reducing the flow scheduling problem to multiple stages of packet queuing problems, FBP guarantees the provisioned bandwidth and delay performance for each flow.
Finally, while DCNs are typically provisioned with full bisection bandwidth, DCN traffic demonstrates fluctuating patterns; we therefore propose a joint host-network optimization scheme to enhance the energy efficiency of DCNs during off-peak traffic hours. The proposed scheme utilizes a unified representation method that converts the VM placement problem into a routing problem, and employs depth-first and best-fit search to find efficient paths for flows.
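The "equivalent deterministic value" that M3SBP computes for a bursty demand can be sketched as a Gaussian-style quantile, mean plus a multiple of the standard deviation; this is an illustration of the general idea only, and the exact transformation used in M3SBP may differ:

```python
def equivalent_demand(mean, std, k=2.0):
    """Collapse a stochastic resource demand into one deterministic number
    for bin packing: provision mean + k*std, so a roughly normal demand
    is covered with high probability (k=2 covers about 97.7%)."""
    return mean + k * std

# A VM whose bandwidth use averages 100 Mb/s with a std of 15 Mb/s:
print(equivalent_demand(100.0, 15.0))  # 130.0
```

Once every stochastic demand is mapped to such a number, ordinary multidimensional bin-packing machinery applies, which is what lets M3SBP handle deterministic and stochastic demands uniformly.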
APA, Harvard, Vancouver, ISO, and other styles
25

Atchukatla, Mahammad suhail. "Algorithms for efficient VM placement in data centers : Cloud Based Design and Performance Analysis." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17221.

Full text
Abstract:
Content: Recent trends show that cloud computing adoption is continuously increasing in every organization, so the demand for cloud data centers has grown tremendously over time, significantly increasing resource utilization in the data centers. In this thesis work, research was carried out on optimizing energy consumption through packing of virtual machines in the data center. The CloudSim simulator was used for evaluating bin-packing algorithms, and the OpenStack cloud computing environment was chosen as the platform for the practical implementation.   Objectives: In this research, our objectives are as follows: (1) perform simulation of the algorithms in the CloudSim simulator; (2) estimate and compare the energy consumption of different packing algorithms; (3) design an OpenStack testbed to implement the bin-packing algorithm.   Methods: We use the CloudSim simulator to estimate the energy consumption of the First Fit, First Fit Decreasing, Best Fit, and Enhanced Best Fit algorithms, and design a heuristic model for implementation in the OpenStack environment to optimize the energy consumption of the physical machines. Server consolidation and live migration are used in the algorithm design for the OpenStack implementation. Our research also extends to the Nova scheduler functionality in an OpenStack environment.   Results: In most cases the Enhanced Best Fit algorithm gives the best results. Results were obtained from the default OpenStack VM placement algorithm as well as from the heuristic algorithm developed in this work; the comparison indicates that the total energy consumption of the data center is reduced without affecting potential service level agreements.   Conclusions: The research shows that the energy consumption of the physical machines can be optimized without compromising the offered service quality. 
A Python wrapper was developed to implement this model in the OpenStack environment and minimize the energy consumption of the physical machines by shutting down unused physical machines. The results indicate that CPU utilization does not vary much when live migration of a virtual machine is performed.
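As a rough illustration of the bin-packing family evaluated in this thesis, a minimal first-fit-decreasing placement might look like the sketch below (capacities and demands are invented numbers; a real scheduler such as Nova weighs many more dimensions):

```python
def first_fit_decreasing(vm_demands, host_capacity):
    """Pack VM demands onto as few equal-capacity hosts as the
    heuristic manages: sort demands in decreasing order, then place
    each on the first host with enough remaining capacity."""
    hosts = []  # each entry: [remaining_capacity, [placed demands]]
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if host[0] >= demand:      # first host it fits on
                host[0] -= demand
                host[1].append(demand)
                break
        else:                          # no existing host fits: open one
            hosts.append([host_capacity - demand, [demand]])
    return [placed for _, placed in hosts]

# Six VMs packed onto hosts of capacity 10:
print(first_fit_decreasing([4, 8, 1, 4, 2, 1], 10))  # two hosts suffice
```

Hosts left with no VMs after such a consolidation pass are the candidates for shutdown.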
APA, Harvard, Vancouver, ISO, and other styles
26

Lionello, Michele. "Modelling and control of cooling systems for data center applications." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3424786.

Full text
Abstract:
Nowadays, the Data Center industry is playing a leading role in world economic development and is growing rapidly and constantly. Besides this, it has become increasingly concerned with energy consumption and the associated environmental effects. Since about half of the total energy consumption in a typical Data Center is devoted to cooling the IT equipment, energy efficiency must be the primary focus in the design and management of the cooling infrastructure. In this Thesis, we consider the problem of optimizing the operation of cooling systems in Data Centers. The main objective is to maximize the energy efficiency of the systems while provisioning the required cooling demand. For this purpose, we propose a two-layer hierarchical control approach, where a supervisory high-level layer determines the optimal set-points for the local low-level controllers. The supervisory layer exploits an Extremum Seeking model-free optimization algorithm, which ensures flexibility and robustness against changes in the operating conditions. In particular, a Newton-like Phasor-based Extremum Seeking scheme is presented to improve the convergence properties and the robustness of the algorithm. The proposed control architecture is tested in silico in optimizing the operation of an Indirect Evaporative Cooling system and a Liquid Immersion Cooling unit. Simulations are performed by exploiting First-Principle Data-Driven models of the considered systems, and the results demonstrate the effectiveness of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
27

VIE, Isaak. "Energy for information: the green promise of the Node Pole data centres." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-324674.

Full text
Abstract:
Data centres are key to high availability and around-the-clock access to information. As the number of data centres increases to satisfy the demand for data, so does their energy consumption. This thesis is a case study of the data centres located in the Node Pole region in the North of Sweden. It looks at both the energy supply of Norrbotten and the actual technologies used by the data centres to utilise this energy supply. Using a literature review to gather primary data, the first research question analyses the energy supply of Norrbotten, investigating its specificities through energy security theories, particularly the aspects of availability, accessibility and affordability. The second question examines the Node Pole’s implementation response to the specific energy supply of the North of Sweden, and whether this response is efficient and sustainable, using the four Rs theory and the Energy Efficiency Directive (EED). The results of the analysis show that the North of Sweden is currently in a privileged position: the energy produced in Norrbotten benefits from high availability criteria, is in oversupply, and, thanks to the prevalence of hydropower and wind power in the energy mix, is very low in GHG emissions. The Swedish grid is reliable and robust, and Norrbotten is no exception to that rule, providing the Node Pole with an accessible “plug and play” module to the electricity grid. In addition, the recent tax rebate aimed at the data centre industry means that the energy is affordable, more so in fact than in many other European countries. This assessment makes for a favourable breeding ground for data centres in the region from an energy security perspective. 
Meanwhile, the Node Pole data centres use ground-breaking cooling technologies consisting of airside cooling combined with adiabatic pads for humidity control (no separate humidification system), simple air filtration facilities (thanks to the outstanding air quality of the area), and aerodynamic architectural layouts for better airflow, reducing cooling costs by increasing the efficiency of the overall air conditioning system. This technology is paired with innovative power distribution solutions (non-standard voltage and fewer UPS batteries), thereby considerably reducing both electricity consumption and the energy wasted in voltage conversion. Combining the auspicious energy offerings of the Norrbotten region with the ingenious practical implementations of the data centres thus unleashes a new potential for more efficient and sustainable data centres.
APA, Harvard, Vancouver, ISO, and other styles
28

Wolke, Andreas [Verfasser], Martin [Akademischer Betreuer] Bichler, and Georg [Akademischer Betreuer] Carle. "Energy efficient capacity management in virtualized data centers / Andreas Wolke. Gutachter: Georg Carle ; Martin Bichler. Betreuer: Martin Bichler." München : Universitätsbibliothek der TU München, 2015. http://d-nb.info/1070372390/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Bilal, Kashif. "Analysis and Characterization of Cloud Based Data Center Architectures for Performance, Robustness, Energy Efficiency, and Thermal Uniformity." Diss., North Dakota State University, 2014. https://hdl.handle.net/10365/27323.

Full text
Abstract:
Cloud computing is anticipated to revolutionize the Information and Communication Technology (ICT) sector and has been a mainstream of research over the last decade. Today, contemporary society relies more than ever on the Internet and cloud computing. However, the advent and enormous adoption of the cloud computing paradigm in various domains of human life also bring numerous challenges to cloud providers and the research community. Data Centers (DCs) constitute the structural and operational foundations of cloud computing platforms. Legacy DC architectures are inadequate to accommodate the enormous adoption and increasing resource demands of cloud computing. Scalability, high cross-section bandwidth, Quality of Service (QoS) guarantees, privacy, and Service Level Agreement (SLA) assurance are some of the major challenges faced by today's cloud DC architectures. Similarly, reliability and robustness are among the mandatory features of the cloud paradigm to handle workload perturbations, hardware failures, and intentional attacks. Concerns about the environmental impacts, energy demands, and electricity costs of cloud DCs are intensifying; energy efficiency is one of the mandatory features of today's DCs. Considering the paramount importance of characterization and performance analysis of cloud-based DCs, we analyze the robustness and performance of state-of-the-art DC architectures and highlight the advantages and drawbacks of such architectures. Moreover, we highlight the potentials and techniques that can be used to achieve energy efficiency and propose an energy-efficient DC scheduling strategy based on a real DC workload analysis. Thermal uniformity within the DC also brings energy savings. Therefore, we propose thermal-aware scheduling policies that deliver thermal uniformity within the DC to ensure hardware reliability, the elimination of hot spots, and a reduction in the power consumed by the cooling infrastructure. 
One of the salient contributions of our work is to deliver the handy and adaptable experimentation tools and simulators for the research community. We develop two discrete event simulators for the DC research community: (a) for the detailed DC network analysis under various configurations, network loads, and traffic patterns, and (b) a cloud scheduler to analyze and compare various scheduling strategies and their thermal impact.
APA, Harvard, Vancouver, ISO, and other styles
30

Zhuang, Hao. "Performance Evaluation of Virtualization in Cloud Data Center." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-104206.

Full text
Abstract:
Amazon Elastic Compute Cloud (EC2) has been adopted by a large number of small and medium enterprises (SMEs), e.g. foursquare, Monster World, and Netflix, to provide various kinds of services. Some existing work in the current literature has investigated the variation and unpredictability of cloud services. These works demonstrated interesting observations regarding cloud offerings, but failed to reveal the underlying causes of the varied behaviors of cloud services. In this thesis, we looked into the underlying scheduling mechanisms and hardware configurations of Amazon EC2, and investigated their impact on the performance of the virtual machine instances running on top of them. Specifically, several instances from the standard and high-CPU instance families are covered to shed light on the hardware upgrades and replacements in Amazon EC2. The large instance from the standard family is then selected for a focused analysis. To better understand the various behaviors of the instances, a local cluster environment was set up, consisting of two Intel Xeon servers running different scheduling algorithms. Through a series of benchmark measurements, we observed the following: (1) Amazon utilizes highly diversified hardware to provision different instances, which results in significant performance variation that can reach up to 30%. (2) Two different scheduling mechanisms were observed: one is similar to the Simple Earliest Deadline First (SEDF) scheduler, whilst the other is analogous to the Credit scheduler in the Xen hypervisor. These two scheduling mechanisms also give rise to variations in performance. (3) By applying a simple "trial-and-failure" instance selection strategy, the cost saving is surprisingly significant: given a certain distribution of fast and slow instances, the achievable cost saving can reach 30%, which is attractive to SMEs using the Amazon EC2 platform.
APA, Harvard, Vancouver, ISO, and other styles
31

Birgonul, Zeynep. "Symbiotic data platform. A receptive-responsive tool for customizing thermal comfort & optimizing energy efficiency." Doctoral thesis, Universitat Internacional de Catalunya, 2020. http://hdl.handle.net/10803/669180.

Full text
Abstract:
Symbiotic Data Platform is a receptive-responsive tool for 'personalized' thermal comfort optimization. The research focuses on searching for new ways to make the BIM methodology interactive, on the possibility of using existing BIM data during the occupation phase of the building, and on the potential of enhancing energy efficiency and comfort optimization together by taking advantage of BIM material data. The objective of the research is to exploit the massive existing data embedded in Building Information Models by exporting the information and using it as input in other mediums. The research addresses both energy efficiency and sustainable environment concerns by augmenting the accuracy of analysis with material data and real-time information, while focusing on personalized comfort optimization. The final product is an interface that addresses contemporary concerns about global facts and the new generation's responsible society. The research was developed by designing and testing via prototyping, thanks to IoT technology, and by investigating the possibilities of adding BIM data to the prototypes' algorithm.
APA, Harvard, Vancouver, ISO, and other styles
32

Vasudevan, Meera. "Profile-based application management for green data centres." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/98294/1/Meera_Vasudevan_Thesis.pdf.

Full text
Abstract:
This thesis presents a profile-based application management framework for energy-efficient data centres. The framework is based on a concept of using Profiles that provide prior knowledge of the run-time workload characteristics to assign applications to virtual machines. The thesis explores the building of profiles for applications, virtual machines and servers from real data centre workload logs. This is then used to inform static and dynamic application assignment, and consolidation of applications.
APA, Harvard, Vancouver, ISO, and other styles
33

Tena, Frezewd Lemma. "Energy-Efficient Key/Value Store." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-228586.

Full text
Abstract:
Energy conservation is a major concern in today's data centers, the 21st-century data processing factories, where large and complex software systems such as distributed data management stores run and serve billions of users. The two main drivers of this concern are the environmental impact data centers have due to their waste heat, and the high cost data centers incur due to their enormous energy demand. Among the many subsystems of a data center, the storage system is one of the main sources of energy consumption, and among the many types of storage systems, key/value stores are among the most widely used in data centers. In this work, I investigate energy-saving techniques that enable a consistent-hash-based key/value store to save energy during low-activity periods and whenever there is an opportunity to reuse the waste heat of data centers.
APA, Harvard, Vancouver, ISO, and other styles
34

Knauth, Thomas. "Energy Efficient Cloud Computing: Techniques and Tools." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-164391.

Full text
Abstract:
Data centers hosting internet-scale services consume megawatts of power. Mainly for cost reasons but also to appease environmental concerns, data center operators are interested in reducing their use of energy. This thesis investigates if and how hardware virtualization helps to improve the energy efficiency of modern cloud data centers. Our main motivation is to power off unused servers to save energy. The work encompasses three major parts: First, a simulation-driven analysis quantifies the benefits of known reservation times in infrastructure clouds. Virtual machines with similar expiration times are co-located to increase the probability of powering down unused physical hosts. Second, we propose and prototype a system to deliver truly on-demand cloud services. Idle virtual machines are suspended to free resources and as a first step towards powering off the physical server. Third, a novel block-level data synchronization tool enables fast and efficient state replication. Frequent state synchronization is necessary to prevent data unavailability: powering down a server disables access to the locally attached disks and any data stored on them. The techniques effectively reduce the overall number of required servers, either through optimized scheduling or by suspending idle virtual machines. Fewer live servers translate into proportional energy savings, as the unused servers no longer need to be powered.
APA, Harvard, Vancouver, ISO, and other styles
35

Da, Silva Ralston A. "Green Computing – Power Efficient Management in Data Centers Using Resource Utilization as a Proxy for Power." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1259760420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Moser, Philip [Verfasser], Dieter [Akademischer Betreuer] Bimberg, and Anders [Akademischer Betreuer] Larsson. "Energy efficient oxide confined VCSELs for optical interconnects in data centers and supercomputers / Philip Moser. Gutachter: Dieter Bimberg ; Anders Larsson. Betreuer: Dieter Bimberg." Berlin : Technische Universität Berlin, 2015. http://d-nb.info/1070276774/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Silva, Newton Rocha da. "TI verde – o armazenamento de dados e a eficiência energética no data center de um banco brasileiro." Universidade Nove de Julho, 2015. https://bibliotecatede.uninove.br/handle/tede/1155.

Full text
Abstract:
Green IT focuses on the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems efficiently and effectively, with minimal impact on the environment. Its major goal is to improve computing performance while reducing energy consumption and the carbon footprint. Green information technology is thus the practice of environmentally sustainable computing and aims to minimize the negative impact of IT operations on the environment. On the other hand, the exponential growth of digital data is a reality for most companies, making them increasingly dependent on IT to provide sufficient, real-time information to support the business. This growth trend causes changes in the infrastructure of data centers, focusing attention on facility capacity owing to the energy, space, and cooling demands of IT activities. In this scenario, this research analyzes whether the main data storage solutions, such as consolidation, virtualization, deduplication, and compression, together with solid-state technologies (SSD or flash systems), can contribute to the efficient use of energy in the organization's main data center. The theme was treated using a qualitative and exploratory research method, based on a case study, empirical and documentary research as the data collection technique, and interviews with key IT suppliers. The case study took place in the main data center of a large Brazilian bank. As a result, we found that energy efficiency is affected by the technological solutions presented. 
Environmental concern was evident and revealed a path shared between the partners and the organization studied. Maintaining the PUE (Power Usage Effectiveness) energy-efficiency metric at a level of excellence reflects the combined implementation of solutions, technologies, and best practices. We conclude that, in addition to reducing energy consumption, data storage solutions and technologies promote efficiency improvements in the data center, enabling more power density for the installation of new equipment. Therefore, facing the growth in demand for digital data, it is crucial that the choice of solutions, technologies, and strategies be appropriate, not only because of the criticality of information, but because of the efficient use of resources, contributing to a better understanding of the importance of IT and its consequences for the environment.
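The PUE metric used as the efficiency yardstick in this study is simply the ratio of total facility energy to the energy delivered to IT equipment; a trivial sketch (the figures in the example are invented for illustration):

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy divided by the
    energy consumed by IT equipment over the same period. 1.0 is the
    theoretical ideal; the closer to it, the less overhead (cooling,
    power distribution) is spent per unit of useful IT work."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# e.g. a facility drawing 1500 kWh of which 1200 kWh reaches IT gear
print(pue(1500, 1200))  # 1.25
```

Storage consolidation and deduplication lower the denominator's waste indirectly: the same data is served by fewer, denser devices, leaving headroom for new equipment at the same facility load.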
APA, Harvard, Vancouver, ISO, and other styles
38

Bayati, Léa. "Data centers energy optimization." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC0063.

Full text
Abstract:
To ensure both good data center service performance and reasonable power consumption, a detailed analysis of the behavior of these systems is essential for the design of efficient optimization algorithms that reduce energy consumption. This thesis fits into this context: our main work is to design dynamic energy management systems based on stochastic models of controlled queues. The goal is to search for optimal control policies for data center management, which should meet the growing demand to reduce energy consumption and digital pollution while maintaining quality of service. We first focused on modeling dynamic energy management with a stochastic model of a homogeneous data center, mainly to study structural properties of the optimal strategy, such as monotonicity. Since data centers exhibit a significant level of server heterogeneity in terms of energy consumption and service rates, we then generalized the homogeneous model to a heterogeneous one. 
In addition, since a data center server's wake-up and shutdown are not instantaneous and a server requires some time to go from sleep mode to ready-to-work mode, we extended the model to include this server latency. Throughout this exact optimization, arrival and service rates are specified with histograms that can be obtained from actual traces, empirical data, or traffic measurements. We showed that the size of the MDP model is very large and leads to state-space explosion and long computation times. Thus, exact optimization through an MDP is often difficult or practically impossible to apply to large data centers, especially if real aspects such as server heterogeneity or latency are taken into account. So we propose what we call the greedy-window algorithm, which finds a sub-optimal strategy better than the one produced by a special mechanism such as the threshold approaches. More importantly, unlike the MDP approach, this algorithm does not require the complete construction of the structure that encodes all possible strategies. It yields a strategy very close to the optimal one with very low space and time complexity, which makes the solution practical, scalable, and dynamic, and allows it to run online.
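One of the "threshold approaches" that the greedy-window algorithm is compared against can be sketched as a simple hysteresis rule on the queue backlog; the thresholds below are arbitrary illustrative values, not those of the thesis:

```python
def threshold_policy(queue_length, servers_on, up_threshold,
                     down_threshold, max_servers):
    """Baseline on/off control: power a server up when the backlog
    exceeds up_threshold, power one down when it falls below
    down_threshold. The gap between the two thresholds (hysteresis)
    avoids rapid on/off oscillation around a single set-point."""
    if queue_length > up_threshold and servers_on < max_servers:
        return servers_on + 1
    if queue_length < down_threshold and servers_on > 1:
        return servers_on - 1
    return servers_on

# Backlog of 12 jobs with 2 of 4 servers on: switch a third one on.
print(threshold_policy(12, 2, up_threshold=10, down_threshold=3,
                       max_servers=4))  # 3
```

An MDP-based controller instead weighs energy, latency-of-wake-up and waiting costs over all states, which is exactly what becomes intractable at scale and motivates the sub-optimal greedy-window search.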
APA, Harvard, Vancouver, ISO, and other styles
39

Ostapenco, Vladimir. "Modélisation, évaluation et orchestration des leviers hétérogènes pour la gestion des centres de données cloud à grande échelle." Electronic Thesis or Diss., Lyon, École normale supérieure, 2024. http://www.theses.fr/2024ENSL0096.

Full text
Abstract:
The Information and Communication Technology (ICT) sector is constantly growing due to the increasing number of Internet users and the democratization of digital services, leading to a significant and ever-increasing carbon footprint. The share of greenhouse gas (GHG) emissions related to ICT is estimated to be between 1.8% and 3.9% of global GHG emissions in 2020, with a risk of almost doubling and reaching more than 7% by 2025. Data centers are at the center of this growth, estimated to be responsible for a significant portion of the ICT industry's global GHG emissions (ranging from 17% to 45% in 2020) and to consume approximately 1% of global electricity in 2018. Numerous leverages exist and can help cloud providers and data center managers to reduce some of these impacts. These leverages can operate on multiple facets such as turning off unused resources, slowing down resources to adapt to the real needs of applications and services, and optimizing or consolidating services to reduce the number of physical resources mobilized. These leverages can be very heterogeneous and involve hardware, software layers or more logistical constraints at the data center scale.
Activating, deactivating and orchestrating these heterogeneous leverages on a large scale is a real challenge, allowing potential gains in terms of reducing energy consumption and GHG emissions. In this thesis, we address the modeling, evaluation and orchestration of heterogeneous leverages in the context of a large-scale cloud data center, proposing for the first time the combination of heterogeneous leverages: both technological (turning on/off resources, migration, slowdown) and logistical (installation of new machines, decommissioning, functional or geographical changes of IT resources). First, we propose a novel heterogeneous leverage modeling approach covering leverage impacts, costs and combinations, the concept of an environmental Gantt chart containing the leverages applied to the cloud provider's infrastructure, and a leverage management framework that aims to improve the overall energy and environmental performance of a cloud provider's entire infrastructure. Then, we focus on metric monitoring and collection, including energy and environmental data. We discuss power and energy measurement and conduct an experimental comparison of software-based power meters. Next, we study a single technological leverage by conducting a thorough analysis of the Intel RAPL leverage for power-capping purposes on a set of heterogeneous nodes for a variety of CPU- and memory-intensive workloads. Finally, we validate the proposed heterogeneous leverage modeling approach at large scale by exploring three distinct scenarios that show the pertinence of the proposed approach in terms of resource management and potential impact reduction.
APA, Harvard, Vancouver, ISO, and other styles
40

Mohammad, Ali Howraa. "Disaggregated servers for future energy efficient data centres." Thesis, University of Leeds, 2017. http://etheses.whiterose.ac.uk/17737/.

Full text
Abstract:
The popularity of the Internet and the demand for 24/7 service uptime are driving system performance and reliability requirements to levels that today's data centres can no longer support. This thesis examines the traditional monolithic conventional server (CS) design and compares it to a new design paradigm known as the disaggregated server (DS). The DS design arranges data centre resources in physical pools, such as processing, memory and IO module pools, rather than packing each subset in a single server. In this work, we study energy-efficient resource provisioning and virtual machine (VM) allocation in DS based data centres compared to CS based data centres. First, we developed a mixed integer linear programming (MILP) model to optimise VM allocation for DS based data centres. Our results indicate that considering pooled resources yields up to 62% total saving in power consumption compared to the CS approach. Due to the high computational complexity of the MILP, we developed an energy-efficient, fast and scalable resource provisioning heuristic (EERP-DS), based on the MILP insights, with power efficiency comparable to the MILP. Second, we extended the resource provisioning and VM allocation MILP to include the power consumption of the data centre communication fabric. The results show that the inclusion of the communication fabric still yields considerable power savings compared to the CS approach, up to 48%. Third, we developed an energy-efficient resource provisioning for DS with communication fabric heuristic (EERP-DSCF). EERP-DSCF achieved results comparable to the second MILP, and with it we can extend the number of served VMs, as the scalability of the MILP for large numbers of VMs is challenging. Finally, we present our new design for a photonic DS based data centre architecture, supplemented with a complete description of the architecture components, communication patterns and some recommendations for the design implementation challenges.
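As a rough illustration of the pooled-resource argument in this abstract, the sketch below contrasts monolithic and disaggregated packing. The unit sizes and the first-fit heuristic are invented for illustration; this is not the thesis's MILP or its EERP-DS heuristic:

```python
def servers_needed(vms, cpu_cap=16, mem_cap=32):
    """CS view: each server bundles CPU and RAM, so a server is 'full'
    as soon as either resource runs out (first-fit packing)."""
    servers = []
    for cpu, mem in vms:
        for s in servers:
            if s[0] + cpu <= cpu_cap and s[1] + mem <= mem_cap:
                s[0] += cpu
                s[1] += mem
                break
        else:
            servers.append([cpu, mem])
    return len(servers)

def pooled_units(vms, cpu_cap=16, mem_cap=32):
    """DS view: CPU and memory modules sit in independent pools, so each
    resource is packed (and powered) on its own."""
    total_cpu = sum(c for c, _ in vms)
    total_mem = sum(m for _, m in vms)
    cpu_units = -(-total_cpu // cpu_cap)   # ceiling division
    mem_units = -(-total_mem // mem_cap)
    return cpu_units, mem_units

vms = [(10, 2)] * 4          # four CPU-heavy, memory-light VMs
print(servers_needed(vms))   # → 4  (CS strands the unused RAM of each box)
print(pooled_units(vms))     # → (3, 1)
```

The CPU-heavy workload strands memory in every monolithic server, while the pooled layout powers only the modules actually needed, which is the intuition behind the reported savings.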
APA, Harvard, Vancouver, ISO, and other styles
41

Li, Lei. "Fast Algorithms for Mining Co-evolving Time Series." Research Showcase @ CMU, 2011. http://repository.cmu.edu/dissertations/112.

Full text
Abstract:
Time series data arise in many applications, from motion capture, environmental monitoring and temperatures in data centers to physiological signals in health care. In this thesis, I focus on the theme of learning and mining large collections of co-evolving sequences, with the goal of developing fast algorithms for finding patterns, summarization, and anomalies. In particular, this thesis answers the following recurring challenges for time series: 1. Forecasting and imputation: How to do forecasting and recover missing values in time series data? 2. Pattern discovery and summarization: How to identify the patterns in the time sequences that facilitate further mining tasks such as compression, segmentation and anomaly detection? 3. Similarity and feature extraction: How to extract compact and meaningful features from multiple co-evolving sequences that enable better clustering and similarity queries of time series? 4. Scale-up: How to handle large data sets on modern computing hardware? We develop models to mine time series with missing values, to extract compact representations from time sequences, to segment the sequences, and to do forecasting. For large-scale data, we propose algorithms for learning time series models, in particular Linear Dynamical Systems (LDS) and Hidden Markov Models (HMM). We also develop a distributed algorithm for finding patterns in large web-click streams. The thesis presents special models and algorithms that incorporate domain knowledge. For motion capture, we describe natural motion stitching and occlusion filling for human motion. In particular, we provide a metric for evaluating the naturalness of motion stitching, based on which we choose the best stitching. Thanks to domain knowledge (body structure and bone lengths), our algorithm is capable of recovering occlusions in mocap sequences, with better accuracy and over longer missing periods. We also develop an algorithm for forecasting thermal conditions in a warehouse-sized data center. The forecast helps us control and manage the data center in an energy-efficient way, which can save a significant percentage of electric power consumption in data centers.
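As a toy illustration of forecasting measurements such as data-centre temperatures, the following sketch fits a first-order autoregressive model using the standard least-squares closed form, far simpler than the LDS/HMM machinery of the thesis, on a fabricated inlet-temperature series:

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = a * x[t-1] + b."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def forecast(series, steps, a, b):
    """Iterate the fitted recurrence forward from the last observation."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = a * last + b
        out.append(last)
    return out

temps = [22.0, 22.6, 23.1, 23.5, 23.8, 24.1]   # inlet °C, made up
a, b = fit_ar1(temps)
print([round(t, 2) for t in forecast(temps, 3, a, b)])
```

The fitted model decays toward a fixed point b / (1 - a), which is why the forecast flattens out: a plausible behaviour for a room approaching thermal equilibrium.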
APA, Harvard, Vancouver, ISO, and other styles
42

Tatchell-Evans, Morgan Rhys. "Energy efficient operation of data centres : technical, computational and political challenges." Thesis, University of Leeds, 2017. http://etheses.whiterose.ac.uk/19349/.

Full text
Abstract:
The aim of this study was to investigate the technologies and policy instruments available to improve efficiency in data centres. Data centres consume a significant and increasing proportion of the world’s electricity, and much of this electricity is consumed in cooling the computing equipment housed in these facilities. Significant potential exists to improve the efficiency of cooling in data centres. One popular method for improving efficiency in data centre cooling is to physically separate the hot and cold air streams using ‘aisle containment’ systems. This has been shown to reduce ‘bypass’ (cold air returning to the air conditioning system without having passed through any computing equipment) and ‘recirculation’ (rejected hot air returning to computing equipment inlets, leading to over-heating). However, the benefits of aisle containment have not previously been extensively quantified, nor have the optimal operational conditions been investigated. Experimental investigations were undertaken to determine the extents of bypass and recirculation in data centres employing aisle containment. Effective measures for minimising bypass and recirculation in such data centres were identified. A system model was developed to predict the impacts of this bypass and recirculation on data centre electricity consumption. The system model results showed that taking action to minimise bypass could reduce electricity consumption by up to 36%, whilst minimising the pressurisation of the cold aisles could reduce electricity consumption by up to 58%. Computational fluid dynamics models were developed to further investigate the implications of aisle containment, for both electricity consumption and cooling efficacy. Significant advancements in the techniques used to model bypass and recirculation within contained systems have been made. 
Finally, interviews were undertaken with data centre operators, in order to enable an assessment of the current policy environment pertaining to energy efficiency in UK data centres. Recommendations have been made for potential improvements to this policy environment.
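For intuition, the two effects this abstract quantifies can be approximated from air flows alone. The flow-balance simplification below is illustrative only and is not the system model developed in the thesis, which works from measured temperatures and pressures:

```python
def bypass_fraction(crac_flow, it_flow):
    """Share of supplied cold air that never passes through IT equipment
    (occurs when the CRACs supply more air than the IT fans draw)."""
    return max(0.0, crac_flow - it_flow) / crac_flow

def recirculation_fraction(crac_flow, it_flow):
    """Share of IT intake drawn from rejected hot air instead of the cold
    aisle (occurs when IT fans demand more than the CRACs supply)."""
    return max(0.0, it_flow - crac_flow) / it_flow

# Flows in m³/s, made up for illustration:
print(round(bypass_fraction(12.0, 9.0), 3))         # over-supplied aisle → 0.25
print(round(recirculation_fraction(8.0, 10.0), 3))  # under-supplied aisle → 0.2
```

In this simplified view a perfectly balanced contained aisle (equal flows) has neither bypass nor recirculation, which is why the abstract links cold-aisle pressurisation to electricity consumption.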
APA, Harvard, Vancouver, ISO, and other styles
43

Rincon, Mateus Cesar Augusto. "Dynamic resource allocation for energy management in data centers." College Station, Tex.: Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-3182.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Padala, Praneel Reddy. "Virtualization of Data Centers : study on Server Energy Consumption Performance." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16011.

Full text
Abstract:
For various reasons, data centers have become ubiquitous in our society. Energy costs are a significant portion of data centers' total lifetime costs, which also matters financially to operators. This raises serious concern about the energy costs and environmental impacts of data centers. Power costs and energy efficiency are major challenges in front of us. Of the overall energy used in computing, 15% is used by the networking portion of a data center. It is estimated that the energy used by network infrastructure in data centers worldwide is 15.6 billion kWh and is expected to increase to around 50%. Power costs and energy consumption play a major role throughout the lifetime of a data center, which leads to increased financial costs for data center operators and increased usage of power resources. Resource utilization has therefore become a major issue in data centers. The main aim of this thesis study is to find an efficient way to utilize resources and decrease operators' energy costs in data centers using virtualization. Virtualization technology is used to deploy virtual servers on physical servers, which share the same resources and help to decrease the energy consumption of a data center.
APA, Harvard, Vancouver, ISO, and other styles
45

Ding, Zhe. "Profile-based virtual machine placement for energy optimization of data centers." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/103845/1/Zhe_Ding_Thesis.pdf.

Full text
Abstract:
This thesis provides a framework for virtual resource placement to optimize energy consumption in data centers. The framework consists of three phases: profiling, task classification, and virtual machine placement, and automatically conducts virtual resource placement for given jobs and tasks. It is shown to achieve a 12% cut in energy consumption in comparison with benchmark methods, implying savings of millions of dollars for an enterprise.
APA, Harvard, Vancouver, ISO, and other styles
46

Massana, i. Raurich Joaquim. "Data-driven models for building energy efficiency monitoring." Doctoral thesis, Universitat de Girona, 2018. http://hdl.handle.net/10803/482148.

Full text
Abstract:
Nowadays, energy is absolutely necessary all over the world. Taking into account the advantages it presents for transport and the needs of homes and industry, energy is transformed into electricity. Bearing in mind the expansion of electricity, initiatives like Horizon 2020 pursue the objective of a more sustainable future: reducing carbon emissions and electricity consumption and increasing the use of renewable energies. As an answer to the shortcomings of the traditional electrical network, such as large distances to the point of consumption, low flexibility, low sustainability, low energy quality, the difficulty of storing electricity, etc., Smart Grids (SG), a natural evolution of the classical network, have appeared. One of the main components that will allow the SG to improve on the traditional grid is the Energy Management System (EMS). The EMS is necessary to carry out the management of the power network, and one of its main needs is a prediction system: that is, knowing the electricity consumption in advance. Besides, the utilities will also require predictions to manage generation, maintenance and their investments. Therefore, it is necessary to have electricity consumption prediction systems that, based on the available data, forecast the consumption of the next hours, days or months as accurately as possible. It is in this field that the present research is placed since, due to the proliferation of sensor networks and more powerful computers, more precise prediction systems have been developed. That said, a complete study of the state of the art on the load forecasting topic was carried out in the first work. On the basis of the acquired knowledge, the installation of sensor networks, the collection of consumption data and modelling using Autoregressive (AR) models were performed in the second work.
Once this model was defined, a further step was taken in the third work: collecting new data, such as building occupancy, meteorology and indoor ambience; testing several paradigmatic models, such as Multiple Linear Regression (MLR), Artificial Neural Networks (ANN) and Support Vector Regression (SVR); and establishing which exogenous data improve the prediction accuracy of the models. Having corroborated that the use of occupancy data improves the prediction, techniques and methodologies were needed to obtain the occupancy data in advance. Therefore, in the fourth work, several artificial occupancy attributes were designed in order to perform long-term hourly consumption predictions.
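As a minimal illustration of the exogenous-data finding, the sketch below fits a linear consumption model with and without a fabricated occupancy regressor and checks that the occupancy-aware fit has the lower in-sample error. The data, learning rate and epoch count are all invented, and this plain gradient-descent regression stands in for the MLR/ANN/SVR models compared in the thesis:

```python
def fit_linear(X, y, lr=0.01, epochs=5000):
    """Plain per-sample gradient-descent least squares; returns weights
    with the bias term in position 0."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)) - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def mse(X, y, w):
    return sum((w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)) - yi) ** 2
               for xi, yi in zip(X, y)) / len(y)

temp = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]   # normalised outdoor temperature
occ = [0.0, 1.0, 1.0, 1.0, 0.0, 0.0]    # building occupied?
load = [1.0, 2.6, 2.7, 2.8, 1.5, 1.6]   # hourly consumption, occupancy-driven

base = fit_linear([[t] for t in temp], load)
full = fit_linear([[t, o] for t, o in zip(temp, occ)], load)
print(mse([[t] for t in temp], load, base) >
      mse([[t, o] for t, o in zip(temp, occ)], load, full))
```

Because the fabricated load is dominated by occupancy, the temperature-only model cannot explain most of the variance, mirroring the thesis's conclusion that occupancy is a valuable exogenous input.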
APA, Harvard, Vancouver, ISO, and other styles
47

Castro, Pedro Henrique Pires de. "Estratégias para uso eficiente de recursos em centros de dados considerando consumo de CPU e RAM." Universidade Federal de Goiás, 2014. http://repositorio.bc.ufg.br/tede/handle/tede/4124.

Full text
Abstract:
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). Cloud computing is being consolidated as a new distributed systems paradigm, offering computing resources in a virtualized way and with unprecedented levels of flexibility, reliability, and scalability. Unfortunately, the benefits of cloud computing come at a high cost with regard to energy, mainly because of one of its core enablers, the data center. There are a number of proposals that seek to enhance energy efficiency in data centers. However, most of them focus only on the energy consumed by CPU and ignore the remaining hardware, e.g., RAM. In this work, we show the considerable impact that RAM can have on total energy consumption, particularly in servers with large amounts of this memory. We also propose three new approaches for dynamic consolidation of virtual machines (VMs) that take into account both CPU and RAM usage. We have implemented and evaluated our proposals in the CloudSim simulator using real-world traces and compared the results with state-of-the-art solutions.
By adopting a wider view of the system, our proposals are able to reduce not only energy consumption but also the number of SLA violations, i.e., they provide a better service at a lower cost.
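A hypothetical sketch of the "wider view" argued for here: score candidate hosts on the power increase from both CPU and RAM, with an invented penalty for waking an idle host. The coefficients and host fields are illustrative assumptions, not the thesis's consolidation policies or CloudSim's power models:

```python
P_CPU, P_RAM = 1.0, 0.4   # assumed marginal power per unit of utilisation

def placement_cost(host, vm):
    """Power increase caused by placing vm on host (inf if it does not fit
    in either CPU or RAM)."""
    if (host["cpu_used"] + vm["cpu"] > host["cpu_cap"]
            or host["ram_used"] + vm["ram"] > host["ram_cap"]):
        return float("inf")
    idle_penalty = 1000.0 if host["cpu_used"] == 0 else 0.0  # waking a host
    return P_CPU * vm["cpu"] + P_RAM * vm["ram"] + idle_penalty

def best_host(hosts, vm):
    """Greedy choice: the host whose power bill grows the least."""
    return min(hosts, key=lambda h: placement_cost(h, vm))

hosts = [
    {"name": "idle", "cpu_used": 0, "ram_used": 0, "cpu_cap": 32, "ram_cap": 64},
    {"name": "active", "cpu_used": 20, "ram_used": 40, "cpu_cap": 32, "ram_cap": 64},
]
vm = {"cpu": 8, "ram": 16}
print(best_host(hosts, vm)["name"])   # → active (consolidates, keeps idle host asleep)
```

Making RAM part of both the fit check and the cost is what prevents the CPU-only blind spot the abstract criticises: a RAM-hungry VM can no longer look "free" to place.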
APA, Harvard, Vancouver, ISO, and other styles
48

Geiberger, Philipp. "Monitoring energy efficiency of heavy haul freight trains with energy meter data." Thesis, KTH, Spårfordon, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299421.

Full text
Abstract:
In this MSc thesis, it is investigated what parameters are relevant for describing energy consumption of heavy haul freight trains and how these can be used to develop key performance indicators (KPIs) for energy efficiency. The possible set of KPI is bounded by data available from energy meters used in electric IORE class locomotives hauling iron ore trains in northern Sweden. Furthermore, the analysis is only concerned with energy efficiency at the rolling stock level, excluding losses in the electric power supply network. Based on a literature study, parameters of interest describing driver, operations and rolling stock energy efficiency have been identified. By means of simulation, a parametric study is performed, simulating a 30 ton axle load iron ore train with 68 wagons. Train modelling input is obtained from technical documentation or estimated through measurements and statistical analysis. A multi-particle representation of the train is used to calculate gradient resistance for the simulation, which is also applied to determine the curve resistance.  Results show that the motion resistance is simulated quite accurately, while the lack of a driver model in the simulation tool leads to overestimation of energy consumption. Taking this into account, the importance of the driver for energy efficiency can still clearly be showcased in the parametric study. Especially on long steep downhill sections, prioritising the electric brakes over mechanical brakes is demonstrated to have a huge influence on net energy consumption, as has the amount of coasting applied. With the same driver behaviour in all simulations, the savings in specific energy from increasing axle load to 32.5 tons is estimated. Moreover, a comparison of increased train length and axle load points towards higher savings for the latter. In the end, parametric study results are used to recommend a structure for a monitoring system of energy efficiency based on a set of KPIs. 
With a sufficiently high sampling rate of energy meter data, it is adequate for calculating driver related KPIs and some additional KPIs. More KPIs can be tracked with access to additional data, e.g. cargo load.
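One KPI of the kind this abstract recommends can be computed directly from meter readings: net specific energy, i.e. consumed minus regenerated energy per gross tonne-kilometre. The function and figures below are illustrative stand-ins, not the thesis's actual KPI set or measured values:

```python
def net_specific_energy(consumed_kwh, regen_kwh, train_mass_t, distance_km):
    """kWh per gross tonne-km; regenerated braking energy is credited,
    rewarding drivers who prioritise electric over mechanical braking."""
    return (consumed_kwh - regen_kwh) / (train_mass_t * distance_km)

# A made-up 8520 t iron-ore train over a 430 km run:
kpi = net_specific_energy(consumed_kwh=31000.0, regen_kwh=7500.0,
                          train_mass_t=8520.0, distance_km=430.0)
print(round(kpi * 1000, 3))   # Wh per tonne-km
```

Crediting regeneration in the numerator is what lets such a KPI distinguish driving style on the long downhill sections discussed above, since two drivers with identical gross consumption can differ widely in recovered energy.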
APA, Harvard, Vancouver, ISO, and other styles
49

Vialetto, Giulio. "Energy efficiency in industrial facilities - Improvements on energy transformation and data analysis." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3425926.

Full text
Abstract:
During my Ph.D. studies, the main aim of the research work was to improve the efficiency of energy generation in industrial facilities. Novelties are proposed both on the devices used for energy generation and on energy consumption data analytics. In the first part of the thesis, Solid Oxide Fuel Cells (SOFC) and Reversible Solid Oxide Cells (RSOC) are proposed: these technologies have many advantages, such as high efficiency in energy generation, heat available at high temperature, and modularity. A new heat recovery scheme for a modular micro-cogeneration system based on SOFC is presented, with the main goal of improving the efficiency of an air source heat pump with the unused heat of fuel cell exhaust gases. The novelty of the proposed system is that the exhaust gases after the fuel cell are first used to heat water and/or to produce steam, and are then mixed with the external air to feed the evaporator of the heat pump with the aim of increasing the energy efficiency of the latter. This system configuration also decreases the possibility of freezing of the evaporator, which is one of the drawbacks of air source heat pumps in climates where temperatures close to 0 °C and high humidity can occur. Results show that the performance of the air source heat pump increases considerably during the cold season for climates with high relative humidity and for users with high electric power demand. As previously mentioned, not only SOFC but also RSOC are analysed in depth in the thesis to define an innovative energy generation system with the possibility of varying the heat-to-power (H/P) ratio to match energy generation and demand, in order to avoid mismatch and, consequently, the need for an additional integration system.
The aim is to define a modular system where each RSOC module can be switched between energy generation mode (fuel consumption to produce electricity and heat) and energy consumption (electricity and heat are consumed to produce hydrogen, working as Solid Oxide Electrolysis Cells) to vary overall H/P of the overall system. Hydrogen is a sub-product of the system and can be used for many purposes such as fuel and/or for transport sector. Then a re-vamping of the energy generation system of a paper mill by means of RSOCS is proposed and analysed: a real industrial facility, based in Italy with a production capacity of 60000 t/y of paper, is used as case study. Even if the complexity of the system increases, results show that saving between 2% and 6% occurs. Hydrogen generation is assessed, comparing the RSOC integrated system with PEM electrolysis, in terms of both primary energy and economics. Results exhibit significant primary energy and good economic performance on hydrogen production with the novel system proposed. In the thesis novelties are proposed not only on energy system “hardware” (component for energy generation) but also on “software”. In the second part of the thesis, artificial intelligence and machine learning methods are analysed to perform analytics on energy consumption data and consequently to improve performances on energy generation and operation strategy. A study on how cluster analysis could be applied to analyse energy demand data is depicted. The aim of the method is to design cogeneration systems that suit more efficiently energy demand profiles, choosing the correct type of cogeneration technology, operation strategy and, if they are necessary, energy storages. A case study of a wood industry that requires low temperature heat to dry wood into steam-powered kilns that already uses cogeneration is proposed to apply the methodology in order to design and measure improvements. 
An alternative cogeneration system is designed and proposed, and thermodynamic benchmarks are defined to evaluate the differences between the as-is and alternative scenarios. Results show that the proposed method allows a more suitable cogeneration technology to be chosen compared to the adopted one, giving suggestions on the operation strategy in order to decrease energy losses and, consequently, primary energy consumption. Finally, clustering is suggested for short-term forecasting of energy demand in industrial facilities. A model based on clustering and kNN is proposed to find similar consumption patterns, to identify average consumption profiles, and then to use them to forecast consumption data. Novelties in the definition of the model parameters, such as data normalisation and clustering hyperparameters, are presented to improve its accuracy. The model is then applied to the energy dataset of the wood industry mentioned above. Analyses of the model parameters and results are performed, showing a forecast of electricity demand with an error of 3%.
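A minimal sketch of the pattern-matching forecasting idea (assumed synthetic data and parameters; the thesis novelties on normalisation and hyperparameter selection are omitted here) matches the first hours of the current day against historical days and averages the most similar days' remaining hours to produce a short-term forecast:

```python
# kNN-style short-term load forecasting on synthetic data: the evening load
# is predicted from the k historical days whose mornings look most similar.
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(100)
level = 80 + 30 * (days % 7 < 5)[:, None]               # weekday vs weekend (kW)
shape = 15 * np.exp(-((np.arange(24) - 13) ** 2) / 30)  # common midday peak
history = level + shape + rng.normal(0, 3, (100, 24))   # 100 days x 24 hours

def knn_forecast(history, morning, k=5):
    """Forecast hours 8-23 of a day from its first 8 hours."""
    dists = np.linalg.norm(history[:, :8] - morning, axis=1)
    neighbours = np.argsort(dists)[:k]                  # k most similar days
    return history[neighbours, 8:].mean(axis=0)         # average their evenings

today = 110 + shape + rng.normal(0, 3, 24)              # an unseen weekday
forecast = knn_forecast(history, today[:8])
mape = float(np.mean(np.abs(forecast - today[8:]) / today[8:]) * 100)
print(f"MAPE on remaining hours: {mape:.1f}%")
```

On this toy dataset the mornings are enough to identify the day type, so the averaged neighbours track the true evening load closely; real industrial data would need the normalisation and hyperparameter tuning the abstract credits for reaching a 3% error.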
APA, Harvard, Vancouver, ISO, and other styles
50

Lazaar, Nouhaila. "Optimisation des alimentations électriques des Data Centers." Thesis, Normandie, 2021. http://www.theses.fr/2021NORMC206.

Full text
Abstract:
Data centers are factories housing thousands of computer servers that work permanently to exchange, store and process data and make it accessible via the Internet. With the development of the digital sector, their energy consumption, which is largely fossil-fuel based, has grown continuously over the last decade, posing a real threat to the environment. The use of renewable energy is a promising way to limit the ecological footprint of data centers. Nevertheless, the intermittent nature of these sources hinders their integration into a system requiring a high degree of reliability. The hybridization of several technologies for green electricity production, coupled with storage devices, is currently an effective solution to this problem. As a result, this research work studies a multi-source system integrating tidal turbines, photovoltaic panels, batteries and a hydrogen storage system to power a MW-scale data center. The main objective of this thesis is the optimization of a data center power supply, both for isolated sites and for grid-connected installations. The first axis of this work is the modeling of the system components using the energetic macroscopic representation (EMR). An energy management strategy based on the frequency separation principle is first adopted to share power between storage devices with different dynamic characteristics. The second axis concerns the optimal sizing of the proposed system, in order to find the best configuration that meets the imposed technical constraints at minimum cost, using particle swarm optimization (PSO) and a genetic algorithm (GA). Here, a rule-based energy management technique is used for simplicity and to reduce computing time. The last axis focuses on the optimization of energy management through the GA, taking into account the degradation of the storage systems in order to reduce their operating costs and extend their lifetime. It should be noted that each axis discussed above has been the subject of a specific sensitivity analysis, which aims to evaluate the performance of the hybrid system under different operating conditions.
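The frequency separation principle mentioned in the first axis can be sketched with a simple first-order low-pass filter (a minimal illustration with an assumed time constant and demand signal, not the EMR-based implementation of the thesis): the slow component of the net power is routed to the hydrogen chain, and the fast residual to the battery:

```python
# Frequency separation of a net power mission between a slow storage
# (hydrogen chain) and a fast one (battery) via a first-order low-pass filter.
# The time constant and the demand signal are illustrative assumptions.
import numpy as np

dt = 1.0                     # time step, s
tau = 300.0                  # cut-off time constant of the low-pass filter, s
t = np.arange(0, 3600, dt)
# Net power to be supplied/absorbed: slow hourly trend + fast fluctuations (kW).
p_net = 200 * np.sin(2 * np.pi * t / 3600) + 30 * np.sin(2 * np.pi * t / 60)

alpha = dt / (tau + dt)      # discrete first-order low-pass coefficient
p_h2 = np.empty_like(p_net)
p_h2[0] = p_net[0]
for i in range(1, len(p_net)):
    p_h2[i] = p_h2[i - 1] + alpha * (p_net[i] - p_h2[i - 1])

p_batt = p_net - p_h2        # the battery handles the fast residual
print(f"std of H2 power: {p_h2.std():.1f} kW, battery: {p_batt.std():.1f} kW")
```

By construction the two components always sum to the demand, while the hydrogen chain sees only slow variations, matching its limited dynamics; the choice of `tau` is the tuning knob that such a strategy exposes.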
APA, Harvard, Vancouver, ISO, and other styles