Academic literature on the topic 'Energy efficiency in Data Centers'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Energy efficiency in Data Centers.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Energy efficiency in Data Centers"

1

Chong, Frederic T., Martijn J. R. Heck, Parthasarathy Ranganathan, Adel A. M. Saleh, and Hassan M. G. Wassel. "Data Center Energy Efficiency: Improving Energy Efficiency in Data Centers Beyond Technology Scaling." IEEE Design & Test 31, no. 1 (2014): 93–104. http://dx.doi.org/10.1109/mdat.2013.2294466.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Redondo Gil, Carlos. "Energy Efficiency in Data Processing Centers." Renewable Energy and Power Quality Journal 1, no. 08 (2010): 1051–60. http://dx.doi.org/10.24084/repqj08.580.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Doty, Steve. "Energy Efficiency In Computer Data Centers." Energy Engineering 103, no. 5 (2006): 50–76. http://dx.doi.org/10.1080/01998590609509477.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

AlQahtani, Abdullah Hamed. "Waste Energy in Data Centers." International Journal of Computer Science and Information Technology Research 11, no. 3 (2023): 172–75. https://doi.org/10.5281/zenodo.8383544.

Full text
Abstract:
Data centres are the backbone of the digital age, powering the storage and processing of vast amounts of information. However, their relentless demand for energy has raised concerns about environmental sustainability. This essay explores the concept of waste energy in data centres as a promising solution to mitigate their environmental footprint and improve overall energy efficiency. The exponential growth of data centre operations has led to significant energy consumption, resulting in carbon emissions and resource depletion. To address these challenges, data centre operators and researchers are increasingly focusing on harnessing waste energy, which refers to the energy generated as a byproduct of data centre operations that would otherwise go unused. This study examines various sources of waste energy, including excess heat and motion, and discusses innovative techniques to capture and repurpose this energy. Furthermore, the study delves into the benefits of waste energy utilization, including reduced operating costs, decreased carbon emissions, and improved energy resilience. The integration of renewable energy sources and advanced cooling systems plays a pivotal role in maximizing waste energy recovery. In addition to the technical aspects, this paper explores the economic and environmental implications of waste energy initiatives, highlighting their potential to transform data centres into sustainable, green computing hubs. It also discusses the challenges and barriers that must be overcome to achieve widespread adoption of waste energy solutions. In conclusion, waste energy in data centres represents a critical pathway towards a more sustainable and efficient digital infrastructure. By reimagining data centre operations and leveraging waste energy sources, we can significantly reduce the environmental impact of the digital age while ensuring the continued growth of our interconnected world. Keywords: Waste Energy, Data Centers, Sustainability, Renewable Energy, Energy Efficiency, Green Computing, Environmental Impact.
APA, Harvard, Vancouver, ISO, and other styles
5

Hernandez, Leonel, Genett Jimenez, and Piedad Marchena. "Energy Efficiency Metrics of University Data Centers." Knowledge Engineering and Data Science 1, no. 2 (2018): 64. http://dx.doi.org/10.17977/um018v1i22018p64-73.

Full text
Abstract:
Data centers are fundamental pieces of network and computing infrastructure, and today they are more relevant than ever, since they support the processing, analysis, and assurance of the data generated in the network and by applications in the cloud, a volume that grows every day thanks to technologies such as the Internet of Things, virtualization, and cloud computing, among others. Managing this large volume of information makes data centers consume a great deal of energy, generating concern among owners and administrators. Green data centers offer a solution to this problem, reducing the environmental impact of data centers through monitoring and control. Metrics are the tools that allow us to measure the energy efficiency of a data center and to evaluate whether it is environmentally friendly. These metrics are applied to the data centers of the ITSA University Institution, Barranquilla and Soledad campuses, and their results are analyzed. In previous research, the most common metric (PUE) was analyzed to measure the efficiency of the data centers and to verify whether the University's data center is environmentally friendly. This study extends that work by analyzing several metrics to conclude which is the most effective and which defines guidelines for updating or converting the data center into an environmentally friendly one.
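As a reading aid (not part of the cited article), a minimal sketch of how the PUE metric discussed above, and its reciprocal DCiE, are typically computed; the facility readings are hypothetical:

```python
# Minimal sketch: computing PUE and DCiE from hypothetical facility readings.
# PUE = total facility energy / IT equipment energy; DCiE = 1 / PUE.
# The numbers below are illustrative only, not measurements from the cited study.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: >= 1.0, lower is better."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

def dcie(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Data Center infrastructure Efficiency: reciprocal of PUE, as a fraction."""
    return it_equipment_kwh / total_facility_kwh

if __name__ == "__main__":
    total, it = 120_000.0, 75_000.0   # hypothetical monthly kWh readings
    print(f"PUE  = {pue(total, it):.2f}")    # 1.60
    print(f"DCiE = {dcie(total, it):.2%}")   # 62.50%
```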
APA, Harvard, Vancouver, ISO, and other styles
6

Karamat Khan, Tehmina, Mohsin Tanveer, and Asadullah Shah. "Energy Efficiency in Virtualized Data Center." International Journal of Engineering & Technology 7, no. 4.15 (2018): 315. http://dx.doi.org/10.14419/ijet.v7i4.15.23019.

Full text
Abstract:
Industrial and academic communities have been trying to get more computational power out of their investments. Data centers have recently received huge attention due to their increased business value and the scalability achievable on public/private clouds. The infrastructure and applications of modern data centers are being virtualized to achieve energy-efficient operation on servers. Despite the performance advantages of data centers, there is a tradeoff between power and performance, especially with cloud data centers. Today, these cloud application-based organizations face many energy-related challenges. In this paper, a survey analyzes how virtualization and networking challenges affect the energy efficiency of data centers, together with suggested optimization strategies.
APA, Harvard, Vancouver, ISO, and other styles
7

Fernández-Cerero, Damián, Alejandro Fernández-Montes, and Francisco Velasco. "Productive Efficiency of Energy-Aware Data Centers." Energies 11, no. 8 (2018): 2053. http://dx.doi.org/10.3390/en11082053.

Full text
Abstract:
Information technologies must be made aware of the sustainability of cost reduction. Data centers may reach energy consumption levels comparable to many industrial facilities and small-sized towns. Therefore, innovative and transparent energy policies should be applied to improve energy consumption and deliver the best performance. This paper compares, analyzes and evaluates various energy efficiency policies, which shut down underutilized machines, on an extensive set of data-center environments. Data envelopment analysis (DEA) is then conducted for the detection of the best energy efficiency policy and data-center characterization for each case. This analysis evaluates energy consumption and performance indicators for natural DEA and constant returns to scale (CRS). We identify the best energy policies and scheduling strategies for high and low data-center demands and for medium-sized and large data-centers; moreover, this work enables data-center managers to detect inefficiencies and to implement further corrective actions.
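For readers unfamiliar with DEA, the sketch below shows a standard input-oriented CCR model under constant returns to scale, solved as a linear program with SciPy. The data-center data and the exact formulation are illustrative assumptions, not taken from the cited paper:

```python
# Sketch of an input-oriented CCR (constant-returns-to-scale) DEA model.
# Each data center (DMU) has inputs (e.g., energy consumed) and outputs
# (e.g., completed jobs). Efficiency score theta <= 1; theta == 1 is efficient.
# Data and formulation are illustrative, not taken from the cited paper.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X: np.ndarray, Y: np.ndarray, o: int) -> float:
    """Efficiency of DMU `o`. X: (m inputs, n DMUs), Y: (s outputs, n DMUs)."""
    m, n = X.shape
    s, _ = Y.shape
    # Decision variables: [theta, lambda_1 ... lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0  # minimize theta
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return float(res.x[0])

# Hypothetical example: 4 data centers, input = [energy MWh], outputs = [jobs, Gbps served]
X = np.array([[100.0, 80.0, 120.0, 90.0]])
Y = np.array([[500.0, 450.0, 520.0, 300.0],
              [ 40.0,  35.0,  45.0,  20.0]])
for j in range(X.shape[1]):
    print(f"DC{j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```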
APA, Harvard, Vancouver, ISO, and other styles
8

Hamann, H. F., T. G. van Kessel, M. Iyengar, et al. "Uncovering energy-efficiency opportunities in data centers." IBM Journal of Research and Development 53, no. 3 (2009): 10:1–10:12. http://dx.doi.org/10.1147/jrd.2009.5429023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

V, Amarnath, Mallikarjuna K, Nagendra J, Umesha D M, and Nalina V. "Review on Energy Efficiency Green Data Centers." International Journal of Recent Engineering Science 5, no. 2 (2018): 21–26. http://dx.doi.org/10.14445/23497157/ijres-v5i2p105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Digra, Lakshmi, and Sharanjeet Singh. "Survey on Energy Efficiency in Cloud Computing." Asian Journal of Computer Science and Technology 8, no. 1 (2019): 18–21. http://dx.doi.org/10.51983/ajcst-2019.8.1.2125.

Full text
Abstract:
Data centers are serious, energy-hungry infrastructures that run large-scale Internet-based services. Energy consumption models are essential in designing and improving energy-efficient operations to reduce excessive energy consumption in data centers. This paper presents a survey on energy efficiency in data centers and the importance of energy efficiency. It also describes the increasing worldwide demand for data centers and the reasons data centers are energy inefficient. We define the challenges of implementing changes in data centers and explain why and how the energy requirements of data centers are growing. We then compare the German data center market at the international level and examine the energy consumption of data centers and servers in Germany from 2010 to 2016.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Energy efficiency in Data Centers"

1

Baccour, Emna. "Network architectures and energy efficiency for high performance data centers." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCK009/document.

Full text
Abstract:
The increasing trend to migrate applications, computation and storage into more robust systems leads to the emergence of mega data centers hosting tens of thousands of servers. As a result, designing a data center network that interconnects this massive number of servers, and providing efficient and fault-tolerant routing service, are becoming an urgent need and a challenge that will be addressed in this thesis. Since this is a hot research topic, many solutions have been proposed, such as adopting new interconnection technologies and new algorithms for data centers. However, many of these solutions generally suffer from performance problems or can be quite costly. In addition, devoted efforts have not focused on quality of service and power efficiency in data center networks. In order to provide a novel solution that overcomes the drawbacks of other research while retaining its advantages, we propose to develop new data center interconnection networks that aim to build a scalable, cost-effective, high-performance and QoS-capable networking infrastructure. In addition, we implement power-aware algorithms to make the network energy effective. Hence, we particularly investigate the following issues: 1) fixing the architectural and topological properties of the newly proposed data centers and evaluating their performance and their capacity to provide robust systems under a faulty environment; 2) proposing routing, load-balancing, fault-tolerance and power-efficient algorithms to apply to our architectures and examining their complexity and how they satisfy the system requirements; 3) integrating quality of service; 4) comparing our proposed data centers and algorithms to existing solutions under a realistic environment. In this thesis, we investigate a quite challenging topic where we intend, first, to study the existing models, propose improvements and suggest new methodologies and algorithms.
APA, Harvard, Vancouver, ISO, and other styles
2

Wu, Yongqiang. "Energy efficient virtual machine placement in data centers." Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/61408/1/Yongqiang_Wu_Thesis.pdf.

Full text
Abstract:
Electricity cost has become a major expense for running data centers, and server consolidation using virtualization technology has been used as an important technique to improve the energy efficiency of data centers. In this research, a genetic algorithm and a simulated-annealing algorithm are proposed for the static virtual machine placement problem that considers the energy consumption in both the servers and the communication network, and a trading algorithm is proposed for dynamic virtual machine placement. Experimental results have shown that the proposed methods are more energy efficient than existing solutions.
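As an illustrative aside, a toy simulated-annealing placement in the spirit of the abstract, using the number of active servers as a crude energy proxy; the cost model, parameters, and demands are invented and omit the network term considered in the thesis:

```python
# Toy simulated-annealing sketch for static VM placement.
# Cost = number of active servers (a crude proxy for energy); the thesis also
# models network energy, which is omitted here. All data are hypothetical.
import math
import random

def cost(placement, vm_cpu, cap):
    load = {}
    for vm, srv in enumerate(placement):
        load[srv] = load.get(srv, 0) + vm_cpu[vm]
    if any(l > cap for l in load.values()):
        return float("inf")           # infeasible placement
    return len(load)                  # number of active servers

def anneal(vm_cpu, n_servers, cap, steps=20000, t0=5.0, alpha=0.999):
    placement = [random.randrange(n_servers) for _ in vm_cpu]
    cur_cost = cost(placement, vm_cpu, cap)
    best, best_cost, t = placement[:], cur_cost, t0
    for _ in range(steps):
        cand = placement[:]
        cand[random.randrange(len(vm_cpu))] = random.randrange(n_servers)
        c = cost(cand, vm_cpu, cap)
        # Accept improvements always, worse moves with Boltzmann probability.
        if c < cur_cost or random.random() < math.exp(-(c - cur_cost) / t):
            placement, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        t *= alpha
    return best, best_cost

random.seed(1)
vms = [random.randint(1, 4) for _ in range(20)]   # CPU demand per VM
placement, n_active = anneal(vms, n_servers=20, cap=8)
print(f"{n_active} active servers for {len(vms)} VMs")
```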
APA, Harvard, Vancouver, ISO, and other styles
3

Lindberg, Therese. "Modelling and Evaluation of Distributed Airflow Control in Data Centers." Thesis, Karlstads universitet, Institutionen för ingenjörsvetenskap och fysik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-36479.

Full text
Abstract:
In this work, a suggested method to reduce the energy consumption of the cooling system in a data center is modelled and evaluated. Different approaches to distributed airflow control are introduced, in which different amounts of airflow can be supplied in different parts of the data center (instead of an even airflow distribution). Two kinds of distributed airflow control are compared to a traditional approach without airflow control, the difference between the two control approaches being the type of server rack used: either traditional racks or a new kind of rack with vertically placed servers. A model capable of describing the power consumption of the data center cooling system for these different approaches to airflow control was constructed. Based on the model, MATLAB simulations of three different server workload scenarios were then carried out. Introducing distributed airflow control reduced the power consumption for all scenarios, and the control approach with the new kind of rack gave the largest reduction. In this case the power consumption of the cooling system could be reduced to 60%-69% of the initial consumption, depending on the workload scenario. Also examined were the effects of different parameters and process variables (parameters held fixed with the help of feedback loops) on the data center, as well as optimal set point values.
APA, Harvard, Vancouver, ISO, and other styles
4

Jawad, Muhammad. "Energy Efficient Data Centers for On-Demand Cloud Services." Diss., North Dakota State University, 2015. http://hdl.handle.net/10365/25198.

Full text
Abstract:
The primary objective of data centers (DCs) is to provide timely services to cloud customers. For timely services, DCs require an uninterruptible power supply at low cost. The DCs' power supply is directly linked to the stability and steady-state performance of the power system under faults and disturbances. Smart Grids (SGs), also known as next-generation power systems, utilize communication and information technology to optimize power generation, distribution, and consumption. Therefore, it is beneficial to run DCs in an SG environment. We present a thorough study of wide-area smart grid architecture, design, networks, and control. The goal was to become familiar with smart grid operation, monitoring, and control. We analyze different control mechanisms proposed in the past to study the behavior of the wide-area smart grid under symmetric and asymmetric grid fault conditions. The study of the SG architecture was a first step toward designing power management and energy cost reduction models for DCs running under SGs. First, we present a Power Management Model (PMM) for DCs to estimate energy consumption cost. The PMM is a comprehensive model that takes many important quantities into account, such as DC power consumption, data center battery bank charging/discharging, backup generation operation during power outages, and power transactions between the main grid and the SG. Second, renewable energy, such as wind energy, is integrated with the SG to minimize DC energy consumption cost. Third, forecasting algorithms are introduced in the PMM to predict DC power consumption, wind energy generation, and main grid power availability for the SG. The forecasting algorithms are employed for day-ahead and week-ahead prediction horizons; their purpose is to manage power generation and consumption and to reduce energy prices. Fourth, we formulate a chargeback model for DC customers to calculate the cost of on-demand cloud services. The DC energy consumption cost estimated through the PMM is integrated with the other operational and capital expenditures to calculate the per-server utilization cost for DC customers. Finally, the effectiveness of the proposed models is evaluated on real-world data sets.
APA, Harvard, Vancouver, ISO, and other styles
5

Somani, Ankit. "Advanced thermal management strategies for energy-efficient data centers." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/26527.

Full text
Abstract:
Thesis (M.S.)--Mechanical Engineering, Georgia Institute of Technology, 2009. Committee Chair: Joshi, Yogendra; Committee Member: Ghiaasiaan, Mostafa; Committee Member: Schwan, Karsten. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
6

Bergqvist, Sofia. "Energy Efficiency Improvements Using DC in Data Centres." Thesis, Uppsala universitet, Institutionen för fysik och astronomi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-155096.

Full text
Abstract:
The installed power usage in a data centre will often amount to several megawatts (MW). The total power consumption of the data centres in the world is comparable to that of the air traffic. The high energy costs and carbon dioxide emissions resulting from the operation of a data centre call for alternative, more efficient, solutions for the power supply design. One proposed solution to decrease the energy usage is to use a direct current power supply (DC UPS) for all the servers in the data centre and thereby reduce the number of conversions between AC and DC. The aim of this thesis was to determine whether such a DC solution brings reduced power consumption compared to a traditional setup and, if this is the case, how big the savings are. Measurements were carried out on different equipment and thereafter the power consumption was calculated. The conclusion was that up to 25% in electricity use can be saved when using a DC power supply compared to the traditional design. Other benefits that come with the DC technology are simplified design, improved reliability and lowered investment costs. Moreover, the use of DC in data centres enables a more efficient integration of renewable energy technologies into the power supply design.
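A back-of-the-envelope sketch of why removing AC/DC conversion stages saves electricity; the stage efficiencies below are assumed values for illustration and do not reproduce the thesis measurements (which reported savings of up to 25%):

```python
# Back-of-the-envelope comparison of cumulative conversion efficiency for a
# traditional AC distribution chain vs. a DC UPS chain. Stage efficiencies are
# illustrative assumptions, not the measured values from the thesis.
from functools import reduce

def chain_efficiency(stages):
    return reduce(lambda acc, eff: acc * eff, stages, 1.0)

ac_chain = [0.92, 0.98, 0.90]   # UPS double conversion, PDU/transformer, server PSU (AC->DC)
dc_chain = [0.96, 0.99, 0.94]   # rectifier, DC distribution, server DC/DC stage

eff_ac = chain_efficiency(ac_chain)
eff_dc = chain_efficiency(dc_chain)
print(f"AC chain efficiency: {eff_ac:.1%}")
print(f"DC chain efficiency: {eff_dc:.1%}")
print(f"Electricity saved for the same IT load: {1 - eff_ac / eff_dc:.1%}")
```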
APA, Harvard, Vancouver, ISO, and other styles
7

Feller, Eugen. "Autonomic and Energy-Efficient Management of Large-Scale Virtualized Data Centers." Phd thesis, Université Rennes 1, 2012. http://tel.archives-ouvertes.fr/tel-00785090.

Full text
Abstract:
Large-scale virtualized data centers require cloud providers to implement scalable, autonomic, and energy-efficient cloud management systems. To address these challenges this thesis provides four main contributions. The first one proposes Snooze, a novel Infrastructure-as-a-Service (IaaS) cloud management system, which is designed to scale across many thousands of servers and virtual machines (VMs) while being easy to configure, highly available, and energy efficient. For scalability, Snooze performs distributed VM management based on a hierarchical architecture. To support ease of configuration and high availability Snooze implements self-configuring and self-healing features. Finally, for energy efficiency, Snooze integrates a holistic energy management approach via VM resource (i.e. CPU, memory, network) utilization monitoring, underload/overload detection and mitigation, VM consolidation (by implementing a modified version of the Sercon algorithm), and power management to transition idle servers into a power saving mode. A highly modular Snooze prototype was developed and extensively evaluated on the Grid'5000 testbed using realistic applications. Results show that: (i) distributed VM management does not impact submission time; (ii) fault tolerance mechanisms do not impact application performance and (iii) the system scales well with an increasing number of resources thus making it suitable for managing large-scale data centers. We also show that the system is able to dynamically scale the data center energy consumption with its utilization thus allowing it to conserve substantial power amounts with only limited impact on application performance. Snooze is an open-source software under the GPLv2 license. The second contribution is a novel VM placement algorithm based on the Ant Colony Optimization (ACO) meta-heuristic. ACO is interesting for VM placement due to its polynomial worst-case time complexity, close to optimal solutions and ease of parallelization. Simulation results show that while the scalability of the current algorithm implementation is limited to a smaller number of servers and VMs, the algorithm outperforms the evaluated First-Fit Decreasing greedy approach in terms of the number of required servers and computes close to optimal solutions. In order to enable scalable VM consolidation, this thesis makes two further contributions: (i) an ACO-based consolidation algorithm; (ii) a fully decentralized consolidation system based on an unstructured peer-to-peer network. The key idea is to apply consolidation only in small, randomly formed neighbourhoods of servers. We evaluated our approach by emulation on the Grid'5000 testbed using two state-of-the-art consolidation algorithms (i.e. Sercon and V-MAN) and our ACO-based consolidation algorithm. Results show our system to be scalable as well as to achieve a data center utilization close to the one obtained by executing a centralized consolidation algorithm.
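For context, the First-Fit Decreasing greedy baseline mentioned in the abstract can be sketched as follows; this is a generic single-resource version with hypothetical demands, not Snooze's implementation:

```python
# Minimal First-Fit Decreasing (FFD) sketch for VM consolidation: the greedy
# baseline the abstract compares the ACO algorithm against. Single resource
# dimension and hypothetical capacities, for illustration only.

def ffd_placement(vm_demands, server_capacity):
    """Place VMs (sorted by decreasing demand) on the first server that fits."""
    servers = []                                   # residual capacity per active server
    placement = {}                                 # vm index -> server index
    for vm in sorted(range(len(vm_demands)), key=lambda i: -vm_demands[i]):
        for s, free in enumerate(servers):
            if vm_demands[vm] <= free:
                servers[s] -= vm_demands[vm]
                placement[vm] = s
                break
        else:                                      # no existing server fits: open a new one
            servers.append(server_capacity - vm_demands[vm])
            placement[vm] = len(servers) - 1
    return placement, len(servers)

demands = [0.6, 0.3, 0.5, 0.2, 0.8, 0.4, 0.1]      # normalized CPU demand per VM
placement, active = ffd_placement(demands, server_capacity=1.0)
print(f"{active} active servers, placement: {placement}")
```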
APA, Harvard, Vancouver, ISO, and other styles
8

Takouna, Ibrahim. "Energy-efficient and performance-aware virtual machine management for cloud data centers." Phd thesis, Universität Potsdam, 2014. http://opus.kobv.de/ubp/texte_eingeschraenkt_verlag/2014/7239/.

Full text
Abstract:
Virtualized cloud data centers provide on-demand resources, enable agile resource provisioning, and host heterogeneous applications with different resource requirements. These data centers consume enormous amounts of energy, increasing operational expenses, inducing high temperatures inside data centers, and raising carbon dioxide emissions. The increase in energy consumption can result from ineffective resource management that causes inefficient resource utilization. This dissertation presents detailed models and novel techniques and algorithms for virtual resource management in cloud data centers. The proposed techniques take into account Service Level Agreements (SLAs) and workload heterogeneity in terms of memory access demand and communication patterns of web applications and High Performance Computing (HPC) applications. To evaluate our proposed techniques, we use simulation and real workload traces of web applications and HPC applications and compare our techniques against other recently proposed techniques using several performance metrics. The major contributions of this dissertation are the following: a proactive resource provisioning technique based on robust optimization to increase the hosts' availability for hosting new VMs while minimizing the idle energy consumption. Additionally, this technique mitigates undesirable changes in the power state of the hosts, by which the hosts' reliability can be enhanced by avoiding failure during a power state change. The proposed technique exploits the range-based prediction algorithm for implementing robust optimization, taking into consideration the uncertainty of demand. An adaptive range-based prediction for predicting workload with high fluctuations in the short term. The range prediction is implemented in two ways: standard deviation and median absolute deviation. The range is changed based on an adaptive confidence window to cope with the workload fluctuations. A robust VM consolidation for efficient energy and performance management to achieve equilibrium between energy and performance trade-offs. Our technique reduces the number of VM migrations compared to recently proposed techniques. This also contributes to a reduction in energy consumption by the network infrastructure. Additionally, our technique reduces SLA violations and the number of power state changes. A generic model for the network of a data center to simulate the communication delay and its impact on VM performance, as well as network energy consumption. In addition, a generic model for a memory-bus of a server, including latency and energy consumption models for different memory frequencies. This allows simulating the memory delay and its influence on VM performance, as well as memory energy consumption. Communication-aware and energy-efficient consolidation for parallel applications to enable the dynamic discovery of communication patterns and reschedule VMs using migration based on the determined communication patterns. A novel dynamic pattern discovery technique is implemented, based on signal processing of network utilization of VMs instead of using the information from the hosts' virtual switches or initiation from VMs. The result shows that our proposed approach reduces the network's average utilization, achieves energy savings due to reducing the number of active switches, and provides better VM performance compared to CPU-based placement. Memory-aware VM consolidation for independent VMs, which exploits the diversity of VMs' memory access to balance memory-bus utilization of hosts. The proposed technique, Memory-bus Load Balancing (MLB), reactively redistributes VMs according to their utilization of a memory-bus using VM migration to improve the performance of the overall system. Furthermore, Dynamic Voltage and Frequency Scaling (DVFS) of the memory and the proposed MLB technique are combined to achieve better energy savings.
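A simplified sketch of the range-based (interval) prediction idea described above, using a fixed sliding window with either standard deviation or median absolute deviation; the window, the factor k, and the trace are illustrative choices rather than the dissertation's adaptive scheme:

```python
# Simplified sketch of range-based workload prediction over a sliding window:
# predict an interval [center - k*spread, center + k*spread] using either the
# standard deviation or the median absolute deviation (MAD) of recent samples.
# Window size, k, and the trace below are illustrative choices, not the
# adaptive confidence-window scheme described in the dissertation.
import statistics

def predict_range(history, window=12, k=2.0, robust=False):
    recent = history[-window:]
    if robust:
        center = statistics.median(recent)
        spread = statistics.median(abs(x - center) for x in recent)   # MAD
    else:
        center = statistics.mean(recent)
        spread = statistics.stdev(recent)
    return max(0.0, center - k * spread), center + k * spread

cpu_trace = [30, 32, 35, 33, 60, 31, 34, 36, 33, 35, 58, 34]   # % utilization samples
print("std-based interval:", predict_range(cpu_trace))
print("MAD-based interval:", predict_range(cpu_trace, robust=True))
```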
APA, Harvard, Vancouver, ISO, and other styles
9

Samadiani, Emad. "Energy efficient thermal management of data centers via open multi-scale design." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/37218.

Full text
Abstract:
Data centers are computing infrastructure facilities that house arrays of electronic racks containing high power dissipation data processing and storage equipment whose temperature must be maintained within allowable limits. In this research, the sustainable and reliable operations of the electronic equipment in data centers are shown to be possible through the Open Engineering Systems paradigm. A design approach is developed to bring adaptability and robustness, two main features of open systems, in multi-scale convective systems such as data centers. The presented approach is centered on the integration of three constructs: a) Proper Orthogonal Decomposition (POD) based multi-scale modeling, b) compromise Decision Support Problem (cDSP), and c) robust design to overcome the challenges in thermal-fluid modeling, having multiple objectives, and inherent variability management, respectively. Two new POD based reduced order thermal modeling methods are presented to simulate multi-parameter dependent temperature field in multi-scale thermal/fluid systems such as data centers. The methods are verified to achieve an adaptable, robust, and energy efficient thermal design of an air-cooled data center cell with an annual increase in the power consumption for the next ten years. Also, a simpler reduced order modeling approach centered on POD technique with modal coefficient interpolation is validated against experimental measurements in an operational data center facility.
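For orientation, snapshot POD of a temperature field reduces to an SVD of mean-subtracted snapshots; the sketch below uses random stand-in data and the generic textbook procedure, not the specific interpolation scheme validated in the dissertation:

```python
# Minimal sketch of snapshot POD (proper orthogonal decomposition) via SVD:
# collect temperature-field snapshots as columns, subtract the mean field,
# keep the leading modes, and reconstruct a field from a few coefficients.
# Random data stand in for CFD/experimental snapshots.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 500, 40
snapshots = rng.normal(25.0, 2.0, (n_points, n_snapshots))   # fake temperature fields [deg C]

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

r = 5                                                  # number of retained POD modes
modes = U[:, :r]                                       # spatial basis
coeffs = modes.T @ (snapshots[:, [0]] - mean_field)    # project first snapshot
reconstruction = mean_field + modes @ coeffs

energy_captured = (s[:r] ** 2).sum() / (s ** 2).sum()
print(f"{r} modes capture {energy_captured:.1%} of snapshot variance")
print("max reconstruction error:", float(abs(reconstruction - snapshots[:, [0]]).max()))
```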
APA, Harvard, Vancouver, ISO, and other styles
10

RUIU, PIETRO. "Energy Management in Large Data Center Networks." Doctoral thesis, Politecnico di Torino, 2018. http://hdl.handle.net/11583/2706336.

Full text
Abstract:
In the era of digitalization, one of the most challenging research topics concerns reducing the energy consumption of ICT equipment to counter global climate change. The ICT world is very sensitive to the problem of greenhouse gas (GHG) emissions and for several years has been implementing countermeasures to reduce wasted energy and increase the efficiency of infrastructure: the total embodied emissions of end-use devices have decreased significantly, networks have become more energy efficient, and trends such as virtualization and dematerialization will continue to make equipment more efficient. One of the main contributors to GHG emissions is the data center industry, which provisions end users with the computing and communication resources needed to access the vast majority of services online and on a pay-as-you-go basis. Data centers require a tremendous amount of energy to operate; since the efficiency of cooling systems is increasing, more research effort should be put into making the IT system green, as it is becoming the major contributor to energy consumption. The network being one of the non-negligible contributors to energy consumption in data centers, several architectures have been designed with the goal of improving data center energy efficiency. These architectures, called Data Center Networks (DCNs), provide interconnections among the computing servers and between the servers and the Internet, according to specific layouts. In my PhD I have extensively investigated the energy efficiency of data centers, working on different projects that tackle the problem from different perspectives. The research can be divided into two main parts, with energy proportionality as the connecting thread. The main focus of the work is the trade-off between size and energy efficiency of data centers, with the aim of finding a relationship between scalability and energy proportionality. In this regard, the energy consumption of different data center architectures has been analyzed while varying the size in terms of numbers of servers and switches. Extensive simulation experiments, performed in small- and large-scale scenarios, unveil the ability of network-aware allocation policies to load the data center in an energy-proportional manner and the robustness of classical two- and three-tier designs under network-oblivious allocation strategies. The concept of energy proportionality, applied to the whole DCN and used as an efficiency metric, is one of the main contributions of the work. Energy proportionality is a property defining the degree of proportionality between load and the energy spent to support that load; devices are energy proportional when any increase in load corresponds to a proportional increase in energy consumption. A peculiar feature of our analysis is the consideration of the whole data center, i.e., both computing and communication devices are taken into account. Our methodology consists of an asymptotic analysis of data center consumption as its size (in terms of servers) becomes very large. In our analysis, we investigate the impact of three different allocation policies on the energy proportionality of computing and networking equipment for different DCNs, including 2-Tier, 3-Tier and Jupiter topologies. For the evaluation, the size of the DCNs varies to accommodate up to several thousands of computing servers. Validation of the analysis is conducted through simulations.
We propose new metrics with the objective to characterize in a holistic manner the energy proportionality in data centers. The experiments unveil that, when consolidation policies are in place and regardless of the type of architecture, the size of the DCN plays a key role, i.e., larger DCNs containing thousands of servers are more energy proportional than small DCNs.
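As an illustrative aside, one simple way to quantify energy proportionality is the deviation of a measured power curve from the ideal proportional line between zero and peak power; the power curve and the score below are invented and are not the metrics proposed in the thesis:

```python
# Sketch of a simple energy-proportionality score: compare measured power at
# each load level against the ideal proportional curve P_ideal(u) = u * P_peak,
# and report 1 - mean normalized deviation. The power curve below is invented;
# the thesis proposes its own (different) proportionality metrics.

def proportionality_score(loads, powers):
    """loads: utilization in [0,1]; powers: measured power (W) at those loads."""
    p_peak = max(powers)
    deviations = [abs(p - u * p_peak) / p_peak for u, p in zip(loads, powers)]
    return 1.0 - sum(deviations) / len(deviations)   # 1.0 = perfectly proportional

loads  = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
powers = [180, 210, 240, 275, 310, 350]   # typical server: high idle power
print(f"proportionality score = {proportionality_score(loads, powers):.2f}")
```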
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Energy efficiency in Data Centers"

1

Klingert, Sonja, Xavier Hesselbach-Serra, Maria Perez Ortega, and Giovanni Giuliani, eds. Energy-Efficient Data Centers. Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-55149-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Huusko, Jyrki, Hermann de Meer, Sonja Klingert, and Andrey Somov, eds. Energy Efficient Data Centers. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33645-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Klingert, Sonja, Marta Chinnici, and Milagros Rey Porto, eds. Energy Efficient Data Centers. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-15786-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Joshi, Yogendra, and Pramod Kumar, eds. Energy Efficient Thermal Management of Data Centers. Springer US, 2012. http://dx.doi.org/10.1007/978-1-4419-7124-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Joshi, Yogendra. Energy Efficient Thermal Management of Data Centers. Springer US, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ian, Steiner, and Saunders Winston, eds. Energy Efficient Servers: Blueprints for Data Center Optimization. Apress, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Beck, Fredric A. Energy smart data centers: Applying energy efficient design and technology to the digital information sector. Renewable Energy Policy Project, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

ASHRAE (Firm). Server efficiency: Metrics for computer servers and storage. ASHRAE, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

American Society of Heating, Refrigerating and Air-Conditioning Engineers, ed. Best practices for datacom facility energy efficiency. 2nd ed. American Society of Heating, Refrigerating, and Air-Conditioning Engineers, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Federspiel, Clifford. Recovery Act: Federspiel Controls (now Vigilent) and State of California Department of General Services data center energy efficient cooling control demonstration: Achieving instant energy savings with Vigilent. [California Energy Commission], 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Energy efficiency in Data Centers"

1

Basmadjian, Robert, Pascal Bouvry, Georges Da Costa, et al. "Green Data Centers." In Large-Scale Distributed Systems and Energy Efficiency. John Wiley & Sons, Inc, 2015. http://dx.doi.org/10.1002/9781118981122.ch6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tennina, Stefano, Marco Tiloca, Jan-Hinrich Hauer, et al. "Energy-Efficiency in Data Centers." In SpringerBriefs in Electrical and Computer Engineering. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37368-8_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gyarmati, László, and Tuan Anh Trinh. "Energy Efficiency of Data Centers." In Green IT: Technologies and Applications. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22179-8_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Baggini, Angelo, and Franco Bua. "Data Centres." In Electrical Energy Efficiency. John Wiley & Sons, Ltd, 2012. http://dx.doi.org/10.1002/9781119990048.ch12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bunse, Christian, Sonja Klingert, and Thomas Schulze. "GreenSLAs: Supporting Energy-Efficiency through Contracts." In Energy Efficient Data Centers. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33645-4_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Patterson, Michael K. "Energy Efficiency Metrics." In Energy Efficient Thermal Management of Data Centers. Springer US, 2012. http://dx.doi.org/10.1007/978-1-4419-7124-1_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pernici, Barbara, Cinzia Cappiello, Maria Grazia Fugini, et al. "Setting Energy Efficiency Goals in Data Centers: The GAMES Approach." In Energy Efficient Data Centers. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33645-4_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Taheri, Javid, and Albert Y. Zomaya. "Energy Efficiency Metrics for Data Centers." In Energy-Efficient Distributed Computing Systems. John Wiley & Sons, Inc., 2012. http://dx.doi.org/10.1002/9781118342015.ch9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

vor dem Berge, Micha, Georges Da Costa, Andreas Kopecki, et al. "Modeling and Simulation of Data Center Energy-Efficiency in CoolEmAll." In Energy Efficient Data Centers. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33645-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

vor dem Berge, Micha, Georges Da Costa, Mateusz Jarus, Ariel Oleksiak, Wojciech Piatek, and Eugen Volk. "Modeling Data Center Building Blocks for Energy-Efficiency and Thermal Simulations." In Energy-Efficient Data Centers. Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-55149-9_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Energy efficiency in Data Centers"

1

Vaccaro, Viviana, Lavinia Chiara Tagliabue, and Marco Aldinucci. "Sustainable Data Centers: Advancing Energy Efficiency and Resource Optimization." In 2025 33rd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing (PDP). IEEE, 2025. https://doi.org/10.1109/pdp66500.2025.00075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Krishnaram, Prasanna Chandran Melnatami. "Green AI: Optimizing Energy Efficiency of Workloads for Sustainable Data Centers." In 2025 IEEE Green Technologies Conference (GreenTech). IEEE, 2025. https://doi.org/10.1109/greentech62170.2025.10977613.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Beitelmal, Abdlmonem H., and Drazen Fabris. "Introducing Energy Efficiency Metrics for Server and Data Centers." In ASME 2011 International Mechanical Engineering Congress and Exposition. ASMEDC, 2011. http://dx.doi.org/10.1115/imece2011-64704.

Full text
Abstract:
New server and data center metrics are introduced to facilitate proper evaluation of data center power and cooling efficiency. These metrics will be used to help reduce the cost of operation and to provision data center cooling resources. The most relevant variables for these metrics are identified: the total facility power, the servers' idle power, the average server utilization, the cooling resources power, and the total IT equipment power. These metrics can be used to characterize and classify server and data center performance and energy efficiency regardless of size and location.
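The sketch below combines the variables listed in the abstract (facility, IT, cooling, idle power, and average utilization) into two generic indicators under assumed values; it does not reproduce the metrics defined in the paper:

```python
# Sketch of two generic indicators built from the variables the abstract lists:
# PUE (total facility power / IT power) and a crude utilization-weighted share
# of server power that actually tracks load. Illustrative formulas and numbers,
# not the metrics defined in the cited paper.

facility_kw = 900.0    # total facility power
it_kw       = 500.0    # total IT equipment power
cooling_kw  = 300.0    # cooling resources power
idle_kw     = 250.0    # aggregate server idle power
utilization = 0.35     # average server utilization

pue = facility_kw / it_kw
dynamic_fraction = (it_kw - idle_kw) / it_kw             # share of IT power that tracks load
useful_share = utilization * (it_kw - idle_kw) / it_kw   # crude "useful work" power share

print(f"PUE                        = {pue:.2f}")
print(f"cooling overhead           = {cooling_kw / it_kw:.2f} W per W of IT")
print(f"load-dependent IT power    = {dynamic_fraction:.0%}")
print(f"utilization-weighted share = {useful_share:.0%}")
```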
APA, Harvard, Vancouver, ISO, and other styles
4

Patel, Chandrakant, Ratnesh Sharma, Cullen Bash, and Sven Graupner. "Energy Aware Grid: Global Workload Placement Based on Energy Efficiency." In ASME 2003 International Mechanical Engineering Congress and Exposition. ASMEDC, 2003. http://dx.doi.org/10.1115/imece2003-41443.

Full text
Abstract:
Computing will be pervasive, and enablers of pervasive computing will be data centers housing computing, networking and storage hardware. The data center of tomorrow is envisaged as one containing thousands of single-board computing systems deployed in racks. A data center with 1,000 racks, over 30,000 square feet, would require 10 MW of power to power the computing infrastructure. At this power dissipation, an additional 5 MW would be needed by the cooling resources to remove the dissipated heat. At $100/MWh, the cooling alone would cost $4 million per annum for such a data center. The concept of the Computing Grid, based on coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations, is emerging as the new paradigm in distributed and pervasive computing for scientific as well as commercial applications. We envision a global network of data centers housing an aggregation of computing, networking and storage hardware. The increased compaction of such devices in data centers has created thermal and energy management issues that inhibit the sustainability of such a global infrastructure. In this paper, we propose the framework of the Energy Aware Grid, which will provide a global utility infrastructure explicitly incorporating energy efficiency and thermal management among data centers. Designed around an energy-aware co-allocator, workload placement decisions will be made across the Grid based on data center energy efficiency coefficients. The coefficient, evaluated by the data center's resource allocation manager, is a complex function of the data center thermal management infrastructure and the seasonal and diurnal variations. A detailed procedure for implementation of a test case is provided with an estimate of energy savings to justify the economics. An example workload deployment shown in the paper aspires to find the most energy-efficient data center in the global network of data centers. The locality-based energy efficiency in a data center is shown to arise from the use of ground-coupled loops in cold climates to lower the ambient temperature for heat rejection, e.g., computing and rejecting heat from a data center at a nighttime ambient of 20°C in New Delhi, India, while Phoenix, USA is at 45°C. The efficiency of the cooling system in the data center in New Delhi derives from the lower lift from evaporator to condenser. Besides the obvious advantage due to external ambient conditions, the paper also incorporates techniques that rate the efficiency arising from the internal thermo-fluid behavior of a data center in the workload placement decision.
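As a toy illustration of the placement idea, the sketch below ranks candidate sites by estimated total power for a fixed IT load, using an assumed chiller COP that degrades with ambient temperature; the COP model, coefficients, and site list are invented, whereas in the paper the efficiency coefficient is evaluated by each data center's resource allocation manager:

```python
# Toy sketch of energy-aware workload placement across a "grid" of data centers:
# rank sites by total power (IT + cooling) for a given IT load, where cooling
# power is estimated from an assumed COP that degrades with ambient temperature.
# The COP model, its coefficients, and the site temperatures are invented.

def cooling_cop(ambient_c: float) -> float:
    """Assumed chiller COP: higher when the heat-rejection temperature is low."""
    return max(1.5, 8.0 - 0.12 * ambient_c)

def total_power_kw(it_load_kw: float, ambient_c: float) -> float:
    return it_load_kw + it_load_kw / cooling_cop(ambient_c)

sites = {"New Delhi (night)": 20.0, "Phoenix (day)": 45.0, "Helsinki": 5.0}
it_load = 1000.0  # kW of compute to place

ranked = sorted(sites.items(), key=lambda kv: total_power_kw(it_load, kv[1]))
for name, t in ranked:
    print(f"{name:18s} ambient {t:4.1f} C -> {total_power_kw(it_load, t):7.1f} kW total")
print("place workload at:", ranked[0][0])
```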
APA, Harvard, Vancouver, ISO, and other styles
5

Dumitru, Iulia, Ioana Fagarasan, Sergiu Iliescu, Yanis Hadj Said, and Stephane Ploix. "Increasing Energy Efficiency in Data Centers Using Energy Management." In 2011 IEEE/ACM International Conference on Green Computing and Communications (GreenCom). IEEE, 2011. http://dx.doi.org/10.1109/greencom.2011.53.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

N, Preetham, Sourav Kanti Addya, Keerthan Kumar T G, and Saumya Hegde. "LitE: Load Balanced Virtual Data Center Embedding for Energy Efficiency in Data Centers." In ICDCN 2025: 26th International Conference on Distributed Computing and Networking. ACM, 2025. https://doi.org/10.1145/3700838.3700849.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bhattacharya, Tathagata, and Xiao Qin. "Modeling Energy Efficiency of Future Green Data centers." In 2020 11th International Green and Sustainable Computing Workshops (IGSC). IEEE, 2020. http://dx.doi.org/10.1109/igsc51522.2020.9291049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhu, Feng, Ying Lu, and Zhaohao Ding. "Batch Workloads Management for Data Centers Considering Nodes Efficiency." In 2018 2nd IEEE Conference on Energy Internet and Energy System Integration (EI2). IEEE, 2018. http://dx.doi.org/10.1109/ei2.2018.8582052.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Cader, Tahir, Levi Westra, and Andres Marquez. "Technologies for the Energy-Efficient Data Center." In ASME 2007 InterPACK Conference collocated with the ASME/JSME 2007 Thermal Engineering Heat Transfer Summer Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/ipack2007-33463.

Full text
Abstract:
Although semiconductor manufacturers have provided temporary relief with lower-power multi-core microprocessors, OEMs and data center operators continue to push the limits of individual rack power densities. It is not uncommon today for data center operators to deploy multiple 20 kW racks in a facility. Such rack densities are exacerbating the major issues of power and cooling in data centers. Data center operators are now forced to take a hard look at the efficiencies of their data centers. Malone and Belady (2006) have proposed three metrics, i.e., Power Usage Effectiveness (PUE), Data Center Efficiency (DCE), and the Energy-to-Acquisition Cost ratio (EAC), to help data center operators quickly quantify the efficiency of their data centers. In their paper, Malone and Belady present nominal values of PUE across a broad cross-section of data centers. PUE values are presented for data centers at four levels of optimization. One of these optimizations involves the use of Computational Fluid Dynamics (CFD). In the current paper, CFD is used to conduct an in-depth investigation of a liquid-cooled data center that would potentially be housed at the Pacific Northwest National Labs (PNNL). The boundary conditions used in the CFD model are based upon actual measurements on a rack of liquid-cooled servers housed at PNNL. The analysis shows that the liquid-cooled facility could achieve a PUE of 1.57 as compared to a PUE of 3.0 for a typical data center (the lower the PUE, the better, with values below 1.6 approaching ideal). The increase in data center efficiency also translates into an increase in the amount of IT equipment that can be deployed. At a PUE of 1.57, the analysis shows that 91% more IT equipment can be deployed as compared to the typical data center. The paper will discuss the analysis of the PUE, and will also explore the impact of raising data center efficiency via the use of multiple cooling technologies and CFD analysis. Complete results of the analyses will be presented in the paper.
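A quick worked check of the capacity claim above: for a fixed total facility power budget, IT power equals facility power divided by PUE, so moving from PUE 3.0 to 1.57 yields roughly 91% more deployable IT equipment (the 10 MW budget is an arbitrary illustrative figure):

```python
# Worked check of the abstract's capacity claim: for a fixed total facility
# power budget, IT power = facility power / PUE, so improving PUE from 3.0 to
# 1.57 allows roughly 3.0 / 1.57 - 1 = 91% more IT equipment to be deployed.
# The 10 MW facility budget is an arbitrary illustrative figure.

facility_budget_mw = 10.0
pue_typical, pue_liquid = 3.0, 1.57

it_typical = facility_budget_mw / pue_typical
it_liquid  = facility_budget_mw / pue_liquid
print(f"IT capacity at PUE {pue_typical}: {it_typical:.2f} MW")
print(f"IT capacity at PUE {pue_liquid}: {it_liquid:.2f} MW")
print(f"additional IT equipment: {it_liquid / it_typical - 1:.0%}")
```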
APA, Harvard, Vancouver, ISO, and other styles
10

Klein, Levente J., Sergio A. Bermudez, Fernando J. Marianno, Hendrik F. Hamann, and Prabjit Singh. "Energy Efficiency and Air Quality Considerations in Airside Economized Data Centers." In ASME 2015 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems collocated with the ASME 2015 13th International Conference on Nanochannels, Microchannels, and Minichannels. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/ipack2015-48349.

Full text
Abstract:
Many data center operators are considering the option to convert from mechanical cooling to free air cooling to improve energy efficiency. The main advantage of free air cooling is the elimination of chiller and air conditioning unit operation when the outdoor temperature falls below the data center temperature setpoint. Accidental introduction of gaseous pollutants into the data center along with the fresh air, and potential latency in the response of the control infrastructure to extreme events, are some of the main concerns for adopting outside air cooling in data centers. Recent developments in ultra-high-sensitivity corrosion sensors enable real-time monitoring of air quality and thus allow a better understanding of how airflow, relative humidity, and temperature fluctuations affect corrosion rates. Both the sensitivity of the sensors and the wireless networks' ability to detect and react rapidly to any contamination event make them reliable tools to prevent corrosion-related failures. A feasibility study is presented for eight legacy data centers that are evaluated for implementing free air cooling.
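As a simple illustration of the economization idea, free-cooling availability can be estimated by counting hours when the outdoor temperature is below the supply setpoint; the hourly trace below is synthetic and ignores the humidity and air-quality constraints the paper addresses:

```python
# Tiny sketch: estimate airside-economizer ("free cooling") hours by counting
# hours when outdoor temperature is below the supply-air setpoint. The hourly
# trace is synthetic, and humidity/air-quality constraints discussed in the
# paper are ignored here for simplicity.
import math

setpoint_c = 24.0
# Synthetic one-week hourly trace: daily sinusoid around 18 C with an 8 C swing.
outdoor = [18.0 + 8.0 * math.sin(2 * math.pi * (h % 24) / 24.0) for h in range(24 * 7)]

free_hours = sum(1 for t in outdoor if t < setpoint_c)
print(f"economizer available {free_hours}/{len(outdoor)} hours "
      f"({free_hours / len(outdoor):.0%} of the week)")
```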
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Energy efficiency in Data Centers"

1

Tschudi, William, Tengfang Xu, Dale Sartor, Jon Koomey, Bruce Nordman, and Osman Sezgen. Energy efficient data centers. Office of Scientific and Technical Information (OSTI), 2004. http://dx.doi.org/10.2172/841561.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Energy Efficiency in Data Centers: Recommendations for Government/Industry Coordination. Office of Scientific and Technical Information (OSTI), 2008. http://dx.doi.org/10.2172/1218528.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mahdavi, Rod, and William Tschudi. Wireless Sensor Network for Improving the Energy Efficiency of Data Centers. Office of Scientific and Technical Information (OSTI), 2012. http://dx.doi.org/10.2172/1171531.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ganguly, Suprotim, Sanyukta Raje, Satish Kumar, Dale Sartor, and Steve Greenberg. Accelerating Energy Efficiency in Indian Data Centers. Final Report for Phase I Activities. Office of Scientific and Technical Information (OSTI), 2016. http://dx.doi.org/10.2172/1249186.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hamann, Hendrik, and Levente Klein. A Measurement Management Technology for Improving Energy Efficiency in Data Centers and Telecommunication Facilities. Office of Scientific and Technical Information (OSTI), 2012. http://dx.doi.org/10.2172/1044604.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Shehabi, Arman. Energy Demands and Efficiency Strategies in Data Center Buildings. Office of Scientific and Technical Information (OSTI), 2009. http://dx.doi.org/10.2172/982905.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Metzger, I., and O. Van Geet. Data Center Energy Efficiency and Renewable Energy Site Assessment: Anderson Readiness Center; Salem, Oregon. Office of Scientific and Technical Information (OSTI), 2014. http://dx.doi.org/10.2172/1135693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gidwani, Mohit, Narine Avetisyan, Yoonee Jeong, and Christine Apikul. The Emerging Data Center Market in Armenia and Opportunities for Sustainable Growth. Asian Development Bank, 2024. https://doi.org/10.22617/brf240515-2.

Full text
Abstract:
This brief analyzes Armenia’s emerging data center market and shows why it needs to expand its green energy, adopt carbon-efficient technology, and accommodate the latest technology trends to maximize its growth potential. The brief highlights forecasts for Armenia to increase its data center racks from 550 to up to 2,000 by 2028 and explains how its location alongside government support and financing gives it a competitive advantage. It recommends Armenia look to enhance its resilient international connectivity, accommodate generative artificial intelligence and edge computing requirements, and promote greener data centers to help make the energy-intensive sector’s growth more sustainable.
APA, Harvard, Vancouver, ISO, and other styles
9

Mathew, Paul, Steve Greenberg, Srirupa Ganguly, Dale Sartor, and William Tschudi. How Does Your Data Center Measure Up? Energy Efficiency Metrics and Benchmarks for Data Center Infrastructure Systems. Office of Scientific and Technical Information (OSTI), 2009. http://dx.doi.org/10.2172/961535.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zuo, Wangda, Michael Wetter, James VanGilder, et al. Improving Data Center Energy Efficiency Through End-to-End Cooling Modeling and Optimization. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1773506.

Full text
APA, Harvard, Vancouver, ISO, and other styles