Academic literature on the topic 'Datacenter modell'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Datacenter modell.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Datacenter modell"

1

Zhao, Mengmeng, and Xiaoying Wang. "A Synthetic Approach for Datacenter Power Consumption Regulation towards Specific Targets in Smart Grid Environment." Energies 14, no. 9 (May 2, 2021): 2602. http://dx.doi.org/10.3390/en14092602.

Abstract:
With the large-scale grid connection of renewable energy sources, the frequency stability problem of the power system has become increasingly prominent. At the same time, the development of cloud computing and its applications has drawn attention to the high energy consumption of datacenters. It has therefore been proposed to exploit the high power consumption and high flexibility of datacenters to respond to the demand response signals of the smart grid and maintain the stability of the power system. Specifically, this paper establishes a synthetic model that integrates multiple methods to precisely control and regulate the power consumption of the datacenter while minimizing the total adjustment cost. First, according to the overall characteristics of the datacenter, power consumption models of the servers and cooling systems were established. Second, by controlling the temperature, different kinds of energy storage devices, load characteristics, and server characteristics, the working process of the various regulation methods and the corresponding adjustment cost models were obtained. Then, the cost and penalty of each power regulation method were incorporated. Finally, the proposed dynamic synthetic approach was used to accurately adjust the power consumption of the datacenter at the least adjustment cost. Comparative analysis of the evaluation experiments shows that the proposed approach regulates the power consumption of the datacenter better, and at lower adjustment cost, than alternative methods.
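For context on the server and cooling power models this abstract mentions, here is a minimal sketch using common first-order approximations (server power linear in utilization, cooling as a fixed PUE overhead). These are illustrative assumptions with invented numbers, not the paper's exact models:

```python
# Illustrative first-order datacenter power model (hypothetical parameters).
# P_server = P_idle + (P_max - P_idle) * u; cooling approximated via PUE.

P_IDLE, P_MAX = 100.0, 300.0   # watts per server (assumed)
PUE = 1.4                      # facility overhead factor (assumed)

def server_power(utilization: float) -> float:
    """Linear server power model: idle power plus utilization-proportional part."""
    return P_IDLE + (P_MAX - P_IDLE) * utilization

def facility_power(utilizations) -> float:
    """Total facility draw: IT power scaled by the PUE cooling/overhead factor."""
    return PUE * sum(server_power(u) for u in utilizations)

def regulate(utilizations, target_watts):
    """Toy regulation step: scale utilization toward a grid-requested target."""
    scale = target_watts / facility_power(utilizations)
    return [min(1.0, max(0.0, u * scale)) for u in utilizations]

fleet = [0.6] * 100
print(facility_power(fleet))                        # baseline draw (30.8 kW)
print(facility_power(regulate(fleet, 25_000.0)))    # after one regulation step
```

Note that one step undershoots the target because the idle power does not scale with utilization; that residual gap is exactly what the paper's other knobs (cooling, storage, load shaping) are there to close.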
2

Da Silva, Luana Barreto, Leonardo Henrique da Silva Bomfim, George Leite Junior, and Marcelino Nascimento De Oliveira. "TI Verde: Uma Proposta de Economia Energética usando Virtualização." Interfaces Científicas - Exatas e Tecnológicas 1, no. 2 (May 28, 2015): 57–74. http://dx.doi.org/10.17564/2359-4942.2015v1n2p57-74.

Abstract:
Information Technology (IT) is one of the main contributors to environmental problems, and reducing the cost and maintenance burden of datacenters is a challenge IT managers must overcome. To promote the use of computing resources in a way that is efficient and less harmful to the environment, the Green IT approach proposes sustainable ways to operate a datacenter. One of these is datacenter virtualization, in which a single physical server hosts several virtual ones, each working as an individual server. It is important to analyze the viability of keeping a virtualized datacenter by analyzing the availability of its servers: if virtualization enables a cost reduction, it can also make the system more susceptible to downtime. This work analyzes the availability of two environments, one with a virtualized server and the other with non-virtualized servers. The services offered are e-mail, DNS, web server, and file server, a typical scenario in many companies. A case study is developed using analytical modeling with fault trees and Markov chains: the fault tree models the servers, and Markov chains model the behavior of each hardware and software component. The non-virtualized environment is composed of four servers, each providing a specific service, while the virtualized one consists of a single server with four virtual machines, each providing one service. Analysis of the models shows that although the non-virtualized system has less downtime, because there is less dependence between the services, the difference, 0.06% annually, becomes irrelevant when compared to the benefits brought by virtualization.
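As a worked illustration of the availability reasoning in this abstract: the steady-state availability of a two-state (up/down) Markov model is A = MTTF / (MTTF + MTTR), and independent components in series multiply. The sketch below uses invented failure data, not the thesis's fitted parameters:

```python
# Steady-state availability from a two-state Markov model:
# A = MTTF / (MTTF + MTTR). All numbers below are hypothetical.

def availability(mttf_h: float, mttr_h: float) -> float:
    return mttf_h / (mttf_h + mttr_h)

A_hw  = availability(mttf_h=8760.0, mttr_h=8.0)    # physical server
A_os  = availability(mttf_h=2880.0, mttr_h=1.0)    # operating system
A_hyp = availability(mttf_h=5000.0, mttr_h=1.0)    # hypervisor (virtualized case)
A_svc = availability(mttf_h=2000.0, mttr_h=0.5)    # one service (mail, DNS, ...)

# Non-virtualized: all 4 services up means 4 independent (hw, os, svc) chains up.
A_nonvirt = (A_hw * A_os * A_svc) ** 4

# Virtualized: one shared (hw, os, hypervisor) chain, then 4 VM service chains.
A_virt = A_hw * A_os * A_hyp * (A_os * A_svc) ** 4

hours = 365 * 24
print(f"non-virtualized downtime: {(1 - A_nonvirt) * hours:.1f} h/year")
print(f"virtualized downtime:     {(1 - A_virt) * hours:.1f} h/year")
```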
3

Osman, Ahmed, Assim Sagahyroon, Raafat Aburukba, and Fadi Aloul. "Optimization of energy consumption in cloud computing datacenters." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 1 (February 1, 2021): 686. http://dx.doi.org/10.11591/ijece.v11i1.pp686-698.

Abstract:
Cloud computing has emerged as a practical paradigm for providing IT resources, infrastructure and services. This has led to the establishment of datacenters that have substantial energy demands for their operation. This work investigates the optimization of energy consumption in cloud datacenters using energy-efficient allocation of tasks to resources. The work seeks to develop formal optimization models that minimize the energy consumption of computational resources and evaluates the use of existing optimization solvers in testing these models. Integer linear programming (ILP) techniques are used to model the scheduling problem. The objective is to minimize the total power consumed by the active and idle cores of the servers' CPUs while meeting a set of constraints. Next, we use these models to carry out a detailed performance comparison between a selected set of generic ILP and 0-1 Boolean satisfiability (SAT)-based solvers in solving the ILP formulations. Simulation results indicate that in some cases the developed models saved up to 38% in energy consumption when compared to common techniques such as round robin. Furthermore, results also showed that generic ILP solvers had superior performance compared to SAT-based ILP solvers, especially as the number of tasks and resources grows.
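To make the objective concrete, here is a toy, brute-force version of the energy-minimizing assignment the abstract formulates as an ILP. Server parameters are invented; a real instance would be handed to an ILP or SAT solver as in the paper:

```python
# Toy scheduling objective: assign tasks (core demands) to servers so that
# total CPU power (idle plus active-core power of powered-on servers) is
# minimized. Brute force stands in for the paper's ILP/SAT solvers.
from itertools import product

servers = [  # (cores, idle_watts, watts_per_active_core) -- assumed values
    (8, 50.0, 10.0),
    (8, 40.0, 12.0),
    (4, 20.0, 15.0),
]
tasks = [2, 3, 1, 4]  # cores demanded by each task

def power(assignment):
    """Total power of an assignment: tuple mapping task index -> server index."""
    used = [0] * len(servers)
    for demand, s in zip(tasks, assignment):
        used[s] += demand
    total = 0.0
    for (cores, idle, per_core), u in zip(servers, used):
        if u > cores:          # capacity violated
            return float("inf")
        if u > 0:              # server powered on
            total += idle + per_core * u
    return total

best = min(product(range(len(servers)), repeat=len(tasks)), key=power)
print(best, power(best))
```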
4

Lu, Yao, John Panneerselvam, Lu Liu, and Yan Wu. "RVLBPNN: A Workload Forecasting Model for Smart Cloud Computing." Scientific Programming 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/5635673.

Abstract:
Given the increasing deployments of Cloud datacentres and the excessive usage of server resources, their associated energy and environmental implications are also increasing at an alarming rate. Cloud service providers are under immense pressure to significantly reduce both such implications for promoting green computing. Maintaining the desired level of Quality of Service (QoS) without violating the Service Level Agreement (SLA), whilst attempting to reduce the usage of the datacentre resources is an obvious challenge for the Cloud service providers. Scaling the level of active server resources in accordance with the predicted incoming workloads is one possible way of reducing the undesirable energy consumption of the active resources without affecting the performance quality. To this end, this paper analyzes the dynamic characteristics of the Cloud workloads and defines a hierarchy for the latency sensitivity levels of the Cloud workloads. Further, a novel workload prediction model for energy efficient Cloud Computing is proposed, named RVLBPNN (Rand Variable Learning Rate Backpropagation Neural Network) based on BPNN (Backpropagation Neural Network) algorithm. Experiments evaluating the prediction accuracy of the proposed prediction model demonstrate that RVLBPNN achieves an improved prediction accuracy compared to the HMM and Naïve Bayes Classifier models by a considerable margin.
5

Alouf, Sara, and Alain Jean-Marie. "Short-Scale Stochastic Solar Energy Models: A Datacenter Use Case." Mathematics 8, no. 12 (November 27, 2020): 2127. http://dx.doi.org/10.3390/math8122127.

Abstract:
Modeling the amount of solar energy received by a photovoltaic panel is an essential part of green IT research. The specific motivation of this work is the management of the energy consumption of large datacenters. We propose a new stochastic model for the solar irradiance that features minute-scale variations and is therefore suitable for short-term control of performances. Departing from previous models, we use a weather-oriented classification of days obtained from past observations to parameterize the solar source. We demonstrate through extensive simulations, using real workloads, that our model outperforms the existing ones in predicting performance metrics related to energy storage.
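A sketch of the paper's central idea: a weather-class-conditioned stochastic irradiance source with minute-scale variation. The classes, weights, and noise parameters below are invented for illustration and are not the paper's calibration:

```python
# Class-conditioned, minute-scale irradiance generator: pick a weather class
# for the day, then add small stochastic fluctuations to a clear-sky profile.
import math, random

DAY_CLASSES = {            # (mean attenuation, fluctuation std) -- assumed
    "clear":    (0.95, 0.02),
    "mixed":    (0.60, 0.15),
    "overcast": (0.25, 0.05),
}

def irradiance_day(day_class: str, peak_w_m2: float = 1000.0):
    atten, sigma = DAY_CLASSES[day_class]
    samples, x = [], atten
    for minute in range(6 * 60, 18 * 60):            # 06:00 to 18:00
        clear_sky = peak_w_m2 * math.sin(math.pi * (minute - 360) / 720)
        # AR(1)-style fluctuation pulled toward the class mean attenuation
        x = 0.9 * x + 0.1 * atten + random.gauss(0.0, sigma)
        samples.append(max(0.0, clear_sky * min(1.0, x)))
    return samples

day = random.choices(list(DAY_CLASSES), weights=[0.4, 0.4, 0.2])[0]
profile = irradiance_day(day)
print(day, f"midday irradiance ~ {profile[len(profile) // 2]:.0f} W/m2")
```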
6

Kumar, Jitendra, and Ashutosh Kumar Singh. "Cloud datacenter workload estimation using error preventive time series forecasting models." Cluster Computing 23, no. 2 (October 26, 2019): 1363–79. http://dx.doi.org/10.1007/s10586-019-03003-2.

7

Bliedy, Doaa, Sherif Mazen, and Ehab Ezzat. "Datacentre Total Cost of Ownership (TCO) Models: A Survey." International Journal of Computer Science, Engineering and Applications 8, no. 2/3/4 (August 30, 2018): 47–62. http://dx.doi.org/10.5121/ijcsea.2018.8404.

8

Berenjian, Golnaz, Homayun Motameni, Mehdi Golsorkhtabaramiri, and Ali Ebrahimnejad. "Distribution slack allocation algorithm for energy aware task scheduling in cloud datacenters." Journal of Intelligent & Fuzzy Systems 41, no. 1 (August 11, 2021): 251–72. http://dx.doi.org/10.3233/jifs-201696.

Abstract:
Given the ever-increasing growth of data and computational centers driven by high-performance computing systems, energy consumption has always been of great importance due to CO2 emissions that can have adverse effects on the environment. In recent years, notions such as "energy" and "Green Computing" have played crucial roles in scheduling parallel tasks in datacenters. Duplication and clustering strategies, as well as Dynamic Voltage and Frequency Scaling (DVFS) techniques, have focused on reducing energy consumption and optimizing performance parameters. Concerning the scheduling of a Directed Acyclic Graph (DAG) on datacenter processors equipped with DVFS, this paper proposes an energy- and time-aware algorithm based on dual-phase scheduling, called EATSDCDD, which combines duplication and clustering strategies with the distribution of slack time among the tasks of a cluster. DVFS and the control procedures of the proposed green system are mapped into Petri net-based models, which contribute to designing a multiple decision process. In the first phase, we use an intelligent combination of the duplication and clustering strategies to run the immediate tasks of the DAG, monitoring throughput while concentrating on reducing makespan and the energy consumed in the processors. The main idea of the proposed algorithm is to achieve a maximum reduction in energy consumption in the second phase; to this end, the slack time is distributed among non-critical dependent tasks. Additionally, we cover negotiation between consumers and service providers at rate μ based on a Green Service Level Agreement (GSLA) to achieve higher energy savings. Eventually, a set of data established for the experiments and different parameters of the constructed random DAGs are assessed to examine the efficiency of our proposed algorithm. The obtained results confirm that our algorithm outperforms the other algorithms considered in this study.
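A worked example of why distributing slack time saves energy under DVFS, using the textbook dynamic-power relation (P ≈ C·V²·f with V roughly proportional to f, so energy for a fixed cycle count scales about as f²). This is the standard relation, not the paper's exact cost model:

```python
# DVFS slack reclamation: dynamic power P ~ f^3, so energy for a fixed
# workload of N cycles, E = P * (N / f), scales roughly as f^2.

def energy(cycles: float, freq_ghz: float, k: float = 1.0) -> float:
    return k * freq_ghz ** 2 * cycles      # E proportional to f^2

cycles = 2e9          # task needs 2 Gcycles
f_max = 2.0           # GHz; runtime at f_max = 1.0 s
slack = 0.5           # seconds of slack before the task's deadline

t_allowed = cycles / (f_max * 1e9) + slack     # 1.5 s available
f_slow = cycles / (t_allowed * 1e9)            # ~1.33 GHz still meets deadline

saving = 1 - energy(cycles, f_slow) / energy(cycles, f_max)
print(f"run at {f_slow:.2f} GHz instead of {f_max:.2f} GHz "
      f"-> {100 * saving:.0f}% dynamic-energy saving")    # ~56%
```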
9

Jiang, Lili, Xiaolin Chang, Runkai Yang, Jelena Mišić, and Vojislav B. Mišić. "Model-Based Comparison of Cloud-Edge Computing Resource Allocation Policies." Computer Journal 63, no. 10 (July 1, 2020): 1564–83. http://dx.doi.org/10.1093/comjnl/bxaa062.

Abstract:
The rapid and widespread adoption of internet of things-related services advances the development of the cloud-edge framework, including multiple cloud datacenters (CDCs) and edge micro-datacenters (EDCs). This paper aims to apply analytical modeling techniques to assess the effectiveness of cloud-edge computing resource allocation policies from the perspective of improving the performance of cloud-edge service. We focus on two types of physical device (PD)-allocation policies that define how to select a PD from a CDC/EDC for service provision. The first is randomly selecting a PD, denoted as RandAvail. The other is denoted as SEQ, in which an available idle PD is selected to serve client requests only after the waiting queues of all busy PDs are full. We first present the models in the case of an On–Off request arrival process and verify the approximate accuracy of the proposed models through simulations. Then, we apply analytical models for comparing RandAvail and SEQ policies, in terms of request rejection probability and mean response time, under various system parameter settings.
10

Kang, Dong-Ki, Fawaz Al-Hazemi, Seong-Hwan Kim, Min Chen, Limei Peng, and Chan-Hyun Youn. "Adaptive VM Management with Two Phase Power Consumption Cost Models in Cloud Datacenter." Mobile Networks and Applications 21, no. 5 (March 31, 2016): 793–805. http://dx.doi.org/10.1007/s11036-016-0690-z.


Dissertations / Theses on the topic "Datacenter modell"

1

Santos, Reginaldo Hugo Szezupior dos [UNESP]. "Modelo de gestão de predição de falhas no gerenciamento da infraestrutura de datacenter." Universidade Estadual Paulista (UNESP), 2014. http://hdl.handle.net/11449/111128.

Abstract:
This work proposes a failure-prediction management model for Datacenter infrastructure management, based on good practices for managing IT (Information Technology) infrastructure. The model leads to a system that monitors the Datacenter and detects faults in legacy equipment, thereby preventing their collapse. Based on the ITIL library, but focusing only on the modules most relevant to the work's objectives, the model was implemented and used to monitor legacy equipment in a particular Datacenter, allowing the storage of information that could be collected remotely and used to prevent problems involving the monitored data. The system obtained from the model is composed of three basic devices: End Device, Rack Device, and Power Device. The results show that the system is feasible and can be applied effectively in real applications.
2

Santos, Reginaldo Hugo Szezupior dos. "Modelo de gestão de predição de falhas no gerenciamento da infraestrutura de datacenter." Ilha Solteira, 2014. http://hdl.handle.net/11449/111128.

Abstract:
Advisor: Nobuo Oki
Co-advisor: Jozué Vieira Filho
Committee: Ailton Akira Shinoda, Anna Diva Plasencia Lotufo, Valtemir Emerencio do Nascimento, and Ruy de Oliveira
Degree: Doctorate
This work proposes a failure-prediction management model for Datacenter infrastructure management, based on good practices for managing IT (Information Technology) infrastructure. The model leads to a system that monitors the Datacenter and detects faults in legacy equipment, thereby preventing their collapse. Based on the ITIL library, but focusing only on the modules most relevant to the work's objectives, the model was implemented and used to monitor legacy equipment in a particular Datacenter, allowing the storage of information that could be collected remotely and used to prevent problems involving the monitored data. The system obtained from the model is composed of three basic devices: End Device, Rack Device, and Power Device. The results show that the system is feasible and can be applied effectively in real applications.
3

Bennaceur, Mokhtar Walid. "Formal models for safety analysis of a Data Center system." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLV078/document.

Abstract:
A Data Center (DC) is a building whose purpose is to host IT devices that provide different internet services. To ensure the constant operation of these devices, energy is provided by the electrical system, and to keep them at a constant temperature, a cooling system is necessary. Each of these needs must be met continuously, because the breakdown of any one of them leads to unavailability of the whole DC system, which can be fatal for a company. To our knowledge, no safety and performance studies exist that take into account the whole DC system with the different interactions between its sub-systems; existing analyses are partial and focus on one sub-system, sometimes two. The main objective of this thesis is to contribute to the safety analysis of a DC system. To achieve this purpose, we first study each DC sub-system (electrical, thermal and network) separately, in order to define its characteristics. Each DC sub-system is a production system consisting of combinations of components that transform input supplies (energy for the electrical system, air flow for the thermal one, and packets for the network one) into outputs, which can be internet services. Currently, the existing safety analysis methods for these kinds of systems are inadequate, because the safety analysis must take into account not only the internal state of each component, but also the different production flows circulating between components. In this thesis, we consider a new modeling methodology called Production Trees (PT), which models the relationships between the components of a system with particular attention to the flows circulating between them. The PT modeling technique deals with one kind of flow at a time. Its application to the electrical sub-system is thus suitable, because there is only one kind of flow (the electric current). However, when there are dependencies between sub-systems, as with the thermal and network sub-systems, different kinds of flows need to be taken into account, making the original technique inadequate. We therefore extend it to deal with dependencies between the different kinds of flows in the DC, which makes it easy to assess the different safety indicators of the global DC system while taking into account the interactions between its sub-systems. Moreover, we compute some performance statistics. We validate the results of our approach by comparing them to those obtained by a simulation tool that we implemented based on queuing network theory. So far, Production Tree models are not tool-supported, so we propose a solution method based on the Probability Distribution of Capacity (PDC) of flows circulating in the DC system. We also implement the PT model using the AltaRica 3.0 modeling language and use its dedicated stochastic simulator to estimate the reliability indices of the system; this is very important for comparing and validating the results obtained with our assessment method. In parallel, we develop a tool which implements the PT solution algorithm with an interactive graphical interface that allows creating, editing and analyzing PT models. The tool also displays the results and generates AltaRica code, which can subsequently be analyzed using the stochastic simulator of the AltaRica 3.0 toolset.
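To illustrate the Probability Distribution of Capacity (PDC) idea on which the thesis's solution method rests: enumerate component states and combine capacities, with parallel feeds adding and a series element capping the flow. The components and numbers below are invented, not taken from the thesis:

```python
# Tiny PDC illustration: two parallel power feeds in series with one PDU.
from itertools import product
from collections import defaultdict

# (availability, capacity in kW) -- hypothetical
feed_a = (0.99, 60.0)
feed_b = (0.95, 60.0)
pdu    = (0.999, 100.0)

pdc = defaultdict(float)
for sa, sb, sp in product([0, 1], repeat=3):   # 1 = component up
    p = ((feed_a[0] if sa else 1 - feed_a[0])
         * (feed_b[0] if sb else 1 - feed_b[0])
         * (pdu[0] if sp else 1 - pdu[0]))
    parallel = sa * feed_a[1] + sb * feed_b[1]          # parallel feeds add
    delivered = min(parallel, pdu[1]) if sp else 0.0    # series element caps/cuts
    pdc[delivered] += p

for capacity, prob in sorted(pdc.items()):
    print(f"P(delivered = {capacity:5.1f} kW) = {prob:.6f}")
```

The resulting distribution over delivered capacity (0, 60, or 100 kW here) is exactly the kind of object the PDC-based solution method propagates through larger Production Tree models.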
4

Rodríguez, Rodríguez Pablo Andrés. "Rediseño del modelo de negocios del Datacenter de Telefónica Empresas en función de prácticas ITIL." Tesis, Universidad de Chile, 2007. http://www.repositorio.uchile.cl/handle/2250/102971.

5

Roozbeh, Amir. "Resource monitoring in a Network Embedded Cloud : An extension to OSPF-TE." Thesis, KTH, Radio Systems Laboratory (RS Lab), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-124367.

Abstract:
The notion of a "network embedded cloud", also known as a "network enabled cloud" or a "carrier cloud", is an emerging technology trend aiming to integrate network services while exploiting the on-demand nature of the cloud paradigm. A network embedded cloud is a distributed cloud environment where data centers are distributed at the edge of the operator's network. Distributing data centers or computing resources across the network introduces topological and geographical locality dependency. In the case of a network enabled cloud, in addition to information about available processing, memory, and storage capacity, resource management requires information about the network's topology and the available bandwidth on the links connecting the different nodes of the distributed cloud. This thesis project designed, implemented, and evaluated the use of open shortest path first with traffic engineering (OSPF-TE) for propagating resource status in a network enabled cloud. The information carried over OSPF-TE is used for network-aware scheduling of virtual machines. In particular, OSPF-TE was extended to convey virtualization- and processing-related information to all the nodes in the network enabled cloud. Modeling, emulation, and analysis show the proposed solution can provide the required data to a cloud management system by sending a data center's resource information in the form of a new opaque link-state advertisement with a minimum interval of 5 seconds. In this case, each embedded data center injects at most 38.4 bytes per second of additional traffic into the network.
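A quick consistency check of the two figures quoted in this abstract; the per-LSA size of 192 bytes is inferred from them here, not stated in the abstract itself:

```python
# One Cloud LSA flooded at the minimum 5 s interval producing 38.4 B/s
# implies a ~192-byte LSA (inferred, not stated in the abstract).
lsa_bytes = 192
interval_s = 5
print(lsa_bytes / interval_s)   # -> 38.4 bytes per second per embedded DC
```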
6

DAMASCENO, Julio Cesar. "UCLOUD: uma abordagem para implantação de nuvem privada para a administração pública federal." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/17341.

Abstract:
The increasing adoption and use of cloud computing has revolutionized the process of acquiring computing resources, causing a transfer from capital to operational expenditures. In this context, public and private organizations have had to adapt to the new reality of consumption and provisioning brought about by cloud computing. The natural path is to rent these computing resources from cloud providers, since few companies can invest the capital required to build data centers to host their own clouds and services. Although in recent years legislation has evolved and adapted regarding cloud computing usage, data storage, information security, and other issues, the need to create cloud service offerings has emerged within the public administration, especially because of impositions of the law, which places severe restrictions on the choice of service providers, data storage, and security. Against this background, and in order to meet the growing demand for management of cloud-based IT infrastructure, this work proposes a solution called UCloud, consisting of a methodology, presented as a reference model, and a set of tools that together enable the transition from a traditional data center infrastructure to a virtualized infrastructure and subsequently to a cloud environment. The proposal also enables offering the data center as a service, a scenario in which all infrastructure elements - networking, storage, CPU, and security - are virtualized and delivered as a service. The main contribution of this work is the definition and specification of a reference model for implementing cloud environments in public administration, respecting current legislation and following the federal recommendations for cloud computing usage, as well as the introduction and implementation of the data-center-as-a-service concept. As a result, we obtained a reference model for deploying private clouds in public administration in accordance with Brazilian law and following well-known market practices. Field experiments showed how the UCloud platform could meet the model's requirements through the management of virtualized and cloud environments, automating tasks previously performed manually or inefficiently.
7

Khader, Michael. "A fuzzy hierarchical decision model and its application in networking datacenters and in infrastructure acquisitions and design." ScholarWorks, 2009. https://scholarworks.waldenu.edu/dissertations/657.

Abstract:
According to several studies, an inordinate number of major business decisions to acquire, design, plan, and implement networking infrastructures fail. A networking infrastructure is a collaborative group of telecommunications systems providing services needed for a firm's operations and business growth. The analytic hierarchy process (AHP) is a well-established decision-making process used to analyze decisions related to networking infrastructures. AHP is concerned with decomposing complex decisions into a set of factors and solutions. However, AHP has difficulty handling uncertainty in decision information. This study addressed the research question of solutions to these AHP deficiencies. The solutions were accomplished through the development of a model capable of handling decisions with incomplete information and an uncertain decision operating environment. The model is based on the AHP framework and fuzzy set theory. Fuzzy sets are sets whose memberships are gradual; a member of a fuzzy set may have a strong, weak, or moderate membership. The methodology for this study was based primarily on the analytical research design method, which is neither quantitative nor qualitative, but based on mathematical concepts, proofs, and logic. The model's constructs were verified by a simulated practical case study based on current literature and the input of networking experts. To further verify the research objectives, the investigator developed, tested, and validated a software platform. The results showed tangible improvements in analyzing complex networking infrastructure decisions. The ability of this model to analyze decisions with incomplete information and an uncertain economic outlook can be employed in socially important areas such as renewable energy, forest management, and environmental studies to achieve large savings.
8

Chen, Jin. "Chorus: Model Knowledge Base for Performance Modeling in Datacenters." Thesis, 2011. http://hdl.handle.net/1807/31717.

Abstract:
Due to the imperative need to reduce the management costs, operators multiplex several concurrent applications in large datacenters. However, uncontrolled resource sharing between co-hosted applications often results in performance degradation problems, thus creating violations of service level agreements (SLAs) for service providers. Therefore, in order to meet per-application SLAs, per-application performance modeling for dynamic resource allocation in shared resource environments has recently become promising. We introduce Chorus, an interactive performance modeling framework for building application performance models incrementally and on the fly. It can be used to support complex, multi-tier resource allocation, and/or what-if performance inquiry in modern datacenters, such as Clouds. Chorus consists of (i) a declarative high-level language for providing semantic model guidelines, such as model templates, model functions, or sampling guidelines, from a sysadmin or a performance analyst, as model approximations to be learned or refined experimentally, (ii) a runtime engine for iteratively collecting experimental performance samples, validating and refining performance models. Chorus efficiently builds accurate models online, reuses and adjusts archival models over time, and combines them into an ensemble of models. We perform an experimental evaluation on a multi-tier server platform, using several industry-standard benchmarks. Our results show that Chorus is a flexible modeling framework and knowledge base for validating, extending and reusing existing models while adapting to new situations.
9

"A fuzzy hierarchical decision model and its application in networking datacenters and in infrastructure acquisitions and design." WALDEN UNIVERSITY, 2009. http://pqdtopen.proquest.com/#viewpdf?dispub=3344459.


Book chapters on the topic "Datacenter modell"

1

Schmid, Stefan, Nicolas Schnepf, and Jiří Srba. "Resilient Capacity-Aware Routing." In Tools and Algorithms for the Construction and Analysis of Systems, 411–29. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72016-2_22.

Abstract:
To ensure high availability, communication networks provide resilient routing mechanisms that quickly change routes upon failures. However, a fundamental algorithmic question underlying such mechanisms is hardly understood: how can one verify whether a given network reroutes flows along feasible paths, without violating capacity constraints, for up to k link failures? We chart the algorithmic complexity landscape of resilient routing under link failures, considering shortest path routing based on link weights as deployed, e.g., in the ECMP protocol. We study two models: a pessimistic model where flows interfere in a worst-case manner along equal-cost shortest paths, and an optimistic model where flows are routed in a best-case manner, and we present a complete picture of the algorithmic complexities. We further propose a strategic search algorithm that checks only the critical failure scenarios while still providing correctness guarantees. Our experimental evaluation on a benchmark of Internet and datacenter topologies confirms an improved performance of our strategic search by several orders of magnitude.
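To make the verification question concrete, here is a brute-force sketch on a toy topology (single shortest path per flow, so equal-cost tie-breaking and the paper's pessimistic/optimistic split are ignored; requires the networkx package). The paper's strategic search exists precisely to avoid this exhaustive enumeration:

```python
# For every set of up to k failed links, does shortest-path rerouting keep
# every flow within link capacity?
from itertools import combinations
import networkx as nx

G = nx.Graph()
for u, v, w, cap in [("s", "a", 1, 10), ("a", "t", 1, 10),
                     ("s", "b", 2, 5), ("b", "t", 2, 5)]:
    G.add_edge(u, v, weight=w, capacity=cap)

flows = [("s", "t", 7.0)]   # (source, destination, demand)
k = 1                       # number of link failures to survive

def feasible_after(failed_edges):
    H = G.copy()
    H.remove_edges_from(failed_edges)
    load = {}
    for s, t, demand in flows:
        if not nx.has_path(H, s, t):
            return False
        path = nx.shortest_path(H, s, t, weight="weight")
        for u, v in zip(path, path[1:]):
            load[frozenset((u, v))] = load.get(frozenset((u, v)), 0.0) + demand
    return all(d <= H[tuple(e)[0]][tuple(e)[1]]["capacity"]
               for e, d in load.items())

ok = all(feasible_after(fs)
         for r in range(k + 1)
         for fs in combinations(list(G.edges()), r))
# Here: False, since the s-b-t backup path (capacity 5) cannot carry 7 units.
print(f"feasible under all failure sets of size <= {k}: {ok}")
```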
2

Tovarňák, Daniel, Filip Nguyen, and Tomáš Pitner. "Distributed Event-Driven Model for Intelligent Monitoring of Cloud Datacenters." In Studies in Computational Intelligence, 87–92. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-01571-2_11.

3

Holmes, Ta’id. "Ming: Model- and View-Based Deployment and Adaptation of Cloud Datacenters." In Communications in Computer and Information Science, 317–38. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-62594-2_16.

4

Roy, Swagatam, Ahan Chatterjee, and Trisha Sinha. "An Econometric Overview on Growth and Impact of Online Crime and Analytics View to Combat Them." In Advances in Data Mining and Database Management, 115–57. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-4706-9.ch005.

Abstract:
In this chapter, the authors take a closer look at the economic relation to cybercrime and at an analytics method to combat it. First, they examine whether the increase in the unemployment rate among youths is the prime cause of the growth of cybercrime. They propose a model with the help of the Phillips curve and Okun's law to test this assumption. A brief discussion of the impact of cybercrime on economic growth is also presented, along with crime pattern detection and the impact of bitcoin on the current digital currency market. The authors propose an analytic method to combat the crime using the concept of game theory: they test the vulnerability of a cloud datacenter with a game in which two players follow non-cooperative strategies at the Nash equilibrium. Through the players' rational state decisions and the implementation of the MSWA algorithm, they simulate results through which they can check the dysfunctionality probabilities of the datacenters.
5

"NGDC maturity model." In Next Generation Datacenters in Financial Services, 275–79. Elsevier, 2009. http://dx.doi.org/10.1016/b978-0-12-374956-7.00017-6.

6

Patra, Sudhansu Shekhar, and R. K. Barik. "Dynamic Dedicated Server Allocation for Service Oriented Multi-Agent Data Intensive Architecture in Biomedical and Geospatial Cloud." In Cloud Technology, 2262–73. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-6539-2.ch107.

Abstract:
Cloud computing has recently received considerable attention as a promising approach for delivering Information and Communication Technology (ICT) services as a utility. In the process of providing these services, it is necessary to improve the utilization of datacenter resources, which operate in highly dynamic workload environments. Datacenters are integral parts of cloud computing: in a datacenter, hundreds or thousands of virtual servers run at any instant, hosting many tasks, while the cloud system keeps receiving batches of task requests. Service Oriented Architecture (SOA) and agent frameworks provide tools for developing distributed and multi-agent systems, which can be used to administer cloud computing environments that support these characteristics. This paper presents SOQM (Service Oriented QoS Assured and Multi Agent Cloud Computing), an architecture that supports QoS-assured cloud service provision and request. Biomedical and geospatial data on the cloud can be analyzed through SOQM, which has allowed the efficient management of the allocation of resources to the different system agents. It proposes a finite heterogeneous multiple-VM model in which VMs are dynamically allocated depending on requests from biomedical and geospatial stakeholders.
7

Chatterjee, Ahan. "A Decadal Walkthrough on Energy Modelling for Cloud Datacenters." In Impacts and Challenges of Cloud Business Intelligence, 195–204. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-5040-3.ch012.

Abstract:
Cloud computing is a growing field, and industry at every scale now needs it. Large-scale cloud usage has resulted in huge power consumption, and this power consumption has increased the carbon footprint, harming the environment. Thus, we need to optimize the power usage of cloud servers. Various models are used to tackle this situation, one of which is based on link load: it minimizes the per-bit energy consumption of the network through energy-efficient routing and load balancing, with multi-constraint rerouting adapted on top. Another power model adopts a virtualization framework using a multi-tenancy-oriented data center, accommodating heterogeneous networks among virtual machines in a virtual private cloud. Another strategy is cloud partitioning using game theory. Further methods include load spreading by shortest-path bridging, load balancing by speed scaling, load balancing using graph constraints, and the insert ranking method.
8

Kella, Abdelaziz, and Ghalem Belalem. "A Stable Matching Algorithm for VM Migration to Improve Energy Consumption and QOS in Cloud Infrastructures." In Cloud Technology, 606–23. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-6539-2.ch028.

Abstract:
Cloud computing is one of the fastest-spreading technologies for providing utility-based IT services to users. Large-scale virtualized datacenters are established in order to provide these services and, based on a pay-as-you-go model, enable the hosting of pervasive applications from consumer, scientific, and business domains. However, datacenters hosting cloud applications consume huge amounts of electrical energy, contributing to high operational cost for service providers as well as for service users. Energy consumption can be reduced by live migration of virtual machines (VMs) as required and by switching off idle physical machines (PMs). We therefore propose an approach that finds a stable matching fair to both VMs and PMs, improving energy consumption without affecting quality of service, instead of favoring one side as a plain deferred-acceptance procedure would. The approach presumes two dynamic thresholds and marks for migration the virtual machines on physical machines whose load exceeds one of the two presumed values. Before migrating all those VMs, we use the Coase theorem to determine the number of VMs to migrate for optimal cost. Our approach aims to improve the energy consumption of datacenters while delivering the expected Quality of Service.
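For the deferred-acceptance procedure this abstract contrasts itself with, here is a minimal capacity-aware Gale-Shapley sketch for VM-to-PM matching. The preferences and capacities are invented for illustration; the paper's contribution is a fairer variant of this scheme:

```python
# Deferred acceptance (Gale-Shapley) with capacities: VMs propose in
# preference order; each PM keeps its best proposers up to capacity.

vm_pref = {                      # e.g., PMs ordered by expected energy cost
    "vm1": ["pm1", "pm2"],
    "vm2": ["pm1", "pm2"],
    "vm3": ["pm2", "pm1"],
}
pm_pref = {                      # e.g., VMs ordered by resource fit
    "pm1": ["vm2", "vm1", "vm3"],
    "pm2": ["vm1", "vm3", "vm2"],
}
capacity = {"pm1": 1, "pm2": 2}

free = list(vm_pref)                     # VMs still proposing
next_idx = {v: 0 for v in vm_pref}       # next PM each VM will try
hosted = {p: [] for p in pm_pref}

while free:
    vm = free.pop(0)
    if next_idx[vm] >= len(vm_pref[vm]):
        continue                         # exhausted its list, stays unmatched
    pm = vm_pref[vm][next_idx[vm]]
    next_idx[vm] += 1
    hosted[pm].append(vm)
    if len(hosted[pm]) > capacity[pm]:
        # PM rejects its least-preferred current proposer
        worst = max(hosted[pm], key=pm_pref[pm].index)
        hosted[pm].remove(worst)
        free.append(worst)

print(hosted)   # stable: no VM-PM pair prefers each other to their matches
```

Because the proposing side (here, the VMs) gets its best stable outcome, plain deferred acceptance systematically favors one side, which is exactly the asymmetry the paper's approach tries to remove.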
9

Shukla, Piyush Kumar, and Mahendra Kumar Ahirwar. "Improving Privacy and Security in Multicloud Architectures." In Handbook of Research on Security Considerations in Cloud Computing, 232–57. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-8387-7.ch011.

Abstract:
In this chapter, we describe the concept of multicloud architecture, in which locally distributed clouds are combined to provide their combined services to users. We start with the basics of cloud computing and move through the single cloud to the multicloud. The chapter describes four architectural models for multicloud: repetition of applications, partition of the system architecture into layers, partition of security features into segments, and distribution of data into fragments. With these models, the security of the data residing in cloud datacenters can be increased, which leads to more reliable data storage.
10

Shukla, Piyush Kumar, and Mahendra Kumar Ahirwar. "Improving Privacy and Security in Multicloud Architectures." In Web-Based Services, 585–609. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9466-8.ch025.

Abstract:
In this chapter, we describe the concept of multicloud architecture, in which locally distributed clouds are combined to provide their combined services to users. We start with the basics of cloud computing and move through the single cloud to the multicloud. The chapter describes four architectural models for multicloud: repetition of applications, partition of the system architecture into layers, partition of security features into segments, and distribution of data into fragments. With these models, the security of the data residing in cloud datacenters can be increased, which leads to more reliable data storage.

Conference papers on the topic "Datacenter modell"

1

Vallury, Aparna, Mark E. Steinke, Vinod Kamath, and Lynn Parnell. "A Computational Study to Compare the Data Center Cooling Energy Spent Using Traditional Air Cooled and Hybrid Water Cooled Servers." In ASME 2015 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems collocated with the ASME 2015 13th International Conference on Nanochannels, Microchannels, and Minichannels. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/ipack2015-48476.

Abstract:
High performance datacenters that are built and operated to ensure optimized compute density for high performance computing (HPC) workloads are constrained by the requirement to provide adequate cooling for the servers. Traditional methods of cooling dense high-power servers with air impose a large cooling and power burden on datacenters: airflow optimization carries a high energy penalty when dense, power-hungry racks, each capable of consuming 30 to 40 kW, are populated in a dense datacenter environment. The work in this paper, documented using a simulation model (TileFlow), demonstrates the challenges associated with a standard air-cooled approach in an HPC datacenter. Alternative cooling approaches are simulated as a comparison to traditional air cooling: a heat-exchanger-assisted rack cooling solution with conventional chilled water, and a direct-to-node cooling model for the racks. These three distinct data center models are simulated at varying workloads, and the resulting data are presented for typical and maximal inlet temperatures to the racks. For each cooling solution, an estimate of the energy spent for the servers is determined based on the estimated PUEs of the chosen cooling solutions.
2

Mamun, Abdullah-Al, Iyswarya Narayanan, Di Wang, Anand Sivasubramaniam, and Hosam K. Fathy. "Multi-Objective Optimization to Minimize Battery Degradation and Electricity Cost for Demand Response in Datacenters." In ASME 2015 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/dscc2015-9812.

Abstract:
This paper presents a Lithium-ion battery control framework to achieve minimum health degradation and electricity cost when batteries are used for datacenter demand response (DR). Demand response in datacenters refers to the adjustment of demand for grid electricity to minimize electricity cost. Utilizing batteries for demand response will reduce the electricity cost but might accelerate health degradation. This tradeoff makes battery control for demand response a multi-objective optimization problem. Current research focuses only on minimizing the cost of demand response and does not capture battery transient and degradation dynamics. We address this multi-objective optimization problem using a second-order equivalent circuit model and an empirical capacity fade model of Lithium-ion batteries. To the best of our knowledge, this is the first study to use a nonlinear Lithium-ion battery and health degradation model for health-aware optimal control in the context of datacenters. The optimization problem is solved using a differential evolution (DE) algorithm and repeated for different battery pack sizes. Simulation results furnish a Pareto front that makes it possible to examine tradeoffs between the two optimization objectives and size the battery pack accordingly.
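For readers who want the model class this abstract names: a second-order equivalent circuit model is typically written as below. This is the standard textbook form; the paper's exact parameterization and empirical capacity-fade fit are not reproduced here:

```latex
% Second-order RC equivalent circuit model with Coulomb-counting SOC:
\begin{aligned}
\dot{V}_1 &= -\frac{V_1}{R_1 C_1} + \frac{I}{C_1}, &
\dot{V}_2 &= -\frac{V_2}{R_2 C_2} + \frac{I}{C_2},\\
V_t &= V_{oc}(\mathrm{SOC}) - I R_0 - V_1 - V_2, &
\dot{\mathrm{SOC}} &= -\frac{I}{Q},
\end{aligned}
```

with terminal voltage V_t, open-circuit voltage V_oc, discharge current I, ohmic resistance R_0, RC pairs (R_1, C_1) and (R_2, C_2), and capacity Q; an empirical fade model then degrades Q (and raises R_0) as a function of usage history, which is what couples the demand-response schedule to battery health in the optimization.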
3

Amalfi, Raffaele L., Jackson B. Marcinichen, John R. Thome, and Filippo Cataldo. "Design of Passive Two-Phase Thermosyphons for Server Cooling." In ASME 2019 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/ipack2019-6386.

Abstract:
The main objective of this paper is to utilize an improved version of the simulator presented at InterPACK 2017 to design a thermosyphon system for energy-efficient heat removal from 2-U servers used in high-power datacenters. Currently, between 25% and 45% of the total energy consumption of a datacenter (this number does not include the energy required to drive the fans at the server level) is dedicated to cooling, and with a predicted annual growth rate of about 15% (or higher), coupled with plans to build numerous new datacenters to handle the "big data" storage and processing demands of emerging 5G networks, artificial intelligence, electric vehicles, etc., the development of novel, high-efficiency cooling technologies becomes extremely important for curbing the energy use of datacenters. Notably, going from air cooling to two-phase cooling not only makes it possible to handle the ever higher heat fluxes and heat loads of new servers, but also provides an energy-efficient solution that can be implemented for all servers of a datacenter to reduce the total energy consumption of the entire cooling system. In that light, a pseudo-chip with a footprint area of 4 × 4 cm² and a maximum power dissipation of 300 W (a corresponding heat flux of about 19 W/cm²) is assumed as the target design for our novel thermosyphon-based cooling system. The simulator is first validated against an independent database and then used to find the optimal design of the chip's thermosyphon. The results demonstrate the capability of this simulator to model all of the thermosyphon's components (evaporator, condenser, riser and downcomer), together with overall thermal performance and the creation of operational maps. Additionally, the simulator is used here to design two types of passive two-phase systems, an air-cooled and a liquid-cooled thermosyphon, which are compared in terms of thermal-hydraulic performance. Finally, the simulator is used to perform a sensitivity analysis on the secondary coolant side conditions (inlet temperature and mass flow rate) to evaluate their effect on system performance.
4

Szczukiewicz, Sylwia, Nicolas Lamaison, Jackson B. Marcinichen, John R. Thome, and Peter J. Beucher. "Passive Thermosyphon Cooling System for High Heat Flux Servers." In ASME 2015 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems collocated with the ASME 2015 13th International Conference on Nanochannels, Microchannels, and Minichannels. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/ipack2015-48288.

Abstract:
The main aim of the current paper is to demonstrate the capability of a two-phase closed thermosyphon loop system to cool a contemporary datacenter rack, passively cooling the entire rack including its numerous servers. The effects on the performance of the entire cooling loop with respect to server orientation, micro-evaporator design, riser and downcomer diameters, working fluid, and approach temperature difference at the condenser have been modeled and simulated. The influence of the thermosyphon height (here from 5 to 20 cm with a horizontally or vertically oriented server) on the driving force that guarantees system operation whilst simultaneously fulfilling the critical heat flux (CHF) criterion has also been examined. In summary, the thermosyphon height was found to be the most significant design parameter. For the conditions simulated, in terms of CHF, the 10 cm-high thermosyphon was the most advantageous system design, with a minimum safety factor of 1.6 relative to the imposed heat flux of 80 W/cm². Additionally, a case study including an overhead water-cooled heat exchanger to extract heat from the thermosyphon loop has been developed, and the entire rack cooling system evaluated in terms of cost savings, payback period, and net benefit per year. This approximate study provides a general understanding of how the datacenter cooling infrastructure directly impacts the operating budget as well as influencing the thermal/hydraulic operation, performance, and reliability of the datacenter. Finally, the study shows that the passive two-phase closed-loop thermosyphon cooling system is a potentially economically sound technology for cooling high heat flux servers in datacenters.
5

Liberato, Alextian Bartholomeu, Moises Ribeiro, and Magnos Martinello. "RDNA: Arquitetura Definida por Resíduos para Redes de Data Centers." In XXXVII Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/sbrc_estendido.2019.7784.

Abstract:
Datacenter (DC) design has moved towards the edge-computing paradigm, motivated by the need to bring cloud resources closer to end users. However, the Software Defined Networking (SDN) architecture offers no guidance on the design of Micro Datacenters (MDCs) that meet the complex and stringent requirements of next-generation 5G networks: canonical SDN lacks a clear distinction between functional network parts, such as core and edge elements, and does not decouple routing from network policy. In the thesis, we introduce the Residue Defined Networking Architecture (RDNA) as a new approach for enabling key features like ultra-reliable and low-latency communication in MDC networks. RDNA explores the programmability of the Residue Number System (RNS) as a fundamental concept to define a minimalist forwarding model for core nodes. Instead of forwarding packets based on classical table lookup operations, core nodes are tableless switches that forward packets using merely remainder-of-division (modulo) operations. By solving a residue congruence system representing a network topology, we derived the algorithms and their mathematical properties to design RDNA's routing system, which (i) supports unicast and multicast communication, (ii) provides resilient routes with protection for the entire route, and (iii) is scalable to 2-tier Clos topologies. Experimental implementations on Mininet and NetFPGA SUME show that RDNA achieves 600 ns switching latency per hop, with virtually no jitter at core nodes, and sub-millisecond failure recovery time.
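The residue-forwarding idea in miniature: choose pairwise-coprime core-node IDs, encode the desired output port at each hop as label mod node_id, and solve for the route label with the Chinese Remainder Theorem. This is a sketch of the general RNS/CRT concept with invented IDs and ports, not RDNA's exact encoding:

```python
# Residue-defined forwarding: each core switch forwards a packet on port
# (label % node_id); the source computes the label via the CRT.
from math import prod

def crt(residues, moduli):
    """Smallest label x with x % m_i == r_i for pairwise-coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse
    return x % M

node_ids = [5, 7, 11, 13]      # pairwise-coprime switch identifiers (invented)
out_ports = [2, 3, 1, 0]       # desired output port at each switch

label = crt(out_ports, node_ids)
print("route label:", label)
for nid, port in zip(node_ids, out_ports):
    assert label % nid == port  # each core switch needs only one modulo op
print("every hop forwards on its intended port")
```

The appeal is visible even in this toy: the core switch keeps no forwarding table at all, just its own ID and one modulo operation per packet.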
6

Yu, Hongfang, Gang Sun, Chunming Qiao, and Jianping Wang. "Survivable virtual infrastructure mappings in multi-datacenter systems." In 2013 5th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT). IEEE, 2013. http://dx.doi.org/10.1109/icumt.2013.6798415.

7

Alshammari, Dhahi, Jeremy Singer, and Timothy Storer. "Does CloudSim Accurately Model Micro Datacenters?" In 2017 IEEE 10th International Conference on Cloud Computing (CLOUD). IEEE, 2017. http://dx.doi.org/10.1109/cloud.2017.97.

8

Teixeira, Djalma, Adriano Vogel, and Dalvan Griebler. "Proposta de Monitoramento e Gerenciamento Inteligente de Temperatura em Datacenters." In XVII Escola Regional de Redes de Computadores. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/errc.2019.9209.

Abstract:
The constant growth and development of computing infrastructures has been driving an ever-increasing demand for intelligent monitoring and management. In smart datacenters, equipment is controlled through autonomic actions that are executed under given conditions, without the need for human intervention. The goal of this work is to propose a conceptual model for intelligent temperature monitoring and management in smart datacenters, applicable to both basic and complex infrastructures, since it can be adapted to the heterogeneity of the equipment used for cooling.
9

Xue, Sixin. "Reconstruction of Records Storage Model on the Cloud Datacenter." In 2017 IEEE 11th International Conference on Semantic Computing (ICSC). IEEE, 2017. http://dx.doi.org/10.1109/icsc.2017.21.

10

Daniel, Abishai, and Nishi Ahuja. "Optimizing component reliability in datacenters using predictive models." In 2015 31st Thermal Measurement, Modeling & Management Symposium (SEMI-THERM). IEEE, 2015. http://dx.doi.org/10.1109/semi-therm.2015.7100181.
