Dissertations / Theses on the topic 'Greener Cloud'


Consult the top 50 dissertations / theses for your research on the topic 'Greener Cloud.'


1

Aldhahri, Eiman Ali. "Dynamic Voltage and Frequency Scaling Enhanced Task Scheduling Technologies Toward Greener Cloud Computing." OpenSIUC, 2014. https://opensiuc.lib.siu.edu/theses/1382.

Abstract:
The skyrocketing amount of electricity consumed by data centers around the globe has become a serious issue for cloud computing and the entire IT industry. The demand for data centers is rapidly increasing due to the widespread usage of cloud services, and it also leads to huge carbon emissions contributing to the global greenhouse effect. The US Environmental Protection Agency has declared that data centers represent a substantial portion of energy consumption in the US and worldwide. Some of this energy consumption is caused by idle servers or servers running at higher-than-necessary frequencies. Because Dynamic Voltage and Frequency Scaling (DVFS) technology is enabled in many CPUs, strategically reducing CPU frequency without affecting the Quality of Service (QoS) is desirable. Our goal in this paper is to calculate and apply the best CPU frequency for each running task, combined with two commonly used scheduling approaches, namely the round robin and first fit algorithms, given the CPU configuration and the execution deadline. The effectiveness of our algorithms is evaluated in a CloudSim/CloudReport simulation environment as well as on a real hypervisor-based computer system with a power gauge. The open source CloudReport tool, based on the CloudSim simulator, has been used to integrate our DVFS algorithm with the two scheduling algorithms to illustrate the efficiency of power saving in different scenarios. Furthermore, electricity consumption is measured and compared using a Watts Up power meter.
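To make the frequency-selection idea concrete, here is a minimal sketch, not taken from the thesis: for each task, pick the lowest DVFS step that still meets the deadline, then hand tasks to hosts round-robin. The frequencies, cycle counts, deadlines and host names are illustrative assumptions.

```python
# Minimal sketch (not the thesis's actual algorithm): choose, per task, the
# lowest available CPU frequency that still meets its deadline, then place
# tasks on hosts round-robin. All numbers below are assumed values.

from itertools import cycle

def lowest_safe_frequency(task_cycles, deadline_s, frequencies_hz):
    """Return the smallest frequency that finishes the task before its deadline."""
    for f in sorted(frequencies_hz):
        if task_cycles / f <= deadline_s:
            return f
    return max(frequencies_hz)  # fall back to full speed if the deadline is tight

def schedule_round_robin(tasks, hosts, frequencies_hz):
    """tasks: list of (cycles, deadline_s); hosts: list of host names."""
    plan = []
    host_iter = cycle(hosts)
    for cycles, deadline in tasks:
        f = lowest_safe_frequency(cycles, deadline, frequencies_hz)
        plan.append((next(host_iter), cycles, f))
    return plan

if __name__ == "__main__":
    freqs = [1.2e9, 1.8e9, 2.4e9]                    # assumed DVFS steps
    tasks = [(3.0e9, 2.0), (1.0e9, 1.0), (6.0e9, 3.0)]
    print(schedule_round_robin(tasks, ["host-a", "host-b"], freqs))
```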
2

Franci, Alessandro. "Green Cloud Computing: una rassegna comparativa." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2010. http://amslaurea.unibo.it/1181/.

Abstract:
The rapid growth of the Internet and of the number of connected hosts is increasingly giving rise to new forms of server-side technologies and applications, turning the client into a thin client. Cloud Computing offers a solid platform for these new technologies, but it has to face several problems, among them an ever-growing energy demand, which translates into an inevitable increase in indirectly produced greenhouse gases. In this thesis we analyse the energy problems related to Cloud Computing and their possible solutions, and finally build a taxonomy of the most important Cloud Computing offerings on the current market.
3

Aldawsari, B. M. A. "An energy-efficient multi-cloud service broker for green cloud computing environment." Thesis, Liverpool John Moores University, 2018. http://researchonline.ljmu.ac.uk/7954/.

Abstract:
The heavy demands on cloud computing resources have led to a substantial growth in the energy consumed by the data transferred between cloud computing parties (i.e., providers, datacentres, users, and services) and by datacentre services under increasing load. On the one hand, routing and transferring large amounts of data to a datacentre located far from the user’s geographical location consumes more energy than just processing and storing the same data in the cloud datacentre. On the other hand, when a cloud user submits a job (in the form of a set of functional and non-functional requirements) to a cloud service provider (i.e., a datacentre) via a cloud services broker, the broker becomes responsible for finding the service that best fits the user request, based mainly on the user’s requirements and Quality of Service (QoS) (e.g., response time, latency). Hence, it becomes highly necessary to locate the lowest-energy route between the user and the designated datacentre, and the minimum possible number of the most energy-efficient services that satisfy the user request. In fact, finding the most energy-efficient route to the datacentre and the most energy-efficient service(s) for the user are the biggest challenges of a multi-cloud broker environment. This thesis presents and evaluates a novel multi-cloud broker solution that contains three innovative models and their associated algorithms. The first is aimed at finding the most energy-efficient route, among multiple possible routes, between the user and the cloud datacentre. The second model finds and provisions the lowest possible number of the most energy-efficient services in order to minimise data exchange, based on a bin-packing approach. The third model creates an energy-aware composition plan by integrating the most energy-efficient services in order to fulfil user requirements. The results demonstrated a favourable performance of these models in terms of selecting the most energy-efficient route and reaching the least possible number of services for an optimum and energy-efficient composition.
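The bin-packing step can be illustrated with a generic first-fit-decreasing sketch. This is not the thesis's actual model; the service capacity and demand figures are assumptions.

```python
# A generic first-fit-decreasing sketch of the bin-packing idea described in
# the abstract: pack the user's resource demands into as few candidate
# services as possible. Capacities and demands are illustrative.

def first_fit_decreasing(demands, service_capacity):
    """Assign each demand to the first open service with enough spare capacity."""
    services = []    # remaining capacity of each opened service
    assignment = []
    for d in sorted(demands, reverse=True):
        for i, remaining in enumerate(services):
            if d <= remaining:
                services[i] -= d
                assignment.append((d, i))
                break
        else:  # no open service fits: open a new one
            services.append(service_capacity - d)
            assignment.append((d, len(services) - 1))
    return len(services), assignment

if __name__ == "__main__":
    used, plan = first_fit_decreasing([4, 8, 1, 4, 2, 1], service_capacity=10)
    print(used, plan)  # fewer opened services -> less idle capacity and energy
```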
4

Ahvar, Ehsan. "Cost-efficient resource allocation for green distributed clouds." Thesis, Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0001.

Abstract:
The virtual machine (VM) placement (i.e., resource allocation) method has a direct effect on both cost and carbon emission. Considering the geographic distribution of data centers (DCs), there are a variety of resources, energy prices and carbon emission rates to consider in a distributed cloud, which makes the placement of VMs for cost and carbon efficiency even more critical and complex than in centralized clouds. The goal of this thesis is to present new VM placement algorithms to optimize cost and carbon emission in a distributed cloud. It first focuses on cost efficiency in distributed clouds and then extends the goal to optimizing both cost and carbon emission at the same time. The thesis includes two main parts. The first part proposes, develops and evaluates static VM placement algorithms to reach the mentioned goal, where an initial placement of a VM holds throughout the lifetime of the VM. The second part proposes dynamic VM placement algorithms where the initial placement of VMs is allowed to change (e.g., through VM migration and consolidation).
The first contribution is a survey of the state of the art on cost- and carbon-emission-aware resource allocation in distributed cloud environments. The second contribution targets the challenge of optimizing inter-DC communication cost for large-scale tasks and proposes a Network-Aware Cost-Efficient Resource allocation method, called NACER, for distributed clouds. The goal is to minimize the network communication cost of running a task in a distributed cloud by selecting the DCs to provision the VMs in such a way that the total network distance (hop count or any reasonable measure) among the selected DCs is minimized. The third contribution proposes a Network-Aware Cost-Efficient VM Placement method (called NACEV) for distributed clouds. NACEV is an extended version of NACER. While NACER only considers inter-DC communication cost, NACEV optimizes both communication and computing cost at the same time and also proposes a mapping algorithm to place VMs on Physical Machines (PMs) inside the selected DCs. NACEV also considers aspects such as the heterogeneity of VMs, PMs and switches, the variety of energy prices, multiple paths between PMs, the effects of workload on the cost (energy consumption) of cloud devices (i.e., switches and PMs), and the heterogeneity of the energy models of cloud elements. The fourth contribution presents a Cost and Carbon Emission-Efficient VM Placement method (called CACEV) for green distributed clouds. CACEV is an extended version of NACEV. In addition to cost efficiency, CACEV considers carbon emission efficiency and green distributed clouds. It is a VM placement algorithm for the joint optimization of computing and network resources, which also considers the price, location and carbon emission rate of resources. Unlike the previous contributions of the thesis, it also considers IaaS Service Level Agreement (SLA) violations in the system model. To achieve better performance, the fifth contribution proposes a dynamic Cost and Carbon Emission-Efficient VM Placement method (D-CACEV) for green distributed clouds. D-CACEV is an extended version of our previous work, CACEV, with additional figures, description and live VM migration mechanisms. We show that our joint VM placement-reallocation mechanism can constantly optimize both cost and carbon emission at the same time in a distributed cloud.
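The NACER objective described above, selecting DCs so that the total network distance among them stays small while covering the task's VM demand, can be illustrated with a rough greedy sketch. It is an assumption-laden illustration, not the algorithm from the thesis; the capacities and hop counts are made up.

```python
# Rough greedy sketch of the selection goal NACER's abstract describes: add
# data centres one at a time so the total pairwise hop-count of the selected
# set grows as little as possible, until enough VM slots are covered.

def greedy_dc_selection(capacity, hops, vms_needed):
    """capacity: dict dc -> free VM slots; hops: dict (dc_a, dc_b) -> distance."""
    selected, covered = [], 0
    candidates = set(capacity)
    while covered < vms_needed and candidates:
        # candidate adding the least extra distance; break ties by larger capacity
        best = min(candidates,
                   key=lambda c: (sum(hops.get(tuple(sorted((c, s))), 0)
                                      for s in selected),
                                  -capacity[c]))
        selected.append(best)
        candidates.remove(best)
        covered += capacity[best]
    return selected

if __name__ == "__main__":
    cap = {"dc1": 4, "dc2": 3, "dc3": 5}
    hops = {("dc1", "dc2"): 2, ("dc1", "dc3"): 5, ("dc2", "dc3"): 4}
    print(greedy_dc_selection(cap, hops, vms_needed=6))
```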
5

Safieddine, Ibrahim. "Optimisation d'infrastructures de cloud computing sur des green datacenters." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM083/document.

Abstract:
Next-generation green datacenters were designed for optimized consumption and an improved Service Level Agreement (SLA). However, in recent years the datacenter market has been growing rapidly, and the concentration of computing power is increasingly important, thereby increasing power and cooling consumption. A datacenter consists of computing resources, cooling systems, and power distribution. Many research studies have focused on reducing the consumption of datacenters to improve the PUE while guaranteeing the same level of service. Some works aim at dynamically sizing resources according to the load in order to reduce the number of running servers; others seek to optimize the cooling system, which represents an important part of total consumption. In this thesis, in order to reduce the PUE, we study the design of an autonomous system for global cooling optimization, based on external data sources such as the outside temperature and weather forecasts, coupled with an overall IT load prediction module to absorb peaks of activity, in order to optimize active resources at a lower cost while preserving service quality. To ensure a better SLA, we propose a distributed architecture to detect complex operational anomalies in real time by analyzing large data volumes from the thousands of sensors deployed in the datacenter. Early identification of abnormal behaviors allows better reactivity to deal with threats that may impact the quality of service, with autonomous control loops that automate the administration. We evaluate the performance of our contributions on data collected from an operating datacenter hosting real applications.
6

Do, Manh Duc. "Green Cloud - Load Balancing, Load Consolidation using VM Migration." TopSCHOLAR®, 2017. https://digitalcommons.wku.edu/theses/2059.

Abstract:
Recently, cloud computing has become a new trend in computer technology, with massive demand from clients. To meet all requirements, many cloud data centers have been constructed since 2008, when Amazon launched its cloud service. This rapid growth of data centers leads to the consumption of a tremendous amount of energy; even though cloud computing has improved in performance and energy efficiency, cloud data centers still absorb an immense amount of energy. To raise their annual income, cloud providers have started considering green cloud concepts, which address how to optimize CPU usage while guaranteeing the quality of service. Many cloud providers are paying more attention to both load balancing and load consolidation, which are two significant components of a cloud data center. Load balancing is a vital part of managing incoming demand and improving the cloud system's performance. Live virtual machine migration is a technique used to perform dynamic load balancing. To optimize the cloud data center, three issues are considered: First, how does the cloud cluster distribute the virtual machine (VM) requests from clients across all physical machines (PMs) when each machine has a different capacity? Second, what is the solution to make the CPU usage of all PMs nearly equal? Third, how to handle two extreme scenarios: rapidly rising CPU usage of a PM due to a sudden massive workload, requiring immediate VM migration, and resource expansion to respond to substantial growth of the cloud cluster through VM requests. In this chapter, we provide an approach to these issues, together with the implementation and results. The results indicate that the performance of the cloud cluster was improved significantly. Load consolidation is the reverse process of load balancing, which aims to provide just enough cloud servers to handle the client requests. Based on live VM migration, the cloud data center can consolidate itself without interrupting the cloud service, and superfluous PMs are switched to a power-saving mode to reduce energy consumption. This chapter provides a solution to load consolidation, including an implementation and a simulation of cloud servers.
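A minimal sketch of migration-based load balancing along the lines described above might look as follows. It is not the thesis's algorithm; the per-VM loads and the 10% imbalance threshold are assumed values.

```python
# Illustrative sketch: repeatedly migrate a small VM from the busiest physical
# machine to the least busy one until CPU usage is roughly even.

def balance(pms, threshold=0.10, max_moves=50):
    """pms: dict pm_name -> list of per-VM CPU loads (fractions of one PM)."""
    moves = []
    for _ in range(max_moves):
        usage = {pm: sum(vms) for pm, vms in pms.items()}
        hot = max(usage, key=usage.get)
        cold = min(usage, key=usage.get)
        if usage[hot] - usage[cold] <= threshold or not pms[hot]:
            break
        vm = min(pms[hot])          # migrate the smallest VM on the hot PM
        pms[hot].remove(vm)
        pms[cold].append(vm)
        moves.append((vm, hot, cold))
    return moves

if __name__ == "__main__":
    cluster = {"pm1": [0.30, 0.25, 0.20], "pm2": [0.10], "pm3": [0.15, 0.05]}
    print(balance(cluster))
    print({pm: round(sum(v), 2) for pm, v in cluster.items()})
```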
7

Gkikas, Dimitrios. "The Impact of Cloud Computing on Entrepreneurship and Start-ups : Case of Greece." Thesis, KTH, Entreprenörskap och Innovation, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-147767.

Abstract:
The significant technological advances in the ICT sector over the last decades, most importantly the improvements in internet services and virtualization techniques, have led to the emergence of several computing paradigms, the most recent being cloud computing. Several major international cloud service providers deliver a variety of cloud services and solutions to individuals and companies. As a result, more and more companies are moving to the cloud, leading to growth of the cloud services market. Cloud technologies can offer various benefits to organizations, but at the same time there are risks and challenges associated with them. This study examines the benefits of cloud computing for entrepreneurship and startup companies, focusing on a specific country, Greece. Greece is in a long period of economic crisis, and access to financing is one of the most problematic factors for doing business. However, over the last three years there has been a huge increase in the number of startup companies and, at the same time, an increase in investments in Greek startups. In order to estimate the adoption of cloud computing by Greek startup companies and the potential benefits it may offer to Greek entrepreneurs, an online survey was conducted. The analysis of the primary data indicates that Greek entrepreneurs are likely to use cloud computing and are aware of its potential benefits and risks. Based on the findings of this study, there are serious indications that cloud computing has played a catalytic role in this recent increase of entrepreneurial activity in Greece, offering multiple benefits to Greek entrepreneurs who are struggling to be more competitive, increase the value of their products and services, and decrease costs.
8

Ahvar, Ehsan. "Cost-efficient resource allocation for green distributed clouds." Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0001.

Abstract:
The virtual machine (VM) placement (i.e., resource allocation) method has a direct effect on both cost and carbon emission. Considering the geographic distribution of data centers (DCs), there are a variety of resources, energy prices and carbon emission rates to consider in a distributed cloud, which makes the placement of VMs for cost and carbon efficiency even more critical and complex than in centralized clouds. The goal of this thesis is to present new VM placement algorithms to optimize cost and carbon emission in a distributed cloud. It first focuses on cost efficiency in distributed clouds and then extends the goal to optimizing both cost and carbon emission at the same time. The thesis includes two main parts. The first part proposes, develops and evaluates static VM placement algorithms to reach the mentioned goal, where an initial placement of a VM holds throughout the lifetime of the VM. The second part proposes dynamic VM placement algorithms where the initial placement of VMs is allowed to change (e.g., through VM migration and consolidation).
The first contribution is a survey of the state of the art on cost- and carbon-emission-aware resource allocation in distributed cloud environments. The second contribution targets the challenge of optimizing inter-DC communication cost for large-scale tasks and proposes a Network-Aware Cost-Efficient Resource allocation method, called NACER, for distributed clouds. The goal is to minimize the network communication cost of running a task in a distributed cloud by selecting the DCs to provision the VMs in such a way that the total network distance (hop count or any reasonable measure) among the selected DCs is minimized. The third contribution proposes a Network-Aware Cost-Efficient VM Placement method (called NACEV) for distributed clouds. NACEV is an extended version of NACER. While NACER only considers inter-DC communication cost, NACEV optimizes both communication and computing cost at the same time and also proposes a mapping algorithm to place VMs on Physical Machines (PMs) inside the selected DCs. NACEV also considers aspects such as the heterogeneity of VMs, PMs and switches, the variety of energy prices, multiple paths between PMs, the effects of workload on the cost (energy consumption) of cloud devices (i.e., switches and PMs), and the heterogeneity of the energy models of cloud elements. The fourth contribution presents a Cost and Carbon Emission-Efficient VM Placement method (called CACEV) for green distributed clouds. CACEV is an extended version of NACEV. In addition to cost efficiency, CACEV considers carbon emission efficiency and green distributed clouds. It is a VM placement algorithm for the joint optimization of computing and network resources, which also considers the price, location and carbon emission rate of resources. Unlike the previous contributions of the thesis, it also considers IaaS Service Level Agreement (SLA) violations in the system model. To achieve better performance, the fifth contribution proposes a dynamic Cost and Carbon Emission-Efficient VM Placement method (D-CACEV) for green distributed clouds. D-CACEV is an extended version of our previous work, CACEV, with additional figures, description and live VM migration mechanisms. We show that our joint VM placement-reallocation mechanism can constantly optimize both cost and carbon emission at the same time in a distributed cloud.
9

Cao, Fei. "Efficient Scientific Workflow Scheduling in Cloud Environment." OpenSIUC, 2014. https://opensiuc.lib.siu.edu/dissertations/802.

Abstract:
Cloud computing enables the delivery of remote computing, software and storage services through web browsers following a pay-as-you-go model. In addition to successful commercial applications, many research efforts, including the DOE Magellan Cloud project, focus on discovering the opportunities and challenges arising from computing- and data-intensive scientific applications that are not well addressed by current supercomputers, Linux clusters and Grid technologies. The elastic resource provisioning, non-interfering resource sharing and flexible customized configuration provided by Cloud infrastructure have shed light on the efficient execution of many scientific applications modeled as Directed Acyclic Graph (DAG) structured workflows, which enforce the intricate dependencies among a large number of different processing tasks. Meanwhile, the Cloud environment poses various challenges. Cloud providers and Cloud users pursue different goals: providers aim to maximize profit by achieving higher resource utilization, while users want to minimize expenses while meeting their performance requirements. Moreover, due to expanding Cloud services and emerging newer technologies, the ever-increasing heterogeneity of the Cloud environment complicates the challenges for both parties. In this thesis, we address the workflow scheduling problem for different applications and various objectives. For batch applications, due to the increasing deployment of data centers and computer servers around the globe, escalated by higher electricity prices, the energy cost of running the computing, communication and cooling, together with the amount of CO2 emissions, has skyrocketed. In order to maintain sustainable Cloud computing in the face of ever-increasing problem complexity and big data sizes in the next decades, we design and develop an energy-aware scientific workflow scheduling algorithm to minimize energy consumption and CO2 emission while still satisfying certain Quality of Service (QoS) requirements such as the response time specified in the Service Level Agreement (SLA). Furthermore, the underlying Cloud hardware/Virtual Machine (VM) resource availability is time-dependent because of the dual operation modes, namely on-demand and reservation instances, at various Cloud data centers. We also apply techniques such as Dynamic Voltage and Frequency Scaling (DVFS) and a DNS scheme to further reduce energy consumption within acceptable performance bounds. Our multiple-step resource provisioning and allocation algorithm achieves the response time requirement in the forward task scheduling step and minimizes the VM overhead, for reduced energy consumption and a higher resource utilization rate, in the backward task scheduling step. We also evaluate the candidacy of multiple data centers from the energy and performance efficiency perspectives, as different data centers have various energy- and cost-related parameters. For streaming applications, we formulate scheduling problems with two different objectives: one is to maximize throughput under a budget constraint, while the other is to minimize execution cost under a minimum throughput constraint. Two algorithms, named Budget-constrained RATE (B-RATE) and Budget-constrained SWAP (B-SWAP), are designed for the first objective; another two algorithms, namely Throughput-constrained RATE (TP-RATE) and Throughput-constrained SWAP (TP-SWAP), are developed for the second objective.
10

Wan, Ariffin Wan Nur Suryani Firuz. "Real-time resource management and energy trading for green cloud-RAN." Thesis, King's College London (University of London), 2017. https://kclpure.kcl.ac.uk/portal/en/theses/realtime-resource-management-and-energy-trading-for-green-cloudran(b576b0a7-0aa3-425e-9b77-407dba2bb6f2).html.

Abstract:
This thesis considers the cloud radio access network (C-RAN), where the remote radio heads (RRHs) are equipped with renewable energy resources and can trade energy with the grid. Due to the uneven distribution of mobile radio traffic and the inherently intermittent nature of renewable energy resources, the RRHs may need real-time energy provisioning to meet the users' demands. Given the amount of available energy resources at the RRHs, the main contributions of the thesis begin with real-time resource management strategies that allow RRHs with a shortage of power budget to select an optimal number of user terminals based on their available energy budget. The sparse beamforming strategies introduced in the second part of the thesis, on the other hand, account for all RRHs, with or without a shortage of power, and take into consideration realistic constraints on fronthaul capacity. The proposed strategies strike an optimum balance among the total fronthaul power consumption, adjusted through the degree of partial cooperation among RRHs, the RRHs' total transmit power, and the maximum or total spot-market energy cost. A smart energy management strategy based on combinatorial multi-armed bandit (CMAB) theory for a C-RAN powered by a hybrid of grid and renewable energy sources is studied in the last part of the thesis. A combinatorial upper confidence bound (CUCB) algorithm is introduced to maximize the overall rewards earned as a result of minimizing the cost of energy trading at the individual RRHs of the C-RAN. Adapting to the dynamic wireless channel conditions, the proposed CUCB algorithm associates a set of optimal energy packages, to be purchased from the day-ahead markets, with a set of RRHs in order to minimize the total cost of energy purchased from the main power grid, by dynamically forming super arms. A super arm is formed on the basis of calculating the instantaneous energy demands at the current time slot, learning from the cooperative energy trading of the previous time slots, and adjusting the mean rewards of the individual arms.
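The bandit idea behind CUCB can be illustrated with a simplified single-RRH UCB sketch: pick the energy package whose estimated reward plus an exploration bonus is highest. The thesis forms super arms over combinations of packages and RRHs, which this sketch omits, and the reward values are assumptions.

```python
# Simplified upper-confidence-bound (UCB1) sketch of the exploration/exploitation
# idea behind CUCB, for a single RRH choosing among energy packages.

import math
import random

def ucb_select(counts, means, t):
    """Pick the arm maximizing mean reward + sqrt(2 ln t / n); unplayed arms first."""
    for arm, n in enumerate(counts):
        if n == 0:
            return arm
    return max(range(len(counts)),
               key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))

def run(true_rewards, rounds=2000):
    counts = [0] * len(true_rewards)
    means = [0.0] * len(true_rewards)
    for t in range(1, rounds + 1):
        arm = ucb_select(counts, means, t)
        reward = random.gauss(true_rewards[arm], 0.1)   # noisy saving from this package
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return counts, means

if __name__ == "__main__":
    random.seed(0)
    counts, means = run([0.2, 0.5, 0.35])   # assumed average savings per package
    print(counts, [round(m, 2) for m in means])
```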
11

Chinenyeze, Samuel Jaachimma. "Mango : a model-driven approach to engineering green Mobile Cloud Applications." Thesis, Edinburgh Napier University, 2017. http://researchrepository.napier.ac.uk/Output/976572.

Abstract:
With the resource constrained nature of mobile devices and the resource abundant offerings of the cloud, several promising optimisation techniques have been proposed by the green computing research community. Prominent techniques and unique methods have been developed to offload resource/computation intensive tasks from mobile devices to the cloud. Most of the existing offloading techniques can only be applied to legacy mobile applications as they are motivated by existing systems. Consequently, they are realised with custom runtimes which incur overhead on the application. Moreover, existing approaches which can be applied to the software development phase, are difficult to implement (based on manual process) and also fall short of overall (mobile to cloud) efficiency in software quality attributes or awareness of full-tier (mobile to cloud) implications. To address the above issues, the thesis proposes a model-driven architecture for integration of software quality with green optimisation in Mobile Cloud Applications (MCAs), abbreviated as Mango architecture. The core aim of the architecture is to present an approach which easily integrates software quality attributes (SQAs) with the green optimisation objective of Mobile Cloud Computing (MCC). Also, as MCA is an application domain which spans through the mobile and cloud tiers; the Mango architecture, therefore, takes into account the specification of SQAs across the mobile and cloud tiers, for overall efficiency. Furthermore, as a model-driven architecture, models can be built for computation intensive tasks and their SQAs, which in turn drives the development – for development efficiency. Thus, a modelling framework (called Mosaic) and a full-tier test framework (called Beftigre) were proposed to automate the architecture derivation and demonstrate the efficiency of the Mango approach. By use of real world scenarios/applications, Mango has been demonstrated to enhance the MCA development process while achieving overall efficiency in terms of SQAs (including mobile performance and energy usage compared to existing counterparts).
12

Álvares, De Oliveira Júnior Frederico Guilherme. "Multi autonomic management for optimizing energy consumption in cloud infrastructures." Nantes, 2013. https://archive.bu.univ-nantes.fr/pollux/show/show?id=15c149de-7475-4617-84ac-69aae3f7cad0.

Abstract:
As a direct consequence of the increasing popularity of Internet and Cloud Computing services, data centers are growing at an astonishing pace and hence have to urgently face energy consumption issues. Paradoxically, Cloud Computing allows infrastructures and applications to dynamically adjust the provision of both physical resources and software services in a pay-per-use manner, so as to make the infrastructure more energy efficient and applications more Quality of Service (QoS) compliant. However, optimization decisions taken in isolation at a certain level may indirectly interfere with (or even neutralize) decisions taken at another level, e.g., an application requests more resources to keep its QoS while part of the infrastructure is being shut down for energy reasons. Hence, it becomes necessary not only to establish a synergy between cloud layers but also to make these layers flexible and sensitive enough to be able to react to runtime changes and thereby fully benefit from that synergy. This thesis proposes a self-adaptation approach that considers both application internals (architectural elasticity) and infrastructure (resource elasticity) to reduce the energy footprint in cloud infrastructures. Each application and the infrastructure are equipped with their own autonomic manager, which allows them to autonomously optimize their execution. In order to get several autonomic managers working together, we propose an autonomic model for the coordination and synchronization of multiple autonomic managers. The approach is experimentally validated through two studies: a qualitative one (QoS improvements and energy gains) and a quantitative one (scalability).
13

Strachota, Marek. "Technologické a funkční inovativní trendy ERP." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-193056.

Abstract:
Current trends in ERP systems were studied by conducting research and analyses to see to what degree current ERP trends are reflected in existing ERP solutions developed by both Czech and multinational vendors. For the purposes of the research, a set of hypotheses was formulated and used to analyze the selected solutions. The research found two trends that are represented in the majority of the selected ERP solutions: the application of cloud computing technology and the suitability of particular solutions for use as two-tiered ERP. It was also found that all researched vendors implement the principles of Green ICT to a certain degree, although no vendor specifically labeled their software as Green ERP. Finally, the term Social ERP does not appear to be an accepted term among ERP vendors, although Social ERP functionality of some degree was found to be present in ERP solutions from multinational vendors.
14

Amokrane, Ahmed. "Green et efficacité en énergie dans les réseaux d'accès et les infrastructures cloud." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066642/document.

Abstract:
Over the last decade, there has been an increasing use of personal wireless devices, such as laptops, smartphones and tablets. The widespread availability of wireless access has created an environment in which users access, anywhere and at any time, data and services hosted in cloud infrastructures. However, such a wireless cloud network consumes a non-negligible amount of energy and generates a considerable amount of carbon, which is becoming a major concern in the IT industry. In this context, we address the problem of reducing energy consumption and carbon footprint, as well as building green infrastructures, in the two different parts of the wireless cloud: (i) wireless access networks, including wireless mesh and campus networks, and (ii) data centers in a cloud infrastructure. In the first part of the thesis, we present an energy-efficient framework for joint routing and link scheduling in multihop TDMA-based wireless networks. At a later stage, we extend this framework to cover campus networks using the emerging Software Defined Networking (SDN) paradigm. In the second part of the thesis, we address the problem of reducing the energy consumption and carbon footprint of cloud infrastructures. Specifically, we propose optimization approaches for reducing the energy costs and carbon emissions of a cloud provider owning a distributed infrastructure of data centers with variable electricity prices and carbon emissions, in two different setups: the case of a cloud provider trying to reduce its carbon emissions and operational costs, and the case where green constraints are specified by the cloud consumers in the form of Green SLAs.
15

Amokrane, Ahmed. "Green et efficacité en énergie dans les réseaux d'accès et les infrastructures cloud." Electronic Thesis or Diss., Paris 6, 2014. http://www.theses.fr/2014PA066642.

Abstract:
Over the last decade, there has been an increasing use of personal wireless devices, such as laptops, smartphones and tablets. The widespread availability of wireless access has created an environment in which users access, anywhere and at any time, data and services hosted in cloud infrastructures. However, such a wireless cloud network consumes a non-negligible amount of energy and generates a considerable amount of carbon, which is becoming a major concern in the IT industry. In this context, we address the problem of reducing energy consumption and carbon footprint, as well as building green infrastructures, in the two different parts of the wireless cloud: (i) wireless access networks, including wireless mesh and campus networks, and (ii) data centers in a cloud infrastructure. In the first part of the thesis, we present an energy-efficient framework for joint routing and link scheduling in multihop TDMA-based wireless networks. At a later stage, we extend this framework to cover campus networks using the emerging Software Defined Networking (SDN) paradigm. In the second part of the thesis, we address the problem of reducing the energy consumption and carbon footprint of cloud infrastructures. Specifically, we propose optimization approaches for reducing the energy costs and carbon emissions of a cloud provider owning a distributed infrastructure of data centers with variable electricity prices and carbon emissions, in two different setups: the case of a cloud provider trying to reduce its carbon emissions and operational costs, and the case where green constraints are specified by the cloud consumers in the form of Green SLAs.
16

Liu, Jiashang. "Resource Allocation and Energy Management in Green Network Systems." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587577356321898.

17

Le, Trung. "Towards Sustainable Cloud Computing: Reducing Electricity Cost and Carbon Footprint for Cloud Data Centers through Geographical and Temporal Shifting of Workloads." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23082.

Abstract:
Cloud Computing presents a novel way for businesses to procure their IT needs. Its elasticity and on-demand provisioning enables a shift from capital expenditures to operating expenses, giving businesses the technological agility they need to respond to an ever-changing marketplace. The rapid adoption of Cloud Computing, however, poses a unique challenge to Cloud providers—their already very large electricity bill and carbon footprint will get larger as they expand; managing both costs is therefore essential to their growth. This thesis squarely addresses the above challenge. Recognizing the presence of Cloud data centers in multiple locations and the differences in electricity price and emission intensity among these locations and over time, we develop an optimization framework that couples workload distribution with time-varying signals on electricity price and emission intensity for financial and environmental benefits. The framework is comprised of an optimization model, an aggregate cost function, and 6 scheduling heuristics. To evaluate cost savings, we run simulations with 5 data centers located across North America over a period of 81 days. We use historical data on electricity price, emission intensity, and workload collected from market operators and research data archives. We find that our framework can produce substantial cost savings, especially when workloads are distributed both geographically and temporally—up to 53.35% on electricity cost, or 29.13% on carbon cost, or 51.44% on electricity cost and 13.14% on carbon cost simultaneously.
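The framework's core idea, shifting deferrable work to wherever electricity and carbon are currently cheapest, can be illustrated with a toy sketch. The prices, emission intensities and carbon price below are illustrative assumptions, not data from the thesis.

```python
# Toy sketch of geographical/temporal shifting: for each hour, route deferrable
# work to the data centre minimizing a weighted sum of electricity price and
# carbon cost. All figures are made up for illustration.

def cheapest_dc(prices, intensities, carbon_price, hour):
    """prices[dc][hour]: $/kWh; intensities[dc][hour]: kgCO2/kWh."""
    def cost(dc):
        return prices[dc][hour] + carbon_price * intensities[dc][hour]
    return min(prices, key=cost)

if __name__ == "__main__":
    prices = {"ca": [0.09, 0.12], "tx": [0.11, 0.08], "qc": [0.10, 0.10]}
    co2 =    {"ca": [0.25, 0.30], "tx": [0.45, 0.40], "qc": [0.02, 0.02]}
    carbon_price = 0.05   # assumed $ per kgCO2
    for hour in (0, 1):
        print(hour, cheapest_dc(prices, co2, carbon_price, hour))
```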
18

Talaganov, Goce. "Green VoIP : A SIP Based Approach." Thesis, KTH, Kommunikationssystem, CoS, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-98795.

Abstract:
This master thesis presents, examines, designs, implements, and evaluates, with respect to energy efficiency, a secure and robust VoIP system. The system utilizes a Session Initiation Protocol (SIP) infrastructure assisted by a cloud service, specifically focusing on small to medium sized enterprises (SMEs) and homes. The research focuses on using inexpensive, flexible, commodity embedded hardware (specifically a Linksys WRT54GL wireless router for the local site with a customized operating system, DD-WRT). The idea is to reduce the local site's power consumption to very low levels by examining which functions can be carried out in a cloud service rather than at the local site. The thesis presents the design of a low-power IP telephony system for the local site and the cloud site. A number of different usage scenarios and desirable features are described. A methodology for conducting a set of experiments is defined to perform stress testing and to evaluate the low-power IP telephony system's design. The experiments concern the overall power consumption of the local site under various configurations, the VPN link's call capacity, the QoS metrics for VoIP calls, the session request delay (SRD) and the registration request delay (RRD). The results of these experiments show that there is a potential for significant power savings when using the proposed design for an IP telephony system.
19

Junior, Osvaldo Adilson de Carvalho. "GreenMACC - Uma arquitetura para metaescalonamento verde com provisão de QoS em uma nuvem privada." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-08042015-161656/.

Abstract:
This PhD thesis presents an architecture for green metascheduling with provision of quality of service in a private cloud, named GreenMACC. This new architecture automates the choice of policies at the four scheduling stages of a private cloud, making it possible to honour the agreement negotiated with the user. Thanks to this feature, GreenMACC can be guaranteed to behave according to the principles of green computing while still being concerned with the quality of the service. In this thesis GreenMACC is presented, detailed, discussed, validated and evaluated. The results show that the proposed architecture is consistent, allowing the execution of the requested services under various scheduling policies at all of its stages. Moreover, GreenMACC proves to be flexible, accepting new policies focused on green computing and quality of service, and efficient in choosing the scheduling policies according to the negotiation made with the user.
20

Frisk, Arfvidsson Nils, and David Östlin. "Green Cloud Transition & Environmental Impacts of Stock Exchanges : A Case Study of Nasdaq, a Global Stock Exchange Company." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279615.

Abstract:
To address the issues of climate change and reduce the emissions released into the atmosphere, society and companies, including the financial markets, need to adjust how they act and conduct business. The financial markets are vital in the transition towards a more sustainable society, and stock exchanges are a central actor in advancing green finance by enabling green securities to be traded. For stock exchange companies to stand tall and encourage a green transition, they need to be aware of their own internal environmental impact. As society is changing to become more service-oriented, so are stock exchanges. A part of enabling servitization is the usage of cloud services, which not only enables companies to focus more on their core business but also has the potential to reduce companies' environmental footprint. This study examines the environmental impact of a stock exchange company and how it can be reduced by transitioning to cloud computing. The study uses Nasdaq as a case company and examines environmental performance data from major stock exchanges worldwide. The study furthermore uses the Multi-Level Perspective (MLP) to understand what enables and disables a cloud transition for stock exchanges. This study concludes that the main environmental impacts of a stock exchange are business travel and electricity and heat for office buildings and data centres, although the order of these varies throughout the industry. Further, it is concluded that a stock exchange can reduce its environmental footprint by transitioning to cloud computing: in the best-case scenario, emissions are reduced by 10 percent and electricity usage by almost 30 percent of the total usage. However, the impact of a transition depends on the share of renewable energy used by the data centre. The study finds that a cloud transition involves enablers and disablers on all three levels of the MLP, and it will most likely be incremental innovations, together with a business model shift and the technical traits of cloud, that open the window of opportunity for a regime shift. It is concluded that the technology and IT security of cloud computing are not hindering a cloud transition; rather, it is organizational culture, assumptions, financial lock-ins, and landscape protectionism that are disablers for a transition. To overcome these, and reduce the environmental footprint, stock exchanges need to work together with cloud providers to create use cases that are in line with the regulatory and financial requirements of a stock exchange.
APA, Harvard, Vancouver, ISO, and other styles
21

Nergis, Damirag Melodi. "Web Based Cloud Interaction and Visualization of Air Pollution Data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254401.

Full text
Abstract:
According to the World Health Organization, around 7 million people die every year from diseases caused by air pollution. With the improvements in the Internet of Things in recent years, environmental sensing systems have started to gain importance. By using technologies such as cloud computing, RFID, wireless sensor networks and open Application Programming Interfaces, it has become easier to collect data for visualization on different platforms. However, collected data need to be represented in an efficient way for better understanding and analysis, which requires the design of data visualization tools. The GreenIoT initiative aims to provide open data with its infrastructure for sustainable city development in Uppsala. An environmental web application is presented within this thesis project, which visualizes the gathered environmental data to help municipal organizations implement new policies for sustainable urban planning, and citizens to gain more knowledge to make sustainable decisions in their daily lives. The application has been developed using the 4Dialog API, which provides data from a dedicated cloud storage for visualization purposes. According to the evaluation presented in this thesis, further development is needed to improve performance, in order to provide a faster and more reliable service, as well as accessibility, to promote openness and social inclusion.
APA, Harvard, Vancouver, ISO, and other styles
22

Östlin, David, and Arfvidsson Nils Frisk. "Green Cloud Transition & Environmental Impacts of Stock Exchanges : A Case Study of Nasdaq, a Global Stock Exchange Company." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278173.

Full text
Abstract:
To address the issues of climate change and reduce the emissions released into the atmosphere, society and companies, including the financial markets, need to adjust how they act and conduct business. The financial markets are vital in the transition towards a more sustainable society, and stock exchanges are a central actor in enhancing green finance by enabling green securities to be traded. For stock exchange companies to stand tall and encourage a green transition, they need to be aware of their own internal environmental impact. As society is becoming more service-oriented, so are stock exchanges. Part of enabling servitization is the use of cloud services, which not only allows companies to focus more on their core business but also has the potential to reduce their environmental footprint. This study examines the environmental impact of a stock exchange company and how it can be reduced by transitioning to cloud computing. The study uses Nasdaq as a case company and examines environmental performance data from major stock exchanges worldwide. It furthermore uses the Multi-Level Perspective (MLP) to understand what enables and disables a cloud transition for stock exchanges. This study concludes that the main environmental impacts of a stock exchange are business travel, electricity and heat for office buildings, and data centres, although the order of these varies across the industry. Further, it concludes that a stock exchange can reduce its environmental footprint by transitioning to cloud computing: in the best-case scenario, emissions are reduced by 10 percent and electricity usage by almost 30 percent of total usage. However, the impact of a transition depends on the share of renewable energy used by the data centre. The study finds that a cloud transition involves enablers and disablers on all three levels of the MLP, and that it will most likely be incremental innovations, together with a business model shift and the technical traits of cloud, that open the window of opportunity for a regime shift. It concludes that neither the technology nor the IT security of cloud computing hinders a cloud transition; rather, organizational culture, assumptions, financial lock-ins and landscape protectionism are the disablers. To overcome those, and reduce the environmental footprint, stock exchanges need to work together with cloud providers to create use cases that are in line with the regulatory and financial requirements of a stock exchange.
APA, Harvard, Vancouver, ISO, and other styles
23

Balouek-Thomert, Daniel. "Scheduling on Clouds considering energy consumption and performance trade-offs : from modelization to industrial applications." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEN058/document.

Full text
Abstract:
Modern society relies heavily on the use of computational resources. Over the last decades, the number of connected users and devices has dramatically increased, leading to the consideration of decentralized on-demand computing as a utility, commonly named "the Cloud". Numerous fields of application, such as High Performance Computing (HPC), medical research, movie rendering, industrial factory processes or smart city management, benefit from recent advances in on-demand computation. The maturity of Cloud technologies has led to a democratization and to an explosion of connected services for companies, researchers, techies and even mere mortals, using those resources in a pay-per-use fashion. In particular, the Cloud Computing paradigm has since been adopted in companies. A significant reason is that the hardware running the cloud and processing the data does not reside at a company's physical site, which means that the company does not have to build computer rooms (known as CAPEX, CAPital EXpenditures) or buy equipment, nor to fill and maintain that equipment over a normal life-cycle (known as OPEX, OPerational EXpenditures). This thesis revolves around the energy efficiency of Cloud platforms by proposing an extensible and multi-criteria framework, which intends to improve the efficiency of a heterogeneous platform from an energy consumption perspective. We propose an approach based on user involvement, using the notion of a cursor offering the ability to aggregate cloud operator and end-user preferences to establish scheduling policies.
The objective is the right-sizing of active servers and computing equipment while considering exploitation constraints, thus reducing the environmental impact associated with energy wastage. This research work has been validated through experiments and simulations on the Grid'5000 platform, the largest shared network in Europe dedicated to research. It has been integrated into the DIET middleware, and an industrial valorisation has been done in the NUVEA commercial platform, designed during this thesis. This platform constitutes an audit and optimization tool of large-scale infrastructures for operators and end users.
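The cursor idea described above can be illustrated with a small scoring function: a single parameter weighs operator-side energy efficiency against client-side performance when ranking candidate hosts. This is only a sketch of the general principle, not the thesis's actual algorithm; the host attributes and the linear scoring rule are assumptions.

    # Illustrative sketch (not the thesis algorithm): rank candidate hosts with a
    # "cursor" in [0, 1] that trades energy efficiency (0) against performance (1).
    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        perf_score: float    # normalised performance estimate in [0, 1] (assumed)
        energy_score: float  # normalised energy-efficiency estimate in [0, 1] (assumed)

    def rank_hosts(hosts, cursor):
        """Return hosts ordered by a cursor-weighted combination of the two criteria."""
        def score(h):
            return cursor * h.perf_score + (1.0 - cursor) * h.energy_score
        return sorted(hosts, key=score, reverse=True)

    hosts = [Host("h1", 0.9, 0.3), Host("h2", 0.6, 0.8), Host("h3", 0.4, 0.9)]
    print([h.name for h in rank_hosts(hosts, cursor=0.2)])  # energy-leaning policy
    print([h.name for h in rank_hosts(hosts, cursor=0.8)])  # performance-leaning policy

Moving the cursor simply reorders the same candidate set, which is the point of aggregating operator and end-user preferences into one tunable policy.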
APA, Harvard, Vancouver, ISO, and other styles
24

Hasan, MD Sabbir. "Smart management of renewable energy in clouds : from infrastructure to application." Thesis, Rennes, INSA, 2017. http://www.theses.fr/2017ISAR0010/document.

Full text
Abstract:
With the advent of cloud-enabling technologies and the adoption of cloud computing, enterprises and academic institutions are moving their IT workloads to the cloud. Although this prolific advancement and easy-to-access model have greatly impacted our scientific and industrial community by reducing complexity and increasing revenue, data centers consume an enormous amount of energy, which translates into higher carbon emissions. In response, a variety of research works have focused on environmental sustainability for the cloud computing paradigm by reducing energy consumption through energy-efficient strategies. However, energy efficiency in the cloud infrastructure alone is not going to be enough to boost carbon footprint reduction. It is therefore imperative to envision smart use of green energy at both the infrastructure and application levels for further reduction of the carbon footprint. In recent years, some cloud providers have started powering their data centers with renewable energy. Renewable energy sources are highly intermittent, which creates several challenges for managing them efficiently. To overcome this problem, we investigate the options and challenges of integrating different renewable energy sources in a realistic way and propose a cloud energy broker, which can adjust the availability and price combination to buy green energy dynamically from the energy market in advance, making a data center partially green. We then introduce the concept of virtualization of green energy, which can be seen as an alternative to the energy storage used in data centers to eliminate the intermittency problem to some extent. With the adoption of the virtualization concept, we maximize the usage of green energy, contrary to energy storage, which induces energy losses, while introducing a Green Service Level Agreement based on green energy for the service provider and end users. Using realistic traces and extensive simulation and analysis, we show that the proposal can provide an efficient, robust and cost-effective energy management scheme for the data center.
While efficient energy management in the presence of intermittent green energy is necessary, how modern cloud applications can take advantage of the presence or absence of green energy has not been studied with the requisite effort. Unlike batch applications, interactive cloud applications have to be always accessible and cannot be scheduled in advance to match a green energy profile. Therefore, this thesis proposes an energy-adaptive autoscaling solution that exploits an application's internal structure to create green energy awareness in the application, while respecting traditional QoS properties. To elaborate, we design a green-energy-aware application controller that takes advantage of green energy availability to perform opportunistic adaptation in an application, alongside a performance-aware application controller. Experiments are performed with a real-life application on Grid'5000, and the results show a significant reduction in energy consumption compared to the performance-aware approach, while respecting traditional QoS attributes.
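The opportunistic adaptation mentioned above can be sketched as a controller that enables optional application features only while green supply covers their extra demand. The thresholds, feature names and the green-power signal below are assumptions for illustration, not the controller designed in the thesis.

    # Minimal sketch of a green-energy-aware application controller (assumed model).
    def adapt(app_features, green_power_kw, demand_kw):
        """Enable optional features only when green supply covers the extra demand."""
        surplus = green_power_kw - demand_kw
        decisions = {}
        for name, extra_kw in sorted(app_features.items(), key=lambda kv: kv[1]):
            if surplus >= extra_kw:
                decisions[name] = True       # enough green energy: run the feature
                surplus -= extra_kw
            else:
                decisions[name] = False      # degrade gracefully to stay within green supply
        return decisions

    features = {"hd_transcoding": 2.0, "recommendations": 0.5, "prefetching": 0.3}
    print(adapt(features, green_power_kw=3.0, demand_kw=1.5))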
APA, Harvard, Vancouver, ISO, and other styles
25

Aldosari, Mansour. "Design and analysis of green mobile communication networks." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/design-and-analysis-of-green-mobile-communication-networks(37b5278a-45da-4a81-b89c-54c7d876586a).html.

Full text
Abstract:
Increasing energy consumption is a result of the rapid growth in cellular communication technologies and a massive increase in the number of mobile terminals (MTs) and communication sites. In cellular communication networks, energy efficiency (EE) and spectral efficiency (SE) are two of the most important criteria employed to evaluate the performance of networks. A compromise between these two conflicting criteria is therefore required in order to achieve the best cellular network performance. Fractional frequency reuse (FFR), classed as either strict FFR or soft frequency reuse (SFR), is an inter-cell interference coordination (ICIC) technique applied to manage interference when more spectrum is used, and to enhance the EE. A conventional cellular model's downlink is designed as a reference in the presence of inter-cell interference (ICI) and a general fading environment. Energy-efficient cellular models, such as cell zooming, cooperative BSs and relaying models, are designed, analysed and compared with the reference model, in order to reduce network energy consumption without degrading the SE. New mathematical models are derived herein to design a distributed antenna system (DAS), in order to enhance the system's EE and SE. The DAS is designed in the presence of ICI and composite fading and shadowing with FFR. A coordinated multi-point (CoMP) technique is applied, using maximum ratio transmission (MRT) to serve the mobile terminal (MT) with all distributed antenna elements (DAEs), transmit antenna selection (TAS) to select the best DAE, and general selection combining (GSC) to select more than one DAE. Furthermore, a cloud radio access network (C-RAN) is designed and analysed with two different schemes, using a high-power node (HPN) and a remote radio head (RRH), in order to improve the EE and SE of the system. Finally, the trade-off between the two conflicting criteria, EE and SE, is handled carefully in this thesis, in order to ensure a green cellular communication network.
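As a hedged, toy illustration of the EE/SE trade-off discussed in this abstract (not a model from the thesis), energy efficiency can be measured in bits per joule and spectral efficiency in bits per second per hertz; the channel parameters below are invented, and a simple Shannon-capacity rate model is assumed.

    # Toy EE/SE trade-off illustration (all channel values are invented assumptions).
    import math

    def energy_efficiency(rate_bps, power_w):
        return rate_bps / power_w              # bits per joule

    def spectral_efficiency(rate_bps, bandwidth_hz):
        return rate_bps / bandwidth_hz         # bit/s/Hz

    bandwidth = 10e6                           # 10 MHz channel (assumed)
    noise_w = 1e-9                             # receiver noise power (assumed)
    gain = 1e-7                                # path gain (assumed)
    for tx_power in (0.5, 1.0, 2.0, 4.0):      # watts
        rate = bandwidth * math.log2(1 + gain * tx_power / noise_w)   # Shannon capacity
        print(f"P={tx_power} W  SE={spectral_efficiency(rate, bandwidth):.2f} bit/s/Hz  "
              f"EE={energy_efficiency(rate, tx_power)/1e6:.1f} Mbit/J")

Raising transmit power increases SE but lowers EE in this toy model, which is the tension the thesis balances.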
APA, Harvard, Vancouver, ISO, and other styles
26

Chevalier, Arthur. "Optimisation du placement des licences logicielles dans le Cloud pour un déploiement économique et efficient." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN071.

Full text
Abstract:
This thesis takes place in the field of Software Asset Management (SAM): license management, usage rights, and compliance with contractual rules. When talking about proprietary software, these rules are often misinterpreted or totally misunderstood. In exchange for the fact that we are free to license our usage as we see fit, in compliance with the contract, publishers have the right to audit. They can check that the rules are being followed and, if they are not respected, they can impose penalties, often financial ones. This can lead to disastrous situations, such as the lawsuit between AB InBev and SAP, where the latter claimed a USD 600 million penalty. The emergence of the Cloud has greatly increased the problem, because software usage rights were not originally intended for this type of architecture.
After an academic and industrial history of Software Asset Management, from its roots to the most recent work on the Cloud and software identification, we look at the licensing methods of major publishers such as Oracle, IBM and SAP before introducing the various problems inherent in SAM. The lack of standardization in metrics, specific usage rights, and the difference in paradigm brought about by the Cloud, and soon the virtualized network, make the situation more complicated than it already was. Our research is oriented towards modeling these licenses and metrics in order to abstract away the legal and blurry side of contracts. This abstraction allows us to develop software placement algorithms that ensure that contractual rules are respected at all times. This licensing model also allows us to introduce a deployment heuristic that optimizes several criteria at the time of software placement, such as performance, energy and the cost of licenses. We then introduce the problems associated with deploying multiple software products at the same time while optimizing these same criteria, and prove the NP-completeness of the associated decision problem. In order to meet these criteria, we present a placement algorithm that approaches the optimal and uses the above heuristic. In parallel, we have developed a SAM tool that uses this research to offer automated and totally generic software management in a Cloud architecture. All this work has been conducted in collaboration with Orange and tested in different proofs of concept before being fully integrated into the SAM tool.
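To make the flavour of such a multi-criteria placement heuristic concrete, here is a minimal greedy sketch that scores candidate hosts on license cost, energy and a performance penalty. The weights, normalisation, metric model and host attributes are all invented for illustration; this is not the thesis's algorithm.

    # Hedged sketch of a multi-criteria placement heuristic (weights and host data invented).
    def place(software, hosts, weights=(0.4, 0.3, 0.3)):
        """Pick the host minimising a weighted sum of license cost, energy and perf. penalty."""
        w_lic, w_energy, w_perf = weights
        def cost(host):
            lic = host["license_units"][software] * host["unit_price"]   # metric-dependent cost
            return (w_lic * lic / 1000.0              # normalise cost (assumed scale)
                    + w_energy * host["power_kw"] / 10.0
                    + w_perf * (1.0 - host["perf_index"]))
        return min(hosts, key=cost)

    hosts = [
        {"name": "h1", "license_units": {"db": 8}, "unit_price": 50, "power_kw": 4.0, "perf_index": 0.9},
        {"name": "h2", "license_units": {"db": 4}, "unit_price": 50, "power_kw": 6.0, "perf_index": 0.7},
    ]
    print(place("db", hosts)["name"])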
APA, Harvard, Vancouver, ISO, and other styles
27

Silva, Sidnei Gonçalves da. "Desenvolvimento de procedimentos limpos para extração de íons metálicos em ponto nuvem." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/46/46133/tde-28032008-094914/.

Full text
Abstract:
This work is focused on the development of green analytical procedures for cloud-point extraction and concentration of metal ions, aiming at increasing sensitivity, improving selectivity and avoiding the use of toxic solvents. Copper(II), iron(II) and nickel(II) were simultaneously extracted from plant material digests, using Triton X-114 as the extracting agent, after the formation of hydrophobic complexes with 1-(2-thiazolylazo)-2-naphthol (TAN). The analytes were determined by flame atomic absorption spectrometry (Cu, Fe and Ni) or by molecular absorption spectrophotometry (Fe). The determination of Cu and Ni by UV-vis spectrophotometry was hindered by spectral interferences caused by the formation of complexes with several metal ions in the sample digests. In the measurements by FAAS, the detection limits (99.7 % confidence level) were estimated as 1, 10 and 5 µg L-1 for copper, iron and nickel, respectively. Linear responses were observed in the 25-200 µg L-1 range for copper and iron and in the 5-80 µg L-1 range for nickel. The enrichment factor was estimated as 30 and the extraction was quantitative, as evaluated by measurements in the supernatant solution after extraction. For iron determination by UV-vis spectrophotometry, the detection limit was estimated as 1 µg L-1, with linear response between 6 and 60 µg L-1 and an enrichment factor of 20. After digestion with diluted acids in a microwave oven, the procedure was applied to the determination of the metals in certified reference materials, and the results agreed with the certified values at the 95 % confidence level. The reagent consumption was estimated as 50 mg Triton X-114 and 150 µg TAN per determination.
APA, Harvard, Vancouver, ISO, and other styles
28

Hrabčak, Miroslav. "Informační strategie firmy." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2011. http://www.nusl.cz/ntk/nusl-223080.

Full text
Abstract:
This Master's thesis presents an entrepreneurial plan for a corporate information strategy. The information strategy is based on an analysis of the company's current situation and of its environment, and makes use of modern trends in the IT business. It is a comprehensive solution that should help grow turnover, strengthen the company's brand and stabilize its market position.
APA, Harvard, Vancouver, ISO, and other styles
29

Melounek, Rudolf. "Trendy v oblasti IT a jejich uplatnění pro firemní sféru." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-81978.

Full text
Abstract:
This work deals with the IT market (especially hardware) and tracks its trends and development. The first chapters assess the current status and future development of strategic hardware components. Software technologies that affect the hardware are also mentioned: cloud computing, operating systems and the Internet. In addition, all relevant factors that should be taken into account when choosing a computer are included, for example the choice between a desktop and a laptop, or between a Mac and a PC (Windows). In the next section, typical corporate roles are analysed according to their computer performance requirements. Hardware recommendations for the individual user roles are the main purpose of this work.
APA, Harvard, Vancouver, ISO, and other styles
30

Мельник, Леонід Григорович, Леонид Григорьевич Мельник, Leonid Hryhorovych Melnyk та ін. ""Зелёное" производство в свете Третьей и Четвертой промышленных революций". Thesis, Одеса, Атлант, 2016. http://essuir.sumdu.edu.ua/handle/123456789/66706.

Full text
Abstract:
The essence of "green" production in the light of the Third and Fourth industrial revolutions is revealed. Examples are given of control systems that not only take on the function of optimizing production processes in space and time, but also act as an integrating principle, uniting the activities of many business units: virtual enterprises, horizontal distributed networks and "cloud" technologies.
APA, Harvard, Vancouver, ISO, and other styles
31

Orgerie, Anne-Cécile. "An Energy-Efficient Reservation Framework for Large-Scale Distributed Systems." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2011. http://tel.archives-ouvertes.fr/tel-00672130.

Full text
Abstract:
Over the past few years, the energy consumption of Information and Communication Technologies (ICT) has become a major issue. Nowadays, ICT accounts for 2% of the global CO2 emissions, an amount similar to that produced by the aviation industry. Large-scale distributed systems (e.g. Grids, Clouds and high-performance networks) are often heavy electricity consumers because -- for high-availability requirements -- their resources are always powered on even when they are not in use. Reservation-based systems guarantee quality of service, allow for respect of user constraints and enable fine-grained resource management. For these reasons, we propose an energy-efficient reservation framework to reduce the electric consumption of distributed systems and dedicated networks. The framework, called ERIDIS, is adapted to three different systems: data centers and grids, cloud environments and dedicated wired networks. By validating each derived infrastructure, we show that significant amounts of energy can be saved using ERIDIS in current and future large-scale distributed systems.
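As a hedged sketch of the reservation-based idea described in this abstract (not the ERIDIS framework itself), the function below keeps a resource powered only around its reserved slots and marks it for power-off when the idle gap is long enough to amortise the on/off overhead; the timings are assumptions.

    # Hedged sketch of reservation-driven power management (not the actual ERIDIS design).
    def power_schedule(reservations, boot_s=120, shutdown_s=60, min_off_gap_s=600):
        """Given (start, end) reservations in seconds, decide when the node can be off."""
        off_periods = []
        reservations = sorted(reservations)
        for (s1, e1), (s2, e2) in zip(reservations, reservations[1:]):
            gap = s2 - e1
            if gap >= min_off_gap_s + boot_s + shutdown_s:      # worth powering down
                off_periods.append((e1 + shutdown_s, s2 - boot_s))
        return off_periods

    res = [(0, 3600), (3700, 7200), (12000, 15000)]
    print(power_schedule(res))   # -> [(7260, 11880)]

The short 100-second gap is kept powered because switching off would cost more than it saves, which is the kind of decision a reservation framework can make because it knows the agenda in advance.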
APA, Harvard, Vancouver, ISO, and other styles
32

MORAES, Renato Ubaldo Moreira e. "SSACC -SERVIÇO DE SEGURANÇA PARA AUTENTICAÇÃO CIENTE DO CONTEXTO: para Dispositivos Móveis no Paradigma da Computação em Nuvem." Universidade Federal do Maranhão, 2014. http://tedebc.ufma.br:8080/jspui/handle/tede/291.

Full text
Abstract:
In recent years there has been massive adoption of smart mobile devices, known as smartphones, and with it a large increase in the consumption of information, mainly from the internet. To support this demand, numerous mechanisms have been created to ease the access, creation and storage of information, among which cloud computing is currently one of the best known and most widespread. Information has become increasingly important, and even vital, for some organisations, and because of its value it is highly coveted, often being the target of capture and espionage attempts.
To obtain confidential information, attackers use many techniques, the most common being the network scan, which can be described as "sweeps of computer networks aimed at identifying which computers are active and which services they make available. It is widely used by attackers to identify potential targets, because it makes it possible to associate potential vulnerabilities with the services enabled on a computer" [10]. According to [10], the number of attacks has been growing every year, as shown in Figures 1.1 and 1.2 in Section 1.1. Given this high number of incidents, the growth of information consumption through mobile devices and the need to reduce energy costs, the proposed Security Service for Context-Aware Authentication (Serviço de Segurança para Autenticação Ciente do Contexto, SSACC) is necessary today. The main goal of SSACC is to provide a secure channel for transferring files to a server, making use of context information and reducing energy waste, thereby saving resources and fitting within Green Computing. It is built on the Secure Sockets Layer (SSL), "a widely used protocol that provides secure communication through a network. It uses several different cryptographic processes to ensure that data sent through the network is secure. It provides a security enhancement for the standard Transport Control Protocol (TCP) / Internet Protocol (IP), which is used for communication on the internet. SSL uses public-key cryptography to provide authentication. The SSL protocol also uses private-key encryption and digital signatures to ensure the privacy and integrity of data" [26].
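The abstract's idea of a secure, context-aware file transfer can be sketched roughly with TLS (the modern successor of SSL). The host name, port, message framing and context fields below are purely illustrative assumptions, not the SSACC protocol.

    # Rough illustration only: a TLS-protected file upload carrying context metadata.
    # The server address, port and wire format are invented for this sketch.
    import json, socket, ssl

    def send_file_with_context(host, port, path, context):
        ctx = ssl.create_default_context()                 # verifies the server certificate
        with socket.create_connection((host, port)) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                tls.sendall(json.dumps({"context": context, "file": path}).encode() + b"\n")
                with open(path, "rb") as f:
                    while chunk := f.read(4096):
                        tls.sendall(chunk)

    # Example call with hypothetical server, file and context fields:
    # send_file_with_context("example.org", 4433, "report.bin",
    #                        {"location": "ward-3", "battery": 0.42, "time": "2014-09-26T10:00"})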
APA, Harvard, Vancouver, ISO, and other styles
33

Harris-Birtill, Rosemary. "Mitchell's mandalas : mapping David Mitchell's textual universe." Thesis, University of St Andrews, 2017. http://hdl.handle.net/10023/12255.

Full text
Abstract:
This study uses the Tibetan mandala, a Buddhist meditation aid and sacred artform, as a secular critical model by which to analyse the complete fictions of author David Mitchell. Discussing his novels, short stories and libretti, this study maps the author's fictions as an interconnected world-system whose re-evaluation of secular belief in galvanising compassionate ethical action is revealed by a critical comparison with the mandala's methods of world-building. Using the mandala as an interpretive tool to critique the author's Buddhist influences, this thesis reads the mandala as a metaphysical map, a fitting medium for mapping the author's ethical worldview. The introduction evaluates critical structures already suggested to describe the author's worlds, and introduces the mandala as an alternative which more fully addresses Mitchell's fictional terrain. Chapter I investigates the mandala's cartographic properties, mapping Mitchell's short stories as integral islandic narratives within his fictional world which, combined, re-evaluate the role of secular belief in galvanising positive ethical action. Chapter II discusses the Tibetan sand mandala in diaspora as a form of performance when created for unfamiliar audiences, reading its cross-cultural deployment in parallel with the regenerative approaches to tragedy in the author's libretti Wake and Sunken Garden. Chapter III identifies Mitchell's use of reincarnation as a form of non-linear temporality that advocates future-facing ethical action in the face of humanitarian crises, reading the reincarnated Marinus as a form of secular bodhisattva. Chapter IV deconstructs the mandala to address its theoretical limitations, identifying the panopticon as its sinister counterpart, and analysing its effects in number9dream. Chapter V shifts this study's use of the mandala from interpretive tool to emerging category, identifying the transferrable traits that form the emerging category of mandalic literature within other post-secular contemporary fictions, discussing works by Michael Ondaatje, Ali Smith, Yann Martel, Will Self, and Margaret Atwood.
APA, Harvard, Vancouver, ISO, and other styles
34

Simensen, Kamilla Notto, and 倪米娜. "Green Cloud Computing." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/70837875238023946706.

Full text
Abstract:
碩士<br>輔仁大學<br>國際創業與經營管理學程碩士在職專班<br>101<br>Cloud computing has become very popular over the years, and that for many good reasons. Allowing users to have gather all their necessary information in a cloud, making them able to access it at anytime from anywhere has made life a lot easier for not only individuals, but also for organizations and businesses. Cloud computing was estimated to reduce the pollution generated form the initial datacenter and was therefore a great solution, not just for the end users, but also for the environment. However, as the number of users has increased, the theory of less pollution seems to become more and more untrue. What this paper speaks about is first of all definition of cloud computing, what aspects of cloud computing can be made more energy efficient and how to improve cloud computing to become “greener”. It then goes over to describing a possible solution; Green Cloud Computing. What are the pros and cons of green cloud computing and giving reasoning for why this is a potential solution that will benefit not just the environment, but also the users. To support the arguments there are two case studies included. The main findings through the studies has been that green cloud computing can easily be accomplished with just a few adjustments, and to a cloud provider it will not have a negative impact on their profit. Additionally there are a few issues that is relevant such as time, commitment and greening.
APA, Harvard, Vancouver, ISO, and other styles
35

LIN, YI-CHUNG, and 林怡沖. "Green Power Management on Cloud Data Center." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/68883266035643161645.

Full text
Abstract:
碩士<br>國立臺灣科技大學<br>電機工程系<br>100<br>Recently, cloud computing technology has becoming a popular topic within government agencies, private companies and research institutes. They all focus on development for new cloud computing technologies and applications. Google, Amazon and Microsoft, cloud computing providers, build cloud data centers all around the world. The large data center is considered would bring high demand for electricity and carbon dioxide emissions. Since depletion of global resources, rising costs of electricity and environmental protection, energy saving issues obtain progressive attentions. Extensively there are some state of the art research issues of cloud computing that regardless of the hardware implementation, such as resource management, operational efficiency, power consumption, and even the service location. In cloud computing large data center server operation, energy loss occurs when power supply performs AC to DC conversion likewise the server motherboard DC buck circuit losses. This study exploits capabilities of Intel platform and Lite-on of power supply to conduct power consumption simulation analysis and finally apply intelligent power management control. It can monitor the system load status through microcontroller, and according to the load level of power supply, the system will be controlled to keep the power supply operation for highly efficient with effective range of power consumption to avoid overload system operation. By actual measurement and analysis, at light load power supply operation, its power conversion efficiency was very low, on the other hand on low load operation, the wasted power rate is very high. By implementing the proposed power control method power conversion efficiency at light loads can be improved up to 17.54%. Meanwhile, on the low loading operation, the additional power consumption can be reduced. Conclusively, during rising electricity situation, the proposed method can effectively reduce data center operating costs.
APA, Harvard, Vancouver, ISO, and other styles
36

Fioccola, Giovanni Battista. "Green Resource Management in Distributed Cloud Infrastructures." Tesi di dottorato, 2016. http://www.fedoa.unina.it/10742/1/Fioccola_Giovanni_Battista_27.pdf.

Full text
Abstract:
Computing has evolved over time according to different paradigms, along with an increasing need for computational power. Modern computing paradigms basically share the same underlying concept of Utility Computing, that is, a service-provisioning model through which a shared pool of computing resources is used by a customer when needed. The objective of Utility Computing is to maximize resource utilization and bring down the relative costs. Nearly a decade ago, the concept of Cloud Computing emerged as a virtualization technique where services were executed remotely in a ubiquitous way, providing scalable and virtualized resources. The spread of Cloud Computing has also been encouraged by the success of virtualization, which is one of the most promising and efficient techniques to consolidate systems' utilization on one side, and to lower power, electricity charges and space costs in data centers on the other. In the last few years, there has been a remarkable growth in the number of data centers, which represent one of the leading sources of increased business data traffic on the Internet. An effect of the growing scale and the wide use of data centers is the dramatic increase in power consumption, with significant consequences in terms of both environmental and operational costs. In addition to power consumption, the carbon footprint of Cloud infrastructures is also becoming a serious concern, since a lot of power is generated from non-renewable sources. Hence, energy awareness has become one of the major design constraints for Cloud infrastructures. In order to face these challenges, a new generation of energy-efficient and eco-sustainable network infrastructures is needed. In this thesis, a novel energy-aware resource orchestration framework for distributed Cloud infrastructures is discussed. The aim is to explain how both network and IT resources can be managed while, at the same time, the overall power consumption and carbon footprint are being minimized. To this end, an energy-aware routing algorithm and an extension of the OSPF-TE protocol to distribute energy-related information have been implemented.
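To illustrate the flavour of energy-aware routing mentioned at the end of this abstract (not the algorithm actually implemented in the thesis), the sketch below runs a shortest-path search over links weighted by an assumed per-link power cost, so traffic is steered onto the most energy-frugal feasible path; the topology and costs are invented.

    # Hedged sketch of energy-aware routing: Dijkstra over per-link power costs (invented values).
    import heapq

    def energy_aware_path(graph, src, dst):
        """graph: {node: [(neighbour, power_cost_w), ...]}. Returns (total_cost, path)."""
        queue, seen = [(0.0, src, [src])], set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, w in graph.get(node, []):
                if nxt not in seen:
                    heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
        return float("inf"), []

    g = {"A": [("B", 5.0), ("C", 2.0)], "B": [("D", 1.0)], "C": [("B", 1.0), ("D", 6.0)], "D": []}
    print(energy_aware_path(g, "A", "D"))   # -> (4.0, ['A', 'C', 'B', 'D'])

In a real deployment the per-link costs would come from energy-related information distributed by the routing protocol, which is the role the OSPF-TE extension plays in the thesis.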
APA, Harvard, Vancouver, ISO, and other styles
37

Yelinek, Jason-Alan, and 葉安捷. "Energy Efficiency and Green Energy in Cloud Computing." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/47872882132059291348.

Full text
Abstract:
碩士<br>國立臺灣大學<br>企業管理碩士專班<br>101<br>One of the key questions that should be asked when making technological innovation is whether or not one should make the innovation. With the breakout success of cloud computing and mobility computing devices such as smart phones, tablets, and laptops the question is more of how quickly innovation can get to the market rather than should it get to the market. Due to the inherent limitations of mobile devices, a massive cloud based infrastructure is needed behind the scenes to enable ubiquity of apps and connectivity. The cloud application and mobility fields are continuously growing as is the cloud infrastructure powering and keeping the lion’s share of the data. Here, the question not being asked is what is the real impact on the environment from cloud computing and how can this be mitigated. In a hope to make more informed decisions on the planning and commissioning of cloud computing data centers and data centers, this thesis explores the cloud computing landscape, energy efficiency, climate change and the intersection of each. There are great opportunities to reduce power consumption in cloud computing to save energy. There are also great opportunities to strategically apply green energy to the cloud. This thesis proposes a weighted levelized cost of energy metric to quantify the real cost and impact of different power sources. This is done at the global level as well as the local level in the United States and China. With a weighted levelized cost of energy, this thesis then explores regional opportunities within the United States and considers power mix migration scenarios to improve overall weighted cost. The final step is to create a data center model and investigate permutations of power mix and location in order to better understand cost impact savings opportunities.
APA, Harvard, Vancouver, ISO, and other styles
38

Chiao, Hsien Yuen, and 謝月嬌. "Study on Energy Management Model for Green Cloud Computing." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/09357761216259726897.

Full text
Abstract:
碩士<br>育達商業科技大學<br>資訊管理所<br>101<br>Abstract Cloud computing cost, efficiency and flexibility is the most attractive technology. As the world of over-development, increased global warming, the rise of environmental awareness, with the Earth the fever issue more and more people attach importance to save the planet, people began to pay attention to energy saving and carbon reduction. Other cloud service providers of the information center task load increase, will allow businesses to electricity consumption will increase significantly. Cloud computing center to manage a large number of power-hungry servers, and to accept a managed cloud applications software program, software execution consumes a lot of energy, so its high operating costs and how to save energy and reduce carbon environment to attract the world's attention. Therefore, this study is expected to identify cloud computing center green cloud computing solutions can not only create the environment of energy conservation can also reduce operating costs. Keyword:Cloud Computing、Green ICT、Energy conservation
APA, Harvard, Vancouver, ISO, and other styles
39

Lan, Zih-Heng, and 藍梓恆. "Green Supply Chain Network Design Combined with Cloud Computing." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/59278188613196184202.

Full text
Abstract:
碩士<br>國立東華大學<br>資訊工程學系<br>104<br>In the past decades, supply chain management has been commonly recognized as a key factor of business success. As the consciousness of energy depletion and environment protection grows, a critical issue arises as how to look after the cost, the quality of services and environmental concerns in the supply chain network design. At the same time, cloud services and cloud computing have revolutionized both academic research and industrial practices. Computing resources can be utilized efficiently with the optimal distribution of the virtual machines (VMs) among the physical machines, so that green computing is closer to realization. This study aims to establish a two-stage model of green supply chain network design by integrating cloud computing. In the first stage, the supply chain network design is fulfilled by considering three objectives: minimizing the total costs, minimizing the carbon emission and maximizing the satisfaction level of service. In the second stage, the dynamic placement model of VMs in the supply chain nodes is established by multi-objective programming for minimizing energy consumption, maximizing effectiveness of physical machine and minimizing the task waiting time. Experiments are implemented to verify the effectiveness of the proposed methods.
APA, Harvard, Vancouver, ISO, and other styles
40

Sigwele, Tshiamo, Prashant Pillai, and Yim Fun Hu. "iTREE: Intelligent Traffic and Resource Elastic Energy scheme for Cloud-RAN." 2015. http://hdl.handle.net/10454/11130.

Full text
Abstract:
Yes<br>By 2020, next generation (5G) cellular networks are expected to support a 1000 fold traffic increase. To meet such traffic demands, Base Station (BS) densification through small cells are deployed. However, BSs are costly and consume over half of the cellular network energy. Meanwhile, Cloud Radio Access Networks (C-RAN) has been proposed as an energy efficient architecture that leverage cloud computing technology where baseband processing is performed in the cloud. With such an arrangement, more energy gains can be acquired through statistical multiplexing by reducing the number of BBUs used. This paper proposes a green Intelligent Traffic and Resource Elastic Energy (iTREE) scheme for C-RAN. In iTREE, BBUs are reduced by matching the right amount of baseband processing with traffic load. This is a bin packing problem where items (BS aggregate traffic) are to be packed into bins (BBUs) such that the number of bins used are minimized. Idle BBUs can then be switched off to save energy. Simulation results show that iTREE can reduce BBUs by up to 97% during off peak and 66% at peak times with RAN power reductions of up to 27% and 18% respectively compared with conventional deployments.
APA, Harvard, Vancouver, ISO, and other styles
41

Liao, Yi-Hsuan, and 廖宜軒. "Green Computing: An SLA-based Energy-aware Methodology for Cloud Datacenters." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/69505752645882889303.

Full text
Abstract:
Master's thesis, 國立東華大學, 資訊工程學系, 101. Cloud computing offers utility-oriented services to users on a pay-as-you-go model and allows the hosting of common applications for consumer and business use. However, cloud data centers consume huge amounts of energy, incur high operational costs and produce massive carbon emissions. We therefore need a green cloud computing solution that not only reduces energy consumption and environmental impact but also minimizes operational cost. Commonly used green cloud computing solutions reduce the performance of the hosts in cloud data centers, which may cause Service Level Agreement (SLA) violations and incur penalties. In this thesis, we propose a new framework for energy-efficient cloud data centers that uses efficient VM allocation and selection mechanisms to reduce data center power consumption. Moreover, we utilize the Dynamic Voltage and Frequency Scaling (DVFS) technique to shorten host wake-up times. In addition, our framework addresses the increasingly important issue of SLAs and helps to decrease total operational costs.
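DVFS, used in this and several other entries, trades clock frequency against power. A minimal sketch of the general idea follows: pick the lowest available frequency at which a task still meets its deadline, with a rough cubic dynamic-power approximation. The frequency list, cycle count and power model are illustrative assumptions, not parameters from the thesis.

```python
# Pick the lowest CPU frequency (GHz) that still meets a task's deadline.
# Frequencies, the cycle count and the cubic dynamic-power approximation are
# illustrative assumptions, not values from the thesis.

AVAILABLE_FREQS_GHZ = [1.2, 1.6, 2.0, 2.4, 3.0]

def pick_frequency(task_cycles, deadline_s):
    """Return the lowest frequency whose execution time fits the deadline."""
    for f in sorted(AVAILABLE_FREQS_GHZ):
        exec_time = task_cycles / (f * 1e9)    # seconds at frequency f
        if exec_time <= deadline_s:
            return f
    return max(AVAILABLE_FREQS_GHZ)            # deadline too tight: run flat out

def relative_dynamic_power(f):
    """Dynamic power grows roughly with f^3 (P ~ C * V^2 * f, with V ~ f)."""
    return (f / max(AVAILABLE_FREQS_GHZ)) ** 3

f = pick_frequency(task_cycles=3e9, deadline_s=2.0)
print(f"run at {f} GHz, ~{relative_dynamic_power(f):.0%} of peak dynamic power")
```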
APA, Harvard, Vancouver, ISO, and other styles
42

LO, CHIEN-SHENG, and 羅建盛. "System Integration and Performance Verification of Cloud Green Energy Smart Microgrid." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/89wgrk.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Chung-Han, Hsieh, and 謝宗翰. "A Green Cloud-assisted Health Monitoring Service on Wireless Body Area Networks." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/77002894603193133689.

Full text
Abstract:
Master's thesis, 國立宜蘭大學, 資訊工程學系碩士班, 103. As cloud computing and wireless body area network (WBAN) technologies mature, related applications have become increasingly popular in recent years. Healthcare is one of the popular applications of this technology: sensor devices sense signals of adverse physiological events and notify users. Developing and implementing long-term healthcare monitoring that can prevent, or quickly respond to, disease and accidents presents an interesting challenge given computing power and energy limits. This study proposes a green cloud-assisted healthcare service on WBANs that considers the sensing frequency of the physiological signals of various body parts as well as the data transmission among the WBAN sensor nodes. The cloud-assisted healthcare service regulates the sensing frequency of the nodes by considering the overall WBAN environment and the sensing variations of the body parts. The experimental results show that the proposed service can effectively transmit sensing data and prolong the overall lifetime of the WBAN.
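The service above regulates each node's sensing frequency according to how much its readings vary. A minimal sketch of such an adaptive-sampling rule is given below; the variation thresholds and rate bounds are illustrative assumptions rather than the thesis's actual parameters.

```python
# Adjust a body-sensor node's sampling rate from the recent variation of its
# readings: stable signals are sampled less often to save node energy.
# Thresholds and rate bounds are illustrative assumptions.

from statistics import pstdev

def next_sampling_rate_hz(recent_readings, current_rate_hz,
                          min_rate_hz=0.2, max_rate_hz=5.0,
                          low_var=0.05, high_var=0.5):
    """Return the sampling rate to use for the next sensing window."""
    variation = pstdev(recent_readings) if len(recent_readings) > 1 else 0.0
    if variation > high_var:                  # signal changing fast: sample more
        return min(current_rate_hz * 2.0, max_rate_hz)
    if variation < low_var:                   # signal stable: back off to save energy
        return max(current_rate_hz / 2.0, min_rate_hz)
    return current_rate_hz                    # keep the current rate

# Example: a stable temperature stream halves its sampling rate.
print(next_sampling_rate_hz([36.5, 36.5, 36.6, 36.5], current_rate_hz=1.0))
```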
APA, Harvard, Vancouver, ISO, and other styles
44

GUAZZONE, MARCO. "Power and Performance Management in Cloud Computing Systems." Doctoral thesis, 2012. http://hdl.handle.net/2318/141162.

Full text
Abstract:
Cloud computing is an emerging computing paradigm which is gaining popularity in IT industry for its appealing property of considering "Everything as a Service". The goal of a cloud infrastructure provider is to maximize its profit by minimizing the amount of violations of Quality-of-Service (QoS) levels agreed with service providers, and, at the same time, by lowering infrastructure costs. Among these costs, the energy consumption induced by the cloud infrastructure, for running cloud services, plays a primary role. Unfortunately, the minimization of QoS violations and, at the same time, the reduction of energy consumption is a conflicting and challenging problem. In this thesis, we propose a framework to automatically manage computing resources of cloud infrastructures in order to simultaneously achieve suitable QoS levels and to reduce as much as possible the amount of energy used for providing services. We show, through simulation, that our approach is able to dynamically adapt to time-varying workloads (without any prior knowledge) and to significantly reduce QoS violations and energy consumption with respect to traditional static approaches.
APA, Harvard, Vancouver, ISO, and other styles
45

GUAZZONE, Marco. "Power and Performance Management in Cloud Computing Systems." Doctoral thesis, 2012. http://hdl.handle.net/11579/68600.

Full text
Abstract:
Cloud computing is an emerging computing paradigm which is gaining popularity in IT industry for its appealing property of considering "Everything as a Service". The goal of a cloud infrastructure provider is to maximize its profit by minimizing the amount of violations of Quality-of-Service (QoS) levels agreed with service providers, and, at the same time, by lowering infrastructure costs. Among these costs, the energy consumption induced by the cloud infrastructure, for running cloud services, plays a primary role. Unfortunately, the minimization of QoS violations and, at the same time, the reduction of energy consumption is a conflicting and challenging problem. In this thesis, we propose a framework to automatically manage computing resources of cloud infrastructures in order to simultaneously achieve suitable QoS levels and to reduce as much as possible the amount of energy used for providing services. We show, through simulation, that our approach is able to dynamically adapt to time-varying workloads (without any prior knowledge) and to significantly reduce QoS violations and energy consumption with respect to traditional static approaches.
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, Kuan-Chieh, and 王冠傑. "Implementation of a Green Power Management Algorithm for Virtual Machines on Cloud Computing Environments." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/22531055284833133653.

Full text
Abstract:
Master's thesis, 東海大學, 資訊工程學系, 99. With the rapid growth of e-government and e-business services, demand keeps rising and more data centers must be built to provide them, which consumes considerable power. In particular, when a service runs at low utilization, such as 10% processor utilization, the server still draws at least 60% of its peak power, so idle resources waste energy and data center energy efficiency is low. The purpose of this thesis is to use virtualization technology and power management to achieve energy-saving targets. We focus on developing power management algorithms, implement a green cloud management platform for virtual machines, deploy web applications on it, and run experiments to test performance and power savings. We show that Green Power Management (GPM) effectively saves energy when services run at low utilization by consolidating them through virtual machine migration.
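Consolidation by VM migration at low utilization, as described above, can be illustrated with a simple utilization-threshold rule: hosts running below a threshold try to hand their VMs to busier hosts so the emptied hosts can be switched off. The 30% threshold, the greedy target choice and the data structures below are assumptions for illustration, not the GPM algorithm itself.

```python
# Illustrative consolidation pass: hosts below a utilization threshold migrate
# their VMs to other hosts so the emptied hosts can be powered off.

LOW_UTIL_THRESHOLD = 0.30      # assumed threshold, not from the thesis

def utilization(host):
    return sum(vm["cpu"] for vm in host["vms"]) / host["capacity"]

def consolidate(hosts):
    """Return the hosts that became empty and can be switched off."""
    powered_off = []
    for src in sorted(hosts, key=utilization):             # start with the emptiest host
        if not src["vms"] or utilization(src) >= LOW_UTIL_THRESHOLD:
            continue
        for vm in list(src["vms"]):
            # Greedily pick the busiest other powered-on host with room for this VM.
            targets = [h for h in hosts
                       if h is not src
                       and all(h is not off for off in powered_off)
                       and utilization(h) + vm["cpu"] / h["capacity"] <= 1.0]
            if not targets:
                break                                       # cannot empty this host
            dst = max(targets, key=utilization)
            src["vms"].remove(vm)
            dst["vms"].append(vm)
        if not src["vms"]:
            powered_off.append(src)                         # host is empty: switch it off
    return powered_off
```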
APA, Harvard, Vancouver, ISO, and other styles
47

Chiang, Chunyen, and 江俊彥. "The Application of Cloud Service in CRM - The Cases Study of Green Products in Taiwan." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/16107775257566434526.

Full text
Abstract:
Master's thesis, 大葉大學, 企業管理學系碩士班, 100. As environmental protection and sustainable development are valued worldwide, many countries, Taiwan included, have promoted green issues and actions. For a country, advocating the green concept can enhance the nation's quality of life; for a business, a green image is consistent with the current trend of environmental protection and can enhance the corporate image. Customer relationship management and business image are closely related. Environmental protection is an important issue, and cloud services are one way to reduce environmental pollution. It is therefore worth investigating whether a business can strengthen its closeness with customers and create a triple-win for profits, environmental protection and customers through a green product image and cloud service techniques. This study uses green product manufacturers in Taiwan to examine the relations among customer relationship management, cloud services and marketing performance. The empirical results indicate that: 1. promoting customer relationship management can increase marketing performance; 2. customer relationship management has a positive effect on cloud services; 3. use of cloud services can increase marketing performance; 4. cloud services mediate the effect of customer relationship management on marketing performance.
APA, Harvard, Vancouver, ISO, and other styles
48

Lin, Chingkun, and 林敬堃. "Cloudy Computing with Android Terminal Devices in Green Energy Management System." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/18283117953175761089.

Full text
Abstract:
Master's thesis, 大葉大學, 電機工程學系, 100. In this thesis, cloud computing combined with Android-based wireless communication is applied to implement a monitoring system for a green energy management system. The green energy system includes three subsystems: a mobile terminal device, a cloud server and a green charging system. To achieve real-time and transparent operation, WiFi is adopted as the key transmission technique, and the cloud computing concept is applied to the overall management. In addition, the implemented system is set up for real-time experiments on the campus of Dayeh University, and the experimental results are gathered for comparison.
APA, Harvard, Vancouver, ISO, and other styles
49

Wei, Ming-Yi, and 魏銘誼. "Cloud Management and Wireless Sensing Network to Smart Environment – Case Study on the Design of Green Campus." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/c3ua39.

Full text
Abstract:
Master's thesis, 國立臺北科技大學, 電機工程系, 106. Applying Internet of Things (IoT) technology to city management has become a popular trend under global urbanization, with most cases applied to housing, transportation and healthcare. Sustainable management and environmental impact have become urgent issues due to increasingly serious global warming and climate change. Studies of the smart environment are gaining attention; however, most focus on managing negative impacts, such as monitoring air pollution and water contamination, and therefore lack continuous monitoring of positive constructions, such as examining the performance of urban resilience, sponge cities and low-carbon communities. To establish a smart environment, green engineering and data analysis should not be ignored. From this broad perspective, the study develops cross-disciplinary research combining civil engineering and electrical engineering, and provides a feasible information system for managing green engineering by applying IoT technology to monitor urban green constructions, based on a multi-function module consisting of a front-end sensor network, middle-end wireless transmission and back-end cloud management. The research analyzes ambient temperature, humidity and power generation under different temporal background conditions, based on an evaluation system for green constructions. By examining the energy-saving efficiency of green roofs, permeable pavements and PV roofs, the results can serve as a reference for urban governance in calculating and evaluating energy saving and carbon reduction.
APA, Harvard, Vancouver, ISO, and other styles
50

Chang, Yi-Ming, and 張一鳴. "Adopting Service Experience Engineering Method to Explore Printed Circuit Board Industry and The Contribution of Green Material Cloud Service Platform." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/97826231060592811088.

Full text
Abstract:
Master's thesis, 臺北市立大學, 資訊科學系碩士在職專班, 104. Cloud services have become a way to enhance enterprise competitiveness in today's society. However, different industries have different priorities for cloud services, and cloud services can help an industry grow only when its particular needs are met. This paper studies the impact of a green material cloud service platform on Taiwan's IT manufacturing industries. First, the printed circuit board industry, a representative sector of Taiwan's IT manufacturing industries, is chosen as the research subject: the industry's trends are explored (FIND) through the service experience engineering (SEE) method, value chain research of the service is conducted (Innovation Net), an innovative service model is shaped, and finally a proof of concept (POC) is carried out in the service experiment (Design Lab). Second, we focus on the concept and business model of the "green material cloud platform" and interview its main promoter, Mr. Bai Guanghua, deputy general manager of GIGABYTE, to learn how the platform works conceptually and to understand its service and business model. Finally, this paper compares and summarizes the innovative cloud platform for the printed circuit board industry and the green material cloud platform, draws the key conclusions, and, based on them, explores the service, value and future prospects of the green material cloud service platform for Taiwan's IT manufacturing industry.
APA, Harvard, Vancouver, ISO, and other styles