
Dissertations / Theses on the topic 'Cloud Computing Performance'


Consult the top 50 dissertations / theses for your research on the topic 'Cloud Computing Performance.'


1

Al-Refai, Ali, and Srinivasreddy Pandiri. "Cloud Computing : Trends and Performance Issues." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3672.

Full text
Abstract:
Context: Cloud Computing is a fascinating concept these days; it is attracting many organizations to move their utilities and applications into dedicated data centers so that they can be accessed from the Internet. This allows users to focus solely on their businesses while Cloud Computing providers handle the technology. Choosing the best provider is a challenge for organizations that are willing to step into the Cloud Computing world. A single cloud center generally cannot deliver resources at large scale to cloud tenants; therefore, multiple cloud centers need to collaborate to achieve business goals and to provide the best possible services at the lowest possible cost. However, a number of aspects, legal issues, challenges, and policies should be taken into consideration when moving a service into the Cloud environment. Objectives: The aim of this research is to identify and elaborate the major technical and strategic differences between Cloud Computing providers, in order to give organizations' management, system designers, and decision makers better insight into the strategies of the different providers. It also aims to understand the risks and challenges of implementing Cloud Computing, and how those issues can be moderated. This study tries to define Multi-Cloud Computing by studying the pros and cons of this new domain, and it also studies the concept of load balancing in the cloud in order to examine performance over multiple cloud environments. Methods: In this master's thesis a number of research methods are used, including a systematic literature review, interviews with experts from the relevant field, and a quantitative experiment.
Results: Based on the findings of the literature review, interviews, and experiment, the research questions are answered with: 1) a comprehensive study identifying and comparing the major Cloud Computing providers; 2) a list of impacts of Cloud Computing (legal aspects, trust, and privacy); 3) a definition of Multi-Cloud Computing and its benefits and drawbacks; 4) performance results for the cloud environment, obtained through an experiment on a load balancing solution. Conclusions: Cloud Computing has become a central interest for many organizations. More and more companies are stepping into Cloud Computing service technologies; Amazon, Google, Microsoft, SalesForce, and Rackspace are the top five providers in the market today. However, no single cloud is perfect for all services. The legal framework is very important for the protection of users' private data; it is a key factor for the safety of users' personal and sensitive information. Privacy threats vary according to the nature of the cloud scenario: some clouds and services face far lower privacy threats than others, and the public cloud accessed through the Internet is among the most exposed to growing privacy concerns. Lack of visibility into the provider's supply chain leads to suspicion and ultimately distrust. The evolution of Cloud Computing suggests that, in the near future, the so-called Cloud will in fact be a multi-cloud environment composed of a mixture of private and public clouds forming an adaptive environment. Load balancing in the Cloud Computing environment differs from typical load balancing: cloud load-balancing architectures use a number of commodity servers to perform the balancing, and the performance of the cloud differs depending on the cloud's location, even for the same provider.
The HAProxy load balancer shows a positive effect on the cloud's performance at high load; the effect is unnoticeable at lower loads and can vary depending on the location of the cloud.
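The observation that a load balancer helps mainly under high load can be illustrated with a small queueing sketch. This is a hypothetical illustration, not the thesis's actual experiment: it models each cloud server as an M/M/1 queue and compares one server against two servers behind round-robin balancing.

```python
# Hypothetical sketch: why a load balancer helps mainly at high load.
# Each server is modeled as an M/M/1 queue with mean response time
# 1 / (mu - lam) for arrival rate lam below service rate mu.

def response_time(arrival_rate, service_rate):
    """Mean M/M/1 response time; requires arrival_rate < service_rate."""
    assert arrival_rate < service_rate, "server overloaded"
    return 1.0 / (service_rate - arrival_rate)

def single_server(load, mu=100.0):
    # All traffic hits one server.
    return response_time(load, mu)

def round_robin_two_servers(load, mu=100.0):
    # Round-robin splits traffic evenly across two identical servers.
    return response_time(load / 2.0, mu)

for load in (10.0, 90.0):  # requests/sec: light vs. heavy load
    t1 = single_server(load)
    t2 = round_robin_two_servers(load)
    print(f"load={load:5.1f} req/s  single={t1 * 1000:6.2f} ms  "
          f"balanced={t2 * 1000:6.2f} ms  speedup={t1 / t2:4.2f}x")
```

At a light load of 10 req/s the speedup is marginal, while at 90 req/s (near saturation) the balanced pair is several times faster, matching the pattern the experiment reports.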
APA, Harvard, Vancouver, ISO, and other styles
2

Mani, Sindhu. "Empirical Performance Analysis of High Performance Computing Benchmarks Across Variations in Cloud Computing." UNF Digital Commons, 2012. http://digitalcommons.unf.edu/etd/418.

Full text
Abstract:
High Performance Computing (HPC) applications are data-intensive scientific software requiring significant CPU and data storage capabilities. Researchers have examined the performance of the Amazon Elastic Compute Cloud (EC2) environment across several HPC benchmarks; however, an extensive HPC benchmark study and a comparison between Amazon EC2 and Windows Azure (Microsoft's cloud computing platform), with metrics such as memory bandwidth, Input/Output (I/O) performance, and communication and computational performance, are largely absent. The purpose of this study is to perform an exhaustive HPC benchmark comparison on the EC2 and Windows Azure platforms. We implement existing benchmarks to evaluate and analyze the performance of two public clouds spanning both IaaS and PaaS types. We use Amazon EC2 and Windows Azure as platforms for hosting HPC benchmarks, with variations such as instance types, number of nodes, hardware, and software. This is accomplished by running the STREAM, IOR, and NPB benchmarks on these platforms with a varied number of nodes for small and medium instance types. These benchmarks measure memory bandwidth, I/O performance, and communication and computational performance. Benchmarking cloud platforms provides useful objective measures of their worthiness for HPC applications, in addition to assessing their consistency and predictability in supporting them.
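The STREAM benchmark mentioned above measures sustained memory bandwidth via simple array kernels. A minimal triad-style probe can be sketched as follows; this is an illustration of the idea only, since the real STREAM benchmark is written in C and a pure-Python loop measures interpreter overhead far more than memory bandwidth:

```python
# Hypothetical sketch of a STREAM-triad-style memory-bandwidth probe,
# similar in spirit to one kernel of the benchmarks run on EC2 and Azure.
import time
import array

def stream_triad(n=1_000_000, scalar=3.0):
    """Triad kernel a[i] = b[i] + scalar * c[i]; returns estimated MB/s."""
    b = array.array("d", [1.0]) * n
    c = array.array("d", [2.0]) * n
    start = time.perf_counter()
    a = array.array("d", (b[i] + scalar * c[i] for i in range(n)))
    elapsed = time.perf_counter() - start
    # Triad touches 3 arrays of 8-byte doubles: 2 reads + 1 write per element.
    mbytes = 3 * 8 * n / 1e6
    assert len(a) == n
    return mbytes / elapsed

bw = stream_triad()
print(f"approximate triad rate: {bw:.0f} MB/s (Python overhead dominates)")
```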
3

Pelletingeas, Christophe. "Performance evaluation of virtualization with cloud computing." Thesis, Edinburgh Napier University, 2010. http://researchrepository.napier.ac.uk/Output/4010.

Full text
Abstract:
Cloud computing has been the subject of much research, which shows that it permits reducing hardware costs and energy consumption and allows more efficient use of servers. Many servers today are used inefficiently because they are underutilized. Cloud computing combined with virtualization has been a solution to this underutilization. However, virtualized performance with cloud computing cannot equal native performance. The aim of this project was to study the performance of virtualization with cloud computing. To meet this aim, previous research in the area was first reviewed, outlining the different types of cloud toolkits as well as the different ways to virtualize machines. In addition, open-source solutions for implementing a private cloud were examined. The findings of the literature review informed the design of the experiments and the choice of tools used to implement a private cloud. Experiments were set up to evaluate the performance of public and private clouds. The results obtained through those experiments outline the performance of the public cloud and show that virtualized Linux performs better than virtualized Windows; this is explained by the fact that Linux uses paravirtualization while Windows uses HVM. The evaluation of performance on the private cloud permitted comparison of native performance with paravirtualization and HVM: paravirtualization performs very close to native, unlike HVM. Finally, the costs of the different solutions and their advantages are presented.
4

Noureddine, Moustafa. "Enterprise adoption oriented cloud computing performance optimization." Thesis, University of East London, 2014. http://roar.uel.ac.uk/4026/.

Full text
Abstract:
Cloud computing in the enterprise has emerged as a new paradigm that brings both business opportunities and software engineering challenges. In Cloud computing, business participants such as service providers, enterprise solutions, and marketplace applications are required to adopt a Cloud architecture engineered for security and performance. One of the major hurdles to formal adoption of Cloud solutions in the enterprise is performance. Enterprise applications (e.g., SAP, SharePoint, Yammer, Lync Server, and Exchange Server) require a mechanism to predict and manage performance expectations in a secure way. This research addresses two areas of performance challenges: capacity planning, to ensure resources are provisioned in a way that meets requirements while minimizing total cost of ownership; and optimization of authentication protocols that enable enterprise applications to authenticate with each other and meet the performance requirements of enterprise servers, including third-party marketplace applications. For the first set of optimizations, the theory was formulated using a stochastic process where multiple experiments were monitored and data collected over time. The results were then validated using a real-life enterprise product, Lync Server. The second set of optimizations was achieved by introducing provisioning steps to pre-establish trust among enterprise application servers, the associated authorisation server, and the clients interested in access to protected resources. In this architecture, trust is provisioned and synchronized as a prerequisite step to authentication among all communicating entities in the authentication protocol, and referral tokens are used to establish trust federation for marketplace applications across organizations. Various case studies and validation on commercially available products were used throughout the research to illustrate the concepts.
Such performance optimizations have proved to help enterprise organizations meet their scalability requirements. Some of the work produced has been adopted by Microsoft and made available as a downloadable tool used by customers around the globe, assisting them with Cloud adoption.
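The pre-established-trust idea described above can be sketched with a toy token scheme: servers whose keys are provisioned ahead of time can issue and validate signed tokens without per-request trust negotiation. All names and the HMAC-based token format here are illustrative assumptions, not the thesis's actual protocol:

```python
# Hypothetical sketch of pre-provisioned trust between application servers.
# The registry, naming, and token format are illustrative only.
import hmac
import hashlib

TRUSTED_KEYS = {}  # server_id -> shared secret, provisioned ahead of time

def provision_trust(server_id, shared_secret):
    """Pre-establish trust before any authentication traffic flows."""
    TRUSTED_KEYS[server_id] = shared_secret

def issue_token(server_id, claims):
    """Sign claims with the pre-provisioned secret (HMAC referral token)."""
    secret = TRUSTED_KEYS[server_id]
    sig = hmac.new(secret, claims.encode(), hashlib.sha256).hexdigest()
    return f"{claims}.{sig}"

def validate_token(server_id, token):
    """Verify a token against the secret provisioned for server_id."""
    claims, sig = token.rsplit(".", 1)
    expected = hmac.new(TRUSTED_KEYS[server_id], claims.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

provision_trust("app-server-1", b"s3cret")
tok = issue_token("app-server-1", "resource=calendar;scope=read")
print(validate_token("app-server-1", tok))  # True
```

Because trust is established in a provisioning step, the authentication path itself reduces to a cheap signature check, which is the performance benefit the research targets.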
5

Penmetsa, Jyothi Spandana. "AUTOMATION OF A CLOUD HOSTED APPLICATION : Performance, Automated Testing, Cloud Computing." Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12849.

Full text
Abstract:
Context: Software testing is the process of assessing the quality of a software product to determine whether it matches the customer's requirements. Software testing is one of the "Verification and Validation" (V&V) software practices. The two basic techniques of software testing are black-box testing and white-box testing. Black-box testing focuses solely on the outputs generated in response to the supplied inputs, neglecting the internal components of the software, whereas white-box testing focuses on the internal mechanism of the software. To explore the feasibility of black-box and white-box testing under a given set of conditions, a proper test automation framework needs to be deployed. Automation is deployed to reduce manual effort and to perform testing continuously, thereby increasing the quality of the product. Objectives: In this research, a cloud-hosted application is automated using the TestComplete tool. The objective of this thesis is to verify the functionality of the cloud application, such as the test appliance library, through automation, and to measure the impact of the automation on the organisation's release cycles. Methods: Automation is implemented using the Scrum methodology, an agile software development process. Using Scrum, working software can be delivered to customers incrementally and empirically, with functionality updated in each increment. The test appliance library functionality is verified by deploying a testing device, keeping track of automatic software downloads to the device and license updates on it. Results: Automation of the test appliance functionality of the cloud-hosted application is implemented using the TestComplete tool, and the impact of automation on release cycles is measured.
Through automation of the cloud-hosted application, a reduction of nearly 24% in release cycles is observed, thereby reducing manual effort and increasing the quality of delivery. Conclusion: Automation of a cloud-hosted application eliminates manual effort, so time can be used effectively and the application can be tested continuously, increasing the efficiency and quality of delivery.
6

Roloff, Eduardo. "Viability and performance of high-performance computing in the cloud." Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/79594.

Full text
Abstract:
Cloud computing is a new paradigm in which computational resources are offered as services. In this context, the user does not need to buy infrastructure; resources can be rented from a provider and used for a period of time. Furthermore, the user can easily allocate as many resources as needed, and deallocate them as well, in a totally elastic environment. The resources need to be paid for only for the effective usage time. On the other hand, High-Performance Computing (HPC) requires a large amount of computational power. To acquire systems capable of HPC, large financial investments are necessary. Apart from the initial investment, the user must pay maintenance costs and still has only limited computational resources. To overcome these issues, this thesis evaluates the cloud computing paradigm as a candidate environment for HPC. We analyze the efforts and challenges of porting and deploying HPC applications to the cloud. We evaluate whether this computing model can provide sufficient capacity for running HPC applications, and compare its cost efficiency to traditional HPC systems, such as clusters. The cloud computing paradigm was analyzed to identify which models have the potential to be used for HPC purposes. The identified models were then evaluated using major cloud providers, Microsoft Windows Azure, Amazon EC2, and Rackspace, and compared to a traditional HPC system. We analyzed the capabilities to create HPC environments and evaluated their performance. For the evaluation of cost efficiency, we developed an economic model. The results show that all the evaluated providers have the capability to create HPC environments. In terms of performance, there are some cases where cloud providers outperform the traditional system. From the cost perspective, the cloud presents an interesting alternative due to its pay-per-use model.
Summarizing the results, this dissertation shows that cloud computing can be used as a realistic alternative for HPC environments.
7

Sridharan, Suganya. "A Performance Comparison of Hypervisors for Cloud Computing." UNF Digital Commons, 2012. http://digitalcommons.unf.edu/etd/269.

Full text
Abstract:
The virtualization of IT infrastructure enables the consolidation and pooling of IT resources so that they can be shared across diverse applications, offsetting the limitation of shrinking resources and growing business needs. Virtualization provides a logical abstraction of physical computing resources and creates computing environments that are not restricted by physical configuration or implementation. Virtualization is very important for cloud computing because the delivery of services is simplified by providing a platform for optimizing complex IT resources in a scalable manner, which makes cloud computing more cost effective. The hypervisor plays an important role in the virtualization of hardware: it is a piece of software that provides a virtualized hardware environment to support running multiple operating systems concurrently on one physical server. Cloud computing has to support multiple operating environments, and the hypervisor is the ideal delivery mechanism. The intent of this thesis is to quantitatively and qualitatively compare the performance of the VMware ESXi 4.1, Citrix XenServer 5.6, and Ubuntu 11.04 Server KVM hypervisors using the standard benchmark SPECvirt_sc2010v1.01, formulated by the Standard Performance Evaluation Corporation (SPEC), under various workloads simulating real-life situations.
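A hypervisor comparison like the one described rests on running identical workloads in each environment and comparing timing statistics. The sketch below shows the shape of such a harness under stated assumptions: the workload and statistics are placeholders, and the real thesis uses the far more elaborate SPECvirt_sc2010 suite rather than a hand-rolled timer.

```python
# Hypothetical sketch of a benchmark harness of the kind used to compare
# the same workload across hypervisors (run once per environment, then
# compare the reported statistics). SPECvirt_sc2010 is far more elaborate.
import statistics
import time

def cpu_workload(n=50_000):
    # Simple CPU-bound kernel standing in for a benchmark workload.
    return sum(i * i for i in range(n))

def benchmark(workload, runs=5):
    """Time repeated runs of a workload and summarize the results."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return {"mean": statistics.mean(timings),
            "stdev": statistics.stdev(timings),
            "min": min(timings)}

results = benchmark(cpu_workload)
print(f"mean={results['mean'] * 1e3:.2f} ms  "
      f"stdev={results['stdev'] * 1e3:.2f} ms  "
      f"min={results['min'] * 1e3:.2f} ms")
```

Repeating runs and reporting variance matters here: consistency and predictability across runs are themselves evaluation criteria for virtualized platforms.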
8

Danielsson, Simon, and Staffan Johansson. "Cloud Computing - A Study of Performance and Security." Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20326.

Full text
Abstract:
Cloud Computing is the big buzzword of the IT world right now. It has become more and more popular in recent years, but questions have arisen about its performance and security. How safe is it, and is there any real difference in performance between a locally hosted server and a cloud-based server? This thesis examines these questions. A series of performance tests combined with a literature study were performed to achieve the results of this thesis. This thesis could be of use to those who have an interest in Cloud Computing but do not have much knowledge of it. The results can be used as an example of how future research in Cloud Computing can be done.
9

Calatrava, Arroyo Amanda. "High Performance Scientific Computing over Hybrid Cloud Platforms." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/75265.

Full text
Abstract:
Scientific applications generally have large computational, memory, and data-management requirements for their execution. Such applications have traditionally used high-performance resources, such as shared-memory supercomputers, clusters of PCs with distributed memory, or resources from Grid infrastructures, to which the application must be adapted to run successfully. In recent years, the advent of virtualization techniques, together with the emergence of Cloud Computing, has caused a major shift in the way these applications are executed. However, the execution management of scientific applications on high-performance elastic platforms is not a trivial task. In this doctoral thesis, Elastic Cloud Computing Cluster (EC3) has been developed. EC3 is an open-source tool able to execute high-performance scientific applications by creating self-managed, cost-efficient, virtual, hybrid, elastic clusters on top of IaaS Clouds. These self-managed clusters have the capability to adapt the size of the cluster, i.e. the number of nodes, to the workload, thus creating the illusion of a real cluster without requiring an investment beyond the actual usage. They can be fully customized and migrated from one provider to another in an automatic and transparent process for the users and jobs running in the cluster. EC3 can also deploy hybrid clusters across on-premises and public Cloud resources, where on-premises resources are supplemented with public Cloud resources to accelerate the execution process. Different instance types and the use of spot instances combined with on-demand resources are also cluster configurations supported by EC3. Moreover, using spot instances together with checkpointing techniques, the tool can significantly reduce the total cost of executions while introducing automatic fault tolerance.
EC3 is conceived to facilitate the use of virtual clusters for users who might not have extensive knowledge of these technologies but can benefit from them. Thus, the tool offers two interfaces: a web interface where EC3 is exposed as a service for non-expert users, and a powerful command-line interface. Moreover, this thesis explores the field of lightweight virtualization using containers as an alternative to the traditional virtualization solution based on virtual machines. This study analyzes the scenarios suitable for the use of containers and proposes an architecture for the deployment of elastic virtual clusters based on this technology. Finally, to demonstrate the functionality and advantages of the tools developed during this thesis, this document includes several use cases covering different scenarios and fields of knowledge, such as structural analysis of buildings, astrophysics, and biodiversity.
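The elastic-sizing behaviour described for EC3 (grow the cluster when jobs queue up, shrink it when nodes sit idle) can be sketched as a simple decision function. The thresholds, names, and policy here are illustrative assumptions, not EC3's actual implementation:

```python
# Hypothetical sketch of the elastic-sizing idea behind EC3: scale out
# when jobs are queued, scale in when nodes are idle. Policy is illustrative.
def resize_cluster(current_nodes, queued_jobs, idle_nodes,
                   min_nodes=1, max_nodes=10):
    """Return the new node count for a self-managed elastic cluster."""
    if queued_jobs > 0:
        # Scale out: one node per queued job, up to the provider cap.
        return min(max_nodes, current_nodes + queued_jobs)
    if idle_nodes > 0:
        # Scale in: release idle nodes, but keep a minimum alive.
        return max(min_nodes, current_nodes - idle_nodes)
    return current_nodes

assert resize_cluster(4, queued_jobs=3, idle_nodes=0) == 7
assert resize_cluster(8, queued_jobs=0, idle_nodes=5) == 3
assert resize_cluster(10, queued_jobs=2, idle_nodes=0) == 10  # capped
```

Because the cluster only pays for nodes while they exist, this kind of policy is what creates "the illusion of a real cluster without requiring an investment beyond the actual usage."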
Calatrava Arroyo, A. (2016). High Performance Scientific Computing over Hybrid Cloud Platforms [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/75265
10

Hutchins, Richard Chad. "Feasibility of virtual machine and cloud computing technologies for high performance computing." Thesis, Monterey, California. Naval Postgraduate School, 2013. http://hdl.handle.net/10945/42447.

Full text
Abstract:
Approved for public release; distribution is unlimited
Reissued May 2014 with additions to the acknowledgments
Knowing the future weather on the battlefield with high certainty can provide a significant advantage over the adversary. To create this advantage for the United States, the U.S. Navy utilizes the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) to create high-spatial-resolution, regional, numerical weather prediction (NWP) forecasts. To compute a forecast, COAMPS runs on high performance computing (HPC) systems. These HPC systems are large, dedicated supercomputers with little ability to scale or move, which makes them vulnerable to outages without a costly, equally powerful secondary system. Recent advancements in cloud computing and virtualization technologies provide a method for high mobility and scalability without sacrificing performance. This research used standard benchmarks to quantitatively compare a virtual machine (VM) to a native HPC cluster. The benchmark tests showed that the VM was a feasible platform for executing HPC applications. We then ran the COAMPS NWP model on a VM within a cloud infrastructure to prove the ability to run an HPC application in a virtualized environment. The VM COAMPS model run performed better than the native HPC machine model run. These results show that VM and cloud computing technologies can be used to run HPC applications for the Department of Defense.
APA, Harvard, Vancouver, ISO, and other styles
11

Agarwal, Dinesh. "Scientific High Performance Computing (HPC) Applications On The Azure Cloud Platform." Digital Archive @ GSU, 2013. http://digitalarchive.gsu.edu/cs_diss/75.

Full text
Abstract:
Cloud computing is emerging as a promising platform for compute- and data-intensive scientific applications. Thanks to its on-demand elastic provisioning capabilities, cloud computing has instigated curiosity among researchers from a wide range of disciplines. However, even though many vendors have rolled out their commercial cloud infrastructures, the service offerings are usually only best-effort based, without any performance guarantees. Utilization of these resources will be questionable if they cannot meet the performance expectations of deployed applications. Additionally, the lack of familiar development tools hampers the productivity of eScience developers writing robust scientific high performance computing (HPC) applications. There are no standard frameworks currently supported by any large set of vendors offering cloud computing services; consequently, application portability among different cloud platforms for scientific applications is hard. Among all clouds, the emerging Azure cloud from Microsoft in particular remains a challenge for HPC program development, due both to its lack of support for traditional parallel programming models such as the Message Passing Interface (MPI) and map-reduce, and to its evolving application programming interfaces (APIs). We have designed new frameworks and runtime environments to help HPC application developers by providing them with easy-to-use tools similar to those known from traditional parallel and distributed computing environments, such as MPI, for scientific application development on the Azure cloud platform. It is challenging to create an efficient framework for any cloud platform, including the Windows Azure platform, as they are mostly offered to users as a black box with a set of application programming interfaces (APIs) to access various service components. The primary contributions of this Ph.D. thesis are (i) creating a generic framework for bag-of-tasks HPC applications to serve as the basic building block for application development on the Azure cloud platform, (ii) creating a set of APIs for HPC application development over the Azure cloud platform, similar to the message passing interface (MPI) from the traditional parallel and distributed setting, and (iii) implementing Crayons using the proposed APIs as the first end-to-end parallel scientific application to parallelize fundamental GIS operations.
APA, Harvard, Vancouver, ISO, and other styles
12

Khan, Majid, and Muhammad Faisal Amin. "Web Server Performance Evaluation in Cloud Computing and Local Environment." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1965.

Full text
Abstract:
Context: Cloud computing is a concept in which a user gets services like SaaS, PaaS and IaaS by deploying their data and applications on remote servers. Users pay only for the time the resources are acquired; they do not need to install and upgrade software and hardware. Due to these benefits, organizations are willing to move their data into the cloud and minimize their overhead. Organizations need to confirm that the cloud can replace the traditional platform, software and hardware in an efficient way and provide robust performance. Web servers play a vital role in providing services and deploying applications, so one might be interested in how a web server performs in the cloud. With this aim, we have compared cloud server performance with a local web server. Objectives: The objective of this study is to investigate cloud performance. For this purpose, we first find the parameters and factors that affect web server performance. Finding the parameters helped us measure the actual performance of a cloud server on some specific tasks. These parameters will help users, developers and IT specialists measure cloud performance based on their requirements and needs. Methods: In order to fulfill the objective of this study, we performed a systematic literature review and an experiment. The systematic literature review was performed by studying articles from electronic sources including the ACM Digital Library, IEEE, and Ei Village (Compendex, Inspec). The snowball method was used to minimize the chance of missing articles and to increase the validity of our findings. In the experiment, two performance parameters (throughput and execution time) were used to measure the performance of the Apache web server in the local and cloud environments. Results: In the systematic literature review, we found many factors that affect the performance of a web server in cloud computing. The most common of them are throughput, response time, execution time, and CPU and other resource utilization. The experimental results revealed that the web server performed better in the local environment than in the cloud environment. But there are other factors, like cost overhead, software/hardware configuration and upgrades, and time consumption, due to which cloud computing cannot be neglected. Conclusions: The parameters that affect cloud performance are throughput, response time, execution time, CPU utilization and memory utilization. Increases and decreases in the values of these parameters can affect cloud performance to a great extent. The overall performance of a cloud is not that effective, but there are other reasons for using cloud computing.
APA, Harvard, Vancouver, ISO, and other styles
13

Belhareth, Sonia. "Performances réseaux et système pour le cloud computing." Thesis, Nice, 2014. http://www.theses.fr/2014NICE4146/document.

Full text
Abstract:
Cloud computing enables flexible access to computation and storage services. This requires, for the cloud service provider, mastering network and system issues. During this PhD thesis, we focused on the performance of TCP Cubic, which is the default version of TCP in Linux and thus widely used in today's data centers. Cloud environments feature low bandwidth-delay products (BDP) in the case of intra-data-center communications and high BDP in the case of inter-data-center communications. We have developed analytical models to study the performance of a Cubic connection in isolation or of a set of competing Cubic connections. Our models turn out to be precise in the low-BDP case but fail at capturing the synchronization of losses that ns-2 simulations reveal in the high-BDP case. We have complemented our simulation studies with tests in real environments: (i) an experimental network at I3S and (ii) a cloud solution available internally at Orange: Cube. Studies performed in Cube have highlighted the high correlation that might exist between network and system performance and the complexity of analyzing the performance of applications in a cloud context. Studies in the controlled environment of I3S have confirmed the existence of synchronization and enabled us to identify the conditions under which it appears. We further investigated two types of solution to combat synchronization: client-level solutions that entail modifications of TCP, and network-level solutions based on queue management, in particular PIE and CoDel.
APA, Harvard, Vancouver, ISO, and other styles
14

Palopoli, Amedeo. "Containerization in Cloud Computing: performance analysis of virtualization architectures." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14818/.

Full text
Abstract:
The growing adoption of the cloud is strongly influenced by the emergence of technologies aimed at improving the development and deployment processes of enterprise-level applications. The goal of this thesis is to analyse one of these solutions, called "containerization", and to evaluate in detail how this technology can be adopted in cloud infrastructures as an alternative to complementary solutions such as virtual machines. Until now, the traditional "virtual machine" model has been the predominant solution in the market. The important architectural difference that containers offer has led to the rapid adoption of this technology, since it greatly improves resource management and sharing and guarantees significant improvements in the provisioning time of individual instances. In the thesis, containerization is examined from both the infrastructure and the application point of view. For the first aspect, performance is analysed by comparing LXD, Docker and KVM as hypervisors for the OpenStack cloud infrastructure, while the second concerns the development of enterprise-level applications that must be deployed on a set of distributed servers. In that case, high-level services such as orchestration are needed; therefore, the performance of the following solutions is compared: Kubernetes, Docker Swarm, Apache Mesos and Cattle.
APA, Harvard, Vancouver, ISO, and other styles
15

Kanthla, Arjun Reddy. "Network Performance Improvement for Cloud Computing using Jumbo Frames." Thesis, KTH, Radio Systems Laboratory (RS Lab), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-143806.

Full text
Abstract:
The surge in cloud computing is due to its cost-effective benefits and the rapid scalability of computing resources, and the crux of this is virtualization. Virtualization technology enables a single physical machine to be shared by multiple operating systems. This increases the efficiency of the hardware and hence decreases the cost of cloud computing. However, as the load in the guest operating system increases, at some point the physical resources cannot support all the applications efficiently. Input and output services, especially network applications, must share the same total bandwidth, and this sharing can be negatively affected by virtualization overheads. Network packets may undergo additional processing and have to wait until the virtual machine is scheduled by the underlying hypervisor before reaching the final service application, such as a web server. In a virtualized environment it is not the load (due to the processing of the user data) but the network overhead that is the major problem. Modern network interface cards have enhanced network virtualization by handling IP packets more intelligently through TCP segmentation offload, interrupt coalescence, and other virtualization-specific hardware. Jumbo frames have long been proposed for their advantages in traditional environments: they increase network throughput and decrease CPU utilization. Jumbo frames can better exploit Gigabit Ethernet and offer great enhancements to the virtualized environment by utilizing the bandwidth more effectively while lowering processor overhead. This thesis shows a network performance improvement of 4.7% in a Xen virtualized environment by using jumbo frames. Additionally, the thesis examines TCP's performance in Xen and compares Xen with the same operations running on a native Linux system.
APA, Harvard, Vancouver, ISO, and other styles
16

Kaza, Bhagavathi. "Performance Evaluation of Data Intensive Computing In The Cloud." UNF Digital Commons, 2013. http://digitalcommons.unf.edu/etd/450.

Full text
Abstract:
Big data is a topic of active research in the cloud community. With increasing demand for data storage in the cloud, study of data-intensive applications is becoming a primary focus. Data-intensive applications involve high CPU usage for processing large volumes of data on the scale of terabytes or petabytes. While some research exists for the performance effect of data intensive applications in the cloud, none of the research compares the Amazon Elastic Compute Cloud (Amazon EC2) and Google Compute Engine (GCE) clouds using multiple benchmarks. This study performs extensive research on the Amazon EC2 and GCE clouds using the TeraSort, MalStone and CreditStone benchmarks on Hadoop and Sector data layers. Data collected for the Amazon EC2 and GCE clouds measure performance as the number of nodes is varied. This study shows that GCE is more efficient for data-intensive applications compared to Amazon EC2.
APA, Harvard, Vancouver, ISO, and other styles
17

Sangroya, Amit. "Towards dependability and performance benchmarking for cloud computing services." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM016/document.

Full text
Abstract:
Cloud computing models are attractive because of various benefits such as scalability, cost, and flexibility to develop new software applications. However, availability, reliability, performance and security challenges are still not fully addressed. Dependability is an important issue for the customers of cloud computing, who want guarantees in terms of reliability and availability. Many studies have investigated the dependability and performance of cloud services, ranging from job scheduling to data placement and replication, adaptive and on-demand fault tolerance, and new fault-tolerance models. However, the ad-hoc and overly simplified settings used to evaluate most cloud service fault-tolerance and performance improvement solutions pose significant challenges to the analysis and comparison of the effectiveness of these solutions. This thesis precisely addresses this problem and presents a benchmarking approach for evaluating the dependability and performance of cloud services. Designing dependability and performance benchmarks for a cloud service is a particular challenge because of the high complexity and the large amount of data processed by such a service. Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) are the three well-defined models of cloud computing. In this thesis, we focus on the PaaS model, which enables operating systems and middleware services to be delivered from a managed source over a network. We introduce a generic benchmarking architecture which is further used to build dependability and performance benchmarks for the PaaS model of cloud services.
APA, Harvard, Vancouver, ISO, and other styles
18

Němec, Petr. "Cloud computing a jeho aplikace." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-112946.

Full text
Abstract:
This diploma work is focused on Cloud Computing and its possible applications in the Czech Republic. The first part of the work is theoretical and describes an analysis of accessible sources related to its topic. Historical circumstances are given in relation to the evolution of this technology. A few definitions are also quoted, followed by the basic taxonomy of Cloud Computing and common models of use. The chapter named Cloud Computing Architecture covers the generally accepted model of this technology in detail. The theoretical part also mentions some of the services operating on this technology, and it closes with the possibilities of Cloud Computing usage from the customer's and supplier's perspectives. The practical part of the diploma work is divided into sections. The first brings the results of questionnaire research, performed by the author in the Czech Republic, focused on the usage of Cloud Computing and virtualization services. The second section is a pre-feasibility study focused on providing SaaS services in the area of long-term and safe digital data storage. Lastly, there is the author's view on the future and possible evolution of Cloud Computing technology.
APA, Harvard, Vancouver, ISO, and other styles
19

Tunc, Cihan. "Autonomic Cloud Resource Management." Diss., The University of Arizona, 2015. http://hdl.handle.net/10150/347144.

Full text
Abstract:
The power consumption of data centers and cloud systems has increased almost three times between 2007 and 2012. The traditional resource allocation methods are typically designed for high performance as the primary objective to support peak resource requirements. However, it is shown that server utilization is between 12% and 18%, while the power consumption is close to those at peak loads. Hence, there is a pressing need for devising sophisticated resource management approaches. State of the art dynamic resource management schemes typically rely on only a single resource such as core number, core speed, memory, disk, and network. There is a lack of fundamental research on methods addressing dynamic management of multiple resources and properties with the objective of allocating just enough resources for each workload to meet quality of service requirements while optimizing for power consumption. The main focus of this dissertation is to simultaneously manage power and performance for large cloud systems. The objective of this research is to develop a framework of performance and power management and investigate a general methodology for an integrated autonomic cloud management. In this dissertation, we developed an autonomic management framework based on a novel data structure, AppFlow, used for modeling current and near-term future cloud application behavior. 
We have developed the following capabilities for the performance and power management of cloud computing systems: 1) online modeling and characterization of cloud application behavior and resource requirements; 2) prediction of application behavior to proactively optimize its operations at runtime; 3) a holistic optimization methodology for performance and power using the number of cores, CPU frequency, and memory amount; and 4) autonomic cloud management to support dynamic changes in VM configurations at runtime and simultaneously optimize multiple objectives including performance, power, availability, etc. We validated our approach using the RUBiS benchmark (emulating eBay) on an IBM HS22 blade server. Our experimental results showed that our approach can lead to a significant reduction in power consumption of up to 87% when compared to a static resource allocation strategy, 72% when compared to an adaptive frequency scaling strategy, and 66% when compared to a multi-resource management strategy.
APA, Harvard, Vancouver, ISO, and other styles
20

Alarcon, Jean-Luc Bruno. "Emerging Capabilities and Firm Performance in the Cloud Computing Environment." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/493895.

Full text
Abstract:
Business Administration/Interdisciplinary
D.B.A.
New capabilities required to succeed in the new Cloud environment represent a radical change for software companies, which have to transition from selling on-the-premises software products to providing subscription-based cloud services (aka Software-as-a-Service or SaaS). While emerging SaaS vendors have led the exponential growth of the market, the traditional software industry has been disrupted. The purpose of this dissertation is to analyze which capabilities are driving the performance of software firms in today’s cloud-computing environment by drawing upon the resource-based view (RBV) of the firm. What is the optimum spending across the primary firm capabilities (e.g., service delivery, R&D, marketing and sales) to maximize financial performance? This dissertation focuses on publicly-traded SaaS companies using publicly-available information from financial databases, corporate investor relations materials, and industry research. It is comprised of two essays. The first essay is a quantitative study based on secondary data. The second essay includes an extensive literature review, an analysis of in-depth interviews of practitioners, and mini case studies. Together, the essays contribute to RBV theory and provide useful insights to help assess the quality of execution of SaaS growth strategies and improve financial planning and performance in the software industry for the cloud computing environment. Although the results come from firms in the SaaS industry, the findings from this study could cautiously generalize to firms in other emerging technology industries. The dissertation concludes with a detailed agenda for future research.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
21

Farahani, Rad Ali. "Cloud computing and its implications for organizational design and performance." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/81073.

Full text
Abstract:
Thesis (S.M. in Management Studies)--Massachusetts Institute of Technology, Sloan School of Management, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 35-36).
Cloud computing has been at the center of attention for a while now. This attention is directed towards different aspects of the concept, which concern different stakeholders, from IT companies to cloud adopters to end users of cloud services. Cloud computing is affecting the IT industry in terms of operations, product and service development, and strategy. It also affects the organizations that adopt cloud computing. Any new technology or new means of conducting business processes could have a potential effect on organizational structure as well as performance. The purpose of this thesis is to investigate the implications of adopting cloud computing for organizational design and performance. It focuses on how a lowering of the costs associated with IT might influence organizational design, and in particular the decision-making structure of organizations. Past research on the effects of IT-related costs on organizational structure has been used to explain the implications of adopting cloud computing.
by Ali Farahani Rad.
S.M.in Management Studies
APA, Harvard, Vancouver, ISO, and other styles
22

Ibidunmoye, Olumuyiwa. "Performance problem diagnosis in cloud infrastructures." Licentiate thesis, Umeå universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-120287.

Full text
Abstract:
Cloud datacenters comprise hundreds or thousands of disparate application services, each having stringent performance and availability requirements, sharing a finite set of heterogeneous hardware and software resources. The implication of such a complex environment is that the occurrence of performance problems, such as slow application response and unplanned downtime, has become the norm rather than the exception, resulting in decreased revenue, damaged reputation, and huge human effort in diagnosis. Though causes can be as varied as application issues (e.g. bugs), machine-level failures (e.g. a faulty server), and operator errors (e.g. misconfigurations), recent studies have attributed capacity-related issues, such as resource shortage and contention, as the cause of most performance problems on the Internet today. As cloud datacenters become increasingly autonomous, there is a need for automated performance diagnosis systems that can adapt their operation to reflect the changing workload and topology in the infrastructure. In particular, such systems should be able to detect anomalous performance events, uncover manifestations of capacity bottlenecks, localize actual root cause(s), and possibly suggest or actuate corrections. This thesis investigates approaches for diagnosing performance problems in cloud infrastructures. We present the outcome of an extensive survey of existing research contributions addressing performance diagnosis in diverse systems domains. We also present models and algorithms for detecting anomalies in real-time application performance and for identifying anomalous datacenter resources based on operational metrics and spatial dependency across datacenter components. Empirical evaluations of our approaches show how they can be used to improve end-user experience, service assurance and support for root-cause analysis.
Cloud Control (C0590801)
APA, Harvard, Vancouver, ISO, and other styles
23

Lindgren, Hans. "Performance Management for Cloud Services: Implementation and Evaluation of Schedulers for OpenStack." Thesis, KTH, Kommunikationsnät, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-124542.

Full text
Abstract:
To achieve the best performance out of an IaaS cloud, the resource management layer must be able to distribute its workloads optimally on the underlying infrastructure. A utilization-based scheduler can take advantage of the fact that allocated resources and actual resource usage often differ to make better-informed decisions about where to place future requests. This thesis presents the design, implementation and evaluation of an initial-placement controller that uses host utilization data as one of its inputs to help place virtual machines according to one of a number of supported management objectives. The implementation, which builds on top of the OpenStack cloud platform, deals with two different objectives, namely balanced load and energy efficiency. The thesis also discusses additional objectives and how they can be supported. A testbed and demonstration platform consisting of the aforementioned controller, a synthetic load generator and a monitoring system are built and used during evaluation of the system. Results indicate that the scheduler performs equally well for both objectives using synthetically generated request patterns of both interactive and batch-type workloads. A discussion of the current limitations of the scheduler and ways to overcome them concludes the thesis. Among the things discussed are how the rate at which host utilization data is collected limits the performance of the scheduler, and under which circumstances dynamic placement of virtual machines must be used to complement utilization-based scheduling to avoid the risk of overloading the cloud.
APA, Harvard, Vancouver, ISO, and other styles
24

López, Huguet Sergio. "Elastic, Interoperable and Container-based Cloud Infrastructures for High Performance Computing." Doctoral thesis, Universitat Politècnica de València, 2021. http://hdl.handle.net/10251/172327.

Full text
Abstract:
Thesis by compendium (Tesis por compendio)
Scientific applications generally imply a variable and unpredictable computational workload that institutions must address by dynamically adjusting the allocation of resources to their different computational needs. Scientific applications may require high capacity, e.g. the concurrent usage of computational resources for processing several independent jobs (High Throughput Computing, HTC), or high capability, by means of high-performance resources for solving complex problems (High Performance Computing, HPC). The computational resources required by this type of application usually have a very high cost that may exceed the availability of the institution's resources, or they may not be well adapted to the scientific applications, especially in the case of infrastructures prepared for the execution of HPC applications. Indeed, the different parts that compose an application may require different types of computational resources. Nowadays, cloud service platforms have become an efficient solution to meet the needs of HTC applications, as they provide a wide range of computing resources accessible on demand. For this reason, the number of hybrid computational infrastructures has increased in recent years. Hybrid computational infrastructures combine infrastructures hosted on cloud platforms with computational resources hosted at the institutions themselves, known as on-premise infrastructures. As scientific applications can be processed on different infrastructures, application delivery has become a key issue. Nowadays, containers are probably the most popular technology for application delivery, as they ease reproducibility, traceability, versioning, isolation, and portability. The main objective of this thesis is to provide an architecture and a set of services to build up hybrid processing infrastructures that fit the needs of different workloads.
Hence, the thesis considers aspects such as elasticity and federation: it explores the use of vertical and horizontal elasticity by developing a proof of concept that provides vertical elasticity on top of an elastic cloud architecture for data analytics. Afterwards, an elastic cloud architecture comprising heterogeneous computational resources was implemented for medical imaging processing, using multiple processing queues for jobs with different requirements. The development of this architecture was framed in a collaboration with the company QUIBIM. In the last part of the thesis, the previous work evolved into the design and implementation of an elastic, multi-site and multi-tenant cloud architecture for medical image processing within the framework of the European project PRIMAGE. This architecture uses distributed storage and integrates external services for authentication and authorization based on OpenID Connect (OIDC). The tool kube-authorizer was developed to provide access control to the resources of the processing infrastructure in an automated way, using the information obtained in the authentication process to create the corresponding policies and roles. Finally, another tool, hpc-connector, was developed to enable the integration of HPC processing infrastructures into cloud infrastructures without requiring modifications to either the HPC infrastructure or the cloud architecture. It should be noted that, during this thesis, different contributions were made to open-source container and job management technologies, by developing open-source tools and components and implementing recipes for the automated configuration of the different architectures designed, from a DevOps perspective. The results obtained support the feasibility of combining vertical and horizontal elasticity to implement deadline-based QoS policies, as well as the feasibility of the federated authentication model to combine public and on-premise clouds.
López Huguet, S. (2021). Elastic, Interoperable and Container-based Cloud Infrastructures for High Performance Computing [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/172327
TESIS
Compendio
APA, Harvard, Vancouver, ISO, and other styles
25

Celaya, Tracy A. "Cloud-Based Computing and human resource management performance: A Delphi study." Thesis, University of Phoenix, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10004286.

Full text
Abstract:

The purpose of this qualitative study, with a modified Delphi research design, was to understand why human resource (HR) leaders are slow to implement Cloud-based technologies and to identify how Cloud-Based Computing influences human resource management (HRM), HR effectiveness, and potentially the overall performance of the organization. Business executives and HR leaders acknowledge the effect of technology on business processes and strategies, and the leader's influence on technology implementation and adoption. Cloud-Based Computing is fast becoming the standard for conducting HR processes, and HR leaders must be prepared to implement the change effectively. Study findings revealed the characteristics demonstrated by HR leaders who successfully implement cloud technology, best practices for successful implementation, factors championing and challenging Cloud-Based Computing adoption, and the effects on HRM and organizational performance that result from using Cloud-Based Computing. The outcomes of this study may provide the foundation of a model for implementing Cloud-Based Computing and of a leadership model including the characteristics of technology early adopters in HR; they may also identify factors impeding adoption and assist HR leaders in creating effective change-management strategies for adopting and implementing Cloud-Based Computing. The findings and recommendations from this study will enable HR professionals and leaders to make informed decisions on the adoption of Cloud-Based Computing and improve the effectiveness, efficiency, and strategic capability of HR.

APA, Harvard, Vancouver, ISO, and other styles
26

Struhar, Vaclav. "Improving Soft Real-time Performance of Fog Computing." Licentiate thesis, Mälardalens högskola, Inbyggda system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55679.

Full text
Abstract:
Fog computing is a distributed computing paradigm that brings data processing from remote cloud data centers into the vicinity of the edge of the network. The computation is performed closer to the source of the data, which reduces the time unpredictability of cloud computing that stems from (i) computation in shared multi-tenant remote data centers, and (ii) long-distance data transfers between the source of the data and the data centers. Computation in fog computing thus provides fast response times and enables latency-sensitive applications. However, industrial systems require time-bounded response times, i.e., real-time (RT) guarantees: the correctness of such systems depends not only on the logical results of the computations but also on the physical time instant at which these results are produced. Time-bounded responses in fog computing depend on two main aspects: computation and communication. In this thesis, we explore both aspects, targeting soft RT applications in fog computing, in which the usefulness of the produced computational results degrades when real-time requirements are violated. With regard to computation, we provide a systematic literature survey on novel lightweight RT container-based virtualization that ensures spatial and temporal isolation of co-located applications. Subsequently, we utilize a mechanism enabling RT container-based virtualization and propose a solution for orchestrating RT containers in a distributed environment. Concerning the communication aspect, we propose a solution for dynamic bandwidth distribution in virtualized networks.
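The dynamic bandwidth distribution mentioned in this abstract can be illustrated with a classic max-min fair allocation: flows that request less than their fair share keep their request, and the leftover capacity is redistributed among the remaining flows. This is a textbook sketch of the general idea, not the specific algorithm proposed in the thesis.

```python
def distribute_bandwidth(total_mbps, demands):
    """Max-min fair split of link bandwidth among flows.

    total_mbps: link capacity to distribute
    demands: dict of flow name -> requested Mbps
    Returns a dict of flow name -> allocated Mbps.
    """
    alloc = {}
    remaining = dict(demands)
    capacity = total_mbps
    while remaining:
        fair = capacity / len(remaining)
        # Flows asking for no more than the fair share are fully satisfied.
        satisfied = {f: d for f, d in remaining.items() if d <= fair}
        if not satisfied:
            # Everyone left wants more than the fair share: split evenly.
            for f in remaining:
                alloc[f] = fair
            return alloc
        for f, d in satisfied.items():
            alloc[f] = d
            capacity -= d
            del remaining[f]
    return alloc
```

For example, on a 100 Mbps link with demands of 20, 30 and 80 Mbps, the first two flows get their full request and the third receives the remaining 50 Mbps.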
APA, Harvard, Vancouver, ISO, and other styles
27

Zhang, Dean. "Data Parallel Application Development and Performance with Azure." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1308168930.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Dawoud, Wesam. "Scalability and performance management of internet applications in the cloud." Phd thesis, Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6818/.

Full text
Abstract:
Cloud computing is a model for enabling on-demand access to a shared pool of computing resources. With virtually limitless on-demand resources, a cloud environment enables a hosted Internet application to cope quickly with increases in its workload. However, the overhead of provisioning resources exposes the Internet application to periods of under-provisioning and performance degradation. Moreover, performance interference, due to consolidation in the cloud environment, complicates the performance management of Internet applications. In this dissertation, we propose two approaches to mitigate the impact of the resource provisioning overhead. The first approach employs control theory to scale resources vertically and cope quickly with workload changes. This approach assumes that the provider has knowledge of and control over the platform running in the virtual machines (VMs), which limits it to Platform as a Service (PaaS) and Software as a Service (SaaS) providers. The second approach is a customer-side one that deals with horizontal scalability in an Infrastructure as a Service (IaaS) model. It addresses the trade-off between cost and performance with a multi-goal optimization solution that finds the scaling thresholds achieving the highest performance with the lowest increase in cost. Moreover, the second approach employs a proposed time-series forecasting algorithm to scale the application proactively and avoid under-utilization periods. Furthermore, to mitigate the impact of interference on Internet application performance, we developed a system that finds and eliminates the VMs suffering from performance interference. The developed system is a lightweight solution that does not require provider involvement. To evaluate our approaches and the designed algorithms at large scale, we developed a simulator called ScaleSim.
In the simulator, we implemented scalability components acting as the scalability components of Amazon EC2. The current scalability implementation in Amazon EC2 is used as a reference point for evaluating the improvement in scalable application performance. ScaleSim is fed with realistic models of the RUBiS benchmark extracted from the real environment. The workload is generated from the access logs of the 1998 World Cup website. The results show that optimizing the scaling thresholds and adopting proactive scaling can mitigate 88% of the impact of the resource provisioning overhead, with only a 9% increase in cost.
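The threshold-plus-forecast idea in this abstract can be sketched as follows: a naive moving-average forecast of the workload drives a proactive scale-out/scale-in decision before the load actually arrives. The window size, the thresholds and the linear capacity model are illustrative assumptions, not the optimized thresholds or the forecasting algorithm from the dissertation.

```python
def forecast_next(history, window=3):
    """Naive moving-average forecast of the next workload sample."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def scale_decision(history, capacity_per_vm, vms, upper=0.8, lower=0.3, window=3):
    """Decide proactively whether to add or remove a VM.

    Compares the forecast utilization of the current fleet against
    scale-out/scale-in thresholds and returns the new fleet size.
    """
    predicted = forecast_next(history, window)
    utilization = predicted / (capacity_per_vm * vms)
    if utilization > upper:
        return vms + 1   # scale out before the overload actually hits
    if utilization < lower and vms > 1:
        return vms - 1   # scale in to cut cost during low demand
    return vms
```

With a capacity of 100 requests/s per VM, a rising history of [80, 90, 100] forecasts 90 requests/s, i.e. 90% utilization of a single VM, which exceeds the 80% threshold and triggers a scale-out.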
APA, Harvard, Vancouver, ISO, and other styles
29

Tamanampudi, Monica, and Mohith Kumar Reddy Sannareddy. "Performance Optimization of a Service in Virtual and Non - Virtual Environment." Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18462.

Full text
Abstract:
In recent times, Cloud Computing has become an accessible technology that makes it possible to provide online services to end users through a network of remote servers. The increase in remote servers, and in the resources allocated to them, can lead to performance degradation of a service. In such a case, the environment in which the service runs plays a significant role in providing better performance and adds to the Quality of Service. This thesis focuses on bare-metal and Linux container environments, using request response time as the performance metric to determine QoS. To improve request response time, the platforms are customized with a real-time kernel and compiler optimization flags to optimize the performance of a service. UDP packets are served to the service running in these customized environments. The experiments performed conclude that bare metal with a real-time kernel and the level-3 compiler optimization flag gives the best performance for the service.
APA, Harvard, Vancouver, ISO, and other styles
30

Silva, Jefferson de Carvalho. "A framework for building component-based applications on a cloud computing platform for high performance computing services." Universidade Federal do Ceará, 2016. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=16543.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Developing High Performance Computing (HPC) applications that access the available computing resources optimally and at a higher level of abstraction is a challenge for many scientists. To address this problem, we present a proposal for a component-oriented computing cloud called HPC Shelf, on which HPC applications run, and the SAFe framework, a front-end for creating applications on HPC Shelf and the author's main contribution. SAFe is based on Scientific Workflow Management Systems (SWMS) and allows the specification of computational solutions formed by components to solve problems specified by the expert user through a high-level interface. For that purpose, it implements SAFeSWL, an architectural and orchestration description language for describing scientific workflows. Compared with other SWMS alternatives, besides freeing expert users from concerns about the construction of parallel and efficient computational solutions from the components offered by the cloud, SAFe integrates with a system of contextual contracts aligned with a system of dynamic discovery (resolution) of components. In addition, SAFeSWL allows explicit control of the life-cycle stages of components (resolution, deployment, instantiation and execution) through embedded operators, aimed at optimizing the use of cloud resources and minimizing the overall execution cost of computational solutions (workflows). Montage and Map/Reduce are the case studies applied for demonstration, evaluation and validation of the particular features of SAFe in building HPC applications for the HPC Shelf platform.
APA, Harvard, Vancouver, ISO, and other styles
31

van, Ketwich Willem. "IT Governance of Cloud Computing: Performance Measures using an IT Outsourcing Perspective." Thesis, University of Melbourne (Australia), 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1527429.

Full text
Abstract:

With the advent of cloud computing and the success of the cloud computing industry, organisations are beginning to adopt this service model and technology at an increasing rate. As the rate and level of use increases, organisations are faced with how best to govern these investments and obtain maximum benefit from the services offered by providers. This includes measuring the performance of these services, the corresponding organisational performance and the associated business value generated. In investigating these areas, this study compares cloud computing and IT outsourcing. It is found that while cloud measures relate, to a great extent, to the operational level of an organisation, IT outsourcing measures are concerned more with the strategic level. This highlights that cloud computing lacks strategic measures and that measures from IT outsourcing may be adopted to fill this gap.

APA, Harvard, Vancouver, ISO, and other styles
32

Coutinho, Emanuel Ferreira. "FOLE: A Conceptual Framework for Elasticity Performance Analysis in Cloud Computing Environments." Universidade Federal do Ceará, 2014. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=13197.

Full text
Abstract:
Currently, many customers and providers use resources of Cloud Computing environments, such as processing and storage, for their applications and services. Given the ease of use of the pay-per-use model, it is natural that the number of users, and their workloads, also grows. As a consequence, providers must expand their resources and maintain the agreed level of quality for customers, or else break the Service Level Agreement (SLA) and incur the resulting penalties. With the increase in computational resource usage, a key feature of Cloud Computing has become quite attractive: elasticity. Elasticity can be defined as how well a computational cloud adapts to variations in its workload through resource provisioning and deprovisioning. Due to the limited availability of information regarding the configuration of experiments, it is in general not trivial to implement elasticity concepts, much less to apply them in cloud environments. Furthermore, the way of measuring cloud elasticity is not obvious, and there is not yet a standard for this task; its evaluation can be performed in different ways owing to the many technologies and strategies for providing cloud elasticity. A common aspect of elasticity performance analysis is the use of environment resources, such as CPU and memory, which, even without a specific metric, allows elasticity to be assessed indirectly. In this context, this work proposes FOLE, a conceptual framework for conducting performance analysis of elasticity in Cloud Computing environments in a systematic, flexible and reproducible way. To support the framework, we propose a set of metrics specific to elasticity and metrics for its indirect measurement. For the measurement of elasticity in Cloud Computing, we propose metrics based on concepts from Physics, such as strain and stress, and from Microeconomics, such as the Price Elasticity of Demand.
Additionally, we propose metrics based on the times of resource allocation and deallocation operations, and on the resources used, to support the measurement of elasticity. For verification and validation of the proposal, we performed two experiments, one in a private cloud and the other in a hybrid cloud, using microbenchmarks and a classic scientific application on an infrastructure designed around concepts of Autonomic Computing. Through these experiments, FOLE was validated in its activities, allowing the systematization of an elasticity performance analysis. The results show that it is possible to assess the elasticity of a Cloud Computing environment satisfactorily using specific metrics based on other areas of knowledge, complemented by metrics related to operation times and resources.
Currently, many customers and providers use resources of Cloud Computing environments, such as processing and storage, for their applications and services. Given the ease of use under the pay-per-use model, it is natural that the number of users and their respective workloads also grow. As a consequence, providers must expand their resources and maintain the agreed quality level, under penalty of Service Level Agreement (SLA) violations and the resulting fines. With this increase in the use of computational resources, one of the main characteristics of Cloud Computing has become very attractive: elasticity. Elasticity can be defined as how well a computational cloud adapts to variations in its workload by provisioning and deprovisioning resources. Because little information is available about the configuration of experiments, it is generally not trivial to implement elasticity concepts, let alone apply them in cloud environments. Moreover, how to measure elasticity is not obvious, and approaches vary widely; there is still no standard for this task, and its evaluation can be carried out in different ways owing to the many technologies and strategies for providing elasticity. A common aspect of elasticity performance evaluation is the use of environment resources, such as CPU and memory, and even without a specific elasticity metric, an indirect evaluation can be obtained. In this context, this work proposes FOLE, a conceptual framework for carrying out elasticity performance analysis of computational clouds in a systematic, flexible, and reproducible way. To support the framework, specific elasticity metrics and metrics for its indirect measurement were proposed.
For measuring elasticity in Cloud Computing, we propose metrics based on concepts from Physics, such as strain and stress, and from Microeconomics, such as the Price Elasticity of Demand. Additionally, metrics based on resource allocation and deallocation operation times, and on the usage of those resources, were proposed to support the measurement of elasticity. To verify and validate the proposal, two case studies were carried out, one in a private cloud and the other in a hybrid cloud, with experiments designed using microbenchmarks and a classic scientific application, executed on an infrastructure based on Autonomic Computing concepts. Through these experiments, FOLE's activities were validated, enabling a systematic elasticity performance analysis. The results show that it is possible to satisfactorily assess the elasticity of a Cloud Computing environment using specific metrics based on concepts from other areas of knowledge, complemented by metrics related to operation times and resources.
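The microeconomics-inspired metric mentioned in this abstract, an analogue of the Price Elasticity of Demand, can be illustrated with a minimal sketch. The function name and the numbers are invented for illustration; the thesis's exact formulation is not given here.

```python
def demand_elasticity(workload_before, workload_after,
                      resources_before, resources_after):
    """Analogue of the Price Elasticity of Demand: relative change in
    allocated resources divided by relative change in workload."""
    delta_w = (workload_after - workload_before) / workload_before
    delta_r = (resources_after - resources_before) / resources_before
    if delta_w == 0:
        raise ValueError("workload did not change; elasticity undefined")
    return delta_r / delta_w

# A workload doubling (+100%) answered by a 50% resource increase
# yields an elasticity of 0.5, i.e. sub-proportional scaling.
print(demand_elasticity(100, 200, 4, 6))
```

A value near 1 would indicate that resources track workload proportionally; values well below 1 suggest under-provisioning under load.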
APA, Harvard, Vancouver, ISO, and other styles
33

Nigro, Michele. "Progettazione di un cluster sul cloud con il framework HPC." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20316/.

Full text
Abstract:
In the field of distributed computing, Microsoft HPC Pack provides an important tool for managing computational resources efficiently, orchestrating the scheduling of work units within the infrastructure. HPC natively supports integration with the Microsoft Azure cloud through well-defined networking and virtualization strategies. After a brief presentation of Prometeia, where the project took place, the Microsoft technologies used along the way are presented in detail. A presentation of the project follows, which consists of two steps: the first is the construction of the application and HPC infrastructure on the Azure cloud through an automated template (creation of virtual machines and a virtual network, installation of the services and of HPC); the second step is the development of an application that allows the user, according to their needs, to create and delete compute resources from the infrastructure through purpose-built commands. This solution brings time and cost advantages both over on-premise scenarios, since buying, maintaining, and upgrading physical servers is no longer required, and over more static cloud solutions, in which compute resources sitting idle for long periods produce much higher costs. The final part of the thesis focuses on the analysis of the economic benefits the presented solution brings, showing in detail the differences between the costs of the various solutions offered by Azure.
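The economic argument above (pay only while compute nodes are allocated) can be sketched with a toy cost model. The hourly rates and utilisation figures are invented, not Azure's actual pricing.

```python
# Toy monthly cost comparison: an always-on cluster versus one whose
# nodes are created before a run and deleted afterwards.
HOURS_PER_MONTH = 730

def static_cluster_cost(nodes, rate_per_hour):
    # Nodes stay allocated all month, busy or not.
    return nodes * rate_per_hour * HOURS_PER_MONTH

def elastic_cluster_cost(nodes, rate_per_hour, busy_hours):
    # Only the hours during which nodes exist are billed.
    return nodes * rate_per_hour * busy_hours

print(static_cluster_cost(10, 0.50))        # 3650.0
print(elastic_cluster_cost(10, 0.50, 160))  # 800.0
```

Under these assumed figures, a cluster busy 160 hours a month costs less than a quarter of the always-on equivalent, which is the saving the thesis quantifies against real Azure prices.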
APA, Harvard, Vancouver, ISO, and other styles
34

Lakew, Ewnetu Bayuh. "Autonomous cloud resource provisioning : accounting, allocation, and performance control." Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-107955.

Full text
Abstract:
The emergence of large-scale Internet services coupled with the evolution of computing technologies such as distributed systems, parallel computing, utility computing, grid, and virtualization has fueled a movement toward a new resource provisioning paradigm called cloud computing. The main appeal of cloud computing lies in its ability to provide a shared pool of infinitely scalable computing resources for cloud services, which can be quickly provisioned and released on-demand with minimal effort. The rapidly growing interest in cloud computing from both the public and industry together with the rapid expansion in scale and complexity of cloud computing resources and the services hosted on them have made monitoring, controlling, and provisioning cloud computing resources at runtime into a very challenging and complex task. This thesis investigates algorithms, models and techniques for autonomously monitoring, controlling, and provisioning the various resources required to meet services’ performance requirements and account for their resource usage. Quota management mechanisms are essential for controlling distributed shared resources so that services do not exceed their allocated or paid-for budget. Appropriate cloud-wide monitoring and controlling of quotas must be exercised to avoid over- or under-provisioning of resources. To this end, this thesis presents new distributed algorithms that efficiently manage quotas for services running across distributed nodes. Determining the optimal amount of resources to meet services’ performance requirements is a key task in cloud computing. However, this task is extremely challenging due to multi-faceted issues such as the dynamic nature of cloud environments, the need for supporting heterogeneous services with different performance requirements, the unpredictable nature of services’ workloads, the non-triviality of mapping performance measurements into resources, and resource shortages. 
Models and techniques that can predict the optimal amount of resources needed to meet service performance requirements at runtime irrespective of variations in workloads are proposed. Moreover, different service differentiation schemes are proposed for managing temporary resource shortages due to, e.g., flash crowds or hardware failures. In addition, the resources used by services must be accounted for in order to properly bill customers. Thus, monitoring data for running services should be collected and aggregated to maintain a single global state of the system that can be used to generate a single bill for each customer. However, collecting and aggregating such data across geographical distributed locations is challenging because the management task itself may consume significant computing and network resources unless done with care. A consistency and synchronization mechanism that can alleviate this task is proposed.
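The quota-management idea described in this abstract, controlling a shared budget across distributed nodes without contacting a central point on every operation, can be sketched with a toy single-process model. Class and parameter names are invented; the thesis's actual algorithms are fully distributed.

```python
class QuotaCoordinator:
    """Holds the global quota and hands out local slices so that nodes
    can admit most requests without a round trip to the coordinator."""
    def __init__(self, global_quota, slice_size):
        self.remaining = global_quota
        self.slice_size = slice_size

    def grant_slice(self):
        grant = min(self.slice_size, self.remaining)
        self.remaining -= grant
        return grant

class NodeQuota:
    """Per-node view: consumes from its local slice, refilling on demand."""
    def __init__(self, coordinator):
        self.coordinator = coordinator
        self.local = 0

    def consume(self, amount):
        if self.local < amount:                 # refill from coordinator
            self.local += self.coordinator.grant_slice()
        if self.local < amount:
            return False                        # global quota exhausted
        self.local -= amount
        return True

coord = QuotaCoordinator(global_quota=100, slice_size=40)
node = NodeQuota(coord)
print(node.consume(30), coord.remaining)        # True 60
```

The trade-off the slice size controls is over- versus under-provisioning: large slices reduce coordination traffic but strand unused quota on idle nodes.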
APA, Harvard, Vancouver, ISO, and other styles
35

Smith, James William. "Investigating performance and energy efficiency on a private cloud." Thesis, University of St Andrews, 2014. http://hdl.handle.net/10023/6540.

Full text
Abstract:
Organizations are turning to private clouds due to concerns about security, privacy and administrative control. They are attracted by the flexibility and other advantages of cloud computing but are wary of breaking decades-old institutional practices and procedures. Private Clouds can help to alleviate these concerns by retaining security policies, in-organization ownership and providing increased accountability when compared with public services. This work investigates how it may be possible to develop an energy-aware private cloud system able to adapt workload allocation strategies so that overall energy consumption is reduced without loss of performance or dependability. Current literature focuses on consolidation as a method for improving the energy-efficiency of cloud systems, but if consolidation is undesirable due to the performance penalties, dependability or latency then another approach is required. Given a private cloud in which the machines are constant, with no machines being powered down in response to changing workloads, and a set of virtual machines to run, each with different characteristics and profiles, it is possible to mix the virtual machine placement to reduce energy consumption or improve performance of the VMs. Through a series of experiments this work demonstrates that workload mixes can have an effect on energy consumption and the performance of applications running inside virtual machines. These experiments took the form of measuring the performance and energy usage of applications running inside virtual machines. The arrangement of these virtual machines on their hosts was varied to determine the effect of different workload mixes. The insights from these experiments have been used to create a proof-of-concept custom VM Allocator system for the OpenStack private cloud computing platform.
Using CloudMonitor, a lightweight monitoring application to gather data on system performance and energy consumption, the implementation uses a holistic view of the private cloud state to inform workload placement decisions.
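The workload-mixing idea above can be sketched as a small brute-force placement search. The per-host energy model and all numbers are invented for illustration; the thesis derives its figures from measurement, not from a formula like this.

```python
import itertools

# Assumed energy model: always-on hosts (matching the abstract's fixed
# machine pool), a per-VM load term, and an interference penalty when
# two CPU-bound VMs share a host. All coefficients are illustrative.
def host_energy(vms):
    base = 100 + 20 * len(vms)
    cpu_bound = sum(1 for v in vms if v == "cpu")
    return base + 15 * max(0, cpu_bound - 1)

def best_placement(vms, n_hosts):
    """Exhaustively try every assignment and keep the cheapest mix."""
    best = None
    for assign in itertools.product(range(n_hosts), repeat=len(vms)):
        hosts = [[] for _ in range(n_hosts)]
        for vm, h in zip(vms, assign):
            hosts[h].append(vm)
        energy = sum(host_energy(h) for h in hosts)
        if best is None or energy < best[0]:
            best = (energy, hosts)
    return best

energy, hosts = best_placement(["cpu", "cpu", "io", "io"], 2)
print(energy, hosts)   # the optimum splits the CPU-bound VMs apart
```

Under this model, mixing a CPU-bound VM with an I/O-bound one on each host avoids the contention penalty, which is the effect the experiments observe empirically.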
APA, Harvard, Vancouver, ISO, and other styles
36

Jamaliannasrabadi, Saba. "High Performance Computing as a Service in the Cloud Using Software-Defined Networking." Bowling Green State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1433963448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Kehrer, Stefan [Verfasser], and Wolfgang [Akademischer Betreuer] Blochinger. "Elastic parallel systems for high performance cloud computing / Stefan Kehrer ; Betreuer: Wolfgang Blochinger." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2020. http://d-nb.info/121990578X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

McConnell, Aaron. "End-to-end performance monitoring and SLA-compliant resource optimisation in cloud computing." Thesis, University of Ulster, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.588494.

Full text
Abstract:
Cloud computing provides a highly scalable, distributed infrastructure on which applications and data can be hosted. Hosted applications must perform within specified constraints, despite running over an extremely dynamic infrastructure. This dynamic behaviour is created, on one hand, by the applications running within virtual machines hosted on heterogeneous servers, and on the other by the constantly fluctuating demand on the network link between the virtual machine and the user of the application. This elasticity of infrastructure, and the fluctuating demand placed on it, makes it difficult to ensure QoS for the application. It is no longer sufficient to measure the performance of the various host servers within a data centre; it is now also required to measure the performance of the application where it is most critical: with the user of the application. The efficiency of application hosting is also difficult to ensure. Optimisation of resource allocation should minimise the hosting cost while ensuring the delivery of the application to the end-user is acceptable. This thesis details a programme of research aimed at designing, implementing and using a distributed, highly scalable system for monitoring and ensuring end-to-end, SLA-compliant performance of virtual private cloud applications, while minimising the overall hosting cost for the cloud. The research undertaken and described within this document reviews cloud computing, the underlying virtualisation technologies and optimisation techniques, and presents three models with completed prototypes. The first model is a cloud resource monitoring methodology, which acquires real-time metrics from live applications and hosts running in a virtual private cloud test-bed environment. This model is an adaptation of the Analytical Hierarchy Process (AHP), aimed at providing an SLA-compliant scoring mechanism for all criteria to be considered when measuring application performance.
The model finally allows an overall score to be calculated by which the application's performance is assessed at any potential host where it may be placed. The second model is a cloud-centric network monitoring methodology, which analyses the quality of the network link between any two IP addresses. This system provides network quality information to be used in the assessment of the application's (potential) performance between any given host location and the end-user. The third model is an optimiser for SLA-compliant virtual machine placement within virtual private clouds, where the hosting cost is minimised. In summary, the key outputs of this thesis are: 1) a prototype vendor-agnostic monitoring system for virtual private clouds, which provides the ability to acquire any metric relevant to the successful delivery of an application to the end-user, and provides information regarding real-time SLA-compliance based on performance thresholds set for those metrics; 2) a live system for projecting SLA-compliant application performance on any given host; 3) a hierarchical framework for dynamically defining application SLAs, with any conceivable, acquirable metric; 4) an optimisation engine that places applications on host servers with the goal of reducing the hosting cost across all host servers, where the hosting cost for each application on each server is provided by the monitoring system; 5) a software-based, virtualised tool for automatic monitoring of network performance, with a remote data logging component; 6) an agent-based system for automatically and periodically acquiring VM and host metrics from a VMware virtual private cloud. The thesis presents an overall fully-functional system incorporating all these aspects.
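The AHP-style scoring mechanism of the first model can be illustrated with a minimal sketch: each metric is normalized against its SLA threshold and combined with per-criterion weights into one score per candidate host. The weights, metrics, and thresholds here are invented; the thesis's actual criteria hierarchy is richer.

```python
def sla_score(metrics, thresholds, weights):
    """Weighted SLA-headroom score for one candidate placement.

    metrics/thresholds map each criterion (lower is better, e.g.
    latency) to its measured value and SLA limit; weights sum to 1.
    Returns 0 when every metric sits at or over its limit, and values
    approaching 1 as headroom grows.
    """
    score = 0.0
    for name, weight in weights.items():
        headroom = max(0.0, 1.0 - metrics[name] / thresholds[name])
        score += weight * headroom
    return score

m = {"latency_ms": 80, "cpu_util": 0.5}    # measured at candidate host
t = {"latency_ms": 200, "cpu_util": 1.0}   # SLA thresholds
w = {"latency_ms": 0.6, "cpu_util": 0.4}   # AHP-derived weights
print(round(sla_score(m, t, w), 2))
```

Comparing this score across hosts gives the ranking the optimiser could consume; in real AHP the weights themselves come from pairwise comparisons rather than being set directly.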
APA, Harvard, Vancouver, ISO, and other styles
39

Leite, Alessandro Ferreira. "A user-centered and autonomic multi-cloud architecture for high performance computing applications." reponame:Repositório Institucional da UnB, 2014. http://repositorio.unb.br/handle/10482/18262.

Full text
Abstract:
Doctoral thesis (Tese de doutorado), Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2014.
A computação em nuvem tem sido considerada como uma opção para executar aplicações de alto desempenho. Entretanto, enquanto as plataformas de alto desempenho tradicionais como grid e supercomputadores oferecem um ambiente estável quanto à falha, desempenho e número de recursos, a computação em nuvem oferece recursos sob demanda, geralmente com desempenho imprevisível à baixo custo financeiro. Além disso, em ambiente de nuvem, as falhas fazem parte da sua normal operação. No entanto, as nuvens podem ser combinadas, criando uma federação, para superar os limites de uma nuvem muitas vezes com um baixo custo para os usuários. A federação de nuvens pode ajudar tanto os provedores quanto os usuários das nuvens a atingirem diferentes objetivos tais como: reduzir o tempo de execução de uma aplicação, reduzir o custo financeiro, aumentar a disponibilidade do ambiente, reduzir o consumo de energia, entre outros. Por isso, a federação de nuvens pode ser uma solução elegante para evitar o sub-provisionamento de recursos ajudando os provedores a reduzirem os custos operacionais e a reduzir o número de recursos ativos, que outrora ficariam ociosos consumindo energia, por exemplo. No entanto, a federação de nuvens aumenta as opções de recursos disponíveis para os usuários, requerendo, em muito dos casos, conhecimento em administração de sistemas ou em computação em nuvem, bem como um tempo considerável para aprender sobre as opções disponíveis. Neste contexto, surgem algumas questões, tais como: (a) qual dentre os recursos disponíveis é apropriado para uma determinada aplicação? (b) como os usuários podem executar suas aplicações na nuvem e obter um desempenho e um custo financeiro aceitável, sem ter que modificá-las para atender as restrições do ambiente de nuvem? (c) como os usuários não especialistas em nuvem podem maximizar o uso da nuvem, sem ficar dependente de um provedor? 
(d) como os provedores podem utilizar a federação para reduzir o consumo de energia dos datacenters e ao mesmo tempo atender os acordos de níveis de serviços? A partir destas questões, este trabalho apresenta uma solução para consolidação de aplicações em nuvem federalizadas considerando os acordos de serviços. Nossa solução utiliza um sistema multi-agente para negociar a migração das máquinas virtuais entres as nuvens. Simulações mostram que nossa abordagem pode reduzir em até 46% o consumo de energia e atender os requisitos de qualidade. Nós também desenvolvemos e avaliamos uma solução para executar uma aplicação de bioinformática em nuvens federalizadas, a custo zero. Nesse caso, utilizando a federação, conseguimos diminuir o tempo de execução da aplicação em 22,55%, considerando o seu tempo de execução na melhor nuvem. Além disso, este trabalho apresenta uma arquitetura chamada Excalibur, que possibilita escalar a execução de aplicações comuns em nuvem. Excalibur conseguiu escalar automaticamente a execução de um conjunto de aplicações de bioinformática em até 11 máquinas virtuais, reduzindo o tempo de execução em 63% e o custo financeiro em 84% quando comparado com uma configuração definida pelos usuários. Por fim, este trabalho apresenta um método baseado em linha de produto de software para lidar com as variabilidades dos serviços oferecidos por nuvens de infraestrutura (IaaS), e um sistema que utiliza deste processo para configurar o ambiente e para lidar com falhas de forma automática. O nosso método utiliza modelo de feature estendido com atributos para descrever os recursos e para selecioná-los com base nos objetivos dos usuários. Experimentos realizados com dois provedores diferentes mostraram que utilizando o nosso processo, os usuários podem executar as suas aplicações em um ambiente de nuvem federalizada, sem conhecer as variabilidades e limitações das nuvens. 
_______________________________________________________________________________________ ABSTRACT
Cloud computing has been seen as an option to execute high performance computing (HPC) applications. While traditional HPC platforms such as grids and supercomputers offer a stable environment in terms of failures, performance, and number of resources, cloud computing offers on-demand resources generally with unpredictable performance at low financial cost. Furthermore, in a cloud environment, failures are part of its normal operation. To overcome the limits of a single cloud, clouds can be combined, forming a cloud federation, often with minimal additional costs for the users. A cloud federation can help both cloud providers and cloud users to achieve their goals, such as to reduce the execution time, to achieve minimum cost, to increase availability, and to reduce power consumption, among others. Hence, cloud federation can be an elegant solution to avoid over-provisioning, thus reducing the operational costs in an average load situation, and removing resources that would otherwise remain idle and waste power, for instance. However, cloud federation increases the range of resources available to the users. As a result, cloud or system administration skills may be demanded from the users, as well as a considerable time to learn about the available options. In this context, some questions arise, such as: (a) which cloud resource is appropriate for a given application? (b) how can the users execute their HPC applications with acceptable performance and financial costs, without needing to re-engineer the applications to fit clouds’ constraints? (c) how can non-cloud specialists maximize the features of the clouds, without being tied to a cloud provider? and (d) how can the cloud providers use the federation to reduce power consumption of the clouds, while still being able to give service-level agreement (SLA) guarantees to the users? Motivated by these questions, this thesis presents an SLA-aware application consolidation solution for cloud federation.
Using a multi-agent system (MAS) to negotiate virtual machine (VM) migrations between the clouds, simulation results show that our approach could reduce up to 46% of the power consumption, while trying to meet performance requirements. Using the federation, we developed and evaluated an approach to execute a huge bioinformatics application at zero cost. Moreover, we could decrease the execution time by 22.55% over the best single-cloud execution. In addition, this thesis presents a cloud architecture called Excalibur to auto-scale cloud-unaware applications. Executing a genomics workflow, Excalibur could seamlessly scale the applications up to 11 virtual machines, reducing the execution time by 63% and the cost by 84% when compared to a user’s configuration. Finally, this thesis presents a software product line engineering (SPLE) method to handle the commonality and variability of infrastructure-as-a-service (IaaS) clouds, and an autonomic multi-cloud architecture that uses this method to configure and to deal with failures autonomously. The SPLE method uses an extended feature model (EFM) with attributes to describe the resources and to select them based on the users’ objectives. Experiments performed with two different cloud providers show that, using the proposed method, the users could execute their application on a federated cloud environment, without needing to know the variability and constraints of the clouds. _______________________________________________________________________________________ RÉSUMÉ
Le cloud computing a été considéré comme une option pour exécuter des applications de calcul haute performance (HPC). Bien que les plateformes traditionnelles de calcul haute performance telles que les grilles et les supercalculateurs offrent un environnement stable du point de vue des défaillances, des performances, et de la taille des ressources, le cloud computing offre des ressources à la demande, généralement avec des performances imprévisibles mais à des coûts financiers abordables. En outre, dans un environnement de cloud, les défaillances sont perçues comme étant ordinaires. Pour surmonter les limites d’un cloud individuel, plusieurs clouds peuvent être combinés pour former une fédération de clouds, souvent avec des coûts supplémentaires légers pour les utilisateurs. Une fédération de clouds peut aider autant les fournisseurs que les utilisateurs à atteindre leurs objectifs tels la réduction du temps d’exécution, la minimisation des coûts, l’augmentation de la disponibilité, la réduction de la consommation d’énergie, pour ne citer que ceux-là. Ainsi, la fédération de clouds peut être une solution élégante pour éviter le sur-approvisionnement, réduisant ainsi les coûts d’exploitation en situation de charge moyenne, et en supprimant des ressources qui, autrement, resteraient inutilisées et gaspilleraient ainsi de l’énergie. Cependant, la fédération de clouds élargit la gamme des ressources disponibles. En conséquence, pour les utilisateurs, des compétences en cloud computing ou en administration système sont nécessaires, ainsi qu’un temps d’apprentissage considérable pour maîtriser les options disponibles. Dans ce contexte, certaines questions se posent : (a) Quelle ressource du cloud est appropriée pour une application donnée ? (b) Comment les utilisateurs peuvent-ils exécuter leurs applications HPC avec un rendement acceptable et des coûts financiers abordables, sans avoir à reconfigurer les applications pour répondre aux normes et contraintes du cloud ?
(c) Comment les non-spécialistes du cloud peuvent-ils maximiser l’usage des caractéristiques du cloud, sans être liés au fournisseur du cloud ? et (d) Comment les fournisseurs de cloud peuvent-ils exploiter la fédération pour réduire la consommation électrique, tout en étant en mesure de fournir un service garantissant les normes de qualité préétablies ? À partir de ces questions, la présente thèse propose une solution de consolidation d’applications pour la fédération de clouds qui garantit le respect des normes de qualité de service. On utilise un système multi-agents (SMA) pour négocier la migration des machines virtuelles entre les clouds. Les résultats de simulations montrent que notre approche pourrait réduire jusqu’à 46% la consommation totale d’énergie, tout en respectant les exigences de performance. En nous basant sur la fédération de clouds, nous avons développé et évalué une approche pour exécuter une énorme application de bioinformatique à coût zéro. En outre, nous avons pu réduire le temps d’exécution de 22,55% par rapport à la meilleure exécution dans un cloud individuel. Cette thèse présente aussi une architecture de cloud baptisée « Excalibur » qui permet l’adaptation automatique des applications standards pour le cloud. Dans l’exécution d’une chaîne de traitements de la génomique, Excalibur a pu parfaitement mettre à l’échelle les applications sur jusqu’à 11 machines virtuelles, ce qui a réduit le temps d’exécution de 63% et le coût de 84% par rapport à la configuration de l’utilisateur. Enfin, cette thèse présente un processus d’ingénierie des lignes de produits (PLE) pour gérer la variabilité de l’infrastructure à la demande du cloud, et une architecture multi-cloud autonome qui utilise ce processus pour configurer et faire face aux défaillances de manière indépendante. Le processus PLE utilise le modèle étendu de fonction (EFM) avec des attributs pour décrire les ressources et les sélectionner en fonction des objectifs de l’utilisateur.
Les expériences réalisées avec deux fournisseurs de cloud différents montrent qu’en utilisant le modèle proposé, les utilisateurs peuvent exécuter leurs applications dans un environnement de clouds fédérés, sans avoir besoin de connaître les variabilités et contraintes du cloud.
APA, Harvard, Vancouver, ISO, and other styles
40

Ferreira, Leite Alessandro. "A user-centered and autonomic multi-cloud architecture for high performance computing applications." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112355/document.

Full text
Abstract:
Le cloud computing a été considéré comme une option pour exécuter des applications de calcul haute performance. Bien que les plateformes traditionnelles de calcul haute performance telles que les grilles et les supercalculateurs offrent un environnement stable du point de vue des défaillances, des performances, et de la taille des ressources, le cloud computing offre des ressources à la demande, généralement avec des performances imprévisibles mais à des coûts financiers abordables. Pour surmonter les limites d’un cloud individuel, plusieurs clouds peuvent être combinés pour former une fédération de clouds, souvent avec des coûts supplémentaires légers pour les utilisateurs. Une fédération de clouds peut aider autant les fournisseurs que les utilisateurs à atteindre leurs objectifs tels la réduction du temps d’exécution, la minimisation des coûts, l’augmentation de la disponibilité, la réduction de la consommation d’énergie, pour ne citer que ceux-là. Ainsi, la fédération de clouds peut être une solution élégante pour éviter le sur-approvisionnement, réduisant ainsi les coûts d’exploitation en situation de charge moyenne, et en supprimant des ressources qui, autrement, resteraient inutilisées et gaspilleraient ainsi de l’énergie. Cependant, la fédération de clouds élargit la gamme des ressources disponibles. En conséquence, pour les utilisateurs, des compétences en cloud computing ou en administration système sont nécessaires, ainsi qu’un temps d’apprentissage considérable pour maîtriser les options disponibles. Dans ce contexte, certaines questions se posent : (a) Quelle ressource du cloud est appropriée pour une application donnée ? (b) Comment les utilisateurs peuvent-ils exécuter leurs applications HPC avec un rendement acceptable et des coûts financiers abordables, sans avoir à reconfigurer les applications pour répondre aux normes et contraintes du cloud ?
(c) Comment les non-spécialistes du cloud peuvent-ils maximiser l’usage des caractéristiques du cloud, sans être liés au fournisseur du cloud ? et (d) Comment les fournisseurs de cloud peuvent-ils exploiter la fédération pour réduire la consommation électrique, tout en étant en mesure de fournir un service garantissant les normes de qualité préétablies ? À partir de ces questions, la présente thèse propose une solution de consolidation d’applications pour la fédération de clouds qui garantit le respect des normes de qualité de service. On utilise un système multi-agents pour négocier la migration des machines virtuelles entre les clouds. En nous basant sur la fédération de clouds, nous avons développé et évalué une approche pour exécuter une énorme application de bioinformatique à coût zéro. En outre, nous avons pu réduire le temps d’exécution de 22,55% par rapport à la meilleure exécution dans un cloud individuel. Cette thèse présente aussi une architecture de cloud baptisée « Excalibur » qui permet l’adaptation automatique des applications standards pour le cloud. Dans l’exécution d’une chaîne de traitements de la génomique, Excalibur a pu parfaitement mettre à l’échelle les applications sur jusqu’à 11 machines virtuelles, ce qui a réduit le temps d’exécution de 63% et le coût de 84% par rapport à la configuration de l’utilisateur. Enfin, cette thèse présente un processus d’ingénierie des lignes de produits (PLE) pour gérer la variabilité de l’infrastructure à la demande du cloud, et une architecture multi-cloud autonome qui utilise ce processus pour configurer et faire face aux défaillances de manière indépendante. Le processus PLE utilise le modèle étendu de fonction avec des attributs pour décrire les ressources et les sélectionner en fonction des objectifs de l’utilisateur.
Les expériences réalisées avec deux fournisseurs de cloud différents montrent qu’en utilisant le modèle proposé, les utilisateurs peuvent exécuter leurs applications dans un environnement de clouds fédérés, sans avoir besoin de connaître les variabilités et contraintes du cloud.
Cloud computing has been seen as an option to execute high performance computing (HPC) applications. While traditional HPC platforms such as grids and supercomputers offer a stable environment in terms of failures, performance, and number of resources, cloud computing offers on-demand resources, generally with unpredictable performance, at low financial cost. Furthermore, in a cloud environment, failures are part of normal operation. To overcome the limits of a single cloud, clouds can be combined, forming a cloud federation, often at minimal additional cost for the users. A cloud federation can help both cloud providers and cloud users to achieve their goals, such as reducing the execution time, achieving minimum cost, increasing availability, and reducing power consumption, among others. Hence, cloud federation can be an elegant solution to avoid over-provisioning, reducing the operational costs in an average load situation and removing resources that would otherwise remain idle and waste power. However, cloud federation increases the range of resources available to the users. As a result, cloud or system administration skills may be demanded from the users, as well as considerable time to learn about the available options. In this context, some questions arise, such as: (a) which cloud resource is appropriate for a given application? (b) how can users execute their HPC applications with acceptable performance and financial cost, without needing to re-engineer the applications to fit the clouds' constraints? (c) how can non-cloud specialists maximize the features of the clouds, without being tied to a cloud provider? and (d) how can cloud providers use the federation to reduce the power consumption of the clouds, while still being able to give service-level agreement (SLA) guarantees to the users? Motivated by these questions, this thesis presents an SLA-aware application consolidation solution for cloud federation.
Using a multi-agent system (MAS) to negotiate virtual machine (VM) migrations between the clouds, simulation results show that our approach could reduce the power consumption by up to 46%, while trying to meet performance requirements. Using the federation, we developed and evaluated an approach to execute a huge bioinformatics application at zero cost. Moreover, we could decrease the execution time by 22.55% over the best single-cloud execution. In addition, this thesis presents a cloud architecture called Excalibur to auto-scale cloud-unaware applications. Executing a genomics workflow, Excalibur could seamlessly scale the applications up to 11 virtual machines, reducing the execution time by 63% and the cost by 84% when compared to a user's configuration. Finally, this thesis presents a product line engineering (PLE) process to handle the variabilities of infrastructure-as-a-service (IaaS) clouds, and an autonomic multi-cloud architecture that uses this process to configure and to deal with failures autonomously. The PLE process uses an extended feature model (EFM) with attributes to describe the resources and to select them based on users' objectives. Experiments conducted with two different cloud providers show that using the proposed model, the users could execute their applications in a cloud federation environment, without needing to know the variabilities and constraints of the clouds.
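The objective-driven resource selection that the EFM-based process performs can be pictured, in miniature, as filtering candidate configurations by user constraints and ranking the survivors by an objective. A minimal sketch, with invented provider names, attributes and thresholds (nothing here is taken from the thesis):

```python
# Hypothetical sketch of objective-driven selection over attributed
# resource descriptions. Instance names and values are illustrative.

def select_resource(candidates, min_cores, max_hourly_cost):
    """Return the cheapest candidate meeting the user's objectives."""
    feasible = [c for c in candidates
                if c["cores"] >= min_cores and c["cost"] <= max_hourly_cost]
    if not feasible:
        raise ValueError("no configuration satisfies the constraints")
    return min(feasible, key=lambda c: c["cost"])

candidates = [
    {"name": "provider-a.small", "cores": 2, "cost": 0.10},
    {"name": "provider-a.large", "cores": 8, "cost": 0.45},
    {"name": "provider-b.medium", "cores": 4, "cost": 0.20},
]

best = select_resource(candidates, min_cores=4, max_hourly_cost=0.50)
print(best["name"])  # provider-b.medium
```

A real feature model would also encode cross-tree constraints between features; this sketch only shows the attribute-filtering and ranking step.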
APA, Harvard, Vancouver, ISO, and other styles
41

Izurieta, Iván Carrera. "Performance modeling of MapReduce applications for the cloud." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/99055.

Full text
Abstract:
In recent years, cloud computing has become an important technology that makes it possible to run applications without deploying a physical infrastructure, with the advantage of reducing costs by charging the user only for the computational resources used by the application. The challenge in deploying distributed applications in cloud computing environments is planning the virtual machine infrastructure so as to optimize the execution time and the cost of the deployment. At the same time, recent years have seen the amount of data produced by applications grow more than ever. These data contain valuable information that must be extracted using tools such as MapReduce. MapReduce has been an important framework for analyzing large amounts of data since it was proposed by Google and made open source by Apache with its Hadoop implementation. The goal of this work is to show that the execution time of a distributed application, namely a MapReduce application, on a cloud computing infrastructure can be predicted using a mathematical model based on theoretical specifications. After measuring the time taken to execute the application while varying the parameters indicated in the mathematical model, and then applying a linear regression technique, this goal is achieved by finding a model of the execution time, which was subsequently applied to predict the execution time of MapReduce applications with satisfactory results. The experiments were carried out in different configurations: namely, running different MapReduce applications on private and public clusters, as well as on commercial cloud infrastructures, varying the number of nodes composing the cluster and the size of the workload given to the application. The experiments showed a clear relation with the theoretical model, indicating that the model is indeed able to predict the execution time of MapReduce applications.
The developed model is generic, meaning that it uses theoretical abstractions for the computational capacity of the environment and the computational cost of the MapReduce application. Future work is encouraged to extend this approach to other types of distributed applications, and also to include the mathematical model of this work in cloud services that offer MapReduce platforms, in order to help users plan their deployments.
In the last years, Cloud Computing has become a key technology that made it possible to run applications without needing to deploy a physical infrastructure, with the advantage of lowering costs to the user by charging only for the computational resources used by the application. The challenge with deploying distributed applications in Cloud Computing environments is that the virtual machine infrastructure should be planned in a way that is time- and cost-effective. Also, in the last years we have seen how the amount of data produced by applications has grown bigger than ever. This data contains valuable information that has to be extracted using tools like MapReduce. MapReduce has been an important framework to analyze large amounts of data since it was proposed by Google, and made open source by Apache with its Hadoop implementation. The goal of this work is to show that the execution time of a distributed application, namely, a MapReduce application, in a Cloud Computing environment, can be predicted using a mathematical model based on theoretical specifications. This prediction is made to help the users of the Cloud Computing environment plan their deployments, i.e., quantify the number of virtual machines and their characteristics in order to lower the cost and/or time. After measuring the application execution time and varying parameters stated in the mathematical model, and then using a linear regression technique, the goal is achieved by finding a model of the execution time, which was then applied to predict the execution time of MapReduce applications with satisfactory results. The experiments were conducted in several configurations: namely, private and public clusters, as well as commercial cloud infrastructures, running different MapReduce applications, and varying the number of nodes composing the cluster, as well as the amount of workload given to the application.
Experiments showed a clear relation with the theoretical model, revealing that the model is in fact able to predict the execution time of MapReduce applications. The developed model is generic, meaning that it uses theoretical abstractions for the computing capacity of the environment and the computing cost of the MapReduce application. Further work extending this approach to fit other types of distributed applications is encouraged, as well as including this mathematical model in Cloud services offering MapReduce platforms, in order to help users plan their deployments.
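The prediction approach described above, measuring execution times while varying model parameters and fitting a linear model, can be sketched as follows. The model form and the synthetic measurements are invented for illustration; the thesis's actual model terms differ.

```python
import numpy as np

# Illustrative sketch: fit execution time T as a linear function of
# workload size w, per-node share w/n, and an intercept, then predict
# for a new configuration. The data below are synthetic.

rng = np.random.default_rng(0)
w = rng.uniform(10, 100, 40)        # workload size (e.g., GB)
n = rng.integers(2, 16, 40)         # number of nodes
t = 5.0 + 0.8 * (w / n) + 0.05 * w + rng.normal(0, 0.1, 40)

X = np.column_stack([np.ones_like(w), w / n, w])
beta, *_ = np.linalg.lstsq(X, t, rcond=None)

def predict(workload, nodes):
    """Predicted execution time for a candidate deployment."""
    return beta @ [1.0, workload / nodes, workload]

print(round(predict(50, 10), 1))  # close to 5 + 0.8*5 + 0.05*50 = 11.5
```

The fitted model can then answer planning questions such as "how many nodes keep the runtime under a deadline at the least cost".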
42

Zhang, Ziming. "Adaptive Power Management for Autonomic Resource Configuration in Large-scale Computer Systems." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804939/.

Full text
Abstract:
In order to run and manage resource-intensive high-performance applications, large-scale computing and storage platforms have been evolving rapidly in various domains in both academia and industry. The energy expenditure consumed to operate and maintain these cloud computing infrastructures is a major factor influencing the overall profit and efficiency for most cloud service providers. Moreover, considering the mitigation of environmental damage from excessive carbon dioxide emission, the amount of power consumed by enterprise-scale data centers should be constrained for protection of the environment. Generally speaking, there exists a trade-off between power consumption and application performance in large-scale computing systems, and how to balance these two factors has become an important topic for researchers and engineers in the cloud and HPC communities. Therefore, minimizing power usage while satisfying the Service Level Agreements has become one of the most desirable objectives in cloud computing research and implementation. Since the fundamental feature of the cloud computing platform is hosting workloads with a variety of characteristics in a consolidated and on-demand manner, it is necessary to explore the inherent relationship between power usage and machine configurations. Subsequently, with an understanding of these inherent relationships, researchers are able to develop effective power management policies to optimize productivity by balancing power usage and system performance. In this dissertation, we develop an autonomic power-aware system management framework for large-scale computer systems. We propose a series of techniques including coarse-grain power profiling, VM power modelling, power-aware resource auto-configuration and a full-system power usage simulator. These techniques help us to understand the characteristics of power consumption of various system components.
Based on these techniques, we are able to test various job scheduling strategies and develop resource management approaches to enhance the systems' power efficiency.
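As a hedged illustration of the kind of VM/host power modelling mentioned above, the widely used first-order linear model relates host power draw to CPU utilisation between idle and peak draw; it also shows why consolidation saves energy. All wattage figures are invented, not taken from the dissertation:

```python
# First-order server power model: draw grows linearly with CPU
# utilisation between idle and peak. Figures are illustrative.

P_IDLE = 100.0  # watts at 0% utilisation (assumed)
P_PEAK = 250.0  # watts at 100% utilisation (assumed)

def host_power(utilisation):
    """Estimate host power draw in watts for utilisation in [0, 1]."""
    if not 0.0 <= utilisation <= 1.0:
        raise ValueError("utilisation must be within [0, 1]")
    return P_IDLE + (P_PEAK - P_IDLE) * utilisation

# Consolidation argument: two hosts at 30% draw more than one at 60%,
# because each powered-on host pays the idle cost.
two_hosts = 2 * host_power(0.3)   # 290.0 W
one_host = host_power(0.6)        # 190.0 W
print(two_hosts - one_host)       # 100.0 W saved by consolidating
```

Real power models add terms for memory, disk and network activity, but the idle-cost intuition behind consolidation is already visible here.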
43

Ali-Eldin, Hassan Ahmed. "Workload characterization, controller design and performance evaluation for cloud capacity autoscaling." Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-108398.

Full text
Abstract:
This thesis studies cloud capacity auto-scaling, or how to provision and release resources to a service running in the cloud based on its actual demand, using an automatic controller. As the performance of server systems depends on the system design, the system implementation, and the workloads the system is subjected to, we focus on these aspects with respect to designing auto-scaling algorithms. Towards this goal, we design and implement two auto-scaling algorithms for cloud infrastructures. The algorithms predict the future load for an application running in the cloud. We discuss the different approaches to designing an auto-scaler combining reactive and proactive control methods, and to be able to handle long running requests, e.g., tasks running for longer than the actuation interval, in a cloud. We compare the performance of our algorithms with state-of-the-art auto-scalers and evaluate the controllers' performance with a set of workloads. As any controller is designed with an assumption on the operating conditions and system dynamics, the performance of an auto-scaler varies with different workloads. In order to better understand the workload dynamics and evolution, we analyze a 6-year-long workload trace of the sixth most popular Internet website. In addition, we analyze a workload from one of the largest video-on-demand streaming services in Sweden. We discuss the popularity of objects served by the two services, the spikes in the two workloads, and the invariants in the workloads. We also introduce a measure for the disorder in a workload, i.e., the amount of burstiness. The measure is based on Sample Entropy, an empirical statistic used in biomedical signal processing to characterize biomedical signals. The introduced measure can be used to characterize workloads based on their burstiness profiles.
We compare our introduced measure with the literature on quantifying burstiness in a server workload, and show the advantages of our introduced measure. To better understand the tradeoffs between using different auto-scalers with different workloads, we design a framework to compare auto-scalers and give probabilistic guarantees on the performance in worst-case scenarios. Using different evaluation criteria and more than 700 workload traces, we compare six state-of-the-art auto-scalers that we believe represent the development of the field in the past 8 years. Knowing that the auto-scalers' performance depends on the workloads, we design a workload analysis and classification tool that assigns a workload to its most suitable elasticity controller out of a set of implemented controllers. The tool has two main components: an analyzer, and a classifier. The analyzer analyzes a workload and feeds the analysis results to the classifier. The classifier assigns a workload to the most suitable elasticity controller based on the workload characteristics and a set of predefined business-level objectives. The tool is evaluated with a set of collected real workloads and a set of generated synthetic workloads. Our evaluation results show that the tool can help a cloud provider to improve the QoS provided to the customers.
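The burstiness measure described above builds on Sample Entropy. A plain, unoptimised Sample Entropy sketch is shown below; the template length m, tolerance r and demo signals are illustrative choices, not the thesis's settings. A regular signal scores near zero, an irregular one noticeably higher.

```python
import math
import random

def _count_matches(x, m, r):
    """Count template pairs of length m within Chebyshev tolerance r."""
    n = len(x)
    count = 0
    for i in range(n - m + 1):
        for j in range(i + 1, n - m + 1):
            if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= r:
                count += 1
    return count

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -ln(A/B): A, B are match counts at lengths m+1 and m."""
    b = _count_matches(x, m, r)
    a = _count_matches(x, m + 1, r)
    if a == 0 or b == 0:
        return float("inf")   # undefined for too-short/too-strict input
    return -math.log(a / b)

regular = [1.0, 2.0] * 20
random.seed(1)
irregular = [random.random() for _ in range(60)]

print(sample_entropy(regular))    # near 0: highly predictable signal
print(sample_entropy(irregular))  # larger: disordered, "bursty" signal
```

In practice r is usually set relative to the signal's standard deviation, and the O(n^2) counting loop is replaced by faster range-search structures for long traces.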
44

Marathe, Aniruddha Prakash. "Evaluation and Optimization of Turnaround Time and Cost of HPC Applications on the Cloud." Diss., The University of Arizona, 2014. http://hdl.handle.net/10150/332825.

Full text
Abstract:
The popularity of Amazon's EC2 cloud platform has increased in the commercial and scientific high-performance computing (HPC) applications domain in recent years. However, many HPC users consider dedicated high-performance clusters, typically found in large compute centers such as those in national laboratories, to be far superior to EC2 because of the significant communication overhead of the latter. We find this view to be quite narrow; the proper metrics for comparing high-performance clusters to EC2 are turnaround time and cost. In this work, we first compare the HPC-grade EC2 cluster to top-of-the-line HPC clusters based on turnaround time and total cost of execution. When measuring turnaround time, we include expected queue wait time on HPC clusters. Our results show that although, as expected, standard HPC clusters are superior in raw performance, they suffer from potentially significant queue wait times. We show that EC2 clusters may produce better turnaround times due to typically lower queue wait times. To estimate cost, we developed a pricing model, relative to EC2's node-hour prices, to set node-hour prices for (currently free) HPC clusters. We observe that the cost-effectiveness of running an application on a cluster depends on raw performance and application scalability. However, despite the potentially lower queue wait and turnaround times, the primary barrier to using clouds for many HPC users is the cost. Amazon EC2 provides a fixed-cost option (called on-demand) and a variable-cost, auction-based option (called the spot market). The spot market trades lower cost for potential interruptions that necessitate checkpointing; if the market price exceeds the bid price, a node is taken away from the user without warning. We explore techniques to maximize performance per dollar given a time constraint within which an application must complete.
Specifically, we design and implement multiple techniques to reduce expected cost by exploiting redundancy in the EC2 spot market. We then design an adaptive algorithm that selects a scheduling algorithm and determines the bid price. We show that our adaptive algorithm executes programs up to 7x cheaper than using the on-demand market and up to 44% cheaper than the best non-redundant, spot-market algorithm. Finally, we extend our adaptive algorithm to exploit several opportunities for cost-savings on the EC2 spot market. First, we incorporate application scalability characteristics into our adaptive policy. We show that the adaptive algorithm informed with scalability characteristics of applications achieves up to 56% cost-savings compared to the expected cost for the base adaptive algorithm run at a fixed, user-defined scale. Second, we demonstrate potential for obtaining considerable free computation time on the spot market enabled by its hour-boundary pricing model.
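A back-of-envelope version of the on-demand versus spot trade-off discussed above can be written down directly: if each spot hour is revoked with some probability and hourly checkpoints bound the lost work, each productive hour costs a geometric number of billed attempts. The prices and interruption probability below are invented, and this is far simpler than the thesis's adaptive, redundancy-based algorithms:

```python
# Hedged expected-cost model (not the thesis's algorithm): with hourly
# checkpoints, an hour revoked with probability p_fail is redone, so a
# productive hour takes 1 / (1 - p_fail) billed attempts on average.

ON_DEMAND = 0.50      # $/hour (illustrative)
SPOT = 0.15           # $/hour (illustrative)
P_INTERRUPT = 0.10    # chance a given spot hour is revoked (assumed)

def expected_cost(hours_of_work, price, p_fail=0.0):
    """Expected dollars to finish hours_of_work productive hours."""
    return hours_of_work * price / (1.0 - p_fail)

od = expected_cost(10, ON_DEMAND)              # 5.0
spot = expected_cost(10, SPOT, P_INTERRUPT)    # about 1.67
print(round(od, 2), round(spot, 2))
```

Even with a 10% hourly interruption rate, the spot estimate stays well below on-demand here, which is the intuition behind bidding strategies that accept interruptions in exchange for lower prices.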
45

Guan, Shichao. "Efficient and Proactive Offloading Techniques for Sustainable and Mobility-aware Resource Management in Heterogeneous Mobile Cloud Environments." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40562.

Full text
Abstract:
To support increasingly sophisticated sensors and resource-hungry applications with currently used lithium-based batteries, and to further augment mobile computing power, the concept of Cloudlet-based offloading is proposed, which enables part of an application's computing tasks to be migrated from battery-limited, low-capacity mobile elements to the local edge. Such Cloudlet-based offloading technologies extend the provisioning of computing and storage capabilities from remote Cloud Data Centers to the proximity of end users via heterogeneous networks. However, Cloudlet-based offloading requires coordination among User Equipment, inter-Cloudlet nodes and remote Cloud Data Centers, which raises new challenges and issues regarding how to enable Cloudlet-based offloading in the context of the mobile edge environment and how to achieve execution- and energy-efficient offloading allocation under limited available resources. In this dissertation, a Cloudlet-based Mobile Cloud offloading prototype is first proposed. A mechanism for handling diverse computing resources is described; by adopting it, idle public resources can be easily configured as additional computing capabilities in the virtual resource pool. A fast deployment model is built to relieve the migration and installation cost when adapting the platform. An energy-saving strategy is utilized to reduce the consumption of computing resources. Security components are implemented to protect sensitive information and block malicious attacks in the cloud. Concerning the limited processing capability at the edge, a task-centric energy-aware Cloudlet-based Mobile Cloud model is formulated. A Cloudlet task-based offloading mechanism is proposed to achieve energy-aware offloading resource preparation and scheduling on the Cloudlet. A Cloud task-centric scheduling algorithm is presented for green collaborative offloading processing between the Cloudlet and the remote Cloud.
Considering the dynamism and heterogeneity of the offloading environment, a hybrid offloading model is proposed to solve the heterogeneous resource-constrained offloading issues on dynamic Cloudlets. A queue-based offloading framework is developed to formulate and analyze the mixed migration-based and partition-based offloading behaviours on the Cloudlet. The execution- and energy-aware heterogeneous offloading resource allocation problem is formalized and solved. A time series-based load prediction model is designed on the Cloudlet to achieve fine-grained proactive resource allocation. Regarding the mobility of User Equipment and the diverse priorities of offloading tasks, an edge-based mobility-aware offloading model is formulated to solve the intra-Cloudlet offloading scheduling issue and the inter-Cloudlet load-aware heterogeneous resource allocation issue. A priority-based queueing model is designed to formulate the intra-Cloudlet mobility-aware offloading scheduling problem, which is resolved by a heuristic solution. The energy-aware inter-Cloudlet resource selection procedure is formalized in a mobility-aware multi-site resource allocation model, which is further solved by lightweight dynamic load balancing.
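One simple way to picture a load-aware offloading decision of the kind these queueing models formalise is to compare M/M/1 mean response times at the device and at the Cloudlet, adding the network delay for the remote option. This sketch uses invented rates and is not the thesis's actual formulation:

```python
# Toy offloading decision via M/M/1 mean response times. All rates
# and delays are illustrative assumptions.

def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue; infinite if overloaded."""
    if arrival_rate >= service_rate:
        return float("inf")
    return 1.0 / (service_rate - arrival_rate)

def choose_site(arrival_rate, local_mu, cloudlet_mu, network_delay):
    """Offload only if the Cloudlet, plus the network, is faster."""
    local = mm1_response_time(arrival_rate, local_mu)
    remote = network_delay + mm1_response_time(arrival_rate, cloudlet_mu)
    return "offload" if remote < local else "local"

# A nearly saturated device favours the faster Cloudlet despite the
# extra network delay; a lightly loaded one computes locally.
print(choose_site(8.0, 9.0, 20.0, 0.05))  # offload
print(choose_site(1.0, 10.0, 20.0, 0.5))  # local
```

Priority classes and mobility, as in the thesis, would replace the single M/M/1 term with per-class queues and location-dependent network delays.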
46

Olsson, Philip. "A Study of OpenStack Networking Performance." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191023.

Full text
Abstract:
Cloud computing is a fast-growing sector among software companies. Cloud platforms provide services such as spreading out storage and computational power over several geographic locations, on-demand resource allocation and flexible payment options. Virtualization is a technology used in conjunction with cloud technology and offers the possibility to share the physical resources of a host machine by hosting several virtual machines on the same physical machine. Each virtual machine runs its own operating system, which makes the virtual machines hardware-independent. The cloud and virtualization layers add additional layers of software to the server environments to provide their services. These additional layers add an overhead in latency, which can be problematic for latency-sensitive applications. The primary goal of this thesis is to investigate how the networking components impact latency in an OpenStack cloud compared to a traditional deployment. The networking components were benchmarked under different load scenarios, and the results indicate that the additional latency added by the networking components is not too significant in the network setup used. Instead, a significant performance degradation could be seen in the applications running in the virtual machine, which caused most of the added latency in the cloud environment.
Cloud services are a fast-growing sector among software companies. Cloud platforms provide services such as spreading storage and computing power across different geographic areas, on-demand resource allocation, and flexible payment methods. Virtualization is a technique used together with cloud technology that offers the possibility of sharing the physical resources of a host machine among several virtual machines running on the same physical computer. Each virtual machine runs its own operating system, which makes the virtual machines hardware-independent. The cloud and virtualization layers add extra software layers to server environments to make these techniques possible. The extra software layers impose an overhead on response time, which can be a problem for applications that require fast response times. The primary goal of this thesis is to investigate how the additional networking components of the OpenStack cloud platform affect response time. The networking components were evaluated under different load scenarios, and the results indicate that the extra response time caused by the additional networking components is not very significant in the network setup used. A significant performance degradation was instead seen in the applications running on the virtual machine, which accounted for the larger part of the increased response time.
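The comparison the thesis performs, measuring how much latency the cloud layers add over a traditional deployment, reduces at its simplest to summarising paired RTT samples. The sample values below are invented for illustration:

```python
import statistics

# Toy summary of benchmark output: RTT samples (milliseconds) from a
# bare-metal server and an OpenStack-hosted one. Values are invented.

bare_metal_rtt = [0.20, 0.22, 0.21, 0.25, 0.20]
openstack_rtt = [0.31, 0.35, 0.30, 0.38, 0.33]

added = statistics.mean(openstack_rtt) - statistics.mean(bare_metal_rtt)
print(f"mean added latency: {added:.2f} ms")  # mean added latency: 0.12 ms
```

Real benchmarking would collect thousands of samples per load scenario and report tail percentiles as well, since jitter matters more than the mean for latency-sensitive applications.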
47

Cheng, Luwei, and 程芦伟. "Network performance isolation for virtual machines." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B47753183.

Full text
Abstract:
Cloud computing is a new computing paradigm that aims to transform computing services into a utility, just as electricity is provided in a “pay-as-you-go” manner. Data centers are increasingly adopting virtualization technology for the purpose of server consolidation, flexible resource management and better fault tolerance. Virtualization-based cloud services host networked applications in virtual machines (VMs), with each VM provided the desired amount of resources using resource isolation mechanisms. Effective network performance isolation is fundamental to data centers, as it offers the significant benefit of performance predictability for applications. This research is application-driven. We study how network performance isolation can be achieved for latency-sensitive cloud applications. For media streaming applications, network performance isolation means both predictable network bandwidth and low-jittered network latency. The current resource sharing methods for VMs mainly focus on proportional resource share, while ignoring the fact that I/O latency in VM-hosted platforms is mostly related to the resource provisioning rate. Resource isolation with only a quantitative promise does not sufficiently guarantee performance isolation. Even if the VM is allocated adequate resources such as CPU time and network bandwidth, problems such as network jitter (variation in packet delays) can still happen if the resources are provisioned at inappropriate moments. So in order to achieve performance isolation, the problem is not only how many/much resources each VM gets, but more importantly whether the resources are provisioned in a timely manner. How to guarantee that both requirements are achieved in resource allocation is challenging. This thesis systematically analyzes the causes of unpredictable network latency in VM-hosted platforms, with both technical discussion and experimental illustration.
We identify that the varied network latency is jointly caused by the VMM CPU scheduler and the network traffic shaper, and then address the problem in these two parts. In our solutions, we consider the design goals of resource provisioning rate and resource proportionality as two orthogonal dimensions. In the hypervisor, a proportional-share CPU scheduler with soft real-time support is proposed to guarantee predictable scheduling delay; in the network traffic shaper, we introduce the concept of a smooth window to smooth packet delay and apply closed-loop feedback control to maintain network bandwidth consumption. The solutions are implemented in Xen 4.1.0 and Linux 2.6.32.13, which were both the latest versions when this research was conducted. Extensive experiments have been carried out using both real-life applications and low-level benchmarks. Testing results show that the proposed solutions can effectively guarantee network performance isolation, by achieving both predefined network bandwidth and low-jittered network latency.
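A generic token-bucket shaper illustrates the kind of traffic shaping discussed above; the thesis's smooth-window design with feedback control differs, so treat this as a stand-in with invented parameters:

```python
# Minimal token-bucket shaper: tokens refill at a fixed rate up to a
# burst capacity, and a packet is forwarded only if enough tokens are
# available. Parameters are illustrative.

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes/second
        self.capacity = burst_bytes
        self.tokens = burst_bytes       # start with a full bucket
        self.last = 0.0                 # clock assumed to start at 0

    def allow(self, packet_bytes, now):
        """Return True if the packet may be sent at time `now` (seconds)."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

tb = TokenBucket(rate_bps=8000, burst_bytes=1500)  # 1000 B/s, 1500 B burst
print(tb.allow(1500, 0.0))  # True: initial burst allowance
print(tb.allow(1500, 0.1))  # False: only ~100 B refilled so far
print(tb.allow(1000, 1.0))  # True: ~1000 B refilled by t = 1.0 s
```

A plain token bucket caps the rate but can still release packets in clumps; smoothing the release times, as the thesis's smooth window does, is what reduces jitter.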
published_or_final_version
Computer Science
Master
Master of Philosophy
48

Sreenibha, Reddy Byreddy. "Performance Metrics Analysis of GamingAnywhere with GPU accelerated Nvidia CUDA." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16846.

Full text
Abstract:
The modern world has opened the gates to many advancements in cloud computing, particularly in the field of cloud gaming. The most recent development in this area is the open-source cloud gaming system called GamingAnywhere. The relationship between the CPU and GPU is the main focus of this thesis. Graphical Processing Unit (GPU) performance plays a vital role in analyzing the playing experience and enhancement of GamingAnywhere. In this paper, we concentrate on virtualization of the GPU and suggest that accelerating this unit using NVIDIA CUDA is the key to better performance while using GamingAnywhere. After extensive research, gVirtuS was chosen as the technique for employing NVIDIA CUDA. An experimental study is conducted to evaluate the feasibility and performance of GPU solutions by VMware in the cloud gaming scenarios given by GamingAnywhere. Performance is measured in terms of bitrate, packet loss, jitter and frame rate. Different resolutions of the game are considered in our empirical research, and our results show that the frame rate and bitrate increased across resolutions with the usage of the NVIDIA CUDA-enhanced GPU.
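Two of the metrics named above can be computed directly from raw measurements: frame rate from a capture duration, and mean-deviation jitter from packet inter-arrival gaps. The sample data are invented for illustration:

```python
import statistics

def frame_rate(frames_rendered, seconds):
    """Frames per second over a capture window."""
    return frames_rendered / seconds

def jitter_ms(arrival_times_ms):
    """Mean absolute deviation of packet inter-arrival gaps (ms)."""
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    mean_gap = statistics.mean(gaps)
    return statistics.mean([abs(g - mean_gap) for g in gaps])

print(frame_rate(3000, 60))              # 50.0 fps
print(jitter_ms([0, 10, 21, 29, 41]))    # 1.25 ms
```

Streaming tools often report RFC 3550-style smoothed jitter instead; the mean-deviation form above is the simplest variant that captures the same idea.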
49

Chauhan, Maneesh. "Measurement and Analysis of Networking Performance in Virtualised Environments." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177327.

Full text
Abstract:
Mobile cloud computing, having embraced ideas like computation offloading, mandates a low-latency, high-speed network to satisfy the quality of service and usability assurances for mobile applications. The networking performance of clouds based on Xen and VMware virtualisation solutions has been extensively studied by researchers, although they have mostly focused on network throughput and bandwidth metrics. This work focuses on the measurement and analysis of the networking performance of VMs in a small, KVM-based data centre, emphasising the role of virtualisation overheads in the Host-VM latency and eventually in the overall latency experienced by remote clients. We also present some useful tools, such as Driftanalyser, VirtoCalc and Trotter, that we developed for carrying out specific measurements and analysis. Our work proves that an increase in a VM's CPU workload has direct implications for the network round trip times. We also show that Virtualisation Overheads (VO) have a significant bearing on the end-to-end latency and can contribute up to 70% of the round trip time between the Host and VM. Furthermore, we thoroughly study latency due to Virtualisation Overheads as a networking performance metric and analyse the impact of CPU loads and networking workloads on it. We also analyse the resource sharing patterns and their effects amongst VMs of different sizes on the same Host. Finally, having observed a dependency between the network performance of a VM and the Host CPU load, we suggest that in a KVM-based cloud installation, workload profiling and an optimum processor pinning mechanism can be effectively utilised to regulate the network performance of the VMs. The findings from this research work are applicable to optimising latency-oriented VM provisioning in cloud data centres, which would benefit most latency-sensitive mobile cloud applications.
Mobile cloud computing, having embraced ideas such as computation offloading, requires a low-latency, high-speed network to satisfy the quality-of-service and usability guarantees for mobile applications. The networking performance of clouds based on Xen and VMware virtualisation solutions has been studied by researchers, although they have mostly focused on network throughput and bandwidth statistics. This work is focused on the measurement and analysis of the networking performance of virtual machines in a small, KVM-based data centre, emphasising the importance of virtualisation overheads in the Host-VM latency and, eventually, in the total delay experienced by remote clients. We also present some useful tools, such as Driftanalyser, VirtoCalc and Trotter, which we developed to perform specific measurements and analyses. Our work shows that an increase in a VM's CPU workload has direct consequences for network round-trip times. We also show that virtualisation overheads (VO) are of great significance for end-to-end latency and can contribute up to 70% of the round-trip time between Host and VM. Furthermore, we carefully study latency due to virtualisation overheads as a networking performance metric and examine the effects of CPU load and networking workload on it. We also analyse resource sharing patterns and their effects among virtual machines of different sizes on the same Host. Finally, having observed a dependency between the network performance of a VM and the Host CPU load, we suggest that in a KVM-based cloud installation, workload profiling and an optimal processor pinning mechanism can be used effectively to regulate VM network performance. The results of this research apply to optimising latency-oriented VM provisioning in cloud data centres, which would benefit most latency-sensitive mobile cloud applications.
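The headline observation, that virtualisation overheads can account for up to 70% of the Host-VM round trip, corresponds to a simple accounting step: subtract the non-virtualised portion of the RTT and take the share. The sample numbers below merely reproduce that 70% figure for illustration:

```python
# Sketch of virtualisation-overhead (VO) accounting. The RTT values
# are invented so that the share matches the abstract's 70% figure.

def vo_share(host_vm_rtt_ms, non_virt_rtt_ms):
    """Fraction of the Host-VM RTT attributable to virtualisation."""
    return (host_vm_rtt_ms - non_virt_rtt_ms) / host_vm_rtt_ms

share = vo_share(host_vm_rtt_ms=1.0, non_virt_rtt_ms=0.3)
print(f"{share:.0%}")  # 70%
```

In a real study the non-virtualised portion would itself be measured (e.g., host-to-host pings), and the share would be reported across CPU-load levels, as the thesis does.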
50

Killian, Rudi. "Dynamic superscalar grid for technical debt reduction." Thesis, Cape Peninsula University of Technology, 2018. http://hdl.handle.net/20.500.11838/2726.

Full text
Abstract:
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2018.
Organizations and private individuals look to technology advancements to increase their ability to make informed decisions, the motivation for technology adoption sprouting from an innate need for value generation. The technology currently heralded as the future platform to facilitate value addition is popularly termed cloud computing. The move to cloud computing, however, may conceivably accelerate the obsolescence cycle for currently retained Information Technology (IT) assets. The term obsolescence is applied here as the inability to repurpose or scale an information system resource for needed functionality. The incapacity to reconfigure, grow or shrink an IT asset, be it hardware or software, is a well-known narrative of technical debt. The notion of emergent technical debt realities is professed to be all but inevitable when informed by Moore's Law, as technology must inexorably advance. Of more imminent concern, however, is that major accelerating factors of technical debt are deemed to be non-holistic conceptualization and design conventions. Should management of IT assets fail to address technical debt continually, the technology platform would predictably require replacement. The unrealized value, functional and fiscal loss, together with the resultant e-waste generated by technical debt, is meaningfully unattractive. Historically, the cloud milieu evolved from the grid and clustering paradigms, which allowed for information sourcing across multiple and often dispersed computing platforms. The parallel operations in distributed computing environments are inherently value-adding, as enhanced effective use of resources and efficiency in data handling may be achieved. The predominant information processing solutions that implement parallel operations in distributed environments are abstracted constructs, styled as High Performance Computing (HPC) or High Throughput Computing (HTC).
Regardless of the underlying distributed environment, the archetypes of HPC and HTC differ radically in standard implementation. The foremost contrasting factors, parallelism granularity, failover and locality in data handling, have recently been the subject of greater academic discourse towards a possible fusion of the two technologies. In this research paper, we uncover probable platforms of future technical debt and subsequently recommend redeployment alternatives. The suggested alternatives take the form of scalable grids, which should provide alignment with the contemporary nature of individual information processing needs. The potential of grids as efficient and effective information sourcing solutions across geographically dispersed heterogeneous systems is envisioned to reduce or delay aspects of technical debt. As part of an experimental investigation to test the plausibility of these concepts, artefacts are designed to generically implement HPC and HTC. The design features exposed by the experimental artefacts could provide insights towards an amalgamation of HPC and HTC.
