
Theses on the topic "Virtualization architecture"



Consult the top 50 theses for your research on the topic "Virtualization architecture".


You can also download the full text of an academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Köhler, Fredrik. "Network Virtualization in Multi-hop Heterogeneous Architecture". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-38696.

Full text
Abstract
Software Defined Networking (SDN) is a technology that introduces modular and programmable networks with separate control and data planes. Traditional networks mostly use proprietary protocols and devices that need to communicate with each other and share information about routes and paths in the network. SDN, on the other hand, uses a controller that communicates with devices through an open protocol called OpenFlow; the controller can insert the routing rules for flows into the networking devices. This technology is still new, and more research is required to show that it can perform better than, or at least as well as, conventional networks. Through experiments on different topologies, this thesis aims to discover how flow delays are affected by having OpenFlow in the network, and to identify the overhead of using such technology. The results show that the overhead can be too large for time- and noise-sensitive applications, while the average delay is comparable to conventional networks.
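The overhead comparison described in this abstract can be illustrated with a small sketch (this is not code from the thesis; all delay values are invented, and the first OpenFlow packet is assumed to pay a flow-setup round trip to the controller):

```python
from statistics import mean

def flow_overhead(conventional_ms, openflow_ms):
    """Compare per-packet delays (ms) of the same flow on a conventional
    network and an OpenFlow network; return the average extra delay and
    the worst-case extra delay introduced by the controller path."""
    avg_overhead = mean(openflow_ms) - mean(conventional_ms)
    worst_case = max(openflow_ms) - max(conventional_ms)
    return avg_overhead, worst_case

# Hypothetical samples: the first OpenFlow packet is dominated by the
# table-miss round trip to the controller; later packets hit the
# installed flow rule and behave like the conventional network.
conv = [1.1, 1.0, 1.2, 1.1]
of   = [9.8, 1.2, 1.1, 1.3]
avg, worst = flow_overhead(conv, of)
```

The point of the sketch is that the average delay stays comparable while the worst case (flow setup) can be prohibitive for noise-sensitive applications.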
APA, Harvard, Vancouver, ISO, and other styles
2

Korotich, Elena. "A Service Virtualization Architecture for Efficient Multimedia Delivery". Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23608.

Full text
Abstract
This thesis provides a novel architecture for the creation and management of virtual multimedia adaptation services offered by a multimedia-enabled cloud. The aim of the proposed scheme is to provide optimal yet transparent user access to adapted media content while isolating users from the heterogeneity of devices, the diversity of media formats, the details of the adaptation services, and the performance variations of the underlying network. This goal is achieved through the development of service virtualization models that provide various levels of abstraction of the actual physical services and their performance parameters. Such virtual models offer adaptation functions by composing adaptation services in accordance with their parameters. Additionally, parameters describing the functional specifics of the adaptation functions, as well as multimedia content features, are organized into a hierarchical structure that facilitates extraction of the virtual models capable of satisfying the conditions expressed by user requests. At the same time, the parameter/feature organization structure itself is flexible enough to allow users to specify media delivery requests at various levels of detail (e.g., summarize a video vs. drop specific frames). As a result, in response to a user request for multimedia content, an optimal virtual service adaptation path is calculated, describing the needed media adaptation operations as well as the appropriate mapping to the physical resources capable of executing such functions. The selection of the adaptation path uses a novel performance-history-based selection mechanism that takes into account the performance variations and relations of the services in the dynamically changing environment of multimedia clouds. A number of experiments are conducted to demonstrate the potential of the proposed work in terms of processing time and service quality.
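A performance-history-based path selection of the kind this abstract describes can be sketched as follows (a toy version, not the thesis mechanism; the service names, sample values and the EWMA predictor are all assumptions):

```python
def ewma(samples, alpha=0.5):
    """Exponentially weighted moving average: recent performance samples
    of a service count more than old ones."""
    est = samples[0]
    for s in samples[1:]:
        est = alpha * s + (1 - alpha) * est
    return est

def select_path(candidate_paths, history):
    """Pick the adaptation path whose services have the lowest combined
    predicted processing time, based on per-service history."""
    def cost(path):
        return sum(ewma(history[svc]) for svc in path)
    return min(candidate_paths, key=cost)

# Hypothetical adaptation services with past processing times (ms, oldest first).
history = {
    "transcode_h264": [120, 100, 90],
    "downscale_720p": [40, 45, 42],
    "transcode_vp9":  [200, 210, 250],
}
paths = [("transcode_h264", "downscale_720p"), ("transcode_vp9",)]
best = select_path(paths, history)
```

The two-step path wins here because its predicted total cost is lower than the single slow transcoder, illustrating why history, not a static catalogue, should drive the mapping to physical resources.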
3

Metzker, Martin. "A network QoS management architecture for virtualization environments". Diss., Ludwig-Maximilians-Universität München, 2014. http://nbn-resolving.de/urn:nbn:de:bvb:19-177848.

Full text
Abstract
Network quality of service (QoS) and its management are concerned with providing, guaranteeing and reporting properties of data flows within computer networks. Over the past two decades, virtualization has become a very popular tool in data centres, yet without network QoS management capabilities. With virtualization, the management focus shifts from physical components and topologies towards virtual infrastructures (VIs) and their purposes. VIs are designed and managed as independent, isolated entities. Without network QoS management capabilities, VIs cannot offer the same services and service levels as physical infrastructures, leaving VIs at a disadvantage with respect to applicability and efficiency. This thesis closes this gap and develops a management architecture enabling network QoS management in virtualization environments. First, requirements are derived from real-world scenarios, yielding a validation reference for the proposed architecture. After that, a life cycle for VIs and a taxonomy for network links and virtual components are introduced, to align the network QoS management task with the general management of virtualization environments and to enable the creation of technology-specific adaptors for integrating the technologies and sub-services used in virtualization environments. The core aspect shaping the proposed management architecture is a management loop and its corresponding strategy for identifying and ordering sub-tasks. Finally, a prototypical implementation shows that the presented management approach is suited to network QoS management and enforcement in virtualization environments. The architecture fulfils its purpose, meeting all identified requirements. Ultimately, network QoS management is one among many aspects of managing virtualization environments, and the architecture presented here exposes interfaces to other management areas, whose integration is left as future work.
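The management loop at the core of such an architecture can be illustrated by one monitor-compare-adjust step (a minimal sketch under assumed semantics, not the thesis's actual loop; rates and the adjustment step are invented):

```python
def qos_control_step(measured_mbps, guaranteed_mbps, current_limit_mbps, step=10):
    """One iteration of a QoS management loop for a virtual link:
    compare the measured rate against the guarantee and adjust the
    rate limit enforced on competing best-effort traffic."""
    if measured_mbps < guaranteed_mbps:
        # Guarantee violated: tighten the limit on competing traffic.
        return max(current_limit_mbps - step, 0)
    # Guarantee met: cautiously give bandwidth back.
    return current_limit_mbps + step

limit = 100
limit = qos_control_step(measured_mbps=80, guaranteed_mbps=90,
                         current_limit_mbps=limit)   # tighten: 100 -> 90
limit = qos_control_step(measured_mbps=95, guaranteed_mbps=90,
                         current_limit_mbps=limit)   # relax:   90 -> 100
```

Repeating this step per virtual link is what turns a passive guarantee into an enforced one; the real architecture additionally orders such sub-tasks across adaptors.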
4

Barros, Bruno Medeiros de. "Security architecture for network virtualization in cloud computing". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-18012017-094453/.

Full text
Abstract
Network virtualization has been a quite active research area in recent years, aiming to tackle the increasing demand for high-performance, secure communication in cloud infrastructures. In particular, such research efforts have led to security solutions focused on improving isolation among the multiple tenants of public clouds, an issue recognized as critical both by the academic community and by the technology industry. More recently, the advent of Software-Defined Networking (SDN) and of Network Function Virtualization (NFV) introduced new concepts and techniques for addressing issues related to the isolation of network resources in multi-tenant clouds while improving network manageability and flexibility. Similarly, hardware technologies such as Single Root I/O Virtualization (SR-IOV) enable network isolation at the hardware level while improving performance in physical and virtual networks. Aiming to provide a cloud network environment that efficiently tackles multi-tenant isolation, we present three complementary strategies for addressing the isolation of resources in cloud networks. These strategies are then applied in the evaluation of existing network virtualization architectures, exposing the security gaps associated with current technologies and paving the path for novel solutions. We then propose a security architecture that builds upon the strategies presented, as well as upon SDN, NFV and SR-IOV technologies, to implement secure cloud network domains. Theoretical and experimental analyses of the resulting architecture show a considerable reduction of the attack surface in tenant networks, with a small impact on tenants' intra-domain and inter-domain communication performance.
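One elementary check behind multi-tenant isolation evaluation can be sketched like this (an illustrative audit over made-up tenants and segment IDs, not one of the thesis's three strategies):

```python
def isolation_violations(tenant_segments):
    """Given a mapping tenant -> set of network segment IDs (e.g. VLAN
    or VXLAN identifiers), report segment IDs shared by more than one
    tenant; a shared segment breaks layer-2 isolation between tenants."""
    owners = {}
    for tenant, segments in tenant_segments.items():
        for seg in segments:
            owners.setdefault(seg, []).append(tenant)
    return {seg: ts for seg, ts in owners.items() if len(ts) > 1}

# Hypothetical allocation: segment 101 is accidentally reused.
violations = isolation_violations({
    "tenant_a": {100, 101},
    "tenant_b": {200, 101},
    "tenant_c": {300},
})
```

In a real architecture this kind of invariant would be enforced across every layer (virtual switch, SR-IOV virtual function, NFV chain), which is exactly where the exposed security gaps tend to hide.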
5

Montanari, Luca. "A Network Function Virtualization Architecture for Distributed IoT Gateways". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13345/.

Full text
Abstract
Virtualization allows different applications to share the same IoT device. However, in heterogeneous environments, networks of virtualized IoT devices raise new challenges, such as the need to provision gateways on the fly in a dynamic, elastic and scalable manner. NFV is a paradigm designed to address these challenges: it leverages standard virtualization technologies to consolidate specific network elements onto commodity hardware. This thesis presents an NFV architecture for distributed IoT gateways, in which software instances of gateway modules are hosted on a distributed NFV infrastructure operated and managed by an IoT gateway provider. We consider several IoT providers, each with its own brands (or combinations of brands) of sensors and actuators/robots, and we assume the providers' environments are geographically distributed for efficient coverage of large regions. The sensors and actuators can be used by a variety of applications, each of which may have different requirements for interfaces and QoS (latency, throughput, power consumption, etc.). The NFV infrastructure enables elastic, dynamic and scalable deployment of gateway modules in this heterogeneous, distributed environment. Moreover, the proposed architecture can reuse modules that have already been deployed; this is achieved through Service Function Chaining and dynamic runtime orchestration. Finally, we present a prototype based on the OpenStack platform.
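The module-reuse idea in this architecture can be sketched as a tiny chaining step (illustrative only; module names are invented and the real system orchestrates this through Service Function Chaining on an NFV infrastructure):

```python
def build_chain(required, deployed):
    """Assemble a service function chain for a gateway request:
    reuse module instances that are already deployed and deploy
    only the missing ones."""
    reused, new = [], []
    for module in required:
        (reused if module in deployed else new).append(module)
    deployed.update(new)          # the new instances now exist for later chains
    return reused, new

deployed = {"mqtt_adapter", "protocol_translator"}
reused, new = build_chain(
    ["mqtt_adapter", "aggregator", "protocol_translator"], deployed)
```

Only the aggregator has to be deployed; the two existing modules are stitched into the new chain, which is what makes gateway provisioning elastic rather than per-application.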
6

Oprescu, Mihaela Iuniana. "Virtualization and distribution of the BGP control plane". Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2012. http://tel.archives-ouvertes.fr/tel-00785007.

Full text
Abstract
The Internet is organized as a multitude of networks called Autonomous Systems (AS). The Border Gateway Protocol (BGP) is the common language that allows these administrative domains to interconnect. Thanks to BGP, two users located anywhere in the world can communicate, because this protocol is responsible for propagating routing messages between all neighboring networks. To meet new requirements, BGP has had to improve and evolve through frequent extensions and new architectures. In the original version, each router had to maintain a session with every other router in the network. This constraint raised scalability problems, since a full mesh of internal BGP (iBGP) sessions became difficult to achieve in large networks. To cover this connectivity need, network operators resort to route reflection (RR) and confederations. But while they solve a scalability problem, these two solutions raise new challenges, since they come with multiple drawbacks: loss of diversity among the candidate routes in the BGP selection process, and anomalies such as routing oscillations, deflections and forwarding loops. The work in this thesis focuses on oBGP, a new architecture for redistributing external routes inside an AS. In place of the classic iBGP sessions, an overlay network is responsible for (i) exchanging routing information with other ASes, (ii) storing internal and external routes in a distributed fashion, (iii) applying the AS-level routing policy, and (iv) computing and redistributing the best routes towards Internet destinations to all routers.
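Point (iv) above, computing best routes on behalf of all routers, rests on the BGP decision process. A deliberately simplified version (only three of the real tie-break steps, with invented routes) looks like this:

```python
def best_route(routes):
    """Simplified BGP decision process over candidate routes to one
    prefix: prefer highest LOCAL_PREF, then shortest AS path, then
    lowest originating router ID as the final tie-break."""
    return min(routes,
               key=lambda r: (-r["local_pref"], len(r["as_path"]), r["router_id"]))

routes = [
    {"local_pref": 100, "as_path": [64500, 64501],        "router_id": "10.0.0.2"},
    {"local_pref": 200, "as_path": [64502, 64503, 64504], "router_id": "10.0.0.1"},
    {"local_pref": 200, "as_path": [64505, 64506, 64507], "router_id": "10.0.0.3"},
]
best = best_route(routes)
```

Route reflectors run this selection before forwarding a single best route, which is precisely how candidate-route diversity gets lost; oBGP's distributed storage of all routes is meant to avoid that.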
7

Athreya, Manoj B. "Subverting Linux on-the-fly using hardware virtualization technology". Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34844.

Full text
Abstract
In this thesis, we address the problem modern operating systems face due to attackers' exploitation of hardware-assisted full-virtualization technology. Virtualization technology has been of growing importance in recent years: with its help, multiple operating systems can run on a single piece of hardware, with little or no modification to the operating systems themselves. Both Intel and AMD have contributed to x86 full virtualization through their respective instruction set architectures, and hardware virtualization extensions can be found in almost all current x86 processors. These extensions have opened a whole new frontier for a new kind of attack: a system hacker can abuse hardware virtualization technology to gain control over an operating system on the fly (i.e., without a system restart) by installing a thin Virtual Machine Monitor (VMM) below the native operating system. Such VMM-based malware is termed a Hardware-Assisted Virtual Machine (HVM) rootkit. We discuss the technique used by a rootkit named Blue Pill to subvert the Windows Vista operating system by exploiting the AMD-V (codenamed "Pacifica") virtualization extensions. HVM rootkits do not hook any operating system code or data regions; hence detecting such malware using conventional techniques becomes extremely difficult. This thesis discusses existing methods to detect such rootkits and their inefficiencies. In this work, we implement a proof-of-concept HVM rootkit using Intel VT hardware virtualization technology and also discuss how such an attack can be defended against using an autonomic architecture called SHARK, proposed by Vikas et al. in MICRO 2008.
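One family of detection methods the thesis surveys is timing-based: instructions such as CPUID force a VM exit when a hypervisor is underneath, inflating their cost. The decision logic can be sketched over synthetic cycle counts (the samples and the 3x factor are invented; real detectors must contend with noise and with rootkits that fake timers):

```python
from statistics import median

def looks_virtualized(cpuid_cycles, baseline_cycles, factor=3.0):
    """Timing heuristic against HVM rootkits: flag the host if the
    median CPUID cost is far above a baseline recorded on known-clean
    hardware. All sample values here are synthetic."""
    return median(cpuid_cycles) > factor * median(baseline_cycles)

clean   = [180, 200, 190, 210]        # plausible bare-metal CPUID cost
suspect = [1900, 2100, 2050, 1980]    # VM-exit round trip dominates
flag = looks_virtualized(suspect, clean)
```

The inefficiency the thesis points out is visible even in this toy: the heuristic needs a trusted baseline and a trustworthy clock, both of which a sufficiently thorough VMM can subvert.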
8

McBride, Daniel C. "Mapping, awareness, and virtualization network administrator training tool (MAVNATT) architecture and framework". Thesis, Monterey, California: Naval Postgraduate School, 2015. http://hdl.handle.net/10945/45900.

Full text
Abstract
Approved for public release; distribution is unlimited
Tactical networks are becoming more critical in maintaining centers of gravity for military operations as cyberspace becomes contested at all levels of war. As a result, the growth of network centric operations and increased operational tempo in the cyber domain has created a significant training gap for tactical network administrators. This research suggests that a computer-based environment can integrate the operational network and a training network into the same system to allow tactical network administrators to concurrently administer the network and conduct realistic training on an identical virtual network. A review of commercial and open-source tools identifies the baseline for an architecture and framework for this system. The architecture consists of a modular design comprised of mapping, awareness, and virtualization modules. The framework integrates these modules by defining a network topology format, programming language, graphical user interface solution, and virtualization solution. This research concludes by providing an implementation that demonstrates desired capabilities. While we demonstrate that the project goals are attainable, there is a need for further research and development to deploy this capability to fleet units.
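The framework's idea of deriving an identical virtual training network from the operational topology can be sketched as follows (the JSON field names are illustrative inventions, not the actual MAVNATT topology format):

```python
import json

def clone_for_training(topology_json):
    """Derive a virtual training topology from an operational one:
    keep nodes and links, but prefix node IDs and mark the copy as a
    sandbox so the trainee cannot confuse it with production."""
    topo = json.loads(topology_json)
    return {
        "name": topo["name"] + "-training",
        "sandbox": True,
        "nodes": [{"id": "v-" + n["id"], "role": n["role"]} for n in topo["nodes"]],
        "links": [["v-" + a, "v-" + b] for a, b in topo["links"]],
    }

operational = json.dumps({
    "name": "tactical-net",
    "nodes": [{"id": "r1", "role": "router"}, {"id": "h1", "role": "host"}],
    "links": [["r1", "h1"]],
})
training = clone_for_training(operational)
```

Keeping the mapping (topology capture), awareness (sandbox flag) and virtualization (the clone itself) in one pipeline mirrors the modular design the abstract describes.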
9

Korikawa, Tomohiro. "Parallel Memory System Architectures for Packet Processing in Network Virtualization". Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263787.

Full text
10

Wagner, Ralf [Verfasser] y Bernhard [Akademischer Betreuer] Mitschang. "Integration management : a virtualization architecture for adapter technologies / Ralf Wagner. Betreuer: Bernhard Mitschang". Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2015. http://d-nb.info/106910650X/34.

Full text
11

Metzker, Martin [Verfasser] y Dieter [Akademischer Betreuer] Kranzlmüller. "A network QoS management architecture for virtualization environments / Martin Metzker. Betreuer: Dieter Kranzlmüller". München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2014. http://d-nb.info/1065610645/34.

Full text
12

Cooper, Andrew. "Towards a trusted grid architecture". Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:42268964-c1db-4599-9dbc-a1ceb1015ef1.

Full text
Abstract
The malicious host problem is challenging in distributed systems such as grids and clouds. Rival organisations may share the same physical infrastructure. Administrators might deliberately or accidentally compromise users' data. The thesis concerns the development of a security architecture that allows users to place a high degree of trust in remote systems to process their data securely. The problem is tackled through a new security layer that ensures users' data can only be accessed within a trusted execution environment. Access to encrypted programs and data is authorised by a key management service using trusted computing attestation. Strong data integrity and confidentiality protection on remote hosts is provided by the job security manager virtual machine. The trusted grid architecture supports the enforcement of digital rights management controls. Subgrids allow users to define a strong trusted boundary for delegated grid jobs. Recipient keys enforce a trusted return path for job results to help users create secure grid workflows. Mandatory access controls allow stakeholders to mandate the software that is available to grid users. A key goal of the new architecture is backwards compatibility with existing grid infrastructure and data. This is achieved using a novel virtualisation architecture where the security layer is pushed down to the remote host, so it does not need to be pre-installed by the service provider. A new attestation scheme, called origin attestation, supports the execution of unmodified, legacy grid jobs. These features will ease the transition to a trusted grid and help make it practical for deployment on a global scale.
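The attestation-gated key release at the heart of this architecture can be sketched in a few lines (a minimal model, assuming SHA-256 measurements and an in-memory key store; the real scheme involves trusted-computing hardware and signed attestation quotes):

```python
import hashlib

def release_key(measurement, trusted_measurements, key_store, job_id):
    """Sketch of a key management service: hand out the decryption key
    for a job only if the remote host's reported measurement (a hash of
    its software stack) is on the trusted list."""
    if measurement in trusted_measurements:
        return key_store[job_id]
    return None

trusted = {hashlib.sha256(b"trusted-execution-environment-v1").hexdigest()}
keys = {"job-42": b"secret-key-material"}

good = hashlib.sha256(b"trusted-execution-environment-v1").hexdigest()
bad  = hashlib.sha256(b"tampered-stack").hexdigest()
k1 = release_key(good, trusted, keys, "job-42")
k2 = release_key(bad, trusted, keys, "job-42")
```

Because the key never reaches an untrusted stack, even a malicious administrator of the remote host cannot decrypt the job's data, which is the thesis's answer to the malicious host problem.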
13

Jensen, Deron Eugene. "System-wide Performance Analysis for Virtualization". PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1789.

Full text
Abstract
With the current trend in cloud computing and virtualization, more organizations are moving their systems from a physical host to a virtual server. Although this can significantly reduce hardware, power, and administration costs, it can increase the cost of analyzing performance problems. With virtualization, there is an initial performance overhead, and as more virtual machines are added to a physical host the interference increases between various guest machines. When this interference occurs, a virtualized guest application may not perform as expected. There is little or no information to the virtual OS about the interference, and the current performance tools in the guest are unable to show this interference. We examine the interference that has been shown in previous research, and relate that to existing tools and research in root cause analysis. We show that in virtualization there are additional layers which need to be analyzed, and design a framework to determine if degradation is occurring from an external virtualization layer. Additionally, we build a virtualization test suite with Xen and PostgreSQL and run multiple tests to create I/O interference. We show that our method can distinguish between a problem caused by interference from external systems and a problem from within the virtual guest.
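The core discrimination step, deciding whether degradation comes from an external virtualization layer rather than the guest's own workload, can be sketched statistically (illustrative only; the thesis builds a full framework around Xen and PostgreSQL, and all IOPS numbers here are invented):

```python
from statistics import mean, stdev

def external_interference(baseline_iops, observed_iops, z=3.0):
    """Flag degradation that guest-internal tools cannot explain:
    if observed I/O throughput falls more than z standard deviations
    below the solo-run baseline, attribute it to an external layer
    (e.g. a co-located guest) rather than the workload itself."""
    threshold = mean(baseline_iops) - z * stdev(baseline_iops)
    return mean(observed_iops) < threshold

solo      = [5000, 5100, 4900, 5050, 4950]   # guest running alone
contended = [3200, 3100, 3300]               # same workload, noisy neighbour
flag = external_interference(solo, contended)
```

The baseline must be captured when the guest is known to run alone; without that reference, the guest OS simply has no signal about the layers below it, which is the gap the thesis addresses.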
14

Furgiuele, Antonio. "Architecture of the cloud, virtualization takes command : learning from black boxes, data centers and an architecture of the conditioned environment". Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/81746.

Full text
Abstract
Thesis (S.M. in History, Theory and Criticism of Art and Architecture)--Massachusetts Institute of Technology, Dept. of Architecture, 2013.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 127-128).
A single manageable architecture of the Cloud has been one of the most important social and technical changes of the 21st century. Cloud computing, our newest public utility, is an attempt to confront and control cultural risk; it has rendered the environment of our exchanges calculable, manageable, seemingly predictable and, most importantly, a new form of capital. Cloud computing, in its most basic terms, is the virtualization of data storage and program access into an instantaneous service utility. The transformation of computing into a service industry is one of the key changes of the Information Age, and its logic is tied to the highly guarded mechanisms of a black box, an architecture machine, more commonly known as the data center. In 2008, on a day without the usual fanfare, barrage of academic manifestoes, or grand claims of paradigm shifts, virtualization quietly took command. In a seemingly simple moment, a cloud, the Cloud, emerged as a new form of managerial space that tied a large system of users to the hidden mechanisms of large-scale factories of information, a network of data centers. The project positions the Cloud and the data center within architectural discourse, both historically and materially, through an analysis of their relationship to an emergent digital sublime and of how that sublime is managed, controlled and propelled through the obscure typologies of its architecture and images. By studying the Cloud and the data center through the notion of the sublime and the organizational structures of typology, we can more critically assess architecture's relationship to this new phase of the Information Age.
by Antonio Furgiuele.
S.M.in History, Theory and Criticism of Art and Architecture
15

Zahedi, Saed. "Virtualization Security Threat Forensic and Environment Safeguarding". Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-32144.

Full text
Abstract
The advent of virtualization technologies has transformed IT infrastructure, and organizations are migrating to virtual platforms. Virtualization is also the foundation for cloud platform services. Virtualization is known to provide more security for the infrastructure, apart from agility and flexibility; however, the security aspects of virtualization are often overlooked. Various attacks on the virtualization hypervisor and its administration component are attractive to adversaries, so the threats to virtualization must be rigorously scrutinized to identify common breaches and to understand what attackers find most attractive. This thesis provides a current view of perimeter and operational threats, along with a taxonomy of virtualization security threats. Common attacks are investigated on the basis of a vulnerability database, and the distribution of virtualization software vulnerabilities, mapped to the taxonomy, is visualized. Well-known industry best practices and standards are introduced, and the key features of each are presented for safeguarding virtualization environments. A discussion of other possible approaches to investigating the severity of threats, based on automated systems, is also presented.
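The aggregation behind such a vulnerability-to-taxonomy distribution is straightforward to sketch (categories, components and CVE identifiers below are made up for illustration; they are not the thesis's taxonomy):

```python
from collections import Counter

def threat_distribution(vulnerabilities, taxonomy):
    """Map vulnerability records to taxonomy classes and count them,
    the kind of aggregation behind a threat-distribution chart."""
    counts = Counter()
    for vuln in vulnerabilities:
        counts[taxonomy.get(vuln["component"], "unclassified")] += 1
    return counts

taxonomy = {
    "hypervisor": "VM escape",
    "management_console": "administration interface",
    "virtual_switch": "network isolation",
}
vulns = [
    {"id": "CVE-A", "component": "hypervisor"},
    {"id": "CVE-B", "component": "hypervisor"},
    {"id": "CVE-C", "component": "management_console"},
    {"id": "CVE-D", "component": "guest_tools"},
]
dist = threat_distribution(vulns, taxonomy)
```

The "unclassified" bucket matters in practice: records that fit no taxonomy class are often where a taxonomy needs revising.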
16

Polito, Guillermo. "Virtualization support for application runtime specialization and extension". Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10025/document.

Full text
Abstract
An application runtime is the set of software elements that represent an application during its execution. Application runtimes should be adaptable to different contexts; advances in computing technology, both in hardware and software, demand it. For example, on one side we can think of extending a programming language to enhance developers' productivity. On the other side, we can think of transparently reducing the memory footprint of applications to make them fit in resource-constrained scenarios, e.g., slow networks or limited memory availability. We propose Espell, a virtualization infrastructure for object-oriented high-level language runtimes. Espell provides a general-purpose infrastructure to control and manipulate object-oriented runtimes in different situations. A first-class representation of an object-oriented runtime, namely an "object space", provides a high-level API that allows the manipulation of such a runtime and clarifies the contract between the language and the virtual machine. A hypervisor is the client of an object space and manipulates it either directly through mirror objects or by executing arbitrary expressions in it. We implemented an Espell prototype on Pharo. We show with this prototype that this infrastructure supports language "bootstrapping" and application runtime "tailoring". Using bootstrapping we describe an object-oriented high-level language initialization in terms of itself. A bootstrapped language benefits from its own abstractions and proves easier to extend. We bootstrapped four languages presenting different programming models, e.g., traits, first-class instance variables and mirror-based reflection. Application runtime tailoring is a technique that generates a specialized application by extracting the elements of a program that are used during execution. A tailored application encompasses only the classes and methods it needs and avoids the code bloat that comes from the use of third-party libraries and frameworks. Our run-fail-grow tailoring technique based on Espell succeeds in creating specialized versions of applications, saving between 95% and 99% of memory in comparison with Pharo's official distribution.
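The run-fail-grow idea can be illustrated with a toy sketch (this is not Espell's code; the environment and program below are invented): the program runs against an initially empty environment, each failure reveals one missing element, and only that element is copied from the full environment before retrying, so unused code is never included.

```python
# Hypothetical sketch of "run-fail-grow" tailoring. The "full environment"
# stands in for the complete Pharo image; the tailored environment starts empty.
FULL_ENV = {
    "greet": lambda name: "hello " + name,
    "shout": lambda s: s.upper(),
    "unused_helper": lambda: None,  # never executed, so never copied
}

def tailor(program, full_env):
    tailored = {}                  # the specialized environment, initially empty
    while True:
        try:
            return program(tailored), tailored                     # "run"
        except KeyError as missing:                                # "fail"
            name = missing.args[0]
            tailored[name] = full_env[name]                        # "grow"

result, env = tailor(lambda e: e["shout"](e["greet"]("espell")), FULL_ENV)
# env now contains only the two elements the program actually used
```

The memory savings reported above come from exactly this effect at image scale: everything never reached during execution stays out of the tailored application.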
Los estilos APA, Harvard, Vancouver, ISO, etc.
17

Duan, Kewei. "Resource-oriented architecture based scientific workflow modelling". Thesis, University of Bath, 2016. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.698986.

Texto completo
Resumen
This thesis studies the feasibility and methodology of applying state-of-the-art computer technology to scientific workflow modelling within a collaborative environment. Here, a collaborative environment implies that the people involved include non-computer scientists and engineers from other disciplines. The objective of this research is to provide a systematic, web-based methodology for lowering the barriers created by the heterogeneous features of multiple institutions, multiple platforms and geographically distributed resources, all implied by the collaborative environment of scientific workflow.
Los estilos APA, Harvard, Vancouver, ISO, etc.
18

Khalifa, Ahmed Abdelmonem Abuelfotooh Ali. "Collaborative Computing Cloud: Architecture and Management Platform". Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/72866.

Texto completo
Resumen
We are witnessing exponential growth in the number of powerful, multiply-connected, energy-rich stationary and mobile nodes, which will make available a massive pool of computing and communication resources. We claim that cloud computing can provide resilient on-demand computing, and more effective and efficient utilization of a potentially infinite array of resources. Current cloud computing systems are primarily built using stationary resources. Recently, principles of cloud computing have been extended to the mobile computing domain, aiming to form local clouds using mobile devices sharing their computing resources to run cloud-based services. However, current cloud computing systems by and large fail to provide true on-demand computing due to their lack of the following capabilities: 1) providing resilience and autonomous adaptation to the real-time variation of the underlying dynamic and scattered resources as they join or leave the formed cloud; 2) decoupling cloud management from resource management, and hiding the heterogeneous resource capabilities of participant nodes; and 3) ensuring reputable resource providers and preserving the privacy and security constraints of these providers while allowing multiple users to share their resources. Consequently, systems and consumers are hindered from effectively and efficiently utilizing the virtually infinite pool of computing resources. We propose a platform for mobile cloud computing that integrates: 1) a dynamic real-time resource scheduling, tracking, and forecasting mechanism; 2) an autonomous resource management system; and 3) a cloud management capability for cloud services that hides the heterogeneity, dynamicity, and geographical diversity concerns from the cloud operation. We hypothesize that this would enable a 'Collaborative Computing Cloud (C3)' for on-demand computing, which is a dynamically formed cloud of stationary and/or mobile resources that provides ubiquitous computing on demand.
The C3 would support a new resource-infinite computing paradigm to expand problem solving beyond the confines of walled-in resources and services by utilizing the massive pool of computing resources in both stationary and mobile nodes. In this dissertation, we present a C3 management platform, named PlanetCloud, for enabling both a new resource-infinite computing paradigm using cloud computing over stationary and mobile nodes, and true ubiquitous on-demand cloud computing. This has the potential to liberate cloud users from concerns about resource constraints and to provide access to the cloud anytime and anywhere. PlanetCloud synergistically manages 1) resources, including resource harvesting, forecasting and selection, and 2) cloud services, providing resilient cloud services through resource provider collaboration, isolation of application execution from resource layer concerns, seamless load migration, fault tolerance, and task deployment, migration and revocation. Specifically, our main contributions in the context of PlanetCloud are as follows.
1. PlanetCloud Resource Management.
• Global Resource Positioning System (GRPS): global mobile and stationary resource discovery and monitoring. A novel distributed spatiotemporal resource calendaring mechanism with real-time synchronization is proposed to mitigate the effect of failures occurring due to unstable connectivity and availability in the dynamic mobile environment, as well as the poor utilization of resources. This mechanism provides dynamic real-time scheduling and tracking of idle mobile and stationary resources, enhancing resource discovery and status tracking to provide access to right-sized cloud resources anytime and anywhere.
• Collaborative Autonomic Resource Management System (CARMS): efficient use of idle mobile resources. Our platform allows the sharing of resources among stationary and mobile devices, enabling cloud computing systems to offer much higher utilization and hence higher efficiency. CARMS provides system-managed cloud services such as configuration, adaptation and resilience through collaborative autonomic management of dynamic cloud resources and membership. This helps eliminate the limited self- and situation-awareness and collaboration of idle mobile resources.
2. PlanetCloud Cloud Management: an architecture for resilient cloud operation on dynamic mobile resources, providing a stable cloud in a continuously changing operational environment. This is achieved through a trustworthy fine-grained virtualization and task management layer, which isolates the running application from the underlying physical resources, enabling seamless execution over heterogeneous stationary and mobile resources and preventing service disruption due to variable resource availability. The virtualization and task management layer comprises a set of distributed powerful nodes that collaborate autonomously with resource providers to manage the virtualized application partitions.
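The GRPS resource-calendaring idea can be hinted at with a minimal sketch (the interval model, node names and API are assumptions, not PlanetCloud's actual interface): each node advertises idle time intervals, and the scheduler selects the nodes whose idle window covers a requested task window.

```python
# Illustrative spatiotemporal resource calendar, loosely in the spirit of GRPS.
class ResourceCalendar:
    def __init__(self):
        self.entries = {}  # node -> list of (start, end) idle intervals

    def advertise(self, node, start, end):
        # a node announces an interval during which its resources are idle
        self.entries.setdefault(node, []).append((start, end))

    def available(self, start, end):
        # nodes with at least one idle interval covering the whole [start, end]
        return sorted(
            node for node, slots in self.entries.items()
            if any(s <= start and end <= e for s, e in slots)
        )

cal = ResourceCalendar()
cal.advertise("mobile-a", 0, 50)
cal.advertise("mobile-b", 20, 30)   # too short for the window below
cal.advertise("rack-1", 0, 100)
candidates = cal.available(10, 40)  # only nodes idle over the whole window
```

A real implementation would add the synchronization and forecasting the abstract describes; this only shows the calendar lookup at the core of the idea.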
Ph. D.
Los estilos APA, Harvard, Vancouver, ISO, etc.
19

Di, Santi Silvio. "5G Network Architecture". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20432/.

Texto completo
Resumen
In this work the 5G core network architecture has been explored: starting from the enabling technologies that support the "revolution" and looking in depth at the current 4G LTE network architecture, we tried to find a solution to bridge the gap between these two totally different architectures. Once a solution was found, we used a simulation platform provided by the ONF (Open Networking Foundation) to demonstrate the feasibility of the approach.
Los estilos APA, Harvard, Vancouver, ISO, etc.
20

Foukas, Xenofon. "Towards a programmable and virtualized mobile radio access network architecture". Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31406.

Texto completo
Resumen
Emerging 5G mobile networks are envisioned to become multi-service environments, enabling the dynamic deployment of services with a diverse set of performance requirements, accommodating the needs of mobile network operators, verticals and over-the-top service providers. The Radio Access Network (RAN) part of mobile networks is expected to play a very significant role towards this evolution. Unfortunately, such a vision cannot be efficiently supported by the conventional RAN architecture, which adopts a fixed and rigid design. For the network to evolve, flexibility in the creation, management and control of the RAN components is of paramount importance. The key elements that can allow us to attain this flexibility are the programmability and the virtualization of the network functions. While in the case of the mobile core, these issues have been extensively studied due to the advent of technologies like Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) and the similarities that the core shares with other wired networks like data centers, research in the domain of the RAN is still in its infancy. The contributions made in this thesis significantly advance the state of the art in the domain of RAN programmability and virtualization in three dimensions. First, we design and implement a software-defined RAN (SD-RAN) platform called FlexRAN, that provides a flexible control plane designed with support for real-time RAN control applications, flexibility to realize various degrees of coordination among RAN infrastructure entities, and programmability to adapt control over time and easier evolution to the future following SDN/NFV principles. 
Second, we leverage the capabilities of the FlexRAN platform to design and implement Orion, a novel RAN slicing system that enables the dynamic on-the-fly virtualization of base stations and the flexible customization of slices to meet their respective service needs, and which can be used in an end-to-end network slicing setting. Third, we focus on the use case of multi-tenancy in a neutral-host indoor small-cell environment, where we design Iris, a system that builds on the capabilities of FlexRAN and Orion and introduces a dynamic pricing mechanism for the efficient and flexible allocation of shared spectrum to tenants. A number of additional use cases that highlight the benefits of the developed systems are also presented. The lessons learned through this research are summarized, and interesting topics for future work in this domain are discussed. The prototype systems presented in this thesis have been made publicly available and are being used by various research groups worldwide in the context of 5G research.
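The slice customization that a system like Orion enables can be caricatured with a short sketch (slice names, weights and the resource-block model are illustrative, not Orion's interface): a virtualized base station's radio resource blocks are partitioned among slices according to their declared shares.

```python
# Toy partitioning of a base station's resource blocks (RBs) among slices.
def partition_resource_blocks(total_rbs, slice_shares):
    """slice_shares: dict slice_name -> weight; returns RBs per slice."""
    total_weight = sum(slice_shares.values())
    alloc = {s: (w * total_rbs) // total_weight for s, w in slice_shares.items()}
    # hand out any remainder blocks deterministically, largest share first
    leftover = total_rbs - sum(alloc.values())
    for s in sorted(slice_shares, key=slice_shares.get, reverse=True):
        if leftover == 0:
            break
        alloc[s] += 1
        leftover -= 1
    return alloc

alloc = partition_resource_blocks(100, {"embb": 3, "urllc": 1, "iot": 1})
```

Dynamic slicing then amounts to recomputing such an allocation as slices are created, resized or torn down at runtime.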
Los estilos APA, Harvard, Vancouver, ISO, etc.
21

Raad, Patrick. "Protocol architecture and algorithms for distributed data center networks". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066571/document.

Texto completo
Resumen
Today, cloud data and applications are growing rapidly, pushing providers to seek solutions that guarantee their users a stable and resilient network link. In this thesis we study network protocols and communication strategies in a distributed data center environment. We propose a user-centric distributed cloud architecture that aims to: (i) migrate virtual machines between data centers with low downtime; (ii) provide resilient access to virtual machines; (iii) minimize cloud access delay. We identified two decision problems: the virtual machine orchestration problem, which accounts for user mobility, and the locator switching and configuration problem, which accounts for the state of inter- and intra-data center links. We evaluate our architecture using a test bed of geographically distributed data centers and by simulating scenarios based on real mobility traces. We show that, with a few modifications to overlay protocols, very low downtime can be achieved when migrating virtual machines between two data centers. We then show that linking virtual machine mobility to users' geographical movements can increase connection throughput. Moreover, when the objective is to maximize throughput between the user and their resource, we show by simulation that the virtual machine placement decision matters more than the decision of switching the data center entry point. Finally, using a multipath transport protocol, we show how to optimize the performance of our architecture and how intra-data center routing solutions can drive locator switching.
While many business and personal applications are being pushed to the cloud, offering reliable and stable network connectivity to cloud-hosted services becomes an important challenge for future networks. In this dissertation, we design advanced network protocols, algorithms and communication strategies to cope with this evolution in distributed data center architectures. We propose a user-centric distributed cloud network architecture that is able to: (i) migrate virtual resources between data centers with optimized service downtime; (ii) offer resilient access to virtual resources; (iii) minimize cloud access latency. We identify two main decision-making problems: the virtual machine orchestration problem, which also takes care of user mobility, and the routing locator switching configuration problem, which takes care of both inter- and intra-data center link states. We evaluate our architecture using real test beds of geographically distributed data centers, and we also simulate realistic scenarios based on real mobility traces. We show that migrating virtual machines between data centers with negligible downtime is possible by enhancing overlay protocols. We then demonstrate that by linking cloud virtual resource mobility to user mobility we can obtain a considerable gain in transfer rates. We prove by simulations using real traces that the virtual machine placement decision is more important than the routing locator switching decision when the goal is to increase connection throughput: cloud access performance is primarily affected by the former decision, while the latter can be left to intra-data center traffic engineering solutions. Finally, we propose solutions to benefit from multipath transport protocols to accelerate cloud access performance in our architecture, and to let link-state intra-data center routing fabrics pilot the cloud access routing locator switching.
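The two decisions identified above can be sketched in a few lines (all names and numbers are invented, and the ordering simply reflects the dissertation's finding that placement dominates): place the VM in the data center closest to the user first, then pick the entry locator.

```python
# Toy model of the two decision problems: VM placement, then locator switching.
def place_vm(user_latency_ms):
    # VM placement: the dominant decision; pick the DC with lowest user latency
    return min(user_latency_ms, key=user_latency_ms.get)

def pick_locator(dc, locator_load):
    # locator switching: secondary decision; least-loaded entry point of the DC
    entries = locator_load[dc]
    return min(entries, key=entries.get)

latency = {"dc-paris": 12, "dc-frankfurt": 25, "dc-london": 18}  # user -> DC
load = {"dc-paris": {"loc-1": 0.7, "loc-2": 0.3}}                # entry points
dc = place_vm(latency)
loc = pick_locator(dc, load)
```

As the user moves and the latency map changes, re-running the placement step models the mobility-driven VM migration discussed in the abstract.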
Los estilos APA, Harvard, Vancouver, ISO, etc.
22

Dévigne, Clément. "Exécution sécurisée de plusieurs machines virtuelles sur une plateforme Manycore". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066138/document.

Texto completo
Resumen
Manycore architectures, which comprise a large number of cores, are one way to address the continuous growth in the amount of digital data to be processed by infrastructures offering cloud computing services. These data, which may belong to companies as well as private individuals, are sensitive by nature, which is why the isolation problem is paramount. Since the beginning of cloud computing, virtualization techniques have been increasingly used to allow different users to physically share the same hardware resources. This is all the more true for manycore architectures, and it is therefore partly up to the architecture to guarantee the confidentiality and integrity of the data of the software running on the platform. In this thesis we propose a secure virtualization environment for a manycore architecture. Our mechanism relies on hardware components and a hypervisor to isolate several operating systems running on the same architecture. The hypervisor is in charge of allocating resources to the virtualized operating systems, but has no access rights to the resources allocated to these systems. Thus, a security flaw in the hypervisor does not endanger the confidentiality or integrity of the virtualized systems' data. Our solution is evaluated using a cycle-accurate virtual prototype and has been implemented in a cache-coherent shared-memory manycore architecture. Our evaluations cover the hardware overhead and the performance degradation induced by our mechanisms. Finally, we analyze the security provided by our solution.
Manycore architectures, which comprise many cores, are a way to answer the ever-growing demand for digital data processing, especially in the context of cloud computing infrastructures. These data, which can belong to companies as well as private individuals, are sensitive by nature, and this is why the isolation problem is paramount. Yet, since the beginning of cloud computing, virtualization techniques have been increasingly used to allow different users to physically share the same hardware resources. This is all the more true for manycore architectures, and it is therefore partly up to the architecture to guarantee that data integrity and confidentiality are preserved for the software it executes. We propose in this thesis a secured virtualization environment for a manycore architecture. Our mechanism relies on hardware components and hypervisor software to isolate several operating systems running on the same architecture. The hypervisor is in charge of allocating resources for the virtualized operating systems, but does not have the right to access the resources allocated to these systems. Thus, a security flaw in the hypervisor does not imperil the data confidentiality and integrity of the virtualized systems. Our solution is evaluated on a cycle-accurate virtual prototype and has been implemented in a coherent shared-memory manycore architecture. Our evaluations target the hardware and performance overheads added by our mechanisms. Finally, we analyze the security provided by our solution.
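The isolation property claimed above, a hypervisor that allocates resources it cannot itself access, can be modeled with a toy sketch (in the real design the check is enforced by hardware components; the page model and names here are invented for illustration).

```python
# Toy ownership model: pages are allocated by the hypervisor, but every access
# is checked against per-page ownership, and the hypervisor owns nothing.
class Machine:
    def __init__(self, pages):
        self.owner = {p: None for p in range(pages)}  # page -> owning OS
        self.data = {p: 0 for p in range(pages)}

    def allocate(self, page, os_id):                  # hypervisor operation
        if self.owner[page] is None:
            self.owner[page] = os_id

    def access(self, actor, page):                    # checked on every access
        if self.owner[page] != actor:
            raise PermissionError(f"{actor} cannot access page {page}")
        return self.data[page]

m = Machine(pages=4)
m.allocate(0, "os-a")
m.access("os-a", 0)   # allowed: os-a owns page 0
```

Even the allocator holds no access right, so compromising it reveals nothing about guest data, which is the point the thesis makes about hypervisor flaws.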
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Roberts, Erik S. "Virtualization of AEGIS: a study of the feasibility of applying open architecture to the surface navy's most complex automated weapon system". Thesis, Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/5473.

Texto completo
Resumen
Approved for public release; distribution is unlimited.
Rising costs of proprietary equipment in legacy electronic applications are increasingly drawing resources from vital programs. Growing interest in evaluating Open Architecture technology to replace closed systems is evidenced by the number of recent publications on the subject. Researchers have approached this topic from various angles, including lifecycle management, risk simulation, total cost of ownership, and knowledge-value added measures. This exploratory study uses open architecture hardware employing virtualization technology to test the feasibility of replacing legacy components of military systems. Virtualization has the potential to provide significant cost savings in terms of procurement, daily operation, and maintenance. Additionally, virtualization provides functional benefits such as load-balancing, greater processor utilization and storage flexibility, streamlined scalability, and simplified disaster recovery strategies. This thesis is original research in the form of a proof-of-concept study. It details performance results of a locally-constructed test platform, designed to simulate a portion of the U.S. Navy's AEGIS Weapon System. The scope of this work is to test the viability of using commodity-based hardware to achieve performance levels equal to, or greater than, current proprietary systems. Value-Added metrics are applied through cost comparisons between the test platform and typical AEGIS systems. While this study specifically targets AEGIS, the results can be generalized to non-military applications.
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

Nimgaonkar, Satyajeet. "Secure and Energy Efficient Execution Frameworks Using Virtualization and Light-weight Cryptographic Components". Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc699986/.

Texto completo
Resumen
Security is a primary concern in this era of pervasive computing. Hardware based security mechanisms facilitate the construction of trustworthy secure systems; however, existing hardware security approaches require modifications to the micro-architecture of the processor and such changes are extremely time consuming and expensive to test and implement. Additionally, they incorporate cryptographic security mechanisms that are computationally intensive and account for excessive energy consumption, which significantly degrades the performance of the system. In this dissertation, I explore the domain of hardware based security approaches with an objective to overcome the issues that impede their usability. I have proposed viable solutions to successfully test and implement hardware security mechanisms in real world computing systems. Moreover, with an emphasis on cryptographic memory integrity verification technique and embedded systems as the target application, I have presented energy efficient architectures that considerably reduce the energy consumption of the security mechanisms, thereby improving the performance of the system. The detailed simulation results show that the average energy savings are in the range of 36% to 99% during the memory integrity verification phase, whereas the total power savings of the entire embedded processor are approximately 57%.
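The memory integrity verification being optimized above can be sketched as a per-block MAC check (real designs use hardware engines and hash trees; the key, block layout and class below are assumptions for illustration only): a tag is computed when a block is written and rechecked on every load, so external tampering is detected.

```python
# Minimal sketch of cryptographic memory integrity verification.
import hashlib
import hmac

KEY = b"on-chip-secret"   # assumed to live inside the trusted processor

def mac(block_id, data):
    # bind the tag to both the block's address and its contents
    return hmac.new(KEY, block_id.to_bytes(8, "little") + data,
                    hashlib.sha256).digest()

class VerifiedMemory:
    def __init__(self):
        self.mem, self.tags = {}, {}

    def store(self, block_id, data):
        self.mem[block_id] = data
        self.tags[block_id] = mac(block_id, data)     # tag written with data

    def load(self, block_id):
        data = self.mem[block_id]                     # possibly tampered
        if not hmac.compare_digest(self.tags[block_id], mac(block_id, data)):
            raise RuntimeError("memory integrity violation")
        return data

m = VerifiedMemory()
m.store(0, b"secret state")
m.mem[0] = b"tampered bits"   # attacker rewrites external memory
```

The energy cost the dissertation attacks is exactly the per-access MAC recomputation in `load`, which is why reducing verification work yields the reported savings.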
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Wailly, Aurélien. "End-to-end security architecture for cloud computing environments". Thesis, Evry, Institut national des télécommunications, 2014. http://www.theses.fr/2014TELE0020/document.

Texto completo
Resumen
Infrastructure virtualization has become one of the major research challenges, offering lower energy consumption and new opportunities. Faced with multiple threats and heterogeneous defense mechanisms, the autonomic approach offers simplified, robust and more effective management of cloud security. Existing solutions adapt poorly: they lack flexible security policies, multi-layer defense, variable-granularity controls, and an open security architecture. This thesis presents VESPA, a self-protection architecture for cloud infrastructures. VESPA is built around policies that can regulate security at several layers. Flexible coordination between self-protection loops realizes a broad spectrum of security strategies, such as detection and reaction across multiple layers. An extensible multi-plane architecture allows existing components to be integrated easily. Recently, the most critical attacks against cloud infrastructures have targeted the most sensitive component: the hypervisor. The main attack vector is a poorly confined device driver, and the defense mechanisms involved are static and hard to manage. We propose a different approach with KungFuVisor, a software framework for building self-protecting hypervisors that specializes the VESPA architecture. We have shown its application to three different types of protection: viral attacks, heterogeneous multi-domain management, and the hypervisor. Cloud infrastructure security can thus be improved thanks to VESPA.
For several years, the virtualization of infrastructures has been one of the major research challenges, consuming less energy while delivering new services. However, many attacks hinder the global adoption of cloud computing. Self-protection has recently raised growing interest as a possible element of answer to the cloud computing infrastructure protection challenge. Yet, previous solutions fall at the last hurdle as they overlook key features of the cloud: flexible security policies, cross-layered defense, multiple control granularities, and open security architectures. This thesis presents VESPA, a self-protection architecture for cloud infrastructures. Flexible coordination between self-protection loops allows enforcing a rich spectrum of security strategies. A multi-plane extensible architecture also enables simple integration of commodity security components. Recently, some of the most powerful attacks against cloud computing infrastructures target the Virtual Machine Monitor (VMM). In many cases, the main attack vector is a poorly confined device driver, and current architectures offer no protection against such attacks. This thesis proposes an altogether different approach with KungFuVisor, a framework derived from VESPA for building self-defending hypervisors. The result is a very flexible self-protection architecture, enabling a rich spectrum of remediation actions to be enforced dynamically over different parts of the VMM, and facilitating defense strategy administration. We showed its application to three different protection schemes: virus infection, mobile clouds and hypervisor drivers. Indeed, VESPA can enhance cloud infrastructure security.
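A VESPA-style self-protection loop can be caricatured as detect, decide, react driven by a cross-layer policy (the layers, anomalies and actions below are invented for illustration; the real architecture coordinates multiple such loops across planes).

```python
# Toy cross-layer security policy: (layer, anomaly) -> remediation action.
POLICY = {
    ("vm", "virus-signature"): "quarantine-vm",
    ("hypervisor", "driver-fault"): "restart-driver",
}

def protection_loop(events):
    """One detect -> decide -> react pass over the collected anomaly events."""
    reactions = []
    for layer, anomaly in events:
        action = POLICY.get((layer, anomaly), "raise-alert")  # default reaction
        reactions.append((layer, action))
    return reactions

events = [("vm", "virus-signature"),
          ("hypervisor", "driver-fault"),
          ("network", "port-scan")]       # no policy entry -> default reaction
reactions = protection_loop(events)
```

The "flexible coordination" in the abstract corresponds to composing and re-weighting such policies across layers at runtime rather than hard-wiring one table.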
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Pham, Khoi Minh. "NEURAL NETWORK ON VIRTUALIZATION SYSTEM, AS A WAY TO MANAGE FAILURE EVENTS OCCURRENCE ON CLOUD COMPUTING". CSUSB ScholarWorks, 2018. https://scholarworks.lib.csusb.edu/etd/670.

Texto completo
Resumen
Cloud computing is one of the important directions of current advanced technology trends and dominates the industry in many aspects. These days cloud computing has become an intense battlefield for the big technology companies; whoever wins this war has a very high potential to rule the next generation of technologies. From a technical point of view, cloud computing is classified into three categories, each of which provides different crucial services to users: Infrastructure (Hardware) as a Service (IaaS), Software as a Service (SaaS), and Platform as a Service (PaaS). Normally, the standard measurement of cloud computing reliability is based on two approaches: Service Level Agreements (SLAs) and Quality of Service (QoS). This thesis focuses on the error event logs of IaaS cloud systems as an aspect of QoS in IaaS cloud reliability. Basically, IaaS is a derivation of the traditional virtualization system, where multiple virtual machines (VMs) with different operating system (OS) platforms run solely on one physical machine (PM) that has enough computational power. The PM plays the role of the host machine in cloud computing, and the VMs play the role of the guest machines. Due to the lack of full access to a complete real cloud system, this thesis investigates the technical reliability level of IaaS clouds through a simulated virtualization system. By collecting and analyzing the event logs generated by the virtualization system, we can get a general overview of the system's technical reliability level based on the number of error events occurring in the system. These events are then used in a neural network time-series model to detect the pattern of the system's failure events, as well as to predict the next error event that is going to occur in the virtualization system.
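The time-series setup described above can be sketched by its preprocessing step alone (the window size and counts are made up; the neural model itself is omitted): error-event counts per interval are turned into sliding windows that a model would map to the next interval's count.

```python
# Build (window, next-value) training pairs from an error-event count series,
# the shape a neural time-series model would be trained on.
def make_windows(error_counts, window):
    xs, ys = [], []
    for i in range(len(error_counts) - window):
        xs.append(error_counts[i:i + window])   # last `window` intervals
        ys.append(error_counts[i + window])     # count in the next interval
    return xs, ys

# errors per hour collected from simulated VM event logs (invented numbers)
counts = [0, 2, 1, 0, 3, 5, 2, 0]
xs, ys = make_windows(counts, window=3)
```

Each `xs[i]` is a model input and `ys[i]` its prediction target, so forecasting the next error event reduces to regression over these pairs.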
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Lastera, Maxime. "Architecture sécurisée pour les systèmes d'information des avions du futur". Phd thesis, INSA de Toulouse, 2012. http://tel.archives-ouvertes.fr/tel-00938782.

Texto completo
Resumen
Traditionally, in the avionics domain, the software used on board the aircraft is totally separated from the software used outside, in order to avoid any interaction that could corrupt the aircraft's critical systems. However, new generations of aircraft require more interaction with the open world, with the goal of offering extended services, thus generating a potentially dangerous information flow. In a previous study, we proposed the use of virtualization to ensure the dependability of critical applications handling bidirectional communications between critical and untrusted systems. In this thesis we make two contributions. The first contribution proposes a method for comparing hypervisors. We developed a test bench to measure the performance of a virtualized system. In this study, different configurations were experimented with, from a system without an OS to a complete architecture with a hypervisor and an OS running in a virtual machine. Several benchmarks (processor, memory and network) were measured and collected on different hypervisors. The second contribution focuses on improving an existing security architecture. A comparison mechanism based on the analysis of execution traces is used to detect anomalies between application instances running on different virtual machines. We propose to strengthen the runtime comparison mechanism by using an execution model derived from a static analysis of the Java bytecode. To validate our approach, we developed a prototype based on a case study identified with Airbus, concerning the use of a laptop dedicated to maintenance.
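The runtime comparison mechanism can be sketched as follows (function and call names are hypothetical): the same application runs in two virtual machines, their execution traces must match, and every observed call must also be allowed by a model derived from static bytecode analysis.

```python
# Toy trace comparison strengthened by a statically derived execution model.
def traces_agree(trace_a, trace_b, model):
    """True iff both traces are identical and every call is allowed by model."""
    return trace_a == trace_b and all(call in model for call in trace_a)

# allowed calls, as a static analysis of the Java bytecode might produce
model = {"init", "read_sensor", "log", "shutdown"}

ok = traces_agree(["init", "read_sensor", "log"],
                  ["init", "read_sensor", "log"], model)
tampered = traces_agree(["init", "read_sensor", "log"],
                        ["init", "write_flash", "log"], model)  # divergence
```

Divergence between instances flags a compromised VM, while the static model additionally rejects calls that no legitimate execution could ever make.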
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Bielski, Maciej. "Nouvelles techniques de virtualisation de la mémoire et des entrées-sorties vers les périphériques pour les prochaines générations de centres de traitement de données basés sur des équipements répartis déstructurés". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT022/document.

Texto completo
Resumen
This thesis is set in the context of the disaggregation of computing systems, a novel approach expected to gain popularity in the data center sector. Unlike traditional clustered systems, where resources are provided by one or more machines, in disaggregated systems resources are provided by discrete nodes, each node providing only one type of resource (compute units, memory, peripherals). Instead of the term machine, the term slot is used to describe a workload deployment unit. The slot is dynamically assembled by the system orchestrator before a workload is deployed. In the introduction we address the subject of disaggregation and present its advantages over clustered architectures. We also add a virtualization layer to the picture, as it is a crucial element of data centers: virtualization provides isolation between deployed workloads and flexible resource partitioning, but it must be adapted to take full advantage of disaggregation. The main contributions of this work therefore focus on virtualization layer support for disaggregated memory and peripheral provisioning. The first main contribution presents the software stack modifications related to flexible resizing of a virtual machine's (VM) memory. They make it possible to adjust the amount of guest RAM (i.e., used by the workload running in a VM) at runtime, at the granularity of a memory section. From the software's point of view, it is transparent whether the RAM comes from local or remote memory banks. The second contribution discusses the notions of inter-VM memory sharing and VM migration in the context of disaggregation. We first present how disaggregated memory regions can be shared between VMs running on different nodes, and we discuss different variants of the method for serializing concurrent accesses. We then explain that the notion of VM migration takes on a twofold meaning with disaggregation. Because resources are disaggregated, a workload is associated with at least one compute node and one memory node; it can therefore be migrated to a different compute node while continuing to use the same memory, or the reverse. We discuss both cases and describe how this can open new opportunities for server consolidation. The last contribution of this thesis relates to the virtualization of disaggregated peripherals. Starting from the assumption that architectural disaggregation brings many positive effects in general, we explain why it is not immediately compatible with the direct attachment technique, which is nevertheless very popular for its near-native performance. To address this limitation, we present a solution that adapts the direct attachment concept to architectural disaggregation. With this solution, disaggregated devices can be directly attached to virtual machines as if they were plugged in locally. Moreover, the guest OS, to which the configuration of the underlying infrastructure is not visible, is itself unaffected by the introduced modifications.
This dissertation is positioned in the context of system disaggregation, a novel approach expected to gain popularity in the data center sector. In traditional clustered systems, resources are provided by one or multiple machines. In contrast, in disaggregated systems resources are provided by discrete nodes, each node providing only one type of resource (CPUs, memory, or peripherals). Instead of a machine, the term slot is used to describe a workload deployment unit. The slot is dynamically assembled before a workload deployment by a unit called the system orchestrator. In the introduction of this work, we discuss the subject of disaggregation and present its benefits compared to clustered architectures. We also add a virtualization layer to the picture, as it is a crucial part of data center systems. It provides isolation between deployed workloads and flexible resource partitioning. However, the virtualization layer needs to be adapted in order to take full advantage of disaggregation. Thus, the main contributions of this work focus on virtualization-layer support for disaggregated memory and device provisioning. The first main contribution presents the software stack modifications related to flexible resizing of virtual machine (VM) memory. They allow the amount of guest RAM (the RAM used by the workload running in a VM) to be adjusted at runtime at a memory-section granularity. From the software perspective, it is transparent whether the RAM comes from local or remote memory banks. As a second main contribution, we discuss the notions of inter-VM memory sharing and VM migration in the disaggregation context. We first present how regions of disaggregated memory can be shared between VMs running on different nodes. This sharing is performed in a way that keeps the involved guests unaware of whether or not they are co-located on the same computing node. Additionally, we discuss different flavors of serialization methods for concurrent accesses.
We then explain how the term VM migration has gained a twofold meaning. Because of resource disaggregation, a workload is associated with at least one computing node and one memory node. It is therefore possible for it to be migrated to a different computing node while keeping the same memory, or the opposite. We discuss both cases and describe how this can open new opportunities for server consolidation. The last main contribution of this dissertation is related to the virtualization of disaggregated peripherals. Starting from the assumption that architecture disaggregation brings many positive effects in general, we explain why it breaks the passthrough peripheral attachment technique (also known as direct attachment), which is very popular for its near-native performance. To address this limitation, we present a design that adapts the passthrough attachment concept to architecture disaggregation. With this design, disaggregated devices can be directly attached to VMs as if they were plugged in locally. Moreover, none of the modifications involve the guest OS itself, to which the setup of the underlying infrastructure is not visible.
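The abstract above describes slots assembled on demand from single-resource nodes by a system orchestrator. A minimal sketch of such orchestration follows; the API, node names, and capacities are entirely invented for illustration and are not from the thesis.

```python
# Sketch of disaggregated "slot" assembly: each node offers a single
# resource type, and the orchestrator combines nodes into a deployment
# unit (slot) before a workload is launched. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    kind: str        # "cpu", "memory", or "peripheral"
    capacity: int    # cores, MiB, or device count
    free: int = field(default=-1)

    def __post_init__(self):
        if self.free < 0:
            self.free = self.capacity

@dataclass
class Slot:
    cpu: str
    memory: list     # (node_id, MiB) pairs; may span several memory nodes

class Orchestrator:
    def __init__(self, nodes):
        self.nodes = {n.node_id: n for n in nodes}

    def assemble_slot(self, cores, mem_mib):
        cpu = next(n for n in self.nodes.values()
                   if n.kind == "cpu" and n.free >= cores)
        cpu.free -= cores
        grants, need = [], mem_mib
        for n in self.nodes.values():           # memory may be striped
            if n.kind == "memory" and n.free > 0 and need > 0:
                take = min(n.free, need)
                n.free -= take
                need -= take
                grants.append((n.node_id, take))
        if need > 0:
            raise RuntimeError("not enough disaggregated memory")
        return Slot(cpu=cpu.node_id, memory=grants)

orch = Orchestrator([Node("c0", "cpu", 64),
                     Node("m0", "memory", 4096),
                     Node("m1", "memory", 4096)])
slot = orch.assemble_slot(cores=8, mem_mib=6144)
print(slot.cpu, slot.memory)   # the memory grant spans both memory nodes
```

Note how the slot's memory can span several memory nodes, which is exactly the property that makes transparent local/remote RAM placement (as in the first contribution) necessary at the virtualization layer.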
Citation styles: APA, Harvard, Vancouver, ISO, etc.
29

Abdullah, Miran Taha. "Smart Client-Server Protocol and Architecture for Adaptive Multimedia Streaming". Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/103324.

Full text
Abstract
In recent years, multimedia services consumption has increased, and this trend is expected to continue in the near future, making the evaluation of Quality of Experience (QoE) a very important issue for assessing the quality of providers' services. In this sense, the optimization of QoE is progressively receiving much attention, considering that current solutions do not account for adaptation, feasibility, cost-effectiveness, and reliability. The present dissertation is focused on the characterization, design, development, and evaluation of different multimedia applications aimed at optimizing QoE. Therefore, this work investigates the influence that the network infrastructure, the videos' characteristics, and the users' terminals have on the QoE of current Internet multimedia services. The work is based on comprehensive research into subjective and objective assessments in heterogeneous networks. Challenges and research questions related to the state of the art are discussed in this dissertation. In the first phase of this dissertation, we design a test methodology for assessing the QoE of live video streaming and video-on-demand platforms transmitted over Wi-Fi and cellular networks. From this initial step, we formulate the research issues and questions addressed throughout this dissertation. Our methodology considers the use of subjective and objective metrics to evaluate the QoE perceived by end users. A set of laboratory experiments is conducted in which our proposed methodology is applied. The obtained results are gathered and analyzed to extract the relations between Quality of Service (QoS) and QoE. From the results, we propose a QoS-QoE mapping which allows predicting QoE. In the next phase of the research, we develop QoE-optimization algorithms based on network system management for Wi-Fi and cellular networks. The algorithms use the key parameters that were taken into account for the QoE assessment.
The goal of these algorithms is to provide a flexible management system for the networks in order to achieve the desired trade-off between QoE maximization and resource usage efficiency. Lastly, a system testbed is designed to evaluate the performance of generic multimedia service applications in the different environments under test. The system testbed is based on a virtualization approach; it uses the shared resources of physical hardware to virtualize all components. The virtualized testbed provides virtualized network functions for different scenarios such as the Internet (Content Delivery Networks, CDNs) and wireless networks. Lightweight protocols and agile mechanisms are adopted in the system to provide enhanced service to end users. The QoE results are reported to the service providers according to the parameters defined in the evaluation process. As a result, we have obtained a cost-effective system, which is a feasible way to carry out test evaluation.
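The abstract above mentions a QoS-QoE mapping that predicts QoE from measured network parameters. The sketch below shows one common shape such a mapping can take, an exponential decay of a Mean Opinion Score with packet loss in the spirit of the IQX hypothesis; the coefficients are made up for the example and are not fitted values from the thesis.

```python
# Illustrative QoS-to-QoE mapping: perceived quality (MOS, 1..5) decays
# exponentially with an impairment such as packet loss. The coefficients
# alpha, beta, gamma below are invented, not results from the thesis.
import math

def mos_from_loss(loss_pct, alpha=3.5, beta=0.4, gamma=1.5):
    """Map packet loss (percent) to a Mean Opinion Score estimate."""
    mos = alpha * math.exp(-beta * loss_pct) + gamma
    return max(1.0, min(5.0, mos))  # clamp to the MOS scale

for loss in (0.0, 1.0, 5.0):
    print(f"{loss:.0f}% loss -> MOS {mos_from_loss(loss):.2f}")
```

In practice such coefficients would be fitted against the subjective scores gathered in the laboratory experiments the abstract describes.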
Abdullah, MT. (2018). Smart Client-Server Protocol and Architecture for Adaptive Multimedia Streaming [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/103324
30

Sandberg, Andreas. "Understanding Multicore Performance : Efficient Memory System Modeling and Simulation". Doctoral thesis, Uppsala universitet, Avdelningen för datorteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-220652.

Full text
Abstract
To increase performance, modern processors employ complex techniques such as out-of-order pipelines and deep cache hierarchies. While the increasing complexity has paid off in performance, it has become harder to accurately predict the effects of hardware/software optimizations in such systems. Traditional microarchitectural simulators typically execute code 10 000×–100 000× slower than native execution, which leads to three problems: First, high simulation overhead makes it hard to use microarchitectural simulators for tasks such as software optimizations where rapid turn-around is required. Second, when multiple cores share the memory system, the resulting performance is sensitive to how memory accesses from the different cores interleave. This requires that applications be simulated multiple times with different interleavings to estimate their performance distribution, which is rarely feasible with today's simulators. Third, the high overhead limits the size of the applications that can be studied. This is usually solved by only simulating a relatively small number of instructions near the start of an application, with the risk of reporting unrepresentative results. In this thesis we demonstrate three strategies to accurately model multicore processors without the overhead of traditional simulation. First, we show how microarchitecture-independent memory access profiles can be used to drive automatic cache optimizations and to qualitatively classify an application's last-level cache behavior. Second, we demonstrate how high-level performance profiles, which can be measured on existing hardware, can be used to model the behavior of a shared cache. Unlike previous models, we predict the effective amount of cache available to each application and the resulting performance distribution due to different interleavings without requiring a processor model. Third, in order to model future systems, we build an efficient sampling simulator.
By using native execution to fast-forward between samples, we reach new samples much faster than a single sample can be simulated. This enables us to simulate multiple samples in parallel, resulting in almost linear scalability and a maximum simulation rate close to native execution.
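The sampling idea described above, simulating short windows in detail and fast-forwarding between them, can be illustrated with a toy model; the workload, sample length, and interval below are synthetic and chosen only to show the extrapolation step.

```python
# Toy illustration of sampled simulation: instead of simulating every
# instruction in detail, simulate short samples and fast-forward (here:
# simply skip ahead) between them, then extrapolate CPI from the samples.
import random

random.seed(0)
# Synthetic per-instruction cycle costs for a 1M-instruction "workload":
# most instructions take 1 cycle, some stall for 4.
workload = [random.choice((1, 1, 1, 4)) for _ in range(1_000_000)]

def sampled_cpi(trace, sample_len=1000, interval=100_000):
    sampled_cycles = sampled_insns = 0
    pos = 0
    while pos < len(trace):
        sample = trace[pos:pos + sample_len]   # detailed simulation
        sampled_cycles += sum(sample)
        sampled_insns += len(sample)
        pos += interval                        # fast-forward (native speed)
    return sampled_cycles / sampled_insns

true_cpi = sum(workload) / len(workload)
est_cpi = sampled_cpi(workload)
print(f"true CPI {true_cpi:.3f}, sampled estimate {est_cpi:.3f}")
```

Here only 1% of the instructions are "simulated", yet the CPI estimate lands close to the true value, which is the property that makes near-native simulation rates possible.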
CoDeR-MP
UPMARC
31

Ruwase, Olatunji O. "Improving Device Driver Reliability through Decoupled Dynamic Binary Analyses". Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/233.

Full text
Abstract
Device drivers are Operating Systems (OS) extensions that enable the use of I/O devices in computing systems. However, studies have identified drivers as an Achilles’ heel of system reliability, their high fault rate accounting for a significant portion of system failures. Consequently, significant effort has been directed towards improving system robustness by protecting system components (e.g., OS kernel, I/O devices, etc.) from the harmful effects of driver faults. In contrast to prior techniques which focused on preventing unsafe driver interactions (e.g., with the OS kernel), my thesis is that checking a driver’s execution for correctness violations results in the detection and mitigation of more faults. To validate this thesis, I present Guardrail, a flexible and powerful framework that enables instruction-grained dynamic analysis (e.g., data race detection) of unmodified kernel-mode driver binaries to safeguard I/O operations and devices from driver faults. Guardrail decouples the analysis tool from driver execution to improve performance, and runs it in user-space to simplify the deployment of new tools. Moreover, Guardrail leverages virtualization to be transparent to both the driver and device, and enable support for arbitrary driver/device combinations. To demonstrate Guardrail’s generality, I implemented three novel dynamic checking tools within the framework for detecting memory faults, data races and DMA faults in drivers. These tools found 25 serious bugs, including previously unknown bugs, in Linux storage and network drivers. Some of the bugs existed in several Linux (and driver) releases, suggesting their elusiveness to existing approaches. Guardrail easily detected these bugs using common driver workloads. Finally, I present an evaluation of using Guardrail to protect network and storage I/O operations from memory faults, data races and DMA faults in drivers. 
The results show that with hardware-assisted logging for decoupling the heavyweight analyses from driver execution, standard I/O workloads generally experienced negligible slowdown in their end-to-end performance. In conclusion, Guardrail's high-fidelity fault detection and efficient monitoring performance make it a promising approach for improving the resilience of computing systems to the wide variety of driver faults.
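The decoupling described above, a fast path that only logs driver events and a separate pass that checks them, can be sketched as follows; the event format and the use-after-free check are invented for illustration and are not Guardrail's actual interfaces.

```python
# Sketch of decoupled checking in the spirit of Guardrail: the "driver"
# side only appends a lightweight log of events, and a separate analysis
# pass consumes the log and flags correctness violations (here, a
# use-after-free check). The event format is invented for illustration.
log = []

def driver_run():
    # Instrumented execution records events instead of checking inline.
    log.append(("alloc", 0x1000))
    log.append(("write", 0x1000))
    log.append(("free",  0x1000))
    log.append(("write", 0x1000))   # buggy access after free

def analyze(events):
    live, violations = set(), []
    for op, addr in events:
        if op == "alloc":
            live.add(addr)
        elif op == "free":
            live.discard(addr)
        elif op in ("read", "write") and addr not in live:
            violations.append((op, addr))
    return violations

driver_run()                 # fast path: logging only
print(analyze(log))          # slow path: decoupled, possibly user-space
```

The point of the split is that the expensive `analyze` step can run elsewhere (in user space, on another core) without slowing the driver's critical path.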
32

Hnarakis, Ryan. "In Perfect Xen, A Performance Study of the Emerging Xen Scheduler". DigitalCommons@CalPoly, 2013. https://digitalcommons.calpoly.edu/theses/1121.

Full text
Abstract
Fifty percent of Fortune 500 companies trust Xen, an open-source bare-metal hypervisor, to virtualize their websites and mission-critical services in the cloud. Providing superior fault tolerance, scalability, and migration, virtualization allows these companies to run several isolated operating systems simultaneously on the same physical server. These isolated operating systems, called virtual machines, require a virtual traffic guard to cooperate with one another. This guard, known as the Credit2 scheduler, was recently developed along with the newest Xen hypervisor to supersede the older schedulers. Since wasted CPU cycles can be costly, the Credit2 prototype must undergo significant performance validation before being released into production. Furthermore, leading commercial virtualization products, including VMware and Microsoft Hyper-V, frequently adopt Xen's proven technologies. This thesis provides quantitative performance measurements of the Credit1 and Credit2 schedulers and provides recommendations for building hypervisor schedulers.
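The core idea behind Xen's credit-based schedulers can be shown in a few lines: each vCPU accrues credits proportional to its weight, the runnable vCPU with the most credits runs next, and running burns credits. This is a heavily simplified toy, not the actual Credit1 or Credit2 algorithm; the weights and refill amounts are illustrative only.

```python
# Toy credit scheduler in the spirit of Xen's credit-based schedulers:
# credits are granted in proportion to weight and consumed by running.
class VCpu:
    def __init__(self, name, weight):
        self.name, self.weight, self.credits = name, weight, 0

class CreditScheduler:
    def __init__(self, vcpus, refill=300):
        self.vcpus, self.refill = vcpus, refill

    def refill_credits(self):
        total = sum(v.weight for v in self.vcpus)
        for v in self.vcpus:
            v.credits += self.refill * v.weight // total

    def tick(self, cost=10):
        """Pick the vCPU with the most credits and charge it for the slice."""
        v = max(self.vcpus, key=lambda v: v.credits)
        v.credits -= cost
        return v.name

sched = CreditScheduler([VCpu("web", 256), VCpu("db", 512)])
sched.refill_credits()
slices = [sched.tick() for _ in range(30)]
print(slices.count("db"), slices.count("web"))   # db gets twice the CPU time
```

With a 2:1 weight ratio, "db" receives twice as many time slices as "web" over a full refill period, which is the proportional-share behavior the schedulers under study aim to provide.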
33

Nitu, Vlad-Tiberiu. "Improving energy efficiency of virtualized datacenters". PhD thesis, Toulouse, INPT, 2018. http://oatao.univ-toulouse.fr/23799/1/NITU_Vlad%20Tiberiu.pdf.

Full text
Abstract
Nowadays, many organizations choose to increasingly implement the cloud computing approach. More specifically, as customers, these organizations outsource the management of their physical infrastructure to data centers (or cloud computing platforms). Energy consumption is a primary concern for datacenter (DC) management. Its cost represents about 80% of the total cost of ownership, and it is estimated that in 2020 the US DCs alone will spend about $13 billion on energy bills. Generally, datacenter servers are manufactured in such a way that they achieve high energy efficiency at high utilizations. Thus, for a low cost per computation, all datacenter servers should push utilization as high as possible. In order to fight the historically low utilization, cloud computing adopted server virtualization. The latter allows a physical server to execute multiple virtual servers (called virtual machines) in an isolated way. With virtualization, the cloud provider can pack (consolidate) the entire set of virtual machines (VMs) on a small set of physical servers and thereby reduce the number of active servers. Even so, datacenter servers rarely reach utilizations higher than 50%, which means that they operate with sets of long-term unused resources (called 'holes'). My first contribution is a cloud management system that dynamically splits/fuses VMs so that they can better fill the holes. This solution is effective only for elastic applications, i.e. applications that can be executed and reconfigured over an arbitrary number of VMs. However, datacenter resource fragmentation stems from a more fundamental problem. Over time, cloud applications demand more and more memory, while physical servers provide more and more CPU. In today's datacenters, the two resources are strongly coupled since they are bound to a physical server. My second contribution is a practical way to decouple the CPU-memory tuple that can simply be applied to a commodity server.
Thereby, the two resources can vary independently, depending on their demand. My third and fourth contributions show a practical system which exploits the second contribution. The underutilization observed on physical servers is also true for virtual machines. It has been shown that VMs consume only a small fraction of the allocated resources because cloud customers are not able to correctly estimate the resource amount necessary for their applications. My third contribution is a system that estimates the memory consumption (i.e. the working set size) of a VM, with low overhead and high accuracy. Thereby, we can now consolidate the VMs based on their working set size (not the booked memory). However, the drawback of this approach is the risk of memory starvation. If one or multiple VMs have a sharp increase in memory demand, the physical server may run out of memory. This event is undesirable because the cloud platform is unable to provide the client with the booked memory. My fourth contribution is a system that allows a VM to use remote memory provided by a different rack server. Thereby, in the case of peak memory demand, my system allows the VM to allocate memory on a remote physical server.
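The gain from consolidating by working set size rather than booked memory can be shown with a small packing example; the VM names, sizes, and the first-fit-decreasing heuristic below are all illustrative, not the thesis's actual consolidation algorithm.

```python
# Sketch of consolidation by working set size (WSS) rather than booked
# memory: VMs are packed onto hosts first-fit-decreasing using either
# metric. All sizes (GiB) and VM names are invented.
def pack(vms, host_mem, key):
    hosts = []   # each host: its placed VMs and remaining capacity
    for vm in sorted(vms, key=key, reverse=True):
        need = key(vm)
        for h in hosts:
            if h["free"] >= need:
                h["vms"].append(vm[0])
                h["free"] -= need
                break
        else:
            hosts.append({"vms": [vm[0]], "free": host_mem - need})
    return hosts

# (name, booked GiB, estimated working set GiB)
vms = [("a", 16, 6), ("b", 16, 5), ("c", 8, 3), ("d", 8, 2), ("e", 16, 7)]
by_booked = pack(vms, host_mem=32, key=lambda v: v[1])
by_wss    = pack(vms, host_mem=32, key=lambda v: v[2])
print(len(by_booked), "hosts by booked memory,",
      len(by_wss), "host(s) by working set size")
```

Packing by booked memory needs two hosts here, while packing by working set fits everything on one, which is also why the memory-starvation risk (and the remote-memory fallback of the fourth contribution) arises.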
34

Siqueira, Marcos Antonio de 1978. "Redes ópticas de transporte definidas por software com suporte à virtualização e operação autônoma com base em políticas". [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260678.

Full text
Abstract
Advisor: Christian Rodolfo Esteve Rothenberg
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: This thesis proposes an architecture for the control of optical transport networks, using the software-defined networking paradigm, with support for policy-based autonomic operation. The architecture is composed of three pillars: (i) modeling of network elements, their interconnections, constraints, and capabilities using the YANG language; (ii) composition of the network element models and their interconnections into a network model, supporting transformations for representing the network as property graphs; and (iii) a policy model based on objects associated with the network graph, designed to allow autonomic operation of the network controller. The proposal has been validated through a set of proofs of concept performed via simulations, prototypes, and experiments, including use cases for optical transport network slicing and virtualization, SDN applications for policy-based adjustment of operational parameters, and autonomic operation of the SDN controller assisted by simulation tools with routines for automated planning.
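The second and third pillars above, the network as a property graph with policies evaluated over it, can be sketched in miniature; the graph schema, the node/edge properties, and the "nearly full" policy are invented for illustration and do not come from the thesis.

```python
# Sketch of the property-graph idea: network elements become vertices
# with properties, links become edges, and a policy is a predicate
# evaluated over the graph to drive autonomic decisions.
class PropertyGraph:
    def __init__(self):
        self.nodes, self.edges = {}, []

    def add_node(self, nid, **props):
        self.nodes[nid] = props

    def add_edge(self, a, b, **props):
        self.edges.append((a, b, props))

    def match_edges(self, predicate):
        return [(a, b) for a, b, p in self.edges if predicate(p)]

g = PropertyGraph()
g.add_node("roadm1", kind="roadm", vendor="x")
g.add_node("roadm2", kind="roadm", vendor="y")
g.add_edge("roadm1", "roadm2", osnr_db=14.2, occupancy=0.91)

# Policy: flag optical links that are nearly full so the controller can
# trigger re-planning before congestion occurs.
nearly_full = lambda p: p.get("occupancy", 0) > 0.85
print(g.match_edges(nearly_full))   # [('roadm1', 'roadm2')]
```

A controller loop would evaluate such predicates periodically and feed the matches to the automated-planning routines the abstract mentions.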
Doctorate
Computer Engineering
Doctor in Electrical Engineering
35

Nasim, Robayet. "Architectural Evolution of Intelligent Transport Systems (ITS) using Cloud Computing". Licentiate thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-35719.

Full text
Abstract
With the advent of Smart Cities, Intelligent Transport System (ITS) has become an efficient way of offering an accessible, safe, and sustainable transportation system. Utilizing advances in Information and Communication Technology (ICT), ITS can maximize the capacity of existing transportation systems without building new infrastructure. However, in spite of these technical feasibilities and significant performance-cost ratios, the deployment of ITS is limited in the real world because of several challenges associated with its architectural design. This thesis studies how to design a highly flexible and deployable architecture for ITS, which can utilize recent technologies such as cloud computing and the publish/subscribe communication model. In particular, our aim is to offer an ITS infrastructure which provides the opportunity for transport authorities to allocate on-demand computing resources through virtualization technology, and supports a wide range of ITS applications. We propose to use an Infrastructure as a Service (IaaS) model to host large-scale ITS applications for transport authorities in the cloud, which reduces infrastructure cost, improves management flexibility, and also ensures better resource utilization. Moreover, we use a publish/subscribe system as a building block for developing a low-latency ITS application, a promising technology for designing scalable and distributed applications within the ITS domain. Although cloud-based architectures provide the flexibility of adding, removing, or moving ITS services within the underlying physical infrastructure, it may be difficult to provide the required quality of service (QoS), which decreases application productivity and customer satisfaction, leading to revenue losses. Therefore, we investigate the impact of service mobility on the related QoS in the cloud-based infrastructure.
We investigate different strategies to improve the performance of a low-latency ITS application during service mobility, such as utilizing multiple paths to spread network traffic or deploying recent queue management schemes. Evaluation results from a private cloud testbed using OpenStack show that our proposed architecture is suitable for hosting ITS applications that have stringent performance requirements in terms of scalability, QoS, and latency.
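The publish/subscribe building block described above can be reduced to a minimal topic-based broker; this single-process sketch uses invented topic names and message formats and is not the thesis's actual implementation.

```python
# Minimal topic-based publish/subscribe broker of the kind an ITS
# application could build on: sensors publish to a topic, and every
# subscriber registered for that topic receives the message.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers[topic]:
            cb(message)

broker = Broker()
alerts = []
broker.subscribe("traffic/incidents", alerts.append)
broker.publish("traffic/incidents", {"road": "E18", "type": "accident"})
broker.publish("traffic/weather", {"road": "E18", "type": "ice"})  # no subscriber
print(alerts)   # only the incident message was delivered
```

The decoupling of publishers from subscribers is what makes the pattern attractive for the scalable, distributed ITS applications the abstract targets.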
Back-cover text: Intelligent Transport System (ITS) can utilize advances in Information and Communication Technology (ICT) and maximize the capacity of existing transportation systems without building new infrastructure. However, in spite of these technical feasibilities and significant performance-cost ratios, the deployment of ITS is limited in the real world because of several challenges associated with its architectural design. This thesis studies how to design an efficient, deployable architecture for ITS which can utilize the advantages of cloud computing and the publish/subscribe communication model. In particular, our aim is to offer an ITS infrastructure which provides the opportunity for transport authorities to allocate on-demand computing resources through virtualization technology, and supports a wide range of ITS applications. We propose to use an Infrastructure as a Service (IaaS) model to host large-scale ITS applications, and to use a publish/subscribe system as a building block for developing a low-latency ITS application. We investigate different strategies to improve the performance of an ITS application during service mobility, such as utilizing multiple paths to spread network traffic or deploying recent queue management schemes.

Paper 4, "Network Centric Performance Improvement for Live VM Migration", appears in the thesis as a manuscript; it has since been published as a conference paper.

36

Carbone, Martim. "Semantic view re-creation for the secure monitoring of virtual machines". Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44841.

Full text
Abstract
The insecurity of modern-day software has created the need for security monitoring applications. Two serious deficiencies are commonly found in these applications. First, the absence of isolation from the system being monitored allows malicious software to tamper with them. Second, the lack of secure and reliable monitoring primitives in the operating system makes them easy to evade. A technique known as Virtual Machine Introspection attempts to solve these problems by leveraging the isolation and mediation properties of full-system virtualization. A problem known as the semantic gap, however, occurs as a result of the low-level separation enforced by the hypervisor. This thesis proposes and investigates novel techniques to overcome the semantic gap, advancing the state of the art in syntactic and semantic view re-creation for applications that conduct passive and active monitoring of virtual machines. First, we propose a new technique for reconstructing a syntactic view of the guest OS kernel's heap state by applying a combination of static code and dynamic memory analysis. Our key contribution is the accuracy and completeness of our analysis. We also propose a new technique that allows out-of-VM applications to invoke and securely execute API functions inside the monitored guest's kernel, eliminating the need for the application to know details of the guest's internals. Our key contribution is the ability to overcome the semantic gap in a robust and secure manner. Finally, we propose a new virtualization-based event monitoring technique based on the interception of kernel data modifications. Our key contribution is the ability to monitor operating system events in a general and secure fashion.
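The semantic gap mentioned above arises because the hypervisor sees guest memory only as raw bytes; re-creating a view means re-applying the guest kernel's data layout. The toy below shows that step with a 16-byte "task struct" whose layout (pid, state, 8-byte name) is invented purely for the example.

```python
# Toy illustration of bridging the semantic gap: raw guest memory is
# just bytes, and the monitor re-creates structure by applying the
# guest kernel's known layout. The struct layout here is invented.
import struct

TASK_FMT = "<II8s"   # little-endian: pid (u32), state (u32), name (8 bytes)

def read_task(raw, offset=0):
    pid, state, name = struct.unpack_from(TASK_FMT, raw, offset)
    return {"pid": pid, "state": state,
            "name": name.rstrip(b"\x00").decode()}

# Pretend this buffer was copied out of the guest's physical memory.
guest_mem = struct.pack(TASK_FMT, 1234, 0, b"sshd")
print(read_task(guest_mem))   # {'pid': 1234, 'state': 0, 'name': 'sshd'}
```

Real introspection must additionally locate such structures (e.g., by walking kernel lists), which is where the thesis's combination of static code and dynamic memory analysis comes in.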
37

Srivastava, Abhinav. "Robust and secure monitoring and attribution of malicious behaviors". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41161.

Texto completo
Resumen
Worldwide, computer systems continue to execute malicious software that degrades the systems' performance and consumes network capacity by generating high volumes of unwanted traffic. Network-based detectors can effectively identify machines participating in ongoing attacks by monitoring the traffic to and from the systems. But network detection alone is not enough; it does not improve the operation of the Internet or the health of other machines connected to the network. We must identify the malicious code running on infected systems that participates in global attack networks. This dissertation describes a robust and secure approach that identifies malware present on infected systems based on its undesirable use of the network. Our approach, using virtualization, attributes malicious traffic to the host-level processes responsible for that traffic. The attribution identifies on-host processes, but malware instances often exhibit parasitic behaviors to subvert the execution of benign processes. We therefore augment the attribution software with a host-level monitor that detects parasitic behaviors occurring at the user and kernel level. User-level parasitic attacks are detected via the system-call interface because it is a non-bypassable interface for user-level processes. Since no such interface exists inside the kernel for drivers, we create a new driver monitoring interface inside the kernel to detect parasitic attacks occurring through it. Our attribution software relies on the guest kernel's data to identify on-host processes. To allow secure attribution, we prevent illegal modifications of critical kernel data by kernel-level malware. Together, our contributions produce a unified research outcome: an improved malicious code identification system for user- and kernel-level malware.
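The attribution step can be pictured as matching an observed flow endpoint against a table of socket owners. The table, field names and data below are invented; the thesis performs this lookup from the hypervisor using guest kernel data, not an in-guest table like this one.

```python
# Hypothetical socket table: (proto, local_ip, local_port) -> owning process.
socket_table = {
    ("tcp", "10.0.0.5", 25):  {"pid": 812,  "comm": "spam-bot"},
    ("tcp", "10.0.0.5", 443): {"pid": 1101, "comm": "nginx"},
}

def attribute_flow(proto, src_ip, src_port):
    """Return the host-level process responsible for traffic seen leaving
    this endpoint, or None if no owner is known."""
    return socket_table.get((proto, src_ip, src_port))

# A network detector flags outbound SMTP traffic; attribute it to a process.
owner = attribute_flow("tcp", "10.0.0.5", 25)
print(owner)
```

The parasitic behaviors the thesis targets are precisely the cases where this naive mapping lies: malware injects itself into a benign process so the lookup names the wrong culprit, which is why the system-call and driver-interface monitors are needed on top.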
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Kundu, Sajib. "Improving Resource Management in Virtualized Data Centers using Application Performance Models". FIU Digital Commons, 2013. http://digitalcommons.fiu.edu/etd/874.

Texto completo
Resumen
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level-Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience; on the other hand, administrators will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment.
Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Network and Support Vector Machine, to accurately model the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these modeling tools. Third, we presented an approach to optimal VM sizing by employing the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm which maximizes the SLA-generated revenue for a data center.
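The modeling-then-sizing loop can be illustrated with a deliberately tiny stand-in for the ANN/SVM models: fit a linear model from one resource-control parameter to application performance, then invert it to size a VM for a target SLA. All numbers are synthetic and the linear form is an assumption for illustration only.

```python
# Synthetic training data: allocated CPU share (%) vs observed throughput.
cpu_caps   = [10, 20, 30, 40, 50, 60]
throughput = [105, 198, 297, 405, 500, 601]   # requests/sec

# Ordinary least-squares fit (stand-in for the thesis's ANN/SVM models).
n = len(cpu_caps)
mx, my = sum(cpu_caps) / n, sum(throughput) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(cpu_caps, throughput))
         / sum((x - mx) ** 2 for x in cpu_caps))
intercept = my - slope * mx

def predict(cap):
    """Performance model: resource allocation -> expected throughput."""
    return slope * cap + intercept

def size_for(target_tps):
    """Optimal VM sizing: invert the model, SLA target -> allocation."""
    return (target_tps - intercept) / slope

print(round(predict(35)), round(size_for(450)))
```

A real model must capture contention and non-linear saturation, which is exactly why the thesis reaches for machine learning tools instead of a straight line.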
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Ministr, Martin. "Virtuální platformy pro simulaci instrukčních sad". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-235424.

Texto completo
Resumen
This master's thesis deals with the creation of code generators for the existing virtual platforms QEMU and OVP. The work begins with a study of the techniques that virtual machines use internally. Its main part is the design of a process that transforms input instruction-set descriptions into the code used by these virtual platforms. The result of this work is a set of functional programs that generate code for both virtual platforms.
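The transformation can be sketched as a generator that walks an instruction-set description and emits simulator source for a backend. The instruction set, the emitted C shape, and all names below are invented for illustration; the actual QEMU/OVP backends require far richer templates.

```python
# Hypothetical instruction-set description: (name, opcode, semantics template).
ISA = [
    ("add", 0x01, "regs[rd] = regs[rs1] + regs[rs2];"),
    ("sub", 0x02, "regs[rd] = regs[rs1] - regs[rs2];"),
]

def generate_decoder(isa):
    """Emit a C switch-based decoder stub for one virtual-platform backend."""
    lines = ["switch (opcode) {"]
    for name, opcode, semantics in isa:
        lines += [f"case 0x{opcode:02x}: /* {name} */",
                  f"    {semantics}",
                  "    break;"]
    lines.append("}")
    return "\n".join(lines)

print(generate_decoder(ISA))
```

The same description could drive a second emitter targeting the other platform's API, which is the point of generating rather than hand-writing the code.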
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Haddad, Ahmed. "GeRoFan : une architecture et un plan de contrôle basés sur la radio-sur-fibre pour la mutualisation des réseaux d'accès mobile de nouvelle génération". Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0025/document.

Texto completo
Resumen
Current radio access network architectures are not suited in terms of capacity and backhauling capabilities to fit the continuing traffic increase of 4G cellular systems. The objective of the thesis is to propose an innovative and generic mobile backhauling network architecture, called GeRoFAN (Generic Radio-over-Fiber Access Network), for next generation mobile systems (WiMAX, 4G LTE). Two major technological innovations are used to implement GeRoFAN: analog Radio-over-Fiber (RoF) and reflective amplified absorption modulators. The aim of this thesis is to design for such an architecture an original Control Plane (CP) and a signaling channel enabling to balance radio resources between a set of neighboring cells at the access/metropolitan scale according to traffic fluctuations. The transmission of several radio frequencies by means of an analog RoF link suffers from several impairments that may degrade the capacity of the radio system. The originality of the GeRoFAN-CP consists in mapping radio frequencies with optical carriers by means of Sub-Carrier Multiplexing (SCM) in order to optimize the Shannon capacity within the various cells covered by the system according to the current traffic load. For that purpose, a deep analysis and modeling of the various physical layer impairments impacting the quality of the radio signal is carried out. Unlike comparable approaches, the GeRoFAN-CP is as independent as possible from the radio layer protocols. Thus, the "radio MAC-agnostic" nature of the GeRoFAN-CP enables to federate multiple operators using different radio technologies onto the same backhauling optical infrastructure. Subcarrier and wavelength division multiplexing (SCM/WDM) as well as WDM optical routing capabilities are exploited onto the GeRoFAN transparent architecture. More globally, the GeRoFAN-CP enables a form of "radio frequency virtualization" while promoting new business models for Telecom service providers. 
The last part of the thesis focuses on the business value of the GeRoFAN paradigm. The expectations of the different stakeholders and main regulatory/organizational entities that could be involved in the deployment of GeRoFAN infrastructures should be addressed in order to achieve a smooth deployment of this new type of mobile backhauling. Economics of the GeRoFAN architecture are investigated in terms of OpEx/CapEx valuation and investment profitability, especially in reference to digitized RoF. Two business models are then proposed to study how GeRoFAN contributes to enriching the cellular backhauling service value chain.
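The quantity the GeRoFAN control plane optimizes is the Shannon capacity C = B log2(1 + SNR) of each radio channel after RoF transmission, where impairment noise lowers the effective SNR. The sketch below shows the effect with illustrative numbers only; they are not taken from the thesis's impairment model.

```python
import math

def shannon_capacity(bandwidth_hz, signal_mw, noise_mw):
    """Shannon capacity in bit/s: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + signal_mw / noise_mw)

B = 20e6  # a 20 MHz LTE-like radio channel

# Back-to-back channel vs the same channel with extra RoF impairment noise
# (assumed values: the thesis models these noises analytically).
clean    = shannon_capacity(B, signal_mw=1.0, noise_mw=0.001)
impaired = shannon_capacity(B, signal_mw=1.0, noise_mw=0.004)

loss_pct = 100 * (1 - impaired / clean)
print(round(clean / 1e6, 1), round(impaired / 1e6, 1), round(loss_pct, 1))
```

Because each optical subcarrier degrades channels differently, the control plane's job is to choose a channel-to-carrier mapping so that each cell's post-impairment capacity matches its traffic load.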
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Kominos, Charalampos Gavriil. "Performance analysis of different virtualization architectures using OpenStack". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-318099.

Texto completo
Resumen
Cloud computing is a modern model for on-demand access to a pool of configurable resources such as CPU, storage, etc. Despite its relative youth, it has already changed the face of present-day IT. The ability to request computing power presents a whole new list of opportunities and challenges. Virtual machines, containers and bare-metal machines are the three kinds of computing resources a cloud user can request from a cloud provider. In this master's thesis, we discuss and benchmark these three deployment methods for a private OpenStack cloud. We compare and contrast these systems in terms of CPU, networking behavior, disk I/O and RAM performance in order to determine the performance deterioration of each subsystem. We also try to determine empirically whether private clouds based on containers and physical machines are viable alternatives to the traditional VM-based scenario. To achieve these goals, a number of software suites have been selected to act as benchmarks, each stressing its respective subsystem. The output of these benchmarks is collected and the results are compared against each other. Finally, the different types of overhead that arise among these three deployment types are discussed.
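The comparison at the heart of such a study reduces to expressing each deployment's benchmark score relative to bare metal. A minimal sketch, with made-up scores rather than the thesis's measurements:

```python
# Hypothetical CPU-benchmark scores (higher is better, e.g. ops/sec).
scores = {
    "bare-metal": 1000.0,
    "container":  975.0,
    "vm":         890.0,
}

def overhead_pct(results, baseline="bare-metal"):
    """Virtualization overhead of each deployment vs the bare-metal baseline,
    as a percentage of lost performance."""
    base = results[baseline]
    return {k: round(100 * (base - v) / base, 1)
            for k, v in results.items() if k != baseline}

print(overhead_pct(scores))
```

Repeating this per subsystem (CPU, network, disk I/O, RAM) yields the per-subsystem deterioration profile the thesis reports.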
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Amaral, Marcelo. "Improving resource efficiency in virtualized datacenters". Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/666753.

Texto completo
Resumen
In recent years there has been an extraordinary growth of the Internet of Things (IoT) and its protocols. The increasing diffusion of electronic devices with identification, computing and communication capabilities is laying the ground for the emergence of a highly distributed service and networking environment. This situation implies an increasing demand for advanced IoT data management and processing platforms. Such platforms require support for multiple protocols at the edge for extended connectivity with the objects, but also need to exhibit uniform internal data organization and advanced data processing capabilities to fulfill the demands of the applications and services that consume IoT data. One of the initial approaches to address this demand is the integration between IoT and the Cloud computing paradigm. There are many benefits to integrating IoT with Cloud computing. The IoT generates massive amounts of data, and Cloud computing provides a pathway for that data to travel to its destination. But today's Cloud computing models do not quite fit the volume, variety, and velocity of data that the IoT generates. Among the new technologies emerging around the Internet of Things, the Fog computing paradigm has become the most relevant. Fog computing was introduced a few years ago in response to challenges posed by many IoT applications, including requirements such as very low latency, real-time operation, large geo-distribution, and mobility. These low-latency, geo-distributed, mobile environments are also covered by the MEC (Mobile Edge Computing) network architecture, which provides an IT service environment and Cloud computing capabilities at the edge of the mobile network, within the Radio Access Network (RAN) and in close proximity to mobile subscribers. Fog computing addresses use cases with requirements far beyond Cloud-only solution capabilities. 
The interplay between Cloud and Fog computing is crucial for the evolution of the so-called IoT, but the reach and specification of such interplay is an open problem. This thesis aims to find the right techniques and design decisions to build a scalable distributed system for the IoT under the Fog computing paradigm to ingest and process data. The final goal is to explore the trade-offs and challenges in the design of a solution from Edge to Cloud to address the opportunities that current and future technologies will bring in an integrated way. This thesis describes an architectural approach that addresses some of the technical challenges behind the convergence between IoT, Cloud and Fog, with special focus on bridging the gap between Cloud and Fog. To that end, new models and techniques are introduced in order to explore solutions for IoT environments. This thesis contributes to the architectural proposals for IoT ingestion and data processing by 1) proposing the characterization of a platform for hosting IoT workloads in the Cloud that provides multi-tenant data stream processing capabilities and interfaces over an advanced data-centric technology, including the building of a state-of-the-art infrastructure to evaluate the performance and validate the proposed solution; 2) studying an architectural approach following the Fog paradigm that addresses some of the technical challenges found in the first contribution, the idea being to extend the model to address some of the central challenges behind the convergence of Fog and IoT; and 3) designing a distributed and scalable platform to perform IoT operations in a moving-data environment. 
Having studied data processing in the Cloud and the suitability of the Fog paradigm for solving IoT challenges close to the Edge, the idea is to define the protocols, the interfaces and the data management needed to ingest and process data in a distributed and orchestrated manner under the Fog computing paradigm, for IoT in a moving-data environment.
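One recurring Edge-to-Cloud trade-off is doing work at the fog node so that only compact summaries cross the fog-to-cloud link. The sketch below is an invented illustration of that pattern, not the thesis's platform: raw per-sensor readings are reduced to (min, max, mean) windows before forwarding.

```python
from collections import defaultdict

def aggregate_at_edge(readings, window=3):
    """Reduce per-sensor streams to (min, max, mean) summaries per window;
    the returned dict is what would actually cross the fog-to-cloud link."""
    by_sensor = defaultdict(list)
    for sensor_id, value in readings:
        by_sensor[sensor_id].append(value)
    summaries = {}
    for sensor_id, vals in by_sensor.items():
        for i in range(0, len(vals), window):
            chunk = vals[i:i + window]
            summaries.setdefault(sensor_id, []).append(
                (min(chunk), max(chunk), sum(chunk) / len(chunk)))
    return summaries

# Synthetic stream from two sensors arriving at a fog node.
stream = [("t1", 20.0), ("t1", 22.0), ("t1", 21.0), ("t2", 5.0)]
print(aggregate_at_edge(stream))
```

The design question the thesis explores is where on the Edge-to-Cloud path such operators should run, and how to orchestrate them when the data (and the devices) move.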
En els últims anys hi ha hagut un gran creixement del Internet of Things (IoT) i els seus protocols. La creixent difusió de dispositius electrònics amb capacitats d'identificació, computació i comunicació esta establint les bases de l’aparició de serveis altament distribuïts i del seu entorn de xarxa. L’esmentada situació implica que hi ha una creixent demanda de plataformes de processament i gestió avançada de dades per IoT. Aquestes plataformes requereixen suport per a múltiples protocols al Edge per connectivitat amb el objectes, però també necessiten d’una organització de dades interna i capacitats avançades de processament de dades per satisfer les demandes de les aplicacions i els serveis que consumeixen dades IoT. Una de les aproximacions inicials per abordar aquesta demanda és la integració entre IoT i el paradigma del Cloud computing. Hi ha molts avantatges d'integrar IoT amb el Cloud. IoT genera quantitats massives de dades i el Cloud proporciona una via perquè aquestes dades viatgin a la seva destinació. Però els models actuals del Cloud no s'ajusten del tot al volum, varietat i velocitat de les dades que genera l'IoT. Entre les noves tecnologies que sorgeixen al voltant del IoT per proporcionar un escenari nou, el paradigma del Fog Computing s'ha convertit en la més rellevant. Fog Computing es va introduir fa uns anys com a resposta als desafiaments que plantegen moltes aplicacions IoT, incloent requisits com baixa latència, operacions en temps real, distribució geogràfica extensa i mobilitat. També aquest entorn està cobert per l'arquitectura de xarxa MEC (Mobile Edge Computing) que proporciona serveis de TI i capacitats Cloud al edge per la xarxa mòbil dins la Radio Access Network (RAN) i a prop dels subscriptors mòbils. El Fog aborda casos d’us amb requisits que van més enllà de les capacitats de solucions només Cloud. 
La interacció entre Cloud i Fog és crucial per a l'evolució de l'anomenat IoT, però l'abast i especificació d'aquesta interacció és un problema obert. Aquesta tesi té com objectiu trobar les decisions de disseny i les tècniques adequades per construir un sistema distribuït escalable per IoT sota el paradigma del Fog Computing per a ingerir i processar dades. L'objectiu final és explorar els avantatges/desavantatges i els desafiaments en el disseny d'una solució des del Edge al Cloud per abordar les oportunitats que les tecnologies actuals i futures portaran d'una manera integrada. Aquesta tesi descriu un enfocament arquitectònic que aborda alguns dels reptes tècnics que hi ha darrere de la convergència entre IoT, Cloud i Fog amb especial atenció a reduir la bretxa entre el Cloud i el Fog. Amb aquesta finalitat, s'introdueixen nous models i tècniques per explorar solucions per entorns IoT. Aquesta tesi contribueix a les propostes arquitectòniques per a la ingesta i el processament de dades IoT mitjançant 1) proposant la caracterització d'una plataforma per a l'allotjament de workloads IoT en el Cloud que proporcioni capacitats de processament de flux de dades multi-tenant, les interfícies a través d'una tecnologia centrada en dades incloent la construcció d'una infraestructura avançada per avaluar el rendiment i validar la solució proposada. 2) estudiar un enfocament arquitectònic seguint el paradigma Fog que aborda alguns dels reptes tècnics que es troben en la primera contribució. La idea és estudiar una extensió del model que abordi alguns dels reptes centrals que hi ha darrere de la convergència de Fog i IoT. 3) Dissenyar una plataforma distribuïda i escalable per a realitzar operacions IoT en un entorn de dades en moviment. 
La idea després d'estudiar el processament de dades en el Cloud, i després d'estudiar la conveniència del paradigma Fog per resoldre els desafiaments de IoT a prop del Edge, és definir els protocols, les interfícies i la gestió de dades per resoldre la ingestió i processament de dades d’una manera més eficient
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Marchal, Xavier. "Architectures et fonctions avancées pour le déploiement progressif de réseaux orientés contenus". Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0049/document.

Texto completo
Resumen
The Internet's historical protocols (TCP/IP), which were used to interconnect the very first computers, are no longer suitable for the massive distribution of content taking place today. New content-based network protocols (Information-Centric Networking) are currently being designed to optimize these exchanges by betting on a paradigm shift where content, rather than machines, is addressable across the Internet. However, such a change can only be made gradually, and only if all operational imperatives are met. Thus, this thesis aims to study and remove the main technological obstacles preventing the adoption of the NDN (Named Data Networking) protocol by operators, by guaranteeing the security, performance, interoperability, proper management and automated deployment of an NDN network. First, we evaluate the current performance of an NDN network with a tool we made, named ndnperf, and observe the high cost for a provider delivering fresh content using this protocol. Then, we propose some optimizations to improve the efficiency of packet generation by up to 6.4 times over the default parameters. Afterwards, we focus on the security of the NDN protocol with an evaluation of the content poisoning attack, known as the second most critical attack on NDN but never truly characterized. Our study is based on two scenarios: using a malicious user and content provider, or exploiting a flaw we found in the packet processing flow of the NDN router. We thus show the danger of this kind of attack and propose a software fix to prevent the most critical scenario. Thirdly, we adapt the HTTP protocol so that it can be transported over an NDN network for interoperability purposes. To do this, we designed an adaptation protocol and developed two gateways that perform the necessary conversions so that web content can seamlessly enter or exit an NDN network. 
After describing our solution, we evaluate and improve it so that web content benefits from a major NDN feature, in-network caching, showing up to a 61.3% cache-hit ratio in synthetic tests and 25.1% on average in browsing simulations with multiple users following a Zipf law of parameter 1.5. Finally, we propose a virtualized and orchestrated microservice architecture for the deployment of an NDN network following the Network Function Virtualization (NFV) paradigm. We developed seven microservices that implement either an atomic function of the NDN router or a new function for specific purposes. These functions can then be chained to constitute a full-fledged network. Our architecture is orchestrated by a manager that allows it to take full advantage of microservices, such as scaling the bottleneck functions or dynamically changing the topology to match current needs (under attack, for example). Our architecture, together with our other contributions on performance, security and interoperability, allows a better and more realistic deployment of NDN, especially thanks to easier development of new features, a network running on standard hardware, and the flexibility allowed by this kind of architecture.
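One job of an HTTP/NDN gateway is turning a location-based URL into a hierarchical, content-based NDN name so the request can be expressed as an NDN Interest. The mapping rule below is invented for illustration; it is not the adaptation protocol the thesis actually designed.

```python
from urllib.parse import urlsplit

def url_to_ndn_name(url):
    """Map a URL to a hierarchical NDN-style name (hypothetical scheme):
    reversed host labels followed by the path components."""
    parts = urlsplit(url)
    host_components = list(reversed(parts.hostname.split(".")))
    path_components = [p for p in parts.path.split("/") if p]
    return "/" + "/".join(["http"] + host_components + path_components)

print(url_to_ndn_name("http://www.example.org/news/today.html"))
```

Reversing the host labels makes the name's prefix structure mirror DNS delegation, so NDN prefix-based forwarding and caching can operate on web content without understanding HTTP.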
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Bui, Vo Quoc Bao. "Virtualization of micro-architectural components using software solutions". Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0082.

Texto completo
Resumen
Cloud computing has become a dominant computing paradigm in the information technology industry due to its flexibility and efficiency in resource sharing and management. The key technology that enables cloud computing is virtualization. Essential requirements in a virtualized system, where several virtual machines (VMs) run on the same physical machine, include performance isolation and predictability. To enforce these properties, the virtualization software (called the hypervisor) must find a way to divide the physical resources (e.g., physical memory, processor time) of the system and allocate them to VMs according to the amount of virtual resources defined for each VM. However, modern machines have complex architectures, and some micro-architectural-level resources such as processor caches, memory controllers and interconnects cannot be divided and allocated to VMs. They are globally shared among all VMs, which compete for their use, leading to contention. Therefore, performance isolation and predictability are compromised. In this thesis, we propose software solutions for preventing performance unpredictability due to micro-architectural components. The first contribution is called Kyoto, a solution to the cache contention issue, inspired by the polluter-pays principle. A VM is said to pollute the cache if it provokes significant cache replacements which impact the performance of other VMs. Henceforth, using the Kyoto system, the provider can encourage cloud users to book pollution permits for their VMs. The second contribution addresses the problem of efficiently virtualizing NUMA machines. The major challenge comes from the fact that the hypervisor regularly reconfigures the placement of a VM over the NUMA topology. However, neither guest operating systems (OSs) nor system runtime libraries (e.g., HotSpot) are designed to consider NUMA topology changes at runtime, leading end-user applications to unpredictable performance. 
We presents eXtended Para-Virtualization (XPV), a new principle to efficiently virtualize a NUMA architecture. XPV consists in revisiting the interface between the hypervisor and the guest OS, and between the guest OS and system runtime libraries so that they can dynamically take into account NUMA topology changes
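The polluter-pays idea behind Kyoto can be sketched in a few lines. The names, the permit unit (last-level-cache misses per accounting period), and the throttling policy below are illustrative assumptions, not the thesis's actual implementation:

```python
# Sketch of a "polluters pay" accounting loop: each VM books a pollution
# permit, i.e. a budget of cache misses it may inflict per period; VMs
# that exceed their permit are marked for throttling.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    permit: int              # allowed cache misses per accounting period
    observed_misses: int = 0 # misses measured during the current period
    throttled: bool = False

def account_period(vms):
    """Compare each VM's observed cache misses against its permit, mark
    over-polluting VMs for throttling, and reset the counters."""
    for vm in vms:
        vm.throttled = vm.observed_misses > vm.permit
        vm.observed_misses = 0   # start the next accounting period clean
    return [vm.name for vm in vms if vm.throttled]

vms = [VM("db", permit=1_000_000), VM("batch", permit=100_000)]
vms[1].observed_misses = 250_000   # "batch" pollutes beyond its permit
print(account_period(vms))          # → ['batch']
```

In a real hypervisor the `observed_misses` field would be fed from hardware performance counters, and "throttling" would translate into a CPU-quota or cache-partitioning action.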
45

Palopoli, Amedeo. "Containerization in Cloud Computing: performance analysis of virtualization architectures". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14818/.

Full text
Abstract
The growing adoption of the cloud is strongly driven by the emergence of technologies aimed at improving the development and deployment processes of enterprise-grade applications. The goal of this thesis is to analyze one of these solutions, called "containerization", and to assess in detail how this technology can be adopted in cloud infrastructures as an alternative to complementary solutions such as virtual machines. Until now, the traditional virtual-machine model has been the predominant solution on the market. The significant architectural difference that containers offer has led to the rapid adoption of this technology, since it greatly improves resource management and sharing and guarantees significant improvements in the provisioning time of individual instances. The thesis examines containerization from both an infrastructural and an application point of view. For the first aspect, performance is analyzed by comparing LXD, Docker, and KVM as hypervisors of the OpenStack cloud infrastructure, while the second concerns the development of enterprise-grade applications that must be deployed on a set of distributed servers. In that case, higher-level services such as orchestration are needed. Therefore, the performance of the following solutions is compared: Kubernetes, Docker Swarm, Apache Mesos, and Cattle.
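The provisioning-time comparisons this kind of study performs can be sketched with a small timing harness. The harness below is an assumption for illustration, not the thesis's actual test setup, and the launch commands in the comment are placeholders:

```python
# Minimal harness for measuring instance provisioning time: call a
# zero-argument launch routine several times and report mean/stdev.
import statistics
import time

def time_provisioning(launch, runs=5):
    """Call `launch` `runs` times and return the mean and standard
    deviation of its wall-clock duration in seconds."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        launch()                      # e.g. spawn and tear down an instance
        samples.append(time.perf_counter() - t0)
    return statistics.mean(samples), statistics.stdev(samples)

# Usage would wrap the real provisioning command, e.g.:
#   import subprocess
#   time_provisioning(lambda: subprocess.run(
#       ["docker", "run", "--rm", "alpine", "true"], check=True))
```

The same harness can be pointed at an LXD container, a Docker container, or a KVM guest, which is the kind of like-for-like comparison the thesis reports.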
46

Gao, Meihui. "Models and Methods for Network Function Virtualization (NFV) Architectures". Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0025/document.

Full text
Abstract
Due to the exponential growth of service demands, telecommunication networks are populated with a large and increasing variety of proprietary hardware appliances, which raises the cost and complexity of network management. To overcome this issue, the NFV paradigm has been proposed: it allows Virtual Network Functions (VNFs) to be allocated dynamically, providing flexible network services and thus reducing capital and operating costs. In this thesis, we focus on the VNF Placement and Routing (VNF-PR) problem, which seeks the locations of the VNFs that allocate resources optimally to serve the demands. From an optimization point of view, the problem can be modeled as the combination of a facility location problem (for VNF location and server dimensioning) and a network design problem (for demand routing). Both problems are widely studied in the literature, but their combination represents, to the best of our knowledge, a new challenge. We start from a realistic VNF-PR problem in order to understand the impact of different policies on the overall network management cost and performance. To this end, we extend the work in [1] by considering more realistic features and constraints of NFV infrastructures, and we propose a linear programming model and a math-heuristic to solve it. To better understand the problem structure and its properties, the second part of our work focuses on a theoretical study of the problem through a simplified, yet significant, variant. We provide computational complexity results for different graph topologies and capacity cases. We then propose two mathematical programming formulations and test them on a common testbed with more than 100 test instances under different capacity settings.
Finally, we address the scalability issue by proposing ILP-based constructive methods and heuristics that deal efficiently with large instances (up to 60 nodes and 1800 demands). We show that the proposed heuristics can efficiently solve medium-size instances (up to 30 nodes and 1000 demands) of challenging capacity cases and provide feasible solutions for large instances of the most difficult capacity cases, for which the models cannot find any solution even with significant computational time.
47

Delgado, Javier. "Scheduling Medical Application Workloads on Virtualized Computing Systems". FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/633.

Full text
Abstract
This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, they allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher-level services that abstract away infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, relieving them of the burden of deploying and managing these resources themselves. In this work, we focus on issues related to scheduling scientific workloads in virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: an in-depth analysis of the execution characteristics of the target applications when run in virtualized environments; a performance prediction methodology applicable to the target environment; and a scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality-of-service guarantees. In addressing these issues for our target user base (i.e., medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%.
Our scheduling methodology, tested with medical image processing workloads, is compared to two baseline scheduling solutions; we find that it outperforms them in terms of both the number of jobs processed and resource utilization by 20-30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
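The 15% average error quoted above is the kind of figure a mean absolute percentage error (MAPE) yields. A minimal version of that metric, as an illustration rather than necessarily the dissertation's exact error definition:

```python
# Mean absolute percentage error between predicted and observed
# execution times (both sequences of positive durations in seconds).
def mape(predicted, actual):
    return sum(abs(p - a) / a
               for p, a in zip(predicted, actual)) / len(actual) * 100

# Two jobs: one over-predicted by 10%, one under-predicted by 5%.
print(round(mape([110, 95], [100, 100]), 1))   # → 7.5
```

A scheduler can then use such predictions, inflated by the known error margin, to decide whether a job fits before its deadline.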
48

Vítek, Daniel. "Cloud computing s ohledem na technologické aspekty a změny v infrastruktuře". Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-72548.

Full text
Abstract
This thesis discusses the new way of delivering IT services over the Internet widely known as cloud computing. In its opening part, cloud computing is put into the historical context of the evolution of enterprise computing, and the dominant issues the IT department faces today are outlined. The thesis then deals with the components that make up the architecture of cloud computing and reviews the benefits and drawbacks an enterprise may face when adopting this new model. One of the primary aims of this thesis is to identify the impact of technology trends on cloud computing. The thesis brings together four major computing trends, namely virtualization, multi-tenant architecture, service-oriented architecture, and grid computing. Another aim is to focus on two trends related to IT infrastructure that will lead to fundamental changes in the IT industry. The first is the emergence of extremely large-scale data centers in low-cost locations, which can serve a tremendous number of customers and achieve considerable economies of scale. The second trend the thesis points out is the shift from multi-purpose all-in-one computers to a wide range of mobile devices dedicated to specific user needs. The last aim of this thesis is to clarify the economic impact of cloud computing in terms of costs and changes in business models. The thesis concludes by evaluating the current adoption of cloud computing and predicting its future trend.
49

Stiti, Oussama. "Étude de l'Urbanisation des Accès Virtuels et Stratégie de Métamorphose de Réseaux". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066632/document.

Full text
Abstract
Virtualization was originally introduced in networks to reduce maintenance and deployment costs. It has experienced tremendous growth and reshaped the landscape of IT and telecom networks. Virtualization permits the sharing of physical resources to instantiate isolated virtual machines that nevertheless draw their resources from the same physical hardware. More recently, NFV (Network Functions Virtualization) appeared; it allows entire classes of network-node functions to be virtualized as building blocks that can be connected to create communication services. In this thesis we virtualize access network nodes, namely Wi-Fi access points. Wi-Fi has become a key technology for mobile operators, allowing them to offload some of their customers' data traffic via hotspots. The problem that arises in such a mechanism is that existing wireless standards and the connection management software of mobile devices were not developed for this purpose. The Hotspot2.0 standard was created to overcome this limitation by making the Wi-Fi experience similar to cellular in terms of roaming, transparency, and security. In our work, we applied the concept of NFV by virtualizing these new-generation Wi-Fi access points. One of the problems we faced is the high security required by such a standard, including the provisioning of client credentials in public areas. In this thesis we propose an innovative architecture for delivering these credentials through NFC terminals. The same terminals are also used for access-point urbanization, allowing users to create their own virtual Wi-Fi access points on the fly. The last aspect of this thesis relates to the management of virtualized entities, which changes the communication patterns of legacy networks. In this context, SDN (Software Defined Networking) emerged in data centers to redefine the way we think about networks and is designed for virtualized environments.
In this thesis we bring SDN to the edge of the network in our virtual Wi-Fi access points. More than a new paradigm of network communications, we will see that NFV/SDN in Wi-Fi networks will, in the near future, make Wi-Fi networks more flexible, open, and scalable.
50

Rosa, Raphael Vicente 1988. "Uma arquitetura para aprovisionamento de redes virtuais definidas por software em redes de data center". [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275513.

Full text
Abstract
Advisor: Edmundo Roberto Mauro Madeira
Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação
Abstract: Nowadays, infrastructure providers (InPs) allocate virtualized resources, computational and network, of their data centers to service providers (SPs) in the form of virtual data centers (VDCs). Aiming to maximize revenue and use the resources of their virtualized data centers efficiently, InPs face the problem of optimally allocating multiple VDCs. Even if the allocation of virtual machines to servers can be done with well-known existing techniques and algorithms, cloud computing applications still suffer performance limitations imposed by the bottleneck of network resource underutilization, explicitly defined by bandwidth and latency constraints. Based on the Software Defined Networking paradigm, we apply the Network-as-a-Service model to build a data center network architecture well suited to the virtual network embedding problem. We build services over the control plane of the RouteFlow platform that perform the allocation of virtual data center networks while optimizing the utilization of network infrastructure resources. This task is performed by the algorithm proposed in this dissertation, which is based on information aggregated from a virtual routing plane running the BGP protocol and a folded-Clos physical network topology based on OpenFlow 1.3 devices. The experimental evaluation shows that the proposed algorithm performs efficient load balancing on the data center network and altogether yields better utilization of the physical resources. The proposed bandwidth allocation strategy exhibits simplicity and flexibility in serving different traffic communication patterns while yielding an elastic, load-balanced network. Finally, we argue that the proposed algorithm and architecture can be extended to achieve the performance, scalability, and many other features required of data center network architectures.
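The load-balancing intuition in a folded-Clos fabric can be sketched as follows: among the equal-cost uplinks of a leaf switch, each new virtual-network flow is routed over the currently least-loaded one. This is an illustrative simplification of the idea, with made-up names, not the dissertation's actual allocation algorithm:

```python
# Place flows on equal-cost uplinks, largest bandwidth requests first,
# always choosing the currently least-loaded uplink.
def place_flows(flows, uplinks):
    """flows: dict flow_id -> requested bandwidth (Mb/s);
    uplinks: list of uplink ids. Returns (flow->uplink map, final loads)."""
    load = {u: 0.0 for u in uplinks}
    assignment = {}
    for fid, bw in sorted(flows.items(), key=lambda kv: -kv[1]):
        u = min(load, key=load.get)        # least-loaded equal-cost uplink
        assignment[fid] = u
        load[u] += bw
    return assignment, load

flows = {"f1": 400, "f2": 300, "f3": 300}
assignment, load = place_flows(flows, ["u1", "u2"])
print(load)   # → {'u1': 400.0, 'u2': 600.0}
```

In an SDN setting, the controller would install the resulting per-flow rules on the OpenFlow leaf switches and update them as the aggregated routing view changes.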
Master's degree
Computer Science
Master in Computer Science
