
Dissertations / Theses on the topic 'Hypervizor hypervisor'


Consult the top 50 dissertations / theses for your research on the topic 'Hypervizor hypervisor.'


1

Škultéty, Erik. "Aplikační rozhraní pro administraci projektu Libvirt." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255346.

Full text
Abstract:
This thesis deals with virtualization, specifically with the libvirt virtualization library, whose goal is to manage virtual machines and to support various types of hypervisors and virtualization solutions in a uniform way that is transparent to the user. A substantial part of libvirt's functionality is implemented in the background by the libvirtd daemon. Although the libvirtd daemon provides services for managing virtual machines, it does not allow management of itself, apart from changing parameter values in its configuration file. The standard way to change a setting is thus to edit the configuration file and then restart the daemon. Since this approach only changes the persistent configuration, and restarting the daemon is not always an optimal solution, the idea arose of an administrative interface for the libvirt library that would allow managing the daemon at runtime. The main contribution of this thesis is the design and description of the implementation of an application interface for administering the libvirt library. Specifically, this thesis covers interfaces for configuring the number of worker threads, setting the level and filtering parameters of the logging subsystem, and managing connected clients on the libvirtd daemon side.
APA, Harvard, Vancouver, ISO, and other styles
2

Rybák, Martin. "Konsolidace serverů za použití virtualizace." Master's thesis, Vysoká škola ekonomická v Praze, 2007. http://www.nusl.cz/ntk/nusl-77173.

Full text
Abstract:
The thesis deals with the complexity of current IT. Server consolidation using virtualization is an answer to the ever-growing complexity of server infrastructure. The thesis summarizes the basic aspects of this issue, compares the benefits, and analyzes the problems that can emerge. Further, it outlines the consolidation journey, compares different types of virtualization, and elaborates on the contributions of virtualization to corporate IS/ICT and the flexibility of the resulting solution. It analyzes the present state of the market for virtualization tools, describes and compares products of the market's key players, and examines new opportunities for virtualization, e.g. virtual desktop infrastructure. Finally, it suggests an approach to a consolidation project in practice and outlines basic steps that should not be omitted. Besides a comprehensive view of the topic, the thesis also presents the benefits, risks, and open questions, and at least partly answers those questions.
3

Krempa, Peter. "Analysis of Entropy Levels in the Entropy Pool of Random Number Generator." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236179.

Full text
Abstract:
In computer science, the term entropy usually refers to a random stream of data. This thesis briefly summarizes methods of generating random data and describes the random number generator contained in the Linux kernel. It then determines the bit rate at which this generator produces random data in virtualized environments provided by various hypervisors. The thesis describes the problems of low random data generator performance in virtual environments and proposes an approach to solving them. An implementation of the proposed approach is then outlined, subjected to tests, and its results are compared with the original system. When connected to a powerful random data generator, the entropy distribution system can further improve the amount of entropy in the system kernel by several orders of magnitude.
4

Isenstierna, Tobias, and Stefan Popovic. "Computer systems in airborne radar : Virtualization and load balancing of nodes." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18300.

Full text
Abstract:
Introduction. The hardware used in today's radar systems evolves at an increasing rate. For existing radar software that relies on specific drivers or hardware, this quickly becomes a problem: when the required hardware is outdated or no longer produced, compatibility problems emerge between new hardware and existing software. This research explores whether virtualization technology can help solve this problem. Would it be possible to address the compatibility problem with hypervisor solutions while also maintaining high performance? Objectives. The aim of this research is to explore virtualization technology, with a focus on hypervisors, to improve the way hardware and software cooperate within a radar system. The research investigates whether it is possible to solve compatibility problems between new hardware and existing software, while also analysing the performance of virtualized solutions compared to non-virtualized ones. Methods. The proposed method is an experiment in which the two hypervisors Xen and KVM are analysed. The hypervisors run on two different systems. A native environment with similarities to a radar system is built and then compared with the same system with hypervisor solutions applied. Research in the area of virtualization is conducted with a focus on security, hypervisor features, and compatibility. Results. The results present a proposed virtual environment setup with the hypervisors installed. To address the compatibility issue, an old operating system is used to prove that the implemented virtualization works. Finally, performance results are presented for the native environment compared against a virtual environment. Conclusions. From the benchmark results, we can see that individual performance may vary, which is to be expected on different hardware. A virtual setup has been built, including the Xen and KVM hypervisors, together with NAS communication. By running an old operating system as a virtual guest, compatibility has been shown to exist between software and hardware using KVM as the virtual solution. From the results gathered, KVM seems like a good solution to investigate further.
5

Cardace, Antonio. "UMView, a Userspace Hypervisor Implementation." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13184/.

Full text
Abstract:
UMView is a partial virtual machine and userspace hypervisor capable of intercepting system calls and modifying their behavior according to the calling process's view. To provide flexibility and modularity, UMView supports modules loadable at runtime using a plugin architecture. UMView is, in particular, an implementation of the View-OS concept, which negates the global-view assumption so deeply entrenched in the world of operating systems and virtualization.
6

Klemperer, Peter Friedrich. "Efficient Hypervisor Based Malware Detection." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/466.

Full text
Abstract:
Recent years have seen an uptick in master boot record (MBR) based rootkits that load before the Windows operating system and subvert the operating system's own procedures. As such, MBR rootkits are difficult to counter with operating-system-based antivirus software that runs at the same privilege level as the rootkits. Hypervisors operate at a higher privilege level than the guests they manage, creating a high-ground position in the host. This high-ground position can be exploited to perform security checks on the virtual machine guests, where the checking software is isolated from guest-based viruses. The efficient introspection system described in this thesis targets existing virtualized systems to improve security with real-time, concurrent memory introspection capabilities. Efficient introspection decouples memory introspection from virtual machine guest execution and establishes coherent and consistent memory views between the host and the running guest, while maintaining normal guest operation. Existing introspection systems have provided one or two of these properties, but not all three at once. This thesis presents a new concurrent-computing approach, high-performance memory snapshotting, that accelerates hypervisor-based introspection of virtual machine guest memory and combines all three elements to improve performance and security. Memory snapshots create a coherent and consistent memory view of the guest that can be shared with the independently running introspection application. Three memory snapshotting mechanisms are presented and evaluated for their impact on normal guest operation. Existing introspection systems and security protection techniques that were previously dismissed as too slow are now enabled by efficient introspection.
This thesis explains why existing introspection systems are inadequate, describes how existing system performance can be improved, evaluates an efficient introspection prototype on both applications and microbenchmarks, and discusses two potential security applications that are enabled by efficient introspection. These applications point to efficient introspection’s utility for supporting useful security applications.
7

McAdams, Sean. "Virtualization Components of the Modern Hypervisor." UNF Digital Commons, 2015. http://digitalcommons.unf.edu/etd/599.

Full text
Abstract:
Virtualization is the foundation on which cloud services build their business. It supports the infrastructure of the largest companies around the globe and is a key component for scaling software in the ever-growing technology industry. If companies decide to use virtualization as part of their infrastructure, it is important for them to have a quick and reliable way to choose a virtualization technology and tweak its performance to fit their intended usage. Unfortunately, while many papers discuss and test the performance of various virtualization systems, most of these performance tests do not take into account components that can be configured to improve performance for certain scenarios. This study compares how three hypervisors (VMware vSphere, Citrix XenServer, and KVM) perform under different sets of configurations, and identifies which system workloads are ideal for those configurations. The study also provides a means by which to compare different configurations with each other, so that implementers of these technologies can make informed decisions about which components to enable in their current or future systems.
8

Shah, Tawfiq M. "Radium: Secure Policy Engine in Hypervisor." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804971/.

Full text
Abstract:
The basis of today’s security systems is the trust and confidence that the system will behave as expected and is in a known good trusted state. The trust is built from hardware and software elements that generate a chain of trust originating from a trusted known entity. Leveraging hardware, software, and mandatory access control policy technology is needed to create a trusted measurement environment. Employing a control layer (hypervisor or microkernel) able to enforce a fine-grained access control policy with hypercall granularity across multiple guest virtual domains can ensure that any malicious environment is contained. In my research, I propose the use of Radium's Asynchronous Root of Trust Measurement (ARTM) capability, incorporated with a secure mandatory access control policy engine, to mitigate the limitations of current hardware TPM solutions. By employing ARTM we can leverage asynchronous boot, launch, and use, with the hypervisor proving its state and the integrity of the secure policy. My solution uses the Radium (Race-free on-demand integrity architecture) architecture, which allows more detailed measurement of applications at run time with greater semantic knowledge of the measured environments. Radium's incorporation of a secure access control policy engine gives it the ability to limit or empower a virtual domain system. It can also enable a service-oriented model of guest virtual domains that can perform certain operations, such as introspecting other virtual domain systems to determine their integrity or system state and report it to a remote entity.
9

Suryanarayana, Vidya. "Impact of hypervisor cache locking on performance." Diss., Wichita State University, 2013. http://hdl.handle.net/10057/10616.

Full text
Abstract:
Server virtualization has become a modern trend in various industries. Industries are resorting to the deployment of virtualized servers in order to cut the cost of additional, expensive hardware and to consolidate servers on minimal hardware for easier management and maintenance. Virtualized servers are often connected to disk arrays in settings ranging from small and medium businesses to large data centers. In such a setup, assuring low latency of data access from the disk array plays an important role in improving the performance and robustness of the overall system. Caching techniques have been researched and used in the past on traditional processors to reduce the number of memory accesses, and they have proven benefits in improving the response times of applications. The research done in this dissertation explores caching on the hypervisor and analyzes the performance of a data cache locking technique in hypervisor caches. It aims at reducing the Input/Output (I/O) latency in a server-virtualized Storage Area Network (SAN) setup, thereby increasing the performance of applications running on the virtualized servers. The author introduces a miss table that determines which blocks of data in the hypervisor cache need to be locked. The way cache locking method is used for locking, such that only selected lines of the cache are locked (not the entire cache). The proposed cache locking technique is then evaluated with the introduction of a small victim buffer cache and a probability-based cache replacement algorithm. The Valgrind simulation tool was used to generate memory traces from virtual machines (VMs). Experimental results show an improved cache hit percentage and a considerable reduction in I/O response time with the proposed cache locking technique, compared to results without hypervisor cache locking.
Thesis (Ph.D.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering and Computer Science
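The mechanism the abstract describes, a miss table that nominates frequently missing blocks for locking, combined with way locking so that only some lines of each set can be pinned, might be sketched roughly as follows. This is an illustrative toy in Python; the class name, thresholds, and eviction details are assumptions for the sketch, not the dissertation's implementation.

```python
class WayLockedSet:
    """One set of a 4-way cache in which at most `locked_ways` ways can
    be pinned. Blocks whose miss count (tracked in a 'miss table')
    reaches a threshold are locked and never chosen as eviction
    victims. Illustrative sketch only."""

    def __init__(self, ways=4, locked_ways=2, lock_threshold=3):
        self.ways = ways
        self.locked_ways = locked_ways
        self.lock_threshold = lock_threshold
        self.lines = []       # [block, locked] pairs in LRU order (front = LRU)
        self.miss_table = {}  # block -> miss count

    def access(self, block):
        for entry in self.lines:
            if entry[0] == block:      # hit: move to the MRU position
                self.lines.remove(entry)
                self.lines.append(entry)
                return True
        # Miss: record it, then decide whether this block earns a lock.
        self.miss_table[block] = self.miss_table.get(block, 0) + 1
        locked = (self.miss_table[block] >= self.lock_threshold
                  and sum(e[1] for e in self.lines) < self.locked_ways)
        if len(self.lines) >= self.ways:
            # Evict the LRU *unlocked* line, leaving pinned lines alone.
            victim = next(e for e in self.lines if not e[1])
            self.lines.remove(victim)
        self.lines.append([block, locked])
        return False

cache_set = WayLockedSet()
for _ in range(3):
    cache_set.access("hot")              # "hot" keeps getting evicted...
    for b in ("a", "b", "c", "d"):
        cache_set.access(b)              # ...by this scan, until it is locked
```

After three rounds the repeated misses on "hot" cross the threshold, so it is pinned and later scans can no longer flush it.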
10

Chan, Lawrence L. "Modeling virtualized application performance from hypervisor counters." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66404.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Includes bibliographical references (p. 61-64).
Managing a virtualized datacenter has grown more challenging: each virtual machine's service level agreement (SLA) must be satisfied, yet the service levels are generally inaccessible to the hypervisor. To aid in VM consolidation and service level assurance, we develop a modeling technique that generates accurate models of service level. Using only hypervisor counters as inputs, we train models to predict application response times and SLA violations. To collect training data, we conduct a simulation phase that stresses the application across many workload levels and collects each response time. Simultaneously, hypervisor performance counters are collected. Afterwards, the data is synchronized and used as training data in ensemble-based genetic programming for symbolic regression. This modeling technique is quite efficient at dealing with high-dimensional datasets, and it also generates interpretable models. After training models for web servers and virtual desktops, we test generalization across different content. In our experiments, we found that our technique could distill small subsets of important hypervisor counters from over 700 counters. This was tested for both Apache web servers and Windows-based virtual desktop infrastructures. For the web servers, we accurately modeled the breakdown points as well as the service levels. Our models could predict service levels with 90.5% accuracy on a test set. On an untrained scenario with completely different contending content, our models predict service levels with 70% accuracy, but predict SLA violations with 92.7% accuracy. For the virtual desktops, on test scenarios similar to training scenarios, model accuracy was 97.6%. Our main contribution is demonstrating that a completely data-driven approach to application performance modeling can be successful.
In contrast to many other works, our models do not use workload level or response times as inputs, but nevertheless predict service levels accurately. Our approach also lets the models determine which inputs are important to a particular model's performance, rather than hand-choosing a few inputs to train on.
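The thesis trains its models with ensemble-based genetic programming for symbolic regression; as a much simpler stand-in, the counters-in, service-level-out pipeline can be illustrated with ordinary least squares on synthetic data. Every name and number below is hypothetical, chosen only to show the shape of the approach.

```python
import numpy as np

# Hypothetical training data: each row is a sample of hypervisor
# counters (e.g. CPU ready time, disk I/O rate, page faults) and the
# target is the measured application response time for that interval.
rng = np.random.default_rng(0)
counters = rng.random((200, 3))            # 200 samples, 3 counters
true_w = np.array([5.0, 0.5, 2.0])         # hidden "ground truth" weights
latency = counters @ true_w + 0.01 * rng.standard_normal(200)

# Fit: solve min ||counters @ w - latency|| for w (linear stand-in for
# the thesis's symbolic-regression models).
w, *_ = np.linalg.lstsq(counters, latency, rcond=None)

# Flag an SLA violation whenever the modeled latency crosses a
# (hypothetical) threshold.
sla_threshold = 4.0
predicted = counters @ w
violations = predicted > sla_threshold
```

The same counters-only idea carries over: the model never sees workload level or true response time at prediction time, only counter values.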
11

Özcan, Mehmet Batuhan, and Gabriel Iro. "PARAVIRTUALIZATION IMPLEMENTATION IN UBUNTU WITH XEN HYPERVISOR." Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2011.

Full text
Abstract:
Recent trends in technology have seen companies respond to the growing need for efficiency, cost reduction, scalability, less disposal of outdated electronic components, and reduced health effects of our daily use of electronics, and virtualization is one important aspect of this. The need to share resources, use less workspace, and reduce the cost of purchase and manufacturing are all achievements of virtualization techniques. For some people, setting up a computer to run different virtual machines at the same time can be difficult, especially without prior basic knowledge of working in a terminal environment, and hiring skilled personnel to do the job can be expensive. The motivation for this thesis is to help people with little or no basic knowledge set up a virtual machine with the Ubuntu operating system on the Xen hypervisor.
12

Govindharajan, Hariprasad. "Porting Linux to a Hypervisor Based Embedded System." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-205568.

Full text
Abstract:
Virtualization is used to improve overall system security, isolate the hardware, and properly manage the available system resources. The main purpose of using virtualization in embedded systems is to increase system security by isolating the underlying hardware and by providing multiple secure execution environments for the guests. A hypervisor, also called a Virtual Machine Monitor, is responsible for mapping virtual resources to physical resources. Hypervisor-based virtualization is gaining popularity in embedded systems because of security-focused, mission-critical applications. Linux was chosen because of its popular use in embedded systems. In this thesis, we list the modifications required to port a Linux kernel onto a hypervisor. This Linux kernel has already been ported to an ARM CPU, and the hypervisor in question has been developed by the Swedish Institute of Computer Science (SICS).
13

Douglas, Heradon. "Thin Hypervisor-Based Security Architectures for Embedded Platforms." Thesis, SICS, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:ri:diva-23667.

Full text
Abstract:
Virtualization has grown increasingly popular, thanks to its benefits of isolation, management, and utilization, supported by hardware advances. It is also receiving attention for its potential to support security, through hypervisor-based services and advanced protections supplied to guests. Today, virtualization is even making inroads in the embedded space, and embedded systems, with their security needs, have already started to benefit from virtualization's security potential. In this thesis, we investigate the possibilities for thin hypervisor-based security on embedded platforms. In addition to a significant background study, we present the implementation of a low-footprint, thin hypervisor capable of providing security protections to a single FreeRTOS guest kernel on ARM. Backed by performance test results, our hypervisor provides security to a formerly unsecured kernel with minimal performance overhead, and represents a first step in a greater research effort into the security advantages and possibilities of embedded thin hypervisors. Our results show that thin hypervisors are both possible and beneficial even on limited embedded systems, and they set the stage for more advanced investigations, implementations, and security applications in the future.
14

Hugo, Andra-Ecaterina. "Composability of parallel codes on heterogeneous architectures." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0373/document.

Full text
Abstract:
To face ever more demanding requirements in terms of accuracy and speed of scientific simulations, the High Performance Computing community is constantly increasing the demands in terms of parallelism, adding tremendous value to parallel libraries strongly optimized for highly complex architectures. Enabling HPC applications to perform efficiently when invoking multiple parallel libraries simultaneously is a great challenge. Even if a uniform runtime system is used underneath, scheduling tasks or threads coming from different libraries over the same set of hardware resources introduces many issues, such as resource oversubscription, undesirable cache flushes, or memory bus contention. In this thesis, we present an extension of StarPU, a runtime system specifically designed for heterogeneous architectures, that allows multiple parallel codes to run concurrently with minimal interference. Such parallel codes run within scheduling contexts that provide confined execution environments which can be used to partition computing resources. Scheduling contexts can be dynamically resized to optimize the allocation of computing resources among concurrently running libraries. We introduced a hypervisor that automatically expands or shrinks contexts using feedback from the runtime system (e.g. resource utilization). We demonstrated the relevance of this approach by extending an existing generic sparse direct solver (qr mumps) to use these mechanisms, and introduced a new decomposition method based on proportional mapping that is used to build the scheduling contexts. In order to cope with the very irregular behavior of the application, the hypervisor manages the allocation of resources dynamically. By means of the scheduling contexts and the hypervisor, we improved the locality and thus the overall performance of the solver.
15

Bolignano, Pauline. "Formal models and verification of memory management in a hypervisor." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S026/document.

Full text
Abstract:
A hypervisor is software which virtualizes hardware resources, allowing several guest operating systems to run simultaneously on the same machine. Since the hypervisor manages access to resources, a bug can be critical for the guest OSes. In this thesis, we focus on the memory isolation properties of a type 1 hypervisor, which virtualizes memory using Shadow Page Tables. More precisely, we present a low-level and a high-level model of the hypervisor, and we formally prove that guest OSes cannot access or tamper with private data of other guests unless they have the authorization to do so. We use the language and the proof assistant developed by Prove & Run. There are many optimizations in the low-level model, which make the data structures and algorithms complex; it is therefore difficult to reason on such a model. To circumvent this issue, we design an abstract model in which it is easier to reason. We prove properties on the abstract model, and we prove its correspondence with the low-level model, in such a way that properties proved on the abstract model also hold for the low-level model. The correspondence proof is valid only for low-level states which respect some properties; we prove that these properties are invariants of the low-level system. The proof can thus be divided into three parts: the proof of invariant preservation at the low level, the proof of correspondence between the abstract and low-level models, and the proof of the security properties at the abstract level.
16

Madhugiri, Shamsundar Abhiram. "Probability based cache replacement algorithm for the hypervisor cache." Thesis, Wichita State University, 2012. http://hdl.handle.net/10057/5532.

Full text
Abstract:
Virtualization is one of the key technologies which help with server consolidation, disaster recovery, and dynamic load balancing. The ratio of virtual machines to physical machines can be as high as 1:10, and this makes caching a key parameter affecting the performance of virtualization. Researchers have proposed the idea of an exclusive hypervisor cache at the Virtual Machine Monitor (VMM), which could ease congestion and also improve the performance of the caching mechanism. Traditionally, the Least Recently Used (LRU) algorithm is the cache replacement policy used in most caches. This algorithm has drawbacks, such as a lack of scan resistance, and hence does not form an ideal candidate for the hypervisor cache. To overcome this, this research focuses on the development of a new algorithm, the "Probability Based Cache Replacement Algorithm". This algorithm does not evict memory addresses based only on the recency of memory traces; it also considers the access history of all addresses, making it scan resistant. In this research, a hypervisor cache is simulated using a C program and different workloads are tested in order to validate our proposal. This research shows a considerable improvement in performance using the Probability Based Cache Replacement Algorithm in comparison with the traditional LRU algorithm.
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering and Computer Science
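The thesis's exact algorithm is simulated in C and is not reproduced here; the core idea, evicting by access history rather than recency alone, can be sketched in a few lines of Python. The class below is an illustrative toy, not the thesis's implementation.

```python
class ProbabilityCache:
    """Toy cache that evicts the block with the fewest recorded
    accesses instead of the least recently used one, so a one-off
    scan cannot flush frequently used blocks. A sketch of the idea
    only, not the thesis's actual algorithm."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}          # block -> access count (its history)

    def access(self, block):
        if block in self.store:
            self.store[block] += 1
            return True          # hit
        if len(self.store) >= self.capacity:
            # Evict the block with the smallest access history.
            victim = min(self.store, key=self.store.get)
            del self.store[victim]
        self.store[block] = 1
        return False             # miss

cache = ProbabilityCache(2)
cache.access("a"); cache.access("a"); cache.access("b")
cache.access("c")   # evicts "b" (1 access), keeping "a" (2 accesses)
assert "a" in cache.store and "c" in cache.store
```

Under pure LRU the same sequence would have evicted "a", the most frequently used block, which is exactly the scan-resistance weakness the thesis targets.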
17

Nyquist, Johan, and Alexander Manfredsson. "Jämförelse av Hypervisor & Zoner : Belastningstester vid drift av webbservrar." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-28576.

Full text
Abstract:
Virtualisering av datorer rent generellt innebär att man delar upp hela eller delar av en maskinkonfiguration i flera exekveringsmiljöer. Det är inte bara datorn i sig som kan virtualiseras utan även delar av det, såsom minnen, lagring och nätverk. Virtualisering används ofta för att kunna nyttja systemets resurser mer effektivt. En hypervisor fungerar som ett lager mellan operativsystemet och den underliggande hårdvaran. Med en hypervisor har virtuella maskiner sitt egna operativsystems kärna. En annan teknik som bortser från detta mellanlager kallas zoner. Zoner är en naturlig del av operativsystemet och alla instanser delar på samma kärna, vilket inte ger någon extra overhead. Problemet är att hypervisorn är en resurskrävande teknik. Genom att använda zoner kan detta problem undkommas genom att ta bort hypervisorlagret och istället köra med instanser som kommunicerar direkt med operativsystemets kärna. Detta är teoretiskt grundande och ingen tidigare forskning har utförts, därmed påkallades denna utredning. För att belysa problemet använde vi oss av Apache som webbserver. Verktyget Httperf användes för att kunna utföra belastningstester mot webbservern. Genom att göra detta kunde vi identifiera att den virtualiserade servern presterade sämre än en fysisk server (referensmaskin). Även att den nyare tekniken zoner bidrar till lägre overhead, vilket gör att systemet presterar bättre än med den traditionella hypervisorn. För att styrka vår teori utfördes två tester. Det första testet bestod utav en virtualiserad server, andra testet bestod av tre virtuella servrar. Anledningen var att se hur de olika teknikerna presterade vid olika scenarion. Det visade sig i båda fallen att zoner presterade bättre och att det inte tappade lika mycket i prestanda i förhållande till referensmaskinerna.
Virtualization of computers in general means that the whole or parts of a machine configuration is split into multiple execution environments. It is not just the computer itself that can be virtualized, but also resources such as memory, storage and networking. Virtualization is often used to utilize system resources more efficiently. A hypervisor acts as a layer between the operating system and the underlying hardware. With a hypervisor a virtual machine has its own operating system kernel. Another technique that doesn't use this middle layer is called zones. Zones are a natural part of the operating system and all instances share the same kernel, which does not add any overhead. The problem with hypervisors is that they are a resource-demanding technique. The advantage of zones is that this problem should be avoidable by removing the hypervisor layer and instead running instances that communicate directly with the operating system kernel. This is just a theoretical foundation; no previous research had been done, which resulted in this investigation. To illustrate the problem we used Apache as a web server. Httperf was used as a tool to benchmark the web server. By doing this we were able to identify that the virtualized server did not perform quite as well as a physical server. Also, the newer technique (zones) did contribute lower overhead, making the system perform better than the traditional hypervisor. In order to prove our theory two tests were performed. The first test consisted of one virtual server and the other test consisted of three virtual servers. The reason behind this was to see how the different techniques performed in different scenarios. In both cases we found that zones performed better and did not drop as much performance in relation to our reference machines.
APA, Harvard, Vancouver, ISO, and other styles
18

Evripidou, Christos. "Scheduling for mixed-criticality hypervisor systems in the automotive domain." Thesis, University of York, 2016. http://etheses.whiterose.ac.uk/20380/.

Full text
Abstract:
This thesis focuses on scheduling for hypervisor systems in the automotive domain. Current practices are primarily implementation-agnostic or are limited by lack of visibility during the execution of partitions. The tasks executed within the partitions are classified as event-triggered or time-triggered. A scheduling model is developed using a pair of a deferrable server and a periodic server per partition to provide low latency for event-triggered tasks and maximising utilisation. The developed approach enforces temporal isolation between partitions and ensures that time-triggered tasks do not suffer from starvation. The scheduling model was extended to support three criticality levels with two degraded modes. The first degraded mode provides the partitions with additional capacity by trading-off low latency of event-driven tasks with lower overheads and utilisation. Both models were evaluated by forming a case study using real ECU application code. A second case study was formed inspired from the Olympus Attitude and Orbital Control System (AOCS) to further evaluate the proposed mixed-criticality model. To conclude, the contributions of this thesis are addressed with respect to the research hypothesis and possible avenues for future work are identified.
APA, Harvard, Vancouver, ISO, and other styles
19

Do, Viktor. "Security Services on an Optimized Thin Hypervisor for Embedded Systems." Thesis, SICS, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:ri:diva-23605.

Full text
Abstract:
Virtualization has been used in computer servers for a long time as a means to improve utilization, isolation and management. In recent years, embedded devices have become more powerful, increasingly connected and able to run applications on open source commodity operating systems. It only seems natural to apply these virtualization techniques on embedded systems, but with another objective. In computer servers, the main goal was to share the powerful computers with multiple guests to maximize utilization. In embedded systems the needs are different. Instead of utilization, virtualization can be used to support and increase security by providing isolation and multiple secure execution environments for its guests. This thesis presents the design and implementation of a security application, and demonstrates how a thin software virtualization layer developed by SICS can be used to increase the security for a single FreeRTOS guest on an ARM platform. In addition to this, the thin hypervisor was also analyzed for improvements in respect to footprint and overall performance. The selected improvements were then applied and verified with profiling tools and benchmark tests. Our results show that a thin hypervisor can be a very flexible and efficient software solution to provide a secure and isolated execution environment for security critical applications. The applied optimizations reduced the footprint of the hypervisor by over 52%, while keeping the performance overhead at a manageable level.
APA, Harvard, Vancouver, ISO, and other styles
20

Davidsson, Göran. "Evaluation of a Hypervisor Performance in a Distributed Embedded System." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-31817.

Full text
Abstract:
In modern industrial systems cloud computing plays an important role. Using this technology, services are given to customers more efficiently with respect to cost and performance. The main idea of using the cloud is that a platform is partitioned among several end users while providing complete isolation among the services. Therefore, the resources are used more effectively instead of assigning a complete platform to only one end user. In order to provide such a partitioning among the services, several techniques are already being used. One of the prominent techniques is virtualization. Virtualization can be done with the use of a hypervisor, which is a software layer that allows running multiple operating systems on one platform. This thesis aims at measuring the performance of a hypervisor in a distributed embedded system. The metrics for measuring the performance are delay, jitter and throughput, which are influenced by different architectures and settings. The thesis also aims at finding out whether it is possible to predict the delay, jitter and throughput depending on the number of virtual machines, the number of switches and the amount of network load. Finally, the thesis investigates whether different settings on the virtual machines influence the performance. For these purposes, a network consisting of two hypervisors and one or two network switches was set up. On each hypervisor several virtual machines are installed. Different tools for measurement of network performance, such as Iperf and Jperf, are installed on the virtual machines. The results show that network load is the main factor influencing the delay, jitter and throughput in the network. The number of switches influences the results to some degree due to the processing delay. The number of virtual machines has no or very low influence on the network performance.
Finally, the results show that alternating the configurations of the virtual machines produces no observable differences in delay, jitter or throughput, albeit with the limited changes in settings made for this experiment.
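The three metrics measured above can be computed from per-packet timestamps roughly as below. This is an illustrative sketch, not the thesis's actual tooling (which used Iperf and Jperf); the smoothing constant follows the RFC 3550 interarrival-jitter estimator:

```python
def network_stats(send_times, recv_times, payload_bytes):
    """Compute mean one-way delay (s), RFC 3550-style smoothed jitter
    (s), and throughput (bytes/s) from per-packet timestamps.
    Assumes synchronized clocks and one fixed-size payload per packet."""
    delays = [r - s for s, r in zip(send_times, recv_times)]
    # Jitter: exponentially smoothed absolute delay variation (gain 1/16).
    jitter = 0.0
    for prev, cur in zip(delays, delays[1:]):
        jitter += (abs(cur - prev) - jitter) / 16
    duration = recv_times[-1] - send_times[0]
    throughput = payload_bytes * len(delays) / duration
    return sum(delays) / len(delays), jitter, throughput
```

With a constant per-packet delay the jitter estimate stays at (numerically) zero, while any delay variation introduced by switches or load raises it.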
APA, Harvard, Vancouver, ISO, and other styles
21

Zhang, Yu. "Performance Improvement of Hypervisors for HPC Workload." Universitätsverlag Chemnitz, 2018. https://monarch.qucosa.de/id/qucosa%3A31825.

Full text
Abstract:
Virtualization technology has many excellent features beneficial for today's high-performance computing (HPC). It enables more flexible and effective utilization of computing resources. However, a major barrier to its wide acceptance in the HPC domain lies in the relatively large performance loss for workloads. Among the major performance-influencing factors, the memory management subsystem for virtual machines is a potential source of performance loss. Many efforts have been invested in seeking solutions to reduce the performance overhead of the guest memory address translation process. This work contributes two novel solutions, “DPMS” and “STDP”. Both are presented conceptually and implemented partially for the KVM hypervisor. The benchmark results for DPMS show that the performance of a number of workloads that are sensitive to paging methods can be more or less improved through the adoption of this solution. STDP illustrates that it is feasible to reduce the performance overhead of second-dimension paging for those workloads that cannot make good use of the TLB.
Virtualisierungstechnologie verfügt über viele hervorragende Eigenschaften, die für das heutige Hochleistungsrechnen von Vorteil sind. Sie ermöglicht eine flexiblere und effektivere Nutzung der Rechenressourcen. Ein Haupthindernis für die Akzeptanz in der HPC-Domäne liegt jedoch in dem relativ großen Leistungsverlust für Workloads. Von den wichtigsten leistungsbeeinflussenden Faktoren ist die Speicherverwaltung für virtuelle Maschinen eine potenzielle Quelle der Leistungsverluste. Es wurden viele Anstrengungen unternommen, um Lösungen zu finden, die den Leistungsaufwand beim Konvertieren von Gastspeicheradressen reduzieren. Diese Arbeit liefert zwei neue Lösungen, „DPMS“ und „STDP“. Beide werden konzeptionell vorgestellt und teilweise für einen Hypervisor (KVM) implementiert. Die Benchmark-Ergebnisse für DPMS zeigen, dass die Leistung für eine Reihe von pagingverfahren-sensitiven Workloads durch die Einführung dieser Lösung mehr oder weniger verbessert werden kann. STDP veranschaulicht, dass es möglich ist, den Leistungsaufwand im zweidimensionalen Paging für diejenigen Workloads zu reduzieren, die die vom TLB angebotenen Vorteile nicht gut ausnutzen können.
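The cost of the "second dimension" of paging that STDP targets comes from the fact that every step of the guest's page walk must itself be translated through the host's page tables. The worst-case number of memory references follows a standard formula for nested paging (general background, not specific to this thesis):

```python
def nested_walk_cost(guest_levels, host_levels):
    """Worst-case memory references to resolve a TLB miss under
    two-dimensional (nested) paging: each of the guest's page-table
    levels, plus the final guest-physical access, requires a full
    host page walk plus one reference of its own, giving
    (g + 1) * (h + 1) - 1 references in total."""
    return (guest_levels + 1) * (host_levels + 1) - 1
```

With 4-level tables on both sides this gives 24 references versus 4 for a native walk, which is why workloads that cannot exploit the TLB suffer the most overhead.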
APA, Harvard, Vancouver, ISO, and other styles
22

Mayap, Kamga Christine. "Gestion de ressources de façon "éco-énergétique" dans un système virtualisé : application à l'ordonnanceur de marchines virtuelles." Phd thesis, Toulouse, INPT, 2014. http://oatao.univ-toulouse.fr/11979/1/Mayap_Kamga.pdf.

Full text
Abstract:
Faced with the cost of managing IT infrastructures locally, many companies have decided to outsource this management to external providers. These providers, known as IaaS (Infrastructure as a Service) providers, make resources available to companies in the form of virtual machines (VMs). Companies thus use only the limited number of virtual machines needed to satisfy their requirements, which helps reduce the cost of the client companies' IT infrastructure. For the provider, however, this outsourcing raises the problems of honouring the Service Level Agreement (SLA) subscribed to by the client and of optimizing the energy consumption of the infrastructure. Given the importance of these two challenges, much research has addressed this problem. The proposed energy-management solutions consist in varying the execution speed of the devices concerned. This speed variation is implemented either natively, because the device has built-in mechanisms, or by simulation through spatial and temporal grouping of processing. However, while varying the speed optimizes a device's energy consumption, it has the side effect of impacting the clients' service level. This leads to an incompatibility between speed-variation policies aimed at lowering energy consumption and compliance with the service level agreement. In this thesis, we study the design and implementation of an energy-aware resource manager in a virtualized system. Such a manager must allow fair sharing of resources between virtual machines while ensuring optimal use of the energy those resources consume. We illustrate our study with a virtual machine scheduler.
The speed-variation policy is implemented through DVFS (Dynamic Voltage Frequency Scaling), and the allocation of CPU capacity to the virtual machines implements the service level agreement to be honoured.
APA, Harvard, Vancouver, ISO, and other styles
23

SASANK, HYDERKHAN. "Performance analysis of TCP in KVM virtualized environment." Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10793.

Full text
Abstract:
The requirement for high quality services is increasing day by day, and in order to meet it new technologies are being developed, one of them being virtualization. The main agenda of introducing virtualization is that, though it needs more powerful devices to run the hypervisor, the technique also helps to increase consolidation, making efficient use of resources such as increased CPU utilization. The virtualization technique helps us run more VMs (Virtual Machines) on the same platform, i.e. on the same hypervisor. The main aims of this research are to determine whether the performance of TCP is affected when a number of VMs share the CPU, given the performance-influencing factors of virtualization, and, since TCP is the most widely used and most reliable protocol, whether its performance varies when different TCP congestion control mechanisms are used in the virtualized environment. In this study, we investigate the performance-influencing factors of TCP in the virtualized environment and whether those factors have any role to play in the performance of TCP. We also investigate, by setting up a client-server test bed, which TCP congestion control mechanism is best suited for downloading files when virtualization is used. The different TCP congestion control mechanisms which have been used are CUBIC, BIC, Highspeed, Vegas, Veno, Yeah, Westwood, LP, Scalable, Reno and Hybla. Total download time has been compared in order to find which congestion control algorithm performs better in the virtualized environment. The method used to carry out the research is experimentation: changing the RAM sizes and CPU cores, which are the performance-influencing factors in virtualization, and then analyzing the total download time while downloading a file under the different TCP congestion control mechanisms, running a single guest VM.
Apart from changing the congestion control mechanisms, other network parameters which affect the performance of TCP, such as delay, have been injected while downloading the file, to match real-world scenarios. Results collected include the average download time of a file for different memory sizes and different numbers of CPU cores, and the average download time for the different TCP congestion control mechanisms with the inclusion of parameters that affect the total download time, such as delay. From the results we can see that there is a slight influence on the performance of TCP from the performance-influencing factors, memory size and CPU cores allotted to the VM, in the KVM virtualized environment, and that of all the TCP congestion control algorithms, TCP-BIC and TCP-YEAH perform the best in the KVM virtualized environment. The performance of TCP-LP is the poorest in the KVM virtualized environment.
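The core of the comparison described above, averaging download times per congestion-control algorithm across repeated trials and ranking them, reduces to something like the following. The sample data here is hypothetical; on Linux, the algorithm under test can be selected per socket via the `TCP_CONGESTION` socket option:

```python
from statistics import mean

def rank_by_download_time(trials):
    """Rank congestion-control algorithms by mean download time,
    fastest first. `trials` maps algorithm name -> list of measured
    download times in seconds (hypothetical measurements)."""
    averages = {alg: mean(times) for alg, times in trials.items()}
    return sorted(averages, key=averages.get)
```

For example, feeding in per-algorithm download times collected under a given RAM/CPU/delay configuration yields the ordering reported for that configuration.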
APA, Harvard, Vancouver, ISO, and other styles
24

Chiba, Daniel Juzer. "Optimizing Boot Times and Enhancing Binary Compatibility for Unikernels." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/88865.

Full text
Abstract:
Unikernels are lightweight, single-purpose virtual machines designed for the cloud. They provide enhanced security, minimal resource utilisation, fast boot times, and the ability to optimize performance for the target application. Despite their numerous advantages, unikernels face significant barriers to widespread adoption. We identify two such obstacles: unscalable boot procedures in hypervisors, and the difficulty of porting native applications to unikernel models. This work presents a solution for the first, based on the popular Xen hypervisor, and demonstrates a significant performance benefit when running a large number of guest VMs. The HermiTux unikernel aims to overcome the second obstacle by providing the ability to run unmodified binaries as unikernels. This work adds to HermiTux, enabling it to retain some of the important advantages of unikernels, such as fast system calls and modularity.
MS
APA, Harvard, Vancouver, ISO, and other styles
25

Kinchla, Brendan. "Forensic recovery of evidence from deleted VMware vSphere Hypervisor virtual machines." Thesis, Utica College, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1587159.

Full text
Abstract:

The purpose of this research was to analyze the potential for recovering evidence from deleted VMware vSphere Hypervisor (ESXi) virtual machines (VMs). There exists an absence of scholarly research on the topic of deleted VM forensic recovery. Research dedicated to forensic recovery of ESXi VMs and VMware’s VM file system (VMFS) is nearly non-existent. This paper examined techniques to recover deleted ESXi VMs to a state where examination for forensic artifacts of user activity can occur. The paper examined the disk-provisioning methods for allocation of virtual disk files and the challenges for forensic recovery associated with each disk-provisioning type. The research determined that the two thick-provisioned virtual disk types provided the best opportunity for complete recovery, while certain characteristics of thin-provisioned virtual disk files made them less likely to recover in their entirety. Fragmentation of virtual disk files presented the greatest challenge for recovery of deleted VMs. Testing of alternate hypotheses attempting to reduce the likelihood of fragmentation within the virtual disk file met with mixed results, leaving fragmentation of virtual disk files as a significant challenge to successful VM recovery. The paper examined the techniques for recovering deleted files from VMFS volumes. Due to a lack of forensic tools with the ability to interpret the VMFS filesystem, forensic recovery focused on data stream searching through the VMFS volume image and file carving from consecutive disk sectors. This method proved to be inefficient, but ultimately successful in most of the test cases.
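The data-stream searching and carving from consecutive sectors that the study fell back on can be illustrated with a minimal signature-based carver. This is a hypothetical helper, not a tool used in the thesis; real carvers such as Foremost and Scalpel work on the same header/footer principle:

```python
def carve(image, header, footer, max_size=1 << 20):
    """Signature-based carving from a raw volume image: find each file
    header and cut up to the matching footer, assuming the file
    occupies consecutive sectors (fragmented files defeat this, which
    is exactly the limitation the thesis observed)."""
    carved = []
    pos = image.find(header)
    while pos != -1:
        end = image.find(footer, pos)
        if end != -1 and end - pos < max_size:
            carved.append(image[pos:end + len(footer)])
        pos = image.find(header, pos + 1)
    return carved
```

For instance, carving JPEG data would use the header `\xff\xd8\xff` and footer `\xff\xd9`; the same approach applies to any file type with known magic bytes.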

Keywords: Cybersecurity, Professor Cynthia Gonnella, virtualization, VMDK.

APA, Harvard, Vancouver, ISO, and other styles
26

Sridharan, Suganya. "A Performance Comparison of Hypervisors for Cloud Computing." UNF Digital Commons, 2012. http://digitalcommons.unf.edu/etd/269.

Full text
Abstract:
The virtualization of IT infrastructure enables the consolidation and pooling of IT resources so that they can be shared over diverse applications to offset the limitation of shrinking resources and growing business needs. Virtualization provides a logical abstraction of physical computing resources and creates computing environments that are not restricted by physical configuration or implementation. Virtualization is very important for cloud computing because the delivery of services is simplified by providing a platform for optimizing complex IT resources in a scalable manner, which makes cloud computing more cost effective. Hypervisor plays an important role in the virtualization of hardware. It is a piece of software that provides a virtualized hardware environment to support running multiple operating systems concurrently using one physical server. Cloud computing has to support multiple operating environments and Hypervisor is the ideal delivery mechanism. The intent of this thesis is to quantitatively and qualitatively compare the performance of VMware ESXi 4.1, Citrix Systems Xen Server 5.6 and Ubuntu 11.04 Server KVM Hypervisors using standard benchmark SPECvirt_sc2010v1.01 formulated by Standard Performance Evaluation Corporation (SPEC) under various workloads simulating real life situations.
APA, Harvard, Vancouver, ISO, and other styles
27

Alndawi, Tara. "Replacing Virtual Machines and Hypervisors with Container Solutions." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54607.

Full text
Abstract:
We live in a world that is constantly evolving, where new technologies and innovations are being introduced. This progress results partly in the development of new technologies and partly in the improvement of current ones. Docker containers are a virtualization method and one of these new technologies; they have become a hot topic around the world, as they are said to be a better alternative to today's virtual machines. One aspect that has contributed to this claim is that, unlike virtual machines, containers isolate individual processes from each other rather than entire operating systems. The company Saab AB wants to be at the forefront of today's technology and is interested in investigating the possibilities of container technology. The purpose of this thesis work is partly to investigate whether the container solution is in fact an alternative to traditional VMs and what differences there are between these methods. This is done with the help of an in-depth literature review of comparative studies between containers and VMs. The results of the comparative studies showed that containers are in fact a better alternative than VMs in certain aspects, such as performance and scalability, and are worthwhile for the company. Thus, in the second part of this thesis work, a proof-of-concept implementation was made, by recreating a part of the company's subsystem TactiCall in containers, to ensure that this transition is possible for the concrete use-case and that the container solution works as intended. This work has succeeded in highlighting the benefits of containers and showing through a proof of concept that there is an opportunity for the company to transition from VMs to containers.
APA, Harvard, Vancouver, ISO, and other styles
28

Mylar, Balasubramanya Karthik. "Enhancement of cache utilization by isolation and thin provisioning of hypervisor cache." Thesis, Wichita State University, 2012. http://hdl.handle.net/10057/5535.

Full text
Abstract:
Storage resources are being consolidated, which has led to increased demand for shared storage, depending on the type of application or class of customers. In such a scenario, cache consolidation becomes increasingly necessary. An analysis of cache utilization at various levels in a Storage Area Network, such as the server cache, the virtual machine cache, and the array controller cache, was done, drawing conclusions from the response time and execution time. A proposal is made for a dynamic cache allocation algorithm in a virtualization environment, which takes usage trends into account and varies the allocation depending on the type of workload. The dynamic cache allocation algorithm adapts the cache space available to a particular virtual machine based on the number of requests and the cache misses incurred, with respect to a threshold value individual to each VM. The caching mechanism proposed in this work is an isolated, unshared, dynamic cache, where caching is done on the hypervisor instead of in the virtual machines. Also, we show by experimental results that a static allocation of the cache is not suitable for a varying workload in a virtualized setup.
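The adaptation rule described, growing a VM's share of the hypervisor cache when its miss rate crosses that VM's own threshold, might look schematically like this. The proportional weighting is a hypothetical illustration, not the thesis's exact algorithm:

```python
def rebalance(vms, total_blocks):
    """Redistribute hypervisor-cache blocks among VMs from observed
    miss rates. `vms` maps VM name -> (miss_rate, threshold); a VM
    whose miss rate exceeds its individual threshold gets a
    proportionally larger share of `total_blocks`."""
    # Weight = 1 plus how far the VM overshoots its threshold.
    weights = {vm: 1 + max(0.0, miss - thr)
               for vm, (miss, thr) in vms.items()}
    scale = total_blocks / sum(weights.values())
    # Truncation keeps the total within the physical cache size.
    return {vm: int(w * scale) for vm, w in weights.items()}
```

Called periodically with fresh miss-rate samples, such a rule shifts cache away from VMs that are comfortably under their thresholds toward those that are thrashing, which is the behaviour a static split cannot provide.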
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
29

Kärnä, P. (Perttu). "Performance effects on servers using containers in comparison to hypervisor based virtualisation." Bachelor's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201805312378.

Full text
Abstract:
The direction in which web-based software is currently evolving has brought many challenges regarding location, scalability, maintainability and performance. In most cases tiny differences in performance are not really important, since the user base of the software may not be remarkably large. However, when the user base of a system expands and the load on the system has high peaks, smaller details start to matter in the software and infrastructure. This paper addresses the performance differences between today's usual web software deployment solutions, containers and VMs. The study is based on a literature review of previous studies on the topic. The main focus of the study is on Linux-container-based containerization solutions such as Docker, and traditional hypervisor-based virtual machines such as KVM. The main categories of performance this paper addresses are memory, CPU, network, disk I/O, database applications, DevOps and overall system performance. The categorization is based on the studies reviewed in this paper, which results in slight overlap between some categories. At the end of the paper the results are summed up and some implications are presented.
APA, Harvard, Vancouver, ISO, and other styles
30

Bard, Robin, and Simon Banasik. "En prestanda- och funktionsanalys av Hypervisors för molnbaserade datacenter." Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20491.

Full text
Abstract:
I dagens informationssamhälle pågår en växande trend av molnbaserade tjänster. Vid implementering av molnbaserade tjänster används metoden Virtualisering. Denna metod minskar behovet av antal fysiska datorsystem i ett datacenter. Vilket har en positiv miljöpåverkan eftersom energikonsumtionen minskar när hårdvaruresurser kan utnyttjas till sin fulla kapacitet. Molnbaserade tjänster skapar samhällsnytta då nya aktörer utan teknisk bakgrundskunskap snabbt kan komma igång med verksamhetsberoende tjänster. För tillämpning av Virtualisering används en så kallad Hypervisor vars uppgift är att distribuera molnbaserade tjänster. Efter utvärdering av vetenskapliga studier har vi funnit att det finns skillnader i prestanda och funktionalitet mellan olika Hypervisors. Därför väljer vi att göra en prestanda- samt funktionsanalys av Hypervisors som kommer från de största aktörerna på marknaden. Dessa är Microsoft Hyper-V Core Server 2012, Vmware ESXi 5.1.0 och Citrix XenServer 6.1.0 Free edition. Vår uppdragsgivare är försvarsmakten som bekräftade en stor efterfrågan av vår undersökning. Rapporten innefattar en teoretisk grund som beskriver tekniker bakom virtualisering och applicerbara användningsområden. Genomförandet består av två huvudsakliga metoder, en kvalitativ- respektive kvantitativ del. Grunden till den kvantitativa delen utgörs av ett standardsystem som fastställdes utifrån varje Hypervisors begränsningar. På detta standardsystem utfördes prestandatester i form av dataöverföringar med en serie automatiserade testverktyg. Syftet med testverktygen var att simulera datalaster som avsiktligt påverkade CPU och I/O för att avgöra vilka prestandaskillnader som förekommer mellan Hypervisors. Den kvalitativa undersökningen omfattade en utredning av funktionaliteter och begränsningar som varje Hypervisor tillämpar. Med tillämpning av empirisk analys av de kvantitativa mätresultaten kunde vi fastställa orsaken bakom varje Hypervisors prestanda. 
Resultaten visade att det fanns en korrelation mellan hur väl en Hypervisor presterat och vilken typ av dataöverföring som den utsätts för. Den Hypervisor som uppvisade goda prestandaresultat i samtliga dataöverföringar är ESXi. Resultaten av den kvalitativa undersökningen visade att den Hypervisor som offererade mest funktionalitet och minst begränsningar är Hyper-V. Slutsatsen blev att ett mindre datacenter som inte planerar en expansion bör lämpligtvis välja ESXi. Ett större datacenter som både har behov av funktioner som gynnar molnbaserade tjänster och mer hårdvaruresurser bör välja Hyper-V vid implementation av molntjänster.
A growing trend of cloud-based services can be witnessed in todays information society. To implement cloud-based services a method called virtualization is used. This method reduces the need of physical computer systems in a datacenter and facilitates a sustainable environmental and economical development. Cloud-based services create societal benefits by allowing new operators to quickly launch business-dependent services. Virtualization is applied by a so-called Hypervisor whose task is to distribute cloud-based services. After evaluation of existing scientific studies, we have found that there exists a discernible difference in performance and functionality between different varieties of Hypervisors. We have chosen to perform a functional and performance analysis of Hypervisors from the manufacturers with the largest market share. These are Microsoft Hyper-V Core Server 2012, Vmware ESXi 5.1.0 and Citrix XenServer 6.1.0 Free edition. Our client, the Swedish armed forces, have expressed a great need of the research which we have conducted. The thesis consists of a theoretical base which describes techniques behind virtualization and its applicable fields. Implementation comprises of two main methods, a qualitative and a quantitative research. The basis of the quantitative investigation consists of a standard test system which has been defined by the limitations of each Hypervisor. The system was used for a series of performance tests, where data transfers were initiated and sampled by automated testing tools. The purpose of the testing tools was to simulate workloads which deliberately affected CPU and I/O to determine the performance differences between Hypervisors. The qualitative method comprised of an assessment of functionalities and limitations for each Hypervisor. By using empirical analysis of the quantitative measurements we were able to determine the cause of each Hypervisors performance. 
The results revealed that there was a correlation between Hypervisor performance and the specific data transfer it was exposed to. The Hypervisor which exhibited good performance results in all data transfers was ESXi. The findings of the qualitative research revealed that the Hypervisor which offered the most functionality and the fewest constraints was Hyper-V. The overall conclusion is that ESXi is most suitable for smaller datacenters which do not intend to expand their operations. However, a larger datacenter which needs cloud-service-oriented functionality and requires greater hardware resources should choose Hyper-V when implementing cloud-based services.
APA, Harvard, Vancouver, ISO, and other styles
31

Chapman, Matthew Computer Science & Engineering Faculty of Engineering UNSW. "vNUMA: Virtual shared-memory multiprocessors." Publisher: University of New South Wales. Computer Science & Engineering, 2009. http://handle.unsw.edu.au/1959.4/42594.

Full text
Abstract:
Shared memory systems, such as SMP and ccNUMA topologies, simplify programming and administration. On the other hand, systems without hardware support for shared memory, such as clusters of commodity workstations, are commonly used due to cost and flexibility considerations. In this thesis, virtualisation is proposed as a technique that can bridge the gap between these architectures. The resulting system, vNUMA, is a hypervisor with a unique feature: it provides the illusion of shared memory across separate nodes on a fast network. This allows a cluster of workstations to be transformed into a single shared memory multiprocessor, supporting existing operating systems and applications. Such an approach could also have applications for emerging highly-parallel architectures, allowing a shared memory programming model to be retained while reducing hardware complexity. To build such a system, it is necessary to meld both a high-performance hypervisor and a high-performance distributed shared memory (DSM) system. This thesis addresses the challenges inherent in both of these tasks. First, designing an efficient hypervisor layer is considered; since vNUMA is implemented on the Itanium processor architecture, this is with particular reference to Itanium processor virtualisation. Then, novel DSM protocols are developed that allow SMP consistency models to be reproduced while providing better performance than a simple atomically-consistent DSM system. Finally, the system is evaluated, proving that it can provide good performance and compelling advantages for a variety of applications.
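The DSM machinery described above can be illustrated with a toy write-invalidate protocol: before one node's write becomes visible, every other node's cached copy of the page is invalidated, so readers never see stale data. This is a minimal sketch for intuition only (all class and variable names are hypothetical; vNUMA's actual protocols are considerably more refined):

```python
class Directory:
    """Home-node bookkeeping: backing memory plus the set of sharers per page."""
    def __init__(self):
        self.memory = {}    # page -> last written value
        self.sharers = {}   # page -> set of nodes caching the page

class DSMNode:
    def __init__(self, directory):
        self.dir = directory
        self.cache = {}     # locally cached pages

    def read(self, page):
        if page not in self.cache:   # miss: fetch from home, register as sharer
            self.cache[page] = self.dir.memory.get(page, 0)
            self.dir.sharers.setdefault(page, set()).add(self)
        return self.cache[page]

    def write(self, page, value):
        # Invalidate every other copy before the write becomes visible,
        # so there is a single writer at a time (write-invalidate).
        for node in self.dir.sharers.get(page, set()) - {self}:
            node.cache.pop(page, None)
        self.dir.sharers[page] = {self}
        self.cache[page] = value
        self.dir.memory[page] = value

d = Directory()
a, b = DSMNode(d), DSMNode(d)
assert a.read(0) == 0      # cold read of page 0
b.write(0, 42)             # invalidates a's stale copy
assert a.read(0) == 42     # a misses again and sees the new value
```

In a real system the invalidations travel as network messages and page faults stand in for the cache-miss check, which is where the protocol refinements discussed in the thesis come in.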
APA, Harvard, Vancouver, ISO, and other styles
32

Sung, Mincheol. "Design and Implementation of a Network Server in LibrettOS." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/87066.

Full text
Abstract:
Traditional network stacks in monolithic kernels have reliability and security concerns. Any fault in a network stack affects the entire system owing to the lack of isolation in the monolithic kernel. Moreover, the large code size of the network stack enlarges the attack surface of the system. A multiserver OS design solves this problem. In contrast to the traditional network stack, a multiserver OS pushes the network stack into the network server as a user process, which brings three enhancements: (i) it allows the network server to run in user mode with its own address space, isolating any fault occurring in the network server; (ii) it minimizes the attack surface of the system because the trusted computing base shrinks; (iii) it enables failure recovery, which is an important feature supported by a multiserver OS. This thesis proposes a network server for LibrettOS, an operating system developed at Virginia Tech that is based on rumprun unikernels and the Xen Hypervisor. The proposed network server is a service domain providing an L2 frame forwarding service for application domains; it is based on rumprun so that the existing device drivers of NetBSD can be leveraged with little modification. In this model, the TCP/IP stack runs directly in the address space of applications. This allows retaining the client state even if the network server crashes and makes it possible to recover from a network server failure. We leverage Xen PCI passthrough to access a NIC (Network Interface Controller) from the network server. Our experimental evaluation demonstrates that the performance of the network server is good and comparable with that of Linux and NetBSD. We also demonstrate successful recovery after a failure.
This research is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA). The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. This research is also based upon work supported by the Office of Naval Research (ONR) under grants N00014-16-1-2104, N00014-16-1-2711, and N00014-18-1-2022.
Master of Science
When it comes to reliability and security in networking systems, concerns have been raised about traditional operating systems (OSs) such as Windows, MacOS, NetBSD, and Linux. Any fault in a networking system can impact the entire system owing to the lack of isolation in these OSs. Moreover, the large code size of a networking system enlarges the attack surface of the system. A multiserver OS design solves this problem by running the networking system as a network server, which brings three enhancements: (i) it isolates any fault occurring in the network server itself; (ii) it minimizes the attack surface of the system; and (iii) it enables failure recovery. This thesis proposes a network server for LibrettOS, an operating system developed at Virginia Tech. The proposed network server has two merits: (i) it provides a network packet forwarding service for applications; (ii) it enables the existing device drivers of NetBSD to be leveraged with little modification. Our experimental evaluation demonstrates that the network server performs on par with state-of-the-art systems such as Linux and that successful recovery is possible after a failure.
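The L2 frame-forwarding service the network server provides behaves, at its core, like a learning Ethernet switch: remember which port each source MAC address was seen on, forward to the learned port, and flood when the destination is unknown. A minimal sketch with hypothetical names (not LibrettOS code):

```python
class LearningSwitch:
    """Forward Ethernet frames based on learned source MACs; flood unknowns."""
    def __init__(self, ports):
        self.ports = ports
        self.table = {}     # MAC address -> port it was last seen on

    def handle(self, in_port, src, dst):
        self.table[src] = in_port            # learn the sender's location
        out = self.table.get(dst)
        if out is None or out == in_port:    # unknown destination: flood
            return [p for p in self.ports if p != in_port]
        return [out]                         # known destination: unicast

sw = LearningSwitch(ports=[1, 2, 3])
assert sw.handle(1, "aa", "bb") == [2, 3]   # "bb" unknown, so flood
assert sw.handle(2, "bb", "aa") == [1]      # "aa" was learned on port 1
```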
APA, Harvard, Vancouver, ISO, and other styles
33

Jing, Wei. "Performance Isolation for Mixed Criticality Real-time System on Multicore with Xen Hypervisor." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-193603.

Full text
Abstract:
Multicore processors have brought powerful computing capacity to real-time systems, allowing multi-task execution on the co-running cores. Meanwhile, competition for the shared resources among the on-die cores brings side-effects which severely degrade the performance of individual cores and endanger real-time requirements. This thesis focuses on addressing memory access contention in real-time systems with mixed criticality. A throttling algorithm is designed to control the memory access flows from the interfering side, securing the deadlines of critical tasks. We implemented the throttling framework on the Xen Hypervisor and evaluated the overall isolation performance with a set of benchmarks. The results prove the effectiveness of our design.
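A throttling algorithm of this kind is commonly realized as a per-period memory-access budget for the interfering (non-critical) cores, replenished at every regulation period; once a core exhausts its budget, it is stalled until the next period begins. The following is a simplified sketch with hypothetical parameters, not the thesis's actual Xen implementation, which operates at the hypervisor level:

```python
class MemoryThrottle:
    """Per-period budget of memory accesses for a non-critical core.
    Once the budget is exhausted the core is stalled until the next
    regulation period replenishes it."""
    def __init__(self, budget):
        self.budget = budget
        self.used = 0

    def new_period(self):
        self.used = 0            # replenish at every period boundary

    def request(self, n=1):
        """Return True if the access may proceed; False means stall."""
        if self.used + n > self.budget:
            return False
        self.used += n
        return True

t = MemoryThrottle(budget=3)
assert all(t.request() for _ in range(3))
assert not t.request()       # budget exhausted: the core is stalled
t.new_period()
assert t.request()           # replenished at the period boundary
```

In a real implementation the accounting is driven by hardware performance counters and a periodic timer rather than explicit calls.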
APA, Harvard, Vancouver, ISO, and other styles
34

Nanavati, Mihir Sudarshan. "Breaking up is hard to do : security and functionality in a commodity hypervisor." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/35591.

Full text
Abstract:
Virtualization platforms have grown with an increasing demand for new technologies, with the modern enterprise-ready virtualization platform being a complex, feature-rich piece of software. Despite the small size of hypervisors, the trusted computing base (TCB) of most enterprise platforms is larger than that of most monolithic commodity operating systems. Several key components of the Xen platform reside in a special, highly-privileged virtual machine or the “Control VM”. We present Xoar, a modified version of the Xen platform that retrofits the modularity and isolation principles championed by microkernels onto a mature virtualization platform. Xoar divides the large, shared control VM of Xen’s TCB into a set of independent, isolated, single-purpose components called shards. Shards improve security in several ways: components are restricted to the least privilege necessary for functioning and any sharing between guest VMs is explicitly configurable and auditable in tune with the desired risk exposure policies. Microrebooting components at configurable frequencies reduces the temporal attack surface. Our approach does not require any existing functionality to be sacrificed and allows components to be reused rather than rewritten from scratch. The low performance overhead leads us to believe that Xoar is a viable alternative for deployment in enterprise environments.
APA, Harvard, Vancouver, ISO, and other styles
35

Maus, Stefan [Verfasser], and Andreas [Akademischer Betreuer] Podelski. "Verification of hypervisor subroutines written in Assembler = Verifikation von Hypervisorunterrutinen, geschrieben in Assembler." Freiburg : Universität, 2011. http://d-nb.info/1114828963/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Schwarz, Oliver. "No Hypervisor Is an Island : System-wide Isolation Guarantees for Low Level Code." Doctoral thesis, KTH, Teoretisk datalogi, TCS, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-192466.

Full text
Abstract:
The times when malware was mostly written by curious teenagers are long gone. Nowadays, threats come from criminals, competitors, and government agencies. Some of them are very skilled and very targeted in their attacks. At the same time, our devices – for instance mobile phones and TVs – have become more complex, connected, and open for the execution of third-party software. Operating systems should separate untrusted software from confidential data and critical services. But their vulnerabilities often allow malware to break the separation and isolation they are designed to provide. To strengthen protection of select assets, security research has started to create complementary machinery such as security hypervisors and separation kernels, whose sole task is separation and isolation. The reduced size of these solutions allows for thorough inspection, both manual and automated. In some cases, formal methods are applied to create mathematical proofs on the security of these systems. The actual isolation solutions themselves are carefully analyzed and included software is often even verified on binary level. The role of other software and hardware for the overall system security has received less attention so far. The subject of this thesis is to shed light on these aspects, mainly on (i) unprivileged third-party code and its ability to influence security, (ii) peripheral devices with direct access to memory, and (iii) boot code and how we can selectively enable and disable isolation services without compromising security. The papers included in this thesis are both design and verification oriented, however, with an emphasis on the analysis of instruction set architectures. With the help of a theorem prover, we implemented various types of machinery for the automated information flow analysis of several processor architectures. The analysis is guaranteed to be both sound and accurate.
Malware was once written mostly by curious teenagers. Today, our computers are under constant threat from state organizations, criminal groups, and perhaps even our business competitors. Some of them possess great expertise and can carry out targeted attacks. At the same time, the technology around us (such as mobile phones and TV sets) has become more complex, connected, and open to executing third-party software. Operating systems ought to isolate sensitive data and critical services from software that is not trustworthy. But their vulnerabilities usually make it possible for malware to get past the operating systems' security mechanisms. This has led to the development of complementary tools whose sole function is to improve the isolation of selected sensitive resources. Special virtualization software and separation kernels are examples of such tools. Since such solutions can be developed with relatively little source code, it is possible to analyse them thoroughly, both manually and automatically. In some cases, formal methods are used to generate mathematical proofs that the system is secure. The isolation software itself is usually verified extensively, sometimes even at the assembly level. However, the influence of other components on the system's security has so far received less attention, with regard to both hardware and other software. This thesis attempts to shed light on these aspects, mainly (i) unprivileged third-party code and how it can affect security, (ii) peripheral devices with direct access to memory, and (iii) boot code, as well as how isolation services can be enabled and disabled securely without restarting the system. The thesis is based on six earlier publications covering both design and verification aspects, but mostly the security analysis of instruction set architectures. Based on a theorem prover, we have developed various tools for the automated information-flow analysis of processors.
We have used these tools to clarify which registers unprivileged software can access on ARM and MIPS machines. This analysis is guaranteed to be both sound and precise. To the best of our knowledge, we are the first to have published a solution for the automated analysis and proof of information-flow properties of standard instruction set architectures.

QC 20160919


PROSPER
HASPOC
APA, Harvard, Vancouver, ISO, and other styles
37

Sundblad, Anton, and Gustaf Brunberg. "Secure hypervisor versus trusted execution environment : Security analysis for mobile fingerprint identification applications." Thesis, Linköpings universitet, Databas och informationsteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139227.

Full text
Abstract:
Fingerprint identification is becoming increasingly popular as a means of authentication for handheld devices of different kinds. In order to secure such an authentication solution it is common to use a TEE implementation. This thesis examines the possibility of replacing a TEE with a hypervisor-based solution instead, with the intention of keeping the same security features that a TEE can offer. To carry out the evaluation a suitable method is constructed. This method makes use of fault trees to be able to find possible vulnerabilities in both systems, and these vulnerabilities are then documented. The vulnerabilities of both systems are also compared to each other to identify differences in how they are handled. It is concluded that if the target platform has the ability to implement a TEE solution, it can also implement the same solution using a hypervisor. However, the authors recommend against porting a working TEE solution, as TEEs often offer finished APIs for common operations that would require re-implementation in the examined hypervisor.
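The fault-tree method used for the comparison combines basic-event probabilities through AND/OR gates up to a top event. A minimal evaluator, assuming independent basic events and purely illustrative probabilities (the thesis uses fault trees qualitatively, to enumerate vulnerabilities, rather than for this kind of numeric calculation):

```python
def ft_and(*ps):
    """AND gate: the top event occurs only if every child event occurs."""
    p = 1.0
    for x in ps:
        p *= x
    return p

def ft_or(*ps):
    """OR gate: the top event occurs if any child event occurs
    (children assumed statistically independent)."""
    q = 1.0
    for x in ps:
        q *= 1.0 - x
    return 1.0 - q

# Hypothetical tree: fingerprint-template theft requires either
# (a kernel exploit AND an open debug interface) OR a flawed crypto
# implementation. All probabilities are made up for illustration.
p_top = ft_or(ft_and(0.01, 0.1), 0.001)
assert abs(p_top - 0.001999) < 1e-9
```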
APA, Harvard, Vancouver, ISO, and other styles
38

Suryanarayana, Vidya Rao. "Credit scheduling and prefetching in hypervisors using Hidden Markov Models." Thesis, Wichita State University, 2010. http://hdl.handle.net/10057/3749.

Full text
Abstract:
The advances in storage technologies like storage area networking and the virtualization of servers and storage have revolutionized how the explosive data of modern times is stored. With such technologies, resource consolidation has become an increasingly easy task to accomplish, which has in turn simplified access to remote data. Recent research in hardware has boosted the capacity of drives, and hard disks have become far less expensive than before. However, with such advances in storage technologies come some bottlenecks in terms of performance and interoperability. When it comes to virtualization, especially server virtualization, there will be many guest operating systems running on the same hardware. Hence, it is very important to ensure each guest is scheduled at the right time so as to decrease the latency of data access. There are various hardware advances that have made prefetching data into the cache easy and efficient. However, interoperability between vendors must be assured, and more efficient algorithms need to be developed for these purposes. In virtualized environments where hundreds of virtual machines may be running, good scheduling algorithms need to be developed in order to reduce the latency and the wait time of the virtual machines in the run queue. Current algorithms are oriented more toward providing fair access to the virtual machines and are not very concerned with reducing latency. This can be a major bottleneck in time-critical applications like scientific applications, which have now started deploying SAN technologies to store their explosive data. Also, when data needs to be extracted from these storage arrays to analyze and process it, the latency of a read operation has to be reduced in order to improve performance.
The research done in this thesis aims to reduce the scheduling delay in a Xen hypervisor and also to reduce the latency of reading data from disk using Hidden Markov Models (HMMs). The scheduling and prefetching scenarios are modeled using a Gaussian and a discrete HMM, and the latency involved is evaluated. The HMM is a statistical technique used to classify and predict data that has a repetitive pattern over time. The results show that using an HMM decreases the scheduling and access latencies involved. The proposed technique is mainly intended for virtualization scenarios involving hypervisors and storage arrays. Various patterns of data access involving different ratios of reads and writes are considered, and a discrete HMM (DHMM) is used to prefetch the next most probable block of data that might be read by a guest. Also, a Gaussian HMM (GHMM) is used to classify the arrival times of the requests in a Xen hypervisor, and the GHMM is incorporated with the credit scheduler in order to reduce the scheduling latency. The results are evaluated numerically, and it is found that scheduling the virtual machines (domains) at the correct time indeed decreases the waiting times of the domains in the run queue.
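The discrete-HMM prefetching idea, predicting the most probable next block from the observed access pattern, can be sketched with a forward-algorithm filtering step. The matrices below are toy values for illustration, not parameters from the thesis:

```python
import numpy as np

# Toy 2-state discrete HMM over 3 block classes (all values illustrative).
A = np.array([[0.9, 0.1],        # state transition matrix
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],   # emission matrix: P(block | state)
              [0.1, 0.2, 0.7]])
pi = np.array([0.5, 0.5])        # initial state distribution

def predict_next(obs):
    """Filter the observed block sequence with the forward algorithm,
    then return the most probable next block (the prefetch candidate)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    alpha /= alpha.sum()               # filtered state distribution
    next_block_dist = (alpha @ A) @ B  # predictive distribution over blocks
    return int(np.argmax(next_block_dist))

assert predict_next([0, 0, 0]) == 0   # a run of block 0 suggests prefetching 0
assert predict_next([2, 2, 2]) == 2
```

In the thesis's setting the predicted block would be fetched into the cache ahead of the guest's next read; the model parameters would be trained on recorded access traces rather than fixed by hand.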
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering and Computer Science.
APA, Harvard, Vancouver, ISO, and other styles
39

Tengvall, Martin. "Servervirtualisering : En jämförelse av hypervisorer." Thesis, University of Skövde, School of Humanities and Informatics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-4060.

Full text
Abstract:

Server virtualization is on the rise and looks set to become an increasingly frequent element of datacenters around the world. When virtualization is to be introduced in an organization or company, it is therefore important to know one's needs and, from there, choose a virtualization solution that fits. This report presents a comparison of the three hypervisors that lead the virtualization market: VMware ESXi, Citrix XenServer, and Microsoft Hyper-V. The first part of the comparison examines the functionality of the hypervisors, such as guest operating system support and hardware support. The second part measures the performance of the three hypervisors with the guest operating systems Windows Server 2008, Suse Linux Enterprise Server 11, and Ubuntu Server 8.04 LTS. The performance tests are carried out with SysBench, and the components tested are the CPU, RAM, and hard disk. The results vary across the different hardware components on the systems tested.

APA, Harvard, Vancouver, ISO, and other styles
40

Kovalev, Mikhail [Verfasser], and Wolfgang J. [Akademischer Betreuer] Paul. "TLB virtualization in the context of hypervisor verification / Mikhail Kovalev. Betreuer: Wolfgang J. Paul." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2013. http://d-nb.info/1052903002/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Bavelski, Alexei. "On the Performance of the Solaris Operating System under the Xen Security-enabled Hypervisor." Thesis, Linköping University, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9148.

Full text
Abstract:

This thesis presents an evaluation of the Solaris version of the Xen virtual machine monitor and a comparison of its performance to the performance of Solaris Containers under similar conditions. Xen is a virtual machine monitor, based on the paravirtualization approach, which provides an instruction set different from the native machine environment and therefore requires modifications to the guest operating systems. Solaris Zones is an operating system-level virtualization technology that is part of the Solaris OS. Furthermore, we provide a basic performance evaluation of the security modules for Xen and Zones, known as sHype and Solaris Trusted Extensions, respectively.

We evaluate the control domain (known as Domain-0) and the user domain performance as the number of user domains increases. Testing Domain-0 with an increasing number of user domains allows us to evaluate how much overhead virtual operating systems impose in the idle state and how their number influences overall system performance. Testing one user domain while increasing the number of idle domains allows us to evaluate how the number of domains influences operating system performance. By testing increasing numbers of concurrently loaded user domains, we investigate total system efficiency and load balancing depending on the number of running systems.

System performance was limited by CPU, memory, and hard drive characteristics. In the case of CPU-bound tests, Xen exhibited performance close to that of Zones and of native Solaris, losing 2-3% due to virtualization overhead. In the case of memory-bound and hard drive-bound tests, Xen showed 5 to 10 times worse performance.

APA, Harvard, Vancouver, ISO, and other styles
42

Castagnoli, Carlo. "Cloud Computing: gli Hypervisor e la funzionalità di Live Migration nelle Infrastructure as a Service." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amslaurea.unibo.it/1856/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Fantini, Alessandro. "Virtualization technologies from hypervisors to containers: overview, security considerations, and performance comparisons." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/12846/.

Full text
Abstract:
In an era when almost everyone uses cloud-based applications every day without even noticing, and IT organizations are investing considerable resources in this field, not everyone knows that cloud computing would not have been possible without virtualization, a software technique whose roots go back to the early 1960s. The purpose of this thesis is to provide an overview of virtualization technologies, from hardware virtualization and hypervisors to container-based operating-system-level virtualization, to analyse their architectures, and to make security-related considerations. Moreover, since container-based technologies build on specific containment features of the Linux kernel, some sections introduce and analyse those features individually, at the appropriate level of detail. The last part of this work is devoted to a quantitative performance comparison of container-based technologies. In particular, LXC and Docker are compared on a set of five real-life tests, and their performance is measured side by side to highlight the differences in the amount of overhead they introduce.
APA, Harvard, Vancouver, ISO, and other styles
44

Vitner, Petr. "Instalace a konfigurace Octave výpočetního clusteru." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-220659.

Full text
Abstract:
This paper explores the possibilities and tools for creating a High-Performance Computing cluster. It contains a design for its creation and a detailed description of the setup and configuration in a virtual environment.
APA, Harvard, Vancouver, ISO, and other styles
45

Lipták, Roman. "Virtualizace a optimalizace IT infrastruktury ve společnosti." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2019. http://www.nusl.cz/ntk/nusl-399752.

Full text
Abstract:
The master’s thesis deals with the use of virtualization and consolidation technologies to optimize the IT infrastructure of a selected company. The analysis covers the current state of the IT infrastructure and the requirements for a future upgrade. The theoretical part describes the technologies and procedures used in virtualization and consolidation. Subsequently, a proposal for optimizing and expanding the IT equipment is created, together with the management, implementation, and economic evaluation of the solution.
APA, Harvard, Vancouver, ISO, and other styles
46

Sheinidashtegol, Pezhman. "Impact of DDoS Attack on the Three Common HypervisorS(Xen, KVM, Virtual Box)." TopSCHOLAR®, 2016. http://digitalcommons.wku.edu/theses/1646.

Full text
Abstract:
Cloud computing is a technology of inter-connected servers and resources that uses virtualization to improve resource utilization, flexibility, and scalability. Cloud computing is accessible through the network. This accessibility and utilization have their own benefits and drawbacks. Utilization and scalability make this technology more economical and affordable even for small businesses. Flexibility drastically reduces the risk of starting a business. Accessibility means that cloud customers are not restricted to a specific location as long as they have access to the network, in most cases through the internet. These significant traits, however, have their own disadvantages. Easy accessibility makes it more convenient for a malicious user to gain access to servers in the cloud. The virtualization that comes into existence through middleware software called Virtual Machine Managers (VMMs), or hypervisors, brings its own vulnerabilities. These vulnerabilities add to the pre-existing vulnerabilities of networks, operating systems, and applications. In this research we try to distinguish the most resistant hypervisor among Xen, KVM, and Virtual Box against the Distributed Denial of Service (DDoS) attack: an attempt to saturate a victim's resources, making them unavailable to legitimate users, or to shut down its services, using more than one machine as attackers and targeting three different resources (network, CPU, memory). This research will show how hypervisors act differently under the same attacks and conditions.
APA, Harvard, Vancouver, ISO, and other styles
47

Montironi, Adolfo Angel. "Digital Forensic Acquisition of Virtual Private Servers Hosted in Cloud Providers that Use KVM as a Hypervisor." Thesis, Purdue University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10845501.

Full text
Abstract:

Kernel-based Virtual Machine (KVM) is one of the most popular hypervisors used by cloud providers to offer virtual private servers (VPSs) to their customers. A VPS is just a virtual machine (VM) hired and controlled by a customer but hosted in the cloud provider infrastructure. In spite of the fact that the usage of VPS is popular and will continue to grow in the future, it is rare to find technical publications in the digital forensic field related to the acquisition process of a VPS involved in a crime. For this research, four VMs were created in a KVM virtualization node, simulating four independent VPSs and running different operating systems and applications. The utilities virsh and tcpdump were used at the hypervisor level to collect digital data from the four VPSs. The utility virsh was employed to take snapshots of the hard drives and to create images of the RAM content while the utility tcpdump was employed to capture in real-time the network traffic. The results generated by these utilities were evaluated in terms of efficiency, integrity, and completeness. The analysis of these results suggested both utilities were capable of collecting digital data from the VPSs in an efficient manner, respecting the integrity and completeness of the data acquired. Therefore, these tools can be used to acquire forensically-sound digital evidence from a VPS hosted in a cloud provider’s virtualization node that uses KVM as a hypervisor.

APA, Harvard, Vancouver, ISO, and other styles
48

Peiró, Frasquet Salvador. "Metodología para hipervisores seguros utilizando técnicas de validación formal." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/63152.

Full text
Abstract:
[EN] The availability of new processors with more processing power for embedded systems has spurred the development of applications that tackle problems of greater complexity. Embedded applications now have more features and, as a consequence, more complexity. For this reason, there is a growing interest in allowing the secure execution of multiple applications that share a single processor and memory. In this context, partitioned system architectures based on hypervisors have evolved as an adequate solution for building secure systems. One of the main challenges in the construction of secure partitioned systems is the verification of the correct operation of the hypervisor, since the hypervisor is the critical component on which the security of the partitioned system rests. Traditional approaches to Validation and Verification (V&V), such as testing, inspection, and analysis, have limitations for the exhaustive validation and verification of system operation, because the input space to validate grows exponentially with the number of inputs. Given these limitations, verification techniques based on formal methods arise as an alternative that complements the traditional validation techniques. This dissertation focuses on the application of formal methods to validate the correctness of the partitioned system, with a special focus on the XtratuM hypervisor. The proposed methodology is evaluated through its application to the validation of the hypervisor. To this end, we propose a formal model of the hypervisor based on Finite State Machines (FSMs); this model enables the definition of the correctness properties that the hypervisor design must fulfill. In addition, this dissertation studies how to ensure the functional correctness of the hypervisor implementation by means of deductive code verification techniques.
Last, we study the vulnerabilities that result from the loss of confidentiality (CWE-200 [CWE08b]) of the information managed by the partitioned system. In this context, the vulnerabilities (infoleaks) are modeled, static code analysis techniques are applied to their detection, and finally the proposed techniques are validated by means of a practical case study on the Linux kernel, which is a component of the partitioned system.
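The FSM-based modelling approach can be illustrated on a toy scale: encode the hypervisor's states and transitions explicitly, then exhaustively check a correctness property over all reachable states. The states and the property below are hypothetical and far simpler than the XtratuM model:

```python
from collections import deque

# Toy model: the system either runs the hypervisor kernel ("hyp") or
# one of two partitions; a partition can only be entered from "hyp",
# and the MMU must be on whenever a partition runs.
TRANSITIONS = {
    ("hyp", "mmu_on"): [("p1", "mmu_on"), ("p2", "mmu_on"), ("hyp", "mmu_on")],
    ("p1", "mmu_on"): [("hyp", "mmu_on")],
    ("p2", "mmu_on"): [("hyp", "mmu_on")],
}

def reachable(init):
    """Breadth-first exploration of the entire state space."""
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        for t in TRANSITIONS.get(s, []):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

# Correctness property: no reachable state runs a partition with the MMU off.
states = reachable(("hyp", "mmu_on"))
assert all(not (mode.startswith("p") and mmu != "mmu_on")
           for mode, mmu in states)
```

A theorem-prover-based formalization, as in the dissertation, goes well beyond this kind of finite enumeration, but the shape of the argument (model the transitions, then prove an invariant over all reachable states) is the same.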
[ES] La disponibilidad de nuevos procesadores más potentes para aplicaciones empotradas ha permitido el desarrollo de aplicaciones que abordan problemas de mayor complejidad. Debido a esto, las aplicaciones empotradas actualmente tienen más funciones y prestaciones, y como consecuencia de esto, una mayor complejidad. Por este motivo, existe un interés creciente en permitir la ejecución de múltiples aplicaciones de forma segura y sin interferencias en un mismo procesador y memoria. En este marco surgen las arquitecturas de sistemas particionados basados en hipervisores como una solución apropiada para construir sistemas seguros. Uno de los principales retos en la construcción de sistemas particionados, es la verificación del correcto funcionamiento del hipervisor, dado que es el componente crítico sobre el que descansa la seguridad de todo el sistema particionado. Las técnicas tradicionales de V&V, como testing, inspección y análisis, presentan limitaciones para la verificación exhaustiva del comportamiento del sistema, debido a que el espacio de entradas a verificar crece de forma exponencial con respecto al número de entradas a verificar. Ante estas limitaciones las técnicas de verificación basadas en métodos formales surgen como una alternativa para completar las técnicas de validación tradicional. Esta disertación se centra en la aplicación de métodos formales para validar la corrección del sistema particionado, en especial del hipervisor XtratuM. La validación de la metodología se realiza aplicando las técnicas propuestas a la validación del hipervisor. Para ello, se propone un modelo formal del hipervisor basado en máquinas de autómatas finitos, este modelo formal permite la definición de las propiedades que el diseño hipervisor debe cumplir para asegurar su corrección. Adicionalmente, esta disertación analiza cómo asegurar la corrección funcional de la implementación del hipervisor por medio de técnicas de verificación deductiva de código. 
Finally, information-leak vulnerabilities (CWE-200 [CWE08b]) caused by the loss of confidentiality of the information handled in the partitioned system are studied. In this context, the vulnerabilities are modeled, code analysis techniques are applied to detect vulnerabilities based on the defined model, and finally the proposed technique is validated through a practical case study on the Linux kernel, which is part of the partitioned system.
[CAT] The availability of new processors with greater computing power for embedded applications has enabled the development of applications that tackle problems of greater complexity. As a result, embedded applications today have more functions and features and, consequently, greater complexity. For this reason, there is growing interest in allowing multiple applications to run safely and without interference on the same processor and memory. In this context, partitioned system architectures based on hypervisors emerge as an appropriate solution for building secure systems. One of the main challenges in building partitioned systems is verifying the correct operation of the hypervisor, since it is the critical component on which the security of the entire partitioned system rests. Traditional V&V techniques, such as testing, inspection, and analysis, have limitations that make their exhaustive application to system behavior impractical, because the input space to be verified grows exponentially with the number of inputs. Given these limitations, verification techniques based on formal methods emerge as an alternative to complement traditional validation techniques. This dissertation focuses on the application of formal methods to validate the correctness of the partitioned system, in particular the XtratuM hypervisor. The methodology is validated by applying the proposed techniques to the validation of the hypervisor. To this end, a formal model of the hypervisor based on finite state machines (FSM) is proposed; this formal model allows the definition of the properties that the hypervisor design must satisfy to ensure its correctness.
Additionally, this dissertation analyzes how to ensure the functional correctness of the hypervisor implementation by means of deductive code verification techniques. Finally, information-leak vulnerabilities (CWE-200 [CWE08b]) caused by the loss of confidentiality of the information managed by the partitioned system are studied. In this context, the vulnerabilities are modeled, code analysis techniques are applied to detect the vulnerabilities based on the defined model, and finally the proposed technique is validated through a practical case study on the Linux kernel, which is part of the partitioned architecture.
Peiró Frasquet, S. (2016). Metodología para hipervisores seguros utilizando técnicas de validación formal [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/63152
APA, Harvard, Vancouver, ISO, and other styles
49

Thierry, Philippe. "Systèmes véhiculaires à domaines de sécurité et de criticité multiples : une passerelle systronique temps réel." Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1102/document.

Full text
Abstract:
Nowadays, vehicles integrate more and more interconnected systems. These systems have functions that are as numerous as they are complex, and they are subject to safety constraints (including real time) as well as, increasingly, security constraints. With the emergence of connected vehicles, it becomes necessary to make these different systems communicate, both to manage them at the vehicle level and potentially remotely. Making these different networks communicate, all the more so in military vehicles, involves taking various constraints into account. These constraints must be handled by elements placed at the boundary between the different systems. Such an element is then in charge of protecting those systems in terms of safety and security, but it must also ensure an efficient and bounded transfer of information. In this thesis, we propose a gateway software architecture that meets these different constraints and thus ensures the interconnection of all these systems. The solution takes the form of a framework for integrating various modules on a partitioned, secure architecture, in order to meet the various needs specific to vehicular systems
Nowadays, vehicular systems are composed of more and more interconnected systems. These systems manage many complex functions and must comply with various safety-critical requirements (such as real time), but also increasingly with security requirements. With the new connected vehicles, it is necessary to make these various systems communicate in order to manage the overall vetronic system locally or remotely. Making these systems communicate, especially in military vehicles, implies supporting various constraints. These constraints need to be handled by specific elements used as gateways between vehicle systems that need external communication. Such a gateway has to protect each system in terms of safety and security, but it also has to guarantee an efficient, upper-bounded transfer between them. In this thesis, we propose a software architecture for these gateways that complies with the various vehicular security and safety requirements. The solution is proposed as a framework supporting a modular configuration, able to aggregate various modules on a partitioned software architecture. Such an aggregation can then respond to vehicle-specific needs such as security and real time
APA, Harvard, Vancouver, ISO, and other styles
50

Mahmud, Nesredin. "Automated Orchestra for Industrial Automation on Virtualized Multicore Environment." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-23672.

Full text
Abstract:
Industrial control systems are applied in many areas, e.g., motion control for industrial robotics, process control of large plants such as those in the oil and gas sector, and large national power grids. Over the last decade, with the advancement and adoption of virtualization and multicore technology (e.g., virtual machine monitors, cloud computing, server virtualization, application virtualization), IT systems and automation industries have benefited from low investment, effective system management, and high service availability. However, virtualization and multicore technologies pose a serious challenge to real-time systems: they can violate the timeliness and predictability of real-time applications running on control systems. To address this challenge, we have extended a real-time component-based framework with virtual nodes and evaluated the framework in the context of a virtualized multicore environment. The evaluation is demonstrated by modeling and implementing an orchestra application with QoS for CPU, memory, and network bandwidth. The orchestra application is a real-time, distributed application deployed on virtualized multicore PCs connected to speakers. The result shows undistorted orchestra performance played through the speakers connected to the physical computer nodes. The contributions of the thesis are: 1) extending a real-time component-based framework, Future Automation Software Architecture (FASA), with virtual nodes using Virtual Computation Resources (VCR), and 2) the design and installation of a reusable test environment for the development, debugging, and testing of real-time applications on a network of virtualized multicore nodes.
Vinnova project "AUTOSAR for Multi-Core in Automotive and Automation Industries"
APA, Harvard, Vancouver, ISO, and other styles
