
Dissertations / Theses on the topic 'System call virtual machine'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'System call virtual machine.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Cardace, Antonio. "UMView, a Userspace Hypervisor Implementation." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13184/.

Full text
Abstract:
UMView is a partial virtual machine and userspace hypervisor capable of intercepting system calls and modifying their behavior according to the calling process's view. To provide flexibility and modularity, UMView supports modules loadable at runtime through a plugin architecture. UMView is, in particular, an implementation of the View-OS concept, which negates the global-view assumption so deeply rooted in the world of operating systems and virtualization.
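As a rough illustration of the mechanism the abstract describes — and only that, not UMView's actual code — the sketch below shows how a userspace tracer on Linux can intercept a child's system calls with ptrace and inspect the system-call number before letting each call proceed; a View-OS style module would rewrite arguments or results at exactly this point. The x86-64 register layout (orig_rax) and the traced command are assumptions.

    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t child = fork();
        if (child == 0) {
            /* Child: ask to be traced, then run the target program. */
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);
            execlp("ls", "ls", NULL);
            _exit(1);
        }
        int status;
        waitpid(child, &status, 0);              /* stop after exec */
        while (!WIFEXITED(status)) {
            /* Resume the child until the next system-call entry or exit. */
            ptrace(PTRACE_SYSCALL, child, NULL, NULL);
            waitpid(child, &status, 0);
            if (WIFSTOPPED(status)) {
                struct user_regs_struct regs;
                if (ptrace(PTRACE_GETREGS, child, NULL, &regs) == 0)
                    /* orig_rax holds the syscall number on x86-64; this fires
                       at both entry and exit of each call. */
                    fprintf(stderr, "syscall %lld\n", (long long)regs.orig_rax);
            }
        }
        return 0;
    }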
APA, Harvard, Vancouver, ISO, and other styles
2

Pareschi, Federico. "Applying partial virtualization on ELF binaries through dynamic loaders." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/5065/.

Full text
Abstract:
The technology of partial virtualization is a revolutionary approach to the world of virtualization. It lies directly in between full system virtual machines (like QEMU or XEN) and application-level virtual machines (like the JVM or the CLR). The ViewOS project is the flagship of this technique, developed by the Virtual Square laboratory to provide an abstract view of the underlying system resources on a per-process basis and to work against the principle of the Global View Assumption. Virtual Square provides several different methods to achieve partial virtualization within the ViewOS system, both at user and kernel level. Each of these approaches has its own advantages and shortcomings. This paper provides an analysis of the different virtualization methods and of the problems related to both the generic and partial virtualization worlds. It is the result of an in-depth study and research into a new technology for providing partial virtualization based on ELF dynamic binaries. It starts with a brief analysis of the currently available virtualization alternatives and then goes on to describe the ViewOS system, highlighting its current shortcomings. The vloader project is then proposed as a possible solution to some of these inconveniences, with a working proof of concept and examples to outline the potential of this new virtualization technique. By injecting specific code and libraries in the middle of the binary loading mechanism provided by the ELF standard, the vloader project enables a streamlined and simplified approach to tracing system calls. With the advantages outlined in the paper, this method offers better performance and portability than the currently available ViewOS implementations. Furthermore, some of its disadvantages are also discussed, along with their possible solutions.
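The vloader described above hooks into the ELF loading mechanism itself; as a loose, simplified analogy (an assumption for illustration, not the vloader technique), the sketch below uses the dynamic loader's LD_PRELOAD facility to interpose on a process's open() calls and trace them before forwarding to the real libc implementation.

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <fcntl.h>
    #include <stdarg.h>
    #include <stdio.h>
    #include <sys/types.h>

    /* Compile with: gcc -shared -fPIC -o libtrace.so trace.c -ldl
       Run with:     LD_PRELOAD=./libtrace.so ls                    */
    int open(const char *path, int flags, ...) {
        /* Look up the "real" open provided by libc. */
        int (*real_open)(const char *, int, ...) =
            (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

        mode_t mode = 0;
        if (flags & O_CREAT) {          /* the mode argument only exists with O_CREAT */
            va_list ap;
            va_start(ap, flags);
            mode = va_arg(ap, mode_t);
            va_end(ap);
        }
        fprintf(stderr, "open(%s) intercepted\n", path);  /* per-process "view" hook */
        return real_open(path, flags, mode);
    }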
APA, Harvard, Vancouver, ISO, and other styles
3

Whitaker, Andrew. "Building system services with virtual machine monitors /." Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/6855.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sharma, Ankur Kumar. "VPLACEMENT: Contention Aware Virtual Machine Placement System." Research Showcase @ CMU, 2014. http://repository.cmu.edu/theses/60.

Full text
Abstract:
Maximizing the number of co-hosted virtual machines (VMs) while still maintaining the desired performance level is a critical goal in the cloud. As we pack more virtual machines onto a physical machine (PM), resource contention increases, thereby affecting response time. This virtual machine placement problem has been studied extensively, and most of the effort has gone into either allocating more resources to virtual machines (resizing) or migrating them to a higher-capacity PM based on resource demand estimation. Studies have also shown that, in the presence of resource contention, resource demand estimation mechanisms can predict higher resource requirements than are actually needed. Hence, deciding virtual machine placement and allocated resources based on utilization estimation can lead to inefficient usage of PM resources. We propose a novel approach to this problem which focuses on overall application response time rather than on individual virtual machines. Large-scale applications are deployed as multi-tier components, and these components interact with each other so that the application can perform its task. Our placement algorithm uses the dependency relationships between these components to understand application response-time behavior, focusing on reducing the performance degradation caused by resource contention. We propose a VM placement system termed Vplacement. This system uses traffic analysis to understand the dependency relationships between application components. The dependency relationships and traffic analysis provide vital data such as the impact of a component's processing time on application response time and the probability of resource contention between a pair of component nodes (co-arrival probability). The impact and co-arrival probability are used by the placement engine of Vplacement to minimize the degradation of application performance due to resource contention by co-hosting low-impact component nodes together.
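The abstract does not spell out Vplacement's scoring rule, so the following is only a hypothetical sketch of the idea it describes: when placing a component, penalize hosts where the combined impact and co-arrival probability with already-hosted components is high, and pick the host with the lowest expected contention penalty. All names, data and the penalty formula are assumptions made for illustration.

    #include <stdio.h>
    #include <float.h>

    #define HOSTS 3
    #define COMPS 6

    /* Hypothetical inputs: impact[i] is how much component i's processing time
       affects application response time; coarrival[i][j] is the probability that
       components i and j contend for a resource at the same moment. */
    static double impact[COMPS] = {0.9, 0.7, 0.6, 0.3, 0.2, 0.1};
    static double coarrival[COMPS][COMPS];
    static int placement[COMPS];

    /* Expected contention penalty of co-hosting component c with host h's tenants. */
    static double penalty(int c, int h) {
        double p = 0.0;
        for (int j = 0; j < COMPS; j++)
            if (placement[j] == h)
                p += coarrival[c][j] * (impact[c] + impact[j]);
        return p;
    }

    int main(void) {
        /* Toy co-arrival matrix: neighbouring tiers tend to be active together. */
        for (int i = 0; i < COMPS; i++)
            for (int j = 0; j < COMPS; j++)
                coarrival[i][j] = (i == j) ? 0.0 : 1.0 / (1 + (i > j ? i - j : j - i));
        for (int i = 0; i < COMPS; i++) placement[i] = -1;

        /* Greedy placement: each component goes to the host with the lowest penalty. */
        for (int c = 0; c < COMPS; c++) {
            int best = 0; double best_p = DBL_MAX;
            for (int h = 0; h < HOSTS; h++) {
                double p = penalty(c, h);
                if (p < best_p) { best_p = p; best = h; }
            }
            placement[c] = best;
            printf("component %d -> host %d (penalty %.2f)\n", c, best, best_p);
        }
        return 0;
    }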
APA, Harvard, Vancouver, ISO, and other styles
5

Jayaraman, Arunkumar, and Pavankumar Rayapudi. "Comparative Study of Virtual Machine Software Packages with Real Operating System." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1952.

Full text
Abstract:
Virtualization allows computer users to utilize their resources more efficiently and effectively. The operating system that runs on top of the virtual machine or hypervisor is called the guest OS. The virtual machine is an abstraction of the real physical machine. The main aim of this thesis work was to analyze different kinds of virtualization software packages and to investigate their advantages and disadvantages. In addition, we analyzed the performance of the virtualization software packages against a real operating system in terms of web services. Web servers play an important role on the Internet. The response time and throughput of a web server differ between virtualization software packages and between a real host and a virtual host. In this thesis, we analyzed web server performance on Linux and compared the throughput of three different virtualization software packages (VMware, QEMU, and VirtualBox). The performance results clearly indicate that the real machine performs better than the virtual machines, and that VMware performs better than the other virtualization software packages.
APA, Harvard, Vancouver, ISO, and other styles
6

Ye, Lei. "Energy Management for Virtual Machines." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/283603.

Full text
Abstract:
Current computing infrastructures use virtualization to increase resource utilization by deploying multiple virtual machines on the same hardware. Virtualization is particularly attractive for data centers, cloud computing, and hosting services; in these environments computer systems are typically configured with fast processors, large physical memory, and huge storage capable of supporting the concurrent execution of virtual machines. This high demand for resources translates directly into higher energy consumption and monetary costs, so managing the energy consumption of virtual machines is becoming increasingly critical. However, virtual machines make energy management more challenging because a layer of virtualization separates the hardware from the guest operating system executing inside a virtual machine. This dissertation addresses the challenge of designing energy-efficient storage, memory, and buffer cache for virtual machines by exploring innovative mechanisms as well as existing approaches. We analyze the architecture of the open-source virtual machine platform Xen and address energy management in each subsystem. For the storage system, we study the I/O behavior of virtual machine systems, address the isolation between the virtual machine monitor and the virtual machines, and increase the burstiness of disk accesses to improve energy efficiency. In addition, we propose transparent energy management of main memory for any type of guest operating system running inside a virtual machine. Furthermore, we design a dedicated mechanism for the buffer cache, based on the fact that data-intensive applications rely heavily on a large buffer cache that occupies a majority of physical memory. We also propose a novel hybrid mechanism that improves energy efficiency for any memory access. All of these mechanisms achieve significant energy savings while limiting the impact on virtual machine performance.
APA, Harvard, Vancouver, ISO, and other styles
7

Liang, Jiangang. "Development of logical models for CNC machine tool motion control system with application to virtual machine tool design /." For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2005. http://uclibs.org/PID/11984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sun, Yu. "JAVA VIRTUAL MACHINE DESIGN FOR EMBEDDED SYSTEMS: ENERGY, TIME PREDICTABILITY AND PERFORMANCE." OpenSIUC, 2010. https://opensiuc.lib.siu.edu/dissertations/186.

Full text
Abstract:
Embedded systems can be found everywhere in our daily lives. Due to the great variety of embedded devices, the platform-independent Java language provides a good solution for embedded system development. The Java virtual machine (JVM) is the most critical component of all kinds of Java platforms; hence, it is extremely important to study the special design of JVMs for embedded systems. The key challenges of designing a successful JVM for embedded systems are energy efficiency, time predictability and performance, each of which is investigated in this dissertation. We first study the energy issue of the JVM on embedded systems. With a cycle-accurate simulator, we study each stage of Java execution separately to test the effects of different configurations in both software and hardware. After that, an alternative Adaptive Optimization System (AOS) model is introduced, which estimates cost/benefit using energy data instead of running time. We tuned the parameters of this model to study how to improve dynamic compilation and optimization in Jikes RVM in terms of energy consumption. To further reduce the energy dissipation of the JVM on embedded systems, we study adaptive drowsy cache control for Java applications, where the JVM can be used to make better decisions on drowsy cache control. We explore the impact of different phases of Java applications on the timing behavior of cache usage, and then propose several techniques to adaptively control the drowsy cache to reduce energy consumption with minimal impact on performance. We also observe that the traditional Java code generation and instruction fetch paths are not efficient, so we study three hardware-based code caching strategies that attempt to write and read dynamically generated Java code faster and more energy-efficiently. Time predictability is another key challenge for JVMs on embedded systems, so we exploit multicore computing to reduce the timing unpredictability caused by dynamic compilation and adaptive optimization. Our goal is to retain high performance comparable to that of traditional dynamic compilation and, at the same time, obtain better time predictability for the JVM. We study pre-compilation techniques to utilize another core more efficiently, and develop the Pre-optimization on Another Core (PoAC) scheme to replace AOS in the Jikes JVM, which is very sensitive to execution-time variation and greatly impacts time predictability. Finally, we propose two new approaches that automatically parallelize Java programs at run time in order to meet the performance challenge of JVMs on embedded systems. These approaches rely on run-time trace information collected during program execution and dynamically recompile Java bytecode so that it can be executed in parallel. One approach utilizes trace information to improve traditional loop parallelization, and the other parallelizes traces instead of loop iterations.
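As a purely hypothetical illustration of the kind of cost/benefit test described above (not the dissertation's model and not Jikes RVM code), the sketch below recompiles a method at a higher optimization level only when the energy expected to be saved over the method's remaining invocations exceeds the energy cost of compiling it; every name and constant here is an assumption.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical per-method profile, with energy taking the place of time. */
    struct method_profile {
        double energy_per_call;     /* joules per invocation at the current level */
        double expected_calls_left; /* predicted future invocations               */
        double speedup_factor;      /* estimated energy reduction at next level   */
        double compile_energy;      /* joules spent recompiling at next level     */
    };

    /* AOS-style test: recompile only if the expected future saving outweighs
       the one-time compilation cost. */
    bool should_recompile(const struct method_profile *m) {
        double future_now  = m->energy_per_call * m->expected_calls_left;
        double future_next = future_now / m->speedup_factor;
        return (future_now - future_next) > m->compile_energy;
    }

    int main(void) {
        struct method_profile hot  = {0.002, 50000.0, 1.6, 15.0};
        struct method_profile cold = {0.002,   200.0, 1.6, 15.0};
        printf("hot method : %s\n", should_recompile(&hot)  ? "recompile" : "keep");
        printf("cold method: %s\n", should_recompile(&cold) ? "recompile" : "keep");
        return 0;
    }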
APA, Harvard, Vancouver, ISO, and other styles
9

Karlsson, Jan, and Patrik Eriksson. "How the choice of Operating System can affect databases on a Virtual Machine." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4848.

Full text
Abstract:
As databases grow in size, the need for optimizing databases is becoming a necessity. Choosing the right operating system to support your database becomes paramount to ensure that the database is fully utilized. Furthermore, with the virtualization of operating systems becoming more commonplace, we find ourselves with more choices than we have ever faced before. This paper demonstrates why the choice of operating system plays an integral part in deciding the right database for your system in a virtual environment. It contains an experiment which measured the benchmark performance of a database management system on various virtual operating systems. This experiment shows the effect a virtual operating system has on the database management system that runs upon it. These findings will help to promote future research into this area as well as provide a foundation on which future research can be based.
APA, Harvard, Vancouver, ISO, and other styles
10

Alabdulhafez, Abdulaziz. "Analysing and quantifying the influence of system parameters on virtual machine co-residency in public clouds." Thesis, University of Newcastle upon Tyne, 2015. http://hdl.handle.net/10443/2981.

Full text
Abstract:
Public Infrastructure-as-a-Service (IaaS) clouds promise significant efficiency to businesses and organisations. This efficiency is made possible by allowing “co-residency”, where Virtual Machines (VMs) that belong to multiple users share the same physical infrastructure. With co-residency being inevitable in public IaaS clouds, malicious users can leverage information leakage via side channels to launch several powerful attacks on honest co-resident VMs. Because co-residency is a necessary first step to launching side-channel attacks, this thesis looks into understanding the co-residency probability (i.e. the probability that a given VM receives a co-resident VM). The thesis aims to analyse and quantify the influence of cloud parameters (such as the number of hosts and users) on the co-residency probability in four commonly used Placement Algorithms (PAs): First Fit, Next Fit, Power Save and Random. This analysis then helps to identify the cloud parameter settings that reduce the co-residency probability under the four PAs. Because there are many cloud parameters and parameter settings to consider, this forms the main challenge in this thesis; to overcome it, fractional factorial design is used to reduce the number of experiments required to analyse and quantify the parameters' influence in various settings. The thesis takes a quantitative experimental simulation and analytical prediction approach to achieve its aim. Using a purpose-built VM co-residency simulator, (i) the most influential cloud parameters affecting the co-residency probability in the four PAs have been identified. Identifying the most influential parameters has helped to (ii) explore the settings of these parameters that best reduce the co-residency probability under the four PAs. Finally, an analytical estimation, with the coexistence of different populations of attackers, has been derived to (iii) find the probability that a newly co-residing VM belongs to an attacker. The thesis identifies the number of hosts as the most influential cloud parameter for the co-residency probability in the four PAs. It also presents evidence that VMs hosted in IaaS clouds that use Next Fit or Random are more resilient against receiving co-resident VMs than when First Fit or Power Save is used. Further, VMs in IaaS clouds with a higher number of hosts are less likely to exhibit co-residency. The thesis generates new insights into the potential of co-residency reduction to reduce the attack surface for side-channel attacks. The outcome is a plausible blueprint for IaaS cloud providers to consider the influence on the co-residency probability as an important selection factor for cloud settings and PAs.
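To make the co-residency idea concrete, here is a small, self-contained Monte Carlo sketch (not the thesis's purpose-built simulator; the host count, slots per host and half-full background load are arbitrary assumptions) that places a victim VM and then an attacker VM under First Fit and under Random placement and estimates how often they land on the same host. With these toy numbers First Fit co-locates them almost always, while Random rarely does, which mirrors the qualitative finding reported above.

    #include <stdio.h>
    #include <stdlib.h>

    #define HOSTS 100
    #define SLOTS 4          /* VM capacity per host */
    #define TRIALS 10000

    static int place_first_fit(const int used[]) {
        for (int h = 0; h < HOSTS; h++)
            if (used[h] < SLOTS) return h;
        return -1;
    }

    static int place_random(const int used[]) {
        int h;
        do { h = rand() % HOSTS; } while (used[h] >= SLOTS);
        return h;
    }

    /* Fill half the cloud with background VMs, then place a victim and an
       attacker with the same policy and count how often they share a host. */
    static double estimate(int (*place)(const int[])) {
        int hits = 0;
        for (int t = 0; t < TRIALS; t++) {
            int used[HOSTS] = {0};
            for (int i = 0; i < HOSTS * SLOTS / 2; i++)
                used[place(used)]++;
            int victim = place(used);   used[victim]++;
            int attacker = place(used); used[attacker]++;
            if (victim == attacker) hits++;
        }
        return (double)hits / TRIALS;
    }

    int main(void) {
        srand(42);
        printf("estimated co-residency probability, first fit: %.4f\n", estimate(place_first_fit));
        printf("estimated co-residency probability, random   : %.4f\n", estimate(place_random));
        return 0;
    }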
APA, Harvard, Vancouver, ISO, and other styles
11

Westerberg, Simon. "Semi-Automating Forestry Machines : Motion Planning, System Integration, and Human-Machine Interaction." Doctoral thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-89067.

Full text
Abstract:
The process of forest harvesting is highly mechanized in most industrialized countries, with felling and processing of trees performed by technologically advanced forestry machines. However, the maneuvering of the vehicles through the forest as well as the control of the on-board hydraulic boom crane is currently performed through continuous manual operation. This complicates the introduction of further incremental productivity improvements to the machines, as the operator becomes a bottleneck in the process. A suggested solution strategy is to enhance the production capacity by increasing the level of automation. At the same time, the working environment for the operator can be improved by a reduced workload, provided that the human-machine interaction is adapted to the new automated functionality. The objectives of this thesis are 1) to describe and analyze the current logging process and to locate areas of improvement that can be implemented in current machines, and 2) to investigate future methods and concepts that may require changes in work methods as well as in machine design and technology. The thesis describes the development and integration of several algorithmic methods and the implementation of corresponding software solutions, adapted to the forestry machine context. Following data recording and analysis of the current work tasks of machine operators, trajectory planning and execution for a specific category of forwarder crane motions has been identified as an important first step for short-term automation. Using the method of path-constrained trajectory planning, automated crane motions were demonstrated to provide a potentially substantial improvement over motions performed by experienced human operators. An extension of this method was developed to automate selected motions even for existing sensorless machines; evaluation suggests that this method is feasible for reasonable deviations in initial conditions. Another important aspect of partial automation is the human-machine interaction. For this specific application, a simple and intuitive interaction method for accessing automated crane motions was suggested, based on head tracking of the operator. A preliminary interaction model derived from user experiments yielded promising results for forming the basis of a target selection method, particularly when combined with a traded control strategy. Further, a modular software platform was implemented, integrating several important components into a framework for designing and testing future interaction concepts. Specifically, this system was used to investigate concepts of teleoperation and virtual-environment feedback. Results from user tests show that visual information provided by a virtual environment can be advantageous compared to traditional video feedback with regard to both objective and subjective evaluation criteria.
APA, Harvard, Vancouver, ISO, and other styles
12

Aichouch, Mohamed El Mehdi. "Evaluation of a multiple criticality real-time virtual machine system and configuration of an RTOS's resources allocation techniques." Thesis, Rennes, INSA, 2014. http://www.theses.fr/2014ISAR0014/document.

Full text
Abstract:
In the domain of server and mainframe systems, virtualizing a computing system's physical resources to achieve improved sharing and utilization has been well established for decades. Full virtualization of all system resources makes it possible to run multiple guest operating systems on a single physical platform. Recently, the availability of full virtualization on physical platforms that target embedded systems has created new use cases in the domain of real-time embedded systems. In this dissertation we use an existing virtual machine monitor (VMM) to evaluate the performance of a real-time operating system. We observed that the virtual machine monitor affects the internal overheads and latencies of the guest OS. Our analysis revealed that the hardware mechanisms that allow a virtual machine monitor to virtualize the processor, the memory management unit, and the input/output devices efficiently are necessary to limit the overhead of virtualization. More importantly, the scheduling of virtual machines by the VMM is essential to guarantee the temporal constraints of the system and has to be configured carefully. In a second piece of work, starting from a previous project aimed at allowing a system designer to explore a software-hardware co-design of a solution using high-level simulation models, we proposed a methodology that allows the transformation of a simulation model into a binary executable on a physical platform. The idea is to provide the system designer with the necessary tools to rapidly explore the design space and validate it, and then to generate a configuration that can be used directly on a physical platform. We used a model-driven engineering approach to perform a model-to-model transformation that converts the simulation model into an executable model, and we used a middleware able to support a variety of resource allocation techniques in order to implement the configuration previously selected by the system designer at the simulation phase. We developed a prototype that implements our methodology and validates our concepts, and the results of the experiments confirmed the viability of this approach.
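As a minimal sketch of the kind of latency measurement such an evaluation relies on (not the thesis's actual benchmark), the program below repeatedly requests a 1 ms sleep and records how much later than requested it actually wakes up; running it natively and then inside a guest VM gives a rough view of the extra latency the virtualization layer introduces. The period and sample count are arbitrary assumptions.

    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <time.h>

    #define SAMPLES 1000
    #define PERIOD_NS 1000000L          /* request a 1 ms sleep */

    static long diff_ns(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
    }

    int main(void) {
        struct timespec req = {0, PERIOD_NS}, t0, t1;
        long worst = 0, total = 0;
        for (int i = 0; i < SAMPLES; i++) {
            clock_gettime(CLOCK_MONOTONIC, &t0);
            nanosleep(&req, NULL);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            /* Wake-up latency beyond the requested period. */
            long lat = diff_ns(t0, t1) - PERIOD_NS;
            total += lat;
            if (lat > worst) worst = lat;
        }
        printf("avg latency: %ld ns, worst: %ld ns\n", total / SAMPLES, worst);
        return 0;
    }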
APA, Harvard, Vancouver, ISO, and other styles
13

Fur, Filip. "The Improvement of Automating the Guest OS Configuration of Virtual Machines Deployed from Templates: A Case Study." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153128.

Full text
Abstract:
This paper investigates the effects of automating system administration within a virtualized server environment. For system administrators, the creation and configuration of new virtual machines has been shown to be a common yet time-consuming and labour-intensive task. This process has therefore been studied thoroughly to find out to what degree it lends itself to automation, and its nature was found to be well suited to a high degree of automation. The automation tool is developed, presented and evaluated. A series of quantitative tests was orchestrated, testing both manual configuration and configuration using the tool. The results were analysed, and it became clear that manual configuration has an interruptive behaviour which is not present in the produced process. The time improvements of the automation are approximated from the gathered test data, and the results show a significant improvement in process speed-up, with a test average of 300% corresponding to roughly 22 minutes per configured VM. Note that when calculating time saving and process speed-up, the assumption is made that two employees depend on the configuration, which has often been seen to be the case. This work has shed light on the need for a more holistic estimation model for calculating process speed-up when multiple people depend on a process and time is added due to loss of operator focus (e.g. due to interruptive behaviour during the process). Furthermore, a strong case is made for the implementation of process automation in administrative tasks within virtualized server environments.
APA, Harvard, Vancouver, ISO, and other styles
14

Driscoll, Michael T. "A machine vision system for capture and interpretation of an orchestra conductor's gestures." Link to electronic version, 1999. http://www.wpi.edu/Pubs/ETD/Available/etd-051199-142455/unrestricted/thesis.pdf.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: color thresholding; video for Windows; center of mass calculation; contour extraction; area calculation; virtual orchestra; Broadway. Includes bibliographical references (leaves 62-63).
APA, Harvard, Vancouver, ISO, and other styles
15

Lindgren, Jonas. "Analysis of requirements for an automated testing and grading assistance system." Thesis, Linköpings universitet, Datorteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-105692.

Full text
Abstract:
This thesis analyzes the configuration and security requirements of an automated assignment testing system. The requirements for a flexible yet powerful configuration format are discussed in depth, and an appropriate configuration format is chosen. Additionally, the overall security requirements of this system are discussed, analyzing the different alternatives available to fulfill the requirements.

The thesis presentation has already been completed.

APA, Harvard, Vancouver, ISO, and other styles
16

Baruchi, Artur. "Memory Dispatcher: uma contribuição para a gerência de recursos em ambientes virtualizados." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-17082010-110618/.

Full text
Abstract:
Virtual machines have gained great importance with the advent of multi-core processors (on the x86 platform) and the low cost of hardware components such as physical memory. This increase in computational power has created a new challenge: taking advantage of idle resources. Virtualization, although an old concept, has become popular again in research centers and corporations because it allows these now-abundant idle resources to be exploited. This work studies the main techniques for managing computational resources in virtualized environments. Although many of the concepts used in Virtual Machine Monitor projects have been ported, with little or no change, from conventional operating systems, some resources are still difficult to virtualize efficiently due to paradigms inherited from those same operating systems. Finally, the Memory Dispatcher (MD) is presented, a memory management mechanism whose main objective is to distribute memory among virtual machines more efficiently. This mechanism, implemented in C, was tested on the Xen Virtual Machine Monitor and showed memory gains of up to 70%.
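The abstract gives the goal (distribute memory among VMs more efficiently) but not the algorithm, so the sketch below is only a toy redistribution rule invented for illustration: split a fixed memory pool among VMs in proportion to their measured demand, subject to a per-VM floor. A real mechanism such as the Memory Dispatcher would then push the computed targets to the hypervisor, for example through Xen's balloon driver.

    #include <stdio.h>

    #define VMS 3

    /* Toy rule (an assumption, not the Memory Dispatcher code): share pool_mb
       among VMs proportionally to demand, never dropping below floor_mb.
       The floor can push the total slightly above the pool; a real policy
       would reconcile that against the hypervisor's free memory. */
    void redistribute(const double demand_mb[], double target_mb[],
                      double pool_mb, double floor_mb) {
        double total = 0.0;
        for (int i = 0; i < VMS; i++) total += demand_mb[i];
        for (int i = 0; i < VMS; i++) {
            double share = (total > 0.0) ? pool_mb * demand_mb[i] / total
                                         : pool_mb / VMS;
            target_mb[i] = (share < floor_mb) ? floor_mb : share;
        }
    }

    int main(void) {
        double demand[VMS] = {1800.0, 600.0, 200.0};   /* measured working sets */
        double target[VMS];
        redistribute(demand, target, 4096.0, 256.0);
        for (int i = 0; i < VMS; i++)
            printf("VM%d: demand %.0f MB -> target %.0f MB\n", i, demand[i], target[i]);
        return 0;
    }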
APA, Harvard, Vancouver, ISO, and other styles
17

Tsiftes, Nicolas. "Storage-Centric System Architectures for Networked, Resource-Constrained Devices." Doctoral thesis, Uppsala universitet, Datorteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-267628.

Full text
Abstract:
The emergence of the Internet of Things (IoT) has increased the demand for networked, resource-constrained devices tremendously. Many of the devices used for IoT applications are designed to be resource-constrained, as they typically must be small, inexpensive, and powered by batteries. In this dissertation, we consider a number of challenges pertaining to these constraints: system support for energy efficiency; flash-based storage systems; programming, testing, and debugging; and safe and secure application execution. The contributions of this dissertation are made through five research papers addressing these challenges. Firstly, to enhance the system support for energy-efficient storage in resource-constrained devices, we present the design, implementation, and evaluation of the Coffee file system and the Antelope DBMS. Coffee provides a sequential write throughput that is over 92% of the attainable flash driver throughput, and has a constant memory footprint for open files. Antelope is the first full-fledged relational DBMS for sensor networks, and it provides two novel indexing algorithms to enable fast and energy-efficient database queries. Secondly, we contribute a framework that extends the functionality and increases the performance of sensornet checkpointing, a debugging and testing technique. Furthermore, we evaluate how different data compression algorithms can be used to decrease the energy consumption and data dissemination time when reprogramming sensor networks. Lastly, we present Velox, a virtual machine for IoT applications. Velox can enforce application-specific resource policies. Through its policy framework and its support for high-level programming languages, Velox helps to secure IoT applications. Our experiments show that Velox monitors applications' resource usage and enforces policies with an energy overhead below 3%. The experimental systems research conducted in this dissertation has had a substantial impact both in the academic community and the open-source software community. Several of the produced software systems and components are included in Contiki, one of the premier open-source operating systems for the IoT and sensor networks, and they are being used both in research projects and commercial products.
APA, Harvard, Vancouver, ISO, and other styles
18

Procházka, Boris. "Útoky na operační systém Linux v teorii a praxi." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237139.

Full text
Abstract:
This master's thesis deals with Linux kernel security from the attacker's point of view. It maps the methods and techniques of disguising computing resources that are used by today's IT pirates. The thesis presents a unique method of attack directed at the system call interface and implemented in the form of two tools (rootkits). The thesis consists of a theoretical and a practical part. Emphasis is placed especially on the practical part, which demonstrates the presented information in the form of experiments and shows its use in real life. Readers are systematically guided through the creation of a unique rootkit, which is capable of infiltrating the Linux kernel by a newly discovered method -- even without support for loadable modules. Part of the thesis focuses on the issue of detecting the discussed attacks and on effective defence against them.
APA, Harvard, Vancouver, ISO, and other styles
19

Hinton, Michael Glenn. "Inter-Core Interference Mitigation in a Mixed Criticality System." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8648.

Full text
Abstract:
In this thesis, we evaluate how well isolation can be achieved between two virtual machines within a mixed criticality system on a multi-core processor. We achieve this isolation with Jailhouse, an open-source, minimalist hypervisor. We then enhance Jailhouse with core throttling, a technique we use to minimize inter-core interference between VMs. Then, we run workloads with and without core throttling to determine the effect throttling has on interference between a non-real time VM and a real-time VM. We find that Jailhouse provides excellent isolation between VMs even without throttling, and that core throttling suppresses the remaining inter-core interference to a large extent.
APA, Harvard, Vancouver, ISO, and other styles
20

Al-ou'n, Ashraf M. S. "VM Allocation in Cloud Datacenters Based on the Multi-Agent System. An Investigation into the Design and Response Time Analysis of a Multi-Agent-based Virtual Machine (VM) Allocation/Placement Policy in Cloud Datacenters." Thesis, University of Bradford, 2017. http://hdl.handle.net/10454/16067.

Full text
Abstract:
Recent years have witnessed a surge in demand for infrastructure and services to cover the high demands of processing big chunks of data and applications, resulting in mega cloud datacenters. A datacenter is highly complex, and it is increasingly difficult to identify and allocate, efficiently and quickly, an appropriate host for a requested virtual machine (VM). Establishing good awareness of all of a datacenter's resources enables allocation ("placement") policies to make the best decisions, reducing the time needed to allocate and create the VM(s) at the appropriate host(s). However, current placement ("allocation") algorithms and policies do not focus efficiently on awareness of the datacenter's resources; moreover, they are based on conventional static techniques, which adversely affects the allocation progress of the policies. This thesis proposes a new agent-based allocation/placement policy that employs features of the multi-agent system to establish good awareness of cloud datacenter resources and to provide an efficient allocation decision for the requested VMs. Specifically, (a) the multi-agent concept is used as part of the placement policy, (b) a Contract Net Protocol is devised to establish good awareness, and (c) a verification process is developed to verify the full-dimensional VM specifications during allocation. The results show a reduction in the response time of VM allocation and an improvement in the usage of occupied resources. The proposed agent-based policy was implemented using the CloudSim toolkit and was compared, on the basis of a series of typical numerical experiments, with the toolkit's default policy. The comparative study was carried out in terms of the time taken for VM allocation and other aspects such as the number of available VM types and the amount of occupied resources. Moreover, a two-stage comparative study is presented in this thesis. First, the proposed policy is compared with four state-of-the-art algorithms, namely the Random algorithm and three one-dimensional Bin-Packing algorithms. Second, the three Bin-Packing algorithms were enhanced with a two-dimensional verification structure and compared against the proposed new algorithm of the agent-based policy. Following this rigorous comparative study, it was shown through the numerical experiments of all stages that the proposed new agent-based policy had superior performance in terms of allocation times. Finally, avenues for future work arising from this thesis are included.
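The thesis's Contract Net Protocol details are not given in the abstract, so the following is only a schematic sketch of the general Contract Net idea it builds on: a manager issues a call for proposals for a requested VM, each host agent that can accommodate it replies with a bid, and the manager awards the VM to the best bidder. The agent fields, the bid formula and the sample numbers are all assumptions.

    #include <stdio.h>

    #define HOSTS 4

    /* Hypothetical host-agent state: each agent knows its own free capacity. */
    struct host_agent { int id; int free_cpu; int free_ram_mb; };

    struct vm_request { int cpu; int ram_mb; };

    /* Contract-Net-style round: collect bids from agents that can host the VM
       (here the bid is simply the leftover capacity) and award to the best. */
    int award(struct host_agent agents[], int n, struct vm_request req) {
        int winner = -1, best_bid = -1;
        for (int i = 0; i < n; i++) {
            if (agents[i].free_cpu < req.cpu || agents[i].free_ram_mb < req.ram_mb)
                continue;                                   /* agent declines   */
            int bid = (agents[i].free_cpu - req.cpu)        /* agent's proposal */
                    + (agents[i].free_ram_mb - req.ram_mb) / 256;
            if (bid > best_bid) { best_bid = bid; winner = i; }
        }
        if (winner >= 0) {                                  /* award and update */
            agents[winner].free_cpu    -= req.cpu;
            agents[winner].free_ram_mb -= req.ram_mb;
        }
        return winner;
    }

    int main(void) {
        struct host_agent agents[HOSTS] = {
            {0, 8, 16384}, {1, 2, 4096}, {2, 16, 32768}, {3, 4, 8192} };
        struct vm_request req = {4, 8192};
        printf("VM awarded to host %d\n", award(agents, HOSTS, req));
        return 0;
    }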
APA, Harvard, Vancouver, ISO, and other styles
21

Kreutz, Diego Luis. "FLEXVAPS: UM SISTEMA DE GERENCIAMENTO DE VIRTUAL APPLIANCES PARA MÁQUINAS VIRTUAIS HETEROGÊNEAS." Universidade Federal de Santa Maria, 2009. http://repositorio.ufsm.br/handle/1/5354.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Virtual appliance is a new concept derived from the world of virtual machine monitors: a data package that can be distributed electronically. Today, free and commercial virtual appliance repositories are available on the market. These packages are mostly composed of pre-configured and pre-optimized operating systems and applications assembled to solve a specific computing problem. The growing virtual machine monitor market has led to a wide variety of virtual appliances being available to end users. This diversity naturally results in heterogeneous virtual machines, i.e., instances of virtual appliances that run on different virtual machine monitors. Thus, the main goal of this work is to propose and prototype a virtual appliance management solution for heterogeneous virtual machines, something little explored until now. The aim is to give end users the ability to manage different environments, such as computer labs, clusters, networks and other kinds of computing environments, using virtual appliances. Virtual appliance creation and maintenance is more practical and effective for both system administrators and end users. Moreover, sharing different virtual appliances avoids the waste of time and disk space that usually occurs with traditional approaches. In this work a system for managing virtual appliances for heterogeneous virtual machines is proposed, prototyped and evaluated. This system has proved to be feasible for using VAP repositories to manage networked machines and systems, simplifying the work of end users and system administrators.
APA, Harvard, Vancouver, ISO, and other styles
22

Bonasegale, Simone. "Build automation systems per Java: analisi dello stato dell'arte." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23316/.

Full text
Abstract:
The process of developing software involves many tasks, and it is in the developer's interest that these be automated so that they can concentrate on the aspects specific to the domain of the software being built. Build systems are the tools devoted to this purpose; this document analyses the state of the art of the main build systems targeting the Java Virtual Machine (JVM), also by looking at the programming style with which they are implemented.
APA, Harvard, Vancouver, ISO, and other styles
23

Benhalima, Djamel-Eddine. "Contribution à la conception d'un système d'analyse expérimentale SICOPE fondée sur une approche orientée-objet : Application à la communication graphique." Valenciennes, 1995. https://ged.uphf.fr/nuxeo/site/esupversions/3eea74bf-26cc-473d-9ec6-11405b54fb6c.

Full text
Abstract:
The success of experimental analysis in domains as complex as graphical communication further increases the need for experimentation. Yet designing an experimental protocol is a difficult task, given the considerable number of parameters involved and the multiplicity of optimization criteria. However, few tools exist, such as a methodology or a support system, to assist experimenters. It would indeed be illusory to think that experimental analysis has reached maturity, since many open problems remain concerning methodology and tools able to provide real support to the process of designing experimental protocols. The objective of this thesis is to propose, on the one hand, a methodology for experimental studies and, on the other hand, an interactive system for designing experimental protocols, SICOPE.
APA, Harvard, Vancouver, ISO, and other styles
24

Liu, Xing. "Hybrid real-time operating system integrated with middleware for resource-constrained wireless sensor nodes." Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22472/document.

Full text
Abstract:
With the recent advances in microelectronic, computing and communication technologies, wireless sensor network (WSN) nodes have become physically smaller and more inexpensive. As a result, WSN technology has become increasingly popular in widespread application domains. Since WSN nodes are minimized in physical size and cost, they are heavily restricted in platform resources such as processor computation ability, memory resources and energy supply. The constrained platform resources and diverse application requirements make software development on the WSN platform complicated. On the one hand, the software running on the WSN platform should have a small memory footprint, low energy consumption and high execution efficiency. On the other hand, diverse application development requirements, such as real-time guarantees and high reprogramming performance, should be met by the WSN software. Operating system (OS) technology is significant for WSN proliferation: an outstanding WSN OS can not only utilize the constrained WSN platform resources efficiently, but also serve WSN applications soundly. Currently, a set of WSN OSes have been developed, such as TinyOS, Contiki, SOS, openWSN and mantisOS. However, many OS development challenges still exist, such as the development of a WSN OS which is high in real-time performance yet low in memory footprint, the improvement of the utilization efficiency of the memory and energy resources on WSN platforms, and the provision of a user-friendly application development environment to WSN users. In this thesis, a new hybrid, real-time, energy-efficient, memory-efficient, fault-tolerant and user-friendly WSN OS, MIROS, is developed. MIROS uses hybrid scheduling to combine the advantages of the event-driven system's low memory consumption and the multithreaded system's high real-time performance. By doing so, real-time scheduling can be achieved on severely resource-constrained WSN platforms. In addition to the hybrid scheduling, dynamic memory allocators are also realized in MIROS. Differing from other dynamic allocation approaches, the memory heap in MIROS can be extended and the memory fragments in MIROS can be defragmented. As a result, MIROS allocators become flexible and the memory resources can be utilized more efficiently. Besides the above mechanisms, an energy conservation mechanism is also implemented in MIROS. Different from most other WSN OSes, in which the energy resource is conserved only from the software aspect, energy conservation in MIROS is achieved from both the software aspect and the multi-core hardware aspect. With this conservation mechanism, the energy cost is reduced significantly and the lifetime of the WSN nodes is prolonged. Furthermore, MIROS implements the new middleware software EMIDE in order to provide a user-friendly application development environment to WSN users. With EMIDE, the WSN application space can be decoupled from the low-level system space. Consequently, application programming can be simplified, as users only need to focus on the application space, and application reprogramming performance can be improved, as only the application image rather than the monolithic image needs to be updated during the reprogramming process. The performance evaluation of MIROS proves that it is a real-time OS with a small memory footprint, low energy cost and high execution efficiency. Thus, it is suitable for use on many WSN platforms, including the BTnode, IMote, SenseNode, TelosB and T-Mote Sky. The performance evaluation of EMIDE proves that EMIDE has a small memory cost and low energy consumption, and that it supports small application code. Therefore, it can be used on highly resource-constrained WSN platforms to provide a user-friendly development environment to WSN users.
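As a loose illustration of the hybrid scheduling idea described above (not MIROS code; POSIX threads on a desktop stand in for the sensor-node scheduler), the sketch below keeps non-critical work as run-to-completion event handlers dispatched from a single queue, while one time-critical task runs preemptively in its own thread, so only the real-time work pays the per-thread stack cost.

    /* compile with: gcc -pthread hybrid.c */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    typedef void (*event_handler)(void);

    static event_handler queue[16];
    static int head = 0, tail = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void post_event(event_handler h) {
        pthread_mutex_lock(&lock);
        queue[tail % 16] = h;
        tail++;
        pthread_mutex_unlock(&lock);
    }

    static event_handler next_event(void) {
        event_handler h = NULL;
        pthread_mutex_lock(&lock);
        if (head != tail) { h = queue[head % 16]; head++; }
        pthread_mutex_unlock(&lock);
        return h;
    }

    /* Non-critical jobs: queued as events, run to completion on one stack. */
    static void log_sample(void)  { printf("event: log sensor sample\n"); }
    static void send_packet(void) { printf("event: send radio packet\n"); }

    /* Time-critical work runs preemptively in its own thread. */
    static void *realtime_task(void *arg) {
        (void)arg;
        for (int i = 0; i < 3; i++) {
            printf("real-time task: control loop tick\n");
            usleep(10000);
        }
        return NULL;
    }

    int main(void) {
        pthread_t rt;
        pthread_create(&rt, NULL, realtime_task, NULL);

        post_event(log_sample);
        post_event(send_packet);

        /* Event dispatcher: handlers run one at a time; a real kernel would
           sleep here when the queue is empty instead of exiting. */
        event_handler h;
        while ((h = next_event()) != NULL)
            h();

        pthread_join(rt, NULL);
        return 0;
    }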
APA, Harvard, Vancouver, ISO, and other styles
25

Ogura, Denis Ryoji. "Uma metodologia para caracterização de aplicações em ambientes de computação nas nuvens." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-12032012-114518/.

Full text
Abstract:
Cloud computing is a new term coined to express a recent technological trend that virtualizes the data center. This concept seeks better use of computational resources and corporate applications, virtualized through operating system, platform, infrastructure and software virtualization, among others. This virtualization relies on virtual machines (VMs) to execute applications in the virtualized environment. However, a VM may be configured in such a way that its performance suffers processing delays due to bottlenecks in some of the allocated hardware. In order to maximize hardware allocation when creating a VM, a method of application characterization was developed to collect performance data and find the best VM configuration. Based on this study and on workload metrics, it is possible to classify the type of application and present the most suitable environment, a recommended one and a non-recommended one. In this way, the likelihood of obtaining satisfactory performance in virtualized environments can be determined by characterizing the programs, which makes it possible to evaluate the behavior of each scenario and identify conditions that are important for its proper operation. To support this line of reasoning, single-processor and multiprocessor programs were executed under different virtual machine monitors. The results obtained were satisfactory and agree with each previously known application characteristic. However, exceptions to this method may occur, mainly when the virtual machine monitor is subjected to intense processing; in that case, the application may suffer delays due to the processing bottleneck in the virtual machine monitor, which changes the ideal environment for that application. Therefore, this study presents a method for identifying the ideal configuration for executing an application.
APA, Harvard, Vancouver, ISO, and other styles
26

Campello, Daniel Jose. "Optimizing Main Memory Usage in Modern Computing Systems to Improve Overall System Performance." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2568.

Full text
Abstract:
Operating systems use fast, CPU-addressable main memory to maintain an application’s temporary data as anonymous data and to cache copies of persistent data stored in slower block-based storage devices. However, the use of this faster memory comes at a high cost. Therefore, several techniques have been proposed in the literature to use main memory more efficiently. In this dissertation we introduce three distinct approaches to improve overall system performance by optimizing main memory usage. First, DRAM and host-side caching of file system data are used for speeding up virtual machine performance in today’s virtualized data centers. The clustering of VM images that share identical pages, coupled with data deduplication, has the potential to optimize main memory usage, since it provides more opportunity for sharing resources across processes and across different VMs. In our first approach, we study the use of content and semantic similarity metrics and a new algorithm to cluster VM images and place them on hosts where, through deduplication, main memory usage is improved. Second, while careful VM placement can improve memory usage by eliminating duplicate data, caches in current systems employ complex machinery to manage the cached data. Writing data to a page not present in the file system page cache causes the operating system to synchronously fetch the page into memory, blocking the writing process. In this dissertation, we address this limitation with a new approach to managing page writes that buffers the written data elsewhere in memory and unblocks the writing process immediately. This buffering allows the system to service file writes faster and with fewer memory resources. In our last approach, we investigate the use of emerging byte-addressable persistent memory technology to extend main memory as a less costly alternative to exclusively using expensive DRAM. We motivate and build a tiered memory system wherein persistent memory and DRAM co-exist and provide improved application performance at lower cost and power consumption, with the goal of placing the right data in the right memory tier at the right time. The proposed approach seamlessly performs page migration across memory tiers as access patterns change and/or to handle tier memory pressure.
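As a rough illustration of the write-buffering idea described above (staging written data elsewhere in memory and unblocking the writer immediately), the sketch below mimics it in user space with a background flusher thread. It is only an analogy under assumed names (BufferedWriter, the backing file path); the dissertation implements this inside the operating system's page cache, not in Python.

```python
import os, queue, tempfile, threading, time

class BufferedWriter:
    """Toy user-space analogy: accept a write, stage it in memory, return
    immediately, and let a background thread flush it to the slower backing
    file. Names and structure are illustrative only."""

    def __init__(self, backing_path):
        self.backing_path = backing_path          # backing file must exist
        self.pending = queue.Queue()
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def write(self, offset, data):
        # The caller is never blocked on a synchronous page fetch.
        self.pending.put((offset, data))

    def _flush_loop(self):
        with open(self.backing_path, "r+b") as f:
            while True:
                offset, data = self.pending.get()
                f.seek(offset)
                f.write(data)
                f.flush()

if __name__ == "__main__":
    path = tempfile.mkstemp()[1]
    with open(path, "wb") as f:
        f.write(b"\0" * 4096)                     # pre-size the backing file
    w = BufferedWriter(path)
    w.write(0, b"hello")                          # returns immediately
    time.sleep(0.2)                               # give the flusher a moment
    with open(path, "rb") as f:
        print(f.read(5))
    os.remove(path)
```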
APA, Harvard, Vancouver, ISO, and other styles
27

Rybák, Martin. "Konsolidace serverů za použití virtualizace." Master's thesis, Vysoká škola ekonomická v Praze, 2007. http://www.nusl.cz/ntk/nusl-77173.

Full text
Abstract:
The thesis deals with the complexity of current IT. Server consolidation using virtualization is presented as the answer to the continually growing complexity of server infrastructure. The thesis summarizes the basic aspects of this issue, compares the benefits, and analyzes problems that can emerge. It then outlines the consolidation journey, compares different types of virtualization, and elaborates on the contribution of virtualization to corporate IS/ICT and the flexibility of the resulting solution. It analyzes the current state of the market for virtualization tools, describes and compares products of the key market players, and examines new opportunities for virtualization, e.g. virtual desktop infrastructure. Finally, it suggests an approach for a consolidation project in practice and shows basic steps that should not be omitted. Besides a comprehensive view of the topic, the thesis also presents the benefits, risks, and questions to be raised and, at least partly, answers these questions.
APA, Harvard, Vancouver, ISO, and other styles
28

Sideris, Nikolaos. "Spatial decision support in urban environments using machine learning, 3D geo-visualization and semantic integration of multi-source data." Thesis, Limoges, 2019. http://www.theses.fr/2019LIMO0083/document.

Full text
Abstract:
The constantly increasing amount and availability of urban data derived from varying sources leads to an assortment of challenges that include, among others, the consolidation, visualization, and maximal exploitation prospects of these data. A preeminent problem affecting urban planning is the appropriate choice of location to host a particular activity (either a commercial or a common welfare service) or the correct use of an existing building or empty space. In this thesis we propose an approach to address these challenges with machine learning techniques, with the random forests classifier as the dominant method, in a system that combines, blends, and merges various types of data from different sources, encodes them using a novel semantic model that can capture and utilize both low-level geometric information and higher-level semantic information, and subsequently feeds them to the random forests classifier. The data are also forwarded to alternative classifiers and the results are appraised to confirm the prevalence of the proposed method. The data retrieved stem from a multitude of sources, e.g. open data providers and public organizations dealing with urban planning. Upon their retrieval and inspection at various levels (e.g. import, conversion, geospatial), they are appropriately converted to comply with the rules of the semantic model and the technical specifications of the corresponding subsystems. Geometrical and geographical calculations are performed and semantic information is extracted. Finally, the information from the earlier stages, along with the results from the machine learning techniques and the multicriteria methods, is integrated into the system and visualized in a front-end web-based environment able to execute and visualize spatial queries and to manage three-dimensional georeferenced objects, their retrieval, transformation, and visualization, acting as a decision support system.
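To make the classification step concrete, the following sketch shows how a random forests classifier of the kind described above might be trained on location features; the feature columns, labels, and values are invented for illustration and do not come from the thesis data.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature rows for candidate locations: [area_m2,
# distance_to_road_m, zoning_code]; labels: 1 = suitable, 0 = not suitable.
X = [[120.0, 35.0, 2], [80.0, 5.0, 1], [200.0, 60.0, 3], [95.0, 12.0, 1]] * 25
y = [0, 1, 0, 1] * 25

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("suitability of a new site:", clf.predict([[150.0, 20.0, 2]])[0])
```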
APA, Harvard, Vancouver, ISO, and other styles
29

Vial, Stéphane. "La structure de la révolution numérique : philosophie de la technologie." Phd thesis, Université René Descartes - Paris V, 2012. http://tel.archives-ouvertes.fr/tel-00776032.

Full text
Abstract:
What is the digital revolution a revolution of? The first level of analysis is historical. It aims to bring out the historical structure of the digital revolution, first by delimiting its diachronic scope and then by identifying its particular place within the general history of technology. The hypothesis is that the digital revolution is not a change of tools but an event of history, part of the long process of the mechanization of the West and of the succession of technical systems, culminating in the emergence of a new "technical system": the digital revolution is the revolution of our systemic technical infrastructure, that is, the advent of the "digital technical system". In this part, we favour the historical ground and the empirical data it provides, in the name of a philosophy of technology firmly opposed to any misotechnical metaphysics. The second level of analysis concerns perception. Beyond the digital revolution alone, it aims to bring out the phenomenological structure of any technical revolution by going back to the technical conditions of perception in general. The hypothesis is that a technical revolution is always an ontophanic revolution, that is, a shaking of the process by which being (ontos) appears to us (phaino) and, consequently, an upheaval of the very idea we have of reality. We rely here on the notion of "phenomenotechnique" borrowed from Gaston Bachelard, which leads us to defend a phenomenological constructivism according to which every technique is an ontophanic matrix into which our possible experience-of-the-world is cast. Like the previous ones, the digital revolution then appears as a revolution of our perceptual structures, whose phenomenological violence also explains the success and decline of the notion of the virtual. Of the latter we propose a critical genealogy and show that it has so far been only a failed attempt to elucidate digital phenomenality, because of the reverie of the unreal that it induces. The third level of analysis concerns digital phenomenality, finally approached in its positivity. It aims to grasp the ontophanic structure of the digital revolution, that is, the nature of the being of digital beings. The hypothesis is that digital ontophany results from eleven phenomenological characteristics proper to computed matter, presented in a didactic order that favours an overall understanding of the digital phenomenon: noumenality, ideality, interactivity, virtuality, versatility, reticularity, instant reproducibility, reversibility, destructibility, fluidity, and ludogeneity. We conclude by analysing the responsibility of design and creative activities in the phenomenotechnical genesis of the real, and in particular the role of design in the creative constitution of digital ontophany. As a phenomenotechnical activity, design is not only an ontophany-creating activity but also an intentionally factitive activity, one that aims to make-be as much as to make-do, in order to project the enchantment of the world. This is why digital design, because it has the capacity to generate new regimes of interactive experience, plays an essential role in shaping the digital revolution. The digital revolution is also something that is sculpted and shaped, cast and moulded in designers' projects. It is a revolution of our capacity to make the world, that is, to create being.
APA, Harvard, Vancouver, ISO, and other styles
30

Dewi, Alita. "Apport des nouvelles technologies interactives pour l'analyse intégrée en génie électrique : vers un laboratoire virtuel d'expérimentation en électrotechnique." Phd thesis, Grenoble INPG, 2001. http://tel.archives-ouvertes.fr/tel-00597707.

Full text
Abstract:
The race for innovation in electrical device technology places new constraints on the functionality of CAD systems in electrical engineering. These systems must allow fine-grained analysis of the multiphysics behaviour of electrical devices. This power, necessary in terms of modelling, also translates into complexity in terms of mastery by the user. Consequently, human-machine interaction techniques, which had long been considered of secondary interest in the field of electrical engineering, are becoming as important as the physical models. The objective of our work is to develop exploration methods and human-machine interfaces that are natural and easy to understand, in order to make electrical engineering simulation software easier to use. To reach this objective, we drew inspiration from the activities carried out in an electrical engineering experimentation laboratory. First, we modelled a real laboratory by identifying the roles of the electrical device under study, the manipulations to be set up, the experimentation protocols, and so on. We then developed an interactive system, the Laboratoire Virtuel d'Expérimentation en Électrotechnique (LVEE), built on the model of a real laboratory, in which the behaviour of the electrical device is obtained with simulation software. Virtual reality technology was taken into consideration to provide intuitive interaction and natural visualization of the physical behaviour of electrical devices.
APA, Harvard, Vancouver, ISO, and other styles
31

Scarlato, Michele. "Sicurezza di rete, analisi del traffico e monitoraggio." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3223/.

Full text
Abstract:
The work is divided into three macro-areas. The first concerns a theoretical analysis of how intrusions work, which software is used to carry them out, and how to protect oneself (using the devices that can generically be referred to as firewalls). The second macro-area analyses an intrusion coming from outside towards sensitive servers of a LAN. This analysis is carried out on the files captured by two network interfaces configured in promiscuous mode on a probe placed in the LAN; two interfaces are used so that the probe can attach to two LAN segments with different subnet masks. The attack is analysed with various tools, and this can in fact be considered the third part of the work: the files captured by the two interfaces are examined first with software that analyses full-content data, such as Wireshark, then with software that analyses session data, handled with Argus, and finally statistical data, handled with Ntop. The second-to-last chapter, before the conclusions, covers the installation of Nagios and its configuration for monitoring, through plugins, the remaining disk space on a remote agent machine and the MySQL and DNS services. Naturally, Nagios can be configured to monitor any type of service offered on the network.
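As a small illustration of the Nagios monitoring described in the last part of the work, the sketch below follows the standard Nagios plugin convention (exit code 0 = OK, 1 = WARNING, 2 = CRITICAL) to check the remaining disk space on a host; the thresholds and path are assumed values, not the thesis configuration.

```python
#!/usr/bin/env python3
"""Minimal Nagios-style check of remaining disk space.
Plugins report their result via the exit code: 0 = OK, 1 = WARNING, 2 = CRITICAL."""
import shutil
import sys

WARN_PCT, CRIT_PCT = 20.0, 10.0   # assumed thresholds: % of space still free

def check_disk(path="/"):
    usage = shutil.disk_usage(path)
    free_pct = usage.free * 100.0 / usage.total
    if free_pct < CRIT_PCT:
        print(f"DISK CRITICAL - {free_pct:.1f}% free on {path}")
        return 2
    if free_pct < WARN_PCT:
        print(f"DISK WARNING - {free_pct:.1f}% free on {path}")
        return 1
    print(f"DISK OK - {free_pct:.1f}% free on {path}")
    return 0

if __name__ == "__main__":
    sys.exit(check_disk(sys.argv[1] if len(sys.argv) > 1 else "/"))
```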
APA, Harvard, Vancouver, ISO, and other styles
32

Chen, Chun-Han, and 陳君翰. "Performance Evaluation of Machine-to-Machine System with Virtual Machines." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/19955400517395000332.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Networking and Multimedia
100
Recently, machine-to-machine (M2M) systems provide services in a variety of application domains, such as smart home, surveillance, remote control, healthcare, and consumer devices. The architecture of an M2M system is usually composed of different types of hardware components interconnected with a network. The performance of an M2M system is a critical factor, especially for applications with real-time requirements. For M2M systems powered by batteries, energy consumption is also a concern. However, the complexity of setting up a development and performance evaluation environment for an M2M system is quite challenging for developers. In this thesis, we propose to evaluate the performance of an M2M system by running the M2M software on virtual machines which collectively simulate the M2M system. The virtual machines are connected with our virtual network devices (VND), which model the network in the M2M system. This approach allows the developer to deploy unmodified software onto the simulation environment and use our tools to analyze the execution time, energy consumption, and network transactions on each virtual machine.
APA, Harvard, Vancouver, ISO, and other styles
33

Liu, Yu Chia, and 劉育嘉. "Research and Development of Virtual Axes Machine System." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/29841606687595974555.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Liao, Wei-Chen, and 廖唯辰. "Implementing a Virtual Machine based Fault Tolerance System." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/jgha99.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
105
In recent years, virtual machines have been widely deployed in the cloud. A physical machine may run multiple VMs hosting multiple services. Failures of a physical machine, including hardware malfunctions and power losses, could affect many services running on the machine at the same time. Thus, fault tolerance for services in the cloud is of paramount importance for cloud computing. However, fault tolerance support in commercial products like VMware is expensive and not affordable for many users. In this thesis, we implement CUJU, an open-source virtual-machine-based fault tolerance system built on QEMU 2.8 and Linux kernel 4.4.0, and describe how it works. We also evaluate the output latency and throughput overhead of this preliminary version of CUJU and compare its performance and functionality with the VMware Fault Tolerance product on the market.
APA, Harvard, Vancouver, ISO, and other styles
35

Jones, Stephen Todd. "Implicit operating system awareness in a virtual machine monitor." 2007. http://www.library.wisc.edu/databases/connect/dissertations.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Chung, Tzu-hai, and 鐘賜海. "Applying Embedded GRAFCET Virtual Machine System for Robot Development." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/86325182170505916372.

Full text
Abstract:
Master's thesis
National Central University
Institute of Computer Science and Information Engineering
100
Developing embedded systems in the ordinary way is demanding. From development and verification to deployment, a latent problem can later cause the system to hang. If serious bugs are found in deployed embedded systems, tools such as USB or JTAG adapters and other fixtures, along with human resources and time, are required to flash corrected firmware into the system. This thesis applies the schema of an embedded GRAFCET virtual machine as an embedded robot development platform. We implement the embedded part of this platform on an ARM Cortex-M3 based intelligent I/O controller called SIOC. A GUI application for the Android mobile platform was also implemented as the development front end. A developer can easily use this platform for rapid development of robot applications. Through the experiments we also observe that this development platform is more powerful than ordinary embedded development platforms, and that integrating cloud resources as part of an embedded system is possible with it.
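To illustrate what a GRAFCET virtual machine evaluates, the sketch below runs a two-step sequential function chart: a transition fires when its upstream steps are active and its condition input is true. The step names, conditions, and actions are hypothetical, and the code is only a minimal interpreter, not the SIOC firmware described in the thesis.

```python
# A two-step chart: S0 (motor off) -> t1/start_button -> S1 (motor on)
#                   S1 (motor on)  -> t2/sensor_done  -> S0 (motor off)
grafcet = {
    "t1": ({"S0"}, "start_button", {"S1"}),   # (upstream, condition, downstream)
    "t2": ({"S1"}, "sensor_done", {"S0"}),
}
actions = {"S0": "motor off", "S1": "motor on"}

def evaluate(active, inputs):
    """One evaluation cycle: fire every transition whose upstream steps are
    all active and whose condition input is true."""
    for upstream, condition, downstream in grafcet.values():
        if upstream <= active and inputs.get(condition, False):
            active = (active - upstream) | downstream
    for step in sorted(active):
        print("active:", step, "->", actions[step])
    return active

active = {"S0"}
active = evaluate(active, {"start_button": True})  # S0 -> S1, motor on
active = evaluate(active, {"sensor_done": True})   # S1 -> S0, motor off
```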
APA, Harvard, Vancouver, ISO, and other styles
37

Hsueh, Li-Wei, and 薛力偉. "Development of Graphics Modular System for Virtual Machine Tools." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/egmg7r.

Full text
Abstract:
Master's thesis
National Formosa University
Master's Program, Department of Mechanical and Computer-Aided Engineering
101
In recent years many parts have been milled directly on machine tools. To ensure accuracy and safety during processing, workpieces must be verified through simulation on virtual machine tools before machining. Simulation before machining is therefore very important, and it can also be used for teaching and training. This study develops 3D simulation software for virtual machine tools with Borland C++ Builder. Different types of virtual machine tools are constructed in the program. In addition, we develop modular construction procedures that let users correctly create virtual machine tools in a simple graphical way; during construction, one can learn the concepts and operation of machine tools. The virtual machine tools are completed according to the sequence of actions within the program. Applying the proposed graphics modular system to create virtual machine tools removes the inconvenience of the original construction approaches and broadens the applications of the simulation software, so users can quickly learn the system presented in this thesis. Keywords: virtual machine tool, graphics construction, modular.
APA, Harvard, Vancouver, ISO, and other styles
38

Liao, Szu-En, and 廖司恩. "Application of Virtual Metrology to Machine Tool Tuning System." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/98603462965107042066.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Computer Science and Engineering
105
The so-called "Industry 4.0" has made a great impact on manufacturing sectors, especially the machinery industry. Traditional machine tools such as lathes, milling machines, drilling machines, and grinding machines are usually equipped with CNC (Computer Numerical Control) units. Once objects are manufactured by the machine tools, physical metrology needs to be applied to ensure that the objects produced have the required precision. If any defect is found, then the parameters of the machine tools may need to be adjusted. However, with physical metrology many defective objects may already have been produced before the defect is discovered. In this thesis, we apply virtual metrology to analyze the data collected by CNC milling machines and use back-propagation neural networks to predict the errors of the produced objects. If the predicted errors are not in the allowable range, then the parameters of the machine tools are adjusted. With virtual metrology, physical metrology may be performed less frequently.
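The sketch below illustrates the kind of virtual-metrology loop described above: a neural-network regressor is trained on machining signals to predict the produced error, and a parameter adjustment is flagged when the prediction leaves the allowable range. The signal names, tolerance, and model size are invented for illustration; the thesis uses back-propagation networks on real CNC milling data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic "CNC signals": [spindle_load, feed_rate, vibration]; the target
# is the dimensional error that physical metrology would otherwise measure.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 0.02 * X[:, 0] - 0.01 * X[:, 2] + rng.normal(scale=0.002, size=200)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

TOLERANCE = 0.03  # assumed allowable error (mm)
predicted = model.predict(X[:1])[0]
if abs(predicted) > TOLERANCE:
    print("predicted error out of range -> adjust machine parameters")
else:
    print(f"predicted error {predicted:.4f} within tolerance, skip inspection")
```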
APA, Harvard, Vancouver, ISO, and other styles
39

Tzeng, Hung-Sheng, and 曾宏升. "A Virtual Orthodontic Setup System with Human-Machine Interface." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/44270247313649413281.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Mechanical Engineering
97
This system uses virtual force feedback to develop a virtual orthodontic setup (alignment) system. It can not only achieve the same effect as a traditional orthodontic setup but also reduce material and labour costs through the use of virtual reality. In addition, the system can produce the series of models needed for aligner orthodontics and save the time spent on manual tooth setups. Before treatment, users scan the patient's Tweed model with a 3D scanning system, save it as an STL (stereolithography) file, and then use this system to carry out the series of orthodontic alignment steps. The system segments the scanned Tweed model so that each tooth becomes an independent object, and it uses collision detection and force feedback during alignment, so the user can feel reaction forces and the alignment process becomes closer to reality. Useful data can also be combined to assist users during alignment. Compared with a traditional orthodontic setup, the system saves material; the force feedback makes the alignment closer to reality, collision detection prevents abnormal situations during alignment, and the provided arch form helps users complete the alignment accurately and reduces the risk of failure.
APA, Harvard, Vancouver, ISO, and other styles
40

Yang, Zong-Ru, and 楊宗儒. "Development of a Virtual Turn-mill Machine Tool Simulation System." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/64309267471509005678.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Graduate Institute of Mechanical and Precision Engineering
98
Recently the development of machine tools has focused on both five-axis and turn-mill integrated machine tools. A turn-mill machine integrates milling and turning operations into a single machine so as to shorten machining time. However, a turn-mill machine is much more complicated to learn and operate, so expensive maintenance and dangerous collisions may occur, creating a budget burden for a typical educational institution. This research adopted an industrial turn-mill machine tool and focused on creating a virtual counterpart consisting of a virtual controller and an interactive 3D machine. The C# programming language and the EON virtual reality environment were used together with 3D models generated in SolidWorks. The virtual controller provides NC program interpretation and a human-machine interface, while the interactive 3D functions give visual assistance in operating the turn-mill machine. The implemented system can assist learning and bridge the last mile to real operation, reducing collisions.
APA, Harvard, Vancouver, ISO, and other styles
41

Lin, Hsiao-Yung, and 林孝勇. "Improve the Virtual Machine's Performance with Cluster System." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/65848884933129882238.

Full text
Abstract:
Master's thesis
Tatung University
Department (Graduate Institute) of Computer Science and Engineering
101
Virtualization is a very important technology in the IaaS layer of cloud computing. Users consume computing resources as virtual machines (VMs) provided by the service provider, and a VM's performance depends on the physical machine hosting it. A VM should be given all required resources when it is created; if no more resources can be allocated, the VM has to be moved to another physical machine through live migration to obtain higher performance. The overhead of a VM live migration is 30 to 90 seconds, so if many virtual machines need migration, the total overhead becomes significant. This thesis presents how to use a cluster computing architecture to improve VM performance. The user's VM is created as the master node of a cluster; when the processing load of the user's VM exceeds a defined threshold, the virtual machine manager (VMM, or hypervisor) creates another virtual machine as a slave node in the cluster. This improves performance by about 15% compared with VM live migration.
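A minimal sketch of the scaling policy described above, assuming a hypothetical spawn_slave_vm hook for the hypervisor call: when the master VM's load stays above a threshold, a slave node is added to the cluster instead of live-migrating the VM.

```python
CPU_THRESHOLD = 80.0  # percent, assumed value

def scale_decision(cpu_samples, spawn_slave_vm):
    """If the master VM's average load exceeds the threshold, add a slave
    node to the cluster rather than live-migrating the VM (which the thesis
    reports costs 30 to 90 seconds)."""
    average = sum(cpu_samples) / len(cpu_samples)
    if average > CPU_THRESHOLD:
        spawn_slave_vm()
        return "scaled out with a slave node"
    return "no action"

# spawn_slave_vm is a stand-in for the hypervisor call that creates the VM.
print(scale_decision([85, 92, 88], lambda: print("slave VM requested")))
```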
APA, Harvard, Vancouver, ISO, and other styles
42

Shiu, Hwei-Dong, and 許暉東. "Virtual Reality Application - A Simulation System for Power Machine Operation." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/81948543078100202594.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Mechanical Engineering
86
In recent years, virtual reality (VR) has emerged as a distinctive computer technology applied in many areas such as engineering, science, business, and entertainment. In this dissertation we develop a simulation system for power machine operation that integrates a digital controller, a computer network, and virtual reality techniques. The system consists of three main parts: an operation platform, a monitor platform, and a VR platform. The operation platform gives the user a sense of presence, the monitor platform controls signal transmission, and the VR platform renders the overall simulation, combining virtual scenes with the operation of the virtual power machine. A forklift operation system is built as the simulation example. The virtual forklift is controlled through the operation platform, and by wearing a head-mounted display the user is immersed in the virtual environment. Finally, we discuss the incomplete parts and unsolved problems of this dissertation as future work.
APA, Harvard, Vancouver, ISO, and other styles
43

Liao, Wen-Chi, and 廖文棋. "A Study on Virtual Plant System Based on Machine Vision." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/58680938812347825405.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Bio-Industrial Mechatronics Engineering
89
A virtual plant system based on parametric and bracketed L-systems was developed in this research. The virtual plant system uses two layers of rules to describe the branching structures and the growth of plants. A 3D graphics library was used to present 3D images of virtual plants modeled with curly cylindrical stems and leaves composed of parametric surfaces and textures. In order to construct a virtual plant efficiently, a digitizing system employing stereo machine vision was also developed to acquire geometric information such as position, angle, and the length of internodes and leaves. The digitized coordinates were used to construct the structure of the virtual plant and were translated into the state string of our L-systems for further calculation of plant characteristics. The performance of the machine vision system was tested by measuring the characteristics of pepper, collard, cabbage, and Chinese cabbage seedlings. The experimental results showed that the RMSE of internode length was less than 1 mm and the RMSE of total leaf area was less than 2 cm². The machine vision system can also be used for nondestructive growth measurement of plants. Two sets of experiments were performed. In the first experiment, the characteristics of cabbage seedlings were measured continuously as they grew, and the growth curves of total leaf area, single leaf area, stem length, and other traits were determined and analyzed. In the second experiment, we measured the growth curves of pepper seedlings cultured in pots of different sizes to compare the effect of medium quantity on the growth of pepper seedlings. Besides the construction of virtual plants for vegetable seedlings in these experiments, virtual plants of seven other plants with different shapes and structures were also constructed to test and discuss the versatility and limitations of the virtual plant system developed in this research.
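For readers unfamiliar with bracketed L-systems, the sketch below shows the string-rewriting step on which such virtual plant models are built; the particular rules are a textbook example, not the ones developed in the thesis.

```python
rules = {"X": "F[+X][-X]FX", "F": "FF"}   # a classic bracketed rule set
axiom = "X"

def rewrite(state, generations):
    """Apply the production rules to every symbol, in parallel, per generation."""
    for _ in range(generations):
        state = "".join(rules.get(symbol, symbol) for symbol in state)
    return state

print(rewrite(axiom, 2))
# 'F' draws an internode, '[' / ']' push and pop a branch, '+' / '-' turn;
# a 3D interpreter maps such a string onto cylinders for stems and
# parametric surfaces for leaves.
```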
APA, Harvard, Vancouver, ISO, and other styles
44

Caamano, Paul. "Porting a Java™ virtual machine to an embedded system." Diss., 2000. http://catalog.hathitrust.org/api/volumes/oclc/47124203.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Lee, Yi-Chi, and 李育祺. "A Study of the Development of Component-based Virtual Machine System." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/09666173974795866665.

Full text
Abstract:
Master's thesis
Tunghai University
Department of Industrial Engineering
91
The integration of control networks has been an important issue in automated manufacturing systems. Today this integration is usually performed by developing a standardized communication protocol so that equipment can communicate with the same semantics and language. Semiconductor Equipment Control Standard-II (SECS-II) is one of the standards popularly applied on shop floors. However, workable systems based on SECS-II are difficult to develop. Fortunately, component technology can help resolve this difficulty: it embeds the advantages of objects and, in addition, can shorten the time span of system development, reduce the complexity of a system, and increase system flexibility and adaptability. By applying the concepts of component technology and SECS-II, this research develops a virtual machine, a conceptual framework able to accommodate message transformation between equipment. This research develops a complete system development methodology by which such virtual machines can be developed and implemented. A demonstrative system is also developed to illustrate the applicability of the methodology.
APA, Harvard, Vancouver, ISO, and other styles
46

Chang, Shun-Hsiung, and 張順雄. "A Self-Cleansing and Virtual-Machine based Defense-in-depth System." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/71729061135306142236.

Full text
Abstract:
Master's thesis
I-Shou University
Master's Program, Department of Information Engineering
98
This thesis proposes a new defense-in-depth information system. Systems of this kind usually consist of three subsystems: a firewall, an intrusion detection system (IDS), and an intrusion prevention system (IPS). A virtual-machine-based self-cleansing mechanism is proposed and integrated into each subsystem. We use Failure Mode and Effects Analysis (FMEA) and Overall Equipment Effectiveness (OEE) to analyze the defense performance against most intrusions. For botnet, DNS-attack, and Trojan intrusions, the FMEA risk number is reduced by at least a factor of five and the OEE value increases to 0.91. To verify the proposal's feasibility, a prototype system is implemented on a VMware host-OS computer; the switching time between servers varies between 15 and 30 seconds. In conclusion, the proposed information system is feasible and offers higher availability for non-transactional services.
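For reference, the two metrics used in the evaluation above are simple products, as sketched below; the ratings and rates shown are placeholders rather than the thesis' measured values.

```python
def rpn(severity, occurrence, detection):
    """FMEA risk priority number: S x O x D, each rated on a 1-10 scale."""
    return severity * occurrence * detection

def oee(availability, performance, quality):
    """Overall Equipment Effectiveness: A x P x Q, each a ratio in [0, 1]."""
    return availability * performance * quality

# Placeholder ratings: an intrusion scenario before and after self-cleansing.
print("RPN before / after:", rpn(8, 6, 5), rpn(8, 3, 2))
print("OEE:", round(oee(0.95, 0.97, 0.99), 2))
```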
APA, Harvard, Vancouver, ISO, and other styles
47

Hur, Jui-Kui, and 何瑞國. "Dynamic Resource Allocation of Multi-core System in Virtual Machine Environment." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/81308909053179692956.

Full text
Abstract:
Master's thesis
Tatung University
Department (Graduate Institute) of Computer Science and Engineering
99
With the rapid development of virtualization, attention has turned to resource management and system optimization; this work discusses how to solve the resource allocation and utilization problem. We present a dynamic adjustment mechanism for computing resources based on KVM (Kernel-based Virtual Machine). While the system is running, hot-plugging is used to adjust the number of CPUs dynamically to obtain better performance; when a task finishes, the number of CPUs can be decreased for a better resource utilization rate. Because user demands differ, performance suffers when several virtual machines request resources continuously, and we address this issue with static priorities. We show that this approach achieves a better resource utilization rate and matches user demand. In addition, more virtual machines can run on the same physical machines, reducing hardware purchase and maintenance costs.
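The CPU hot-plug idea can be illustrated with the libvirt Python bindings, as in the sketch below; the domain name is hypothetical, the guest must have a sufficiently large maximum vCPU count, and this only illustrates the mechanism, not the thesis' KVM-level implementation or its priority policy.

```python
import libvirt

def set_vcpus(domain, count):
    # Adjust the vCPU count of a running guest; the guest OS must support
    # CPU hot-plug and count must not exceed the domain's configured maximum.
    domain.setVcpusFlags(count, libvirt.VIR_DOMAIN_AFFECT_LIVE)

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("guest01")   # hypothetical domain name
set_vcpus(dom, 4)                    # scale up while a heavy task runs
set_vcpus(dom, 2)                    # scale back down when it finishes
conn.close()
```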
APA, Harvard, Vancouver, ISO, and other styles
48

Huang, Huei-Yun, and 黃惠筠. "Two-Layers High Availability Protection for Virtual Machine in Cloud System." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/hqvdm5.

Full text
Abstract:
Master's thesis
National Central University
Department of Computer Science and Information Engineering
106
In recent years, cloud computing technology has matured. Because of its elasticity and manageability, many enterprises choose to deploy their business on a virtualized cloud platform. Compared with building a traditional data center, a cloud platform makes it more convenient to dynamically adjust and effectively manage computing resources. The open-source cloud platform OpenStack keeps releasing improved versions and has gradually become one of the choices for enterprises building private clouds. Enterprises deploy their business on the cloud platform to serve their clients, and those services are provided by virtual machines, so high availability (HA) of the cloud platform is essential to keep those services running. However, OpenStack's HA mechanism only covers the services of the controller node and provides incomplete protection for virtual machines. This study therefore proposes a Software-Defined High Availability Cluster (SDHAC) mechanism to automatically detect HA virtual machines and recover them from failures. The detection mechanism uses the libvirt API to monitor virtual machine events in real time, and the recovery mechanism uses the OpenStack API to recover failed virtual machines, so the virtual machines keep running and users do not need to repair failures themselves. To avoid virtual machine abnormalities caused by hardware and software problems of the computing nodes, this study also uses IPMI (Intelligent Platform Management Interface) to detect and recover computing nodes and to read sensor information. If a node's sensor readings become critical, the system immediately live-migrates its virtual machines to prevent errors in the computing node from interrupting the virtual machine services. If a computing node fails unexpectedly, HA virtual machines are failed over to another healthy computing node in the HA cluster and the abnormal node is recovered, improving OpenStack's high availability for virtual machines.
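The detection side described above can be sketched with the libvirt Python bindings' lifecycle-event API, as below; the recovery call is only a placeholder for the OpenStack API workflow used by SDHAC, and the connection URI and error handling are simplified assumptions.

```python
import libvirt

def recover(vm_name):
    # Placeholder for the OpenStack-side recovery (e.g. rebuilding or
    # evacuating the instance); the actual SDHAC workflow is not shown here.
    print("recovering", vm_name)

def lifecycle_cb(conn, dom, event, detail, opaque):
    # Called by libvirt for every lifecycle event of every domain.
    if event == libvirt.VIR_DOMAIN_EVENT_STOPPED:
        recover(dom.name())

libvirt.virEventRegisterDefaultImpl()
conn = libvirt.openReadOnly("qemu:///system")
conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                            lifecycle_cb, None)
while True:
    libvirt.virEventRunDefaultImpl()   # dispatch events until interrupted
```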
APA, Harvard, Vancouver, ISO, and other styles
49

Hung-Ta Lin and 林宏達. "Research and Analysis of A Virtual Machine Monitor for Embedded System." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/38133602366575876989.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering
102
With the arrival of mobile computing, the application and baseband subsystems of a smartphone can be deployed on different operating systems and software stacks through isolation. Isolation simplifies the integration of these two smartphone subsystems while enhancing reliability and safety. To integrate applications and baseband effectively, Open Kernel Labs (OK Labs) launched a mobile virtualization solution called the OKL4 Microvisor. In this thesis, we analyze the system architecture of the OKL4 Microvisor, the relationships among its system calls, and how OKL4 connects to external hardware. On the software side, we use QEMU, a full-system simulator, to simulate the target platform and run the OKL4 Microvisor on the virtual platform. We also write test applications that use multithreaded parallel programming with the IPC system call to transmit data, and modify the OKL4 source code to add a timer for timing measurements. We then build single or multiple cells on OKL4, perform IPC transmission through the test applications, and analyze the time spent on IPC within a cell and between cells. For porting to the development board, the OKL4 image and the U-Boot image are combined into a single image file, which is finally flashed into the platform's flash memory to verify the results.
APA, Harvard, Vancouver, ISO, and other styles
50

Tsou, Chen-Ying, and 鄒震贏. "Applying OpenGL for Development of Five-axis Virtual Machine Tool System." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/03070587205072046814.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Master's and Doctoral Program, Department of Mechanical Engineering
94
Virtual reality technology has been applied in various industries, such as the semiconductor, aerospace, and automotive industries. A virtual multi-axis machine tool can be used to verify the machining process and evaluate manufacturability. However, virtual machine tools based on commercial software are hard to extend because the library functions cannot meet the developer's demands; the objective of this thesis is therefore to develop a virtual environment based on the OpenGL library. Modified D-H notation is used to represent the relative position and motion direction of the machine tool axes for universal assembly of a five-axis virtual machine tool. During assembly, the system assigns the coordinate frames of the machine tool axes according to the Modified D-H notation, which helps users assemble the five-axis virtual machine tool efficiently and effectively. A user interface is developed and the system display is based on OpenGL; with this interface, users can select a configuration and assemble the virtual machine tool. Finally, a five-axis machine tool is taken as an example to perform the assembly and simulate the machine motion. The motion simulation of machining an industrial blade was verified with VERICUT software.
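For context, the Modified (Craig) D-H convention mentioned above chains one homogeneous transform per axis; a minimal sketch of that transform is given below, with made-up parameter values for the example.

```python
import numpy as np

def mdh_transform(alpha, a, theta, d):
    """Homogeneous transform between consecutive axes in the modified
    (Craig) D-H convention: Rx(alpha) Tx(a) Rz(theta) Tz(d)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,  a],
        [st * ca,  ct * ca, -sa,  -sa * d],
        [st * sa,  ct * sa,  ca,   ca * d],
        [0.0,      0.0,      0.0,  1.0],
    ])

# Chain two axes with made-up parameters (angles in radians, lengths in metres).
T = mdh_transform(0.0, 0.0, np.pi / 6, 0.2) @ mdh_transform(np.pi / 2, 0.1, 0.0, 0.0)
print(np.round(T, 3))
```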
APA, Harvard, Vancouver, ISO, and other styles
