Dissertations / Theses on the topic 'Virtualization technologies and implementation'

Listed below are the top 50 dissertations / theses for research on the topic 'Virtualization technologies and implementation.'

1

Pham, Duy M. "Performance comparison between x86 virtualization technologies." Thesis, California State University, Long Beach, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1528024.

Abstract:

In computing, virtualization provides the capability to serve users with different resource requirements and operating system platform needs on a single host computer system. The potential benefits of virtualization include efficient resource utilization, flexible service offerings, and scalable system planning and expansion, all desirable whether for enterprise-level data centers, personal computing, or anything in between. These benefits, however, come at the cost of some performance degradation. This thesis compares the performance costs of two of the most popular and widely used x86 CPU-based virtualization technologies in personal computing today. The results should help users determine which virtualization technology to adopt for their particular computing needs.
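Comparisons like this one typically rest on microbenchmarks run once on bare metal and once inside each guest, with the cost reported as relative slowdown. A minimal sketch of such a harness (the workload and the example timings are illustrative assumptions, not figures from the thesis):

```python
import time

def cpu_workload(n=200_000):
    # Fixed integer workload: sum of squares, deterministic across runs.
    return sum(i * i for i in range(n))

def time_workload(repeats=5):
    # Best-of-N wall-clock timing reduces scheduler and cache noise.
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        cpu_workload()
        best = min(best, time.perf_counter() - start)
    return best

def overhead_pct(native_s, virtualized_s):
    # Relative slowdown of the virtualized run versus bare metal.
    return 100.0 * (virtualized_s - native_s) / native_s

# Hypothetical timings: a guest run 8% slower than native.
print(round(overhead_pct(1.00, 1.08), 1))  # → 8.0
```

The same `time_workload` call would be executed natively and inside each guest, and only the two resulting timings fed to `overhead_pct`.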

APA, Harvard, Vancouver, ISO, and other styles
2

Johansson, Marcus, and Lukas Olsson. "Comparative evaluation of virtualization technologies in the cloud." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-49242.

Abstract:
The cloud has over the years become a staple of the IT industry, not only for storage but for services, platforms, and infrastructures. A key component of the cloud is virtualization and the fluidity it makes possible, allowing resources to be utilized more efficiently and services to be relocated more easily when needed. Virtual machine technology, in which a hypervisor manages several guest systems, has been the traditional method for achieving this virtualization, but container technology, a lightweight virtualization method running directly on the host without a classic hypervisor, has been making headway in recent years. This report investigates the differences between VMs (virtual machines) and containers, comparing the two in relevant areas. The software chosen for this comparison is KVM as the VM hypervisor and Docker as the container platform, both run on Linux as the underlying host system. The work compares efficiency in common use areas through experiments and evaluates differences in design through a study of relevant literature. The results are then discussed and weighed to provide a conclusion. They show that Docker has the potential to take over the role of main virtualization technology in the coming years, provided some of its current shortcomings are addressed.
3

Beserra, David Willians dos Santos Cavalcanti. "Performance analysis of virtualization technologies in high performance computing enviroments." Universidade Federal de Sergipe, 2016. https://ri.ufs.br/handle/riufs/3382.

Abstract:
Funded by CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior).
High Performance Computing (HPC) aggregates computing power to solve large and complex problems in different knowledge areas, such as science and engineering, ranging from 3D real-time medical imaging to simulation of the universe. Nowadays, HPC users can turn to virtualized Cloud infrastructures as a low-cost alternative for deploying their applications. Although Cloud infrastructures can be used as HPC platforms, many questions about the overheads introduced by virtualization remain unanswered. In this work, we analyze the performance of several virtualization solutions - Linux Containers (LXC), Docker, VirtualBox, and KVM - under HPC activities. In our experiments, we evaluate CPU performance, communication performance (physical network and internal buses), and disk I/O. Results show that each virtualization technology impacts performance differently, depending on the type of hardware resource used by the HPC application and on the resource-sharing conditions adopted.
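The disk I/O side of such an evaluation boils down to timing sequential transfers from inside each environment. A self-contained sketch of the measurement itself (the block and file sizes are arbitrary illustrative choices, not the parameters used in the thesis):

```python
import os
import tempfile
import time

def sequential_write_throughput(total_mb=16, block_kb=256):
    """Write total_mb of data in block_kb chunks and return MB/s."""
    block = os.urandom(block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # force data to disk, not just the page cache
        elapsed = time.perf_counter() - start
        return total_mb / elapsed
    finally:
        os.remove(path)

if __name__ == "__main__":
    print(f"{sequential_write_throughput():.1f} MB/s")
```

Run natively, inside a container, and inside a VM, the three throughput figures expose the per-technology I/O overhead directly.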
4

Chatterjee, Shubhajeet. "On Enabling Virtualization and Millimeter Wave Technologies in Cellular Networks." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/100596.

Abstract:
Wireless network virtualization (WNV) and millimeter wave (mmW) communications are emerging as two key technologies for cellular networks. Virtualization in cellular networks enables wireless services to be decoupled from network resources (e.g., infrastructure and spectrum) so that multiple virtual networks can be built using a shared pool of network resources. At the same time, utilization of the large bandwidth available in the mmW frequency band would help to overcome ongoing spectrum scarcity issues. In this context, this dissertation presents efficient frameworks for building virtual networks in the sub-6 GHz and mmW bands. To develop the frameworks, we first derive a closed-form expression for the downlink rate coverage probability of a typical sub-6 GHz cellular network with known base station (BS) locations and stochastic user equipment (UE) locations and channel conditions. Then, using the closed-form expression, we develop a sub-6 GHz virtual resource allocation framework that aggregates, slices, and allocates the sub-6 GHz network resources to the virtual networks in such a way that the virtual networks' sub-6 GHz downlink coverage and rate demands are probabilistically satisfied while resource over-provisioning is minimized in the presence of uncertainty in UE locations and channel conditions. Furthermore, considering the possibility that sufficient sub-6 GHz resources may not be available to satisfy the rate coverage demands of all virtual networks, we design a prioritized sub-6 GHz virtual resource allocation scheme where virtual networks are built sequentially based on their given priorities. To this end, we develop static frameworks that allocate sub-6 GHz resources in the presence of uncertainty in UE locations and channel conditions, i.e., before the UE locations and channel conditions are revealed.
As a result, when a slice of a BS serves its associated UEs, it can be over-satisfied (i.e., resources are left after satisfying the rate demands of all UEs) or under-satisfied (i.e., resources are insufficient to satisfy the rate demands of all UEs). On the other hand, it is extremely challenging to execute the entire virtual resource allocation process in real time due to the small transmission time intervals (TTIs) of cellular technologies. Taking this into consideration, we develop an efficient scheme that performs the virtual resource allocation in two phases: a virtual network deployment phase (static) and a statistical multiplexing phase (adaptive). In the virtual network deployment phase, sub-6 GHz resources are aggregated, sliced, and allocated to the virtual networks in the presence of uncertainty in UE locations and channel conditions, without knowing which realization of UE locations and channel conditions will occur. Once the virtual networks are deployed, each of the aggregated BSs performs statistical multiplexing, i.e., allocates excess resources from the over-satisfied slices to the under-satisfied slices, according to the realized channel conditions of associated UEs. In this way, we further improve sub-6 GHz resource utilization. Next, we turn our focus to the mmW virtual resource allocation process. mmW systems typically use beamforming techniques to compensate for the high pathloss. Directional communication in the presence of uncertainty in UE locations and channel conditions makes maintaining connectivity and performing initial access and cell discovery challenging. To address these challenges, we develop an efficient framework for mmW virtual network deployment and UE assignment. The deployment decisions (i.e., the required set of mmW BSs and their optimal beam directions) are taken in the presence of uncertainty in UE locations and channel conditions, i.e., before the UE locations and channel conditions are revealed.
Once the virtual networks are deployed, an optimal mmW link (or a fallback sub-6 GHz link) is assigned to each UE according to the realized UE locations and channel conditions. Our numerical results demonstrate the gains brought by our proposed scheme in terms of minimizing resource over-provisioning while probabilistically satisfying virtual networks' sub-6 GHz and mmW demands in the presence of uncertainty in UE locations and channel conditions.
Doctor of Philosophy
In cellular networks, mobile network operators (MNOs) have been sharing resources (e.g., infrastructure and spectrum) as a solution to extend coverage, increase capacity, and decrease expenditures. Recently, with the advent of 5G wireless services with enormous coverage and capacity demands, and with the potential revenue losses of over-provisioning to serve peak demands, the motivation for sharing and virtualization has significantly increased in cellular networks. Through wireless network virtualization (WNV), wireless services can be decoupled from network resources so that various services can efficiently share them. At the same time, utilization of the large bandwidth available in the millimeter wave (mmW) frequency band would help to overcome ongoing spectrum scarcity issues. However, due to the inherent features of cellular networks, i.e., the uncertainty in user equipment (UE) locations and channel conditions, enabling WNV and mmW communications in cellular networks is a challenging task. Specifically, we need to build the virtual networks in such a way that UE demands are satisfied, isolation among the virtual networks is maintained, and resource over-provisioning is minimized in the presence of uncertainty in UE locations and channel conditions. In addition, mmW channels experience higher attenuation and blockage than conventional sub-6 GHz channels due to their small wavelengths. To compensate for the high pathloss, mmW systems typically use beamforming techniques. Directional communication in the presence of uncertainty in UE locations and channel conditions makes maintaining connectivity and performing initial access and cell discovery challenging. Our goal is to address these challenges and develop optimization frameworks to efficiently enable virtualization and mmW technologies in cellular networks.
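The idea of probabilistically satisfying a rate demand while minimizing over-provisioning can be illustrated as a chance-constrained allocation found by Monte Carlo search: pick the smallest number of resource blocks whose empirical success probability meets the coverage target. The channel model and every number below are invented for illustration and stand in for the dissertation's closed-form analysis:

```python
import math
import random

def rate_per_block(gain_db, bandwidth_hz=180e3):
    # Shannon rate of one resource block at the drawn SNR (in dB).
    return bandwidth_hz * math.log2(1.0 + 10 ** (gain_db / 10.0))

def min_blocks(demand_bps, coverage_target=0.95, trials=2000, seed=1):
    # Smallest block count whose empirical success probability
    # over the random channel draws meets the coverage target.
    rng = random.Random(seed)
    rates = [rate_per_block(rng.gauss(10.0, 4.0)) for _ in range(trials)]
    for blocks in range(1, 101):
        ok = sum(blocks * r >= demand_bps for r in rates)
        if ok / trials >= coverage_target:
            return blocks
    return None  # demand not satisfiable within 100 blocks

print(min_blocks(demand_bps=5e6))
```

Allocating for the 95th percentile rather than the worst case is exactly what keeps over-provisioning low while still meeting the demand probabilistically.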
5

Fantini, Alessandro. "Virtualization technologies from hypervisors to containers: overview, security considerations, and performance comparisons." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/12846/.

Abstract:
In an era when almost everyone uses cloud-based applications daily without even noticing, and IT organizations are investing substantial resources in this field, not everyone knows that cloud computing would not have been possible without virtualization, a software technique whose roots reach back to the early 1960s. The purpose of this thesis is to provide an overview of virtualization technologies, from hardware virtualization and hypervisors to container-based operating-system-level virtualization, to analyze their architectures, and to make security considerations. Moreover, since container technologies build on specific containment features of the Linux kernel, some sections introduce and analyze those features individually, at an appropriate level of detail. The last part of this work is devoted to a quantitative comparison of the performance of container-based technologies. In particular, LXC and Docker are compared on five real-life tests, and their performance is examined side by side to highlight differences in the amount of overhead they introduce.
6

Wagner, Ralf. "Integration management : a virtualization architecture for adapter technologies." Advisor: Bernhard Mitschang. Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2015. http://d-nb.info/106910650X/34.

7

Cilloni, Marco. "Design and Implementation of an ETSI Network Function Virtualization-compliant Container Orchestrator." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13373/.

Abstract:
Network Function Virtualisation (NFV) is the main force behind the migration of network providers' infrastructures toward distributed cloud systems, offering an innovative approach to the design of telecommunication network architectures that fully decouples the services offered by the network from the physical devices and appliances on which they reside, through their complete virtualization. The use of VNFs, logical blocks that represent the functionality and services provided by the infrastructure as virtual elements, allows Network Functions to be easily relocated to data centers close to the end users of the services they offer, avoiding the heavy personnel and equipment costs involved with physical devices. ETSI NFV provides guidelines and architectures to support the management and orchestration (MANO) of virtualized appliances, exploiting the infrastructures provided by Virtual Infrastructure Managers (VIMs). This thesis examined how an existing NFV framework such as Open Baton can be extended to take full advantage of the capabilities offered by containerization systems such as Docker, implementing the components and concepts needed to offer a highly scalable, cloud-ready NFV infrastructure (NFVI). The Docker-based VIM prototype and the related MANO components developed during this thesis were designed to be as independent of each other as possible, to keep the system reusable and open to future extensions. The analysis performed on the Docker-based NFV container orchestration solution created during the implementation step of the thesis showed very positive results regarding the memory and storage overhead of container-based VNF instances.
8

Cardace, Antonio. "UMView, a Userspace Hypervisor Implementation." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13184/.

Abstract:
UMView is a partial virtual machine and userspace hypervisor capable of intercepting system calls and modifying their behavior according to the calling process's view. To provide flexibility and modularity, UMView supports modules loadable at runtime through a plugin architecture. UMView is, in particular, an implementation of the View-OS concept, which rejects the global-view assumption so deeply entrenched in the world of operating systems and virtualization.
9

Weibel, Nicolas. "Industrial implementation of novel composite material technologies /." Lausanne, 2002. http://library.epfl.ch/theses/?nr=2567.

Abstract:
Thesis in technical sciences, EPF Lausanne in collaboration with IMD Lausanne, no. 2567 (2002), Faculty of Engineering Sciences and Techniques, Materials Section. Directors: J.-A. E. Månson (EPFL), T. E. Vollmann (IMD); examiners: D. Bonner, S. Catsicas, P. Lorange.
10

Thelander, Jens, and Victor Pettersson. "Implementation of Procurement 4.0 Technologies : A systematic content analysis on implementation factors." Thesis, Linnéuniversitetet, Institutionen för ekonomistyrning och logistik (ELO), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-104327.

Abstract:
Procurement 4.0 is the integration of Industry 4.0-related technologies into the procurement process, automating tasks and making information gathering and communication more effective while establishing interconnected networks. After noticing a lack of studies on the implementation of Procurement 4.0, the purpose of this article became to close the theoretical gap and extend understanding of the implementation of Procurement 4.0. The study is a systematic content analysis grounded in a systematic literature review. Two areas of interest were studied and evaluated: identified effects on the procurement process of implementing Procurement 4.0, and identified Industry 4.0 implementation factors. Altogether, 14 studies were analyzed, evaluated, and then compared to derive Procurement 4.0 implementation factors. This study identified the need for managerial support and interaction with employees as an important part of the implementation phase. A few factors are also unaffected, regardless of whether the implementation concerns Industry 4.0 or Procurement 4.0.
11

Percival, Jennifer. "Complementarities in the Implementation of Advanced Manufacturing Technologies." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/843.

Abstract:
Within the last decade, the importance of flexibility and efficiency has increased in the manufacturing sector. The rising level of uncertainty in consumer preferences has caused many organizations to search aggressively for cost reductions and other sources of competitive advantage, resulting in increased implementation of advanced manufacturing technologies (AMT). A number of studies propose that the implementation of AMT must be accompanied by a shift in supporting organizational practices to realize the greatest performance enhancement. As yet, the complementarities between organizational policies and AMT have not been determined. Using assumptions about complementarities in manufacturing made by Milgrom and Roberts (1995) in conjunction with a comprehensive AMT survey (Survey of Advanced Technology in Canadian Manufacturing, 1998), a model of manufacturing plant productivity was developed. Constrained regression analysis reveals that the use of AMT, as well as various organizational policies, depends both on the size of the plant and on the industry in which it operates. Factor analysis of the more than 70 variables found that regardless of the nature of the variable (business strategy, source of implementation support, AMT, etc.), all design elements factored together. The factor analysis also shows that large firms that use AMT also have many design technologies. This result differs for smaller firms, where the use of AMT is highly correlated with perceived benefits of the technology and a large number of sources of implementation support. The analysis also supports the distinction between high-technology (highly innovative) and low-technology (low levels of innovation) industries, since the electronics, chemicals, and automotive industries have a large percentage of plants with all of the model factors, whereas the textile, non-metal, and lumber industries have very few. The results show that there are important differences that should be considered when creating policies to encourage innovation and the use of AMT across the various manufacturing industries and plant sizes.
12

Yuen, Yee Shan Cherry. "Congestion management and its implementation using information technologies." Thesis, University of Strathclyde, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248869.

13

Hsieh, Edward F. (Edward Fang). "Investigating successful implementation of technologies in Developing nations." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/32887.

Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2005.
Includes bibliographical references.
A study was performed to determine possible factors that contribute to successful implementation of new technologies in developing nations. Engineers and other inventors have devoted great effort to Appropriate Technology design over the last two decades, but few comprehensive case studies currently exist examining the factors that lead to technology success. Existing studies of appropriate technology were summarized and a quantitative model was created to tabulate the data. Local maintenance, local production, and local need for a technology were found to be the most important factors for sustainable technology implementation. The model was then tested with a current Appropriate Technology project to examine the relevance of its results. Overall, the model proved applicable, though further studies are suggested to refine it.
by Edward F. Hsieh.
S.B.
14

Turkyilmaz, Ogun. "Emerging 3D technologies for efficient implementation of FPGAs." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT091/document.

Abstract:
The ever-increasing complexity of digital systems has made reconfigurable architectures such as Field Programmable Gate Arrays (FPGAs) highly sought after because of their in-field (re)programmability and low non-recurring engineering (NRE) costs. Reconfigurability is achieved with a large number of configuration memory points, which results in extreme application flexibility and, at the same time, significant overheads in area, performance, and power compared to an Application Specific Integrated Circuit (ASIC) with the same functionality. In this thesis, we propose FPGA designs based on several 3D technologies for more efficient FPGA circuits. First, we integrate resistive-memory-based blocks to reduce routing wirelength and widen FPGA employability for low-power applications with non-volatile operation. Among the many available technologies, we focus on Oxide Resistive Memory (OxRRAM) and Conductive Bridge Resistive Memory (CBRAM) devices, assessing the unique properties of these technologies in circuit design. As another solution, we design a new FPGA with 3D monolithic integration (3DMI), utilizing high-density interconnects. Starting from two layers with a logic-on-memory approach, we examine various partitioning schemes with an increasing number of integrated active layers to reduce routing complexity and increase logic density. Based on the obtained results, we demonstrate that multi-tier 3DMI is a strong alternative for future technology scaling.
15

Hsieh, Cheng-Liang. "Design and Implementation of Scalable High-Performance Network Functions." OpenSIUC, 2017. https://opensiuc.lib.siu.edu/dissertations/1416.

Abstract:
Service Function Chaining (SFC) enriches network functionality to meet the increasing demand for value-added services. Leveraging SDN and NFV for SFC makes it possible to follow demand fluctuations and construct dynamic SFCs. However, the integration of SDN with NFV requires packet header modifications, generates excessive network traffic, and introduces additional I/O overheads for packet processing. These overheads reduce system performance, scalability, and agility. To improve performance, a co-optimized solution is proposed for implementing software-based network functions. To improve scalability, a many-field packet classification algorithm is proposed to support more complex rulesets. To improve agility, a network-function-enabled switch is proposed to lower the network function content-switching time. The experimental results show that the performance of a network function is improved 8x by leveraging the GPU as a parallel computation platform. Moreover, the matching speed for steering network traffic with many-field rulesets is improved 4x with the proposed many-field packet classification algorithm. Finally, the proposed SFC implementation using SDN and NFV improves system bandwidth 5x over the native solution while maintaining the content-switching time.
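Many-field packet classification maps a packet to the action of the first rule whose per-field ranges all match. The dissertation's contribution is a faster matching algorithm; the sketch below is only the naive linear-search baseline such work is compared against, with an invented 5-tuple-style ruleset:

```python
# Each rule: a dict of field -> (lo, hi) inclusive range, plus an action.
# Rules are in priority order; fields absent from a rule are wildcards.
RULES = [
    ({"src_port": (0, 1023), "proto": (6, 6)}, "inspect"),  # TCP, well-known src ports
    ({"dst_port": (53, 53), "proto": (17, 17)}, "dns_nf"),  # UDP DNS
    ({}, "default"),                                        # match-all fallback
]

def classify(packet, rules=RULES):
    # First-match-wins linear search over the ruleset.
    for fields, action in rules:
        if all(lo <= packet.get(f, -1) <= hi for f, (lo, hi) in fields.items()):
            return action
    return None

print(classify({"proto": 17, "src_port": 5353, "dst_port": 53}))  # → dns_nf
```

Linear search costs O(rules × fields) per packet, which is exactly why decision-tree and decomposition-based classifiers (and GPU parallelism, as in this work) are needed for large many-field rulesets.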
16

Özcan, Mehmet Batuhan, and Gabriel Iro. "PARAVIRTUALIZATION IMPLEMENTATION IN UBUNTU WITH XEN HYPERVISOR." Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2011.

Abstract:
Growing needs for efficiency, cost reduction, less disposal of outdated electronic components, scalable components, and reduced health effects from daily electronics usage have pushed manufacturers toward virtualization. The ability to share resources, use less workspace, and reduce purchase and manufacturing costs are all achievements of virtualization techniques. For some people, setting up a computer to run several virtual machines at the same time can be difficult, especially without prior basic knowledge of working in a terminal environment, and hiring skilled personnel to do the job can be expensive. The motivation for this thesis is to help people with little or no such knowledge set up a virtual machine running the Ubuntu operating system on the Xen hypervisor.
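A paravirtualized guest under Xen is defined by a small plain-text configuration file consumed by `xl create`. As a sketch of the kind of setup such a guide walks through, the function below emits a minimal PV domU config; the kernel and disk-image paths, bridge name, and sizing are placeholders to adapt, not values taken from the thesis:

```python
def xen_pv_config(name, memory_mb=1024, vcpus=1,
                  kernel="/boot/vmlinuz-guest",        # placeholder path
                  disk_img="/var/lib/xen/guest.img"):  # placeholder path
    # Minimal paravirtualized (PV) domU config in xl's key = "value" format.
    lines = [
        f'name = "{name}"',
        'type = "pv"',
        f'kernel = "{kernel}"',
        f'memory = {memory_mb}',
        f'vcpus = {vcpus}',
        f'disk = ["file:{disk_img},xvda,w"]',
        'vif = ["bridge=xenbr0"]',
    ]
    return "\n".join(lines) + "\n"

print(xen_pv_config("ubuntu-pv"), end="")
# With Xen installed, the guest would be booted with:  xl create guest.cfg
```

Writing the returned string to `guest.cfg` and passing it to `xl create` is the whole deployment step; everything else is preparing the guest kernel and disk image.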
17

Ram, Mohan Nethra Mettuchetty. "Emerging technologies in architectural visualization implementation strategies for practice /." Master's thesis, Mississippi State : Mississippi State University, 2003. http://library.msstate.edu/etd/show.asp?etd=etd-04072003-164447.

18

Russell, Mary Grace. "Invention and implementation of technologies for continuous flow synthesis." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/127715.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, May, 2019
Cataloged from the PDF of thesis.
Includes bibliographical references.
In this thesis, I have optimized a synthesis of rufinamide an important epilepsy medication. This convergent synthesis generates two reactive intermediates in situ (aryl azide and propiolamide) and then combines them in a regioselective click reaction utilizing copper tubing as the catalyst. Next, I have optimized a synthesis of nicardipine which is prescribed to treat high blood pressure. The nature of the project required that the final product be relatively pure (>90 %) so that the final product could be crystallized from the reaction mixture. Nicardipine was synthesized in three steps, but in two flow reactors where one of the reactors induced two steps. The reaction mixture was then purified using two in-line aqueous extrations. First, the reaction stream was washed with HCl to produce the salt of nicardipine and wash away polar compounds. Then, the product is extracted into the aqueous layer by using a 1:1 water DMSO mixture.
Finally, the synthesis's scale was increased and run in the system that was created in collaboration with the Jensen lab and Myerson lab. Next, a fully continuous synthesis of linezolid was optimized and run. The synthesis targeted the challenging intermediate amide epoxide that rapidly cyclizes into unwanted oxazolines. We were able to circumvent this side reactivity by masking the nucleophilic amide N-H by quenching the resulting nitrillium after Ritter type reaction with 2-propanol to produce the imidate. After accessing the masked amide epoxide, linezolid was produced by nucleophilic addition to the epoxide with the aniline made from a nucleophilic aromatic substitution (SNAr) reduction sequence. Finally, late stage oxazolidinone formation produces linezolid in a 73% yield in 27 minutes longest linear sequence. Next, I contributed to a system that automatically optimized and analyzed organic reactions in continuous flow.
This system, built in collaboration with the Jensen lab, fully integrated software, hardware that controlled the continuous platform, and in-line analytics. Once the chemist had defined the desired chemical space, the system could optimize a reaction without any manual intervention. Finally, I developed a monolithic cellular solid made of functionalized silica for catalyst support. This system could solve some of the problems associated with packed-bed reactors, including catalyst deactivation due to channeling or clogging of the reactor. This type of catalyst support could be applicable to a large number of catalysts by attaching the catalyst to silane side chains with appended functionality. Portions of this thesis have been published in the following articles co-written by the author and have been reprinted and/or adapted with permission from their respective publishers. Zhang, P.; Russell, M. G.; Jamison, T. F. "Continuous Flow Total Synthesis of Rufinamide," Org. Process Res. Dev.
2014, 1567-1570. © 2014 American Chemical Society. MGR ran the optimization of the synthesis as well as the isolation and characterization of the final product. PZ wrote the manuscript and validated the results under TFJ's guidance. Zhang, P.; Weeranoppanant, N.; Thomas, D. A.; Tahara, K.; Stelzer, T.; Russell, M. G.; O'Mahony, M.; Myerson, A. S.; Lin, H.; Kelly, L. P.; Jensen, K. F.; Jamison, T. F.; Dai, C.; Cui, Y.; Briggs, N.; Beingessner, R. L.; Adamo, A. "Advanced Continuous Flow Platform for On-Demand Pharmaceutical Manufacturing," Chem. Eur. J. 2018, 24, 2776-2784. DOI: 10.1002/chem.201706004. © 2018 John Wiley & Sons, Inc. MGR optimized the synthesis of nicardipine as well as ran the synthesis in the synthesis frame. PZ, HL, LPK, CD, and RLB all worked to develop chemistry for the syntheses of the different drug targets. NW, DAT, and AA worked to develop the upstream synthesis unit as well as necessary undeveloped components.
KT, TS, MM, YC, and NB worked to develop the continuous recrystallization unit and purified the drug targets. TFJ, KFJ, and ASM provided instrumental guidance to the teams. Russell, M. G.; Jamison, T. F. "Seven-Step Continuous Flow Synthesis of Linezolid Without Intermediate Purification," Angew. Chem. Int. Ed. 2019, 58, 7678-7681. DOI: 10.1002/anie.201901814. © 2019 John Wiley & Sons, Inc. All synthetic work was carried out by MGR under TFJ's guidance. Bédard, A.-C.; Adamo, A.; Aroh, K. C.; Russell, M. G.; Bedermann, A. A.; Torosian, J.; Yue, B.; Jensen, K. F.; Jamison, T. F. "Reconfigurable System for Automated Optimization of Diverse Chemical Reactions," Science 2018, 361, 1220-1225. © 2018 American Association for the Advancement of Science. Reprinted with permission from AAAS. MGR and ACB worked together to run the various optimizations as well as the substrate scopes. AAB developed initial conditions for several of the reactions. AA developed the system with JT and BY's assistance.
KCA integrated the system with the software as well as modeled the optimization protocols. KFJ and TFJ provided instrumental guidance to the teams. Leibfarth, F. A.; Russell, M. G.; Langley, D. M.; Seo, H.; Kelly, L. P.; Carney, D. W.; Sello, J. K.; Jamison, T. F. "Continuous-Flow Chemistry in Undergraduate Education: Sustainable Conversion of Reclaimed Vegetable Oil into Biodiesel," J. Chem. Ed. 2018, 95, 1371-1375. DOI: 10.1021/acs.jchemed.7b00719. © 2018 American Chemical Society. MGR and DML developed and optimized the chemistry. FAL wrote the manuscript and the laboratory experiment. MGR, HS, and LPK taught the experiment. DWC provided assistance. JKS and TFJ provided guidance.
by Mary Grace Russell.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Chemistry
APA, Harvard, Vancouver, ISO, and other styles
19

Drayson, P. R. "The implementation of industrial robots in a manufacturing organisation." Thesis, Aston University, 1986. http://publications.aston.ac.uk/15104/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

LeBlanc, Robert-Lee Daniel. "Analysis of Data Center Network Convergence Technologies." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4150.

Full text
Abstract:
The networks in traditional data centers have remained unchanged for decades and have grown large, complex and costly. Many data centers have a general purpose Ethernet network and one or more additional specialized networks for storage or high performance low latency applications. Network convergence promises to lower the cost and complexity of the data center network by virtualizing the different networks onto a single wire. There is little evidence, aside from vendors' claims, that network convergence actually achieves these goals. This work defines a framework for creating a series of unbiased tests to validate converged technologies and compare them to traditional configurations. A case study involving two different converged network technologies was developed to validate the defined methodology and framework. The study also shows that these two technologies do indeed perform similarly to a non-virtualized network; reduce cost, cabling, and power consumption; and are easy to operate.
APA, Harvard, Vancouver, ISO, and other styles
21

Zhang, Jie Zhang. "Designing and Building Efficient HPC Cloud with Modern Networking Technologies on Heterogeneous HPC Clusters." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1532737201524604.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Nikolaev, Ruslan. "Design and Implementation of the VirtuOS Operating System." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/24964.

Full text
Abstract:
Most operating systems provide protection and isolation to user processes, but not to critical system components such as device drivers or other systems code. Consequently, failures in these components often lead to system failures. VirtuOS is an operating system that exploits a new method of decomposition to protect against such failures. VirtuOS exploits virtualization to isolate and protect vertical slices of existing OS kernels in separate service domains. Each service domain represents a partition of an existing kernel, which implements a subset of that kernel's functionality. Service domains directly service system calls from user processes. VirtuOS exploits an exceptionless model, avoiding the cost of a system call trap in many cases. We illustrate how to apply exceptionless system calls across virtualized domains. To demonstrate the viability of VirtuOS's approach, we implemented a prototype based on the Linux kernel and Xen hypervisor. We created and evaluated a network and a storage service domain. Our prototype retains compatibility with existing applications, can survive the failure of individual service domains while outperforming alternative approaches such as isolated driver domains, and even exceeds the performance of native Linux for some multithreaded workloads. The evaluation of VirtuOS revealed costs due to decomposition, memory management, and communication, which necessitated a fine-grained analysis to understand their impact on the system's performance. The interaction of virtual machines with multiple underlying software and hardware layers in a virtualized environment makes this task difficult. Moreover, performance analysis tools commonly used in native environments were not available in virtualized environments. Our work addresses this problem to enable an in-depth performance analysis of VirtuOS. Our Perfctr-Xen framework provides capabilities for per-thread analysis with both accumulative event counts and interrupt-driven event sampling.
Perfctr-Xen is a flexible and generic tool, supports different modes of virtualization, and can be used for many applications outside of VirtuOS.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
23

Bendun, Fabian [Verfasser]. "Privacy enhancing technologies : protocol verification, implementation and specification / Fabian Bendun." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2020. http://d-nb.info/1220691054/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Homayounfard, Amir. "Value creation and the implementation of technologies for advancing services." Thesis, University of Portsmouth, 2018. https://researchportal.port.ac.uk/portal/en/theses/value-creation-and-the-implementation-of-technologies-for-advancing-services(c3e297bb-a8a8-4860-8bc2-23f9e905b637).html.

Full text
Abstract:
It is widely recognized that the use of technologies can serve as a critical strategic tool in benefiting from innovation and achieving increased business profitability in the retail sector. Research addressing the role of technologies in the theoretical framework of profiting from technological innovation (PFI) has proliferated in recent years. In parallel, the role of technologies within the theory of service-dominant logic (SDL) is growing. However, firms face the difficult task of applying technologies and releasing their value creation potential for advancing services. In this sense, the retail industry has been a recognized context for using technologies to innovate services. This research explores the role of technology and its value drivers for innovating services in the UK retail sector. While the fit between the PFI and SDL frameworks has been overlooked, insight into the importance of technology within this theoretical interface remains unexplored. This research focuses on the implementation stage of the technology adoption process. In this stage, retailers identify new technologies through collaboration with technology suppliers and engage in assessment and operational activities. In doing so, retailers are increasingly moving towards technologies aimed at innovating their services through improved efficiency and productivity. The research involved two phases of data collection. Phase one comprised semi-structured qualitative interviews with key informants from technology suppliers in the UK retail sector. Phase two comprised an exploratory stage with nine case studies in the UK retail sector. In conclusion, this research, first, offers a revised perspective on Teece's works of 1986 and 2006 on how to profit from technological innovation (PFI). Second, it develops an integrative framework by linking the revised PFI framework with the theoretical foundations of service-dominant logic (SDL).
Third, it provides a roadmap for the implementation model of technologies in the UK retail sector. Fourth, it offers a typology of the technology spectrum for delivering value through different technologies during the implementation process. The typology consists of nine unique types of technologies in the chosen sector. Fifth, it updates the typology of the technology spectrum and presents it in the form of a typology of retail business models, where each group of technologies requires an exclusive business model to be adopted by the retailer.
APA, Harvard, Vancouver, ISO, and other styles
25

TADESSE, ADDISHIWOT. "Efficient Bare Metal Backup and Restore in OpenStack Based Cloud Infrastructure: Design, Implementation and Testing of a Prototype." Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13727.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Menzel, Christoph. "Basic conditions for the implementation of speed adaptation technologies in Germany." [S.l.] : [s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=973040610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Guerster, Martin. "Self-leadership and the applicability of implementation intentions." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254182.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Ahmad, Bilal, and Sunisa Hemphoom. "Family Firms and Clean Technologies : A qualitative study exploring how a firm’s ownership status influences implementation of clean technologies." Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Företagsekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-42397.

Full text
Abstract:
Background: Sustainability practices have become a crucial factor for firms since there are external and internal pressures that expect firms to act environmentally friendly. Especially within organizations that are family-owned, being sustainable enables them to pass their firm on in good condition to the next generation. One way firms can be sustainable is through adopting a clean technology strategy, as it can provide both environmental and economic benefits. Being sustainable and having the ability to implement clean technology requires a long-term vision or long-term orientation (LTO), a characteristic often associated with family-controlled businesses (FCBs). Purpose: The purpose is to examine the adoption of clean technology within family-controlled businesses (FCBs) and non-family-controlled businesses (non-FCBs). The aim is to explore whether there are certain characteristics of FCBs that facilitate implementation of clean technologies. Method: This research is based on a qualitative research method with an abductive approach and an interpretivist philosophy. The primary data is collected through semi-structured interviews with four companies, of which three are family-controlled businesses and one is a non-family-controlled business. Conclusion: FCBs are more inclined to invest in clean technologies. The extent to which a company does or does not implement clean technologies depends not only on the institutional values of an organization but also on how deeply one or more of the three LTO dimensions are implanted in those values.
APA, Harvard, Vancouver, ISO, and other styles
29

Kadioglu, Koray. "Design And Implementation Of A Plug-in Framework For Distributed Object Technologies." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607662/index.pdf.

Full text
Abstract:
This thesis presents a framework design and implementation that enables run-time selection of different remote call mechanisms. In order to implement an extendable and modular system with run-time upgrading facility, a plug-in framework design is used. Since such a design requires enhanced usage of run-time facilities of the programming language that is used to implement the framework, in this study Java is selected because of its reflection and dynamic class loading facilities. A sample usage of this framework is enabling an application to distribute its tasks over a network using a suitable distributed object technology (DOT). In this work, CORBA, RMI and Java Sockets are the sample DOT plug-ins. A series of performance evaluations of these DOTs are presented to establish a baseline for choosing a suitable DOT for the application domain that uses this framework.
APA, Harvard, Vancouver, ISO, and other styles
30

Zahid, Muhammad Umair, and Mehdi Mohammed Hasan Rahi. "Evaluation of Two Web Development Technologies : Implementation of an online shopping store." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-11804.

Full text
Abstract:
In this thesis we evaluate the differences between two web development technologies by developing the same application, with the same functional requirements, in two different technologies: Microsoft Visual Studio 2010 and NetBeans IDE 6.8. We use Active Server Pages (ASP.NET) with C# in Visual Studio 2010 and Java Server Pages (JSP) with Servlets in NetBeans IDE 6.8. We developed an online shopping store with features for Admin and Client roles. We found that NetBeans with JSP is faster in executing updated code than Visual Studio with ASP.NET, because JSP compiles the code at runtime. We also found that ASP.NET makes the development process faster, because ASP.NET provides many built-in tools, controls and functions, whereas JSP offers little built-in help to generate code automatically. We further found that Visual Studio with ASP.NET is easier to learn than NetBeans with JSP, because ASP.NET has strong and complete documentation in the form of tutorials, books and examples.
APA, Harvard, Vancouver, ISO, and other styles
31

Pozzebon, Marlei. "The implementation of configurable technologies : negotiations between global principles and local contexts." Thesis, McGill University, 2003. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=84540.

Full text
Abstract:
This investigation focuses on configurable technologies, a term which refers to technologies that are highly parameterizable and are built from a range of components to meet the very specific requirements of a particular organization. They cannot be seen independently of their representations through external intermediaries who "speak" for the technology by providing images, descriptions, demonstrations, policies, templates and "solutions". I use the term technology-configuring mediation to refer to the process characterized by a socially constructed relationship between clients and consultants, where visions of how the technology should operate are negotiated. Configurable tools are well illustrated by ERP projects and represent an important trend in IS, drawing its popularity from the hope of benefiting from increased economies of scale and access to cumulative knowledge supposedly "embedded" into these technological artifacts.
From a critical interpretive perspective that combines ideas from structuration theory, social shaping views of technology and critical discourse analysis, this dissertation is based on an empirical investigation that spanned one year and is primarily organized in three papers. The first paper investigates the use of structuration theory in the IS field, asking: How can we successfully apply structuration theory in IS empirical research? Paper 1 contributes to the advancement of interpretive research methods by describing, analyzing and illustrating the ways IS scholars have used Giddens' theory in their research. In addition, it presents a repertoire of research strategies that may help overcome barriers to the empirical application of structurationist theory by dealing with three core elements: time, context and duality of technology.
The second paper discusses the rhetorical closure that often dominates discourses about IT, arguing that configurable technologies are social constructions and, to different degrees, are always open to change. Taking ERP projects as a typical illustration of configurable IT, Paper 2 describes a multilevel framework that identifies occasions for ERP package negotiation and change at three levels---segment, organization and individual---thereby breaking down the rhetorical closure that seems to dominate public debate. Paper 2 draws on structurationist and political streams of thinking about technology to set out a theoretical framework that contributes to advancing our knowledge of configurable IS phenomena.
The third paper addresses the question: How does the mediation process influence the negotiation between global principles and local contexts during the implementation of configurable IS, and how does such a negotiation influence the success of the implemented technology? Paper 3 provides a new understanding of configurable technology implementation. The structuring of a new configuration is seen as a mediation process where knowledge and power dependencies are created and recreated over time by consultants and clients, the entire process being bordered by internal and external constraints. Paper 3 recognizes different patterns of mediation and explains how these patterns affect the negotiation of global principles and local contexts as well as the project results. The study ends by identifying a collection of mediating strategies that are likely to improve the implementation of configurable IS.
APA, Harvard, Vancouver, ISO, and other styles
32

Dollete, Rodolfo G. "Implementation of magnetic strip/smart card technologies and their applications at NPS." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA285734.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Penney, John 1974. "Managing the implementation of automotive emission control technologies using systems engineering principles." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/34737.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, System Design & Management Program, 2004.
Includes bibliographical references (p. 79-80).
In the 1940s and 1950s poor air quality in major metropolitan areas throughout the United States started to negatively influence the health of citizens throughout the country. After numerous studies the government concluded that mobile sources of air pollution were a significant contributor to the deteriorating air quality. From that point onwards, the automobile manufacturers have been forced to comply with ever tightening emission regulations. This thesis describes an original investigation into the conflicting clockspeeds that prohibit rapid integration of new automobile emission technologies into production automobiles. Common themes and barriers to technology implementation are uncovered by systematically analyzing current production emission technology and exhaust gas after-treatment systems, and investigating how those systems have evolved over the years. A heuristic for analyzing the technology clockspeed is developed by decomposing the problem into four interconnected cycles. These four cycles correspond to the government's process to develop new automobile emission control regulations and the automobile manufacturer's ability to engineer and certify vehicle platforms, engines, and combustion after-treatment systems. This thesis analyzes the emission control technology development process in six chapters. The first chapter deals with setting the scope and defining the boundaries of the systems that will be analyzed. Chapter two analyzes the driving forces behind the creation of emission regulations and the legislative processes that transform ideas into law. Chapter three analyzes the second level decomposition of the problem at the vehicle level with a specific emphasis on Ford Motor Company's Fox vehicle platform.
(cont.) The fourth chapter decomposes the problem to the engine system level with a focus on the production history of American V8 engines. Chapter five investigates the management of a catalytic converter development program and recommends an organizational structure to efficiently develop catalytic converter systems. The organizational structure recommendation is based on results obtained from a task oriented design structure matrix and a system engineering decomposition.
by John Penney.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
34

Foster, Stephen MacDonald. "Analysis of the factors which affect the implementation of advanced manufacturing technologies." Thesis, Massachusetts Institute of Technology, 1989. https://hdl.handle.net/1721.1/122270.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1990.
Includes bibliographical references (leaves 115-116).
by Stephen MacDonald Foster.
APA, Harvard, Vancouver, ISO, and other styles
35

Efstathiades, Andreas. "Modeling the implementation of advanced manufacturing technologies in the Cypriot manufacturing industry." Thesis, Brunel University, 1997. http://bura.brunel.ac.uk/handle/2438/5791.

Full text
Abstract:
For the Cyprus manufacturing industry, previously committed to the production of medium quality standard products, the increased and changing nature of competitive pressures represents a fundamental challenge. The major problems the Cyprus manufacturing industry is facing appear to be labour shortages, together with low product competitiveness and poor production organization. It is widely believed that the introduction of Advanced Manufacturing Technologies (AMTs) offers a means of resolving the above problems, but their implementation is a risky venture. The main objective of the study was to examine the implementation of AMTs in the Cyprus manufacturing industry, identify the factors leading to successful application of these technologies and, based on these factors, develop an integrated process plan to facilitate their successful implementation. A survey was conducted on a sample of 40 companies using personal interviews based on a purpose-designed comprehensive questionnaire. The questionnaire encompassed the international trends in the management and implementation of AMT. Successes and failures have been considered in terms of the Technical, Manufacturing and Business aspects and influences of each technology. It was found that the most important factors contributing to the successful implementation of AMT were the level of long term planning, how well the AMT fit the existing processes, and the attention given to infrastructure preparation and human resource development. Based on the success factors identified, an integrated planning model has been developed. The model incorporates all the planning procedures and implementation parameters to be followed in order to ensure successful AMT adoption and implementation. The model addresses the three main stages of AMT adoption and implementation: (a) the planning phase, (b) the selection, transfer and pre-implementation phase and (c) the post-implementation phase.
For each phase the steps to be followed are fully explored and analysed. Finally the usefulness of the model in facilitating the successful application of AMT is illustrated through two case studies.
APA, Harvard, Vancouver, ISO, and other styles
36

Schröder, Thomas F. "Profitability of SST Options: efficiency gains through the implementation of self-service technologies." kostenfrei, 2007. http://www.unisg.ch/www/edis.nsf/wwwDisplayIdentifier/3406.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Zaker, Hosein Mohammed Reza. "BIM implementation in architectural practices : towards advanced collaborative approaches based on digital technologies." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/668050.

Full text
Abstract:
We are at a stage where Building Information Modelling (BIM) has reached a maturity level at which it can be widely adopted by professionals and organizations within the Architectural, Engineering and Construction (AEC) industry, an industry that is highly fragmented and not advanced in terms of digitalization, making effective collaboration hard to achieve. The advances in Information and Communication Technologies (ICT) promise to improve collaborative procedures in a wide range of industries. The widespread adoption of BIM has paved the way for the introduction of ICT within the AEC sector. The reported benefits of BIM point to its potential to contribute to successful inter-disciplinary collaboration. This calls for attention from architects, who should consider how BIM allows architectural practices to operate in truly novel ways to achieve new building efficiencies and organizations. This research was designed to investigate the crucial factors for effective collaboration based on advanced ICT and enabled by BIM with respect to architectural practices. Effective inter-disciplinary collaboration allows architects, as the authors of the projects, to oversee the development and delivery of the projects more consistently with their design intent. The concerns about the move towards adopting BIM by architectural firms were reviewed, and its influential factors and barriers were discussed. In the literature, BIM is described by different terms that attempt to capture its essence: "BIM methodology", "BIM technology", "BIM process", "BIM systems", etc. However, none of these terms can include all aspects of BIM. The term "ecosystem" was adopted to describe the nature of BIM, for reasons described in this work. To further constitute the BIM ecosystem, its dimensions of People, Products and Processes were presented in detail with respect to collaborative procedures.
This included the delineation of a number of BIM policies and protocols, tools and technologies, and roles and skills, which are all related to and suitable for architectural practices in their interdisciplinary collaboration. Through three case studies, the research questions and hypotheses were investigated. Based on the idea of change management and the socio-technical nature of BIM collaboration, a qualitative research approach was adopted. Various techniques were used to gather information, which was analyzed through a coding process of the qualitative data. The codes were interpreted as the factors influencing collaboration and were grouped to form the crucial concepts contributing to effective BIM-enabled collaborative procedures. It was revealed that the "joint decision making" factor is the most crucial one in this respect, followed by "collaboration involvement" and "interoperability". These findings were based on the frequency of the codes related to these factors in the data analysis. The crucial concepts in BIM-enabled collaboration were revealed to be "collaboration conditions", followed by "software capacity" and "human resources organization". The findings confirm the research hypotheses that BIM implementation asks architects to assume a leadership role in collaborative procedures and that it allows for the integration of ICT into the technological pipeline of architectural practices. However, the validity of the two hypotheses is subject to certain conditions that are discussed in this work. The research finds the area of BIM education to be of great interest for future research, as the factor of "training" has a great influence on the overall success of BIM-enabled collaboration. Furthermore, it was revealed that the crucial factor of "interoperability" needs more attention from both the industry and academic sectors.
The impacts of BIM implementation on existing and emerging roles within the industry are another area of great interest for future work and research.
APA, Harvard, Vancouver, ISO, and other styles
38

Blanchard, Roxann Russell. "Recovered energy logic--device optimization for circuit implementation in silicon and heterostructure technologies." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/34066.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (p. 91-93).
by Roxann Russell Blanchard.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
39

Monginho, Mário Augusto Bragado. "Estudo do impacto da virtualização de hardware num nó de uma organização distribuída: o estudo de caso da Administração Regional de Saúde do Alentejo." Master's thesis, 2012. http://hdl.handle.net/10400.26/6743.

Full text
Abstract:
Dissertation presented in fulfilment of the requirements for the degree of Master in Organizational Information Systems
The first part of this work develops the more theoretical content needed to gain a precise notion of virtualization. The rapid evolution of ICT, together with its constant mutation, has produced a very significant increase in processing, storage, and communication capacity. Data processing in organizations is carried out in diverse yet complementary working environments. Virtualization offers a uniform environment, very similar to that of physical machines, providing the operating system, applications, and network services in a fully isolated and independent way. This technique has been gaining notoriety in ICT infrastructures because it allows servers and desktops to be consolidated, reducing costs, improving security, and implementing tolerance to possible failures. The first part also presents the basic concepts of virtualization, forms of implementation, advantages and disadvantages, and virtualization technologies. The second part analyzes the impact that the implementation of virtualization has on an organization, through the case study of the Vendas Novas Health Center, to be replicated across the remaining health centers and extensions of the Évora district. This work also identifies the tools to be adopted for desktop virtualization, the opinions of the health-center staff, and business continuity. Following the analysis of the chosen approach to implementing virtualization, a proposal is presented for the installation of a cluster based on network load balancing.
APA, Harvard, Vancouver, ISO, and other styles
40

Tsai, Chia-Fan, and 蔡佳帆. "Comparisons of Benchmark of Virtualization Technologies." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/54058146244527759577.

Full text
Abstract:
Master's thesis
Tamkang University
Master's Program, Department of Information Management
96
In recent years, many enterprises have paid close attention to virtualization technologies, looking forward to the growth and advantages the technology promises, and some are introducing it into their internal working processes. They hope to exploit its particular strengths in order to consolidate resources, reduce costs, and simplify management. In this thesis, we set up several commonly used server operating systems on different virtualization software products and run stress tests in each system environment. Finally, we compare their performance side by side. We hope the results of this thesis offer useful suggestions to enterprises seeking a reasonable balance between manageability and performance when choosing server operating systems and virtualization technologies.
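The side-by-side comparison described here amounts to running the same stress workload in each environment and tabulating the timings. A minimal sketch of such a harness in Python — the workload and the environment labels are our own illustrations, not the thesis's actual benchmark suite:

```python
import time

def cpu_stress(iterations: int = 200_000) -> int:
    """A simple CPU-bound workload: sum of integer square roots."""
    total = 0
    for i in range(iterations):
        total += int(i ** 0.5)
    return total

def benchmark(workload, runs: int = 3) -> float:
    """Return the best-of-N wall-clock time for a workload, in seconds."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        best = min(best, time.perf_counter() - start)
    return best

# In a study like this, the same script would be run once per guest OS /
# hypervisor combination; here we record only one local, native result.
results = {"native": benchmark(cpu_stress)}
for name, seconds in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name:10s} {seconds:.4f}s")
```

Collecting best-of-N rather than a single run reduces noise from other processes, which matters when the goal is to compare small overheads between hypervisors.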
APA, Harvard, Vancouver, ISO, and other styles
41

Chen, Wei-Min, and 陳韋民. "An Overview of Virtualization Technologies for Cloud Computing." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/06810835515404836191.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
100
Cloud computing is a new concept that incorporates many existing technologies, such as virtualization. Virtualization is important for the establishment of cloud computing: with it, cloud computing can virtualize hardware resources into a huge resource pool for users to utilize. This thesis begins with an introduction to how a widely used service model classifies cloud computing into three layers — from the bottom up, IaaS, PaaS, and SaaS. Some service providers are taken as examples for each service model, such as Amazon Beanstalk and Google App Engine for PaaS, and Amazon CloudFormation and Microsoft mCloud for IaaS. Next, we turn our discussion to the hypervisors and the technologies for virtualizing hardware resources, such as CPUs, memory, and devices. Then, storage and network virtualization techniques are discussed. Finally, the conclusions and the future directions of virtualization are drawn.
APA, Harvard, Vancouver, ISO, and other styles
42

Liu, Chia-Hao, and 劉家豪. "Implementation of OpenStack based Network Function Virtualization." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/05391731328334012727.

Full text
Abstract:
Master's thesis
National Central University
In-service Master's Program, Department of Communication Engineering
103
A virtual network function (VNF) implements traditional network functions in software, such as a software UTM. In the past, a software UTM could not match the performance of hardware-based network appliances, so it served only as a low-cost option without demonstrating clear advantages. With modern cloud computing technology, however, it can achieve both efficiency and flexibility. The purpose of NFV (Network Functions Virtualization) is to decouple network services from the hardware and deploy networking functions on infrastructure comprising servers, storage, and even other networks. Our survey shows that such deployments tend to build on the advantages of cloud computing architectures; NFV thus acts like a network virtual machine in a cloud environment. SDN (Software-Defined Networking) is currently active in data center networks, providing flexible resource allocation. A cloud platform can also implement the switching function with OVS (Open vSwitch) on the server and use overlay tunneling protocols to encapsulate layer-2 tenant network traffic over layer-3 networks. Deploying NFV with SDN on a cloud platform is an innovative concept: SDN enables flexible routing of traffic within an NFV infrastructure, improving efficiency and maximizing the overall agility of the network. In this thesis, we use a Pica8 SDN switch, OpenStack, and a software UTM to implement a network function virtualization platform. Our effort is to integrate these software modules into a workable NFV system. A firewall NFV is practically implemented on the platform and, to evaluate performance, SmartBits test equipment is applied to examine its function and throughput.
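The overlay tunneling mentioned in this abstract — carrying layer-2 tenant frames over a layer-3 underlay — can be illustrated with the VXLAN header format (RFC 7348): 8 bytes holding a flags field and a 24-bit VNI that identifies the tenant network. This sketch is a generic illustration of the encapsulation, not tied to the thesis's actual Pica8/OpenStack setup:

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field is valid

def vxlan_encap(vni: int, l2_frame: bytes) -> bytes:
    """Prepend an 8-byte VXLAN header to a raw layer-2 frame.

    The result would normally travel inside a UDP datagram (port 4789)
    across the layer-3 underlay network.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # Byte 0: flags; bytes 1-3: reserved; bytes 4-6: VNI; byte 7: reserved.
    header = struct.pack("!BBBB", VXLAN_FLAG_VNI_VALID, 0, 0, 0) + \
             struct.pack("!I", vni << 8)
    return header + l2_frame

def vxlan_decap(packet: bytes) -> tuple:
    """Strip the VXLAN header, returning (vni, inner layer-2 frame)."""
    vni = struct.unpack("!I", packet[4:8])[0] >> 8
    return vni, packet[8:]
```

Because the VNI is 24 bits, an overlay like this supports about 16 million isolated tenant networks, versus the 4094 of plain VLANs — one reason such tunnels are standard in multi-tenant clouds.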
APA, Harvard, Vancouver, ISO, and other styles
43

Li, Ye. "Leveraging virtualization technologies for resource partitioning in mixed criticality systems." Thesis, 2015. https://hdl.handle.net/2144/14055.

Full text
Abstract:
Multi- and many-core processors are becoming increasingly popular in embedded systems. Many of these processors now feature hardware virtualization capabilities, such as the ARM Cortex A15, and x86 processors with Intel VT-x or AMD-V support. Hardware virtualization offers opportunities to partition physical resources, including processor cores, memory and I/O devices amongst guest virtual machines. Mixed criticality systems and services can then co-exist on the same platform in separate virtual machines. However, traditional virtual machine systems are too expensive because of the costs of trapping into hypervisors to multiplex and manage machine physical resources on behalf of separate guests. For example, hypervisors are needed to schedule separate VMs on physical processor cores. Additionally, traditional hypervisors have memory footprints that are often too large for many embedded computing systems. This dissertation presents the design of the Quest-V separation kernel, which partitions services of different criticality levels across separate virtual machines, or sandboxes. Each sandbox encapsulates a subset of machine physical resources that it manages without requiring intervention of a hypervisor. In Quest-V, a hypervisor is not needed for normal operation, except to bootstrap the system and establish communication channels between sandboxes. This approach not only reduces the memory footprint of the most privileged protection domain, it removes it from the control path during normal system operation, thereby heightening security.
APA, Harvard, Vancouver, ISO, and other styles
44

"A Comparative Study on the Performance Isolation of Virtualization Technologies." Master's thesis, 2019. http://hdl.handle.net/2286/R.I.55483.

Full text
Abstract:
abstract: Virtualization technologies are widely used in modern computing systems to deliver shared resources to heterogeneous applications. Virtual Machines (VMs) are the basic building blocks for Infrastructure as a Service (IaaS), and containers are widely used to provide Platform as a Service (PaaS). Although it is generally believed that containers have less overhead than VMs, an important tradeoff which has not been thoroughly studied is the effectiveness of performance isolation, i.e., to what extent the virtualization technology prevents the applications from affecting each other’s performance when they share the resources using separate VMs or containers. Such isolation is critical to provide performance guarantees for applications consolidated using VMs or containers. This paper provides a comprehensive study on the performance isolation for three widely used virtualization technologies, full virtualization, para-virtualization, and operating system level virtualization, using Kernel-based Virtual Machine (KVM), Xen, and Docker containers as the representative implementations of these technologies. The results show that containers generally have less performance loss (up to 69% and 41% compared to KVM and Xen in network latency experiments, respectively) and better scalability (up to 83.3% and 64.6% faster compared to KVM and Xen when increasing number of VMs/containers to 64, respectively), but they also suffer from much worse isolation (up to 111.8% and 104.92% slowdown compared to KVM and Xen when adding disk stress test in TeraSort experiments under full usage (FU) scenario, respectively). The resource reservation tools help virtualization technologies achieve better performance (up to 85.9% better disk performance in TeraSort under FU scenario), but cannot help them avoid all impacts.
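The isolation figures quoted in this abstract (e.g. "111.8% slowdown") reduce to a simple calculation: the relative increase in a benchmark's runtime when a noisy neighbor shares the host. A sketch with hypothetical runtimes — these numbers are illustrative only, not the study's actual measurements:

```python
def slowdown_pct(alone_s: float, contended_s: float) -> float:
    """Percent slowdown of a workload when co-located with a stressor."""
    return (contended_s - alone_s) / alone_s * 100.0

# Hypothetical TeraSort runtimes (seconds): alone vs. alongside a disk
# stress test on the same host, one entry per virtualization technology.
runs = {
    "docker": (100.0, 205.0),
    "kvm":    (120.0, 150.0),
    "xen":    (115.0, 140.0),
}
for tech, (alone, contended) in runs.items():
    print(f"{tech:7s} {slowdown_pct(alone, contended):6.1f}% slowdown")
```

With numbers like these, the container is fastest in isolation yet degrades far more under contention — the shape of tradeoff the study reports.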
Dissertation/Thesis
Masters Thesis Computer Science 2019
APA, Harvard, Vancouver, ISO, and other styles
45

Liu, Yu-Hsuan, and 劉宇軒. "Collaborative edge computing implementation based on docker virtualization." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/9u3ub3.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
107
The continued rapid development of the IoT has promoted the rise of clustered, distributed systems. Through a division of labor between machines, such systems can approach the computing, storage, and maintainability of expensive hardware, and even surpass it in fault tolerance. Most machines in a clustered, decentralized system have computing, storage, control, and networking capabilities, so each can be regarded as an edge computing node. By moving cloud services onto edge nodes, we not only reduce the physical transmission distance and the amount of network traffic but also avoid unpredictable network congestion, while improving hardware utilization. Combined with virtual machines (VMs), each service gains an independent space with a system environment customized for it, and multiple tasks can run on the same machine simultaneously without interfering with one another. However, a VM generally needs dedicated hardware resources to run its system, so hardware resources often fall short; moreover, when edge nodes collaborate on a task, they must synchronize with one another and face limits on task execution time. This thesis maps all of these problems onto the problem of single-publisher, multiple-subscriber communication. To solve them, Linux containers (Docker) are chosen as the execution environment architecture on each edge node: virtual containers share hardware resources while isolating the execution environment from the system, offering lighter weight and faster startup to break through the limitations of VMs, and SSH File System (SSHFS) is combined with them to solve the problem of synchronization between edge nodes.
Regarding the structure of this thesis: first, using the footage of the 2019 drone performance in Pingtung as an example, we simulate a machine that sends tasks to the cluster over a wireless network, letting a specified number of machines, or specific machines in the cluster, execute the task content and return the results, thereby simulating edge computing. Second, we simulate an automatic fire-escape path planning system, in which the edge nodes communicate with one another to find the best path from the current position to the exit door, thereby simulating a collaborative system.
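The abstract frames node coordination as a single-publisher, multiple-subscriber problem. A minimal in-memory sketch of that pattern follows; a real deployment would put a message broker (e.g. MQTT) between containers, and the class, topic, and node names here are our own illustrations:

```python
from collections import defaultdict
from typing import Callable

class PubSub:
    """Single-publisher fan-out: every subscriber of a topic gets each message."""

    def __init__(self) -> None:
        self._subs = defaultdict(list)  # topic -> list of handler callables

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: str) -> int:
        """Deliver message to all subscribers; return how many received it."""
        handlers = self._subs.get(topic, [])
        for handler in handlers:
            handler(message)
        return len(handlers)

# One controller publishes a task; several edge nodes receive it.
bus = PubSub()
received = []
for node in ("node-1", "node-2", "node-3"):
    bus.subscribe("task", lambda msg, n=node: received.append(f"{n}:{msg}"))
delivered = bus.publish("task", "fly-pattern-A")
print(delivered, received)
```

The publisher never needs to know how many nodes are listening, which is exactly the property that makes the pattern fit a cluster whose membership can change.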
APA, Harvard, Vancouver, ISO, and other styles
46

Liu, Chia-Cheng, and 劉家誠. "Implementation of Disaster Recovery Mechanism for Virtualization Environment." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/vx5z5t.

Full text
Abstract:
Master's thesis
National Formosa University
Graduate Institute of Information Management
100
The rise of cloud computing has brought up the topic of virtualization in recent years. Leading manufacturers in the information field, including Microsoft, Intel, and IBM, have invested substantial resources in virtualization. However, there are still security concerns in virtualization architectures. A virtual environment is quite dynamic, which makes data backup and disaster recovery all the more important for safe management. This thesis discusses the security issues of virtualization platforms, virtual machines, and desktop virtualization, and implements a disaster recovery mechanism for a virtualization environment.
APA, Harvard, Vancouver, ISO, and other styles
47

Chang, Shu-Hui, and 張淑惠. "Research on the Decision Factors for Enterprise Adopting Server Virtualization Technologies." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/3hxdgc.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
EMBA Program, Department of Industrial Engineering and Management
105
Virtualization technologies have seen prevailing adoption in enterprise IT over the last decade and are becoming a critical component of enterprise IT strategy. However, many enterprises in Taiwan still lack a deep understanding of the technologies and benefits of virtualization. Even though enterprises see the whole IT market moving rapidly toward virtualization, they still cannot build a solid business case to support their IT transformation. This research targets medium and large enterprises, studies the impact and consideration factors of virtualization technologies, identifies the most important decision-making factors in adopting them, and examines how these factors support enterprise IT requirements. The research identified the primary perspectives from which enterprises evaluate virtualization technology: cost, service capabilities, management capabilities, technology vision, and, most important, availability. The most influential decision factors are: preventing data from being lost or corrupted; staff operating costs; software acquisition and maintenance costs; preventing system interruption due to hardware component failure; product security; ensuring system availability in case of server failure; hardware acquisition and maintenance costs; data center space and facilities; bringing a system back online as quickly as possible after a server failure; and the manufacturer's support capabilities.
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Hsienyi, and 王顯義. "On Implementation of GPU Virtualization Using PCI Pass-Through." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/11676449138645354038.

Full text
Abstract:
Master's thesis
Tunghai University
Department of Computer Science and Information Engineering
100
Nowadays, NVIDIA's CUDA is a general-purpose, scalable parallel programming model for writing highly parallel applications. It provides several key abstractions: a hierarchy of thread blocks, shared memory, and barrier synchronization. This model has proven quite successful at programming multithreaded many-core GPUs and scales transparently to hundreds of cores; scientists throughout industry and academia already use CUDA to achieve dramatic speedups in production and research codes. GPU-based clusters are likely to play an important role in future cloud computing centers, because some compute-intensive applications require both CPUs and GPUs. In this thesis, PCI pass-through technology enables the virtual machines in a virtualized environment to use an NVIDIA graphics card, and hence CUDA high-performance computing: a virtual machine has not only virtual CPUs but also a real GPU for computing, so its performance is expected to increase dramatically. This thesis measures the performance differences between virtual machines and physical machines using CUDA, and how varying the number of virtual CPUs influences CUDA performance. Finally, we compare two open-source hypervisors to see whether their CUDA performance differs after PCI pass-through. Through these experiments, we can learn which environment achieves the best efficiency when using CUDA in a virtualized setting.
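Under KVM/libvirt, the kind of PCI pass-through this abstract evaluates is typically declared in the guest's domain XML with a `hostdev` element that hands an entire PCI function to the VM; the host must boot with the IOMMU (Intel VT-d / AMD-Vi) enabled. A sketch of such a fragment — the PCI address of the GPU is hypothetical, and this is not the thesis's actual configuration:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- Host PCI address of the GPU (hypothetical: 0000:01:00.0) -->
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

With `managed='yes'`, libvirt detaches the device from its host driver when the guest starts and reattaches it on shutdown, which is why the guest sees the real GPU rather than an emulated one.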
APA, Harvard, Vancouver, ISO, and other styles
49

CHEN, PING-SHENG, and 陳稟升. "Design and Implementation of Virtualization for the Enterprise Infrastructure." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/86262413875925476345.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Master's and Doctoral Program, Department of Electrical Engineering
104
With the popularity of mobile devices and changes in users' demands, many new patterns of use and business models have appeared, and the relationship between cloud computing and daily life grows ever closer. There are three service models in cloud computing: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Unlike the usual discussion of the IaaS service model, which investigates its relation to external users, this thesis focuses on the internal IaaS service model of an enterprise. With a limited budget, maintaining conventional dedicated hardware faces many constraints and challenges. Thanks to technical progress, software and hardware keep improving, and an enterprise can use virtualization to construct an internal IaaS service environment, utilizing its assets and resources flexibly. This thesis examines a real enterprise infrastructure to resolve the shortcomings of traditional maintenance. It describes how to introduce virtualization on the x86 platform and how to design a customized cabling structure to suit the demands of virtualization, and it evaluates the feasibility of virtualization through an actual implementation. The author offers a solution for businesses that have not yet adopted virtualization and surveys the opportunity and its future development.
APA, Harvard, Vancouver, ISO, and other styles
50

Wang, Wei-Ya, and 王薇雅. "Implementation of an Enterprise Cloud System with Virtualization Technology." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/17278083826144997503.

Full text
Abstract:
Master's thesis
Shih Hsin University
Graduate Institute of Information Management (including in-service master's program)
102
In a traditional system architecture, different applications are usually installed on different servers to provide services. When there is a temporary requirement to install an additional service, it may be necessary to buy new machines or take great effort to re-organize the application installation among the servers. In this situation, system resources may be used inefficiently, and managing the distributed system architecture becomes a significant problem. To settle these problems, we introduce a system transition and implementation procedure to help a company build a cloud environment with virtualization technology. The proposed procedure translates the traditional distributed server architecture into a virtualized platform. With the adoption of virtualization, system resources can be shared and used more efficiently, and system deployment becomes easier and faster. Validation through a case study shows that the proposal not only achieves these goals but also helps the new platform reduce power waste; centralized system management is simplified, and the time and long-term cost of system maintenance are decreased.
APA, Harvard, Vancouver, ISO, and other styles
