Academic literature on the topic 'Open Virtualization Format'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Open Virtualization Format.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Open Virtualization Format"

1

Jeyakanthan, Mahesa, and Amiya Nayak. "Policy management: leveraging the open virtualization format with contract and solution models." IEEE Network 26, no. 5 (September 2012): 22–27. http://dx.doi.org/10.1109/mnet.2012.6308071.

Full text
APA, Harvard, Vancouver, ISO, and other styles
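Since every entry in this list builds on the same packaging standard, a minimal sketch of what an OVF descriptor looks like may help. The snippet below parses a hypothetical descriptor with Python's standard library; the file names and identifiers are illustrative assumptions that only mirror the general DMTF OVF 1.x envelope structure (References, VirtualSystem), not anything from the cited paper.

    # A minimal sketch of reading an OVF descriptor with Python's standard
    # library. The descriptor text below is an illustrative assumption;
    # real descriptors are produced by tools such as VBoxManage or ovftool.
    import xml.etree.ElementTree as ET

    NS = "{http://schemas.dmtf.org/ovf/envelope/1}"  # OVF 1.x namespace

    descriptor = """<?xml version="1.0"?>
    <Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
              xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
      <References>
        <File ovf:id="file1" ovf:href="disk1.vmdk"/>
      </References>
      <VirtualSystem ovf:id="vm1">
        <Name>example-vm</Name>
      </VirtualSystem>
    </Envelope>"""

    root = ET.fromstring(descriptor)
    for f in root.iter(NS + "File"):
        print("packaged file:", f.get(NS + "href"))  # disk image in References
    for vs in root.iter(NS + "VirtualSystem"):
        print("virtual system:", vs.get(NS + "id"))

In a full package the descriptor travels alongside the referenced disk images, either as loose files or bundled into a single OVA archive.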
2

Frey, Lewis J., Katherine A. Sward, Christopher J. L. Newth, Robinder G. Khemani, Martin E. Cryer, Julie L. Thelen, Rene Enriquez, et al. "Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network." Journal of the American Medical Informatics Association 22, no. 6 (March 21, 2015): 1271–76. http://dx.doi.org/10.1093/jamia/ocv009.

Full text
Abstract:
Objectives: To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Material and Methods: Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Results: Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data consistently transferred using the data dictionary and 1% needed human curation. Conclusions: Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes.
APA, Harvard, Vancouver, ISO, and other styles
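A rough sketch may help picture the data-dictionary mechanism the abstract credits with reducing format misalignment: site-local codes are normalized to one shared vocabulary before exchange. The dictionary contents and field names below are illustrative assumptions, not the research network's actual dictionary.

    # A minimal sketch, assuming a shared data dictionary maps each site's
    # local codes onto one canonical vocabulary before data exchange.
    DATA_DICTIONARY = {
        "sex": {"M": "male", "F": "female", "1": "male", "2": "female"},
    }

    def harmonize(field: str, value: str) -> str:
        mapping = DATA_DICTIONARY.get(field, {})
        # Unknown codes pass through unchanged, left for human curation,
        # much like the 1% of data the study reports needing manual review.
        return mapping.get(value, value)

    print(harmonize("sex", "1"))  # site-local "1" -> canonical "male"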
3

Von Suchodoletz, Dirk, Klaus Rechert, Randolph Welte, Maurice Van den Dobbelsteen, Bill Roberts, Jeffrey Van der Hoeven, and Jasper Schroder. "Automation of Flexible Migration Workflows." International Journal of Digital Curation 6, no. 1 (March 8, 2011): 183–98. http://dx.doi.org/10.2218/ijdc.v6i1.181.

Full text
Abstract:
Many digital preservation scenarios are based on the migration strategy, which itself is heavily tool-dependent. For popular, well-defined and often open file formats – e.g., digital images such as PNG, GIF, JPEG – a wide range of tools exists. Migration workflows become more difficult with proprietary formats, such as those used by the many text processing applications that have appeared over the last two decades. If a certain file format cannot be rendered with current software, emulation of the original environment remains a valid option. For instance, with the original Lotus AmiPro or Word Perfect, it is not a problem to save an object of this type as ASCII text or Rich Text Format. In specific environments, it is even possible to send the file to a virtual printer, thereby producing a PDF as migration output. Such manual migration tasks typically involve human interaction, which may be feasible for a small number of objects, but not for larger batches of files. We propose a novel approach that uses a software-operated VNC abstraction layer to replace human interaction with machine interaction. Emulators or virtualization tools equipped with a VNC interface are very well suited to this approach. But screen, keyboard and mouse interaction is just part of the setup: digital objects also need to be transferred into the original environment and extracted again after processing. Meanwhile, the complexity of this new generation of migration services is rising quickly; a preservation workflow now comprises not only the migration tool itself, but a complete software and virtual hardware stack, with recorded workflows linked to every supported migration scenario. Thus the requirements of OAIS management must include proper software archiving, emulator selection, and system image and recording handling. The concept of view-paths could help either to automatically determine the proper pre-configured virtual environment or to set up system images for certain migration workflows. Demand for view-paths may rise, as the generation of PDF output files from Word Perfect input could be cached as pre-fabricated emulator system images. The current groundwork allows several possible optimizations, such as using the automation features of the original environments.
APA, Harvard, Vancouver, ISO, and other styles
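The software-operated VNC layer the authors propose can be pictured with a short sketch. It uses the open-source vncdotool library to stand in for the human operator; the host address, coordinates, and file name are illustrative assumptions rather than the authors' recorded workflows.

    # A minimal sketch of machine-driven interaction over VNC, assuming a
    # legacy environment (emulator or virtualizer) exposes its display on
    # localhost port 5900. Requires the third-party vncdotool package.
    import time
    from vncdotool import api

    client = api.connect("localhost::5900")   # host::port selects an explicit port
    client.mouseMove(120, 40)                 # reach the File menu of the old application
    client.mousePress(1)                      # left-click
    time.sleep(2)                             # give the legacy environment time to react
    client.keyPress("enter")                  # confirm a hypothetical export dialog
    client.captureScreen("after_export.png")  # record the state for the workflow log
    client.disconnect()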
4

Sergeev, A. N., N. Yu Kulikova, and G. V. Tsymbalyuk. "Using video conferencing services in network educational communities: Theory and experience of implementation in teaching informatics." Informatics and education, no. 7 (November 4, 2020): 47–54. http://dx.doi.org/10.32517/0234-0453-2020-35-7-47-54.

Full text
Abstract:
The article raises the problem of using video conferencing services in networked educational communities, in the context of the virtualization of the educational space, when organizing student interaction during online learning in informatics. Changes in the education system associated with the development of social networks and network communities are discussed, and the new opportunities they provide to the teacher for including students in various types of educational and cognitive activities are analyzed. The concepts of “network community” and “social networks” are defined. The role of network communities and social networks as collective subjects of social, informational and educational activity on the Internet is demonstrated. From the standpoint of the principle of interactivity among the subjects of educational activity in network communities, the educational interaction of schoolchildren in the Internet-based video conference format is analyzed. The capabilities of popular services for networked interaction among participants in the educational process when organizing training in the video conference format are described, and examples of open software solutions are given. The article reports experience of using video conferencing services in teaching informatics and robotics within the networked educational communities of the Internet. The authors propose an approach to using popular video conferencing services for interactive training in promising areas of informatics and robotics through joint activities on the Internet. The results obtained contribute to the theory of informatization of education and can be used in practice by teachers who implement the educational process with schoolchildren using interactive services and Internet resources.
APA, Harvard, Vancouver, ISO, and other styles
5

"Performance Analysis of Constrained Device Virtualization Algorithm." International Journal of Innovative Technology and Exploring Engineering 9, no. 5 (March 10, 2020): 532–39. http://dx.doi.org/10.35940/ijitee.e2606.039520.

Full text
Abstract:
The Internet of Things aims to automate and add intelligence to existing processes by introducing constrained devices such as sensors and actuators. These constrained devices lack computation and memory resources and are usually battery powered for ease of deployment. Due to their limited capabilities, constrained devices usually host proprietary protocols, platforms, data formats and data structures for communication, and are therefore unable to communicate with devices from other vendors. This inability leads to interoperability issues in the Internet of Things, which runs against the spirit of the Internet of Things, envisioned as an interconnection of billions of devices, and results in isolated, vendor-locked, closed-loop deployments of IoT solutions. Various attempts have been made by industry and academia to resolve the interoperability issues among constrained devices. However, the majority of solutions sit at individual layers of the communication stack and do not provide a holistic solution to the problem. In more recent research, there have been theoretical proposals to virtualize constrained devices to abstract their data so that it is always available to applications. We have adopted this technique in our research to virtualize the entire Internet of Things network so that virtual TCP/IP-based protocols can operate on virtual networks, enabling interoperability. This paper proposes the operations of the Constrained Device Virtualization Algorithm and then simulates it in CloudSim to derive performance results. The paper further highlights open issues for future research in this area.
APA, Harvard, Vancouver, ISO, and other styles
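The virtualization step this abstract describes, hiding vendor-specific data formats behind uniform virtual devices, can be sketched briefly; the vendor payloads and class below are hypothetical stand-ins, not the paper's algorithm.

    # A minimal sketch of the idea above: each physical device keeps its
    # proprietary payload format, while a virtual counterpart exposes one
    # uniform representation that TCP/IP-based applications can consume.
    # Vendor formats and field names here are illustrative assumptions.
    import json

    def vendor_a_reading() -> str:
        return "T=23.5;U=%"      # proprietary delimited string

    def vendor_b_reading() -> dict:
        return {"temp_c": 23.5}  # proprietary dict layout

    class VirtualDevice:
        """Uniform view over one constrained device."""
        def __init__(self, parse):
            self.parse = parse
        def read(self) -> str:
            return json.dumps({"temperature_c": self.parse()})

    devices = [
        VirtualDevice(lambda: float(vendor_a_reading().split(";")[0][2:])),
        VirtualDevice(lambda: vendor_b_reading()["temp_c"]),
    ]
    for d in devices:
        print(d.read())  # both emit {"temperature_c": 23.5}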
6

Bhushan, Ram Chandra, and Dharmendra K. Yadav. "Formal Specification and Verification of Data Separation for Muen Separation Kernel." Recent Advances in Computer Science and Communications 13 (August 31, 2020). http://dx.doi.org/10.2174/2666255813999200831103502.

Full text
Abstract:
Introduction: The development of integrated mixed-criticality systems is becoming increasingly popular for application-specific systems, which need a separation mechanism for the available onboard resources and processors equipped with hardware virtualization. Hardware virtualization allows physical resources, including processor cores, memory, and I/O devices, to be partitioned among guest virtual machines (VMs). For building a mixed-criticality computing environment, traditional virtual machine systems are inappropriate because they use hypervisors to schedule separate VMs on physical processor cores. In this article, we discuss the design of an environment for mixed-criticality systems: Muen, an x86/64 separation kernel for high assurance. The Muen separation kernel is an open-source microkernel that has no runtime errors at the source code level. It has been designed precisely to meet the challenging requirements of high-assurance systems built on the Intel x86/64 platform. Muen is under active development, and none of its kernel properties has been verified yet. In this paper, we present a novel work of formally verifying one of these kernel properties. Method: The specification language used in NuSMV is CTL, a branching-time logic: its model of time is a tree-like structure in which the future is not determined; there are different paths into the future, any one of which might be the path actually realized. This section shows the verification of all the requirements mentioned in section 3. In the NuSMV tool, the command used to verify formulas written in CTL is check_ctlspec -p "CTL-expression". Each occurrence of a variable in the scope of a bound variable with the same name and the same number of arguments is bound by the nearest quantifier. Result: Formal methods have been applied to various projects for specification and verification purposes, among them the SCOMP, SeaView, LOCK, and Multinet Gateway projects. The TLS was written formally, and several mappings were made between the TLS and the SCOMP code: informal English to TLS, TLS to actual code, and TLS to pseudo-code. The authors present an ACL2 model for a generic separation kernel, also known as the GWV approach. Conclusion: We consider the formal verification of the data separation property, one of the crucial modules for achieving the separation functionality. The verification of the data separation manager is carried out at the design level using the NuSMV tool. Furthermore, we present the complete model of the data separation unit along with its code written in the NuSMV modelling language. Finally, we converted the non-functional requirements into formal logic, against which the model was then formally verified.
APA, Harvard, Vancouver, ISO, and other styles
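To make the branching-time reading above concrete, a data-separation requirement of the kind verified for separation kernels can be written as a single CTL formula. The state variables here are illustrative assumptions, not Muen's actual model:

    \mathbf{AG}\,\big((\mathit{active} = s_1 \wedge \mathit{target} \in \mathit{mem}(s_2)) \rightarrow \neg\,\mathit{write\_granted}\big)

Here AG quantifies over every state of every path in the computation tree: whenever subject s1 is running and addresses memory owned by s2, write access is never granted. A property of this shape is what the check_ctlspec command is asked to verify.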
7

Cancela, Héctor. "Preface to the December 2015 issue." CLEI Electronic Journal, December 1, 2015. http://dx.doi.org/10.19153/cleiej.18.3.0.

Full text
Abstract:
We are glad to present the last issue of 2015, completing Volume 18 of the CLEI Electronic Journal. This issue comprises the following regular papers. The first paper, "Quality of Protection on WDM networks: A Recovery Probability based approach", by M. D. Rodas-Brítez and D. P. Pinto-Roa, proposes a new quality of protection (QoP) paradigm for Wavelength Division Multiplexing optical networks. The new approach is flexible, allowing the network administrator to define and select a set of protection levels based on recovery probabilities, which measure the degree of conflict among primary lightpaths sharing backup lightpaths. To show the interest of the approach, a Genetic Algorithm is used to design a routing strategy by multi-objective optimization, minimizing the number of blocked requests, the number of services without protection, the total differences between the requested QoP and the assigned QoP, and the network cost. The second paper, "Towards Scalability for Federated Identity Systems for Cloud-Based Environments", by A. A. Pereira, J. B. M. Sobral and C. M. Westphall, addresses scalability issues in identity management for cloud computing environments. The authors propose an adapted sticky-session mechanism as an alternative to the more common distributed-memory approach, and discuss the implications in terms of computational resources, throughput and overall efficiency. The following work, "Formal Analysis of Security Models for Mobile Devices, Virtualization Platforms, and Domain Name Systems", by G. Betarte and C. Luna, tackles security models for security-critical applications in three areas: mobile devices, virtualization platforms, and domain name systems. The authors develop formalizations using the Calculus of Inductive Constructions to study the usual variants of security models on these platforms and their properties. The last paper of this issue is "Digi-Clima Grid: image processing and distributed computing for recovering historical climate data", by S. Nesmachnow, G. Usera and F. Brasileiro. It reports an experience of implementing semi-automatic techniques for digitizing and recovering historical climate records by applying parallel computing techniques over distributed computing infrastructures, applied to Uruguayan historical climate data. As we complete the eighteenth year of continued existence of CLEIej, we thank the regional community for its continued support, and we encourage researchers working in computer science and its applications to consider submitting their work to CLEIej, the leading electronic, open-access journal in computer science in Latin America.
APA, Harvard, Vancouver, ISO, and other styles
8

"Forming of Organizational and Economic Mechanism of the Cryptocurrency Market for the Countries with Position of Anticipation." International Journal of Recent Technology and Engineering 8, no. 6 (March 30, 2020): 72–79. http://dx.doi.org/10.35940/ijrte.f7144.038620.

Full text
Abstract:
The article reveals the economic essence of cryptocurrency as an information and technological innovation. The authors determine that cryptocurrency is a universal global means of payment, exchange and investment, which exists in the form of a highly protected software code and is characterized by a free market exchange rate. Having considered the technical, technological and organizational aspects of using cryptocurrencies, the authors compare electronic money and cryptocurrency. An analysis of the markets and types of cryptocurrencies has enabled them to form a ranking of cryptocurrencies by level of capitalization. The article describes the dynamics of the growth of cryptocurrency market capitalization and the dominance of Bitcoin's market share. The authors identify the strengths of Bitcoin that have allowed this cryptocurrency to become a useful international means of payment with high investment potential. The article examines the weaknesses of cryptocurrency exchange for both ordinary consumers and governments. The authors argue that institutionalization, ensured by the formal and informal establishment of rules, is necessary for the effective functioning of cryptocurrency. They substantiate three positions of institutional support describing countries' attitudes to the functioning of a cryptocurrency market: a loyal position, a categorical position, and a position of anticipation. The authors develop an organizational and economic mechanism for forming a cryptocurrency market based on the functions, methods and tools of management, and suggest policy directions for the functioning of a cryptocurrency market in countries with the position of anticipation. The process of virtualization of modern society is inevitable. Countries with the position of anticipation should support the course toward innovation by solving a range of regulatory, technical and informational issues in the development of the cryptocurrency market, based on leading international experience. The primary tasks should be: granting cryptocurrency legal status and developing rules for its circulation; introducing technological innovations with the participation of the state, large corporations and venture funds; creating an open ecosystem for the interaction of all participants; and providing wide informational support at all levels.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Open Virtualization Format"

1

Ribeiro, Didier Martins. "A Web-based Solution for Virtual Machine Instances Migration Across Type-2 Hypervisors." Master's thesis, 2014. http://hdl.handle.net/10400.6/6170.

Full text
Abstract:
Cloud computing has improved computing efficiency by reducing costs for users. A current datacenter consists of tens to hundreds of thousands of servers and contains hundreds of thousands of hierarchically connected switches. By sharing processing resources through services like "software as a service" (SaaS), users can amortize the cost of hardware and software. To facilitate system upgrades and maintenance, virtual machines (VMs) are often used to provide services, and their migration results in better use of resources. The cloud, supported by virtualization, is emerging as an important service-oriented paradigm. Systems administration is critical to providing availability and performance in data systems, automatically supplying the real-time capacity required to meet service requests. But virtualization does not reduce the complexity of a system. In fact, the execution of multiple virtual machines (VMs) on top of a physical infrastructure can increase overall system complexity and present new challenges in its administration. Virtualization of resources is a key component of cloud computing for providing computing and storage services, and is ubiquitous in today's data centers. Supporting servers by building clusters of virtual machines is universally adopted to maximize the utilization of hardware resources. Virtualization has become a key technology implemented by a growing number of Information Technology (IT) organizations worldwide. System virtualization has rapidly gained popularity because of its potential to reduce IT costs, allowing IT managers to increase the use of existing physical resources and even reduce the number of deployed systems. This consolidation helps reduce hardware management requirements, reducing the need for power and cooling, and thus reducing IT costs in general. Additionally, deploying virtualization solutions typically means adding management tools to the existing environment. Access to software and data anywhere, anytime, on any device and over any connection has long been a crucial issue for researchers and systems architects. The amount of data processed increases each year, both in large-scale systems and in smaller environments. Likewise, more computation is being performed to process the data, and more communication is used to distribute it. This phenomenon is associated with a steady increase in the computing power, storage and communication resources available, though with differing characteristics. The current growth in the use of virtualization tools has made the use of virtual machines more popular, and the use of virtual test laboratories is becoming more common in QA testing practice. This approach allows testers to test different applications without relying on the permanent configuration of a system. Using virtual machines, QA tests can simulate different computers with different operating systems on a single physical computer, or create a full virtual laboratory with multiple, differently configured virtual machines. These virtual "computers" operate independently of each other, and two or more virtual platforms can be launched simultaneously on one computer, saving the cost of buying more hardware just to run quality control tests. Applications running in a virtual machine behave as if they were running on their own physical system.
This is also useful for testing web applications, because web applications can be tested simultaneously across browsers that run independently of each other in different virtual machines, again without the cost of buying more hardware for testing needs. Testing applications with virtual machines has several uses: distributed testing of client-server applications, functional testing, regression testing, etc. But no matter what kind of QA testing is used, it will be more effective if automated, and any kind of testing in virtual laboratories can be easily automated. A computer system is a dynamic system, and operating system configurations change continually. Installing or updating software drivers and hardware happens frequently, and installing different versions of an application affects the internal structure of the system and may influence test results. While buying multiple computers to support multiple platforms is an option for some, it is often too expensive for most testing labs. Fortunately, virtual machines are a much more cost-effective solution to these problems. Once a virtual machine, or a lab full of virtual machines, has been created and configured, a stable system configuration is available, which is very important when testing applications. A more powerful computer may be needed to run multiple virtual machines at the same time, but that is often cheaper than buying three physical computers. With a virtual laboratory on one computer, tests of distributed client-server applications can be performed without the need for multiple computers. Based on the characteristics presented above, this dissertation presents VirtualMigra. The VirtualMigra platform is a tool that allows the migration of virtual machines, regardless of their manufacturer, among different users on a LAN. The use of the Oracle VirtualBox and VMware Workstation APIs provides a comfortable and intuitive level of abstraction for users. Exhaustive experiments were conducted to test the platform; these were performed successfully in a real environment, so the platform is ready for real-world use.
APA, Harvard, Vancouver, ISO, and other styles
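Because migration between type-2 hypervisors typically travels through OVF/OVA packages, the packaging leg of such a workflow can be sketched briefly. The snippet drives VirtualBox's VBoxManage command line from Python; the machine name and paths are illustrative assumptions, not VirtualMigra's actual code.

    # A minimal sketch of the OVF/OVA leg of a cross-hypervisor migration:
    # export a VirtualBox VM into a single OVA archive that another host
    # (or another hypervisor's import tool) can consume. Assumes VBoxManage
    # is on PATH; the VM name and output path are illustrative.
    import subprocess

    VM_NAME = "example-vm"
    ARCHIVE = "/tmp/example-vm.ova"

    subprocess.run(["VBoxManage", "export", VM_NAME, "--output", ARCHIVE], check=True)

    # On the receiving side the archive can be re-instantiated with:
    #   VBoxManage import /tmp/example-vm.ova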