Academic literature on the topic 'Distributed virtualization. eng'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Distributed virtualization. eng.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Distributed virtualization. eng"

1

Mikkilineni, Rao, Giovanni Morana, Daniele Zito, and Marco Di Sano. "Service Virtualization Using a Non-von Neumann Parallel, Distributed, and Scalable Computing Model." Journal of Computer Networks and Communications 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/604018.

Full text
Abstract:
This paper describes a prototype implementing a high degree of transaction resilience in distributed software systems using a non-von Neumann computing model exploiting parallelism in computing nodes. The prototype incorporates fault, configuration, accounting, performance, and security (FCAPS) management using a signaling network overlay and allows the dynamic control of a set of distributed computing elements in a network. Each node is a computing entity endowed with self-management and signaling capabilities to collaborate with similar nodes in a network. The separation of parallel computing and management channels allows the end-to-end transaction management of computing tasks (provided by the autonomous distributed computing elements) to be implemented as network-level FCAPS management. While the new computing model is operating system agnostic, a Linux, Apache, MySQL, PHP/Perl/Python (LAMP) based services architecture is implemented in a prototype to demonstrate end-to-end transaction management with auto-scaling, self-repair, dynamic performance management and distributed transaction security assurance. The implementation is made possible by a non-von Neumann middleware library providing Linux process management through multi-threaded parallel execution of self-management and signaling abstractions. We did not use Hypervisors, Virtual machines, or layers of complex virtualization management systems in implementing this prototype.
APA, Harvard, Vancouver, ISO, and other styles
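The separation of a computing channel from a parallel signaling/management channel described in the abstract above can be pictured with a minimal, hypothetical sketch: a worker thread does the application work while a separate management thread listens for out-of-band control signals. The class, names, and queue-based "signaling network" are illustrative assumptions, not the authors' implementation.

```python
import queue
import threading
import time

class ManagedNode:
    """Toy computing element with separate compute and signaling channels."""

    def __init__(self, name):
        self.name = name
        self.signals = queue.Queue()   # stand-in for the signaling overlay
        self.running = True

    def compute(self):
        # Computing channel: placeholder for the actual workload.
        while self.running:
            time.sleep(0.1)

    def manage(self):
        # Management channel: reacts to FCAPS-style signals out of band.
        while self.running:
            try:
                signal = self.signals.get(timeout=0.5)
            except queue.Empty:
                continue
            if signal == "stop":       # e.g. a repair or scale-down decision
                self.running = False

node = ManagedNode("worker-1")
threading.Thread(target=node.compute).start()
threading.Thread(target=node.manage).start()
node.signals.put("stop")               # a management signal arrives independently of the workload
```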
2

Zhang, Lifeng, Celimuge Wu, Tsutomu Yoshinaga, Xianfu Chen, Tutomu Murase, and Yusheng Ji. "Multihop Data Delivery Virtualization for Green Decentralized IoT." Wireless Communications and Mobile Computing 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/9805784.

Full text
Abstract:
Decentralized communication technologies (i.e., ad hoc networks) provide more opportunities for emerging wireless Internet of Things (IoT) due to the flexibility and expandability of distributed architecture. However, the performance degradation of wireless communications with the increase of the number of hops becomes the main obstacle in the development of decentralized wireless IoT systems. The main challenges come from the difficulty in designing a resource and energy efficient multihop communication protocol. Transmission control protocol (TCP), the most frequently used transport layer protocol for achieving reliable end-to-end communications, cannot achieve a satisfactory result in multihop wireless scenarios as it uses end-to-end acknowledgment which could not work well in a lossy scenario. In this paper, we propose a multihop data delivery virtualization approach which uses multiple one-hop reliable transmissions to perform multihop data transmissions. Since the proposed protocol utilizes hop-by-hop acknowledgment instead of end-to-end feedback, the congestion window size at each TCP sender node is not affected by the number of hops between the source node and the destination node. The proposed protocol can provide a significantly higher throughput and shorter transmission time as compared to the end-to-end approach. We conduct real-world experiments as well as computer simulations to show the performance gain from our proposed protocol.
APA, Harvard, Vancouver, ISO, and other styles
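As a rough illustration of the hop-by-hop idea in the abstract above, the sketch below shows a relay node that terminates one-hop TCP legs and re-sends the data on a fresh one-hop TCP connection to the next node, so acknowledgments never traverse the full path. Ports, addresses, and the buffer size are illustrative assumptions, not the authors' protocol.

```python
import socket

LISTEN_PORT = 5000                  # assumed port of the incoming one-hop leg
NEXT_HOP = ("10.0.0.2", 5000)       # assumed address of the next relay/destination

def relay_forever():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", LISTEN_PORT))
    srv.listen(1)
    while True:
        upstream, _ = srv.accept()                        # previous hop connects here
        downstream = socket.create_connection(NEXT_HOP)   # separate onward TCP leg
        while True:
            chunk = upstream.recv(65536)
            if not chunk:
                break
            downstream.sendall(chunk)   # reliability is handled per hop, not end to end
        downstream.close()
        upstream.close()

if __name__ == "__main__":
    relay_forever()
```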
3

Choi, Jin-young, Minkyoung Cho, and Jik-Soo Kim. "Employing Vertical Elasticity for Efficient Big Data Processing in Container-Based Cloud Environments." Applied Sciences 11, no. 13 (July 4, 2021): 6200. http://dx.doi.org/10.3390/app11136200.

Full text
Abstract:
Recently, “Big Data” platform technologies have become crucial for distributed processing of diverse unstructured or semi-structured data as the amount of data generated increases rapidly. In order to effectively manage these Big Data, Cloud Computing has been playing an important role by providing scalable data storage and computing resources for competitive and economical Big Data processing. Accordingly, server virtualization technologies that are the cornerstone of Cloud Computing have attracted a lot of research interests. However, conventional hypervisor-based virtualization can cause performance degradation problems due to its heavily loaded guest operating systems and rigid resource allocations. On the other hand, container-based virtualization technology can provide the same level of service faster with a lightweight capacity by effectively eliminating the guest OS layers. In addition, container-based virtualization enables efficient cloud resource management by dynamically adjusting the allocated computing resources (e.g., CPU and memory) during the runtime through “Vertical Elasticity”. In this paper, we present our practice and experience of employing an adaptive resource utilization scheme for Big Data workloads in container-based cloud environments by leveraging the vertical elasticity of Docker, a representative container-based virtualization technique. We perform extensive experiments running several Big Data workloads on representative Big Data platforms: Apache Hadoop and Spark. During the workload executions, our adaptive resource utilization scheme periodically monitors the resource usage patterns of running containers and dynamically adjusts allocated computing resources that could result in substantial improvements in the overall system throughput.
APA, Harvard, Vancouver, ISO, and other styles
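The vertical elasticity mechanism discussed above (resizing a running container's CPU and memory allocation) can be exercised through the Docker Engine API. The fragment below is a minimal sketch using the Docker SDK for Python; the container name, the limits, and the decision logic are illustrative assumptions rather than the authors' adaptive scheme.

```python
import docker

client = docker.from_env()
container = client.containers.get("spark-worker-1")   # hypothetical container name

# Take a one-shot stats sample to inform the scaling decision.
stats = container.stats(stream=False)
mem_used = stats["memory_stats"]["usage"]
print(f"current memory usage: {mem_used} bytes")

# Vertically scale the running container in place: cpu_quota is microseconds of
# CPU time allowed per cpu_period; mem_limit/memswap_limit raise the memory ceiling.
container.update(
    cpu_period=100_000,
    cpu_quota=200_000,        # roughly two CPUs' worth of time
    mem_limit="4g",
    memswap_limit="4g",
)
```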
4

Ravichandran, S., and J. Sathiamoorthy. "An Innovative Performance of Refuge using Stowage Main Servers in Cloud Computing Equipment." Asian Journal of Computer Science and Technology 10, no. 1 (May 5, 2021): 13–17. http://dx.doi.org/10.51983/ajcst-2021.10.1.2695.

Full text
Abstract:
Cloud computing has been envisioned as the next-generation architecture of the IT enterprise. It moves application software and databases to large centralized data centers, where the management of the data and services may not be fully trustworthy. Cloud computing raises numerous security issues because it encompasses many technologies, including networks, databases, operating systems, virtualization, resource scheduling, transaction management, load balancing, concurrency control, and memory management. Storing data in a third party's cloud system causes genuine concern over data confidentiality, so the security issues of many of these systems and technologies also apply to cloud computing. We propose a key-server encryption scheme and integrate it with a decentralized erasure code so that a secure distributed storage key system is formulated.
APA, Harvard, Vancouver, ISO, and other styles
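As a rough, hypothetical illustration of the erasure-coding idea mentioned above (not the authors' key-server scheme), the sketch below stores three data blocks plus one XOR parity block on separate nodes, so any single lost block can be rebuilt from the remaining ones.

```python
def xor_blocks(blocks):
    """Byte-wise XOR of equally sized blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three fixed-size data blocks, each assumed to live on a different storage node.
data_blocks = [b"block-A0", b"block-B1", b"block-C2"]
parity = xor_blocks(data_blocks)              # parity stored on a fourth node

# Simulate losing block 1 and rebuilding it from the survivors plus parity.
survivors = [data_blocks[0], data_blocks[2], parity]
recovered = xor_blocks(survivors)
assert recovered == data_blocks[1]
```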
5

Xu, Yansen, and Ved P. Kafle. "An Availability-Enhanced Service Function Chain Placement Scheme in Network Function Virtualization." Journal of Sensor and Actuator Networks 8, no. 2 (June 14, 2019): 34. http://dx.doi.org/10.3390/jsan8020034.

Full text
Abstract:
A service function chain (SFC) is an ordered virtual network function (VNF) chain for processing traffic flows to deliver end-to-end network services in a virtual networking environment. A challenging problem for an SFC in this context is to determine where to deploy VNFs and how to route traffic between VNFs of an SFC on a substrate network. In this paper, we formulate the SFC placement problem as an integer linear programming (ILP) model, and propose an availability-enhanced VNF placement scheme based on the layered-graphs approach. To improve the availability of SFC deployment, our scheme distributes the VNFs of an SFC across multiple substrate nodes to avoid a single point of failure. We conduct numerical analysis and computer simulation to validate the feasibility of our SFC scheme. The results show that the proposed scheme performs well in different network scenarios in terms of end-to-end delay of the SFC and computation time cost.
APA, Harvard, Vancouver, ISO, and other styles
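A toy version of the placement problem sketched above can be written as a small ILP, for instance with the PuLP library: binary variables decide which substrate node hosts each VNF, and an anti-affinity constraint spreads the chain over distinct nodes to avoid a single point of failure. The chain, node set, and costs below are illustrative assumptions, not the paper's model.

```python
import pulp

vnfs = ["fw", "nat", "dpi"]             # example 3-VNF service function chain
nodes = ["n1", "n2", "n3", "n4"]        # substrate nodes
cost = {(v, n): 1.0 for v in vnfs for n in nodes}   # uniform toy placement costs

x = pulp.LpVariable.dicts("place", (vnfs, nodes), cat="Binary")
prob = pulp.LpProblem("sfc_placement", pulp.LpMinimize)

# Objective: total placement cost.
prob += pulp.lpSum(cost[v, n] * x[v][n] for v in vnfs for n in nodes)

# Each VNF of the chain is placed on exactly one node.
for v in vnfs:
    prob += pulp.lpSum(x[v][n] for n in nodes) == 1

# Anti-affinity: at most one VNF of this chain per node (availability).
for n in nodes:
    prob += pulp.lpSum(x[v][n] for v in vnfs) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
placement = {v: n for v in vnfs for n in nodes if x[v][n].value() == 1}
print(placement)
```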
6

Benomar, Zakaria, Francesco Longo, Giovanni Merlino, and Antonio Puliafito. "Cloud-based Network Virtualization in IoT with OpenStack." ACM Transactions on Internet Technology 22, no. 1 (February 28, 2022): 1–26. http://dx.doi.org/10.1145/3460818.

Full text
Abstract:
In Cloud computing deployments, specifically in the Infrastructure-as-a-Service (IaaS) model, networking is one of the core enabling facilities provided for the users. The IaaS approach ensures significant flexibility and manageability, since the networking resources and topologies are entirely under users' control. In this context, considerable efforts have been devoted to promoting the Cloud paradigm as a suitable solution for managing IoT environments. Deep and genuine integration between the two ecosystems, Cloud and IoT, may only be attainable at the IaaS level. With a view to extending the IoT domain's capabilities with Cloud-based mechanisms akin to the IaaS Cloud model, network virtualization is a fundamental enabler of infrastructure-oriented IoT deployments. Indeed, an IoT deployment without networking resilience and adaptability is unsuitable for meeting user-level demands and services' requirements. Such a limitation confines IoT-based services to very specific and statically defined scenarios, leading to limited plurality and diversity of use cases. This article presents a Cloud-based approach for network virtualization in an IoT context using the de-facto standard IaaS middleware, OpenStack, and its networking subsystem, Neutron. OpenStack is being extended to enable the instantiation of virtual/overlay networks between Cloud-based instances (e.g., virtual machines, containers, and bare metal servers) and/or geographically distributed IoT nodes deployed at the network edge.
APA, Harvard, Vancouver, ISO, and other styles
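For readers unfamiliar with the Neutron workflow mentioned above, the snippet below shows what creating an overlay tenant network and subnet looks like through the OpenStack SDK for Python. The cloud name, network name, and CIDR are illustrative assumptions, and the IoT-side port attachment described in the article is not shown.

```python
import openstack

# Credentials are read from a clouds.yaml entry; "iot-cloud" is a placeholder name.
conn = openstack.connect(cloud="iot-cloud")

# Create a tenant (overlay) network managed by Neutron.
net = conn.network.create_network(name="iot-overlay")

# Attach an IPv4 subnet; VMs, containers, and edge IoT nodes would then plug
# ports into this network to share link-layer connectivity.
subnet = conn.network.create_subnet(
    name="iot-overlay-subnet",
    network_id=net.id,
    ip_version=4,
    cidr="10.20.0.0/24",
)
print(net.id, subnet.cidr)
```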
7

Jayakumar, N., and A. M. Kulkarni. "A Simple Measuring Model for Evaluating the Performance of Small Block Size Accesses in Lustre File System." Engineering, Technology & Applied Science Research 7, no. 6 (December 18, 2017): 2313–18. http://dx.doi.org/10.48084/etasr.1557.

Full text
Abstract:
Storage performance is one of the vital characteristics of a big data environment. Data throughput can be increased to some extent using storage virtualization and parallel data paths. Technology has enhanced the various SANs and storage topologies to be adaptable for diverse applications that improve end-to-end performance. In big data environments the most widely used file systems are HDFS (Hadoop Distributed File System) and Lustre. There are environments in which both HDFS and Lustre are connected, and the applications work directly on Lustre. In the Lustre architecture with an out-of-band storage virtualization system, the separation of the data path from the metadata path is acceptable (and even desirable) for large files, since one MDT (Metadata Target) open RPC is typically a small fraction of the total number of read or write RPCs. This hurts small-file performance significantly when there is only a single read or write RPC for the file data. Since applications require data for processing, and considering an in-situ architecture which brings data or metadata close to applications for processing, how in-situ processing can be exploited in Lustre is the focus of this work. Earlier research exploited Lustre's support for in-situ processing when Hadoop/MapReduce is integrated with Lustre, but scope for performance improvement still existed in Lustre. The aim of the research is to check whether it is feasible and beneficial to move small files to the MDT so that additional RPCs and I/O overhead can be eliminated, and the read/write performance of the Lustre file system can be improved.
APA, Harvard, Vancouver, ISO, and other styles
8

Bhardwaj, Akashdeep, and Sam Goundar. "Comparing Single Tier and Three Tier Infrastructure Designs against DDoS Attacks." International Journal of Cloud Applications and Computing 7, no. 3 (July 2017): 59–75. http://dx.doi.org/10.4018/ijcac.2017070103.

Full text
Abstract:
With the rise in cyber-attacks on cloud environments, such as brute force, malware, or Distributed Denial of Service (DDoS) attacks, information security officers and data center administrators have a monumental task on hand. Organizations design data centers and service delivery with the aim of maximizing device provisioning and availability, improving application performance, and ensuring better server virtualization, and end up securing data centers with security solutions at the internet edge protection level. These security solutions prove to be largely inadequate during a DDoS cyber-attack. In this paper, a traditional data center design is reviewed and compared to the proposed three-tier data center. Resilience against DDoS attacks is measured for Real User Monitoring parameters, compared for the two infrastructure designs, and the data is validated using a T-test.
APA, Harvard, Vancouver, ISO, and other styles
9

Baciu, George, Yungzhe Wang, and Chenhui Li. "Cognitive Visual Analytics of Multi-Dimensional Cloud System Monitoring Data." International Journal of Software Science and Computational Intelligence 9, no. 1 (January 2017): 20–34. http://dx.doi.org/10.4018/ijssci.2017010102.

Full text
Abstract:
Hardware virtualization has enabled large scale computational service delivery models with high cost leverage and improved resource utilization on cloud computing platforms. This has completely changed the landscape of computing in the last decade. It has also enabled large–scale data analytics through distributed high performance computing. Due to the infrastructure complexity, end–users and administrators of cloud platforms can rarely obtain a full picture of the state of cloud computing systems and data centers. Recent monitoring tools enable users to obtain large amounts of data with respect to many utilization parameters of cloud platforms. However, they fail to get the maximal overall insight into the resource utilization dynamics of cloud platforms. Furthermore, existing tools make it difficult to observe large-scale patterns, making it difficult to learn from the past behavior of cloud system dynamics. In this work, the authors describe a perceptual-based interactive visualization platform that gives users and administrators a cognitive view of cloud computing system dynamics.
APA, Harvard, Vancouver, ISO, and other styles
10

Vidal, Ivan, Borja Nogales, Diego Lopez, Juan Rodríguez, Francisco Valera, and Arturo Azcorra. "A Secure Link-Layer Connectivity Platform for Multi-Site NFV Services." Electronics 10, no. 15 (August 3, 2021): 1868. http://dx.doi.org/10.3390/electronics10151868.

Full text
Abstract:
Network Functions Virtualization (NFV) is a key technology for network automation and has been instrumental to materialize the disruptive view of 5G and beyond mobile networks. In particular, 5G embraces NFV to support the automated and agile provision of telecommunication and vertical services as a composition of versatile virtualized components, referred to as Virtual Network Functions (VNFs). It provides a high degree of flexibility in placing these components on distributed NFV infrastructures (e.g., at the network edge, close to end users). Still, this flexibility creates new challenges in terms of VNF connectivity. To address these challenges, we introduce a novel secure link-layer connectivity platform, L2S. Our solution can automatically be deployed and configured as a regular multi-site NFV service, providing the abstraction of a layer-2 switch that offers link-layer connectivity to VNFs deployed on remote NFV sites. Inter-site communications are effectively protected using existing security solutions and protocols, such as IP security (IPsec). We have developed a functional prototype of L2S using open-source software technologies. Our evaluation results indicate that this prototype can perform IP tunneling and cryptographic operations at Gb/s data rates. Finally, we have validated L2S using a multi-site NFV ecosystem at the Telefonica Open Network Innovation Centre (5TONIC), using our solution to support a multicast-based IP television service.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Distributed virtualization. eng"

1

Cruz, Daniel Igarashi. "FLEXLAB : Middleware de virtualização de hardware para gerenciamento centralizado de computadores em rede /." São José do Rio Preto : [s.n.], 2008. http://hdl.handle.net/11449/98696.

Full text
Abstract:
Advisor: Marcos Antônio Cavenaghi
Examining committee: Renata Spolon Lobato, Ronaldo Lara Gonçalves
Abstract (translated from the Portuguese resumo): Managing a group of networked computers is a potentially complex activity because of the heterogeneous nature of the equipment. These networks may contain computers with different configurations in their basic software and application layers as a consequence of the hardware configuration differences at each network node. In this scenario, each computer becomes an individually managed entity, requiring manual configuration of its system image or automation limited to the application layer. Technologies that offer centralized management, such as thin-client or terminal-services architectures, penalize workstation performance and offer reduced capacity to serve a growing number of users, since all client application processing runs on a single network node. Other centralized-management architectures that act at the software layer are ineffective at providing administration based on a single configuration image, given the tight coupling between the software and hardware layers. Understanding the shortcomings of traditional centralized computer management models, the goal of this work is the development of FlexLab, a computer management mechanism based on a Single System Image and a distributed virtualization middleware. Through the FlexLab virtualization middleware, the networked computers of an environment can boot remotely from a Single System Image built on virtualized hardware. This image is hosted on and accessed from a central network server, thus standardizing basic software and application configurations even in a scenario of computers with heterogeneous hardware, simplifying... (For the complete abstract, follow the electronic access link below)
Abstract (author's English version): Computer network management is a potentially complex task due to the heterogeneous nature of the hardware configuration of these machines. These networks may contain computers with different configurations in their basic software layer due to configuration differences in their hardware layer; thus, in this scenario, each computer becomes an individually managed entity in the computer network, requiring an individual, manually operated configuration procedure or automated maintenance restricted to the application layer. Thin-client or terminal-services architectures do offer centralized management; however, these architectures impose performance penalties on client execution and offer reduced scalability for serving a growing number of users, since all application processing is hosted on, and consumes the processing power of, a single network node: the server. On the other hand, architectures for centralized management based on applications running over the software layer are inefficient at offering computer management based on a single configuration image due to the tight coupling between the software and hardware layers. Understanding the drawbacks of these centralized computer management solutions, the aim of this project is to develop FlexLab, a centralized computer management architecture using a Single System Image based on a distributed virtualization middleware. Through the FlexLab virtualization middleware, the computers of a network environment are able to boot remotely from a Single System Image targeting the virtual machine hardware. This Single System Image is hosted at a central network server, standardizing basic software and application configurations for networks with heterogeneous computer hardware, which simplifies computer management since all computers may be managed through a Single System Image. The experiments have shown that... (Complete abstract: click electronic access below)
Degree: Master's
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Distributed virtualization. eng"

1

Verma, Chitresh, and Rajiv Pandey. "Mobile Cloud Computing Integrating Cloud, Mobile Computing, and Networking Services Through Virtualization." In Research Anthology on Architectures, Frameworks, and Integration Strategies for Distributed and Cloud Computing, 209–26. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-5339-8.ch010.

Full text
Abstract:
Mobile computing is a critical technology area that is actively integrated with the field of cloud computing. It is broadly an application of virtualization technology at both ends of the client-server architecture. Mobile and cloud computing are a natural combination: as mobile devices have limited computing and storage capacity, the cloud is the answer for reaping the benefits of high-end computing. The amalgamation of the mobile platform with the cloud platform is therefore inevitable. This chapter deliberates on the various aspects of mobile computing and mobile cloud computing and their relationship with virtualization technology. The detailed integration aspects and virtualization are illustrated through a case study and suitable real-time examples. The chapter presents a case study modeling virtualization in the context of the mobile cloud.
APA, Harvard, Vancouver, ISO, and other styles
2

Mikkilineni, Rao, Giovanni Morana, and Ian Seyler. "Implementing Distributed, Self-Managing Computing Services Infrastructure using a Scalable, Parallel and Network-Centric Computing Model." In Achieving Federated and Self-Manageable Cloud Infrastructures, 57–78. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-1631-8.ch004.

Full text
Abstract:
This chapter introduces a new network-centric computing model using Distributed Intelligent Managed Element (DIME) network architecture (DNA). A parallel signaling network overlay over a network of self-managed von Neumann computing nodes is utilized to implement dynamic fault, configuration, accounting, performance, and security management of both the nodes and the network based on business priorities, workload variations and latency constraints. Two implementations of the new computing model are described which demonstrate the feasibility of the new computing model. One implementation provides service virtualization at the Linux process level and another provides virtualization of a core in a many-core processor. Both point to an alternative way to assure end-to-end transaction reliability, availability, performance, and security in distributed Cloud computing, reducing current complexity in configuring and managing virtual machines and making the implementation of Federation of Clouds simpler.
APA, Harvard, Vancouver, ISO, and other styles
3

Gharajeh, Mohammad Samadi. "Applications of Virtualization Technology in Grid Systems and Cloud Servers." In Advances in Computer and Electrical Engineering, 1–28. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-2785-5.ch001.

Full text
Abstract:
Grid systems and cloud servers are two distributed networks that deliver computing resources (e.g., file storage) to users' services via a large and often global network of computers. Virtualization technology can enhance the efficiency of these networks by dedicating the available resources to multiple execution environments. This chapter describes applications of virtualization technology in grid systems and cloud servers. It presents different aspects of virtualized networks in a systematic and instructive manner. Virtual machine abstraction virtualizes high-performance computing environments to increase the service quality. Besides, the grid virtualization engine and virtual clusters are used in grid systems to accomplish users' services efficiently in virtualized environments. The chapter also explains various virtualization technologies in cloud servers. The evaluation results analyze the performance of high-performance computing and virtualized grid systems in terms of bandwidth, latency, number of nodes, and throughput.
APA, Harvard, Vancouver, ISO, and other styles
4

Kitanov, Stojan, Borislav Popovski, and Toni Janevski. "Quality Evaluation of Cloud and Fog Computing Services in 5G Networks." In Research Anthology on Architectures, Frameworks, and Integration Strategies for Distributed and Cloud Computing, 1770–805. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-5339-8.ch086.

Full text
Abstract:
Because of the increased computing and intelligent networking demands of 5G networks, cloud computing alone encounters too many limitations, such as requirements for reduced latency, high mobility, high scalability, and real-time execution. A new paradigm called fog computing has emerged to resolve these issues. Fog computing distributes computing, data processing, and networking services to the edge of the network, closer to end users. Fog applied in 5G significantly improves network performance in terms of spectral and energy efficiency, enables direct device-to-device wireless communications, and supports the growing trend of network function virtualization and the separation of network control intelligence from radio network hardware. This chapter evaluates the quality of cloud and fog computing services in a 5G network and proposes five algorithms for an optimal selection of the 5G RAN according to the service requirements. The results demonstrate that fog computing is a suitable technology solution for 5G networks.
APA, Harvard, Vancouver, ISO, and other styles
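The abstract above mentions algorithms for selecting a 5G RAN according to service requirements. As a purely illustrative sketch (not one of the chapter's five algorithms), the snippet below scores candidate RANs against a service's latency and throughput needs and picks the cheapest feasible one; all figures and names are assumptions.

```python
# Candidate RANs with measured characteristics (illustrative values only).
rans = {
    "macro-cell":  {"latency_ms": 30, "throughput_mbps": 50,  "cost": 1.0},
    "small-cell":  {"latency_ms": 10, "throughput_mbps": 200, "cost": 2.0},
    "mmwave-cell": {"latency_ms": 2,  "throughput_mbps": 800, "cost": 4.0},
}

def select_ran(max_latency_ms, min_throughput_mbps):
    """Return the cheapest RAN that satisfies the service requirements."""
    feasible = [
        (props["cost"], name)
        for name, props in rans.items()
        if props["latency_ms"] <= max_latency_ms
        and props["throughput_mbps"] >= min_throughput_mbps
    ]
    return min(feasible)[1] if feasible else None

# A latency-critical fog service vs. a relaxed bulk transfer.
print(select_ran(max_latency_ms=5, min_throughput_mbps=100))   # mmwave-cell
print(select_ran(max_latency_ms=50, min_throughput_mbps=40))   # macro-cell
```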
5

Bhardwaj, Akashdeep, and Sam Goundar. "Comparing Single Tier and Three Tier Infrastructure Designs against DDoS Attacks." In Research Anthology on Combating Denial-of-Service Attacks, 541–58. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-5348-0.ch028.

Full text
Abstract:
With the rise in cyber-attacks on cloud environments, such as brute force, malware, or Distributed Denial of Service (DDoS) attacks, information security officers and data center administrators have a monumental task on hand. Organizations design data centers and service delivery with the aim of maximizing device provisioning and availability, improving application performance, and ensuring better server virtualization, and end up securing data centers with security solutions at the internet edge protection level. These security solutions prove to be largely inadequate during a DDoS cyber-attack. In this paper, a traditional data center design is reviewed and compared to the proposed three-tier data center. Resilience against DDoS attacks is measured for Real User Monitoring parameters, compared for the two infrastructure designs, and the data is validated using a T-test.
APA, Harvard, Vancouver, ISO, and other styles
6

Baciu, George, Yungzhe Wang, and Chenhui Li. "Cognitive Visual Analytics of Multi-Dimensional Cloud System Monitoring Data." In Research Anthology on Architectures, Frameworks, and Integration Strategies for Distributed and Cloud Computing, 1433–48. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-5339-8.ch070.

Full text
Abstract:
Hardware virtualization has enabled large scale computational service delivery models with high cost leverage and improved resource utilization on cloud computing platforms. This has completely changed the landscape of computing in the last decade. It has also enabled large–scale data analytics through distributed high performance computing. Due to the infrastructure complexity, end–users and administrators of cloud platforms can rarely obtain a full picture of the state of cloud computing systems and data centers. Recent monitoring tools enable users to obtain large amounts of data with respect to many utilization parameters of cloud platforms. However, they fail to get the maximal overall insight into the resource utilization dynamics of cloud platforms. Furthermore, existing tools make it difficult to observe large-scale patterns, making it difficult to learn from the past behavior of cloud system dynamics. In this work, the authors describe a perceptual-based interactive visualization platform that gives users and administrators a cognitive view of cloud computing system dynamics.
APA, Harvard, Vancouver, ISO, and other styles
7

Popescu, George V. "Distributed Indexing Networks for Efficient Large-Scale Group Communication." In Handbook of Research on P2P and Grid Systems for Service-Oriented Computing, 360–81. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-61520-686-5.ch015.

Full text
Abstract:
Recently a new category of communication network paradigms has emerged: overlay networks for content distribution and group communication, application level multicast and distributed hash tables for efficient indexing and look-up of network resources, etc. As these ideas mature, new Internet architectures emerge. The authors describe here an autonomic, self-optimizing, network virtualization middleware architecture designed for large scale distributed applications. The proposed architecture uses end-hosts and proxies at the edge of the network as the forwarding nodes for distributing content to multiple receivers using simple point-to-point communication. Routing nodes have the capability to process the content prior to forwarding to meet the heterogeneous requirements of receivers. The proposed architecture builds upon a new network abstraction. Distributed indexing networks (DIN) is a new paradigm of communication networks design that relies on assigning indices to communication entities, communication infrastructure nodes and distributed infrastructure resources to control and disseminate information. DINs are in essence overlay networks whose topology is defined by a set of connectivity rules on indices assigned to network nodes. DINs route data packets using network indices (identifiers) and descriptors contained in the application level routing header; messages are routed hop by hop by querying at each node an application level routing indexing structure. As an application of DINs, the authors present an index-based routing multicast protocol together with its distribution tree optimization algorithm. To support applications involving large dynamic multicast groups, the application level multicast scheme uses hierarchical group membership aggregation and stateless forwarding within clusters of network nodes. The authors define the information space (IS) as the multidimensional space that indexes all information available in the network. The information includes infrastructure information (network nodes addresses, storage nodes location), network measurements data, distributed content descriptors, communication group identifiers, real-time published streams and other application dependent communication semantics, etc. The entity communication interest (ECI) is the vector describing the time-dependent information preferences of a network entity (multicast group client, user, etc.). Communication control architecture partitions the IS into interest cells mapped to multicast communication groups. The proposed control algorithm uses proximity-based clustering of network nodes and hierarchical communication interest aggregation to achieve scalability. The authors show that large-scale group communication in the proposed distributed indexing networks requires low computation overhead with a controlled degradation of the end-to-end data path performance.
APA, Harvard, Vancouver, ISO, and other styles
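To make the index-based routing idea above concrete, here is a minimal, hypothetical sketch: a node forwards a message by longest-prefix lookup of a hierarchical group index in a local indexing structure. The index scheme, table, and node names are illustrative assumptions, not the chapter's protocol.

```python
# Toy forwarding table keyed by hierarchical group indices.
forwarding = {
    "video": "node-A",
    "video.eu": "node-B",
    "video.eu.paris": "node-C",
}

def next_hop(group_index):
    """Longest-prefix match of a dotted group index against the local table."""
    parts = group_index.split(".")
    while parts:
        key = ".".join(parts)
        if key in forwarding:
            return forwarding[key]
        parts.pop()
    return None   # no matching index: drop, or fall back to a default route

print(next_hop("video.eu.paris.cam42"))   # -> node-C
print(next_hop("audio.us"))               # -> None
```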
8

Yeboah-Boateng, Ezer Osei. "Cyber-Security Concerns with Cloud Computing." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 105–35. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-1721-4.ch005.

Full text
Abstract:
Information is modeled into virtual objects to create value for its owner. The value chain involves stakeholders with varied responsibilities in the cyber-market. Cloud computing emerged out of virtualization, distributed and grid computing, and has altered the value creation landscape through strategic and sensitive information management. It offers services that use resources in a utility fashion. The flexible, cost-effective service models are opportunities for SMEs. Whilst using these tools for value creation is imperative, a myriad of security concerns confront both providers and end-users. Vulnerabilities and threats are key concerns that must be addressed so that the value created is strategically aligned with the corporate vision, appropriated, and sustained. What is the extent of the impact? Expert opinions were elicited from 4 C-level officers and 10 security operatives. Shared technology issues, malicious insiders, and service hijacking are considered major threats. Also, an intuitive strategic model for Value-Creation Cloud-based Cyber-security is proposed as guidance in fostering IT-enabled initiatives.
APA, Harvard, Vancouver, ISO, and other styles
9

Yeboah-Boateng, Ezer Osei. "Cyber-Security Concerns With Cloud Computing." In Cyber Security and Threats, 995–1026. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5634-3.ch049.

Full text
Abstract:
Information is modeled into virtual objects to create value for its owner. The value chain involves stakeholders with varied responsibilities in the cyber-market. Cloud computing emerged out of virtualization, distributed and grid computing, and has altered the value creation landscape through strategic and sensitive information management. It offers services that use resources in a utility fashion. The flexible, cost-effective service models are opportunities for SMEs. Whilst using these tools for value creation is imperative, a myriad of security concerns confront both providers and end-users. Vulnerabilities and threats are key concerns that must be addressed so that the value created is strategically aligned with the corporate vision, appropriated, and sustained. What is the extent of the impact? Expert opinions were elicited from 4 C-level officers and 10 security operatives. Shared technology issues, malicious insiders, and service hijacking are considered major threats. Also, an intuitive strategic model for Value-Creation Cloud-based Cyber-security is proposed as guidance in fostering IT-enabled initiatives.
APA, Harvard, Vancouver, ISO, and other styles
10

Kitanov, Stojan, Borislav Popovski, and Toni Janevski. "Quality Evaluation of Cloud and Fog Computing Services in 5G Networks." In Enabling Technologies and Architectures for Next-Generation Networking Capabilities, 1–36. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-6023-4.ch001.

Full text
Abstract:
Because of the increased computing and intelligent networking demands of 5G networks, cloud computing alone encounters too many limitations, such as requirements for reduced latency, high mobility, high scalability, and real-time execution. A new paradigm called fog computing has emerged to resolve these issues. Fog computing distributes computing, data processing, and networking services to the edge of the network, closer to end users. Fog applied in 5G significantly improves network performance in terms of spectral and energy efficiency, enables direct device-to-device wireless communications, and supports the growing trend of network function virtualization and the separation of network control intelligence from radio network hardware. This chapter evaluates the quality of cloud and fog computing services in a 5G network and proposes five algorithms for an optimal selection of the 5G RAN according to the service requirements. The results demonstrate that fog computing is a suitable technology solution for 5G networks.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Distributed virtualization. eng"

1

Mishra, Prateek, Sanjay Kumar Yadav, and Sunil Arora. "TCB Minimization towards Secured and Lightweight IoT End Device Architecture using Virtualization at Fog Node." In 2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC). IEEE, 2020. http://dx.doi.org/10.1109/pdgc50313.2020.9315850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Meckler, Milton. "Design for Sustainable Data Center Energy Use and Eco-Footprint." In ASME 2010 4th International Conference on Energy Sustainability. ASMEDC, 2010. http://dx.doi.org/10.1115/es2010-90116.

Full text
Abstract:
What remains a growing concern for many users of Data Centers is their continuing availability following the explosive growth of internet services in recent years. The recent maximization of Data Center IT virtualization investments has improved the consolidation of previously under-utilized server and cabling resources, resulting in higher overall facility utilization and IT capacity. It has also resulted in excessive levels of equipment heat release, e.g., from high-energy (i.e., blade-type) servers and telecommunication equipment, that challenge central and distributed air conditioning systems delivering air via raised floors or overhead ducts to rack-mounted servers arranged in alternating cold and hot aisles (in some cases reaching 30 kW/rack or 300 W/ft2) and returning via end-of-aisle or separate-room CRAC units, which are often found to fight each other, contributing to excessive energy use. Under those circumstances, hybrid indirect liquid cooling facilities are often required to augment the above-referenced air conditioning systems in order to prevent overheating and degradation of mission-critical IT equipment, so that rack-mounted server equipment continues to operate within the psychrometric limits prescribed by ASHRAE TC 9.9 and IT manufacturers' specifications, beyond which its operational reliability cannot be assured. Recent interest in new web-based software and secure cloud computing is expected to further accelerate the growth of Data Centers; according to a recent study, U.S. Data Centers consumed an estimated 61 billion kWh of electricity in 2006. Computer servers and the supporting power infrastructure for the Internet are estimated to represent 1.5% of all electricity generated and, together with aggregated IT and communications equipment, including PCs in current use, have also been estimated to emit 2% of global carbon emissions. Therefore the projected eco-footprint of Data Centers has become a matter of growing concern. Accordingly, our paper focuses on how best to improve the utilization of the fossil fuels used to power Data Centers and the energy efficiency of the related auxiliary cooling and power infrastructures, so as to reduce their eco-footprint and GHG emissions to sustainable levels as soon as possible. To this end, we demonstrate significant comparative savings in annual energy use, and reductions in the associated annual GHG emissions, by employing an on-site cogeneration system (in lieu of the current reliance on remote electric power generation systems) and by introducing energy-efficient outside air (OSA) desiccant-assisted pre-conditioners to maintain Class 1, Class 2, or NEBS indoor air dew-points, as needed, operated together with modified existing sensible-only cooling, distributed air conditioning, and chiller systems, thereby eliminating the need for CRAC integral-unit humidity controls while achieving an estimated 60 to 80% (virtualized) reduction in the number of servers within an existing (hypothetical post-consolidation) 3.5 MW demand Data Center located in the southeastern (and/or southern) U.S., coastal Puerto Rico, or Brazil, characterized by three representative microclimates ranging from moderate to high seasonal outside air (OSA) coincident design humidity and temperature.
APA, Harvard, Vancouver, ISO, and other styles