To see the other types of publications on this topic, follow the link: Distributed virtualization. eng.

Journal articles on the topic 'Distributed virtualization. eng'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 19 journal articles for your research on the topic 'Distributed virtualization. eng.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Mikkilineni, Rao, Giovanni Morana, Daniele Zito, and Marco Di Sano. "Service Virtualization Using a Non-von Neumann Parallel, Distributed, and Scalable Computing Model." Journal of Computer Networks and Communications 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/604018.

Abstract:
This paper describes a prototype implementing a high degree of transaction resilience in distributed software systems using a non-von Neumann computing model exploiting parallelism in computing nodes. The prototype incorporates fault, configuration, accounting, performance, and security (FCAPS) management using a signaling network overlay and allows the dynamic control of a set of distributed computing elements in a network. Each node is a computing entity endowed with self-management and signaling capabilities to collaborate with similar nodes in a network. The separation of parallel computing and management channels allows the end-to-end transaction management of computing tasks (provided by the autonomous distributed computing elements) to be implemented as network-level FCAPS management. While the new computing model is operating system agnostic, a Linux, Apache, MySQL, PHP/Perl/Python (LAMP) based services architecture is implemented in a prototype to demonstrate end-to-end transaction management with auto-scaling, self-repair, dynamic performance management and distributed transaction security assurance. The implementation is made possible by a non-von Neumann middleware library providing Linux process management through multi-threaded parallel execution of self-management and signaling abstractions. We did not use Hypervisors, Virtual machines, or layers of complex virtualization management systems in implementing this prototype.
2

Zhang, Lifeng, Celimuge Wu, Tsutomu Yoshinaga, Xianfu Chen, Tutomu Murase, and Yusheng Ji. "Multihop Data Delivery Virtualization for Green Decentralized IoT." Wireless Communications and Mobile Computing 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/9805784.

Abstract:
Decentralized communication technologies (i.e., ad hoc networks) provide more opportunities for the emerging wireless Internet of Things (IoT) due to the flexibility and expandability of distributed architecture. However, the performance degradation of wireless communications as the number of hops increases is the main obstacle in the development of decentralized wireless IoT systems. The main challenges come from the difficulty of designing a resource- and energy-efficient multihop communication protocol. Transmission control protocol (TCP), the most frequently used transport layer protocol for achieving reliable end-to-end communications, cannot achieve satisfactory results in multihop wireless scenarios because its end-to-end acknowledgments do not work well over lossy links. In this paper, we propose a multihop data delivery virtualization approach which uses multiple one-hop reliable transmissions to perform multihop data transmissions. Since the proposed protocol utilizes hop-by-hop acknowledgment instead of end-to-end feedback, the congestion window size at each TCP sender node is not affected by the number of hops between the source node and the destination node. The proposed protocol provides significantly higher throughput and shorter transmission times than the end-to-end approach. We conduct real-world experiments as well as computer simulations to show the performance gain from our proposed protocol.
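The hop-by-hop argument in this abstract can be made concrete with a small cost model. The model below is an editorial illustration, not the paper's analysis: it assumes independent per-hop losses with probability p and counts expected link transmissions per delivered packet.

```python
# Expected link transmissions to deliver one packet over h lossy hops,
# comparing end-to-end recovery (a lost packet is resent from the source)
# with hop-by-hop reliable relaying as in multihop data delivery
# virtualization. Idealized model: independent losses, no timeout cost.

def end_to_end_cost(h: int, p: float) -> float:
    # One attempt crosses all h hops and succeeds with (1 - p)**h,
    # so on average 1 / (1 - p)**h attempts of h transmissions each.
    return h / (1 - p) ** h

def hop_by_hop_cost(h: int, p: float) -> float:
    # Each hop recovers locally: 1 / (1 - p) transmissions per hop.
    return h / (1 - p)

for hops in (2, 5, 10):
    print(f"{hops:2d} hops: "
          f"end-to-end {end_to_end_cost(hops, 0.1):6.2f}, "
          f"hop-by-hop {hop_by_hop_cost(hops, 0.1):5.2f}")
```

The gap grows quickly with hop count, which is the effect the paper exploits by replacing one long TCP connection with a chain of one-hop reliable transfers.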
3

Choi, Jin-young, Minkyoung Cho, and Jik-Soo Kim. "Employing Vertical Elasticity for Efficient Big Data Processing in Container-Based Cloud Environments." Applied Sciences 11, no. 13 (July 4, 2021): 6200. http://dx.doi.org/10.3390/app11136200.

Abstract:
Recently, “Big Data” platform technologies have become crucial for distributed processing of diverse unstructured or semi-structured data as the amount of data generated increases rapidly. In order to effectively manage these Big Data, Cloud Computing has been playing an important role by providing scalable data storage and computing resources for competitive and economical Big Data processing. Accordingly, server virtualization technologies that are the cornerstone of Cloud Computing have attracted a lot of research interests. However, conventional hypervisor-based virtualization can cause performance degradation problems due to its heavily loaded guest operating systems and rigid resource allocations. On the other hand, container-based virtualization technology can provide the same level of service faster with a lightweight capacity by effectively eliminating the guest OS layers. In addition, container-based virtualization enables efficient cloud resource management by dynamically adjusting the allocated computing resources (e.g., CPU and memory) during the runtime through “Vertical Elasticity”. In this paper, we present our practice and experience of employing an adaptive resource utilization scheme for Big Data workloads in container-based cloud environments by leveraging the vertical elasticity of Docker, a representative container-based virtualization technique. We perform extensive experiments running several Big Data workloads on representative Big Data platforms: Apache Hadoop and Spark. During the workload executions, our adaptive resource utilization scheme periodically monitors the resource usage patterns of running containers and dynamically adjusts allocated computing resources that could result in substantial improvements in the overall system throughput.
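The vertical-elasticity loop described in this abstract (monitor a container's usage, then grow or shrink its allocation at runtime) can be sketched as a policy function. The thresholds, step size, and cap below are invented for the example; in a Docker deployment the returned value would be applied with `docker update --cpus=<n> <container>`.

```python
# A minimal vertical-elasticity policy sketch: compare a container's
# observed CPU usage with its current allocation each monitoring
# interval and grow or shrink the allocation. Thresholds, step size,
# and cap are illustrative assumptions, not the paper's parameters.

def next_cpu_allocation(used: float, allocated: float,
                        high: float = 0.8, low: float = 0.3,
                        step: float = 0.5, max_cpus: float = 8.0) -> float:
    """Return the CPU allocation for the next monitoring interval."""
    utilization = used / allocated
    if utilization > high:                      # container is starved: scale up
        return min(allocated + step, max_cpus)
    if utilization < low and allocated > step:  # over-provisioned: scale down
        return allocated - step
    return allocated                            # within band: leave unchanged

print(next_cpu_allocation(used=1.9, allocated=2.0))  # busy -> 2.5
print(next_cpu_allocation(used=0.2, allocated=2.0))  # idle -> 1.5
```

The same shape works for memory; per the abstract, the scheme monitors running containers periodically and adjusts both resources during Hadoop/Spark workload execution.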
4

Ravichandran, S., and J. Sathiamoorthy. "An Innovative Performance of Refuge using Stowage Main Servers in Cloud Computing Equipment." Asian Journal of Computer Science and Technology 10, no. 1 (May 5, 2021): 13–17. http://dx.doi.org/10.51983/ajcst-2021.10.1.2695.

Abstract:
Cloud computing has been envisioned as the next-generation architecture of the IT enterprise. It moves application software and databases to centralized large data centers, where the management of data and services may not be fully trustworthy. Cloud computing raises numerous security issues because it encompasses many technologies, including networks, databases, operating systems, virtualization, resource scheduling, transaction management, load balancing, concurrency control, and memory management. Storing data in a third party's cloud system causes serious concern over data confidentiality. Security issues for many of these systems and technologies are therefore applicable to cloud computing. We propose a key-server encryption scheme and integrate it with a decentralized erasure code so that a secure distributed storage key system is defined.
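The second half of this abstract combines a key-server encryption scheme with a decentralized erasure code. The sketch below shows only the erasure-coding half in its simplest possible form, a single XOR parity share; this is an illustrative stand-in, not the paper's actual code or its encryption layer.

```python
# Simplest erasure code: split a block into k data shares plus one XOR
# parity share, so any single lost storage node can be reconstructed.
# Real systems use stronger codes (e.g. Reed-Solomon).

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(block: bytes, k: int = 2):
    """Split a block into k equal-size data shares plus one parity share."""
    size = -(-len(block) // k)                       # ceiling division
    shares = [block[i*size:(i+1)*size].ljust(size, b"\0") for i in range(k)]
    parity = shares[0]
    for s in shares[1:]:
        parity = xor_bytes(parity, s)
    return shares + [parity]

def recover(shares, lost: int):
    """Rebuild the share at index `lost` by XOR-ing the survivors."""
    survivors = [s for i, s in enumerate(shares) if i != lost]
    out = survivors[0]
    for s in survivors[1:]:
        out = xor_bytes(out, s)
    return out

shares = encode(b"secret-data!")
assert recover(shares, lost=0) == shares[0]   # any one share is recoverable
```

In the paper's setting the shares would additionally be encrypted, with keys handled by the key server rather than by the storage nodes.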
5

Xu, Yansen, and Ved P. Kafle. "An Availability-Enhanced Service Function Chain Placement Scheme in Network Function Virtualization." Journal of Sensor and Actuator Networks 8, no. 2 (June 14, 2019): 34. http://dx.doi.org/10.3390/jsan8020034.

Abstract:
A service function chain (SFC) is an ordered virtual network function (VNF) chain for processing traffic flows to deliver end-to-end network services in a virtual networking environment. A challenging problem for an SFC in this context is to determine where to deploy VNFs and how to route traffic between VNFs of an SFC on a substrate network. In this paper, we formulate the SFC placement problem as an integer linear programming (ILP) model, and propose an availability-enhanced VNF placement scheme based on the layered graphs approach. To improve the availability of SFC deployment, our scheme distributes the VNFs of an SFC across multiple substrate nodes to avoid a single point of failure. We conduct numerical analysis and computer simulation to validate the feasibility of our SFC scheme. The results show that the proposed scheme performs well in different network scenarios in terms of the end-to-end delay of the SFC and computation time cost.
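The layered-graph placement idea can be illustrated with a toy search: conceptually one copy of the substrate per VNF in the chain, where "placing a VNF" climbs one layer and substrate links move within a layer. Link delay is the only metric here, and the distinct-node rule stands in for the paper's availability enhancement; the example network and all numbers are invented.

```python
# Toy layered-graph SFC placement: find the minimum-delay walk from src
# to dst that places chain_len VNFs on distinct substrate nodes.
import heapq

def place_chain(links, src, dst, chain_len):
    """links: {(u, v): delay}, treated as undirected."""
    adj = {}
    for (u, v), d in links.items():
        adj.setdefault(u, []).append((v, d))
        adj.setdefault(v, []).append((u, d))
    # Search state: (delay so far, current node, VNFs placed, placement).
    frontier = [(0, src, 0, ())]
    seen = set()
    while frontier:
        delay, node, placed, where = heapq.heappop(frontier)
        if node == dst and placed == chain_len:
            return delay, list(where)
        if (node, placed, where) in seen:
            continue
        seen.add((node, placed, where))
        if placed < chain_len and node not in where:
            # Climb one layer: place the next VNF on this node.
            heapq.heappush(frontier, (delay, node, placed + 1, where + (node,)))
        for nxt, d in adj.get(node, []):
            # Stay in the current layer: move along a substrate link.
            heapq.heappush(frontier, (delay + d, nxt, placed, where))

links = {("a", "b"): 1, ("b", "c"): 1, ("a", "c"): 5}
delay, placement = place_chain(links, "a", "c", chain_len=2)
print(delay, placement)
```

The real formulation is an ILP over capacities and availability as well as delay; this sketch keeps only the layered shortest-path core.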
6

Benomar, Zakaria, Francesco Longo, Giovanni Merlino, and Antonio Puliafito. "Cloud-based Network Virtualization in IoT with OpenStack." ACM Transactions on Internet Technology 22, no. 1 (February 28, 2022): 1–26. http://dx.doi.org/10.1145/3460818.

Abstract:
In Cloud computing deployments, specifically in the Infrastructure-as-a-Service (IaaS) model, networking is one of the core enabling facilities provided for the users. The IaaS approach ensures significant flexibility and manageability, since the networking resources and topologies are entirely under users’ control. In this context, considerable efforts have been devoted to promoting the Cloud paradigm as a suitable solution for managing IoT environments. Deep and genuine integration between the two ecosystems, Cloud and IoT, may only be attainable at the IaaS level. To extend the IoT domain’s capabilities with Cloud-based mechanisms akin to the IaaS Cloud model, network virtualization is a fundamental enabler of infrastructure-oriented IoT deployments. Indeed, an IoT deployment without networking resilience and adaptability is unsuitable for meeting user-level demands and service requirements. Such a limitation confines IoT-based services to very specific, statically defined scenarios, limiting the plurality and diversity of use cases. This article presents a Cloud-based approach for network virtualization in an IoT context using the de-facto standard IaaS middleware, OpenStack, and its networking subsystem, Neutron. OpenStack is extended to enable the instantiation of virtual/overlay networks between Cloud-based instances (e.g., virtual machines, containers, and bare metal servers) and/or geographically distributed IoT nodes deployed at the network edge.
7

Jayakumar, N., and A. M. Kulkarni. "A Simple Measuring Model for Evaluating the Performance of Small Block Size Accesses in Lustre File System." Engineering, Technology & Applied Science Research 7, no. 6 (December 18, 2017): 2313–18. http://dx.doi.org/10.48084/etasr.1557.

Abstract:
Storage performance is one of the vital characteristics of a big data environment. Data throughput can be increased to some extent using storage virtualization and parallel data paths. Technology has enhanced the various SANs and storage topologies to be adaptable for diverse applications that improve end-to-end performance. In big data environments the most widely used file systems are HDFS (Hadoop Distributed File System) and Lustre. There are environments in which both HDFS and Lustre are connected, and the applications work directly on Lustre. In a Lustre architecture with an out-of-band storage virtualization system, the separation of the data path from the metadata path is acceptable (and even desirable) for large files, since one MDT (Metadata Target) open RPC is typically a small fraction of the total number of read or write RPCs. It hurts small-file performance significantly, however, when there is only a single read or write RPC for the file data. Since applications require data for processing, and in-situ architectures bring data or metadata close to the applications for processing, how in-situ processing can be exploited in Lustre is the focus of this work. Earlier research exploited Lustre's support for in-situ processing when Hadoop/MapReduce is integrated with Lustre, but scope for performance improvement in Lustre remained. The aim of the research is to check whether it is feasible and beneficial to move small files to the MDT so that additional RPCs and I/O overhead can be eliminated and the read/write performance of the Lustre file system improved.
8

Bhardwaj, Akashdeep, and Sam Goundar. "Comparing Single Tier and Three Tier Infrastructure Designs against DDoS Attacks." International Journal of Cloud Applications and Computing 7, no. 3 (July 2017): 59–75. http://dx.doi.org/10.4018/ijcac.2017070103.

Abstract:
With the rise in cyber-attacks on cloud environments like Brute Force, Malware or Distributed Denial of Service attacks, information security officers and data center administrators have a monumental task on hand. Organizations design data centers and service delivery with the aim of maximizing device provisioning and availability, improving application performance, and ensuring better server virtualization, and end up securing data centers with security solutions at the Internet-edge protection level. These security solutions prove largely inadequate during a DDoS cyber-attack. In this paper, the traditional data center design is reviewed and compared to the proposed three-tier data center. The resilience against DDoS attacks is measured using Real User Monitoring parameters, compared for the two infrastructure designs, and the data is validated using a T-test.
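The designs above are compared statistically with a T-test. For reference, Welch's t statistic can be computed by hand as below; the two samples are invented stand-ins for a Real User Monitoring metric, and `scipy.stats.ttest_ind(a, b, equal_var=False)` would give the same statistic plus a p-value.

```python
# Welch's t statistic for two independent samples with unequal
# variances, computed from scratch with the standard library.
from statistics import mean, variance

def welch_t(a, b):
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

single_tier = [412, 388, 455, 430, 401]   # e.g. page-load ms under attack
three_tier  = [291, 305, 280, 317, 296]
print(round(welch_t(single_tier, three_tier), 2))
```

A large |t| indicates the two designs' response-time distributions differ far more than sampling noise would explain.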
9

Baciu, George, Yungzhe Wang, and Chenhui Li. "Cognitive Visual Analytics of Multi-Dimensional Cloud System Monitoring Data." International Journal of Software Science and Computational Intelligence 9, no. 1 (January 2017): 20–34. http://dx.doi.org/10.4018/ijssci.2017010102.

Abstract:
Hardware virtualization has enabled large scale computational service delivery models with high cost leverage and improved resource utilization on cloud computing platforms. This has completely changed the landscape of computing in the last decade. It has also enabled large-scale data analytics through distributed high performance computing. Due to the infrastructure complexity, end-users and administrators of cloud platforms can rarely obtain a full picture of the state of cloud computing systems and data centers. Recent monitoring tools enable users to obtain large amounts of data with respect to many utilization parameters of cloud platforms. However, they fail to get the maximal overall insight into the resource utilization dynamics of cloud platforms. Furthermore, existing tools make it difficult to observe large-scale patterns, making it difficult to learn from the past behavior of cloud system dynamics. In this work, the authors describe a perceptual-based interactive visualization platform that gives users and administrators a cognitive view of cloud computing system dynamics.
10

Vidal, Ivan, Borja Nogales, Diego Lopez, Juan Rodríguez, Francisco Valera, and Arturo Azcorra. "A Secure Link-Layer Connectivity Platform for Multi-Site NFV Services." Electronics 10, no. 15 (August 3, 2021): 1868. http://dx.doi.org/10.3390/electronics10151868.

Abstract:
Network Functions Virtualization (NFV) is a key technology for network automation and has been instrumental to materialize the disruptive view of 5G and beyond mobile networks. In particular, 5G embraces NFV to support the automated and agile provision of telecommunication and vertical services as a composition of versatile virtualized components, referred to as Virtual Network Functions (VNFs). It provides a high degree of flexibility in placing these components on distributed NFV infrastructures (e.g., at the network edge, close to end users). Still, this flexibility creates new challenges in terms of VNF connectivity. To address these challenges, we introduce a novel secure link-layer connectivity platform, L2S. Our solution can automatically be deployed and configured as a regular multi-site NFV service, providing the abstraction of a layer-2 switch that offers link-layer connectivity to VNFs deployed on remote NFV sites. Inter-site communications are effectively protected using existing security solutions and protocols, such as IP security (IPsec). We have developed a functional prototype of L2S using open-source software technologies. Our evaluation results indicate that this prototype can perform IP tunneling and cryptographic operations at Gb/s data rates. Finally, we have validated L2S using a multi-site NFV ecosystem at the Telefonica Open Network Innovation Centre (5TONIC), using our solution to support a multicast-based IP television service.
11

Dawadi, Babu Ram, Subarna Shakya, and Rajendra Paudyal. "CoMMoN: The Real-Time Container and Migration Monitoring as a Service in the Cloud." Journal of the Institute of Engineering 12, no. 1 (March 6, 2017): 51–62. http://dx.doi.org/10.3126/jie.v12i1.16770.

Abstract:
With the advancement of computing technologies, modern cloud computing and virtualization systems are highly distributed, thanks to the invention of a better, lightweight toolkit for packaging distributed applications over the cloud environment, called container technology. Cloud containers feature the management of applications with easy plug-and-play ability, migration, replication, relocation, upgrading, and so on, in real time. Such containers, running different applications over the cloud infrastructure, may consume different resources that require real-time monitoring. Docker is an open-source, platform-independent tool to create, deploy, and run applications using containers. Billions of applications for end users and SMEs running over the cloud require efficient management, monitoring, and operations so that cloud service providers can achieve better SLAs. With this research, a Docker Container and Migration Monitoring System (CoMMoN) has been developed, in which a customer running an application in a container can monitor the application in real time and make decisions about migrating the container to another network, whether initiated by the service provider for load balancing or by the customer upon violation of the SLA. Once the customer registers a container with the monitoring system, the monitoring probe collects different monitoring metrics and sends those parameters to the remote monitoring system. The system, which stores the monitoring metrics in InfluxDB, a time-series database, also monitors the real-time migration of containers across the network.
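The monitoring probe in this abstract ships container metrics to InfluxDB. Such a probe typically formats each sample in InfluxDB's line protocol (`measurement,tags fields timestamp`); the container names and metric values below are invented.

```python
# Format one container-metrics sample in InfluxDB line protocol:
# measurement, comma-separated tags, comma-separated fields, and a
# nanosecond timestamp, which an InfluxDB write endpoint accepts.

def to_line_protocol(measurement, tags, fields, ts_ns):
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_part} {field_part} {ts_ns}"

line = to_line_protocol(
    "container_stats",
    tags={"container": "web-1", "host": "node-a"},
    fields={"cpu_pct": 42.5, "mem_mb": 512},
    ts_ns=1700000000000000000,
)
print(line)
# container_stats,container=web-1,host=node-a cpu_pct=42.5,mem_mb=512 1700000000000000000
```

A real probe would also escape special characters and quote string field values; numeric fields, as here, need no quoting.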
12

Chiang, Mei-Ling, and Tsung-Te Hou. "A Scalable Virtualized Server Cluster Providing Sensor Data Storage and Web Services." Symmetry 12, no. 12 (November 25, 2020): 1942. http://dx.doi.org/10.3390/sym12121942.

Abstract:
With the rapid development of Internet of Things (IoT) technology, diversified applications deploy extensive sensors to monitor objects, such as PM2.5 air quality monitoring. The sensors transmit data to the server periodically and continuously. However, a single server cannot provide efficient services for the ever-growing number of IoT devices and the data they generate. This study builds on the concept of symmetry of architecture and quantities in system design and explores the load balancing issue to improve performance. This study uses the Linux Virtual Server (LVS) and virtualization technology to deploy a virtual machine (VM) cluster. It consists of a front-end server, also a load balancer, to dispatch requests, and several back-end servers to provide services. These receive data from sensors and provide Web services for browsing real-time sensor data. The Hadoop Distributed File System (HDFS) and HBase are used to store the massive amount of received sensor data. Because load balancing is critical for resource utilization, this study also proposes a new load distribution algorithm for VM-based server clusters that simultaneously provide multiple services, such as sensor services and Web services. It considers the aggregate load of all back-end servers on the same physical server that provides multiple services. It also considers the difference between physical machines and VMs. Algorithms such as those for LVS, which do not consider these factors, can cause load imbalance between physical servers. The experimental results demonstrate that the proposed system is fault tolerant, highly scalable, and offers high availability and high performance.
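The key point of the proposed algorithm is that it scores a back-end VM by the aggregate load of its physical host, not by the VM's own load alone. A minimal version of that selection rule (the topology and load figures are invented for the example):

```python
# Pick the back-end VM minimizing its own load plus its physical host's
# aggregate load, so two lightly loaded VMs on one busy host do not
# both attract traffic.

def pick_backend(vms, host_of, vm_load):
    host_load = {}
    for vm, load in vm_load.items():
        host_load[host_of[vm]] = host_load.get(host_of[vm], 0) + load
    return min(vms, key=lambda vm: vm_load[vm] + host_load[host_of[vm]])

host_of = {"vm1": "phys-A", "vm2": "phys-A", "vm3": "phys-B"}
vm_load = {"vm1": 10, "vm2": 60, "vm3": 30}
# vm1 is the least-loaded VM, but phys-A carries 70 in total,
# so the request goes to vm3 on the quieter phys-B.
print(pick_backend(["vm1", "vm2", "vm3"], host_of, vm_load))  # vm3
```

The paper's algorithm additionally weights physical machines and VMs differently; this sketch shows only the host-aggregation idea.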
13

Vinay, Gatla, and T. Pavan Kumar. "Centralized real-time logs investigation in virtual data-center." International Journal of Engineering & Technology 7, no. 2.7 (March 18, 2018): 1. http://dx.doi.org/10.14419/ijet.v7i2.7.10242.

Abstract:
Penetration testing is a specialized security auditing methodology in which a tester simulates an attack on a secured system. This paper shows how to collect the massive volume of log files generated across virtual data centers in real time; such logs carry hidden information of considerable organizational value. Testing of this kind spans all aspects of log management across the many servers of a virtual data center. Virtualization limits costs by reducing the need for physical hardware systems, although it requires high-end hardware for processing. In practice, logs from vCenter, ESXi, and individual VMs must all be examined, which makes manual analysis tedious and time-consuming. Shipping them automatically to a centralized log management server instead yields far deeper insight. Accurate search algorithms; field searching over title, author, and content; sorting; multiple-index search with merged results and simultaneous file updates; result grouping; and plug-ins for common search-engine file formats all prove to be effective measures in an investigation. Finally, flexible network security monitoring, traffic investigation, offense detection, log recording, and distributed inquiry can export data to a variety of visualization dashboards, which is exactly what log investigations across virtual data centers in real time require.
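The field searching with merged results described above is, at its core, an inverted index intersected across fields. A toy version over structured log records (the record contents and query are invented):

```python
# Minimal inverted index over structured log records with AND-queries
# across fields, the core mechanism behind field search in a
# centralized log management server.
from collections import defaultdict

class LogIndex:
    def __init__(self):
        self.index = defaultdict(set)   # (field, token) -> record ids
        self.records = []

    def add(self, record):
        rid = len(self.records)
        self.records.append(record)
        for field, text in record.items():
            for token in text.lower().split():
                self.index[(field, token)].add(rid)
        return rid

    def search(self, **terms):
        """AND-query across fields, e.g. search(source='esxi')."""
        hits = [self.index[(f, t.lower())] for f, t in terms.items()]
        return sorted(set.intersection(*hits)) if hits else []

idx = LogIndex()
idx.add({"source": "vcenter", "message": "login failed for admin"})
idx.add({"source": "esxi", "message": "vm migration started"})
idx.add({"source": "esxi", "message": "login failed for root"})
print(idx.search(source="esxi", message="login"))  # -> [2]
```

Production log search engines add ranking, tokenization rules, and on-disk segment files, but the field-by-field intersection is the same.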
14

Srinivas, Chalasani. "Real-time logs in virtual data-center." International Journal of Engineering & Technology 7, no. 2 (May 17, 2018): 746. http://dx.doi.org/10.14419/ijet.v7i2.9434.

Abstract:
Penetration testing is a specialized security auditing methodology in which a tester simulates an attack on a secured system. This paper shows how to collect the massive volume of log files generated across virtual data centers in real time; such logs carry hidden information of considerable organizational value. Testing of this kind spans all aspects of log management across the many servers of a virtual data center. Virtualization limits costs by reducing the need for physical hardware systems, although it requires high-end hardware for processing. In practice, logs from vCenter, ESXi, and individual VMs must all be examined, which makes manual analysis tedious and time-consuming. Shipping them automatically to a centralized log management server instead yields far deeper insight. Accurate search algorithms; field searching over title, author, and content; sorting; multiple-index search with merged results and simultaneous file updates; result grouping; and plug-ins for common search-engine file formats all prove to be effective measures in an investigation. Finally, flexible network security monitoring, traffic investigation, offense detection, log recording, and distributed inquiry can export data to a variety of visualization dashboards, which is exactly what log investigations across virtual data centers in real time require.
15

Strumberger, Ivana, Milan Tuba, Nebojsa Bacanin, and Eva Tuba. "Cloudlet Scheduling by Hybridized Monarch Butterfly Optimization Algorithm." Journal of Sensor and Actuator Networks 8, no. 3 (August 11, 2019): 44. http://dx.doi.org/10.3390/jsan8030044.

Abstract:
Cloud computing technology enables efficient utilization of available physical resources through the virtualization where different clients share the same underlying physical hardware infrastructure. By utilizing the cloud computing concept, distributed, scalable and elastic computing resources are provided to the end-users over high speed computer networks (the Internet). Cloudlet scheduling that has a significant impact on the overall cloud system performance represents one of the most important challenges in this domain. In this paper, we introduce implementations of the original and hybridized monarch butterfly optimization algorithm that belongs to the category of swarm intelligence metaheuristics, adapted for tackling the cloudlet scheduling problem. The hybridized monarch butterfly optimization approach, as well as adaptations of any monarch butterfly optimization version for the cloudlet scheduling problem, could not be found in the literature survey. Both algorithms were implemented within the environment of the CloudSim platform. The proposed hybridized version of the monarch butterfly optimization algorithm was first tested on standard benchmark functions and, after that, the simulations for the cloudlet scheduling problem were performed using artificial and real data sets. Based on the obtained simulation results and the comparative analysis with six other state-of-the-art metaheuristics and heuristics, under the same experimental conditions and tested on the same problem instances, a hybridized version of the monarch butterfly optimization algorithm proved its potential for tackling the cloudlet scheduling problem. It has been established that the proposed hybridized implementation is superior to the original one, and also that the task scheduling problem in cloud environments can be more efficiently solved by using such an algorithm with positive implications to the cloud management.
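Cloudlet scheduling assigns tasks to VMs so that the makespan (finish time of the busiest VM) is small. Rather than reproduce the monarch butterfly algorithm, the sketch below uses plain random-mutation hill climbing on the same objective, just to show what such a metaheuristic optimizes; the task lengths and VM speeds are invented.

```python
# Makespan-minimizing task-to-VM assignment via random-mutation hill
# climbing: repeatedly reassign one random task and keep the change if
# the makespan improves. A stand-in for heavier metaheuristics.
import random

def makespan(assign, lengths, speeds):
    loads = [0.0] * len(speeds)
    for task, vm in enumerate(assign):
        loads[vm] += lengths[task] / speeds[vm]
    return max(loads)

def schedule(lengths, speeds, iters=2000, seed=1):
    rng = random.Random(seed)
    best = [rng.randrange(len(speeds)) for _ in lengths]
    best_cost = makespan(best, lengths, speeds)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(len(lengths))] = rng.randrange(len(speeds))
        cost = makespan(cand, lengths, speeds)
        if cost < best_cost:                 # greedy move on improvement
            best, best_cost = cand, cost
    return best, best_cost

lengths = [40, 30, 20, 10, 10, 5]
speeds = [2.0, 1.0]                          # VM 0 is twice as fast
assign, cost = schedule(lengths, speeds)
print(assign, round(cost, 1))
```

Swarm metaheuristics like the monarch butterfly algorithm maintain a population of such assignments and mix them, rather than mutating a single one.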
16

Федорович, Олег Євгенович, and Юрій Леонідович Прончаков. "МЕТОД ФОРМУВАННЯ ЛОГІСТИЧНИХ ТРАНСПОРТНИХ ВЗАЄМОДІЙ ДЛЯ НОВОГО ПОРТФЕЛЮ ЗАМОВЛЕНЬ РОЗПОДІЛЕНОГО ВІРТУАЛЬНОГО ВИРОБНИЦТВА" [A method for forming logistic transport interactions for a new order portfolio of distributed virtual manufacturing]. RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 2 (April 26, 2020): 102–8. http://dx.doi.org/10.32620/reks.2020.2.09.

Abstract:
The subject of research in the paper is the organization of logistics interactions in the distributed virtual manufacture of high-tech products. The work aims to develop a method for finding rational routes in a heterogeneous transport network, considering time (deadlines), costs, and risks. The article addresses the following tasks: research of logistics interactions across the virtual manufacture of high-tech products (aircraft, automotive, etc.). Logistic interactions are carried out in a heterogeneous transport network that connects individual technological processes of high-tech manufacture. The features of logistics related to the virtualization of manufacture in the form of a new portfolio of orders are analyzed. The suppliers of materials, raw materials, and components, which are located in the nodes of the heterogeneous transport network and are the sources of goods transported in the distributed virtual manufacture, are determined. To assess the possible routes of goods transportation in the heterogeneous transport network, the values of time (deadlines of goods delivery), freight costs, and risks are considered. The purposeful search for rational routes is carried out using the proposed algorithm for generating and moving numerical "waves" in a heterogeneous transport network. A simulation model of numerical "wave" propagation, based on planning and executing events related to the arrival of transported goods at the nodes of the heterogeneous transport network, has been built. An algorithm to simulate the flow of requests in nodes neighboring a given node of the considered heterogeneous transport network has been developed. Dead-end nodes and possible parallel paths of equal types of transportation combining neighboring nodes are considered. The algorithm includes two different simulation phases: one to reach the "final" node and an inverse phase to define the route.
The search algorithm is universal and makes it possible to find the routes with minimal time of goods transportation, minimal costs, and risks. The search for a compromise route that satisfies the conflicting criteria of time, cost, and risk has been proposed. The method of rational routes search is designed in the form of the agent simulation model. The following methods are used in the article: system analysis to create the topology of heterogeneous transport network structure; graph theory to represent the flows and routes of goods transportation; simulation to generate and move numerical "waves"; route optimization based on simulation; multi-criteria optimization to find the rational route; agent-oriented simulation to create the routes in the heterogeneous transport network. Conclusions: the proposed method to find the rational routes in a heterogeneous transport network of distributed virtual manufacture allows organizing the logistics transport interactions in distributed virtual production at the initial stages of planning of the new portfolio of orders.
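The numerical-wave search expands from the source node and settles each node at its best value, which is functionally a Dijkstra-style search. The sketch below folds time, cost, and risk into one weighted arc value to find a compromise route; the weights and the tiny network are invented, and the paper's agent-based two-phase variant is not reproduced.

```python
# Dijkstra-style "wave" search for a compromise route: each arc carries
# (time, cost, risk), combined into a single value by caller-chosen
# weights; the wave settles each node at its best combined value.
import heapq

def wave_route(arcs, src, dst, w_time=1.0, w_cost=1.0, w_risk=1.0):
    adj = {}
    for (u, v), (t, c, r) in arcs.items():
        adj.setdefault(u, []).append((v, w_time*t + w_cost*c + w_risk*r))
    frontier = [(0.0, src, [src])]        # the expanding "wave"
    settled = set()
    while frontier:
        value, node, path = heapq.heappop(frontier)
        if node == dst:
            return value, path
        if node in settled:
            continue
        settled.add(node)
        for nxt, weight in adj.get(node, []):
            heapq.heappush(frontier, (value + weight, nxt, path + [nxt]))

# (time, cost, risk) per arc; the route via B is slower but cheaper/safer.
arcs = {("S", "A"): (2, 9, 3), ("A", "D"): (2, 9, 3),
        ("S", "B"): (4, 2, 1), ("B", "D"): (4, 2, 1)}
print(wave_route(arcs, "S", "D"))             # -> (14.0, ['S', 'B', 'D'])
print(wave_route(arcs, "S", "D", w_time=5))   # a time-critical order prefers A
```

Varying the weights is one simple way to explore the time/cost/risk trade-off the abstract calls a compromise route.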
17

Arnold, Paul, and Dirk von Hugo. "Future integrated communication network architectures enabling heterogeneous service provision." Advances in Radio Science 16 (September 4, 2018): 59–66. http://dx.doi.org/10.5194/ars-16-59-2018.

Abstract:
This paper summarizes expectations and requirements for future converged communication systems, denoted 5th Generation (5G). Multiple research and standardization activities worldwide contribute to the definition and specification of an Information and Communication Technology (ICT) that provides business customers and residential users with both existing and upcoming services demanding higher data rates and guaranteed performance in terms of QoS parameters such as low latency and high reliability. Representative use case families are threefold: enhanced Mobile Broadband (eMBB), massive Internet of Things (mIoT), and Critical Communication, i.e. Ultra-Low Latency (ULL)/Ultra-High Reliability (UHR). Deploying and operating a dedicated network for each service or use case separately would raise expenses and service costs to an unduly high amount. Instead, the main concept of this new approach is the provision of a commonly shared physical infrastructure that offers resources for transport, processing, and storage of data to several separate logical networks (slices), individually managed and configured by potentially multiple service providers. Besides a multitude of other initiatives, the EU-funded 5G NORMA project (5G Novel Radio Multiservice adaptive network Architecture) has developed an architecture that enables not only network programmability (configurability in software) but also network slicing and multi-tenancy (allowing independent third parties to offer an end-to-end service tailored to their needs) in a mobile network. Major aspects addressed here are the selectable, on-demand support of mobility and service-aware QoE/QoS (Quality of Experience/Service) control. Specifically, we report on the outcome of the analysis of design criteria for mobility management schemes and the result of an exemplary application of the modular mobility function to scenarios with variable service requirements (e.g. high terminal speed vs. on-demand mobility or portability of devices). Efficient sharing of scarce frequency resources in new radio systems demands tight coordination of the orchestration and assignment (scheduling) of resources for the different network slices according to their capacity and priority (QoS) demands. Dynamicity aspects of changing the algorithms and schemes that manage, configure, and optimize resources at the radio base stations according to slice-specific Service Level Agreements (SLAs) are investigated. It has been shown that architectural issues in terms of hierarchy (centralized vs. distributed) and layering, i.e. the separation of the control (signaling) and (user) data planes, will play an essential role in increasing the elasticity of network infrastructures, which is the focus of applying SDN (Software Defined Networking) and NFV (Network Function Virtualization) to next-generation communication systems. An outlook on follow-on standardization and open research questions within different SDOs (Standards Defining Organizations) and recently started cooperative projects concludes the contribution.
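The priority- and capacity-aware slice scheduling this abstract describes can be sketched as a simple weighted allocator. This is an illustrative model only, not 5G NORMA's actual algorithm, and the slice names, priority weights, and demand figures below are invented for the example:

```python
def allocate_resource_blocks(total_rbs, slices):
    """Grant radio resource blocks one at a time to the slice that is most
    under-served relative to its SLA priority weight, stopping when blocks
    run out or every slice's capacity demand is met."""
    allocation = {name: 0 for name in slices}
    for _ in range(total_rbs):
        active = [n for n in slices if allocation[n] < slices[n]["demand"]]
        if not active:
            break  # all demands satisfied; leftover blocks stay unassigned
        neediest = min(active, key=lambda n: allocation[n] / slices[n]["priority"])
        allocation[neediest] += 1
    return allocation

# Hypothetical slices: the latency-critical slice gets the highest weight.
slices = {
    "eMBB":  {"priority": 3, "demand": 50},
    "URLLC": {"priority": 5, "demand": 20},
    "mIoT":  {"priority": 1, "demand": 100},
}
result = allocate_resource_blocks(60, slices)
```

Under contention the high-priority URLLC slice reaches its full demand first, while the remaining blocks are split between eMBB and mIoT roughly in proportion to their weights; a production scheduler would additionally react to changing SLAs and radio conditions, as the abstract notes.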
APA, Harvard, Vancouver, ISO, and other styles
18

Uddin, Mueen, Mohammed Hamdi, Abdullah Alghamdi, Mesfer Alrizq, Mohammad Sulleman Memon, Maha Abdelhaq, and Raed Alsaqour. "Server consolidation: A technique to enhance cloud data center power efficiency and overall cost of ownership." International Journal of Distributed Sensor Networks 17, no. 3 (March 2021): 155014772199721. http://dx.doi.org/10.1177/1550147721997218.

Full text
Abstract:
Cloud computing is a well-known technology that provides flexible, efficient, and cost-effective information technology solutions, enabling multinationals to offer improved quality of business services to end users. The cloud computing paradigm originates from grid and parallel computing models, as it uses virtualization, server consolidation, utility computing, and other computing technologies and models to provide better information technology solutions for large-scale computational data centers. Recently intensifying computational demands from multinational enterprises have motivated the growth of large, complex cloud data centers to handle the business, monetary, Internet, and commercial applications of different enterprises. A cloud data center encompasses thousands of physical server machines arranged in racks, along with network, storage, and other equipment, and requires an extensive amount of power to run the processes and services that business firms need for their applications. This data center infrastructure leads to challenges such as enormous power consumption, underutilization of installed equipment (especially physical server machines), CO2 emissions causing global warming, and so on. In this article, we highlight data center issues in the context of Pakistan, where the data center industry faces huge power deficits and struggles to meet the power demands of providing data and operational services to business enterprises. The research investigates these challenges and provides solutions to reduce the number of installed physical server machines and their related equipment. We propose a server consolidation technique that increases the utilization of existing server machines by migrating their workloads to virtual server machines, implementing green, energy-efficient cloud data centers. To achieve this objective, we also introduce a novel Virtualized Task Scheduling Algorithm to manage and properly distribute physical server machine workloads onto virtual server machines. The results are generated from a case study performed in Pakistan, where the proposed server consolidation technique and virtualized task scheduling algorithm are applied to a tier-level data center. The results demonstrate annual power savings of 23,600 W and overall cost savings of US$78,362. They also show that the utilization ratio of the existing physical server machines increased from 10% to 30%, while the number of server machines was reduced by 50%, contributing enormously to power savings.
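The consolidation step — packing existing workloads onto fewer machines — can be illustrated with a first-fit-decreasing placement. This is a generic bin-packing sketch under assumed CPU-demand figures, not the paper's Virtualized Task Scheduling Algorithm:

```python
def consolidate(workloads, host_capacity):
    """Place each workload (as a VM) on the first physical host with enough
    spare capacity, considering workloads in decreasing order of demand
    (first-fit decreasing); a new host is powered on only when none fits."""
    hosts = []  # each host holds a list of (workload, demand) placements
    for name, demand in sorted(workloads.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if sum(d for _, d in host) + demand <= host_capacity:
                host.append((name, demand))
                break
        else:
            hosts.append([(name, demand)])
    return hosts

# Hypothetical workloads, CPU demand as a percentage of one host's capacity.
workloads = {"db": 60, "batch": 50, "web": 40, "cache": 30, "mail": 20}
hosts = consolidate(workloads, host_capacity=100)
```

With these figures the five workloads fit on two fully utilized hosts instead of five dedicated machines, which is the utilization and power-saving effect the abstract reports, in miniature.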
APA, Harvard, Vancouver, ISO, and other styles
19

Chouhan, Durga, Nilima Gautam, Gaurav Purohit, and Rajesh Bhdada. "A survey on virtualization techniques in Mobile edge computing." WEENTECH Proceedings in Energy, March 13, 2021, 455–68. http://dx.doi.org/10.32438/wpe.412021.

Full text
Abstract:
In the present scenario, the field of Information Technology (IT) is moving from physical storage to cloud storage, as "cloud" providers deliver on-demand resources over the Internet. MEC's key idea is to provide an IT infrastructure and cloud computing services at the mobile network's edge, within the RAN and close to mobile users. MEC expands the idea of cloud computing by bringing the benefits of the cloud closer to consumers at the network edge, resulting in lower end-to-end latency. It is a decentralized computing infrastructure in which applications that use signal processing, storage, control, and computing are distributed between the data source and the cloud in the most effective and logical way. Virtualization is the main cloud infrastructure technology used in MEC; it is accomplished by virtualizing the software or hardware resource layer. Virtualization in MEC can be done with a hypervisor, virtual machines, Docker containers, or Kubernetes. Hypervisors and VMs are the earlier technologies; Docker is the technology in common use today, and Kubernetes is the future of virtualization. In the face of large-scale and highly scalable needs, cloud computing infrastructure is hard to provision in a short time, and a conventional virtual machine-based cloud host consumes a lot of resources on its own. Hence, in this paper we discuss Docker as a new container technology and show how it has solved previous virtualization problems, including the creation and deployment of large applications. The purpose of this paper is to provide a detailed survey of related MEC research and technological developments, highlighting specifically relevant research and future directions.
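MEC's latency argument — that serving a request at the network edge avoids the long haul to a central cloud — can be made concrete with a toy offloading decision. The link-rate and CPU figures below are invented for illustration, not taken from the surveyed work:

```python
def offload_target(task_cycles, data_bits, edge, cloud):
    """Choose the execution site with the lower estimated completion time:
    transfer delay (data size / link rate) plus compute delay
    (CPU cycles required / CPU speed)."""
    def latency(site):
        return data_bits / site["rate_bps"] + task_cycles / site["cpu_hz"]
    return "edge" if latency(edge) <= latency(cloud) else "cloud"

# Assumed figures: the edge host has a fast local link but a modest CPU,
# the distant cloud a slower path but far more compute.
edge = {"rate_bps": 50e6, "cpu_hz": 5e9}
cloud = {"rate_bps": 5e6, "cpu_hz": 50e9}

offload_target(1e9, 8e6, edge, cloud)    # data-heavy task: edge wins
offload_target(1e11, 1e5, edge, cloud)   # compute-heavy task: cloud wins
```

The same trade-off is what makes lightweight containers attractive at the edge: the faster a workload can be instantiated near the user, the more tasks fall on the "edge" side of this decision.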
APA, Harvard, Vancouver, ISO, and other styles
