To see the other types of publications on this topic, follow the link: Server load balancing.

Journal articles on the topic 'Server load balancing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Server load balancing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Mishra, Swati, and Sanjaya Kumar Panda. "An Efficient Server Minimization Algorithm for Internet Distributed Systems." International Journal of Rough Sets and Data Analysis 4, no. 4 (2017): 17–30. http://dx.doi.org/10.4018/ijrsda.2017100102.

Full text
Abstract:
The increasing use of online services leads to an unequal distribution of loads among servers. The problem is therefore to balance the loads among the servers so that the total number of active servers is minimized. One possible solution is to transfer the loads from an underutilized server to a suitable server and put the underutilized server into sleep mode. In this paper, a server minimization algorithm (SMA) is proposed to solve the server minimization and load balancing problem. The proposed algorithm reduces the number of servers by merging the loads of the two least loaded servers, then determines the standard deviation of the server loads for load balancing. The proposed SMA is compared with an existing load balancing algorithm using the number of minimized servers, the load standard deviation, and the load factor. The simulation results show the efficacy of the SMA.
APA, Harvard, Vancouver, ISO, and other styles
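The merging step described in this abstract can be sketched in a few lines. The following Python is a hedged illustration, not the authors' exact SMA: the list-based representation, the capacity threshold, and the use of population standard deviation are assumptions made for the example.

```python
# Hypothetical sketch of the server-merging idea described in the abstract.
import statistics

def minimize_servers(loads, capacity):
    """Repeatedly merge the two least-loaded servers while their
    combined load still fits within a single server's capacity."""
    active = sorted(loads)
    while len(active) > 1 and active[0] + active[1] <= capacity:
        merged = active.pop(0) + active.pop(0)  # move load, sleep one server
        active.append(merged)
        active.sort()
    return active

def load_std_dev(loads):
    """Standard deviation of server loads, the balance metric in the paper."""
    return statistics.pstdev(loads)
```

For instance, `minimize_servers([10, 15, 20, 70], 100)` merges the two lightest servers twice and leaves two active servers.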
2

Fadila, Aina, Muhammad Nasir, and Safriadi Safriadi. "Implementasi Sistem Load Balancing Web Server Pada Jaringan public Cloud Computing Menggunakan Least Connection." Journal of Artificial Intelligence and Software Engineering (J-AISE) 3, no. 2 (2023): 50. http://dx.doi.org/10.30811/jaise.v3i2.4578.

Full text
Abstract:
A web server is software that receives requests from clients and responds by transferring website pages through a browser. Behind the ease of accessing information lies a problem with the traffic headed to the web server: as requests for information increase, traffic to the web server can become overloaded, and the server eventually goes down because it cannot handle the excessive requests. To overcome this problem, load balancing is used to distribute the traffic load across many servers. The research questions are how to monitor running traffic in real time, and how the performance of a web server that uses load balancing compares with one that does not. The goal is to observe the monitoring system in real time and to measure web server performance with and without load balancing. This research applies load balancing on a public network, using HAProxy on the server supported by the least connection algorithm. Based on the analysis, the real-time traffic monitoring system achieved a 90% success rate, and performance tests of the web server with JMeter, using 1000 requests at a time with loop counts of 1, 10, 50, and 100, gave an average throughput of 630.2/sec with load balancing and 354.5/sec without it.
Keywords: Load Balancing, Web Server, Apache, JMeter, Docker
APA, Harvard, Vancouver, ISO, and other styles
3

Ohta, Satoru, and Ryuichi Andou. "WWW Server Load Balancing Technique Employing Passive Measurement of Server Performance." ECTI Transactions on Electrical Engineering, Electronics, and Communications 8, no. 1 (2009): 59–66. http://dx.doi.org/10.37936/ecti-eec.201081.172018.

Full text
Abstract:
Server load balancing is indispensable within the World Wide Web (WWW) for providing high-quality service. In server load balancing, since server loads and capacities are not always identical, traffic should be distributed by measuring server performance to improve the service quality. This study proposes a load balancing technique conducted by passive measurement, which estimates server performance via user traffic passing through the load balancer. Since this method evaluates server performance without executing any programs on the server, no additional server or network load is generated. The paper first presents a server performance metric that can be passively measured. The presented metric utilizes the characteristics of TCP SYN and SYN ACK messages exchanged in the TCP connection establishment phase. An experiment shows that the metric correctly identifies server performance degradation. The paper then proposes a load balancing algorithm based on the metric and discusses its implementation issues. The proposed algorithm distributes fewer requests to servers that do not have sufficient capacity. Because of this, the algorithm achieves good performance in a heterogeneous environment where servers with different capacities coexist. The effectiveness of the proposed load balancing technique is confirmed experimentally.
APA, Harvard, Vancouver, ISO, and other styles
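The SYN/SYN-ACK delay metric lends itself to a small sketch. The following Python is an illustrative approximation of the idea, not the paper's implementation: the EWMA smoothing factor, the dictionary layout, and the pick-the-fastest policy are assumptions.

```python
# Illustrative sketch of a passively measured server-health metric, assuming
# the balancer can timestamp SYN and SYN/ACK packets as they pass through.
class PassiveMonitor:
    def __init__(self, alpha=0.2):
        self.alpha = alpha   # EWMA smoothing factor (assumed value)
        self.pending = {}    # (server, conn_id) -> SYN timestamp
        self.delay = {}      # server -> smoothed SYN -> SYN/ACK delay

    def on_syn(self, server, conn_id, t):
        self.pending[(server, conn_id)] = t

    def on_syn_ack(self, server, conn_id, t):
        t0 = self.pending.pop((server, conn_id), None)
        if t0 is None:
            return
        sample = t - t0
        prev = self.delay.get(server, sample)
        self.delay[server] = (1 - self.alpha) * prev + self.alpha * sample

    def best_server(self):
        # A degraded server shows a growing SYN/ACK delay; here we simply
        # pick the server with the smallest smoothed delay.
        return min(self.delay, key=self.delay.get)
```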
4

Weng, Wentao, Xingyu Zhou, and R. Srikant. "Optimal Load Balancing with Locality Constraints." ACM SIGMETRICS Performance Evaluation Review 49, no. 1 (2022): 49–50. http://dx.doi.org/10.1145/3543516.3456279.

Full text
Abstract:
Applications in cloud platforms motivate the study of efficient load balancing under job-server constraints and server heterogeneity. In this paper, we study load balancing on a bipartite graph where left nodes correspond to job types and right nodes correspond to servers, with each edge indicating that a job type can be served by a server. Thus edges represent locality constraints, i.e., an arbitrary job can only be served at servers which contain certain data and/or machine learning (ML) models. Servers in this system can have heterogeneous service rates. In this setting, we investigate the performance of two policies named Join-the-Fastest-of-the-Shortest-Queue (JFSQ) and Join-the-Fastest-of-the-Idle-Queue (JFIQ), which are simple variants of Join-the-Shortest-Queue and Join-the-Idle-Queue, where ties are broken in favor of the fastest servers. Under a "well-connected" graph condition, we show that JFSQ and JFIQ are asymptotically optimal in the mean response time when the number of servers goes to infinity. In addition to asymptotic optimality, we also obtain upper bounds on the mean response time for finite-size systems. We further show that the well-connectedness condition can be satisfied by a random bipartite graph construction with relatively sparse connectivity.
APA, Harvard, Vancouver, ISO, and other styles
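The JFSQ tie-breaking rule is simple enough to state in code. This Python sketch assumes dictionaries of queue lengths and service rates per server; it illustrates the rule described in the abstract rather than the authors' exact formulation.

```python
# Minimal sketch of Join-the-Fastest-of-the-Shortest-Queue under locality
# constraints: only servers in `eligible` may serve this job type.
def jfsq(queues, rates, eligible):
    """Among eligible servers, find the shortest queue length,
    then break ties in favor of the fastest service rate."""
    shortest = min(queues[s] for s in eligible)
    candidates = [s for s in eligible if queues[s] == shortest]
    return max(candidates, key=lambda s: rates[s])
```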
5

Januar Al Amien and Doni Winarso. "ANALISIS PENINGKATAN KINERJA FTP SERVER MENGGUNAKAN LOAD BALANCING PADA CONTAINER." JURNAL FASILKOM 9, no. 3 (2019): 8–18. http://dx.doi.org/10.37859/jf.v9i3.1667.

Full text
Abstract:
Cloud computing is a technology that answers the challenge of the need for efficient computing. Many things can be implemented using cloud computing technologies, such as web services, storage services, and applications. Using cloud computing with container technology can help in managing applications and optimizing the use of resources, namely memory and processor usage, on the server. In this research, Docker containers are implemented with an FTP (File Transfer Protocol) service. The FTP service is split into 3 containers within a single server computer. To handle performance problems on the FTP server under overload, load balancing is used: a method to improve performance while reducing the performance load on FTP servers. Based on the test results, using multiple containers and load balancing in the FTP server with the least connection and round robin algorithms yields smaller memory usage and more even processor utilization. Both algorithms are recommended for handling FTP server loads and will be more efficient when applied to servers with the same specifications and loads.
 Keywords: Cloud Computing, Docker, FTP, Load Balancing, HAProxy, Least Connection, Round Robin.
APA, Harvard, Vancouver, ISO, and other styles
6

Fancy, C., and M. Pushpalatha. "Traffic-aware adaptive server load balancing for software defined networks." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 3 (2021): 2211. http://dx.doi.org/10.11591/ijece.v11i3.pp2211-2218.

Full text
Abstract:
Servers in data center networks handle heterogeneous bulk loads. Load balancing, therefore, plays an important role in optimizing network bandwidth and minimizing response time. Complete knowledge of the current network status is needed to keep the load in the network stable. Cataloguing network status in a traditional network needs additional processing, which increases complexity, whereas in software-defined networking the control plane monitors the overall working of the network continuously. Hence an efficient load balancing algorithm that adopts SDN is proposed. This paper proposes TA-ASLB (traffic-aware adaptive server load balancing), an efficient algorithm to balance flows to the servers in a data center network. It works with two parameters, residual bandwidth and server capacity: it detects elephant flows and forwards them towards the optimal server, where they can be processed quickly. It has been tested with the Mininet simulator and gave considerably better results than the existing server load balancing algorithms in the Floodlight controller.
APA, Harvard, Vancouver, ISO, and other styles
8

Danilevičius, Ernestas, and Liudvikas Kaklauskas. "STUDY OF HIGH AVAILABILITY AND PERFORMACE OFF SERVER CLUSTER." PROFESSIONAL STUDIES: THEORY AND PRACTICE 27, no. 1 (2023): 89–94. http://dx.doi.org/10.56131/pstp.2023.27.1.154.

Full text
Abstract:
The article analyzes selected software solutions for balancing traffic in a server cluster: Traefik, HAProxy, and NGINX. For the demonstration, the system model consists of a cluster of three servers connected to management servers with load-balancing solutions that share a public IP address. The application servers use the same database, available in a multi-master configuration, and the management servers are connected via the BGP protocol. User requests reach the server cluster through redundant traffic-balancing subsystems. After testing the designed system, it was found that HAProxy is the best among the selected load-balancing solutions and ensures high cluster availability. When setting up HAProxy, it is recommended to choose the dynamic Least-Connections load balancing algorithm. Keywords: server cluster, load balancing, Round Robin, Least Connections, HAProxy
APA, Harvard, Vancouver, ISO, and other styles
9

Ramana, Kadiyala, and M. Ponnavaikko. "AWSQ: an approximated web server queuing algorithm for heterogeneous web server cluster." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 3 (2019): 2083–93. https://doi.org/10.11591/ijece.v9i3.pp2083-2093.

Full text
Abstract:
With the rising popularity of web-based applications, cluster-based web servers are the primary and consistent resource in the infrastructure of the World Wide Web. Particularly for dynamic content and database-driven applications, and especially under heavy load, managing cluster performance is a serious task. Without efficient mechanisms, an overloaded web server cannot provide good performance. In clusters, this overload condition can be avoided using load balancing mechanisms that share the load among the available web servers. Existing load balancing mechanisms intended to handle static content suffer substantial performance degradation under database-driven and dynamic content. The most serviceable load balancing approaches are Web Server Queuing (WSQ), Server Content based Queue (QSC), and Remaining Capacity (RC), which provide better results under specific conditions. Considering this, we propose an approximated web server queuing mechanism for web server clusters, along with an analytical model for calculating the load of a web server. Requests are classified based on service time, and the number of outstanding requests at each web server is tracked to achieve better performance. The approximated load of each web server is used for load balancing. The experimental results illustrate the effectiveness of the proposed mechanism in improving the mean response time, throughput, and drop rate of the server cluster.
APA, Harvard, Vancouver, ISO, and other styles
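The approximated-load idea, weighting outstanding requests by an estimated per-class service time, can be sketched as follows. This Python is an illustration under assumed data layouts and service-time estimates, not the AWSQ algorithm itself.

```python
# Hedged sketch: approximate a server's load from its in-flight requests,
# weighted by an estimated mean service time per request class.
def approximate_load(outstanding, service_time):
    """outstanding: class -> number of in-flight requests on this server;
    service_time: class -> estimated mean service time for that class."""
    return sum(n * service_time[c] for c, n in outstanding.items())

def pick_server(servers, service_time):
    """Dispatch to the server with the smallest approximated load."""
    return min(servers, key=lambda s: approximate_load(servers[s], service_time))
```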
10

Nurdiansyah, Alfian, Nugroho Suharto, and Hudiono Hudiono. "Analisis Mirror Server menggunakan Load Balancing pada Jaringan Area Lokal." Jurnal Jartel: Jurnal Jaringan Telekomunikasi 10, no. 4 (2020): 173–78. http://dx.doi.org/10.33795/jartel.v10i4.57.

Full text
Abstract:
A server is a system that provides particular services on a computer network. A server has its own operating system, called a network operating system, and controls all access to the network it serves. To assist the server, a mirror server system is built, in which the server duplicates a data set, an exact copy of a server that provides various information. A mirror server, also called a synchronized server, is a duplicate of a server. To increase server performance, a load balancer is needed. Load balancing is a technique for distributing internet traffic evenly over two connection paths. With load balancing, traffic runs more optimally, throughput is maximized, and overload on a connection path is avoided. Iptables is used to filter IP addresses so that clients access the server in the nearest server zone; load balancing combined with iptables thus makes the server's workload lighter. A common problem is that when many clients access a server, it becomes overloaded and its performance degrades because of dense traffic, and clients in turn experience long access times. The results of this research on combining load balancing and iptables show that, with the round robin algorithm, the average delay obtained for server1 was 0.149 seconds and 0.19122, and for server2 it was 0.161 seconds and 0.012 seconds.
APA, Harvard, Vancouver, ISO, and other styles
11

Arini, Arini, Andrew Fiade, and Ridwan Baharsyah. "Perbandingan Load Balancing Router Mysql Dan HAProxy Menggunakan SysBench dan Cluster Innodb Pada Sistem Operasi Centos." Cyber Security dan Forensik Digital 8, no. 1 (2025): 17–24. https://doi.org/10.14421/csecurity.2025.8.1.5004.

Full text
Abstract:
Load balancing is a server balancer that distributes the workload among several servers, taking into account the capacity of each server. When multiple servers are used, existing services can continue to function even if one server fails. The two load balancing models used are MySQL Router and HAProxy. This study compares the performance of MySQL Router and HAProxy in terms of response time, throughput, and server load distribution. Additionally, it tests data synchronization between database servers using Sysbench, a benchmark utility that evaluates system performance through various test parameters. The results show that MySQL Router has significantly better load balancing capabilities in distributing load and ensuring server availability than HAProxy. Testing from the smallest to the largest thread count (load) on the MySQL Router load balancer yielded a TPS (Transactions Per Second) range from 2900 down to 2600; as the thread count increases, TPS decreases, with response times ranging from 2 to 50 ms. HAProxy showed smaller TPS values, around 900 down to 800 TPS, and relatively long response times, ranging from 8 to 160 ms. Database synchronization tests reveal the efficiency of both models in handling data changes on different servers. This research contributes to the development of more reliable and efficient IT infrastructure within organizations, particularly in the context of using MySQL InnoDB Cluster and HAProxy on CentOS. 
Keywords: Load Balancing, MySQL Router, HAProxy, InnoDB Cluster, CentOS, Networking.
APA, Harvard, Vancouver, ISO, and other styles
12

Zulfianndari, Irmawati, and Rini Nur. "EVALUASI KINERJA LOAD BALANCING DENGAN ALGORITMA SCHEDULLING NEVER QUEUE." Journal of Informatics and Computer Engineering Research 1, no. 2 (2024): 42–49. https://doi.org/10.31963/jicer.v1i2.5176.

Full text
Abstract:
Load balancing is a technique to handle large loads that a single server cannot carry, so that no server becomes overloaded. To divide the load, load balancing uses a scheduling algorithm. The scheduling algorithm generally used is Round Robin, which divides requests evenly and then queues them for the servers, so unfinished processes can wait in the queue for quite a long time. Among load balancing systems there is an algorithm that adopts a two-speed model, working from the server status and the smallest connection delay: the Never Queue algorithm. This study aims to determine the performance of load balancing when the Never Queue scheduling algorithm is applied, based on predetermined scenarios and parameters. The study succeeded in implementing the Never Queue algorithm in a load balancing system for the Apache web server, where the time per request is lower for larger request volumes than with the Round Robin algorithm, and requests per second increase as the number of requests sent grows. In sharing server connections, the load balancer distributes the request load based on the Shortest Expected Delay (SED) algorithm, so individual web servers receive different numbers of connections and processes do not stay in queues for a long time.
APA, Harvard, Vancouver, ISO, and other styles
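The two-speed behaviour described here, prefer an idle server and otherwise fall back to Shortest Expected Delay, can be sketched briefly. This Python illustration makes assumptions that simplify the scheduler in the abstract: the standard (active + 1) / rate form of SED and a fastest-first tie-break among idle servers.

```python
# Hedged sketch of a Never Queue-style scheduler: send work to an idle
# server if one exists; otherwise pick the Shortest Expected Delay.
def never_queue(conns, rates):
    """conns: server -> active connection count; rates: server -> service rate."""
    idle = [s for s in conns if conns[s] == 0]
    if idle:
        return max(idle, key=lambda s: rates[s])   # fastest idle server (assumed tie-break)
    # SED: expected delay if one more job joins this server
    return min(conns, key=lambda s: (conns[s] + 1) / rates[s])
```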
13

K. A., Vani, and Rama Mohan Babu K. N. "An Intelligent Server load balancing based on Multi-criteria decision-making in SDN." International journal of electrical and computer engineering systems 14, no. 4 (2023): 433–42. http://dx.doi.org/10.32985/ijeces.14.4.7.

Full text
Abstract:
In an environment of rising internet usage, it is difficult to manage network traffic while maintaining a high quality of service. In highly trafficked networks, load balancers are crucial for ensuring the quality of service. Although different approaches to load balancing have been proposed in traditional networks, some of them require manual reconfiguration of the device to accommodate new services due to a lack of programmability. These problems can be solved through the use of software-defined networks. This paper presents a dynamic load-balancing algorithm for software-defined networks based on server response time and content mapping. The proposed technique dispatches requests to servers based on real-time server loads and comprises three modules: a request classification module, a server monitoring module, and an optimized dynamic load-balancing module using content-based routing. A variety of robust mathematical tools address complex problems with multiple objectives; Multi-Criteria Decision-Making is one of them. The performance of the proposed scheme has been validated by applying the Weighted Sum Method of the multi-criteria decision-making technique. The proposed method, Server load balancing based on Multi-Criteria Decision-Making (SDLB-MCDM), is compared with different load-balancing schemes: round robin, random, a load-balancing scheme based on server response time (LBBSRT), and an SDN-aided mechanism for web load balancing based on server statistics (SD-WLB). The experimental results show that SDLB-MCDM achieves a significant improvement of 58% when equal weights and 50% when unequal weights are assigned to the various QoS parameters, in comparison with the round robin, random, LBBSRT, and SD-WLB techniques.
APA, Harvard, Vancouver, ISO, and other styles
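The Weighted Sum Method step can be illustrated with a tiny scoring function. The normalization by per-criterion maxima and the lower-is-better convention below are assumptions made for the sketch; the paper's SDLB-MCDM uses its own QoS parameters and weights.

```python
# Illustrative Weighted Sum Method server selection, assuming every
# criterion (e.g. response time, load) is lower-is-better.
def wsm_best_server(metrics, weights):
    """metrics: server -> list of criterion values (lower is better);
    weights: one weight per criterion, summing to 1."""
    n = len(weights)
    # normalise each criterion by its maximum across servers
    maxima = [max(m[i] for m in metrics.values()) or 1 for i in range(n)]

    def score(vals):
        return sum(w * v / mx for w, v, mx in zip(weights, vals, maxima))

    return min(metrics, key=lambda s: score(metrics[s]))
```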
14

Cui, Yunhe, Lianshan Yan, Qing Qian, Huanlai Xing, and Saifei Li. "JSSTR: A Joint Server Selection and Traffic Routing Algorithm for the Software-Defined Data Center." Applied Sciences 8, no. 9 (2018): 1478. http://dx.doi.org/10.3390/app8091478.

Full text
Abstract:
Server load balancing technology makes services highly available by distributing incoming user requests to different servers, and thus plays a key role in data centers. However, most current server load balancing schemes are designed without considering their impact on the network. More specifically, when using these schemes, server selection and routing path calculation are usually executed sequentially, which may result in inefficient use of network resources or even cause issues in the network. As an emerging architecture, Software-Defined Networking (SDN) provides new solutions to overcome these shortcomings. Taking advantage of SDN, this paper proposes a Joint Server Selection and Traffic Routing algorithm (JSSTR), based on an improved Shuffled Frog Leaping Algorithm (SFLA), to achieve high network utilization, network load balancing, and server load balancing. Evaluation results validate that the proposed algorithm can significantly improve network efficiency and balance the network load and server load.
APA, Harvard, Vancouver, ISO, and other styles
15

Kethineni Vinod Kumar, Govindu Reddylatha, Mala Sindhu, and Kamasani Jayasree. "A Comprehensive survey of Load Balancing Techniques: From Classic Methods to Modern Algorithms." International Research Journal on Advanced Engineering Hub (IRJAEH) 2, no. 02 (2024): 287–96. http://dx.doi.org/10.47392/irjaeh.2024.0044.

Full text
Abstract:
Nowadays, cloud computing has become a cornerstone of modern technology, driving innovation, efficiency, and accessibility across various industries and applications. Distributed computing solves the difficulty of using distributed autonomous machines that communicate with each other over a network. Cloud computing provides clients with a range of services and capabilities that enhance productivity, accessibility, and scalability while reducing the need for extensive hardware and infrastructure investments. Rising interest in distributed computing means that more individuals, businesses, and organizations are exploring, adopting, and implementing distributed computing solutions, and this surge in interest leads to an increase in data traffic. There are two solutions to this increase: one is server optimization or performance enhancement (upgrading a single server to a high-performance server), although even an upgraded server may exceed its capacity and overload; the other is multiple servers. Multi-server configurations are common in scenarios where the demands of an application or service exceed the capabilities of a single server, and distributing tasks across multiple servers is necessary for optimal performance and reliability. With multiple servers comes the issue of load adjusting [1]. Load balancing is one of the critical issues in cloud computing. In a cloud environment, where resources are often dynamically allocated and distributed, load balancing plays a central role in managing workloads efficiently. Load balancing in cloud computing is a technique used to distribute computing workloads and network traffic across multiple servers or resources within a cloud environment. Its primary goals are to optimize resource utilization, prevent individual servers from becoming overloaded, and ensure that the overall system can handle varying levels of demand efficiently.
This paper also discusses a hybrid of the Cat and Mouse Optimization and Grey Wolf Optimization algorithms, and covers cloud computing, load balancing techniques, and load balancing algorithms.
APA, Harvard, Vancouver, ISO, and other styles
16

Yu, Jun Xi, and Guo Huan Lou. "The Study of Server Load Scheduling Strategy." Applied Mechanics and Materials 347-350 (August 2013): 1983–86. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.1983.

Full text
Abstract:
In this paper, the classification and development of server load balancing technology are briefly described, and load balancing algorithms based on server clusters are compared. A server load balancing technology and algorithm based on multiple parameters are proposed. Finally, the load balancing algorithm is tested; testing results show that the method is feasible.
APA, Harvard, Vancouver, ISO, and other styles
17

Johari, Kushal, Gaurav KUMAR Agarwal, and Y. D. S. Arya. "Server Load Balancing Analysis for Client Task Assignment in Distributed Systems." ECS Transactions 107, no. 1 (2022): 17605–21. http://dx.doi.org/10.1149/10701.17605ecst.

Full text
Abstract:
The client-server model is widely used in many distributed systems, in which the proper assignment of clients is an important factor in overall system performance. A large number of clients may exist in a distributed system, establishing communication with one another through a variety of intermediary servers. The primary objective of the overall system is to allocate clients to servers so as to maintain load balance as well as communication cost. There are two criteria for server load evaluation: the first is total load, the second is task scheduling, and the two are diametrically opposed measurements. Finding the appropriate client-server allotment with respect to a distributed system's total load and load balancing appears to be NP-hard. In this research, we examine distinct server load balancing strategies for the assignment of clients in distributed systems.
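Since the abstract notes that the optimal client-server assignment appears to be NP-hard, practical systems use heuristics. A common one (not specific to this paper) is the greedy longest-processing-time rule: place the heaviest clients first, each on the currently least-loaded server. The function below is a sketch under that assumption.

```python
def greedy_assign(client_loads, num_servers):
    """Greedy (LPT) heuristic: place heaviest clients first on the
    least-loaded server. Not optimal, but a standard approximation
    for the NP-hard client-to-server assignment problem."""
    server_loads = [0] * num_servers
    assignment = {}
    for client, load in sorted(enumerate(client_loads),
                               key=lambda cl: cl[1], reverse=True):
        target = min(range(num_servers), key=server_loads.__getitem__)
        server_loads[target] += load
        assignment[client] = target
    return assignment, server_loads

assignment, loads = greedy_assign([2, 7, 4, 1, 6], 2)
# Total work is 20; the greedy rule splits it 10/10 across the two servers.
```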
APA, Harvard, Vancouver, ISO, and other styles
18

Ramana, Kadiyala, and M. Ponnavaikko. "AWSQ: an approximated web server queuing algorithm for heterogeneous web server cluster." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 3 (2019): 2083. http://dx.doi.org/10.11591/ijece.v9i3.pp2083-2093.

Full text
Abstract:
With the rising popularity of web-based applications, cluster-based web servers are a primary and consistent resource in the infrastructure of the World Wide Web. Particularly for dynamic-content and database-driven applications, and especially under heavy load, managing cluster performance is a serious task. Without efficient mechanisms, an overloaded web server cannot deliver good performance. In clusters, this overload condition can be avoided by load balancing mechanisms that share the load among the available web servers. Existing load balancing mechanisms, which were intended to handle static content, suffer substantial performance degradation under database-driven and dynamic content. The most serviceable load balancing approaches under specific conditions are Web Server Queuing (WSQ), Server Content based Queue (QSC) and Remaining Capacity (RC). Considering this, we have proposed an approximated web server queuing mechanism for web server clusters, along with an analytical model for calculating the load of a web server. Requests are classified based on their service time, and the number of outstanding requests at each web server is tracked to achieve better performance. The approximated load of each web server is used for load balancing. The experimental results illustrate the effectiveness of the proposed mechanism in improving the mean response time, throughput and drop rate of the server cluster.
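The abstract's two ingredients, classifying requests by service time and tracking outstanding requests per server, can be sketched as follows. The request classes and their cost estimates are my assumptions, not the paper's model.

```python
# Hypothetical service-time classes: rough cost estimate per request class.
CLASS_COST = {"static": 1, "dynamic": 4, "db": 10}

class ApproxLoadDispatcher:
    """Maintains an approximated load per server: the summed cost of
    outstanding (not yet completed) requests, weighted by class."""

    def __init__(self, num_servers):
        self.approx_load = [0] * num_servers

    def dispatch(self, request_class):
        # Send the request to the server with the smallest approximated load.
        target = min(range(len(self.approx_load)),
                     key=self.approx_load.__getitem__)
        self.approx_load[target] += CLASS_COST[request_class]
        return target

    def finished(self, server, request_class):
        # A completed request no longer counts toward the server's load.
        self.approx_load[server] -= CLASS_COST[request_class]

d = ApproxLoadDispatcher(2)
first = d.dispatch("db")       # heavy database request
second = d.dispatch("static")  # light request avoids the busy server
third = d.dispatch("dynamic")  # server 1 is still lighter (1 < 10)
```

Counting cost rather than raw connection numbers is what lets such a dispatcher keep one expensive database request from being treated the same as one cheap static fetch.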
APA, Harvard, Vancouver, ISO, and other styles
19

Rutten, Daan, and Debankur Mukherjee. "Load Balancing Under Strict Compatibility Constraints." ACM SIGMETRICS Performance Evaluation Review 49, no. 1 (2022): 51–52. http://dx.doi.org/10.1145/3543516.3456275.

Full text
Abstract:
Consider a system with N identical single-server queues and M(N) task types, where each server is able to process only a small subset of possible task types. Arriving tasks select d≥2 random compatible servers, and join the shortest queue among them. The compatibility constraints are captured by a fixed bipartite graph GN between the servers and the task types. When GN is complete bipartite, the mean-field approximation is accurate. However, such dense compatibility graphs are infeasible for large-scale implementation. We characterize a class of sparse compatibility graphs for which the mean-field approximation remains valid. For this, we introduce a novel notion, called proportional sparsity, and establish that systems with proportionally sparse compatibility graphs asymptotically match the performance of a fully flexible system. Furthermore, we show that proportionally sparse random compatibility graphs can be constructed, which reduce the server degree almost by a factor N/ln(N) compared to the complete bipartite compatibility graph.
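The arrival rule described above, sample d compatible servers and join the shortest queue among them (JSQ(d) under compatibility constraints), is easy to sketch. The compatibility map and queue lengths below are made-up example data.

```python
import random

def jsq_d(task_type, compat, queues, d=2, rng=random):
    """Join-the-shortest-of-d-queues, restricted to compatible servers.
    `compat[task_type]` lists the servers able to run this task type."""
    candidates = list(compat[task_type])
    sampled = rng.sample(candidates, min(d, len(candidates)))
    target = min(sampled, key=queues.__getitem__)  # shortest sampled queue
    queues[target] += 1
    return target

# Bipartite compatibility: task type -> servers that can process it.
compat = {"A": [0, 1, 2], "B": [2, 3]}
queues = [0, 0, 5, 1]

# Type "B" has exactly two compatible servers, so with d=2 both are
# sampled and the shorter queue (server 3) always wins.
target = jsq_d("B", compat, queues)
```

When the compatibility graph is dense the sampled pair behaves like a sample from all servers, which is why the mean-field approximation is accurate in the complete bipartite case; the paper's contribution is showing how sparse the graph can be before that breaks.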
APA, Harvard, Vancouver, ISO, and other styles
20

Pimparkhede, Kunal. "Client side and Server Side Load Balancing." International Journal for Research in Applied Science and Engineering Technology 9, no. 11 (2021): 30–31. http://dx.doi.org/10.22214/ijraset.2021.38748.

Full text
Abstract:
Abstract: In the microservice architecture it is vital to distribute load across replicated instances of microservices. Distributing load such that no single instance is overloaded is called load balancing. Often the instances of microservices are replicated across different racks, different data centers or even different geographies. Modern cloud-based platforms offer deployment of microservices across geographically dispersed server instances. Having a system that balances the load across service instances becomes a key success criterion for the correct functioning of a distributed software architecture. Keywords: Load Balancing, Microservices, Distributed software system
APA, Harvard, Vancouver, ISO, and other styles
21

Nguyen, Xuan Phi, and Tran Cong Hung. "LOAD BALANCING ALGORITHM TO IMPROVE RESPONSE TIME ON CLOUD COMPUTING." International Journal on Cloud Computing: Services and Architecture (IJCCSA) 7, December (2018): 01–12. https://doi.org/10.5281/zenodo.1452029.

Full text
Abstract:
Load balancing techniques in cloud computing can be applied at different levels. There are two main levels: load balancing on physical servers and load balancing on virtual servers. Load balancing on a physical server is a policy of allocating physical servers to virtual machines, while load balancing on virtual machines is a policy of allocating resources from the physical server to virtual machines for the tasks or applications running on them. Depending on whether the user's request on cloud computing is for SaaS (Software as a Service), PaaS (Platform as a Service) or IaaS (Infrastructure as a Service), an appropriate load balancing policy applies. When receiving tasks, the cloud data center must allocate them efficiently so that response time is minimized and congestion is avoided. Load balancing should also be performed between different data centers in the cloud to ensure minimum transfer time. In this paper, we propose a virtual machine-level load balancing algorithm that aims to improve the average response time and average processing time of the system in the cloud environment. The proposed algorithm is compared to the Avoid Deadlocks [5], Maxmin [6] and Throttled [8] algorithms, and the results show that our algorithm achieves optimized response times.
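One of the baselines the abstract compares against, the Throttled algorithm, is simple enough to sketch: an index table marks each VM available or busy, a request goes to the first available VM, and when all are busy the request must wait. This is a generic sketch of that well-known policy, not the paper's proposed algorithm.

```python
class ThrottledBalancer:
    """Sketch of the classic Throttled VM load balancing policy:
    assign each request to the first available VM; when every VM is
    busy, return None so the caller can queue or retry the request."""

    def __init__(self, num_vms):
        self.available = [True] * num_vms

    def allocate(self):
        for vm, free in enumerate(self.available):
            if free:
                self.available[vm] = False  # mark the VM busy
                return vm
        return None  # all VMs busy: request must wait

    def release(self, vm):
        self.available[vm] = True  # VM finished its task

lb = ThrottledBalancer(2)
```

Throttled never overloads a VM, but it also leaves requests waiting as soon as all VMs are occupied, which is precisely the response-time behavior a VM-level algorithm like the one proposed here tries to improve on.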
APA, Harvard, Vancouver, ISO, and other styles
22

Denisov, O. V. "Discrete-event simulation of load distribution between server stations." Journal of Physics: Conference Series 2182, no. 1 (2022): 012018. http://dx.doi.org/10.1088/1742-6596/2182/1/012018.

Full text
Abstract:
Abstract A simulation model of computational load balancing in a server complex using a balancing server is proposed. The server load balancing model was developed using MATLAB/Simulink/SimEvents/Stateflow tools. The SimEvents-based model makes it possible to simulate the server complex as a queuing system and to take into account the sporadic occurrence of requests. The Stateflow-based event-driven model simulates the system with a variable time delay in data transmission channels and simulates state transitions for the servers. This discrete-event simulation model allows evaluating the rational use of computational resources and the reduction of service time. In addition, the model allows investigating the effectiveness of load balancing systems at the stage of their development.
APA, Harvard, Vancouver, ISO, and other styles
23

Prasad, Vinay Kumar. "Optimized Load Balancing Using Adaptive Algorithm in Cloud Computing with Round Robin Technique." International Journal for Research in Applied Science and Engineering Technology 10, no. 7 (2022): 134–49. http://dx.doi.org/10.22214/ijraset.2022.45225.

Full text
Abstract:
Abstract: Developments in the field of computer networks have been carried out by many groups; however, a number of problems remain, one of which is server load. For this reason, a load balancing system is implemented with the aim of relieving server load that exceeds capacity, and of optimizing server load before and after implementing the round robin load balancing algorithm on the cloud servers. The method used is the comparative method, i.e., research that compares and analyzes two or more phenomena: the least connection algorithm, as the previous algorithm, is compared with the round robin algorithm. Load balancing with both algorithms is tested using a tool called Httperf, which reports values for the chosen parameters. The parameters used are throughput, response time, errors and CPU utilization. The test results show that load balancing with the round robin algorithm is more effective at handling server load than the least connection algorithm.
APA, Harvard, Vancouver, ISO, and other styles
24

Riskiono, Sampurna Dadi, and Donaya Pasha. "Analisis Perbandingan Server Load Balancing dengan Haproxy & Nginx dalam Mendukung Kinerja Server E- Learning." Jurnal Telekomunikasi dan Komputer 10, no. 3 (2020): 135. http://dx.doi.org/10.22441/incomtech.v10i3.8751.

Full text
Abstract:
The internet continues to grow, as can be seen from the variety of systems that can be accessed online, including e-learning systems. This condition must be supported by adequate server infrastructure: the server must be able to handle the surge of requests made by every user. It is therefore necessary to build a server system using the load balancing method. A load balancing system is able to distribute the load evenly across each server in its environment. For this reason, the load balancing server, which receives user requests before distributing them to the individual servers, needs to be tested. This study tests Haproxy and Nginx by measuring two variables, throughput and response time. The test results show that the load balancing implementation with Haproxy has a response time of 585 ms, smaller than Nginx's response time of 1180.58 ms in the 300/300 sec connection test. For throughput, load balancing with Haproxy also shows better performance, at 896.48 Kb/s, higher than Nginx's 848.52 Kb/s in the 300/300 sec connection test. Based on the tests conducted, the load balancing implementation with Haproxy performs better than Nginx.
APA, Harvard, Vancouver, ISO, and other styles
25

Nugrahadi, Dodon Turianto, Rudy Herteno, and Muhammad Anshari. "PENGARUH IMPLEMENTASI LOAD BALANCING DAN TUNING WEB SERVER PADA RESPONSE TIME RASPBERRY PI." KLIK - KUMPULAN JURNAL ILMU KOMPUTER 6, no. 2 (2019): 211. http://dx.doi.org/10.20527/klik.v6i2.249.

Full text
Abstract:
The rapid development of technology, the increase in web-based systems and the development of microcontroller devices have an impact on the ability of web servers to respond to client requests. This study aims to analyze the load balancing method with the round robin algorithm, and tuning, and their significant influence on the response time and the number of clients that can be handled by a web server on a microcontroller device. In this study, Stresstool testing gave response times of 2064, 2331.4 and 1869.2 ms without load balancing, and 2270, 2306.2 and 2202 ms with load balancing, for 700 requests served by the web servers. It can be concluded that the response times of web servers using load balancing are smaller than those of web servers without load balancing. Furthermore, tuning gave a response time of 3103.4 ms for 1100 requests, so tuning can reduce response time and increase the number of requests served. A significance-level calculation shows that the round robin load balancing and tuning configuration has a significant effect on the response time and the number of clients on the microcontroller. Keywords: Web server, Raspberry, Load balancing, Response time, Stresstool.
APA, Harvard, Vancouver, ISO, and other styles
26

Singh, Harikesh, and Shishir Kumar. "Dispatcher Based Dynamic Load Balancing on Web Server System." International Journal of System Dynamics Applications 1, no. 2 (2012): 15–27. http://dx.doi.org/10.4018/ijsda.2012040102.

Full text
Abstract:
Increasing traffic in the network creates heavy congestion as bulk data transfers grow. Performance and high availability of servers are important factors in resolving this problem using cluster-based systems. Several low-cost servers in a load-sharing cluster system are connected to high-speed networks, and load balancing techniques are applied between the servers, offering high computing power and high availability. A distributed website server can provide the scalability and flexibility to cope with growing client demands. The efficiency of a replicated web server system depends on the way incoming requests are distributed among these replicas. Distributed web-server architectures schedule client requests among the multiple server nodes in a user-transparent way that affects scalability and availability. The aim of this paper is the development of load balancing techniques for distributed web-server systems.
APA, Harvard, Vancouver, ISO, and other styles
27

Singh, Harikesh, and Shishir Kumar. "Analysis & Minimization of the Effect of Delay on Load Balancing for Efficient Web Server Queueing Model." International Journal of System Dynamics Applications 3, no. 4 (2014): 1–16. http://dx.doi.org/10.4018/ijsda.2014100101.

Full text
Abstract:
Load balancing applications introduce delays due to load relocation among various web servers, and these delays depend upon the design of the balancing algorithms and the resources to be shared in large, wide-area applications. The performance of web servers depends upon the efficient sharing of resources and can be evaluated by the overall completion time of the tasks under the load balancing algorithm. Each load balancing algorithm introduces a delay in task allocation among the web servers, yet still improves web server performance dynamically. As a result, the queue length of each web server and the average waiting time of tasks decrease at load balancing instants under zero, deterministic, and random types of delay. In this paper, the effects of delay due to load balancing are analyzed with respect to two factors: average queue length and average waiting time of tasks. In the proposed Ratio Factor Based Delay Model (RFBDM), these factors are minimized, improving the functioning of the web server system based on the average task completion time of each web server node. Based on the ratio of average task completion times, the average queue length and average waiting time of the tasks allocated to the web servers are analyzed and simulated with Monte Carlo simulation. The simulation results show that the effects of delay, in terms of average queue length and average waiting time, are smaller under the proposed model than under existing delay models of web servers.
APA, Harvard, Vancouver, ISO, and other styles
28

Panda, Sanjaya Kumar, Swati Mishra, and Satyabrata Das. "An Efficient Intra-Server and Inter-Server Load Balancing Algorithm for Internet Distributed Systems." International Journal of Rough Sets and Data Analysis 4, no. 1 (2017): 1–18. http://dx.doi.org/10.4018/ijrsda.2017010101.

Full text
Abstract:
The growing popularity of Internet Distributed System has drawn enormous attention in business and research communities for handling large number of client requests. These requests are managed by a set of servers. However, the requests may not be equally distributed due to their random nature of arrivals. The optimal assignment of the requests to the servers is a well-known NP-hard problem. Therefore, many algorithms have been proposed to address this problem. However, these algorithms suffer from an excessive number of comparisons. In this paper, a Swapping-based Intra- and inter-Server (SIS) load balancing with padding algorithm is proposed for its solution. The algorithm undergoes a three-phase process to balance the loads among the servers. The proposed algorithm is compared with a client-server load balancing algorithm and the performance is measured in terms of the number of load comparisons and load factor. The simulation outcomes show the efficacy of the proposed algorithm.
APA, Harvard, Vancouver, ISO, and other styles
29

Cardinaels, Ellen, Sem C. Borst, and Johan S. H. van Leeuwaarden. "Job assignment in large-scale service systems with affinity relations." Queueing Systems 93, no. 3-4 (2019): 227–68. http://dx.doi.org/10.1007/s11134-019-09633-y.

Full text
Abstract:
Abstract We consider load balancing in service systems with affinity relations between jobs and servers. Specifically, an arriving job can be assigned to a fast, primary server from a particular selection associated with this job or to a secondary server to be processed at a slower rate. Such job–server affinity relations can model network topologies based on geographical proximity, or data locality in cloud scenarios. We introduce load balancing schemes that assign jobs to primary servers if available, and otherwise to secondary servers. A novel coupling construction is developed to obtain stability conditions and performance bounds. We also conduct a fluid limit analysis for symmetric model instances, which reveals a delicate interplay between the model parameters and load balancing performance.
APA, Harvard, Vancouver, ISO, and other styles
30

Shivaliya, Shikha, and Vijay Anand. "Design of Load Balancing Technique for Cloud Computing Environment." ECS Transactions 107, no. 1 (2022): 2911–18. http://dx.doi.org/10.1149/10701.2911ecst.

Full text
Abstract:
Cloud computing allows for the provision of IT resources on demand and has various advantages. Because the majority of firms have shifted their activities to the cloud, data centers are frequently flooded with sporadic loads. When dealing with high network traffic in the cloud, it is necessary to balance the load among servers, which is where load balancing helps. The primary goal of load balancing is to distribute demand evenly among all available servers such that no server is under- or overloaded. Load balancing is the process of dispersing load among several nodes to make the best use of resources when a node is overwhelmed with work. When a node is overburdened, load balancing is essential: the load is dispersed to the remaining optimal nodes.
APA, Harvard, Vancouver, ISO, and other styles
31

Sakthivela, M., et al. "An Analysis of Load Balancing Algorithm Using Software-Defined Network." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 10 (2021): 5580–89. http://dx.doi.org/10.17762/turcomat.v12i10.5368.

Full text
Abstract:
A data center is a secured space that houses many disks, switches, servers, routers, and other computer hardware; it is a hardware solution for users located near the data center. Cloud computing (CC) is the newer incarnation of the data center, capable of delivering computing services to users. A cloud service provides users with a data center set-up when needed or preferred by the users themselves. In general, a cloud network service is limited to a particular location or zone: if the user's target area is nearby, that server completes the action the user needs. Only certain servers can provide service for a given user, so in such cases some servers remain idle. If the service provided by the servers is not used properly, processing and managing larger volumes of traffic becomes difficult, and more network traffic puts more pressure and complexity on the data center. In such cases, load balancing (LB) is the preferred process for reducing network failure and degradation across the entire network. Software-defined networking is an emerging approach capable of managing the entire network and also giving a general view of the network and the configuration upgrades it needs. In this research paper, a software-defined network is developed for LB algorithms. A Dynamic Server LB algorithm (Dserv-LB) is used with OpenFlow switches in the software-defined network; it is a packet-level LB algorithm in which requests are forwarded directly to the web server with the highest level of free server resources. The results show that Dserv-LB is capable of improving the performance of the entire network and of using server resources properly.
APA, Harvard, Vancouver, ISO, and other styles
32

Mamta, Dhanda, and Gupta Parul. "SOFTWARE ENABLED LOAD BALANCER BY INTRODUCING THE CONCEPT OF SUB SERVERS." International Journal of Engineering Science and Humanities 2, no. 2 (2012): 1–10. http://dx.doi.org/10.62904/45as7c82.

Full text
Abstract:
In computer networking, load balancing is a technique to spread work between two or more computers, network links, CPUs, hard drives, or other resources, in order to achieve optimal resource utilization, maximize throughput, and minimize response time. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The balancing service is usually provided by a dedicated program or hardware device (such as a multilayer switch). One of the most common applications of load balancing is to provide a single Internet service from multiple servers, sometimes known as a server farm. Commonly load-balanced systems include popular web sites, large Internet Relay Chat networks, high-bandwidth File Transfer Protocol sites, NNTP servers and DNS servers. In this paper we propose a software-enabled load balancing model that introduces the concept of sub-servers for regional services to relieve the overhead on the main server.
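The sub-server idea, a main server that only routes requests while regional sub-servers do the work, can be sketched as a region-keyed routing table with round robin inside each region. The region names, sub-server names, and fallback behavior below are all illustrative assumptions, not the paper's design.

```python
# Hypothetical regions and their sub-servers; the main server only routes.
SUB_SERVERS = {
    "eu": ["eu-1", "eu-2"],
    "us": ["us-1"],
}
DEFAULT = "main"

def route(region, counters):
    """Delegate a request to a regional sub-server (round robin within
    the region); unknown regions fall back to the main server."""
    pool = SUB_SERVERS.get(region)
    if not pool:
        return DEFAULT
    n = counters.get(region, 0)
    counters[region] = n + 1       # advance the region's round robin cursor
    return pool[n % len(pool)]

counters = {}
```

Because the main server only performs this table lookup, its per-request cost stays constant no matter how much regional traffic grows, which is the overhead reduction the paper aims for.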
APA, Harvard, Vancouver, ISO, and other styles
33

Nuraini, Rini. "Implementasi Metode Load Balancing Sebagai Upaya Meningkatkan Kinerja Server." Journal of Information System Research (JOSH) 3, no. 4 (2022): 507–14. http://dx.doi.org/10.47065/josh.v3i4.1792.

Full text
Abstract:
The use of the internet causes problems for which solutions must be found. One of them is the service provider's server load, which continues to increase, triggered by the growing number of clients over time. Some sites have even reported receiving hundreds of thousands of simultaneous connection requests from clients. Given these problems, research is needed on how to design and build a server system able to handle the increase in incoming requests so that the server load can be relieved. The purpose of this research is to enable the service provider's server to improve its service to clients. There are several approaches to this problem, one of which is the load balancing method: with load balancing, large incoming loads are distributed across each of the service provider's servers. In the tests carried out, the scalability of the system increased. When a system with load balancing was given 10000 connections, the tests showed an average response time of 44.42 ms, whereas the system without load balancing showed an average response time of 185.88 ms. From these results, the average response time of the server system with load balancing is smaller than without load balancing, so the service performance of the system can be continuously improved by implementing load balancing.
APA, Harvard, Vancouver, ISO, and other styles
34

Марусик, Андрій Миколайович. "Web-server dynamic load balancing system." Information systems, mechanics and control, no. 12 (July 1, 2015): 25. http://dx.doi.org/10.20535/2219-380412201543678.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Lenhardt, Jorg, Kai Chen, and Wolfram Schiffmann. "Energy-Efficient Web Server Load Balancing." IEEE Systems Journal 11, no. 2 (2017): 878–88. http://dx.doi.org/10.1109/jsyst.2015.2465813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Hou, Weiguang, Gang He, and Xinwen Liu. "Dynamic load balancing scheme on massive file transfer system." MATEC Web of Conferences 232 (2018): 04004. http://dx.doi.org/10.1051/matecconf/201823204004.

Full text
Abstract:
In this paper, a dynamic load balancing scheme applied to a massive file transfer system is proposed. The scheme is designed to load balance an FTP server cluster. Instead of recording connection counts, runtime load information from each server is periodically collected and used in combination with static performance parameters collected at server startup to calculate the weight of each server. An improved Weighted Round-Robin algorithm is adopted in this scheme: the weight of each server is initialized from static performance parameters and dynamically modified according to the runtime load. An Apache Zookeeper cluster is used to receive all of this information and informs the director of runtime load variation and offline behavior of any server. To evaluate the effect of this scheme, a comparative experiment with LVS is also conducted.
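The two halves of this scheme, recomputing a server's weight from static capacity plus runtime load and then scheduling with weighted round robin, can be sketched as below. The weight formula is my assumption (the paper does not specify one here); the WRR part is the textbook expansion-based variant, not the paper's improved algorithm.

```python
def recompute_weight(static_capacity, runtime_load):
    """Assumed weight formula: static capacity discounted by runtime
    load (a fraction in [0, 1]); floor of 1 so no server is starved."""
    return max(1, round(static_capacity * (1.0 - runtime_load)))

def wrr_schedule(weights, num_requests):
    """Plain weighted round robin: repeat each server index `weight`
    times and cycle through the expanded list."""
    expanded = [s for s, w in enumerate(weights) for _ in range(w)]
    return [expanded[i % len(expanded)] for i in range(num_requests)]

# Two servers of equal static capacity; server 0 is 50% loaded at runtime.
weights = [recompute_weight(4, 0.5), recompute_weight(4, 0.0)]
schedule = wrr_schedule(weights, 6)
```

Re-running `recompute_weight` each collection period is what makes the scheme dynamic: a server whose runtime load climbs automatically receives a smaller share of the next round of requests.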
APA, Harvard, Vancouver, ISO, and other styles
37

G A, Akash. "Optimizing Cloud Application Performance: A Survey on Load Balancing Techniques." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem34983.

Full text
Abstract:
In the contemporary World Wide Web, cloud applications must offer high availability, operate under dynamic loads and function at very high quality. Load balancing is a proven technique for achieving these goals by distributing load across servers or other resources. This distribution optimizes performance, scalability, reliability and resource use, making the approach appealing for larger organizations to adopt. The advantages of load balancing include reduced response time, increased throughput and fault tolerance, because the network traffic load is spread evenly among different servers, avoiding bottleneck servers. Techniques such as Round Robin, Least Connection, and Weighted Distribution make it possible for systems to scale horizontally to handle more traffic. As cloud services become popular, managing load becomes very important, since some applications or services may be more resource-intensive than others. A load balancer distributes incoming requests according to the current loads on the various servers so that no server is too busy or idle. Load balancers also provide fault tolerance, in the sense that traffic is handled by other healthy servers when failures occur, making the solutions highly available and reliable. Load balancing algorithms, ranging from basic rule-based schemes to complex reinforcement learning algorithms, are crucial for resource allocation, energy management, and system stability. These algorithms will become increasingly important as cloud environments develop further, improving service delivery and the management of required resources, benefiting both the economy and the environment. INDEX TERMS: Performance, Scalability, Response Times, Latency, Horizontal Scaling, Load Balancing
APA, Harvard, Vancouver, ISO, and other styles
38

He, Hui, Yana Feng, Zhigang Li, Zhenguang Zhu, Weizhe Zhang, and Albert Cheng. "Dynamic load balancing technology for cloud-oriented CDN." Computer Science and Information Systems 12, no. 2 (2015): 765–86. http://dx.doi.org/10.2298/csis141104025h.

Full text
Abstract:
With the soaring demand for Internet content services, the content delivery network (CDN), one of the most effective content acceleration techniques, is widely applied to Internet services. Content routing functions in a CDN are generally realized by the load balancing system, and the effectiveness of the load balancing strategy directly determines response speed and user experience (UE). This paper extracts the most important factors influencing CDN load from common network services and proposes the Variable Factor Weighted Least Connection algorithm. The proposed algorithm performs real-time computation and dynamic regulation, taking into account the effect of network applications on the server load index, performance changes of the server and workload changes. It has been applied successfully in an LVS kernel system. The experiments confirm that a CDN load scheduling system with Variable Factor Weighted Least Connection can balance loads among cluster servers dynamically according to changes in the servers' processing capacity, thus providing users with the desired services and contents during large-scale concurrent accesses.
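The baseline this algorithm extends, weighted least connection, picks the server with the smallest active-connections-to-weight ratio. The sketch below shows that baseline only; the paper's contribution is the "variable factor" that adjusts the weights at runtime, which is not modeled here.

```python
def weighted_least_connection(connections, weights):
    """Classic weighted least-connection choice: the server with the
    smallest ratio of active connections to weight wins. The paper's
    variable factor would recompute `weights` dynamically; here the
    weights are simply given as inputs."""
    return min(range(len(weights)),
               key=lambda s: connections[s] / weights[s])

# Server 0 has the most connections but also the most capacity (weight 5),
# so its ratio 10/5 = 2.0 is the smallest and it receives the request.
connections = [10, 10, 3]
weights = [5, 2, 1]
choice = weighted_least_connection(connections, weights)
```

Using the ratio rather than the raw connection count is what lets a powerful server legitimately carry more connections than a weak one.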
APA, Harvard, Vancouver, ISO, and other styles
39

Yao, Ming Hai, Na Wang, and Jin Shan Li. "The Multi-Server Load Balancing Systems Research in Large-Website Construction." Applied Mechanics and Materials 713-715 (January 2015): 2378–81. http://dx.doi.org/10.4028/www.scientific.net/amm.713-715.2378.

Full text
Abstract:
A load balancing method based on a Linux server cluster is proposed for large-website construction. The load balancing of the server cluster is achieved with the SSH framework, an Oracle database, and Ajax technologies. The server cluster load balancing method not only improves the performance, scalability, and availability of the system and reduces the execution time of multi-tasking, but also eliminates network bottlenecks and improves the flexibility and reliability of the network. To verify the validity of the algorithm, a large amount of experimental data was used. The proposed load balancing algorithm is compared with the traditional non-load-balancing approach in terms of CPU utilization and system response time. The experimental results show that the load balancing technology can reduce the system response time and CPU utilization of the servers.
APA, Harvard, Vancouver, ISO, and other styles
40

Putra, Muhammad Aldi Aditia, Iskandar Fitri, and Agus Iskandar. "Implementasi High Availability Cluster Web Server Menggunakan Virtualisasi Container Docker." JURNAL MEDIA INFORMATIKA BUDIDARMA 4, no. 1 (2020): 9. http://dx.doi.org/10.30865/mib.v4i1.1729.

Full text
Abstract:
The increasing demand for information on the internet causes the traffic load on web servers to increase. This can overload a web server service with requests, causing the server to go down. Based on previous research, applying load balancing can reduce the traffic burden on the web server. This research applies load balancing to servers with the round robin and least connection algorithms, using a single server as a comparison. The parameters measured are throughput, response time, requests per second, and CPU utilization. The test results of the HAProxy load balancing system show that the least connection algorithm is superior to the round robin algorithm. The generated requests-per-second value was 2607,141 req/s with a throughput of 9.25 MB/s for least connection, versus 2807,171 req/s and 9.30 MB/s for round robin.
APA, Harvard, Vancouver, ISO, and other styles
41

Pentanugraha, Elsan, Agus Sehatman Saragih, and Efrans Christian. "Analisis Kinerja Load Balancing Webserver Menggunakan Haproxy Terintegrasi Dengan Grafana Sebagai Monitoring Dan Notifikasi Telegram." Journal of Information Technology and Computer Science 4, no. 1 (2024): 68–80. http://dx.doi.org/10.47111/jointecoms.v4i1.13191.

Full text
Abstract:
In today's digital era, websites are very important for organizations and companies. However, the more visitors a website receives, the greater the load on the server, which can cause the server to slow down or go down. A solution to this problem is load balancing. HAProxy is open-source software that can be used as a load balancer. The monitoring platform used in this study is Grafana, which monitors server resources in real time and provides easy-to-understand data visualization. The goals of this study are to analyze the performance of web server load balancing, to describe the performance of HAProxy as a load balancer handling heavy loads on the web server, and to provide an alternative solution for improving web server performance. The research stages comprise requirements analysis, topology design, implementation, and web server testing using Apache JMeter. Apache JMeter is used to test the reliability and stability of the web server when handling large request spikes in single-server and load-balancing scenarios. Load balancing performance is measured using several parameters: throughput, response time, error rate, memory utilization, and CPU utilization. Based on the analysis of web server load balancing performance using HAProxy integrated with Grafana for monitoring and Telegram notifications, it can be concluded that Grafana integrated with HAProxy is effective as a web server performance monitoring tool, providing clear and easy-to-understand data visualization that simplifies analysis and evaluation of web server performance. The use of load balancing also increases server availability in handling requests, as shown by a drastic reduction in the server failure rate.
APA, Harvard, Vancouver, ISO, and other styles
42

Riska, Riska, and Hendri Alamsyah. "Analisa Dan Perancangan Load Balancing Web Server Mengunakan HAProxy." Techno.Com 20, no. 4 (2021): 552–65. http://dx.doi.org/10.33633/tc.v20i4.5225.

Full text
Abstract:
This study aims to design a load-balancing web server system and to analyze the existing problems as a basis for designing a server that addresses them. The research uses a design-based (experimental) approach: the server condition is analyzed before and after the load-balancing web server and the website application are implemented using HAProxy. The results show that web server load balancing with HAProxy can improve website server performance, achieving a server availability (uptime) of 99.49% and an average click time of 7.291 ms per user, as well as data consistency on the Universitas Dehasen Bengkulu website server. To maintain data consistency, the server development also makes use of data replication facilities. With this solution, server availability is maintained and data consistency is guaranteed.
APA, Harvard, Vancouver, ISO, and other styles
43

Markov, A. N. "Cluster system load balancing model with consideration of hardware characteristics of server hardware." Informatics 19, no. 4 (2022): 84–93. http://dx.doi.org/10.37661/1816-0301-2022-19-4-84-93.

Full text
Abstract:
Objectives. To upgrade and complement the existing load balancing model in multi-server systems, taking into account the hardware characteristics of the server equipment, as well as the most loaded components of the server hardware in the video conferencing service cluster used in educational processes and distance education. Methods. The existing mathematical model of load balancing as a mass exchange system is considered, with significant changes introduced: penalties for equipment downtime and penalties for waiting in a queue depend on the load on the server hardware components in the cluster architecture of the video conferencing service. Results. Formulas are given for calculating the total performance of a cluster of n servers with the maximum and minimum load of server hardware components in a videoconferencing system cluster. Conclusion. A modeling complex has been developed to test the mathematical model on a system of up to n < 10 servers in a videoconferencing system cluster. Based on the results of the modeling complex calculations, it was concluded that the existing load balancing algorithm for the selected BigBlueButton video conferencing service needs to be upgraded.
APA, Harvard, Vancouver, ISO, and other styles
44

Sinlae, Alfry Aristo Jansen, Muhammad Bagir, and M. Hadi Prayitno. "Analisis Perbandingan Algoritma Round-Robin dengan Least-Connection Terhadap Peningkatan Nilai Throughput Pada Layanan Web Server." JURIKOM (Jurnal Riset Komputer) 9, no. 5 (2022): 1584. http://dx.doi.org/10.30865/jurikom.v9i5.4995.

Full text
Abstract:
The massive growth of the current internet network has triggered an increase in the number of users connected to various server services. These conditions must be handled by a good server system. This can be accomplished by deploying many servers, because with many servers the incoming load is spread out. One method that can be used to distribute the received load across a number of servers is load balancing. This study aims to obtain the best throughput value from the load balancing method using the round-robin and least-connection algorithms. In connection tests with request rates of 500 connections/second for 1000 requests and 600 connections/second for 1200 requests, load balancing with the least-connection algorithm looks slightly better. This is because the distribution of active connections can still be handled by each server. However, during the tests at 700 connections/second for 1400 requests, 800 connections/second for 1600 requests, and 900 connections/second for 1800 requests, there was a change in the ability to respond to incoming requests. This has a significant impact on the throughput provided by the server when processing a request.
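The intuition behind comparisons like this one, that least connection adapts when some requests take much longer than others, can be illustrated with a toy event-driven simulation. The request durations, arrival pattern, and server count below are invented for illustration and are not the paper's test setup:

```python
import heapq

# Toy simulation: 2 servers, one request arrives per time unit, each
# request occupies its server for `duration` time units. We track the
# worst per-server backlog each policy produces.
def simulate(policy, durations):
    conns = [[], []]  # min-heaps of finish times, one per server
    max_backlog = 0
    for t, d in enumerate(durations):
        # Drop connections that have completed by time t.
        for h in conns:
            while h and h[0] <= t:
                heapq.heappop(h)
        if policy == "round_robin":
            s = t % 2
        else:  # least_connection
            s = min((0, 1), key=lambda i: len(conns[i]))
        heapq.heappush(conns[s], t + d)
        max_backlog = max(max_backlog, len(conns[s]))
    return max_backlog

# Alternating long/short requests: round robin keeps sending every long
# request to the same server, least connection spreads them out.
durations = [10, 1] * 10
print(simulate("round_robin", durations))       # larger backlog
print(simulate("least_connection", durations))  # smaller backlog
```

With this workload, round robin piles all the 10-unit requests onto one server while the other sits nearly idle, which mirrors the throughput collapse the abstract reports at higher request rates.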
APA, Harvard, Vancouver, ISO, and other styles
45

Bauer, Thomas, Manfred Reichert, and Peter Dadam. "Intra-Subnet Load Balancing in Distributed Workflow Management Systems." International Journal of Cooperative Information Systems 12, no. 03 (2003): 295–323. http://dx.doi.org/10.1142/s0218843003000760.

Full text
Abstract:
For enterprise-wide and cross-organizational process-oriented applications, the execution of workflows (WF) may generate a very high load. This load may affect WF servers as well as the underlying communication network. To improve system scalability, several approaches for distributed WF management have been proposed in the literature. They have in common that different partitions of a WF instance graph may be controlled by different WF servers from different subnets. The control over a particular WF instance, therefore, may be transferred from one WF server to another during run-time if this helps to reduce the overall communication load. Thus far, such distributed approaches assume that exactly one WF server resides in each subnet. A single server per subnet, however, may become overloaded. In this paper, we present and verify a novel approach for replicating WF servers in a distributed workflow management system. It enables an arbitrary and changeable distribution of the load to the WF servers of the same subnet, without requiring additional communication.
APA, Harvard, Vancouver, ISO, and other styles
46

Zhang, Li Na, and Xue Si Ma. "Load Balancing in the Parallel Queueing Web Server System." Applied Mechanics and Materials 143-144 (December 2011): 346–49. http://dx.doi.org/10.4028/www.scientific.net/amm.143-144.346.

Full text
Abstract:
With the explosive use of the internet, contemporary web servers are susceptible to overloads during which their services deteriorate drastically and often lead to denial of service. Many companies address this problem using multiple web servers with a front-end load balancer. Load balancing has been found to provide an effective and scalable way of managing the ever-increasing web traffic, and it is one of the central problems that must be solved in a parallel-queueing web server system. To analyze load balancing, this paper presents a queueing system with two web servers. First, the centralized load balancing system is considered. Next, two routing policies are studied, and the average response time and the rejection rate are derived. Finally, some of the results are considered further.
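The kind of quantity this paper derives, the average response time of a two-server system under a given routing policy, can be sketched with the standard M/M/1 formula. The random-split policy and the numeric rates below are illustrative assumptions, not the paper's model or results:

```python
# Average response time of two parallel M/M/1 web servers when arrivals
# are split randomly (50/50). Standard queueing formula; rates invented.
def mm1_response_time(lam, mu):
    # Mean response time of an M/M/1 queue: T = 1 / (mu - lam).
    # Requires lam < mu for the queue to be stable.
    assert lam < mu, "queue is unstable"
    return 1.0 / (mu - lam)

total_arrival_rate = 12.0  # requests/s offered to the balancer
service_rate = 10.0        # requests/s each server can handle

# Random split: each server sees half of the arrivals.
per_server = total_arrival_rate / 2
print(mm1_response_time(per_server, service_rate))  # 0.25 (seconds)
```

Note that neither server could handle the full 12 req/s alone; splitting the stream keeps each queue stable, which is the basic scalability argument for the front-end balancer.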
APA, Harvard, Vancouver, ISO, and other styles
47

Alharbi, Fawaz, and Mustafa Mustafa. "Two-Tier Load Balancer as a Solution to a Huge Number of Servers." Journal of Engineering and Applied Sciences 9, no. 1 (2022): 1. http://dx.doi.org/10.5455/jeas.2022050101.

Full text
Abstract:
The high number of users, connected devices, and services on the Internet produces heavy traffic and load on web servers, causing a degradation in the quality of Internet services. A possible solution to this problem is to use a cluster of web servers. The cluster requires a load balancer to provide scalability and high performance for the services offered. In this architecture, the main load balancer is the only entry point to the server cluster. In this paper, the researchers propose a two-tier load balancer, rather than a single one, to achieve more scalability and reduce the load on the main load balancer. The study compared three metrics: average response time, load balancer CPU utilization, and servers' CPU utilization. The comparison uses three algorithms (Round Robin, Number of Connections, and Least Load) across two experiments. The results showed that the multi-tier load balancing method offered better CPU utilization than a single-tier method for the Round Robin and Server Load algorithms. However, the single-tier method provided better response time for all three algorithms. Moreover, the Round Robin and Server Load algorithms using the multi-tier method balanced CPU utilization across all servers. This shows that the multi-tier method handles huge traffic and large numbers of servers with better CPU utilization.
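A two-tier arrangement like the one proposed here can be sketched as a first tier that partitions clients among several second-tier balancers, each of which schedules its own server pool. The class names, hashing choice, and pool layout below are invented for illustration:

```python
import hashlib

# Sketch of a two-tier balancer: tier 1 hashes each client to one of a
# few tier-2 balancers; each tier-2 balancer round-robins over its own
# server pool, so no single balancer sees all the traffic.
class Tier2Balancer:
    def __init__(self, servers):
        self.servers = servers
        self.i = 0

    def pick(self):
        s = self.servers[self.i % len(self.servers)]
        self.i += 1
        return s

class Tier1Balancer:
    def __init__(self, balancers):
        self.balancers = balancers

    def route(self, client_ip):
        # A stable hash keeps a client on the same tier-2 balancer.
        h = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
        return self.balancers[h % len(self.balancers)].pick()

lb = Tier1Balancer([
    Tier2Balancer(["a1", "a2"]),
    Tier2Balancer(["b1", "b2"]),
])
# Two requests from one client land in the same pool, rotated within it.
print(lb.route("10.0.0.1"), lb.route("10.0.0.1"))
```

The design trade-off matches the paper's finding: the extra hop costs some response time, but each tier-2 balancer only handles a fraction of the connections.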
APA, Harvard, Vancouver, ISO, and other styles
48

Pratama, Kresna Adi, Ridho Taufiq Subagio, Muhammad Hatta, and Victor Asih. "IMPLEMENTASI LAOD BALANCING PADA WEB SERVER MENGGUNAKAN APACHE DENGAN SERVER MIRROR DATA SECARA REAL TIME." Jurnal Digit 11, no. 2 (2021): 178. http://dx.doi.org/10.51920/jd.v11i2.203.

Full text
Abstract:
PT. Trimitra Data Teknologi is a company operating in the field of information technology, and its website serves as one of the communication bridges between clients and the company. The large number of clients accessing the site makes the load on the company's web server heavy and causes a problem: server downtime that makes it difficult for clients to access the company website. To help address this, a load balancing method with a request counting algorithm is applied, which aims to divide the load evenly across the web servers and reduce the response time between client and server; the load is shared among the member servers registered with the load balancing server. With load balancing, the servers work more effectively thanks to a high-availability scheme in which, when one server goes down, its work is taken over by another server. In addition to load balancing, a mirror server is deployed, which helps maximize the load balancing method through automatic replication of both website content and databases among the web servers that are load balancing members. As a result, the company web server becomes a system that serves clients well, because the load is well distributed and response times are small, so clients have no difficulty accessing the company website. Keywords: Load Balance, Web Server, Mirror Server.
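The request counting idea used in this study can be sketched as a scheduler that sends each new request to the member that has served the fewest requests so far. This is a minimal sketch in the spirit of request counting with invented member names, not the study's actual code:

```python
# Sketch of a request counting algorithm: route each request to the
# load-balance member with the lowest request count, then increment it.
counts = {"mirror1": 0, "mirror2": 0}

def pick_member():
    member = min(counts, key=counts.get)
    counts[member] += 1
    return member

picks = [pick_member() for _ in range(4)]
print(picks)   # ['mirror1', 'mirror2', 'mirror1', 'mirror2']
print(counts)  # {'mirror1': 2, 'mirror2': 2}
```

With equal-cost members this degenerates to round robin, but the counters also let the balancer skip a member that has been taken out of rotation without losing the even split when it returns.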
APA, Harvard, Vancouver, ISO, and other styles
49

Budiyono, Setiyo Eko, Tatang Rohana, and Tohirin Al Mudzakir. "Penggunaan Load Balancing Pada Web Server Lokal Dengan Metode Policy Based Routing." Jurnal SAINTIKOM (Jurnal Sains Manajemen Informatika dan Komputer) 20, no. 2 (2021): 118. http://dx.doi.org/10.53513/jis.v20i2.3742.

Full text
Abstract:
The development of computer network technology has become an absolute necessity as a means of supporting activities in today's institutions. SMK Jayabeka 01 Karawang uses computer network technology to evaluate its students' abilities through computer-based exams. The exams are conducted over a local network with a web-server-based application. The computer network used for computer-based exams still has a shortcoming: clients must be assigned manually to the exam server they access, since the exam application is HTTP-based. When a web server fails, clients must be moved manually to another server. To solve this problem, a network with a load balancing configuration was built so that clients can automatically access another web server when a server link fails or loses its connection. The load balancing method used is Policy Based Routing, which groups access rights based on Src-address and Dst-address. Policy based routing performs its process based on per-packet load balancing, per-connection load balancing, and per-address-pair load balancing. With the load balancing method, clients continue to receive service even when a server link experiences a connection failure, so the overall function of the network is not disturbed.
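The per-address-pair mode of policy based routing can be sketched as hashing the (source, destination) pair over the set of currently live links, so one flow always follows the same link and clients fail over automatically when a link dies. Link names here are invented for illustration, and real policy based routing is configured in the router, not in application code:

```python
# Sketch of per-address-pair load balancing: hash the (src, dst) pair
# over the live links so a given flow sticks to one link, with automatic
# failover when a link goes down. Python's hash() is stable within one
# process run, which is enough for this illustration.
def pick_link(src, dst, links, link_up):
    alive = [l for l in links if link_up[l]]
    if not alive:
        raise RuntimeError("no server link available")
    return alive[hash((src, dst)) % len(alive)]

links = ["web1", "web2"]
up = {"web1": True, "web2": True}

first = pick_link("192.168.1.10", "exam-server", links, up)
# The same address pair maps to the same link while both links are up.
assert first == pick_link("192.168.1.10", "exam-server", links, up)

# When web1 goes down, clients are rerouted without manual intervention.
up["web1"] = False
print(pick_link("192.168.1.10", "exam-server", links, up))  # 'web2'
```

This stickiness matters for an HTTP exam application: a client keeps hitting the same web server for the whole session unless that server's link actually fails.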
APA, Harvard, Vancouver, ISO, and other styles
50

Ajibola, Esther B., Joshua A. Ayeni, and Adeleye S. Falohun. "Development of An Enhanced Load Balancing Algorithm for Heterogeneous Distributed System Environment." International Journal of Research and Scientific Innovation X, no. XII (2024): 484–96. http://dx.doi.org/10.51244/ijrsi.2023.1012037.

Full text
Abstract:
In a Heterogeneous Distributed System Environment (HDSE), a load balancing algorithm ensures even distribution of tasks between the various systems and selects the appropriate server for each client request based on server capacity, current connection time, and IP address. This avoids the uneven distribution of tasks that would overload some servers to the detriment of others. An enhanced heterogeneous load balancing algorithm was developed to address the shortcomings observed in conventional load balancers and to improve job response time, throughput, and turnaround time. The developed load balancing algorithm was executed using two scheduling techniques: Weighted Round Robin (WRR) and Least Connection. The algorithm was coded in C with built-in functions of the C library, and a Performance Evaluation Test (PET) was carried out using Average Turnaround Time (ATAT), Average Waiting Time (AWT), and Throughput (ThP). The results of the test demonstrated improved performance over the conventional load balancing algorithm in a HDSE.
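Weighted Round Robin, one of the two scheduling techniques named above, can be sketched in its simplest "burst" form: each server receives a number of consecutive requests proportional to its weight per cycle. The weights are illustrative, and production balancers often use a smoother interleaving than this:

```python
# Naive Weighted Round Robin: in each cycle, a server with weight w
# receives w consecutive requests. Weights invented for illustration.
def weighted_round_robin(servers):
    # servers: list of (name, weight); yields names proportionally.
    while True:
        for name, weight in servers:
            for _ in range(weight):
                yield name

gen = weighted_round_robin([("fast", 3), ("slow", 1)])
sequence = [next(gen) for _ in range(8)]
print(sequence)
# ['fast', 'fast', 'fast', 'slow', 'fast', 'fast', 'fast', 'slow']
```

In a heterogeneous environment the weights encode server capacity, so the faster machine absorbs three requests for every one sent to the slower machine.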
APA, Harvard, Vancouver, ISO, and other styles