Journal articles on the topic 'Kubernetes'

Consult the top 50 journal articles for your research on the topic 'Kubernetes.'


1

Raj, Pritish. "Continuous Integration for New Service Deployment and Service Validation Script for Vault." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 06 (June 12, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem35565.

Abstract:
Modern cloud-native applications demand robust security measures to safeguard sensitive data such as passwords, API keys, and encryption keys. Managing these secrets securely within Kubernetes clusters presents a significant challenge. In response, this project proposes a comprehensive solution leveraging HashiCorp Vault, Kubernetes, and Docker to enhance secret management and strengthen overall security posture. HashiCorp Vault serves as a centralized secrets management tool, providing encryption, access control, and auditing functionalities. By integrating Vault with Kubernetes, secrets can be dynamically generated, securely stored, and automatically injected into application pods at runtime. This approach reduces the exposure of sensitive information within containerized environments and mitigates the risk of unauthorized access. The project architecture involves deploying Vault within the Kubernetes cluster, utilizing Docker containers for seamless encapsulation and portability. Kubernetes’ native integrations with Vault, such as the Kubernetes Auth method and the Vault Agent Injector, streamline the authentication and authorization processes, ensuring secure communication between applications and Vault. The project involves deploying Vault in Kubernetes for secrets management, ensuring High Availability. It focuses on generating, storing, and managing secrets securely, leveraging Vault’s dynamic secrets engine for automatic rotation. Integration with Kubernetes employs authentication methods like Service Accounts and RBAC for granular access control. Dockerization ensures application consistency and portability, with Vault Agent containers enabling seamless secret injection. Security best practices, including least privilege access and encryption, are prioritized, along with regular auditing and monitoring.
Overall, the project aims to establish a robust secrets management solution within Kubernetes while emphasizing resilience, security, and compliance in handling sensitive information. Index Terms—Docker, DevOps, CI/CD, Automation, Secrets, Kubernetes, Vault, Security
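The dynamic-secrets pattern this abstract describes can be illustrated in plain Python. This is a toy model, not the Vault API: a credential carries a lease TTL, and the client requests a fresh one once the lease expires, so no long-lived static secret ever sits in the pod. All names (`ToySecretsEngine`, the role string) are invented for illustration.

```python
import secrets
import time
from dataclasses import dataclass


@dataclass
class Lease:
    """A short-lived credential, mimicking a Vault dynamic secret."""
    username: str
    password: str
    issued_at: float
    ttl: float

    def expired(self, now=None):
        now = time.time() if now is None else now
        return now - self.issued_at >= self.ttl


class ToySecretsEngine:
    """Issues fresh random credentials per request, like a dynamic secrets engine."""
    def __init__(self, ttl=3600.0):
        self.ttl = ttl

    def issue(self, role, now=None):
        now = time.time() if now is None else now
        return Lease(
            username=f"{role}-{secrets.token_hex(4)}",
            password=secrets.token_urlsafe(16),
            issued_at=now,
            ttl=self.ttl,
        )


def get_credentials(engine, role, lease, now):
    """Automatic rotation: reuse the lease while valid, re-issue once expired."""
    if lease is None or lease.expired(now):
        return engine.issue(role, now)
    return lease
```

An application that always fetches credentials through `get_credentials` never holds a secret past its TTL, which is the exposure-reduction argument the abstract makes.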
2

Hadikusuma, Ridwan Satrio, Lukas Lukas, and Karel Octavianus Bachri. "Survey Paper: Optimization and Monitoring of Kubernetes Cluster using Various Approaches." Sinkron 8, no. 3 (July 1, 2023): 1357–65. http://dx.doi.org/10.33395/sinkron.v8i3.12424.

Abstract:
This research compares different methods for optimizing and monitoring Kubernetes clusters. Three referenced journals are analyzed: "Kubernetes cluster optimization using hybrid shared-state scheduling framework" by Oana-Mihaela Ungureanu, Călin Vlădeanu, Robert Kooij; "Monitoring Kubernetes Clusters Using Prometheus and Grafana" by Salma Rachman Dira, Muhammad Arif Fadhly Ridha; and "Cluster Frameworks for Efficient Scheduling and Resource Allocation in Data Center Networks: A Survey" by Kun Wang, Qihua Zhou, Song Guo, and Jiangtao Luo. These journals explore various approaches to optimizing and monitoring Kubernetes clusters. This review concludes that selecting appropriate technologies for optimizing and monitoring Kubernetes clusters can enhance performance and resource management efficiency in data centre networks. The research addresses the problem of improving Kubernetes cluster performance through optimization and efficient monitoring. The required methods include utilizing hybrid state-sharing scheduling frameworks, implementing Prometheus and Grafana for monitoring, and employing efficient cluster frameworks. The study's findings demonstrate that adopting a hybrid shared-state scheduling framework can improve Kubernetes cluster performance. Additionally, leveraging Prometheus and Grafana as monitoring tools offers valuable insights into cluster health and performance. The survey also reveals various cluster frameworks that enable efficient scheduling and resource allocation in data centre networks. In conclusion, this research emphasizes the significance of employing suitable technologies to optimize and monitor Kubernetes clusters, leading to enhanced performance and efficient resource management in data centre networks. By leveraging appropriate scheduling frameworks and monitoring tools, organizations can optimize their utilization of Kubernetes clusters and ensure efficient resource allocation.
3

Barreiro Megino, Fernando Harald, Jeffrey Ryan Albert, Frank Berghaus, Kaushik De, FaHui Lin, Danika MacDonell, Tadashi Maeno, et al. "Using Kubernetes as an ATLAS computing site." EPJ Web of Conferences 245 (2020): 07025. http://dx.doi.org/10.1051/epjconf/202024507025.

Abstract:
In recent years containerization has revolutionized cloud environments, providing a secure, lightweight, standardized way to package and execute software. Solutions such as Kubernetes enable orchestration of containers in a cluster, including for the purpose of job scheduling. Kubernetes is becoming a de facto standard, available at all major cloud computing providers, and is gaining increased attention from some WLCG sites. In particular, CERN IT has integrated Kubernetes into their cloud infrastructure by providing an interface to instantly create Kubernetes clusters, and the University of Victoria is pursuing an infrastructure-as-code approach to deploying Kubernetes as a flexible and resilient platform for running services and delivering resources. The ATLAS experiment at the LHC has partnered with CERN IT and the University of Victoria to explore and demonstrate the feasibility of running an ATLAS computing site directly on Kubernetes, replacing all grid computing services. We have interfaced ATLAS’ workload submission engine PanDA with Kubernetes, to directly submit and monitor the status of containerized jobs. We describe the integration and deployment details, and focus on the lessons learned from running a wide variety of ATLAS production payloads on Kubernetes using clusters of several thousand cores at CERN and the Tier 2 computing site in Victoria.
4

Nadaf, Sarah R., and H. K. Krishnappa. "Kubernetes in Microservices." International Journal of Advanced Science and Computer Applications 2, no. 1 (November 12, 2022): 7–18. http://dx.doi.org/10.47679/ijasca.v2i1.19.

Abstract:
The move towards the microservice-based architecture is well underway. In this architectural style, small and loosely coupled modules are developed, deployed, and scaled independently to compose cloud-native applications. However, for carrier-grade service providers to migrate to the microservices architectural style, availability remains a concern. Kubernetes is an open-source platform that defines a set of building blocks which collectively provide mechanisms for deploying, maintaining, scaling, and healing containerized microservices. Thus, Kubernetes hides the complexity of microservice orchestration while managing their availability. In this paper, we investigate different architectures and conduct experiments to evaluate the availability that Kubernetes delivers for its managed microservices. We present different architectures for public and private clouds. We evaluate the availability achievable through the healing capability of Kubernetes. We investigate the impact of adding redundancy on the availability of microservice-based applications. We conduct experiments under the default configuration of Kubernetes as well as under its most responsive one. We also perform a comparative evaluation with the Availability Management Framework (AMF), which is a proven solution as a middleware service for managing high availability. The results of our investigations show that in certain cases, the service outage for applications managed with Kubernetes is significantly high.
5

Sai, Karthik. "Enhanced Visibility for Real-time Monitoring and Alerting in Kubernetes by Integrating Prometheus, Grafana, Loki, and Alerta." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 06 (June 10, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem35639.

Abstract:
With the increasing popularity of Kubernetes, numerous companies are adopting it as their preferred platform for containerized applications. As organizations embrace this container orchestration technology for its scalability, flexibility, and portability benefits, the need for robust monitoring solutions becomes paramount. Monitoring Kubernetes environments is essential to ensure the health, performance, and availability of applications running within the cluster. This paper aims to provide a comprehensive approach for monitoring Kubernetes via Prometheus, Grafana, and Alerta. Prometheus, a powerful open-source monitoring system, collects metrics from Kubernetes pods and nodes, enabling real-time monitoring of resource utilization, performance metrics, and application health. Grafana complements Prometheus by providing intuitive visualization of collected metrics through customizable dashboards, facilitating comprehensive insights into cluster performance and trends. Loki and Promtail by Grafana are used to collect and aggregate the logs associated with the cluster. Alerta enhances the monitoring setup by enabling alerting based on predefined thresholds and conditions, ensuring prompt notification of potential issues or anomalies. Together, this stack empowers administrators to gain deep visibility into their Kubernetes infrastructure, proactively identify and mitigate issues, and maintain the high availability and reliability of their applications and services. Index Terms—Kubernetes, Prometheus, Grafana, Alerta, Loki, Promtail, Monitoring, Alerting, Logging.
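The alerting step described here, firing on predefined thresholds, reduces to a simple rule check over collected metric samples. A minimal sketch, with metric names, thresholds, and severities chosen purely for illustration (real deployments express these as Prometheus alerting rules forwarded to Alerta):

```python
def evaluate_rules(metrics, rules):
    """Return an alert for every metric sample that crosses its rule's threshold.

    metrics: mapping of metric name -> latest sampled value.
    rules:   list of (metric_name, threshold, severity) tuples.
    """
    alerts = []
    for metric, threshold, severity in rules:
        value = metrics.get(metric)
        # Only fire when the metric was actually sampled and breaches the threshold.
        if value is not None and value >= threshold:
            alerts.append({"metric": metric, "value": value, "severity": severity})
    return alerts
```

Each returned dict corresponds to one notification an alerting backend would deliver.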
6

Vasireddy, Indrani, G. Ramya, and Prathima Kandi. "Kubernetes and Docker Load Balancing: State-of-the-Art Techniques and Challenges." International Journal of Innovative Research in Engineering and Management 10, no. 6 (December 2023): 49–54. http://dx.doi.org/10.55524/ijirem.2023.10.6.7.

Abstract:
In the ever-evolving landscape of container orchestration, Kubernetes stands out as a leading platform, empowering organizations to deploy, scale, and manage containerized applications seamlessly. This survey paper explores the critical domain of load balancing within Kubernetes, investigating state-of-the-art techniques and the associated challenges. Container-based virtualization has revolutionized cloud computing, and Kubernetes, as a key orchestrator, plays a central role in optimizing resource allocation, scalability, and application performance. Load balancing, a fundamental aspect of distributed systems, becomes paramount in ensuring efficient utilization of resources and maintaining high availability. The study focuses on contemporary methods for achieving effective load balancing on containers, with a specific examination of Docker Swarm and Kubernetes—prominent systems for container deployment and management. The paper illustrates how Docker Swarm and Kubernetes can leverage load balancing techniques to optimize traffic distribution. Load balancing algorithms are introduced and implemented in both Docker and Kubernetes, and their outcomes are systematically compared. The paper concludes by highlighting why Kubernetes is often the preferred choice over Docker Swarm for load balancing purposes. This paper provides a comprehensive overview of the current state-of-the-art techniques employed in Kubernetes load balancing. Challenges inherent to Docker load balancing are addressed, encompassing issues related to the dynamic nature of containerized workloads, varying application demands, and the need for real-time adaptability. The survey also explores the role of load balancing in enhancing the scalability and overall performance of applications within Kubernetes clusters.
In conclusion, this survey consolidates the current knowledge on Docker and Kubernetes load balancing, offering a state-of-the-art analysis while identifying challenges that pave the way for future research and advancements in the realm of container orchestration and distributed systems.
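Two of the classic algorithms such surveys compare, round-robin and least-connections, can be sketched in a few lines. This is illustrative only; in practice these decisions live in kube-proxy, IPVS, or an ingress controller, and the backend names below are invented:

```python
import itertools


def round_robin(backends):
    """Return a picker that cycles through backends in order, ignoring load."""
    cycle = itertools.cycle(backends)
    return lambda: next(cycle)


def least_connections(active):
    """Pick the backend with the fewest active connections.

    active: dict mapping backend name -> current connection count.
    """
    return min(active, key=active.get)
```

Round-robin spreads requests evenly regardless of backend state, while least-connections adapts to uneven request durations, which is the trade-off load-balancing comparisons typically measure.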
7

Poniszewska-Marańda, Aneta, and Ewa Czechowska. "Kubernetes Cluster for Automating Software Production Environment." Sensors 21, no. 5 (March 9, 2021): 1910. http://dx.doi.org/10.3390/s21051910.

Abstract:
Microservices, Continuous Integration and Delivery, Docker, DevOps, Infrastructure as Code—these are the current trends and buzzwords in the technological world of 2020. A popular tool which can facilitate the deployment and maintenance of microservices is Kubernetes. Kubernetes is a platform for running containerized applications, for example microservices. Two main questions were important for us: how to deploy Kubernetes itself, and how to ensure that the deployment fulfils the needs of a production environment. Our research concentrates on the analysis and evaluation of a Kubernetes cluster as the software production environment. However, it is first necessary to determine and evaluate the requirements of a production environment. The paper presents the determination and analysis of such requirements and their evaluation in the case of a Kubernetes cluster. Next, the paper compares two methods of deploying a Kubernetes cluster: kops and eksctl. Both of the methods concern the AWS cloud, which was chosen mainly because of its wide popularity and the range of provided services. Besides the two chosen methods of deployment, there are many more, including the DIY method and deploying on-premises.
8

Kampa, Sandeep. "Navigating the Landscape of Kubernetes Security Threats and Challenges." Journal of Knowledge Learning and Science Technology ISSN: 2959-6386 (online) 3, no. 4 (December 25, 2024): 274–81. http://dx.doi.org/10.60087/jklst.v3.n4.p274.

Abstract:
The rise of containerization and the widespread adoption of Kubernetes have revolutionized the way applications are deployed and managed. This paradigm shift has introduced new security risks and challenges that must be addressed. This paper delves into the various security threats and vulnerabilities associated with Kubernetes, exploring mitigation strategies and best practices to enhance the overall security posture of containerized environments. Kubernetes, as a leading container orchestration platform, has become the de facto standard for managing and scaling containerized applications in modern IT infrastructure. While Kubernetes offers numerous benefits, such as improved scalability, portability, and resource optimization, it also introduces a unique set of security concerns that must be carefully navigated. This paper aims to provide a comprehensive overview of the security threats and challenges faced in Kubernetes environments, as well as the current approaches and tools used to address these critical issues.
9

Pashikanti, Santosh. "Scaling AI Workloads with NVIDIA DGX Cloud and Kubernetes: A Performance Optimization Framework." International Scientific Journal of Engineering and Management 04, no. 01 (January 19, 2025): 1–5. https://doi.org/10.55041/isjem01346.

Abstract:
As artificial intelligence (AI) workloads become increasingly complex and resource-intensive, organizations face challenges in scaling their infrastructure to meet performance demands. NVIDIA DGX Cloud, combined with Kubernetes, provides a scalable, high-performance computing platform for AI workloads. This white paper outlines a detailed framework for optimizing performance when deploying AI workloads on NVIDIA DGX Cloud using Kubernetes. It delves into architectural considerations, workload scheduling, resource management, and performance tuning strategies. Web references are provided at the end for further exploration. Key words: NVIDIA DGX Cloud, AI Workload Optimization, TensorRT, Kubernetes GPU Operator, Kubernetes Cluster Security, AI Model Deployment
10

Mantha, Gayathri. "Leveraging Containerization Benefits and Challenges of Docker and Kubernetes." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 12 (December 7, 2024): 1–6. https://doi.org/10.55041/ijsrem10557.

Abstract:
Containerization has revolutionized software development and deployment by offering a consistent and scalable environment for applications. Docker and Kubernetes are two leading technologies in this space. Docker simplifies container creation, while Kubernetes orchestrates containerized applications across clusters. This white paper investigates the benefits and challenges associated with Docker and Kubernetes, providing a comprehensive overview for organizations looking to leverage these technologies.
11

Сопов, Олексій, and Анна Цитовцева. "ОСОБЛИВОСТІ МАСШТАБУВАННЯ КОНТЕЙНЕРНОГО НАВАНТАЖЕННЯ НА БАЗІ СИСТЕМИ KUBERNETES." TECHNICAL SCIENCES AND TECHNOLOGIES, no. 1(23) (2021): 103–8. http://dx.doi.org/10.25140/2411-5363-2021-1(23)-103-108.

Abstract:
Containerization technology is becoming widespread today, and scaling is one of the most important problems for increasing system performance. Kubernetes is a leading solution for container management, yet its scaling behaviour remains under-described and under-researched. This work investigates the specifics of scaling container workloads on the Kubernetes platform. The basic operations for achieving horizontal and vertical scaling are presented, and the scalability properties of the Kubernetes system are described. The article is a survey. [Translated from Ukrainian.]
12

Chippagiri, Srinivas. "Optimizing Kubernetes Network Performance: A Study of Container Network Interfaces and System Tuning Profiles." European Journal of Theoretical and Applied Sciences 2, no. 6 (November 1, 2024): 651–68. https://doi.org/10.59324/ejtas.2024.2(6).58.

Abstract:
The rapid development of cloud computing and big data has led to containers becoming the top choice for application deployment platforms due to their lightweight and flexible nature. Deploying, maintaining, and scaling containerized applications across dispersed environments has never been easier than with Kubernetes, a top container orchestration platform. Central to Kubernetes' functionality is its networking model, which connects containerized workloads seamlessly. This study explores Kubernetes network performance, emphasizing the role of Container Network Interfaces (CNIs) like Cilium, Flannel, Calico, and Antrea. Each CNI plugin is analyzed based on its architecture, functionality, and suitability for various application scenarios, highlighting trade-offs between simplicity, security, throughput, and scalability. Additionally, system tuning profiles and Kubernetes' default networking solutions are discussed to optimize communication between containers. The findings provide insights into selecting and configuring CNIs to enhance the performance, reliability, and security of Kubernetes clusters, offering guidance for deploying modern, network-intensive applications.
13

Sugiyatno and Ishak M. "Analisis Perbandingan Performasi Respon Waktu Web Server dan Failover Antara Kubernetes Dan Docker Swarm pada Container Orchestration." Jurnal Informatika Komputer, Bisnis dan Manajemen 21, no. 3 (November 20, 2023): 43–53. http://dx.doi.org/10.61805/fahma.v21i3.9.

Abstract:
Failover, as a mechanism for handling single failures, is integrated into Kubernetes and Docker Swarm as container-based (container-orchestration) server clusters. However, the difference in failover performance and web-server response time between Kubernetes and Docker Swarm has not been established. Starting from this problem, an in-depth analysis comparing failover performance and web-server response time between Kubernetes and Docker Swarm is needed, carried out through various tests and experiments on each cluster. Test results using file-upload experiments to measure web-server response time on each cluster show that Kubernetes responds 54.5% faster than Docker Swarm. In failover testing under node failure, Docker Swarm exhibits a very significant average failover time. However, Docker Swarm lacks container management comparable to the pod management found in Kubernetes, and failover testing during an in-progress file upload showed that both clusters produced the same result: the website returned an error while the test was running. [Translated from Indonesian.]
14

Galande, Prof Bhagwati. "Event-Driven Real-Time Autoscaling Mechanisms in Kubernetes." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (May 30, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem35041.

Abstract:
This study highlights the pivotal role of Kubernetes autoscaling in enhancing the performance of containerized environments, addressing the limitations of traditional scaling methods, especially in adapting to the dynamic traffic patterns of modern cloud-native applications. To tackle these challenges, the research proposes a customized metric scaler designed to enhance Kubernetes' Horizontal Pod Autoscalers. By leveraging customized metrics derived from message queue traffic, this innovation optimizes application performance and enables dynamic workload scaling. To facilitate seamless integration and agile workload management, the research presents a microservices architecture comprising essential components such as Producer, RabbitMQ, Consumer, Decision, and Kubernetes Services. Furthermore, it showcases the application of this system across various domains including e-commerce platforms, real-time analytics, IoT data processing, and microservices orchestration. With its expanded support for metrics integration and insights into future scalability and adaptability enhancements, this research represents a significant stride towards efficient Kubernetes clusters. Keywords: Kubernetes, Docker, Microservices, Autoscaling, Cloud-Native, Horizontal Pod Autoscalers (HPA)
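The scaling decision behind a Horizontal Pod Autoscaler, including one driven by a custom queue-depth metric as proposed here, follows Kubernetes' documented formula desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), with a tolerance band inside which no scaling occurs. A sketch of that rule (the replica bounds and tolerance values below are illustrative defaults, not taken from the paper):

```python
import math


def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10, tolerance=0.1):
    """HPA scaling rule: ceil(current * metric/target), clamped to [min, max].

    If the metric is within `tolerance` of its target, keep the current count
    to avoid thrashing on small fluctuations.
    """
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas
    desired = math.ceil(current_replicas * ratio)
    return max(min_replicas, min(max_replicas, desired))
```

With a queue-depth metric, e.g. 200 pending messages against a target of 50 per pod, two replicas scale to eight in a single pass, which is the responsiveness argument for custom metric scalers over CPU-based ones.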
15

Nayak, Deekshith, and Dr H. V. Ravish Aradhya. "Orchestrating a stateful application using Operator." Journal of University of Shanghai for Science and Technology 23, no. 06 (June 17, 2021): 514–20. http://dx.doi.org/10.51201/jusst/21/05278.

Abstract:
Containerization is a leading technological advancement in cloud-native development. Virtualization isolates the running processes at the bare-metal level, but containerization isolates the processes at the operating-system level. Virtualization encapsulates each new virtual instance with a new operating system, whereas containerization encapsulates the software only with its dependencies. Containerization avoids the problem of missing dependencies between different operating systems and their distributions. The concept of containerization is old, but the development of open-source tools like Docker, Kubernetes, and Openshift accelerated the adoption of this technology. Docker builds container images, and Openshift or Kubernetes is an orchestrating tool. For stateful applications, Kubernetes workload resources are not the best option for orchestrating the application, as each resource has its own identity. In such cases, an operator can be built to manage the entire life cycle of resources in the Kubernetes cluster. An operator encodes human operational knowledge into software in a systematic way. The paper discusses the default control mechanism in Kubernetes and then explains the procedure for building an operator to orchestrate a stateful application.
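The control mechanism the paper builds on is a reconcile loop: compare the desired state (declared in a custom resource) against the observed state of the cluster and emit actions that converge them. A language-agnostic sketch of one reconcile pass (real operators use client-go or an SDK; the resource names and the dict representation below are invented for illustration):

```python
def reconcile(desired, observed):
    """One reconcile pass: return actions that converge observed -> desired.

    desired/observed: dicts mapping resource name -> spec.
    """
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))        # missing resource
        elif observed[name] != spec:
            actions.append(("update", name, spec))        # drifted resource
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, None))        # orphaned resource
    return actions
```

Running this pass repeatedly on every change event is what lets an operator heal drift in a stateful application, rather than only acting at deployment time.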
16

Erdenebat, Baasanjargal, Bayarjargal Bud, and Tamás Kozsik. "Challenges in service discovery for microservices deployed in a Kubernetes cluster – a case study." Infocommunications journal 15, Special Issue (2023): 69–75. http://dx.doi.org/10.36244/icj.2023.5.11.

Abstract:
With Kubernetes emerging as one of the most popular infrastructures in the cloud-native era, the utilization of containerization and tools alongside Kubernetes is steadily gaining traction. The main goal of this paper is to evaluate the service discovery mechanisms and DNS management (CoreDNS) of Kubernetes, and to present a general study of an experiment on service discovery challenges. In large scale Kubernetes clusters, running pods, services, requests, and workloads can be substantial. The increased number of HTTP-requests often result in resource utilization concerns, e.g., spikes of errors [24], [25]. This paper investigates potential optimization strategies for enhancing the performance and scalability of CoreDNS in Kubernetes. We propose a solution to address the concerns related to CoreDNS and provide a detailed explanation of how our implementation enhances service discovery functionality. Experimental results in a real-world case show that our solution for the CoreDNS ensures consistency of the workload. Compared with the default CoreDNS configuration, our customized approach achieves better performance in terms of number of errors for requests, average latency of DNS requests, and resource usage rate.
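One optimization strategy in this space, caching answers so repeated lookups never hit the upstream resolver, can be modelled in a few lines. This is a toy model of what a caching DNS layer (such as CoreDNS's cache plugin) does, not its implementation; the service name, address, and TTL below are invented:

```python
class TTLDNSCache:
    """Cache DNS answers for `ttl` seconds; count upstream queries performed."""

    def __init__(self, resolve_upstream, ttl=30.0):
        self.resolve_upstream = resolve_upstream
        self.ttl = ttl
        self.cache = {}            # name -> (answer, stored_at)
        self.upstream_queries = 0

    def resolve(self, name, now):
        entry = self.cache.get(name)
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]        # served from cache, no upstream load
        self.upstream_queries += 1
        answer = self.resolve_upstream(name)
        self.cache[name] = (answer, now)
        return answer
```

Under the request spikes the paper describes, the ratio of total lookups to `upstream_queries` is exactly the load taken off the resolver, which is why cache tuning shows up in both the error-rate and latency metrics they report.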
17

Venkat Marella. "Comparative Analysis of Container Orchestration Platforms: Kubernetes vs. Docker Swarm." International Journal of Scientific Research in Science and Technology 11, no. 5 (October 31, 2024): 526–43. https://doi.org/10.32628/ijsrst24105254.

Abstract:
Novel software architecture patterns, including microservices, have surfaced in the last ten years to increase the modularity of applications and to simplify their development, testing, scaling, and component replacement. In response to these emerging trends, new approaches such as DevOps methods and technologies have arisen to facilitate automation and monitoring across the whole software construction lifecycle, fostering improved collaboration between software development and operations teams. The resource management (RM) strategies of Kubernetes and Docker Swarm, two well-known container orchestration technologies, are compared in this article. The main distinctions between RM, scheduling, and scalability are examined, with an emphasis on Kubernetes' flexibility and granularity in contrast to Docker Swarm's simplicity and ease of use. In this article, a case study comparing the performance of two popular container orchestrators—Kubernetes and Docker Swarm—over a Web application built using the microservices architecture is presented. By raising the number of users, we compare how well Docker Swarm and Kubernetes perform under stress. This study aims to provide academics and practitioners with an understanding of how well Docker Swarm and Kubernetes function in systems built using the suggested microservice architecture. The authors' Web application is a kind of loyalty program, meaning that it offers a free item upon reaching a certain quantity of purchases. According to the study's findings, Docker Swarm outperforms Kubernetes in terms of efficiency as user counts rise.
18

Prakoso, Bayu Agung, and Unan Yusmaniar Oktiawati. "Analisis Perbandingan Kinerja Container Network Interface Flannel dan Cilium sebagai Interface Utama pada Multus CNI dalam Jaringan Klaster Kubernetes." Journal of Internet and Software Engineering 5, no. 2 (November 29, 2024): 99–105. http://dx.doi.org/10.22146/jise.v5i2.8612.

Abstract:
Containers have become an alternative for virtualizing internet-service infrastructure thanks to their efficient resource usage. IT infrastructure may consist of many containers, with Kubernetes acting as the container orchestrator. A Container Network Interface (CNI) is used in Kubernetes service scenarios to manage networking so that services can be connected easily. However, problems such as limited networking capabilities, lack of flexibility, and limited scalability and security are issues in the use of CNI plugins. A solution to these problems is Multus CNI, which allows multiple network interfaces in a single pod. This study evaluates the performance of Flannel and Cilium as CNI plugins in a Kubernetes cluster environment involving Multus CNI. The metrics analyzed include latency, packet loss, throughput, and CPU usage. The results provide a better understanding of the trade-offs involved when choosing between Flannel and Cilium as CNI plugins in a Kubernetes cluster environment. [Translated from Indonesian.]
19

Petrakis, Euripides G. M., Vasileios Skevakis, Panayiotis Eliades, Alkiviadis Aznavouridis, and Konstantinos Tsakos. "ModSoft-HP: Fuzzy Microservices Placement in Kubernetes." Electronics 13, no. 1 (December 22, 2023): 65. http://dx.doi.org/10.3390/electronics13010065.

Abstract:
The growing popularity of microservices architectures generated the need for tools that orchestrate their deployment in containerized infrastructures, such as Kubernetes. Microservices running in separate containers are packed in pods and placed in virtual machines (nodes). For applications with multiple communicating microservices, the decision of which services should be placed in the same node has a certain impact on both the running time and the operation cost of an application. The default Kubernetes scheduler is not optimal in that case. In this work, the service placement problem is treated as graph clustering. An application is modeled using a graph with nodes and edges representing communicating microservices. Graph clustering partitions the graph into clusters of microservices with high-affinity rates. Then, the microservices of each cluster are placed in the same Kubernetes node. A class of methods resorts to hard clustering (i.e., each microservice is placed in exactly one node). We advocate that graph clustering should be fuzzy to allow high-utilized microservices to run in more than one instance (i.e., pods) in different nodes. ModSoft-HP Scheduler is a custom Kubernetes scheduler that takes scheduling decisions based on the results of the ModSoft fuzzy clustering method followed by heuristic packing (HP). For proof of concept, the workloads of two applications (i.e., an e-commerce application, eShop, and an IoT architecture) are given as input to the default Kubernetes Scheduler, the Bisecting K-means, and the Heuristic First Fit (hard) clustering schedulers and to the ModSoft-HP fuzzy clustering method. The experimental results demonstrate that ModSoft-HP can achieve up to 90% reduction of egress traffic, up to 20% savings in response time, and up to 25% less hosting costs compared to service placement with the default Kubernetes Scheduler in the Google Kubernetes Engine.
APA, Harvard, Vancouver, ISO, and other styles
20

Sadiq, Amin, Hassan Jamil Syed, Asad Ahmed Ansari, Ashraf Osman Ibrahim, Manar Alohaly, and Muna Elsadig. "Detection of Denial of Service Attack in Cloud Based Kubernetes Using eBPF." Applied Sciences 13, no. 8 (April 7, 2023): 4700. http://dx.doi.org/10.3390/app13084700.

Full text
Abstract:
Kubernetes is an orchestration tool that runs and manages container-based workloads. It works as a collection of different virtual or physical servers that support multiple storage capacities, provide network functionalities, and keep all containerized applications active in a desired state. It also provides an increasing fleet of different facilities, known as microservices. However, Kubernetes’ scalability has led to a complex network structure with an increased attack vector. Attackers can launch a Denial of service (DoS) attack against servers/machines in Kubernetes by producing fake traffic load, for instance. DoS or Distributed Denial of service (DDoS) attacks are malicious attempts to disrupt a targeted service by flooding the target’s service with network packets. Constant observation of the network traffic is extremely important for the early detection of such attacks. Extended Berkeley Packet Filter (eBPF) and eXpress Datapath (XDP) are advanced technologies in the Linux kernel that perform high-speed packet processing. In the case of Kubernetes, eBPF and XDP can be used to protect against DDoS attacks by enabling fast and efficient network security policies. For example, XDP can be used to filter out traffic that is not authorized to access the Kubernetes cluster, while eBPF can be used to monitor network traffic for signs of DDoS attacks, such as excessive traffic from a single source. In this research, we utilize eBPF and XDP to build a detection and observation mechanism to filter out malicious content and mitigate a Denial of Service attack on Kubernetes.
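The per-source rate check that an XDP filter applies can be modeled in plain Python. This is an illustrative model of the logic only, not actual eBPF/XDP code; a real program would keep these counters in a BPF map and run in the kernel datapath:

```python
from collections import defaultdict

class RateDetector:
    """Toy model of a per-source packet-rate check for DoS filtering."""
    def __init__(self, window_s, max_packets):
        self.window_s = window_s            # length of the counting window
        self.max_packets = max_packets      # allowed packets per window
        self.counts = defaultdict(int)      # src_ip -> packets seen in window
        self.window_start = defaultdict(float)

    def packet(self, src_ip, now):
        """Return 'DROP' for sources over the rate limit, else 'PASS'."""
        if now - self.window_start[src_ip] >= self.window_s:
            self.window_start[src_ip] = now # start a fresh window
            self.counts[src_ip] = 0
        self.counts[src_ip] += 1
        return "DROP" if self.counts[src_ip] > self.max_packets else "PASS"
```

A flooding source is dropped after exceeding the per-window budget, while well-behaved sources keep passing.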
APA, Harvard, Vancouver, ISO, and other styles
21

Haragi L, Darshan, Mahith S, and Prof Sahana B. "Infrastructure Optimization in Kubernetes Cluster." Journal of University of Shanghai for Science and Technology 23, no. 06 (June 17, 2021): 546–55. http://dx.doi.org/10.51201/jusst/21/05292.

Full text
Abstract:
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. Containers are similar to VMs, but have relaxed isolation properties that let them share the operating system (OS) among applications. In contrast with a VM, a container has its own filesystem, share of Central Processing Unit (CPU), memory, process space, and much more. A Kubernetes cluster is a set of node machines for running containerized applications. Each cluster contains a control plane and at least one node. Infrastructure optimization is the process of analyzing and arranging the allocation of cloud resources that power applications and workloads, so as to maximize performance and limit waste due to over-provisioning. In the paper, a “Movie Review System” web application is designed using GoLang for the backend components and HTML, CSS, and JS for the frontend components. Using AWS, an EC2 instance is created and the web application is deployed onto EC2 and hosted on the instance server. The web application is also deployed on Kubernetes locally using the Minikube tool. A performance analysis is performed for both deployments, considering common performance metrics for both AWS EC2 / Virtual Machine (VM) and Kubernetes.
APA, Harvard, Vancouver, ISO, and other styles
22

Sowmith Daram, Dr. Shakeb Khan, and Er. Om Goel. "Network Functions in Cloud: Kubernetes Deployment Challenges." International Journal for Research Publication and Seminar 14, no. 2 (June 29, 2023): 244–54. http://dx.doi.org/10.36676/jrps.v14.i2.1481.

Full text
Abstract:
The rapid evolution of cloud computing has paved the way for deploying network functions (NFs) in cloud environments, significantly enhancing the flexibility, scalability, and efficiency of modern network infrastructures. Kubernetes, an open-source container orchestration platform, has emerged as a leading tool for deploying and managing these cloud-based network functions. However, despite its widespread adoption, Kubernetes presents several deployment challenges specific to network functions, stemming from its design, scalability, and operational intricacies. This paper delves into the core challenges faced during the deployment of network functions on Kubernetes, focusing on issues related to network performance, security, service orchestration, and resource management. The abstract aims to provide an overview of the technical hurdles and propose potential strategies to overcome them, thus contributing to the optimization of Kubernetes-based NF deployments in cloud environments. By analyzing existing literature and case studies, the paper identifies key areas where improvements are needed and discusses the implications of these challenges for the future of cloud-based network functions. Ultimately, the paper seeks to guide network architects and cloud engineers in better understanding the complexities of Kubernetes deployments for network functions and in developing more effective strategies for successful implementation.
APA, Harvard, Vancouver, ISO, and other styles
23

Sowmith Daram, Dr. Shakeb Khan, and Er. Om Goel. "Network Functions in Cloud: Kubernetes Deployment Challenges." Global International Research Thoughts 12, no. 2 (August 31, 2024): 34–46. http://dx.doi.org/10.36676/girt.v12.i2.118.

Full text
Abstract:
The rapid evolution of cloud computing has paved the way for deploying network functions (NFs) in cloud environments, significantly enhancing the flexibility, scalability, and efficiency of modern network infrastructures. Kubernetes, an open-source container orchestration platform, has emerged as a leading tool for deploying and managing these cloud-based network functions. However, despite its widespread adoption, Kubernetes presents several deployment challenges specific to network functions, stemming from its design, scalability, and operational intricacies. This paper delves into the core challenges faced during the deployment of network functions on Kubernetes, focusing on issues related to network performance, security, service orchestration, and resource management. The abstract aims to provide an overview of the technical hurdles and propose potential strategies to overcome them, thus contributing to the optimization of Kubernetes-based NF deployments in cloud environments. By analyzing existing literature and case studies, the paper identifies key areas where improvements are needed and discusses the implications of these challenges for the future of cloud-based network functions. Ultimately, the paper seeks to guide network architects and cloud engineers in better understanding the complexities of Kubernetes deployments for network functions and in developing more effective strategies for successful implementation.
APA, Harvard, Vancouver, ISO, and other styles
24

Burns, Brendan, Brian Grant, David Oppenheimer, Eric Brewer, and John Wilkes. "Borg, Omega, and Kubernetes." Communications of the ACM 59, no. 5 (April 26, 2016): 50–57. http://dx.doi.org/10.1145/2890784.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Burns, Brendan, Brian Grant, David Oppenheimer, Eric Brewer, and John Wilkes. "Borg, Omega, and Kubernetes." Queue 14, no. 1 (January 2016): 70–93. http://dx.doi.org/10.1145/2898442.2898444.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Soma, Venkat. "CONTAINER ORCHESTRATION WITH KUBERNETES." Journal of Artificial Intelligence, Machine Learning and Data Science 2, no. 3 (July 30, 2024): 1041–45. http://dx.doi.org/10.51219/jaimld/venkat-soma/247.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Suryayusra, Suryayusra, Nova Destarina, Edi Surya Negara, Edi Supratman, and Maria Ulfa. "Implementation Docker and Kubernetes Scaling Using Horizontal Scaler Method for Wordpress Services." sinkron 8, no. 4 (October 2, 2024): 2192–96. http://dx.doi.org/10.33395/sinkron.v8i4.14091.

Full text
Abstract:
Containers are a technology that has recently seen wide use because of features that make them very easy and convenient, especially for web hosting service developers: containers make it easier for system administrators to manage applications, including building, packaging, and running them inside containers. With containers, building and operating a system becomes easier, but with too many user requests the service may not run optimally. Therefore, the container platform must offer good scalability and performance. Scalability is needed so the system can adjust to the demands placed on it, and performance is needed to maintain the quality of the services provided. This research aims to implement scaling using Docker and Kubernetes and compare them in terms of scalability and performance. The parameters of comparison between Docker and Kubernetes are scaling-up and scaling-down time for scalability, and resource consumption for performance. This research uses the Action Research methodology, a research model that is simultaneously practiced and theorized, with the initial steps of problem identification, action planning, action implementation, observation, and evaluation. Based on the results obtained, Docker consumes more CPU and memory: at 500 users, Docker consumes resources at an average of 94.47% CPU and 4.70% memory, while Kubernetes consumes 89.11% and 4.50%, because Kubernetes itself has a complex system with dedicated components such as the API server, Metrics Server, and Kubernetes manager to run the containers, whereas Docker has only the Docker Manager and Docker Compose components.
APA, Harvard, Vancouver, ISO, and other styles
28

Bangar Raju Cherukuri. "Containerization in cloud computing: comparing Docker and Kubernetes for scalable web applications." International Journal of Science and Research Archive 13, no. 1 (October 30, 2024): 3302–15. http://dx.doi.org/10.30574/ijsra.2024.13.1.2035.

Full text
Abstract:
Containerization has taken cloud computing to a new level, giving application developers an attractive way of deploying portable web apps. This paper compares Docker, one of the most popular containerization technologies, and Kubernetes based on their ability to manage cloud resources, advanced integration, and microservices implementation. Docker, which produces lightweight containers, reduces complexity in the application delivery process by including all the software components and their configurations. Kubernetes, in contrast, stands as a tool for automating containerized application deployment, scaling, and management across clusters. The study's objective is to give a comparative analysis of Docker and Kubernetes to determine the best course of action. The research integrates a quantitative assessment of performance based on simulated data with several case studies in various sectors, including e-commerce and software as a service (SaaS). Basic parameters of deployment velocity, resource consumption, system growth capability, and robustness of the two platforms are considered as comprehensive measures. The present study reveals that Docker can provide simple, portable solutions for small- and medium-scale applications, while Kubernetes offers excellent orchestration solutions better suited to large-scale and complex applications. The study concludes that both Docker and Kubernetes are suitable depending on specific project demands: Docker stands out for basic project setup and configuration, while Kubernetes functions better in multi-cloud and microservices conditions. The findings of this research are useful in informing organizations on best practices when adopting containerization.
APA, Harvard, Vancouver, ISO, and other styles
29

Bhardwaj, Arvind Kumar, P. K. Dutta, and Pradeep Chintale. "AI-Powered Anomaly Detection for Kubernetes Security: A Systematic Approach to Identifying Threats." Babylonian Journal of Machine Learning 2024 (August 20, 2024): 142–48. http://dx.doi.org/10.58496/bjml/2024/014.

Full text
Abstract:
This study delves into the intricacies of AI-based threat detection in Kubernetes security, with a specific focus on its role in identifying anomalous behavior. By harnessing the power of AI algorithms, vast amounts of telemetry data generated by Kubernetes clusters can be analyzed in real-time, enabling the identification of patterns and anomalies that may signify potential security threats or system malfunctions. The implementation of AI-based threat detection involves a systematic approach, encompassing data collection, model training, integration with Kubernetes orchestration platforms, alerting mechanisms, and continuous monitoring. AI-powered threat detection offers numerous advantages, including predictive threat detection, increased accuracy and scalability, shorter response times, and the ability to adapt to evolving threats. However, it also presents challenges, such as ensuring data quality, managing model complexity, mitigating false positives, addressing resource requirements, and maintaining security and privacy standards. The proposed AI-powered anomaly detection framework for Kubernetes security demonstrated significant improvements in threat identification and mitigation. Through real-time analysis of telemetry data and leveraging advanced AI algorithms, the system accurately identified over 92% of simulated security threats and anomalies across various Kubernetes clusters. Additionally, the integration of automated alerting mechanisms and response protocols reduced the average response time by 67%, enabling rapid containment of potential breaches.
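The paper does not specify its models in this abstract, so as a minimal stand-in, the core idea of flagging anomalous telemetry samples can be sketched with a simple z-score check (the metric values and threshold below are illustrative):

```python
import math

def zscore_anomalies(series, threshold=3.0):
    """Return indices of samples that deviate from the mean by more than
    `threshold` standard deviations; a toy statistical baseline, not the
    paper's AI pipeline."""
    mean = sum(series) / len(series)
    var = sum((x - mean) ** 2 for x in series) / len(series)
    std = math.sqrt(var)
    if std == 0:
        return []           # no variation means nothing stands out
    return [i for i, x in enumerate(series)
            if abs(x - mean) / std > threshold]
```

A sudden spike in an otherwise flat CPU-usage series is flagged, while a constant series yields no alerts; real systems would replace this check with trained models and feed the output into alerting.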
APA, Harvard, Vancouver, ISO, and other styles
30

Yan, Congling. "Research and Application of Mobile Video Content Placement Based on Edge Computing." Academic Journal of Science and Technology 12, no. 1 (August 20, 2024): 21–27. http://dx.doi.org/10.54097/gsjvpk90.

Full text
Abstract:
In the era of rapid development of computer technology and the Internet, traditional cloud server architectures and solutions can no longer meet the needs of a huge base of Internet users. With the rapid development of mobile video, traditional server elasticity, server cost, and user experience have been unable to meet the needs of users and enterprises. To address these requirements, edge computing platforms based on Kubernetes have emerged to solve the above problems. This paper first analyzes the development process and characteristics of Kubernetes, and further discusses the application of mobile video caching strategies on Kubernetes.
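A common baseline for the kind of edge caching this paper discusses is least-recently-used (LRU) eviction of video segments; a minimal sketch (illustrative only, not the paper's actual strategy):

```python
from collections import OrderedDict

class EdgeCache:
    """Minimal LRU cache for video segments at an edge node."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # insertion order doubles as recency

    def get(self, key):
        if key not in self.store:
            return None              # cache miss: fetch from origin
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
```

Frequently requested segments stay at the edge while cold ones are evicted, which is the effect edge video caching aims for.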
APA, Harvard, Vancouver, ISO, and other styles
31

Nguyen, Thanh-Tung, Yu-Jin Yeom, Taehong Kim, Dae-Heon Park, and Sehan Kim. "Horizontal Pod Autoscaling in Kubernetes for Elastic Container Orchestration." Sensors 20, no. 16 (August 17, 2020): 4621. http://dx.doi.org/10.3390/s20164621.

Full text
Abstract:
Kubernetes, an open-source container orchestration platform, enables high availability and scalability through diverse autoscaling mechanisms such as Horizontal Pod Autoscaler (HPA), Vertical Pod Autoscaler and Cluster Autoscaler. Amongst them, HPA helps provide seamless service by dynamically scaling up and down the number of resource units, called pods, without having to restart the whole system. Kubernetes monitors default Resource Metrics including CPU and memory usage of host machines and their pods. On the other hand, Custom Metrics, provided by external software such as Prometheus, are customizable to monitor a wide collection of metrics. In this paper, we investigate HPA through diverse experiments to provide critical knowledge on its operational behaviors. We also discuss the essential difference between Kubernetes Resource Metrics (KRM) and Prometheus Custom Metrics (PCM) and how they affect HPA’s performance. Lastly, we provide deeper insights and lessons on how to optimize the performance of HPA for researchers, developers, and system administrators working with Kubernetes in the future.
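The HPA behavior investigated here follows the scaling rule documented for Kubernetes, desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue), with no change while the metric ratio stays inside a tolerance band (0.1 by default):

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     tolerance=0.1):
    """Kubernetes HPA scaling rule: scale proportionally to the ratio of
    the observed metric to its target, rounding up, and hold steady while
    the ratio is within the tolerance band."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas          # close enough: avoid flapping
    return math.ceil(current_replicas * ratio)
```

For example, 4 pods at 200% of the CPU target scale to 8, while a 5% overshoot stays put thanks to the tolerance band.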
APA, Harvard, Vancouver, ISO, and other styles
32

Gonzalez, Luis F., Ivan Vidal, Francisco Valera, Raul Martin, and Dulce Artalejo. "A Link-Layer Virtual Networking Solution for Cloud-Native Network Function Virtualisation Ecosystems: L2S-M." Future Internet 15, no. 8 (August 17, 2023): 274. http://dx.doi.org/10.3390/fi15080274.

Full text
Abstract:
Microservices have become promising candidates for the deployment of network and vertical functions in the fifth generation of mobile networks. However, microservice platforms like Kubernetes use a flat networking approach towards the connectivity of virtualised workloads, which prevents the deployment of network functions on isolated network segments (for example, the components of an IP Telephony system or a content distribution network). This paper presents L2S-M, a solution that enables the connectivity of Kubernetes microservices over isolated link-layer virtual networks, regardless of the compute nodes where workloads are actually deployed. L2S-M uses software-defined networking (SDN) to fulfil this purpose. Furthermore, the L2S-M design is flexible to support the connectivity of Kubernetes workloads across different Kubernetes clusters. We validate the functional behaviour of our solution in a moderately complex Smart Campus scenario, where L2S-M is used to deploy a content distribution network, showing its potential for the deployment of network services in distributed and heterogeneous environments.
APA, Harvard, Vancouver, ISO, and other styles
33

Palavesam, Kuppusamy Vellamadam, Mahesh Vaijainthymala Krishnamoorthy, and Aiswarya S M. "A Comparative Study of Service Mesh Implementations in Kubernetes for Multi-cluster Management." Journal of Advances in Mathematics and Computer Science 40, no. 1 (January 1, 2025): 1–16. https://doi.org/10.9734/jamcs/2025/v40i11958.

Full text
Abstract:
In modern cloud-native applications, managing communication across multiple Kubernetes clusters can be complex. Service meshes, such as Istio, Linkerd, Kuma, and Consul, help address challenges like service discovery, traffic routing, security, and observability in multi-cluster environments. This study compares these leading service meshes based on scalability, security, ease of use, performance, and operational complexity. Istio excels in scalability and advanced traffic management, making it ideal for large-scale, complex applications. Linkerd, with its simplicity and high performance, is well-suited for smaller, less complex setups. Kuma stands out for its flexibility, supporting both Kubernetes and non-Kubernetes environments, and offers built-in multi-cluster capabilities. Consul, known for its strong service discovery features, is particularly effective in hybrid cloud environments, efficiently managing multi-cluster communication. The findings provide actionable insights for organizations seeking to optimize performance, security, and operational efficiency across distributed systems, helping them select the most appropriate service mesh for their specific multi-cluster Kubernetes deployment.
APA, Harvard, Vancouver, ISO, and other styles
34

Dakić, Vedran, and Adriano Bubnjek. "Managing cloud-native applications using vSphere with Tanzu and Tanzu Kubernetes grid." Edelweiss Applied Science and Technology 8, no. 6 (November 30, 2024): 6557–78. https://doi.org/10.55214/25768484.v8i6.3409.

Full text
Abstract:
This study examines VMware cloud-native platforms, emphasizing their orchestration and automated management functionalities for container technologies, which are essential in modern IT infrastructure. The study seeks to comprehensively examine VMware Tanzu's architecture, technical elements, and efficacy in facilitating containerized applications. Case studies were used to gain comprehensive insights into brownfield and greenfield deployment strategies, illustrating practical implementation challenges and advantages for vSphere with Tanzu and Tanzu Kubernetes Grid. These studies emphasize scenarios from gradually incorporating Kubernetes into current VMware environments to developing optimized infrastructures from the ground up. Significant findings encompass strategies for scalable IT infrastructure, innovative applications of persistent storage via VMware Cloud Native Storage, and the function of Tanzu Kubernetes Grid in multi-cloud deployment contexts. The study concludes that Tanzu platforms successfully integrate traditional virtualization with contemporary containerization, providing robust scalability, flexibility, and efficient resource-use solutions. The practical implications emphasize how organizations can utilize these insights to seamlessly integrate Kubernetes workloads, optimize investments in VMware ecosystems, and respond to evolving IT requirements.
APA, Harvard, Vancouver, ISO, and other styles
35

Li, Haoran, Jun Sun, and Xiong Ke. "AI-Driven Optimization System for Large-Scale Kubernetes Clusters: Enhancing Cloud Infrastructure Availability, Security, and Disaster Recovery." Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023 2, no. 1 (February 29, 2024): 281–306. http://dx.doi.org/10.60087/jaigs.v2i1.244.

Full text
Abstract:
This paper presents AI-driven optimization for large Kubernetes clusters, addressing critical cloud availability, security, and disaster recovery issues. The design concept integrates advanced machine learning techniques with Kubernetes' native capabilities to improve cluster management across multiple cloud and edge environments. Key components include data collection and preprocessing, AI/ML models for predictive analytics, a decision engine, and seamless integration with the Kubernetes control plane. The system uses performance metrics, security policy management, and disaster recovery planning to improve resource utilization, threat detection, and recovery readiness. The test results show a 23% improvement in cluster utilization, 97.8% accuracy in decision-making, and a 78% reduction in security response time compared to standard approaches. Case studies across the e-commerce, financial services, and IoT industries have confirmed the performance in real-world deployments, showing improvements in operational cost, security, and reliability. This research contributes to the evolution of intelligent cloud management, providing solutions for optimizing Kubernetes deployments in complex, distributed environments.
APA, Harvard, Vancouver, ISO, and other styles
36

Böhm, Sebastian, and Guido Wirtz. "Cloud-Edge Orchestration for Smart Cities: A Review of Kubernetes-based Orchestration Architectures." EAI Endorsed Transactions on Smart Cities 6, no. 18 (May 25, 2022): e2. http://dx.doi.org/10.4108/eetsc.v6i18.1197.

Full text
Abstract:
Edge computing offers computational resources near data-generating devices to enable low-latency access. Especially for smart city contexts, edge computing becomes inevitable for providing real-time services, like air quality monitoring systems. Kubernetes, a popular container orchestration platform, is often used to efficiently manage containerized applications in smart cities. Although it misses essential requirements of edge computing, like network-related metrics for scheduling decisions, it is still considered. This paper analyzes custom cloud-edge architectures implemented with Kubernetes. Specifically, we analyze how essential requirements of edge orchestration in smart cities are solved. Also, shortcomings are identified in these architectures based on the fundamental requirements of edge orchestration. We conduct a literature review to obtain the general requirements of edge computing and edge orchestration for our analysis. We map these requirements to the capabilities of Kubernetes-based cloud-edge architectures to assess their level of achievement. Issues like using network-related metrics and the missing topology-awareness of networks are partially solved. However, requirements like real-time resource utilization, fault-tolerance, and the placement of container registries are in the early stages. We conclude that Kubernetes is an eligible candidate for cloud-edge orchestration. When the formerly mentioned issues are solved, Kubernetes can successfully support latency-critical, large-scale, and multi-tenant application deployments for smart cities.
APA, Harvard, Vancouver, ISO, and other styles
37

Vasireddy, Indrani, Prathima Kandi, and SreeRamya Gandu. "Efficient Resource Utilization in Kubernetes: A Review of Load Balancing Solutions." International Journal of Innovative Research in Engineering and Management 10, no. 6 (December 2023): 44–48. http://dx.doi.org/10.55524/ijirem.2023.10.6.6.

Full text
Abstract:
Modern distributed systems face the challenge of efficiently distributing workloads across nodes to ensure optimal resource utilization, high availability, and performance. In this context, Kubernetes, an open-source container orchestration engine, plays a pivotal role in automating deployment, scaling, and management of containerized applications. This paper explores the landscape of load balancing strategies within Kubernetes, aiming to provide a comprehensive overview of existing techniques, challenges, and best practices. The paper delves into the dynamic nature of Kubernetes environments, where applications scale dynamically and demand for resources fluctuates. We review various load balancing approaches, including those based on traffic, resource-aware algorithms, and affinity policies. Special attention is given to the unique characteristics of containerized workloads and their impact on load balancing decisions. The paper then examines the implications of load balancing on the scalability and performance of applications deployed in Kubernetes clusters, exploring the trade-offs between different strategies and considering factors such as response time, throughput, and adaptability to varying workloads. As cloud-native architectures continue to evolve, understanding and addressing the intricacies of load balancing in dynamic container orchestration environments become increasingly crucial. Finally, we consolidate the current state of knowledge on load balancing in Kubernetes, providing researchers and practitioners with valuable insights and a foundation for further advancements in the quest for efficient, scalable, and resilient distributed systems.
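Two of the classic strategies this kind of survey covers, round-robin and least-connections, reduce to a few lines each; a sketch with hypothetical pod names, not tied to any specific Kubernetes implementation:

```python
def least_connections(pods):
    """pods: dict of pod name -> active connection count.
    Route the next request to the least-loaded pod."""
    return min(pods, key=pods.get)

class RoundRobin:
    """Cycle through backend pods in a fixed order."""
    def __init__(self, pods):
        self.pods = list(pods)
        self.i = 0

    def pick(self):
        pod = self.pods[self.i % len(self.pods)]
        self.i += 1
        return pod
```

Round-robin is oblivious to load and spreads requests evenly; least-connections adapts to uneven request durations, which is the kind of trade-off the paper discusses.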
APA, Harvard, Vancouver, ISO, and other styles
38

Taylor, Ryan Paul, Jeffrey Ryan Albert, and Fernando Harald Barreiro Megino. "A grid site reimagined: Building a fully cloud-native ATLAS Tier 2 on Kubernetes." EPJ Web of Conferences 295 (2024): 07001. http://dx.doi.org/10.1051/epjconf/202429507001.

Full text
Abstract:
The University of Victoria (UVic) operates an Infrastructure-as-a-Service scientific cloud for Canadian researchers, and a Tier 2 site for the ATLAS experiment at CERN as part of the Worldwide LHC Computing Grid (WLCG). At first, these were two distinctly separate systems, but over time we have taken steps to migrate the Tier 2 grid services to the cloud. This process has been significantly facilitated by basing our approach on Kubernetes, a versatile, robust, and very widely adopted automation platform for orchestrating containerized applications. Previous work exploited the batch capabilities of Kubernetes to run grid computing jobs and replace the conventional grid computing elements by interfacing with the Harvester workload management system of the ATLAS experiment. However, the required functionality of a Tier 2 site encompasses more than just batch computing. Likewise, the capabilities of Kubernetes extend far beyond running batch jobs, and include for example scheduling recurring tasks and hosting long-running externally-accessible services in a resilient way. We are now undertaking the more complex and challenging endeavour of adapting and migrating all remaining services of the Tier 2 site — such as APEL accounting and Squid caching proxies, and in particular the grid storage element — to cloud-native deployments on Kubernetes. We aim to enable fully comprehensive deployment of a complete ATLAS Tier 2 site on a Kubernetes cluster via Helm charts, which will benefit the community by providing a streamlined and replicable way to install and configure an ATLAS site. We also describe our experience running a high-performance self-managed Kubernetes ATLAS Tier 2 cluster at the scale of 8 000 CPU cores for the last two years, and compare with the conventional setup of grid services.
APA, Harvard, Vancouver, ISO, and other styles
39

Charan Shankar Kummarapurugu. "Role-based access control in cloud-native applications: Evaluating best practices for secure multi-tenant Kubernetes environments." World Journal of Advanced Research and Reviews 1, no. 2 (March 30, 2019): 045–53. http://dx.doi.org/10.30574/wjarr.2019.1.2.0008.

Full text
Abstract:
As cloud-native applications grow in complexity and adoption, particularly within multi-tenant Kubernetes environments, security and access control mechanisms are paramount. Role-Based Access Control (RBAC) is increasingly utilized as a critical security framework to manage permissions across users and services in these cloud-native platforms. However, implementing RBAC in Kubernetes presents unique challenges, especially in multi-tenant setups where robust access separation and efficient permission management are essential. This paper explores best practices for RBAC in multi-tenant Kubernetes environments, highlighting architectural design principles, potential vulnerabilities, and mitigation strategies. We propose an optimized RBAC model tailored for cloud-native applications, emphasizing role hierarchies, namespace isolation, and scalable access management. Our approach aims to enhance security by reducing the risk of privilege escalation and ensuring compliance with security policies across tenant boundaries. Experimental evaluation demonstrates the effectiveness of our model in minimizing security risks and providing scalable access control in Kubernetes clusters. These findings offer actionable insights for organizations seeking to secure cloud-native applications in shared and multi-tenant infrastructures.
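The namespace isolation at the heart of this model can be sketched as a toy permission check: a Role grants verbs on resources within one namespace, and a RoleBinding attaches it to a subject. All names below are illustrative, not from the paper:

```python
# Role rules: (namespace, role name) -> set of (verb, resource) grants.
roles = {
    ("team-a", "pod-reader"): {("get", "pods"), ("list", "pods")},
}
# RoleBindings attach a role to a subject within one namespace only.
bindings = [
    {"namespace": "team-a", "role": "pod-reader", "subject": "alice"},
]

def allowed(subject, verb, resource, namespace):
    """Namespace isolation: a binding in team-a grants nothing in team-b."""
    for b in bindings:
        if b["subject"] == subject and b["namespace"] == namespace:
            rules = roles.get((namespace, b["role"]), set())
            if (verb, resource) in rules:
                return True
    return False               # deny by default
```

Deny-by-default plus per-namespace bindings is what keeps one tenant's grants from leaking across tenant boundaries.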
APA, Harvard, Vancouver, ISO, and other styles
40

Amrutham, Naresh Kumar. "Enhancing Kubernetes Observability: A Synthetic Testing Approach for Improved Impact Analysis." International Journal for Research in Applied Science and Engineering Technology 12, no. 10 (October 31, 2024): 468–75. http://dx.doi.org/10.22214/ijraset.2024.64556.

Full text
Abstract:
Kubernetes has become the cornerstone for deploying and managing containerized applications. Its dynamic and distributed nature, while offering scalability and resilience, introduces significant complexities in observability, monitoring, and impact analysis. Traditional monitoring solutions often fall short in providing visibility into the functionalities affected by interruptions in the Kubernetes platform, leading to prolonged downtime and delayed impact analysis. This paper investigates the challenges associated with achieving effective observability for Kubernetes environments due to their distributed architecture. We highlight the limitations of relying solely on conventional metrics, logs, and events to understand the impact of system interruptions. To address these challenges, we propose an integrated approach that combines synthetic testing with existing observability tools. By executing synthetic transactions and simulating user interactions with the Kubernetes platform across various use cases, this method proactively evaluates the system's performance and detects issues that passive monitoring might overlook. This comprehensive view enables faster identification of impacted functionalities. Through practical experiments, we demonstrate how incorporating synthetic testing improves the detection of service degradations and supports more effective troubleshooting strategies.
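The classification step of such a synthetic check, once probe results are in hand, can be sketched as follows; the check names and SLO value are hypothetical:

```python
def probe_report(results, slo_ms):
    """results: list of (check_name, latency_ms or None on failure).
    Classify each synthetic transaction against a latency SLO."""
    report = {}
    for name, latency in results:
        if latency is None:
            report[name] = "down"        # probe failed outright
        elif latency > slo_ms:
            report[name] = "degraded"    # responded, but too slowly
        else:
            report[name] = "ok"
    return report
```

A real harness would gather the `results` list by periodically executing synthetic transactions against the cluster; classifying them per use case is what surfaces which functionality is impacted.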
41

Amrutham, Naresh Kumar. "Optimizing Kubernetes Environments: Best Practices for Configuring and Managing Admission Webhooks." International Journal for Research in Applied Science and Engineering Technology 12, no. 10 (October 31, 2024): 1044–51. http://dx.doi.org/10.22214/ijraset.2024.64789.

Abstract:
Kubernetes has emerged as the leading platform for container orchestration, offering a robust framework for automating the deployment, scaling, and management of containerized applications. As its adoption increases, managing diverse workloads across various environments becomes increasingly complex, necessitating mechanisms for policy enforcement, compliance assurance, and dynamic configuration management. In Kubernetes, admission webhooks provide a means to intercept requests, enabling the mutation and validation of these requests to enforce custom policies. This paper presents comprehensive guidelines for configuring and operating admission webhooks to enhance security, consistency, and operational efficiency in Kubernetes environments. Through this paper, readers will gain hands-on experience in implementing best practices, thereby optimizing the use of admission webhooks to effectively maintain organizational policies and standards.
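The core of a validating admission webhook is a function from an AdmissionReview request to an allow/deny response. The hedged Python sketch below enforces one hypothetical policy (every object must carry a `team` label); a real webhook would serve this logic over HTTPS and be registered through a ValidatingWebhookConfiguration.

```python
# Sketch of validating-webhook decision logic. The required label ("team")
# is an illustrative policy, not taken from the paper.

def review(admission_review, required_label="team"):
    """Build an AdmissionReview response for one intercepted request."""
    request = admission_review["request"]
    labels = request["object"].get("metadata", {}).get("labels", {})
    allowed = required_label in labels
    response = {"uid": request["uid"], "allowed": allowed}
    if not allowed:
        # The message is surfaced to the client that issued the request.
        response["status"] = {
            "message": f"denied: missing required label '{required_label}'"
        }
    return {"apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": response}
```

Because the response must echo the request `uid`, the API server can correlate the verdict with the original admission request.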
42

Gajbhiye, Bipin, Om Goel, and Pandi Kirupa Gopalakrishna Pandian. "Managing Vulnerabilities in Containerized and Kubernetes Environments." Journal of Quantum Science and Technology 1, no. 2 (June 30, 2024): 59–71. http://dx.doi.org/10.36676/jqst.v1.i2.16.

Abstract:
The rise of containerized environments, exemplified by Docker and Kubernetes, has revolutionized software deployment and orchestration, enabling agile development and efficient resource utilization. However, the adoption of these technologies also introduces unique security challenges that organizations must address to safeguard their applications and infrastructure. This paper explores the complexities of managing vulnerabilities in containerized and Kubernetes environments, offering a comprehensive analysis of the potential risks and strategies to mitigate them. Containers encapsulate applications with their dependencies, ensuring consistency across different environments. However, this encapsulation can mask underlying vulnerabilities in the application code, base images, or third-party libraries. The ephemeral nature of containers, designed to be short-lived and scalable, adds another layer of complexity, as vulnerabilities can propagate rapidly across environments if not detected and addressed promptly. Moreover, the shared nature of the underlying host operating system and kernel in containerized environments increases the attack surface, making it crucial to secure both the containers and the host. Kubernetes, as a powerful orchestration platform, introduces additional layers of complexity in vulnerability management. The dynamic nature of Kubernetes clusters, with their multiple components such as pods, services, and nodes, can lead to misconfigurations, inadequate access controls, and exposure to security threats. Misconfigurations, such as overly permissive network policies or improper role-based access controls (RBAC), can lead to unauthorized access, privilege escalation, and data breaches.
Additionally, the integration of third-party plugins and extensions into Kubernetes clusters can introduce new vulnerabilities, making it imperative to monitor and manage these components effectively. This paper delves into several key aspects of vulnerability management in containerized and Kubernetes environments. Firstly, it examines the importance of securing container images by employing best practices such as using minimal base images, regularly updating images, and scanning them for known vulnerabilities. The paper highlights the role of image scanning tools that can detect vulnerabilities in both base images and application code, emphasizing the need for continuous scanning throughout the development lifecycle. Secondly, the paper discusses the significance of runtime security in containerized environments. While securing container images is critical, monitoring and protecting containers during runtime is equally important. The paper explores tools and techniques for runtime security, including anomaly detection, behavior analysis, and intrusion detection systems that can identify and respond to threats in real time. Furthermore, the paper addresses the challenges of managing vulnerabilities in Kubernetes clusters. It underscores the importance of securing the Kubernetes control plane, which includes securing API servers and etcd databases and implementing stringent RBAC policies. The paper also explores the role of network security in Kubernetes, advocating for the use of network policies to control traffic flow between pods and ensure that only authorized communication is allowed. In addition to technical measures, the paper emphasizes the need for organizational practices to manage vulnerabilities effectively. This includes fostering a security-first culture, conducting regular security audits, and ensuring that development and operations teams are aligned on security best practices.
The paper also highlights the importance of incident response planning and the need for rapid patching and updates to address newly discovered vulnerabilities. In conclusion, managing vulnerabilities in containerized and Kubernetes environments requires a multifaceted approach that combines technical measures with organizational practices. As organizations increasingly rely on these technologies for their application deployment and orchestration, a proactive and holistic approach to security is essential to mitigate risks and protect critical assets. This paper provides a roadmap for organizations to enhance their vulnerability management strategies, ensuring that their containerized and Kubernetes environments are secure, resilient, and capable of withstanding evolving threats.
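One concrete instance of the network controls discussed above is a default-deny ingress NetworkPolicy. The sketch below constructs such a manifest as a Python dictionary; the namespace name is an illustrative assumption, and applying it would normally be done with `kubectl apply` or a client library.

```python
# Hedged sketch of a default-deny NetworkPolicy, one of the pod-to-pod
# traffic controls an operator can use to limit lateral movement.

def default_deny_policy(namespace):
    """Deny all ingress to every pod in the namespace unless another
    policy explicitly allows it."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-ingress", "namespace": namespace},
        "spec": {
            "podSelector": {},           # empty selector matches all pods
            "policyTypes": ["Ingress"],  # no ingress rules listed => deny all
        },
    }
```

Starting from deny-all and adding narrow allow rules per service mirrors the least-privilege posture the paper recommends for RBAC.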
43

Aden, Abdiaziz Abdullahi. "Microservice Architecture in E-commerce Platforms using Docker and Kubernetes." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (April 26, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem31735.

Abstract:
Microservice architecture has transformed software development and deployment by increasing modularity, scalability, and flexibility. This technique divides apps into smaller, independent services that communicate over HTTP APIs. Docker plays an important role in containerizing these services, guaranteeing consistent operation across several settings. Docker containers are lightweight, use the host kernel, and start quicker than typical virtual machines, making them perfect for microservices. Kubernetes, Google's open-source technology, enhances microservices by automating container deployment, scaling, and administration across host clusters. This combination creates a stable environment for delivering microservices, allowing for continuous integration and deployment (CI/CD). This article investigates the synergistic usage of microservices, Docker, and Kubernetes, showing their combined efficiency in developing, deploying, and scaling applications across many environments. Keywords: microservice, architecture, Monolithic Architecture, Kubernetes, Docker
44

Nguyen, Nguyen Dinh, and Taehong Kim. "Balanced Leader Distribution Algorithm in Kubernetes Clusters." Sensors 21, no. 3 (January 28, 2021): 869. http://dx.doi.org/10.3390/s21030869.

Abstract:
Container-based virtualization is becoming a de facto way to build and deploy applications because of its simplicity and convenience. Kubernetes is a well-known open-source project that provides an orchestration platform for containerized applications. An application in Kubernetes can contain multiple replicas to achieve high scalability and availability. Stateless applications have no requirement for persistent storage; however, stateful applications require persistent storage for each replica. Therefore, stateful applications usually require strong consistency of data among replicas. To achieve this, the application often relies on a leader, which is responsible for maintaining consistency and coordinating tasks among replicas. This leads to the problem that the leader often bears a heavy load due to this inherent design. In a Kubernetes cluster, having the leaders of multiple applications concentrated on a specific node may become a bottleneck within the system. In this paper, we propose a leader election algorithm that overcomes the bottleneck problem by evenly distributing the leaders throughout the nodes in the cluster. We also conduct experiments to prove the correctness and effectiveness of our leader election algorithm compared with the default algorithm in Kubernetes.
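The balancing goal can be illustrated with a toy placement routine: assign each new application's leader to the node currently hosting the fewest leaders. This is a simplified sketch of the distribution objective, not the paper's actual election protocol.

```python
# Toy sketch of balanced leader placement across cluster nodes.

def pick_leader_node(leaders_per_node):
    """Return the node with the fewest leaders (ties broken by node name)."""
    return min(sorted(leaders_per_node), key=lambda n: leaders_per_node[n])

def place_leaders(nodes, apps):
    """Greedily spread one leader per application over the given nodes."""
    counts = {n: 0 for n in nodes}
    placement = {}
    for app in apps:
        node = pick_leader_node(counts)
        placement[app] = node
        counts[node] += 1
    return placement
```

With a default election (e.g., first replica wins), leaders can pile up on one node; the greedy rule above keeps the per-node leader count within one of the minimum.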
45

Galande, Prof Bhagwati. "EVENT DRIVEN KUBERNETES AUTOSCALER FOR CLOUD NATIVE APPS." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 03 (March 1, 2024): 1–13. http://dx.doi.org/10.55041/ijsrem28973.

Abstract:
Kubernetes and Docker have emerged as key solutions for managing and orchestrating containers in microservice applications. Autoscaling, one of Kubernetes' key features, dynamically adjusts resources such as CPU and memory to fit changing demands, giving us a strong and efficient way to manage applications. Yet conventional autoscaling methods, such as the Horizontal Pod Autoscaler (HPA), often rely on customary metrics like CPU and memory utilization, which may not fully suit the constant changes in modern applications. This project addresses the problem by proposing a new solution, an Event Driven Kubernetes Autoscaler for Cloud Native Apps, designed to meet the scaling needs of applications based on message queue traffic. The custom scaling algorithm is an important part of the project: unlike traditional autoscalers, which require predefined metrics and provide little flexibility, the custom metric-based autoscaler allows developers to select their own metrics and scaling parameters, providing flexibility and a better user experience, as it can meet the performance needs of all applications. Keywords: Kubernetes, Docker, Microservices, Autoscaling, Cloud-Native, Horizontal Pod Autoscaler (HPA)
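The queue-driven scaling rule such an autoscaler applies can be sketched in a few lines: derive the desired replica count from the message backlog rather than CPU, then clamp it to configured bounds. The per-pod throughput figure is an illustrative assumption that a real deployment would measure, and the bounds are hypothetical defaults.

```python
import math

# Hedged sketch of event-driven scaling: replicas proportional to backlog.

def desired_replicas(queue_length, msgs_per_pod, min_replicas=1, max_replicas=20):
    """Scale proportionally to queue backlog, clamped to configured bounds."""
    wanted = math.ceil(queue_length / msgs_per_pod)
    return max(min_replicas, min(max_replicas, wanted))
```

A controller would poll the broker (or receive events), compute this value, and patch the Deployment's replica count, reacting to demand before CPU pressure ever materializes.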
46

Haryo, Ahmad Kusumo, and Chandra Kusuma. "Implementasi Penggunaan Kubernetes Cluster Google Cloud Platform untuk Deployment Aplikasi Wiki.js." Jurnal Informatika dan Teknologi Komputer (J-ICOM) 5, no. 1 (April 1, 2024): 11–20. http://dx.doi.org/10.55377/j-icom.v5i1.7944.

Abstract:
Every company has its own digital products, and with the rapid growth of digital technology and user demand, companies must deliver their digital services so that they are always available and accessible at any time. To provide digital services at all times and at scale, developers at these companies build applications according to demand and need. This raises several problems: for example, the more applications there are, the more servers are required, which adds to application development costs; in addition, dependencies between applications cause problems when developers release a new version of an application. Google Cloud Platform is a well-known cloud computing platform with many services, including Google Kubernetes Engine. Kubernetes is a container orchestration technology that allows developers to manage containers in large numbers, which helps applications avoid downtime, since each container has resources separate from other applications. This study conducts a trial deployment of the wiki.js application through Kubernetes. The result of the trial is a wiki.js application that can be accessed by making use of Kubernetes cluster technology.
47

Мазманян, В. Г., and Э. М. Вихтенко. "Algorithm for starting a multinode cluster Kubernetes and working with it in the CentOS Linux operating system." Vestnik of Russian New University. Series «Complex systems: models, analysis, management», no. 2 (June 21, 2022): 149–56. http://dx.doi.org/10.18137/rnu.v9187.22.02.p.149.

Abstract:
The article describes the algorithm for running a two-node Kubernetes cluster using the Docker container runtime on the CentOS Linux operating system, and the algorithm for deploying a simple Node.js web application in this cluster.
48

Deket, Milan. "POREĐENJE DOCKER SWARM I KUBERNETES." Zbornik radova Fakulteta tehničkih nauka u Novom Sadu 36, no. 03 (March 9, 2021): 488–91. http://dx.doi.org/10.24867/12be25deket.

Abstract:
This paper describes the basic concepts and usage of the Docker, Docker Swarm, and Kubernetes tools. It explains what Docker is, what containers are, and how they are used. The chapters on Docker Swarm and Kubernetes describe how to manage containers in more complex solutions that can span multiple servers, as well as how to monitor the state of the whole system and thereby guard against unexpected errors, i.e., unexpected system shutdowns.
49

Schultz, David, Heath Skarlupka, Vladimir Brik, and Gonzalo Merino. "CVMFS: Stratum 0 in Kubernetes." EPJ Web of Conferences 214 (2019): 07032. http://dx.doi.org/10.1051/epjconf/201921407032.

Abstract:
IceCube is a cubic kilometer neutrino detector located at the south pole. CVMFS is a key component of IceCube's Distributed High Throughput Computing analytics workflow for sharing 500 GB of software across datacenters worldwide. Building the IceCube software suite across multiple platforms and deploying it into CVMFS has until recently been a manual, time-consuming task that doesn't fit well within an agile continuous delivery framework. Within the last 2 years a plethora of tooling around microservices has created an opportunity to upgrade the IceCube software build and deploy pipeline. We present a framework using Kubernetes to deploy Buildbot. The Buildbot pipeline is a set of pods (docker containers) in the Kubernetes cluster that builds the IceCube software across multiple platforms, tests the new software for critical errors, syncs the software to a containerized CVMFS server, and finally executes a publish. The time from code commit to CVMFS publish has been greatly reduced and has enabled the capability of publishing nightly builds to CVMFS.
50

Molleti, Ramasankar. "Kubernetes Advanced Auto Scaling Techniques." Journal of Mathematical & Computer Applications 1, no. 4 (December 31, 2022): 1–4. http://dx.doi.org/10.47363/jmca/2022(1)e126.

Abstract:
This paper discusses several sophisticated autoscaling strategies in Kubernetes, including horizontal and vertical pod autoscaling, cluster-level autoscaling, and metrics-based autoscaling. In this regard, it covers both predictive and event-triggered autoscaling approaches, their applications, their advantages, and the potential issues relating to them.
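For reference, the standard scaling rule behind the Horizontal Pod Autoscaler mentioned here, as documented for Kubernetes, is desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue). A minimal Python rendering:

```python
import math

# The documented HPA scaling rule, applied to any metric (CPU utilization,
# requests per second, or a custom metric) against its target value.

def hpa_desired(current_replicas, current_metric, target_metric):
    """Desired replica count per the HPA proportional-scaling formula."""
    return math.ceil(current_replicas * (current_metric / target_metric))
```

So a deployment at 3 replicas with a metric at double its target is scaled to 6, while one at half its target is scaled down by half (rounded up); vertical and cluster autoscalers then adjust per-pod resources and node counts around this.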