
Journal articles on the topic 'Serverless Kubernetes'



Consult the top 50 journal articles for your research on the topic 'Serverless Kubernetes.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Pavan, Srikanth SubbaRaju Patchamatla. "Serverless Kubernetes with AI-Augmented DevOps: Optimizing Cloud Infrastructure for Scalable and Cost-Efficient Deployments." Recent Trends in Androids and IOS Applications 7, no. 2 (2025): 1–4. https://doi.org/10.5281/zenodo.14921348.

Abstract:
The emergence of serverless Kubernetes, combined with AI-augmented DevOps, is transforming cloud infrastructure by enabling scalable, cost-efficient deployments. This research explores the role of AI in optimizing serverless Kubernetes environments by integrating predictive resource management, automated failure detection, and intelligent workload scheduling. The study investigates how AI-driven automation enhances scalability, reduces operational costs, and minimizes human intervention in managing Kubernetes clusters. A comparative analysis of AI-based workload scheduling against traditio
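To make the predictive-resource-management idea summarized above concrete, here is a minimal Python sketch (not from the cited paper) in which a sliding-window forecast of request rate is mapped to a bounded replica count; the requests-per-replica budget and scaling bounds are hypothetical.

```python
# Illustrative sketch only: naive moving-average forecast used to pre-scale
# a deployment before expected load. All thresholds are hypothetical.
import math
from collections import deque
from statistics import mean


def forecast_next(request_rates: deque, window: int = 5) -> float:
    """Predict the next-interval request rate from a sliding window."""
    recent = list(request_rates)[-window:]
    return mean(recent) if recent else 0.0


def desired_replicas(predicted_rps: float, rps_per_replica: float = 50.0,
                     min_replicas: int = 1, max_replicas: int = 20) -> int:
    """Translate a predicted request rate into a bounded replica count."""
    needed = math.ceil(predicted_rps / rps_per_replica)
    return max(min_replicas, min(max_replicas, needed))


if __name__ == "__main__":
    history = deque([80, 95, 110, 130, 160], maxlen=60)  # observed requests/s
    prediction = forecast_next(history)
    print("predicted req/s:", prediction)
    print("replicas to request:", desired_replicas(prediction))
```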
2

Adusumilli, Lakshmi Vara Prasad. "Serverless Kubernetes: The Evolution of Container Orchestration." European Journal of Computer Science and Information Technology 13, no. 30 (2025): 20–36. https://doi.org/10.37745/ejcsit.2013/vol13n302036.

Abstract:
This article examines the convergence of serverless computing and Kubernetes orchestration, representing a significant advancement in cloud-native architecture. Serverless Kubernetes implementations address fundamental operational challenges of traditional container orchestration while preserving its powerful capabilities. It explores the technical foundations enabling this evolution, including Virtual Kubelet for node abstraction, KEDA for event-driven scaling, and Knative for serverless abstractions. It analyzes implementations from major cloud providers—AWS EKS on Fargate, Azure Container I
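As an illustration of the event-driven scaling layer mentioned above, the following sketch defines a KEDA ScaledObject as a Python dict and submits it through the Kubernetes custom-objects API; the deployment name, namespace, and Prometheus trigger settings are hypothetical and not taken from the article.

```python
# Illustrative sketch only: a KEDA ScaledObject submitted via the official
# Kubernetes Python client. Names, namespace, and trigger values are
# hypothetical assumptions.
from kubernetes import client, config

scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "orders-scaler", "namespace": "demo"},
    "spec": {
        "scaleTargetRef": {"name": "orders-service"},  # target Deployment
        "minReplicaCount": 0,    # scale to zero when idle
        "maxReplicaCount": 30,
        "triggers": [{
            "type": "prometheus",
            "metadata": {
                "serverAddress": "http://prometheus.monitoring:9090",
                "query": 'sum(rate(http_requests_total{app="orders"}[1m]))',
                "threshold": "100",
            },
        }],
    },
}

if __name__ == "__main__":
    config.load_kube_config()  # assumes a local kubeconfig is available
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="keda.sh", version="v1alpha1", namespace="demo",
        plural="scaledobjects", body=scaled_object)
```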
3

Anila, Gogineni. "Helm for Continuous Delivery of Serverless Applications on Kubernetes." International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences 9, no. 6 (2021): 1–11. https://doi.org/10.5281/zenodo.14880986.

Abstract:
This paper focuses on exploring the utilization of Helm in enabling continuous delivery of serverless applications that run on Kubernetes. Helm provides a package manager for Kubernetes that makes the packaging of serverless architectures more robust through templated Helm charts. By using these charts, developers are able to consolidate and orchestrate deployment pipelines over different environments. The problem addressed in the study is related to real-world issues of serverless applications, focusing on how serverless technologies have to work with Kubernetes, issues with multi
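A minimal sketch of one way such a pipeline step could look, assuming Helm is installed and the CI runner already has cluster access; the chart path, release names, and values files are hypothetical.

```python
# Illustrative CD step: drive `helm upgrade --install` so each environment
# receives the same chart with environment-specific values.
import subprocess


def deploy(release: str, chart: str, namespace: str, values_file: str) -> None:
    """Idempotently install or upgrade a Helm release for one environment."""
    subprocess.run(
        [
            "helm", "upgrade", "--install", release, chart,
            "--namespace", namespace, "--create-namespace",
            "-f", values_file, "--wait",
        ],
        check=True,
    )


if __name__ == "__main__":
    for env in ("staging", "production"):
        deploy(
            release=f"orders-{env}",
            chart="./charts/orders-function",
            namespace=env,
            values_file=f"values-{env}.yaml",
        )
```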
4

Decker, Jonathan, Piotr Kasprzak, and Julian Martin Kunkel. "Performance Evaluation of Open-Source Serverless Platforms for Kubernetes." Algorithms 15, no. 7 (2022): 234. http://dx.doi.org/10.3390/a15070234.

Abstract:
Serverless computing has grown massively in popularity over the last few years, and has provided developers with a way to deploy function-sized code units without having to take care of the actual servers or deal with logging, monitoring, and scaling of their code. High-performance computing (HPC) clusters can profit from improved serverless resource sharing capabilities compared to reservation-based systems such as Slurm. However, before running self-hosted serverless platforms in HPC becomes a viable option, serverless platforms must be able to deliver a decent level of performance. Other re
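In the spirit of such performance evaluations (this is not the authors' harness), a small Python script can time repeated invocations of a deployed function endpoint and report latency percentiles; the endpoint URL is a placeholder.

```python
# Rough benchmarking sketch: measure end-to-end invocation latency of a
# function exposed over HTTP. The URL is a hypothetical placeholder.
import statistics
import time
import urllib.request


def invoke(url: str) -> float:
    """Return the wall-clock latency of one invocation in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start


def benchmark(url: str, n: int = 50) -> None:
    samples = sorted(invoke(url) for _ in range(n))
    print(f"n={n} median={statistics.median(samples) * 1e3:.1f} ms "
          f"p95={samples[int(0.95 * (n - 1))] * 1e3:.1f} ms")


if __name__ == "__main__":
    benchmark("http://gateway.example.local/function/echo")
```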
5

Sharma, Vivek. "AI-DRIVEN CLOUD INFRASTRUCTURE: ADVANCES IN KUBERNETES AND SERVERLESS COMPUTING." International Journal of Advanced Research in Computer Science 16, no. 2 (2025): 65–70. https://doi.org/10.26483/ijarcs.v16i2.7234.

Abstract:
Artificial Intelligence has been integrated into cloud infrastructure, revolutionizing modern computing through automation, scaling, and efficiency. Two key enablers are Kubernetes and serverless computing. Kubernetes, a container orchestration platform, benefits from AI-driven enhancements in workload scheduling, auto-scaling, and resource optimization. By combining AI-based predictive analytics with container deployment, operational overhead is reduced and fault tolerance is improved. However, serverless computing takes away the management of infra
6

Mehdi Syed, Ali Asghar, and Shujat Ali. "Kubernetes and AWS Lambda for Serverless Computing: Optimizing Cost and Performance Using Kubernetes in a Hybrid Serverless Model." International Journal of Emerging Trends in Computer Science and Information Technology 5 (2024): 50–60. https://doi.org/10.63282/3050-9246.ijetcsit-v5i4p106.

7

Bandaru, Santosh Panendra. "Cloud Computing for Software Engineers: Building Serverless Applications." International Journal of Computer Science and Mobile Computing 12, no. 11 (2023): 90–114. https://doi.org/10.47760/ijcsmc.2023.v12i11.007.

Abstract:
Serverless computing has emerged as a revolutionary paradigm in cloud computing, enabling software engineers to focus on writing application logic while cloud providers handle infrastructure management. This research paper explores the evolution and foundational principles of cloud computing, providing a deep dive into serverless architectures. It examines core technologies such as Function as a Service (FaaS) and Backend as a Service (BaaS), evaluates leading serverless platforms, and discusses development methodologies, security considerations, and cost optimization strategies. Additionally,
8

Bezrąk, Krzysztof, and Sławomir Przyłucki. "Impact of the cloud application programming language on the performance of its implementation in selected serverless environments." Journal of Computer Sciences Institute 14 (March 30, 2020): 31–36. http://dx.doi.org/10.35784/jcsi.1572.

Abstract:
Recent years of cloud technology development have brought a sharp increase in interest in solutions known as serverless systems. Their performance, and thus usefulness in potential applications, strongly depends on the method of program implementation of specific tasks. The article analyzes the impact of selected, currently the most popular, programming languages on the performance of the serverless test infrastructure running in an environment managed by the Kubernetes system. The collected data were used to formulate conclusions regarding the suitability of individual languages in the condit
9

Femminella, Mauro, and Gianluca Reali. "Application of Proximal Policy Optimization for Resource Orchestration in Serverless Edge Computing." Computers 13, no. 9 (2024): 224. http://dx.doi.org/10.3390/computers13090224.

Abstract:
Serverless computing is a new cloud computing model suitable for providing services in both large cloud and edge clusters. In edge clusters, the autoscaling functions play a key role on serverless platforms as the dynamic scaling of function instances can lead to reduced latency and efficient resource usage, both typical requirements of edge-hosted services. However, a badly configured scaling function can introduce unexpected latency due to so-called "cold start" events or service request losses. In this work, we focus on the optimization of resource-based autoscaling on OpenFaaS, the most-ad
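As a conceptual aside (not the paper's implementation), the sketch below shows the kind of state/action/reward interface a reinforcement-learning agent such as PPO could use to drive replica decisions for a serverless function; metric values here are simulated rather than read from OpenFaaS or Prometheus.

```python
# Toy autoscaling environment: actions are -1 (scale in), 0 (hold), +1 (scale out).
# The load and utilization models are crude, hypothetical stand-ins.
import random
from dataclasses import dataclass


@dataclass
class ScalingState:
    cpu_utilization: float   # fraction of requested CPU in use
    inflight_requests: int
    replicas: int


class AutoscalingEnv:
    def __init__(self, min_replicas: int = 1, max_replicas: int = 20):
        self.min_replicas, self.max_replicas = min_replicas, max_replicas
        self.state = ScalingState(0.5, 10, min_replicas)

    def step(self, action: int) -> tuple[ScalingState, float]:
        replicas = min(self.max_replicas,
                       max(self.min_replicas, self.state.replicas + action))
        load = random.randint(5, 60)                    # simulated demand
        cpu = min(1.0, load / (replicas * 25))          # crude utilization model
        # Reward latency headroom while penalizing over-provisioning.
        reward = -abs(cpu - 0.7) - 0.02 * replicas
        self.state = ScalingState(cpu, load, replicas)
        return self.state, reward


if __name__ == "__main__":
    env = AutoscalingEnv()
    for _ in range(5):
        state, reward = env.step(random.choice([-1, 0, 1]))
        print(state, round(reward, 3))
```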
10

Journal of Global Research in Multidisciplinary Studies (JGRMS). "Serverless Computing and Cloud-Native Applications Trends in AWS, Kubernetes, and DevOps." Journal of Global Research in Multidisciplinary Studies (JGRMS) 01, no. 03 (2025): 01–07. https://doi.org/10.5281/zenodo.15245283.

Abstract:
The recent evolution of software development has resulted in extensive use of cloud-native and serverless computing structures, which have changed the way applications are designed and deployed. This paper delivers an extensive evaluation of the latest developments in cloud-native and serverless models, focusing especially on DevOps methodology integration. Cloud-native architectures, with their features of containerization, microservices, and orchestration, bring agility and scalability alongside fault tolerance. Through serverless computing, developers gain complete infrastructure abstraction to r
11

Grace Joseph, Sunandha Rajagopal, Amrita Priya K, and Sreelekshmi R. "Dynamic Resource Scheduling Approaches in Server Less Computing." International Research Journal on Advanced Engineering and Management (IRJAEM) 3, no. 05 (2025): 1749–58. https://doi.org/10.47392/irjaem.2025.0277.

Abstract:
Serverless computing has emerged as a transformative paradigm in cloud computing, offering event-driven execution and automated resource management without the need for explicit infrastructure provisioning. However, its dynamic, multi-tenant, and stateless nature introduces significant challenges in resource scheduling, particularly in maintaining a balance between performance, cost efficiency, and service-level agreements (SLAs). This paper presents a comprehensive review of dynamic resource scheduling approaches in serverless architectures, categorizing them into machine learning-based, heur
12

Swethasri Kavuri. "Integrating Kubernetes Autoscaling for Cost Efficiency in Cloud Services." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 5 (2024): 480–502. http://dx.doi.org/10.32628/cseit241051038.

Abstract:
Kubernetes autoscaling mechanisms can be integrated into cloud services to achieve cost efficiency. Organizations have turned towards containerized applications and microservices architectures, so optimizing and using resources appropriately, in line with the expected operational cost, becomes the need of the hour. There are several autoscaling mechanisms within Kubernetes, including the Horizontal Pod Autoscaler, Vertical Pod Autoscaler, and Cluster Autoscaler, that work towards cost optimization. We study predictive scaling algorithms, multi-dimensional autoscaling strategies, and machine learning-based approa
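For readers unfamiliar with the mechanisms named above, here is a minimal sketch of a CPU-based HorizontalPodAutoscaler applied with the official Kubernetes Python client; the target Deployment, namespace, and 60% utilization goal are hypothetical.

```python
# Illustrative sketch: an autoscaling/v2 HPA defined as a plain dict and
# applied through kubernetes.utils. All names and thresholds are hypothetical.
from kubernetes import client, config, utils

hpa = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "checkout-hpa", "namespace": "demo"},
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1", "kind": "Deployment",
                           "name": "checkout"},
        "minReplicas": 2,
        "maxReplicas": 15,
        "metrics": [{
            "type": "Resource",
            "resource": {"name": "cpu",
                         "target": {"type": "Utilization",
                                    "averageUtilization": 60}},
        }],
    },
}

if __name__ == "__main__":
    config.load_kube_config()          # assumes kubeconfig access
    utils.create_from_dict(client.ApiClient(), hpa, namespace="demo")
```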
13

Choudhury, Amit, and Abhishek Kartik Nandyala. "Microservices Deployment Strategies: Navigating Challenges with Kubernetes and Serverless Architectures." International Journal of Innovative Research in Engineering and Management 11, no. 5 (2024): 127–34. http://dx.doi.org/10.55524/ijirem.2024.11.5.18.

Abstract:
This paper aims at exploring the effects of performance tuning on concerns such as reliability and uptime with respect to the three current architectures: cloud, on-premise, and hybrid models. This paper shows that performance tuning does enhance system stability and availability based on comparative data on CPU load, memory consumption, disk I/O operations, network delays, application response time, and system failure rates. Employing both quantitative and qualitative data, the study compares features taken from real-world system logs and information obtained from expert interviews indicating
14

Elkholy, Mohamed, and Marwa A. Marzok. "Light weight serverless computing at fog nodes for internet of things systems." Indonesian Journal of Electrical Engineering and Computer Science 26, no. 1 (2022): 394–403. https://doi.org/10.11591/ijeecs.v26.i1.pp394-403.

Abstract:
Internet of things (IoT) systems collect large amounts of data from huge numbers of sensors. A wide range of IoT systems rely on cloud resources to process and analyze the collected data. However, passing large amounts of data to the cloud affects the overall performance and cannot support real-time requirements. Serverless computing is a promising technique that allows developers to write application code in any programming language and specify an event to start its execution. Thus, IoT systems can benefit greatly from a serverless environment. The proposed work introduces a framework to allow
15

Naayini, Prudhvi. "Scalable AI Model Deployment and Management on Serverless Cloud Architecture." International Journal of Electrical, Electronics and Computers 9, no. 1 (2024): 1–12. https://doi.org/10.22161/eec.91.1.

Abstract:
Scalable deployment of deep learning models in the cloud faces challenges in balancing performance, cost, and manageability. This paper investigates serverless cloud architecture for AI model inference, focusing on AWS technologies such as AWS Lambda, API Gateway, and Kubernetes-based serverless extensions (e.g., AWS EKS with Knative). We first outline the limitations of traditional, server-based model hosting to motivate the serverless approach. Then, we present novel strategies for scalable model serving: an adaptive resource provisioning algorithm, intelligent model caching, and efficient m
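A hedged sketch of the serving pattern discussed above (not the paper's system): an AWS Lambda handler that caches a model outside the handler so warm invocations skip the load cost; the model loader is a trivial placeholder.

```python
# Minimal inference handler sketch. The "model" here is a stand-in; a real
# deployment might load weights from S3 or a container image layer.
import json

_MODEL = None  # cached across warm invocations of the same execution environment


def _load_model():
    # Placeholder loader: returns a trivial callable acting as the model.
    return lambda features: sum(features)


def lambda_handler(event, context):
    """Entry point configured as the function handler in Lambda."""
    global _MODEL
    if _MODEL is None:          # cold-start path
        _MODEL = _load_model()
    features = json.loads(event.get("body") or "{}").get("features", [])
    prediction = _MODEL(features)
    return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}
```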
16

Petrosyan, Davit, and Hrachya Astsatryan. "Serverless High-Performance Computing over Cloud." Cybernetics and Information Technologies 22, no. 3 (2022): 82–92. http://dx.doi.org/10.2478/cait-2022-0029.

Abstract:
HPC clouds may provide fast access to fully configurable and dynamically scalable virtualized HPC clusters to address the complex and challenging computation and storage-intensive requirements. The complex environmental, software, and hardware requirements and dependencies of such systems make it challenging to carry out large-scale simulations, prediction systems, and other data and compute-intensive workloads over the cloud. The article aims to present an architecture that enables HPC workloads to be serverless over the cloud (Shoc), one of the most critical cloud capabilities f
17

Pentaparthi, Sai Kalyan Reddy. "Dissecting Serverless Computing for AI-Driven Network Functions: Concepts, Challenges, and Opportunities." European Journal of Computer Science and Information Technology 13, no. 6 (2025): 1–12. https://doi.org/10.37745/ejcsit.2013/vol13n6112.

Abstract:
Serverless computing represents a transformative paradigm in cloud architecture that is fundamentally changing how network functions are deployed and managed. This article examines the intersection of serverless computing and artificial intelligence in the context of network functions, highlighting how this convergence enables more efficient, scalable, and intelligent network operations. The serverless model abstracts infrastructure management while offering automatic scaling and consumption-based pricing, creating an ideal environment for deploying AI-driven network capabilities. The architec
18

Elkholy, Mohamed, and Marwa A. Marzok. "Light weight serverless computing at fog nodes for internet of things systems." Indonesian Journal of Electrical Engineering and Computer Science 26, no. 1 (2022): 394. http://dx.doi.org/10.11591/ijeecs.v26.i1.pp394-403.

Abstract:
Internet of things (IoT) systems collect large amounts of data from huge numbers of sensors. A wide range of IoT systems rely on cloud resources to process and analyze the collected data. However, passing large amounts of data to the cloud affects the overall performance and cannot support real-time requirements. Serverless computing is a promising technique that allows developers to write application code in any programming language and specify an event to start its execution. Thus, IoT systems can benefit greatly from a serverless environment. The proposed work introd
19

Lu, Xinxi, Nan Li, Lijuan Yuan, and Juan Zhang. "Enhancing Resource Utilization Efficiency in Serverless Education: A Stateful Approach with Rofuse." Electronics 13, no. 11 (2024): 2168. http://dx.doi.org/10.3390/electronics13112168.

Abstract:
Traditional container orchestration platforms often suffer from resource wastage in educational settings, and stateless serverless services face challenges in maintaining container state persistence during the teaching process. To address these issues, we propose a stateful serverless mechanism based on Containerd and Kubernetes, focusing on optimizing the startup process for container groups. We first implement a checkpoint/restore framework for container states, providing fundamental support for managing stateful containers. Building on this foundation, we propose the concept of “container g
20

Li, Dazhi, Jiaang Duan, Yan Yao, et al. "SoDa: A Serverless-Oriented Deadline-Aware Workflow Scheduling Engine for IoT Applications in Edge Clouds." Wireless Communications and Mobile Computing 2022 (October 7, 2022): 1–20. http://dx.doi.org/10.1155/2022/7862911.

Abstract:
As a coordination tool, workflow with a large number of interdependent tasks has increasingly become a new paradigm for orchestrating computationally intensive tasks in large-scale and complex Internet of Things (IoT) applications. Serverless computing has also recently been applied to real-world problems at the network edge, primarily aimed at event-based IoT applications. However, existing workflow scheduling algorithms based on the virtual machine resource model are inefficient in ensuring the QoS (Quality of Service) of users on serverless platforms. In this paper, we design a
21

Srikanth Potla. "The evolution of container security in Kubernetes environments." World Journal of Advanced Research and Reviews 26, no. 2 (2025): 2352–62. https://doi.org/10.30574/wjarr.2025.26.2.1741.

Abstract:
This article examines the security challenges associated with containerized applications in Kubernetes environments. It explores the evolution from traditional security models to container-specific approaches needed for ephemeral, distributed workloads. The methodology evaluates security solutions across vulnerability management, compliance monitoring, runtime protection, network security, and access control dimensions. The discussion highlights key challenges including container image vulnerabilities, runtime security enforcement in dynamic environments, multi-tenancy concerns, network segmen
22

Pradeepkumar Palanisamy. "Cloud-native test automation: The future of scalable financial software quality engineering." World Journal of Advanced Engineering Technology and Sciences 15, no. 1 (2025): 2371–79. https://doi.org/10.30574/wjaets.2025.15.1.0461.

Abstract:
Cloud-native test automation is revolutionizing quality engineering for financial software by addressing the challenges traditional testing methodologies face with distributed systems, containerized applications, and microservices architectures. This article explores how financial institutions leverage containerization technologies, serverless execution platforms, and AI-driven test orchestration to maintain rigorous quality standards while accelerating delivery pipelines. The transformative impact of self-healing automation powered by artificial intelligence enables systems to learn from hist
23

Navulipuri, Sreenivasulu. "Engineering Scalable Microservices: A Comparative Study of Serverless Vs. Kubernetes-Based Architectures." International Journal of Scientific Research and Engineering Trends 11, no. 2 (2025): 2023–34. https://doi.org/10.61137/ijsret.vol.11.issue2.333.

24

Hong, Sara, Yeeun Kim, Jaehyun Nam, and Seongmin Kim. "On the Analysis of Inter-Relationship between Auto-Scaling Policy and QoS of FaaS Workloads." Sensors 24, no. 12 (2024): 3774. http://dx.doi.org/10.3390/s24123774.

Abstract:
A recent development in cloud computing has introduced serverless technology, enabling the convenient and flexible management of cloud-native applications. Typically, Function-as-a-Service (FaaS) solutions rely on serverless backend solutions, such as Kubernetes (K8s) and Knative, to leverage the advantages of resource management for underlying containerized contexts, including auto-scaling and pod scheduling. To take advantage of this, recent cloud service providers also deploy self-hosted serverless services by facilitating their on-premises hosted FaaS platforms rather than relying on comm
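To illustrate the kind of auto-scaling policy knobs such studies vary, the sketch below defines a Knative Service with per-revision autoscaling annotations and applies it via the Kubernetes custom-objects API; the image, namespace, and numeric targets are hypothetical.

```python
# Illustrative sketch: a Knative Service whose annotations bound
# concurrency-driven scaling. All names and values are hypothetical.
from kubernetes import client, config

knative_service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "image-resizer", "namespace": "faas"},
    "spec": {
        "template": {
            "metadata": {
                "annotations": {
                    "autoscaling.knative.dev/target": "20",    # target concurrent requests per pod
                    "autoscaling.knative.dev/min-scale": "1",  # keep one warm pod to avoid cold starts
                    "autoscaling.knative.dev/max-scale": "50",
                },
            },
            "spec": {"containers": [{"image": "ghcr.io/example/resizer:latest"}]},
        },
    },
}

if __name__ == "__main__":
    config.load_kube_config()
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="serving.knative.dev", version="v1", namespace="faas",
        plural="services", body=knative_service)
```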
25

Putra, Christian Bayu Anggoro, Dimas Prasetyo Tegar Asmoro, and Akmal Budi Yulianto. "Serverless Computing: A Comparative Analysis of Cloud Run and Cloud Function Prices on Google Kubernetes Engine Cluster Node Management Google Cloud." Jurnal Teknologi Informatika dan Komputer 11, no. 2 (2025): 760–70. https://doi.org/10.37012/jtik.v11i2.2780.

Abstract:
The advancement of cloud computing technology has revolutionized the way companies build and deploy applications, offering unprecedented flexibility, scalability, and efficiency. One of the most significant innovations in this space is serverless computing, which allows developers to build and run applications without managing the underlying server infrastructure. This model fundamentally changes the application development paradigm, shifting from static resource allocation to an event-driven model where resources are consumed only when needed (Sharma et al., 2021). The increasing adoption of
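A simple cost-model sketch to accompany the price comparison above; the pricing parameters are deliberately left as inputs rather than actual Google Cloud rates, which vary by region and change over time.

```python
# Generic request-driven serverless cost model. The example rates in the
# usage block are purely hypothetical, not real provider pricing.
def monthly_cost(invocations: int, avg_seconds: float, gb_memory: float,
                 price_per_million_invocations: float,
                 price_per_gb_second: float,
                 price_per_vcpu_second: float = 0.0,
                 vcpus: float = 0.0) -> float:
    """Estimate one month's bill for a request-driven serverless service."""
    invocation_cost = invocations / 1_000_000 * price_per_million_invocations
    compute_seconds = invocations * avg_seconds
    memory_cost = compute_seconds * gb_memory * price_per_gb_second
    cpu_cost = compute_seconds * vcpus * price_per_vcpu_second
    return invocation_cost + memory_cost + cpu_cost


if __name__ == "__main__":
    # Hypothetical rates purely for illustration.
    estimate = monthly_cost(3_000_000, 0.2, 0.5,
                            price_per_million_invocations=0.40,
                            price_per_gb_second=0.0000025,
                            price_per_vcpu_second=0.000024,
                            vcpus=1.0)
    print(f"estimated monthly cost: ${estimate:.2f}")
```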
26

Yerra, Srikanth. "Intelligent Workload Readjustment of Serverless Functions in Cloud to Edge Environment." International Journal of Data Science and Machine Learning 05, no. 01 (2025): 182–91. https://doi.org/10.55640/ijdsml-05-01-18.

Abstract:
Serverless technologies represent a significant advancement in cloud computing, characterized by their exceptional scalability and the granular subscription-based model provided by leading public cloud vendors. Concurrently, serverless platforms that facilitate the FaaS architecture enable users to enjoy numerous benefits while functioning on the on-site infrastructures of enterprises. This makes it possible to install and use them on several tiers of the cloud-to-edge continuum, from IoT devices at the user end to on-site clusters near the main data sources, or directly in the cloud. The challe
27

Santhosh, Podduturi. "Architectural Patterns for ML in Microservices & Cloud Architecture." INTERNATIONAL JOURNAL OF INNOVATIVE RESEARCH AND CREATIVE TECHNOLOGY 9, no. 1 (2023): 1–13. https://doi.org/10.5281/zenodo.15087171.

Abstract:
Machine Learning (ML) is revolutionizing industries by enabling intelligent decision-making and automation. However, deploying ML models in modern cloud-native applications requires scalable, maintainable, and efficient architectural patterns. This paper explores architectural patterns that facilitate the seamless integration of ML into microservices and cloud-based ecosystems. It discusses various deployment models, including ML Model as a Service (MaaS), Event-Driven ML, Federated Learning, and Serverless ML, highlighting their advantages, challenges, and best practices. The paper delves into
28

Femminella, Mauro, and Gianluca Reali. "Comparison of Reinforcement Learning Algorithms for Edge Computing Applications Deployed by Serverless Technologies." Algorithms 17, no. 8 (2024): 320. http://dx.doi.org/10.3390/a17080320.

Abstract:
Edge computing is one of the technological areas currently considered among the most promising for the implementation of many types of applications. In particular, IoT-type applications can benefit from reduced latency and better data protection. However, the price typically to be paid in order to benefit from the offered opportunities includes the need to use a reduced amount of resources compared to the traditional cloud environment. Indeed, it may happen that only one computing node can be used. In these situations, it is essential to introduce computing and memory resource management techn
29

Diwakar Krishnakumar. "Deploying a scalable PostgreSQL database on a Kubernetes cluster in a data center: A path toward serverless operations." World Journal of Advanced Research and Reviews 26, no. 1 (2025): 1021–27. https://doi.org/10.30574/wjarr.2025.26.1.1091.

Abstract:
This article presents a comprehensive framework for deploying and managing PostgreSQL databases on Kubernetes clusters within data center environments, focusing on achieving serverless-like operations. The article examines the essential components, architectural considerations, and operational strategies required for successful implementation. It addresses key challenges in infrastructure planning, component integration, security implementation, and operational management while highlighting the importance of maintaining high availability and performance optimization. The article explores the i
30

Researcher. "ORCHESTRATING THE CLOUD: AI-ENHANCED RELEASE AUTOMATION IN KUBERNETES ENVIRONMENTS." International Journal of Research In Computer Applications and Information Technology (IJRCAIT) 7, no. 2 (2024): 864–78. https://doi.org/10.5281/zenodo.14045753.

Abstract:
Cloud-native architectures have revolutionized software deployment, necessitating advanced release automation strategies to manage increasingly complex environments. This article explores the synergy between Kubernetes and artificial intelligence in facilitating scalable, self-healing, and optimized deployments for cloud-native applications. We present a comprehensive framework for implementing automated deployment pipelines within Kubernetes clusters, leveraging its native features for horizontal scaling and rolling updates. The integration of AI-driven tools for predictive scaling, anomaly d
31

Sargunakumar, Anishkumar. "Building Resilient Java Applications with Open-Source DevOps Tools." Journal of Software Engineering and Simulation 9, no. 3 (2023): 63–71. https://doi.org/10.35629/3795-09036371.

Abstract:
As enterprises increasingly adopt microservices-based architectures, ensuring resilience in Java applications has become a significant challenge. Open-source DevOps tools play a crucial role in automating deployment, scaling, and monitoring to enhance application resilience. These tools help organizations improve system reliability, minimize downtime, and respond to failures effectively. Kubernetes enables self-healing mechanisms, OpenShift provides enterprise-grade security, Docker ensures portability, Jenkins automates continuous integration, Prometheus enhances observability, and Helm simpl
32

Risco, Sebastián, Caterina Alarcón, Sergio Langarita, Miguel Caballer, and Germán Moltó. "Rescheduling Serverless Workloads Across the Cloud-to-Edge Continuum." Future Generation Computer Systems 153 (December 21, 2023): 457–66. https://doi.org/10.1016/j.future.2023.12.015.

Abstract:
Serverless computing was a breakthrough in Cloud computing due to its high elasticity capabilities and fine-grained pay-per-use model offered by the main public Cloud providers. Meanwhile, open-source serverless platforms supporting the FaaS (Function as a Service) model allow users to take advantage of many of their benefits while operating on the on-premises platforms of organizations. This opens the possibility to deploy and exploit them on the different layers of the cloud-to-edge continuum, either on IoT (Internet of Things) devices located at the Edge (i.e. next to data acquisition devic
33

Bag, Uday. "AI-Powered Claims Processing Transformation: Automation, Analysis, and Fraud Detection." International Journal of Advances in Engineering and Management 7, no. 4 (2025): 199–207. https://doi.org/10.35629/5252-0704199207.

Abstract:
The healthcare industry is undergoing a profound digital transformation driven by artificial intelligence and cloud-native architectures, particularly in claims processing, provider networks, and eligibility verification. As legacy on-premises systems struggle to manage increasing data volumes, evolving regulations, and demands for real-time automation, cloud-native solutions on major platforms like Azure, AWS, and Google Cloud offer scalable and secure alternatives. This article examines how microservices, Kubernetes, serverless computing, and API-driven integrations are revolutionizing healt
34

Muhammad Saqib. "Optimizing Spot Instance Reliability and Security Using Cloud-Native Data and Tools." Journal of Information Systems Engineering and Management 10, no. 14s (2025): 720–31. https://doi.org/10.52783/jisem.v10i14s.2387.

Abstract:
This paper presents "Cloudlab," a comprehensive, cloud-native laboratory designed to support network security research and training. Built on Google Cloud and adhering to GitOps methodologies, Cloudlab facilitates the creation, testing, and deployment of secure, containerized workloads using Kubernetes and serverless architectures. The lab integrates tools like Palo Alto Networks firewalls, Bridgecrew for "Security as Code," and automated GitHub workflows to establish a robust Continuous Integration/Continuous Machine Learning pipeline. By providing an adaptive and scalable environment, Cloudl
35

Pothen, Vivek Aby. "Strategic Azure Cloud Migration for Telecom: Best Practices and Emerging Trends." European Journal of Computer Science and Information Technology 13, no. 19 (2025): 79–92. https://doi.org/10.37745/ejcsit.2013/vol13n197992.

Abstract:
The migration of telecommunications infrastructure to cloud platforms, particularly Microsoft Azure, represents a transformative shift in how telecommunications providers manage and optimize their networks. This comprehensive article explores the imperatives driving cloud adoption in telecommunications, examining the substantial improvements in operational efficiency, cost reduction, and service reliability achieved through strategic migration initiatives. The article investigates hybrid cloud adoption strategies, the implementation of advanced Azure technologies including AI-powered analytics
36

Venkata, Baladari. "Monolith to Microservices: Challenges, Best Practices, and Future Perspectives." European Journal of Advances in Engineering and Technology 8, no. 8 (2021): 123–28. https://doi.org/10.5281/zenodo.15044455.

Abstract:
The adoption of microservices architecture has significantly impacted software development, offering benefits such as enhanced scalability, increased flexibility, and accelerated deployment, although it also brings about issues including intricate communication complexities, security vulnerabilities, and elevated operational burdens. This study examines the shift from monolithic to microservices architecture, focusing on significant obstacles and effective methods including API gateways, containerization, Continuous Integration and Continuous Deployment (CI/CD) pipelines, and event-driven arch
37

Bayya, Anil Kumar. "Leveraging Advanced Cloud Computing Paradigms to Revolutionize Enterprise Application Infrastructure." Asian Journal of Mathematics and Computer Research 32, no. 1 (2025): 133–54. https://doi.org/10.56557/ajomcor/2025/v32i19067.

Abstract:
Advanced cloud computing paradigms have significantly revolutionized the enterprise application landscape, providing organizations with the tools and flexibility to innovate and scale rapidly. By leveraging technologies such as serverless computing, containerization, edge computing, and multi-cloud strategies, enterprises can build applications that are not only scalable and agile but also cost-efficient and secure. These paradigms eliminate the need for extensive infrastructure management, allowing organizations to focus on core business objectives and accelerate time-to-market for new applic
38

Dhanorkar, Tejas, Sai Charan Ponnoju, and Shemeer Sulaiman Kunju. "Cloud-Native Wallet Fabric: Engineering Scalable, Multicurrency e-Wallet Platforms." Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023 6, no. 1 (2024): 766–76. https://doi.org/10.60087/jaigs.v6i1.368.

Abstract:
The rapid evolution of digital financial ecosystems demands e-wallet platforms that can seamlessly scale while supporting diverse currencies and high transaction volumes. Traditional e-wallet systems, often constrained by monolithic architectures, struggle to meet these requirements, particularly in global, multi-currency contexts. This paper introduces the Cloud-Native Wallet Fabric (CNWF), a novel architecture designed to address these challenges through cloud-native technologies. By leveraging microservices, containerization, Kubernetes orchestration, and serverless computing, CNWF ensures
39

Shrikant Thakare. "Scaling Cloud-Native Security: Defending Against DDoS attacks in Distributed Infrastructure." World Journal of Advanced Engineering Technology and Sciences 15, no. 3 (2025): 486–93. https://doi.org/10.30574/wjaets.2025.15.3.0954.

Abstract:
This technical article explores the evolution of cloud-native security strategies for defending against increasingly sophisticated Distributed Denial of Service (DDoS) attacks in modern distributed infrastructure environments. It examines how the fundamental principles of cloud-native architecture distribution, resilience, and elasticity provide inherent advantages in DDoS defense compared to traditional perimeter-based approaches. The article details a multi-layered defense blueprint incorporating auto-scaled rate-limiting layers, event-driven serverless defenses, service mesh integration, an
40

Sanne, Sri Harsha Vardhan. "Resolving Scalability and Performance Bottlenecks in AWS-Based Microservices Architectures." Journal of Scientific and Engineering Research 8, no. 10 (2021): 181–86. https://doi.org/10.5281/zenodo.11820350.

Abstract:
The increasing adoption of microservices architectures within AWS environments has brought to the forefront the critical challenges of scalability and performance bottlenecks. This review paper systematically examines these challenges, presenting a comprehensive analysis of the inherent limitations and inefficiencies that impede optimal performance. The study identifies key bottlenecks such as latency, resource contention, inefficient load balancing, and suboptimal service orchestration. By leveraging a thorough literature review, the paper explores state-of-the-art strategies and best practic
41

Menghnani, Mohit. "Modern Full Stack Development Practices for Scalable and Maintainable Cloud-Native Applications." International Journal of Innovative Science and Research Technology (IJISRT) 10, no. 2 (2025): 1206–16. https://doi.org/10.5281/zenodo.14959407.

Abstract:
The widespread acceptance of the cloud-native concept and the emergence of several specialized cloud-native apps have focused industry attention on the web stacks of cloud-native apps. The integration of cloud-native and full-stack development tools allows for the rapid and smooth deployment, scaling, and maintenance of web applications. Full-stack development in the cloud-native era is a hybrid of technologies and ways of working that brings about scalability, efficiency, and maintainability. Built-in tools like AWS Amplify, Google Firebase, and Heroku can be used for smooth deployment; s
42

Gaddam, Kishore Reddi. "Building Scalable Digital Payment Systems for Emerging Markets: Cloud and Microservices as Enablers." European Journal of Computer Science and Information Technology 13, no. 29 (2025): 65–80. https://doi.org/10.37745/ejcsit.2013/vol13n296580.

Abstract:
This article explores how cloud-native architectures and containerized microservices enable the development of scalable digital payment systems tailored to emerging markets. Financial inclusion remains a significant challenge in developing regions where traditional banking infrastructure fails to reach large segments of the population. Cloud-native approaches transform payment system economics by eliminating upfront capital requirements and enabling consumption-based pricing models crucial for serving previously excluded populations. Microservices architecture provides the modularity needed to
43

Preetham Kumar Dammalapati. "Advances in cloud-native microservices for system integration." World Journal of Advanced Research and Reviews 26, no. 3 (2025): 196–206. https://doi.org/10.30574/wjarr.2025.26.3.2165.

Abstract:
Cloud-native microservices have revolutionized system integration strategies across enterprise architectures, offering unprecedented advantages in scalability, resilience, and agility. This architectural paradigm decomposes monolithic applications into independently deployable, loosely coupled services that can be developed and managed separately. The transition to microservices enables more efficient development cycles through container-based deployments while allowing teams to work autonomously on distinct components. Modern integration patterns, including API-first design, event-driven comm
44

Krishna Rao Vemula. "Advancements in Cloud-Native Applications: Innovative Tools and Research Frontiers." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 11, no. 1 (2025): 974–81. https://doi.org/10.32628/cseit251112100.

Abstract:
This article explores the cutting-edge developments in cloud-native applications, focusing on innovative tools and research frontiers that are shaping the future of digital infrastructure. It examines the evolving capabilities of orchestration platforms, the integration of IoT and edge computing, advancements in serverless architectures, and the growing importance of AI-driven optimization and sustainable computing practices. The article delves into case studies across e-commerce, healthcare, and financial services sectors, illustrating how cloud-native technologies are being leveraged to solv
45

N. Savitha. "Real-Time Cloud Intrusion Detection with SpinalSAENet: A Sparse Autoencoder Approach with Focal Loss Optimization." Journal of Information Systems Engineering and Management 10, no. 42s (2025): 54–69. https://doi.org/10.52783/jisem.v10i42s.7853.

Abstract:
The swift growth of cloud computing has heightened cybersecurity vulnerabilities, demanding robust intrusion detection systems (IDS). Conventional IDS models face challenges, such as excessive false positives and limited flexibility. This study introduces Spinal Stacked AutoEncoder Net (SpinalSAENet), an innovative hybrid deep-learning-based IDS that merges SpinalNet and Deep Stacked AutoEncoders (DSAE) to enhance anomaly detection and data integrity verification. The system employs feature extraction and Chebyshev distance-based fusion to improve classification, while Principal Component Anal
46

N. Savitha and E. Saikiran. "SpinalSAENet: An Intelligent Intrusion Detection and Data Integrity Framework for Cloud Environments." Journal of Information Systems Engineering and Management 10, no. 18s (2025): 513–24. https://doi.org/10.52783/jisem.v10i18s.2939.

Abstract:
The swift growth of cloud computing has heightened cybersecurity vulnerabilities, demanding robust intrusion detection systems (IDS). Conventional IDS models face challenges, such as excessive false positives and limited flexibility. This study introduces Spinal Stacked AutoEncoder Net (SpinalSAENet), an innovative hybrid deep-learning-based IDS that merges SpinalNet and Deep Stacked AutoEncoders (DSAE) to enhance anomaly detection and data integrity verification. The system employs feature extraction and Chebyshev distance-based fusion to improve classification, while Principal Component Anal
47

Voruganti, Kiran Kumar. "Building a Self-Healing Infrastructure with Autonomous Remediation in Cloud DevOps." Journal of Scientific and Engineering Research 8, no. 11 (2021): 161–71. https://doi.org/10.5281/zenodo.12671442.

Abstract:
In the rapidly evolving landscape of cloud computing, building resilient and autonomous systems is crucial for ensuring continuous service availability and operational efficiency. This paper explores the design and implementation of a self-healing infrastructure with autonomous remediation within Cloud DevOps environments. By integrating advanced strategies such as dynamic resource allocation, machine learning-driven fault detection, real-time monitoring, and automated incident response, this research aims to enhance system reliability and minimize downtime. The study delves into the role of a
48

Berezin, Sergei. "Architecting Scalable AI-CRM Systems Design Patterns, Infrastructure, and Performance Optimization." International Journal of Engineering and Computer Science 14, no. 06 (2025): 27241–48. https://doi.org/10.18535/ijecs.v14i06.5132.

Abstract:
This article presents approaches for designing scalable AI-CRM systems capable of efficiently processing large volumes of data and delivering real-time analytics. Three primary architectural patterns—microservices, an event-driven architecture with CQRS, and data-processing pipelines—are examined, and their combined use is shown to enhance system flexibility and reliability. The proposed cloud-container infrastructure leverages Docker/Kubernetes, serverless functions, and managed services for queuing, storage, and MLOps, while a service mesh is employed to ensure security and observability. Op
49

Savitha, Raghunathan. "Resilience by Design: A Comprehensive Guide to Enhancing Resilience through Cloud Native Chaos Engineering." Journal of Scientific and Engineering Research 8, no. 8 (2021): 181–85. https://doi.org/10.5281/zenodo.11216267.

Abstract:
The necessity of chaos engineering in cloud native environments arises from the inherent complexities and dynamic nature of cloud computing. Organizations transitioning to microservices, containers, and serverless architectures encounter unprecedented scalability and flexibility. However, this evolution also increases system complexity and a greater potential for unpredictable failures. This whitepaper addresses this critical need by exploring how intentional fault injection can be a proactive tool for identifying vulnerabilities, enabling teams to address them before they lead to service degr
50

Millar, A. Paul, Olufemi Adeyemi, Vincent Garonne, et al. "Storage events: distributed users, federation and beyond." EPJ Web of Conferences 214 (2019): 04035. http://dx.doi.org/10.1051/epjconf/201921404035.

Abstract:
For federated storage to work well, some knowledge from each storage system must exist outside that system, regardless of the use case. This is needed to allow coordinated activity; e.g., executing analysis jobs on worker nodes with good accessibility to the data. Currently, this is achieved by clients notifying central services of activity; e.g., a client notifies a replica catalogue after an upload. Unfortunately, this forces end users to use bespoke clients. It also forces clients to wait for asynchronous activities to finish. dCache provides an alternative approach: storage events. In this