To see the other types of publications on this topic, follow the link: Cloud Orchestration Tools.

Journal articles on the topic 'Cloud Orchestration Tools'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Cloud Orchestration Tools.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Voruganti, Kiran Kumar. "Orchestrating Multi-Cloud Environments for Enhanced Flexibility and Resilience." Journal of Technology and Systems 6, no. 2 (2024): 9–25. http://dx.doi.org/10.47941/jts.1810.

Full text
Abstract:
Purpose: This paper examines the essential role of multi-cloud orchestration in navigating the complexities of the contemporary cloud computing landscape, aimed at optimizing the deployment and management of cloud resources across diverse environments.
 Methodology: Utilizing a systematic review of scholarly articles, industry reports, and case studies, including the Flexera 2021 State of the Cloud Report and insights from Gartner, alongside academic contributions from researchers like Jamshidi et al. and Garg et al., this study delves into the strategies and tools facilitating effective multi-cloud orchestration.
 Findings: The research highlights multi-cloud orchestration as a critical enabler for enhancing operational efficiency, resilience, and cost-effectiveness in cloud deployments. It emphasizes the strategic benefits of orchestrating a heterogeneous mix of cloud services, including public, private, and hybrid clouds, to meet the intricate demands of modern applications. The study underscores the importance of advanced orchestration tools in ensuring seamless operations, security, and compliance across multi-cloud architectures.
 Unique contribution to theory, policy and practice: By following the principles outlined in this paper, organizations can leverage multi-cloud orchestration to unlock the full potential of their cloud investments and achieve a well-orchestrated symphony of success.
APA, Harvard, Vancouver, ISO, and other styles
2

Irfan, Muhammad, Zhu Hong, Nueraimaiti Aimaier, and Zhu Guo Li. "SLA (Service Level Agreement) Driven Orchestration Based New Methodology for Cloud Computing Services." Advanced Materials Research 660 (February 2013): 196–201. http://dx.doi.org/10.4028/www.scientific.net/amr.660.196.

Full text
Abstract:
Cloud computing is not a revolution; it is an evolution of computer science and technology, advancing by leaps and bounds to bring together computing tools and technologies, and it remains one of the most active areas for research and for exploring the next generations of computer science. There are a number of cloud service providers (Amazon EC2, Rackspace Cloud, Terremark and Google Compute Engine), but enterprises and ordinary users still have a number of concerns about them. Many weaknesses, challenges and issues remain barriers for cloud service providers in delivering cloud services according to SLAs (Service Level Agreements). In particular, service provisioning according to SLAs, with maximum performance as specified in the SLA, is a core objective of every cloud service provider. We have identified these challenges and issues, and propose a new methodology, "SLA (Service Level Agreement) Driven Orchestration Based New Methodology for Cloud Computing Services". Currently, cloud service providers use orchestration fully or partially to automate service provisioning, but we aim to integrate and drive orchestration flows from SLAs. This would be a new approach to provisioning and delivering cloud services as per the SLA, satisfying QoS standards.
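The idea of deriving orchestration flows from SLA targets, as this abstract proposes, can be illustrated with a minimal sketch. All names here (`SLA`, `plan_replicas`) are hypothetical and not taken from the paper; this is only one plausible way to turn quantitative SLA targets into an orchestration decision.

```python
from dataclasses import dataclass

@dataclass
class SLA:
    """Quantitative targets extracted from a Service Level Agreement."""
    availability: float      # e.g. 0.999 = "three nines"
    max_response_ms: int     # response-time ceiling per request

def plan_replicas(sla: SLA, base_latency_ms: int,
                  per_replica_capacity: int, expected_rps: int) -> int:
    """Derive an orchestration decision (a replica count) from SLA targets.

    Enough replicas are planned to absorb the expected load, plus a
    redundancy margin that grows with the availability target.
    """
    # If the base latency already violates the SLA, scaling out will not
    # help; signal that explicitly instead of returning a bogus plan.
    if base_latency_ms > sla.max_response_ms:
        raise ValueError("SLA response-time target unattainable")
    # Replicas needed purely for throughput (ceiling division).
    needed = -(-expected_rps // per_replica_capacity)
    # Add redundancy: a three-nines target gets at least one spare.
    spares = 1 if sla.availability >= 0.999 else 0
    return needed + spares
```

A strict SLA thus yields a larger deployment than a lax one for the same expected load, which is the kind of SLA-to-orchestration mapping the authors argue for.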
APA, Harvard, Vancouver, ISO, and other styles
3

Uddesh, Piprewar, More Shubham, Lamsoge Vishal, Puramkar Balwesh, Dandhare Gayatri, and Aditya Turankar Prof. "Cloud Formation (IaC): Deploying a Containerized Application on Cloud." Recent Trends in Cloud Computing and Web Engineering 5, no. 2 (2023): 39–49. https://doi.org/10.5281/zenodo.7936767.

Full text
Abstract:
Infrastructure as Code (IaC) is a practice that automates the deployment and management of infrastructure resources using machine-readable files, which describe the desired state of the infrastructure. In this way, the infrastructure is treated as code and is versioned, tested, and deployed like any other software artifact. Cloud providers offer IaC tools that facilitate the deployment of resources in a scalable and reproducible manner.

Containers have become the preferred way to package and deploy applications due to their portability, isolation, and scalability. Container platforms such as Docker simplify the management of containerized applications by automating the deployment, scaling, and monitoring of containerized workloads.

In this context, deploying a containerized application on the cloud involves defining the infrastructure resources required to support the application, such as virtual machines, load balancers, and storage volumes, using IaC tools. Once the infrastructure is defined, the containerized application is deployed on a container platform such as Docker, which manages the containers and their dependencies. This process enables the deployment of applications in a scalable, fault-tolerant, and cost-effective manner, while reducing the time and effort required to manage the underlying infrastructure.

In summary, the use of IaC and container orchestration platforms has revolutionized the way applications are deployed and managed on the cloud. These practices enable developers to focus on writing code rather than managing infrastructure, while ensuring that the infrastructure is deployed in a scalable, reproducible, and cost-effective manner.
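The declarative model described in this abstract, where a machine-readable file states the desired infrastructure and tooling converges the actual state toward it, can be sketched in a few lines. This is an illustrative toy, not how CloudFormation itself works internally; real IaC tools perform this diff against live provider APIs.

```python
def reconcile(desired: dict, actual: dict) -> list:
    """Compute the actions needed to converge actual state to desired state.

    Both arguments map resource name -> properties, mimicking how an IaC
    tool diffs a template against what is currently deployed.
    """
    actions = []
    for name, props in desired.items():
        if name not in actual:
            actions.append(("create", name))       # declared but absent
        elif actual[name] != props:
            actions.append(("update", name))       # present but drifted
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))       # deployed but undeclared
    return actions
```

Because the plan is computed from the declared state rather than scripted imperatively, re-running it against an already-converged deployment yields an empty action list, which is what makes IaC deployments reproducible.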
APA, Harvard, Vancouver, ISO, and other styles
4

Bagnato, Alessandra, and Juan Cadavid. "Towards Model-Based System Engineering for Cyber-physical Systems in the MYRTUS Project." ACM SIGAda Ada Letters 44, no. 2 (2025): 66–70. https://doi.org/10.1145/3742939.3742950.

Full text
Abstract:
The MYRTUS project aims at unlocking a new living dimension of Cyber-Physical Systems (CPS) by integrating edge, fog and cloud computing platforms. This integration requires the reinvention of programming languages and tools to orchestrate collaborative, distributed and decentralised components. Additionally, components must be augmented with interface contracts covering both functional and non-functional properties. This paper describes the model-based approach that will be used during the project, including the key cloud standard TOSCA (Topology and Orchestration Specification for Cloud Applications), which will be used to describe cloud computing services and their components, as well as the orchestration process needed to manage them.
APA, Harvard, Vancouver, ISO, and other styles
5

Mthembu, Sabelo Justice, Ijeoma Noella Ezeji, and Matthew Adigun. "Orchestration Tools For Efficient Deployment of IoT Applications In Fog Computing: A Systematic Review." International Conference on Artificial Intelligence and its Applications 2023 (November 9, 2023): 146–51. http://dx.doi.org/10.59200/icarti.2023.021.

Full text
Abstract:
The Internet of Things (IoT) is a developing technology that enables devices to communicate without human interaction. IoT utilizes cloud computing services to collect and process data from IoT devices and to manage devices remotely. Cloud computing alone is not efficient enough to handle the fast stream of data produced by the IoT; in fog computing, scaling IoT applications to meet peak demand becomes easier and highly automated. Containers are mostly used as the virtualization solution for IoT in fog computing, enabling the execution of anything from small microservices to large applications. However, the rise of many lightweight containers has resulted in new application architectures, fundamentally changing how applications are deployed and virtualized. In response, container orchestration tools have been proposed, allowing users to coordinate and manage containers. However, container orchestration tools must meet the requirements of IoT applications and the constraints imposed on fog nodes. This paper presents a systematic literature review on the selection of orchestration tools for the efficient deployment of IoT applications in fog computing. Moreover, the performance of IoT applications must be assessed using different metrics. This paper aims to propose potential research directions to address identified gaps in the selection of orchestration tools.
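Selecting an orchestration tool against IoT requirements and fog-node constraints, as surveyed here, is often framed as a weighted multi-criteria ranking. The sketch below is a generic illustration with hypothetical metric names, not a method taken from the paper.

```python
def score_tools(tools: dict, weights: dict) -> list:
    """Rank orchestration tools by a weighted sum of normalized metrics.

    tools:   tool name -> {metric: value in [0, 1], higher is better}
    weights: metric -> importance (should sum to 1).
    Returns tool names, best first.
    """
    ranked = sorted(
        tools.items(),
        key=lambda kv: sum(weights[m] * kv[1].get(m, 0.0) for m in weights),
        reverse=True,
    )
    return [name for name, _ in ranked]
```

With constrained fog nodes, a resource-footprint metric would typically be weighted heavily, so a lightweight distribution can outrank a full-featured one despite lower scalability.

```python
tools = {
    "full-featured": {"scalability": 1.0, "small_footprint": 0.2},
    "lightweight":   {"scalability": 0.7, "small_footprint": 0.9},
}
print(score_tools(tools, {"scalability": 0.5, "small_footprint": 0.5}))
```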
APA, Harvard, Vancouver, ISO, and other styles
6

Vaño, Rafael, Ignacio Lacalle, Piotr Sowiński, Raúl S-Julián, and Carlos E. Palau. "Cloud-Native Workload Orchestration at the Edge: A Deployment Review and Future Directions." Sensors 23, no. 4 (2023): 2215. http://dx.doi.org/10.3390/s23042215.

Full text
Abstract:
Cloud-native computing principles such as virtualization and orchestration are key to transferring to the promising paradigm of edge computing. The challenges of containerization, operating models and the scarce availability of established tools make a thorough review indispensable. Therefore, the authors have described the practical methods and tools found in the literature as well as in current community-led development projects, and have thoroughly laid out the future directions of the field. Container virtualization and its orchestration through Kubernetes have dominated the cloud computing domain, while major efforts have recently been recorded focused on adapting these technologies to the edge. Such initiatives have addressed either the slimming down of container engines and the development of specifically tailored operating systems, or the development of smaller K8s distributions and edge-focused adaptations (such as KubeEdge). Finally, new workload virtualization approaches, such as WebAssembly modules, together with the joint orchestration of these heterogeneous workloads, seem to be the topics to pay attention to in the short to medium term.
APA, Harvard, Vancouver, ISO, and other styles
7

Chelliah, Pethuru Raj, and Chellammal Surianarayanan. "Multi-Cloud Adoption Challenges for the Cloud-Native Era." International Journal of Cloud Applications and Computing 11, no. 2 (2021): 67–96. http://dx.doi.org/10.4018/ijcac.2021040105.

Full text
Abstract:
With the ready availability of appropriate technologies and tools for crafting hybrid clouds, the move towards employing multiple clouds for hosting and running various business workloads is garnering attention. The concept of cloud-native computing is gaining prominence with the rapid proliferation of microservices and containers. The growing stability and maturity of container orchestration platforms also contribute greatly to the cloud-native era. This paper makes the following contributions: 1) It describes the key motivations for the multi-cloud concept and its implementations. 2) It highlights various key drivers of the multi-cloud paradigm. 3) It presents a range of challenges that are likely to occur while setting up multi-cloud. 4) It elaborates proven and potential solution approaches to address these challenges. The technology-inspired and tool-enabled solution approaches significantly simplify and speed up the adoption of the fast-emerging and evolving multi-cloud concept in the cloud-native era.
APA, Harvard, Vancouver, ISO, and other styles
8

Petrenko, Sergei. "Self-Healing Cloud Computing." Voprosy kiberbezopasnosti, no. 1(41) (2021): 80–89. http://dx.doi.org/10.21681/2311-3456-2021-1-80-89.

Full text
Abstract:
Purpose of the article: development of tools for building a cyber-resilient private cloud. The relevance of building a cyber-resilient private cloud is confirmed by the growth of the market for relevant solutions. According to PRNewswire, the market for private cloud solutions will reach 183 billion USD by 2025, with a compound annual growth rate (CAGR) of 29.4% over the forecast period. According to the analytical company Grand View Research, the global market for private cloud solutions was estimated at 30.24 billion US dollars in 2018, and the CAGR is expected to be 29.6% over the period from 2019 to 2025. Research methods: a set of open-source solutions is used that applies advanced cloud technologies, including distributed data processing models and methods, container orchestration technologies, a software-defined data storage architecture, and a universal database. Results: tools for building a cyber-resilient private cloud were developed. A possible approach is considered for building a cyber-resilient private cloud based on well-known and proprietary models and methods of artificial immune systems (AIS), as well as technologies for distributed data processing, container orchestration, and others. In addition, a unique centralized fault-tolerant logging and monitoring subsystem has been developed for the described platform, as well as an innovative cybersecurity subsystem based on original technologies.
APA, Harvard, Vancouver, ISO, and other styles
9

Ajay Kumar Panchalingala. "Recent advances in AWS cloud services." World Journal of Advanced Engineering Technology and Sciences 15, no. 3 (2025): 259–67. https://doi.org/10.30574/wjaets.2025.15.3.0918.

Full text
Abstract:
The rapid evolution of Amazon Web Services (AWS) cloud technologies continues to reshape enterprise computing environments across global industries. This technical review examines recent innovations in AWS services that are transforming organizational capabilities and competitive positioning. Beginning with computational performance advancements through AWS Graviton processors and specialized High-Performance Computing offerings, the article explores how these ARM-based architectures deliver enhanced efficiency across diverse workloads. The expanding AI and Machine Learning ecosystem, particularly through AWS Bedrock's foundation model integration and SageMaker's democratized ML tools, enables organizations to implement sophisticated intelligence capabilities without extensive expertise. Serverless computing developments, including Lambda Function URLs and SnapStart features, alongside visual workflow orchestration through Step Functions, have simplified application development while improving performance characteristics. Container orchestration enhancements facilitate consistent hybrid deployments across on-premises and cloud environments. Looking forward, AWS continues strategic investments in quantum computing initiatives, sustainability practices, and multi-cloud compatibility tools that position the platform at the forefront of cloud innovation. These advancements collectively enable organizations to achieve greater operational agility, cost efficiency, and innovation capacity while addressing evolving challenges in security, compliance, and technical workforce development.
APA, Harvard, Vancouver, ISO, and other styles
10

Girish, Ganachari. "Designing Scalable and Cost-Efficient Cloud-Agnostic Big Data Platforms: A Comparative Study." Journal of Scientific and Engineering Research 8, no. 9 (2021): 315–21. https://doi.org/10.5281/zenodo.13758538.

Full text
Abstract:
This paper explores big data architecture and the benefits of implementing cloud-agnostic solutions, particularly with regard to scalability, cost, and flexibility. The paper focuses on advancements in orchestration tools, edge computing, and AI to overcome challenges in areas such as administration, security, and latency. These advancements will improve capabilities related to data collection and organization across various cloud contexts.
APA, Harvard, Vancouver, ISO, and other styles
11

Spjuth, Ola, Marco Capuccini, Matteo Carone, et al. "Approaches for containerized scientific workflows in cloud environments with applications in life science." F1000Research 10 (June 29, 2021): 513. http://dx.doi.org/10.12688/f1000research.53698.1.

Full text
Abstract:
Containers are gaining popularity in life science research as they provide a solution for encompassing dependencies of provisioned tools, simplify software installations for end users and offer a form of isolation between processes. Scientific workflows are ideal for chaining containers into data analysis pipelines to aid in creating reproducible analyses. In this article, we review a number of approaches to using containers as implemented in the workflow tools Nextflow, Galaxy, Pachyderm, Argo, Kubeflow, Luigi and SciPipe, when deployed in cloud environments. A particular focus is placed on the workflow tool’s interaction with the Kubernetes container orchestration framework.
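The core pattern shared by the workflow tools reviewed here (Nextflow, Galaxy, Argo, Luigi, SciPipe, etc.) is chaining containerized steps into a dependency graph and executing them in order. The sketch below is a generic topological-order executor, not the API of any of these tools; `run` stands in for launching a step's container.

```python
from collections import deque

def run_pipeline(steps: dict, run) -> list:
    """Execute workflow steps in dependency order (Kahn's algorithm).

    steps: step name -> list of prerequisite step names.
    run:   callable invoked once per step, standing in for launching
           that step's container.
    Returns the order in which steps ran; raises on cyclic definitions.
    """
    indegree = {s: len(deps) for s, deps in steps.items()}
    dependents = {s: [] for s in steps}
    for s, deps in steps.items():
        for d in deps:
            dependents[d].append(s)
    ready = deque(sorted(s for s, n in indegree.items() if n == 0))
    order = []
    while ready:
        s = ready.popleft()
        run(s)                      # e.g. schedule a pod on Kubernetes
        order.append(s)
        for t in dependents[s]:
            indegree[t] -= 1
            if indegree[t] == 0:
                ready.append(t)
    if len(order) != len(steps):
        raise ValueError("cycle in workflow definition")
    return order
```

In a Kubernetes deployment, `run` would submit each step as a pod or job and block on its completion, which is exactly the interaction between workflow engine and orchestrator that the review focuses on.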
APA, Harvard, Vancouver, ISO, and other styles
12

Sina Ahmadi. "Container security in the cloud: Hardening orchestration platforms against emerging threats." World Journal of Advanced Research and Reviews 4, no. 1 (2019): 064–74. https://doi.org/10.30574/wjarr.2019.4.1.0077.

Full text
Abstract:
Container proliferation and orchestration platforms like Kubernetes have accelerated the deployment and scalability of applications in the cloud. However, these advances come at a cost: both legacy and new environments are vulnerable to lateral-movement attacks, misconfiguration, unpatched container images, and inadequate access control. This paper explores comprehensive strategies to enhance container security, focusing on key areas: network security policies, runtime security, access management, supply chain security, and orchestration platform security. The proposed framework emphasizes network segmentation, real-time anomaly detection, robust role-based access control (RBAC), automated vulnerability assessments, and optimized network configurations. In a pilot implementation, the framework reduced security incidents by 35%, improved compliance by 25%, and boosted overall operational efficiency by 20%. These results confirm the viability of a balanced security model for defending workloads on cloud orchestration platforms against unauthorized access and data manipulation. This work emphasizes the need for new approaches to protecting today's highly dynamic containerized environments.
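The RBAC hardening this paper emphasizes boils down to a deny-by-default permission check: a subject gets access only through an explicit role binding. The sketch below is a minimal illustration of that model, not Kubernetes' actual RBAC implementation.

```python
def rbac_allows(bindings: dict, roles: dict, user: str,
                verb: str, resource: str) -> bool:
    """Minimal role-based access control check.

    bindings: user -> set of role names granted to that user
    roles:    role name -> set of (verb, resource) permissions
    Deny by default: access is granted only via an explicit binding,
    never implicitly.
    """
    for role in bindings.get(user, ()):     # no binding at all -> deny
        if (verb, resource) in roles.get(role, ()):
            return True
    return False
```

The important property, mirrored in real orchestrator RBAC, is that forgetting to bind a user fails closed rather than open.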
APA, Harvard, Vancouver, ISO, and other styles
13

Nikhil, Bhagat. "Optimizing Performance, Cost-Efficiency, and Flexibility through Hybrid Multi-Cloud Architectures." Journal of Scientific and Engineering Research 11, no. 4 (2024): 372–79. https://doi.org/10.5281/zenodo.14273093.

Full text
Abstract:
Cloud computing is the foundation of every modern company that aims to be scalable, adaptable and economical. Hybrid multi-cloud environments, which combine private clouds, public clouds, and multiple cloud providers, represent the next generation of scaled cloud infrastructure. Hybrid cloud architecture lets organizations reap the security and control benefits of a private cloud while also taking advantage of the scalability and cost efficiency of a public cloud. Meanwhile, multi-cloud models avoid vendor lock-in, provide risk mitigation, and enable organizations to choose the best options from multiple providers. Together, hybrid and multi-cloud solutions offer an integrated cloud architecture that maximizes usage, performance, and resilience. The paper delves into the advantages of hybrid and multi-cloud environments, including agility, cost efficiency and increased security. It also touches on design considerations such as workload assignment, interoperability, security, and vendor selection, and provides guidelines for implementing hybrid multi-cloud environments, where orchestration tools and automation play a vital role in facilitating operations. Even though hybrid multi-cloud architectures provide greater flexibility, they must be strategically designed, implemented and managed. By modernizing these environments, businesses can enhance performance, profitability, and agility, better preparing them to thrive in today's competitive market.
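The workload-assignment trade-off discussed in this abstract (control and compliance on the private side, cost and elasticity on the public side) can be reduced to a small placement policy. This is an illustrative toy with hypothetical field names, not the paper's method.

```python
def place_workload(workload: dict, providers: dict) -> str:
    """Choose the cheapest provider that satisfies a workload's constraints.

    workload:  {"region": str, "sensitive": bool}
    providers: name -> {"regions": set, "private": bool, "cost": float}
    Sensitive workloads are restricted to private infrastructure; the
    rest compete purely on cost, reflecting how hybrid multi-cloud
    designs balance control against cost efficiency.
    """
    candidates = [
        (p["cost"], name) for name, p in providers.items()
        if workload["region"] in p["regions"]
        and (not workload["sensitive"] or p["private"])
    ]
    if not candidates:
        raise ValueError("no provider satisfies the constraints")
    return min(candidates)[1]   # cheapest feasible provider
```

A sensitive workload lands on the private cloud even when a public provider is cheaper, while a non-sensitive one simply follows the price.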
APA, Harvard, Vancouver, ISO, and other styles
14

Muruganantham Angamuthu. "Optimizing Multi-Cloud Business Intelligence: A Framework for Balancing Cost, Performance, and Security." Journal of Computer Science and Technology Studies 7, no. 4 (2025): 427–37. https://doi.org/10.32996/jcsts.2025.7.4.51.

Full text
Abstract:
This article presents a comprehensive framework for optimizing multi-cloud Business Intelligence environments through the balanced integration of cost management, performance engineering, and security governance. As organizations increasingly adopt multi-cloud strategies to leverage specialized capabilities across providers, they face complex challenges in orchestrating distributed cloud resources while maintaining operational coherence. The article examines how strategic workload distribution across multiple cloud platforms creates opportunities for cost optimization through resource allocation efficiency, reserved capacity management, and automated cost monitoring. Performance engineering across cloud boundaries is explored through specialized compute placement, storage optimization, network connectivity enhancement, and dynamic workload routing based on provider strengths. Security governance considerations address the expanded attack surface through unified identity management, standardized encryption, consistent compliance controls, and AI-driven threat detection spanning all cloud environments. Integration frameworks are identified as the foundational element that binds these pillars together, with abstraction layers, metadata management, API standardization, and orchestration tools creating a cohesive operational ecosystem. The framework demonstrates how organizations can achieve superior business intelligence outcomes while avoiding vendor lock-in, reducing operational costs, enhancing analytical performance, and maintaining robust security postures. Through this balanced approach, enterprises can transform multi-cloud complexity from an operational burden into a strategic advantage that delivers enhanced analytical agility and competitive differentiation in data-intensive business environments.
APA, Harvard, Vancouver, ISO, and other styles
15

Sainyaara, Subiya. "A Survey on Integration of Applications - Tools, Types & It’s Challenges." December 2023 2, no. 2 (2023): 405–13. http://dx.doi.org/10.36548/rrrj.2023.2.011.

Full text
Abstract:
Application integration services allow a variety of apps within a company to share processes and business data, enabling the transformation and orchestration of the data necessary for business activities by integrating a number of on-premises and cloud apps. The software that integrates and improves data transfer between two different software applications is called application integration. Businesses frequently use this software to build a bridge between an older on-premises application and a new cloud-based one, allowing the two systems to coexist. This article describes application integration, its tools and types, and the actions that may be taken to implement application integration strategies.
APA, Harvard, Vancouver, ISO, and other styles
16

Calatrava Arroyo, Amanda, Marcos Ramos Montes, and J. Damian Segrelles Quilis. "A Pilot Experience with Software Programming Environments as a Service for Teaching Activities." Applied Sciences 11, no. 1 (2020): 341. http://dx.doi.org/10.3390/app11010341.

Full text
Abstract:
Software programming is one of the key abilities for the development of Computational Thinking (CT) skills in Science, Technology, Engineering and Mathematics (STEM). However, specific software tools that emulate realistic scenarios are required for effective teaching. Unfortunately, these tools have some limitations in educational environments due to the need for adequate configuration and orchestration, which usually imposes an unaffordable work overload on teachers and is inaccessible to students outside the laboratories. To mitigate these limitations, we rely on cloud solutions that automate the orchestration and configuration of software tools on top of cloud computing infrastructures. The paper presents ACTaaS, a cloud-based educational resource that deploys and orchestrates a complete, realistic software programming environment. ACTaaS provides a simple, fast and automatic way to set up a professional integrated environment without overloading the teacher, and it provides ubiquitous access to the environment. The solution has been tested with a pilot group of 28 students. Currently, there is no tool like ACTaaS that allows such a high degree of automation for the deployment of software production environments focused on educational activities while supporting a wide range of cloud providers. Preliminary results from the pilot group suggest its effectiveness, given that a class environment can be set up in minutes without overloading teachers, while providing ubiquitous access to students. In addition, the first student opinions about the experience were highly positive.
APA, Harvard, Vancouver, ISO, and other styles
17

Domaschka, Jörg, Frank Griesinger, Daniel Baur, and Alessandro Rossini. "Beyond Mere Application Structure Thoughts on the Future of Cloud Orchestration Tools." Procedia Computer Science 68 (2015): 151–62. http://dx.doi.org/10.1016/j.procs.2015.09.231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Researcher. "ADVANCED DISASTER RECOVERY STRATEGIES FOR HYBRID CLOUD ENVIRONMENTS: A COMPREHENSIVE TECHNICAL GUIDE." International Journal of Computer Engineering and Technology (IJCET) 15, no. 6 (2024): 1147–59. https://doi.org/10.5281/zenodo.14329902.

Full text
Abstract:
This comprehensive technical analysis explores advanced disaster recovery (DR) strategies specifically designed for hybrid cloud environments, addressing the evolving challenges organizations face in maintaining business continuity. The article examines the fundamental components of hybrid cloud DR solutions, including infrastructure requirements, orchestration tools, and data replication technologies. Through detailed case studies across the financial services, healthcare, and manufacturing sectors, the article demonstrates the critical importance of integrated DR approaches in modern enterprise environments. The analysis covers emerging technologies such as AI-driven orchestration, containerization, and edge computing, while providing practical insights into implementation strategies and best practices. Special attention is given to compliance requirements, cost management considerations, and performance optimization techniques. The article also explores future trends and technological developments that are reshaping DR strategies, offering organizations a roadmap for building resilient disaster recovery solutions that align with their business objectives and technological capabilities.
APA, Harvard, Vancouver, ISO, and other styles
19

Kanthed, Surbhi. "Automation Tools for DevOps: Leveraging Ansible, Terraform, and Beyond." International Scientific Journal of Engineering and Management 04, no. 04 (2025): 1–7. https://doi.org/10.55041/isjem01286.

Full text
Abstract:
DevOps has rapidly become a cornerstone for modern software development, providing faster release cycles and improved collaboration between development and operations teams. Central to DevOps practices is automation, which addresses the complexity of provisioning and configuring diverse computing environments. This white paper explores state-of-the-art automation tools, with a focus on Ansible for configuration management and Terraform for infrastructure as code (IaC). An extensive review of recent scholarly articles, conference papers, and real-world case studies reveals the unique strengths and limitations of these tools, including Ansible’s agentless architecture and Terraform’s robust declarative approach. In examining multi-cloud and hybrid deployments, the paper identifies best practices in modular code design, version control, automated testing, and policy-as-code for security and compliance. Empirical case studies demonstrate the performance, scalability, and maintainability benefits organizations gain from integrating Ansible and Terraform, while also highlighting challenges related to skill gaps, complex orchestration, and state management. Finally, the paper discusses emerging trends, including AI-driven infrastructure provisioning, serverless computing at the edge, and unified frameworks that incorporate multiple automation tools. By synthesizing these findings, this paper contributes a comprehensive roadmap for adopting and optimizing DevOps automation. It underscores how strategic integration of Ansible, Terraform, and complementary solutions not only reduces operational overhead but also enhances reliability, security, and agility. The outcomes empower practitioners and researchers to address current limitations, seize emerging opportunities, and drive further innovation in the rapidly evolving landscape of DevOps automation. 
Keywords: DevOps, Automation, Ansible, Terraform, Infrastructure as Code (IaC), Configuration Management, Orchestration, CI/CD, Policy-as-Code, Multi-Cloud, Hybrid Cloud, Security, Compliance, State Management, AI-Driven Automation, Serverless, Observability, GitOps, Modular Design, Monitoring, Scalability.
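A property this paper attributes to tools like Ansible, idempotency, means a configuration step can be re-run safely: it reports a change the first time and no change thereafter. The sketch below illustrates that contract with a hypothetical 'line in file' step modeled on file contents in memory; it is not Ansible's actual module code.

```python
def ensure_line(contents: list, line: str) -> tuple:
    """Idempotent 'ensure this line exists' step, Ansible-module style.

    contents: current file contents as a list of lines.
    Returns (new_contents, changed). Running it twice never reports a
    second change, which is the property that makes configuration
    management safe to re-run across a fleet.
    """
    if line in contents:
        return contents, False          # already converged: no-op
    return contents + [line], True      # converge and report the change
```

Terraform's declarative plan/apply cycle gives the analogous guarantee at the infrastructure level: applying an already-satisfied plan changes nothing.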
APA, Harvard, Vancouver, ISO, and other styles
20

Kumari, Swati, Vatsal Tulshyan, and Hitesh Tewari. "Cyber Security on the Edge: Efficient Enabling of Machine Learning on IoT Devices." Information 15, no. 3 (2024): 126. http://dx.doi.org/10.3390/info15030126.

Full text
Abstract:
Due to rising cyber threats, IoT devices’ security vulnerabilities are expanding. However, these devices cannot run complicated security algorithms locally due to hardware restrictions. Data must be transferred to cloud nodes for processing, giving attackers an entry point. This research investigates distributed computing on the edge, using AI-enabled IoT devices and container orchestration tools to process data in real time at the network edge. The purpose is to identify and mitigate DDoS attacks while minimizing CPU usage to improve security. It compares typical IoT devices with and without AI-enabled chips and container orchestration, and assesses their performance in running machine learning models with different cluster settings. The proposed architecture aims to empower IoT devices to process data locally, minimizing reliance on cloud transmission and bolstering security in IoT environments. The results reflect the architectural update: with the addition of AI-enabled IoT devices and container orchestration, there is a 60% difference between the new architecture and the traditional architecture, in which only Raspberry Pis were used.
APA, Harvard, Vancouver, ISO, and other styles
21

IJCSERD. "ORCHESTRATING CONTAINERIZED APPLICATIONS WITH KUBERNETES: A PRACTICAL IMPLEMENTATION GUIDE." International Journal of Computer Science and Engineering Research and Development (IJCSERD) 14, no. 1 (2024): 44–53. https://doi.org/10.5281/zenodo.13912266.

Full text
Abstract:
Containerization has revolutionized how applications are built, delivered, and run. As the orchestration layer for containerized applications, Kubernetes is important to understand in the context of its responsibilities. This paper gives the reader a one-stop manual on using the Kubernetes platform for containerized application orchestration. We discuss the issues that organizations may encounter, how Kubernetes solves them, and detail its uses and effects. Furthermore, we consider what is achievable with Kubernetes in cloud environments. Ultimately, this guide provides organizations with the tools and information to use Kubernetes to enhance container management.
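The central responsibility of an orchestrator like Kubernetes is the reconciliation loop: compare observed state with desired state and act to converge them. The sketch below illustrates one pass of such a loop for a replica count; it is a pedagogical toy, not Kubernetes' controller code, and `spawn`/`kill` are hypothetical callbacks standing in for the container runtime.

```python
def reconcile_replicas(desired: int, current: list, spawn, kill) -> list:
    """One pass of a Kubernetes-style control loop for a replica set.

    desired: target replica count (the declared spec).
    current: ids of replicas observed running (the status).
    spawn/kill: callbacks that create or remove one replica.
    Returns the converged list of replica ids.
    """
    current = list(current)                 # do not mutate the caller's view
    next_id = max(current, default=0) + 1
    while len(current) < desired:           # scale up toward the spec
        spawn(next_id)
        current.append(next_id)
        next_id += 1
    while len(current) > desired:           # scale down toward the spec
        kill(current[-1])
        current.pop()
    return current
```

Running the loop repeatedly is harmless once converged, which is why the same mechanism also self-heals: a crashed replica simply shows up as missing observed state on the next pass.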
APA, Harvard, Vancouver, ISO, and other styles
22

Vivek, Prasanna Prabu. "CI/CD in a Multi-Cloud World: Challenges and Solutions." International Journal of Leading Research Publication 6, no. 3 (2025): 1–8. https://doi.org/10.5281/zenodo.15154849.

Full text
Abstract:
Continuous Integration and Continuous Deployment (CI/CD) have become critical components of modern software development lifecycles, enabling rapid delivery, iterative development, and operational efficiency. As enterprises increasingly adopt multi-cloud strategies (leveraging multiple public and private cloud platforms such as AWS, Azure, Google Cloud, and on-premise infrastructure), the complexity of implementing and managing CI/CD pipelines across heterogeneous environments grows significantly. These challenges include inconsistent tooling, integration friction, data sovereignty, latency variations, vendor lock-in risks, and a lack of unified governance. Multi-cloud CI/CD introduces complications in orchestrating deployments, securing environments, synchronizing configurations, and maintaining compliance. Development teams must navigate varied APIs, access controls, and platform capabilities while ensuring consistent testing, observability, and rollback mechanisms across all cloud environments. However, emerging technologies and best practices offer robust solutions. Cross-platform pipeline orchestration tools, infrastructure as code (IaC), containerization, service meshes, and AI-driven observability frameworks are transforming how CI/CD is executed across multi-cloud architectures. This white paper explores the strategic challenges and viable solutions for deploying resilient CI/CD pipelines in multi-cloud settings. It outlines foundational principles, compares tooling ecosystems, examines real-world implementations, and presents recommendations for aligning multi-cloud CI/CD with organizational agility and compliance goals. By adopting these best practices, enterprises can achieve scalable, reliable, and automated software delivery workflows that span cloud boundaries while preserving developer productivity and platform neutrality.
APA, Harvard, Vancouver, ISO, and other styles
23

Ogbuefi, Ejielo, Jeffrey Chidera Ogeawuchi, Bright Chibunna Ubamadu, Oluwademilade Aderemi Agboola, and Oyinomomo-emi Emmanuel Akpe. "Systematic Review of Integration Techniques in Hybrid Cloud Infrastructure Projects." International Journal of Advanced Multidisciplinary Research and Studies 3, no. 6 (2023): 1634–43. https://doi.org/10.62225/2583049x.2023.3.6.4323.

Full text
Abstract:
The growing adoption of hybrid cloud infrastructures combining public and private cloud environments has introduced complex integration challenges for organizations striving to optimize performance, scalability, and data security. This systematic review aims to evaluate and synthesize the current landscape of integration techniques used in hybrid cloud infrastructure projects, with a focus on interoperability, orchestration, data synchronization, and security compliance. It examines peer-reviewed literature, technical white papers, and industry reports published between 2015 and 2024 to identify dominant patterns, frameworks, and tools employed in hybrid cloud integration. Findings indicate that integration strategies in hybrid cloud environments often rely on middleware platforms, API gateways, container orchestration (e.g., Kubernetes), and Infrastructure as Code (IaC) tools. Middleware and APIs serve as critical enablers for seamless communication between heterogeneous systems, while containerization ensures portability across cloud boundaries. Moreover, service mesh architectures and microservices-based designs are increasingly adopted to enhance scalability and observability. Security and compliance integration techniques, including identity federation, encryption standards, and policy-as-code frameworks, are also frequently cited to address regulatory requirements. The review highlights a growing interest in using AI-driven automation to manage integration complexity, especially for real-time monitoring and anomaly detection. Despite significant advances, challenges remain in achieving seamless hybrid cloud integration, particularly in areas related to latency, data governance, and vendor lock-in. The review concludes with a proposed research agenda and best practices for selecting integration techniques based on organizational needs, application architecture, and compliance considerations.
By providing a consolidated view of current practices and emerging trends, this review offers valuable insights for IT professionals, cloud architects, and decision-makers involved in hybrid cloud projects.
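The policy-as-code technique cited in this review can be sketched minimally: compliance rules expressed as code and evaluated against a resource description. The rule names and resource fields below are invented for illustration; production systems would typically use a dedicated engine such as Open Policy Agent.

```python
# Each policy is a (name, predicate) pair; a resource is compliant with a
# policy when the predicate returns True for its description.
POLICIES = [
    ("encryption-at-rest", lambda r: r.get("encrypted") is True),
    ("no-public-buckets", lambda r: r.get("public_access") is not True),
]

def evaluate(resource):
    """Returns the list of violated policy names (empty means compliant)."""
    return [name for name, check in POLICIES if not check(resource)]

print(evaluate({"encrypted": True, "public_access": True}))
# -> ['no-public-buckets']
```

Because the rules are plain code, they can be versioned, reviewed, and run in CI alongside the infrastructure definitions they govern, which is the core appeal of the approach.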
APA, Harvard, Vancouver, ISO, and other styles
24

Sridhar Nelloru. "Architecting Multi-Cloud Immutable Infrastructure Workflows: Beyond Traditional CI/CD." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 11, no. 1 (2025): 247–54. https://doi.org/10.32628/cseit25111221.

Full text
Abstract:
This article explores the advanced realm of multi-cloud immutable infrastructure workflows, presenting a comprehensive analysis of their implementation, benefits, and future directions. It delves into the foundational principles of immutable infrastructure and their application in multi-cloud environments, highlighting the integration with Infrastructure as Code and policy-as-code frameworks. The discussion extends to advanced patterns and workflows, including strategies for unifying disparate cloud APIs, leveraging orchestration tools, and standardizing security measures across heterogeneous environments. The article examines how these approaches enhance both stability and agility in software deployment, covering dynamic scaling policies, automated rollback mechanisms, and strategies for maintaining consistency. It also addresses the operational benefits and challenges associated with these workflows, providing insights into faster service deployment, reduced operational overhead, and proactive governance management. Looking ahead, the article forecasts the impact of emerging technologies such as artificial intelligence and machine learning on multi-cloud orchestration and infrastructure management. By offering a holistic view of current practices and future trends, this article serves as a valuable resource for organizations seeking to optimize their cloud infrastructure strategies and stay ahead in the rapidly evolving landscape of software deployment and management.
APA, Harvard, Vancouver, ISO, and other styles
25

BuchiReddy Karri, Sairamakrishna, Chandra Mouli Penugonda, Srujana Karanam, Mohd Tajammul, Srinivasarao Rayankula, and Prasad Vankadara. "Enhancing Cloud-Native Applications: A Comparative Study of Java-To-Go Micro Services Migration." International Transactions on Electrical Engineering and Computer Science 4, no. 1 (2025): 1–12. https://doi.org/10.62760/iteecs.4.1.2025.127.

Full text
Abstract:
Moving microservices from Java to Go creates significant opportunities for performance, scalability, and resource efficiency. Nonetheless, such a move brings challenges related to infrastructure changes, deployment strategies, observability, and security. This paper examines the elements most critical to a Java-to-Go migration, including hosting environments, containerization, and orchestration. Go's lightweight runtime enables highly cost-effective deployments as organizations lean towards cloud-native architectures and Kubernetes-based orchestration [24]. The transition likewise demands adapting observability practices, since Go applications rely on different tooling than Java applications. Security concerns, including dependency management, API protection, and vulnerability scanning, remain highly pertinent to keeping the application intact. With proper planning, however, organizations can overcome these challenges and leverage the advantages Go provides, making it an attractive option for microservices development. Future studies should look into automated migration tools, standardized best practices, and security refinements to ease this transition.
APA, Harvard, Vancouver, ISO, and other styles
26

Pochu, Sandeep, Sai Rama Krishna Nersu, and Srikanth Reddy Kathram. "Multi-Cloud DevOps Strategies: A Framework for Agility and Cost Optimization." Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023 7, no. 01 (2024): 104–19. https://doi.org/10.60087/jaigs.v7i01.301.

Full text
Abstract:
This paper investigates the challenges and benefits of adopting multi-cloud strategies within DevOps environments. It highlights automation tools such as Terraform and Kubernetes to balance agility, performance, and cost, providing actionable insights for enterprises navigating complex cloud ecosystems. In today's dynamic IT landscape, multi-cloud environments have become the cornerstone of enterprise strategies, enabling organizations to leverage the unique strengths of various cloud providers. This paper presents a comprehensive framework for implementing Multi-Cloud DevOps strategies aimed at enhancing operational agility and cost optimization. The proposed framework integrates best practices for seamless deployment, monitoring, and scaling of applications across diverse cloud platforms. By employing tools for orchestration, automation, and continuous integration/continuous delivery (CI/CD), the framework ensures rapid adaptability to changing business needs while maintaining cost efficiency. This study underscores the importance of aligning DevOps principles with multi-cloud architectures, thereby empowering businesses to maximize resource utilization and achieve competitive advantages in a rapidly evolving market.
APA, Harvard, Vancouver, ISO, and other styles
27

Santosh, Pashikanti. "Optimizing Multi-Cloud Strategies: Best Practices for AWS, GCP, Azure, and Oracle Cloud Ecosystems." INTERNATIONAL JOURNAL OF INNOVATIVE RESEARCH AND CREATIVE TECHNOLOGY 7, no. 2 (2021): 1–4. https://doi.org/10.5281/zenodo.14631871.

Full text
Abstract:
In the commercial industry, from retail to logistics, financial services, and manufacturing, businesses are under pressure to deliver seamless digital experiences while ensuring operational efficiency. Multi-cloud strategies, which combine the capabilities of leading cloud providers such as AWS, GCP, Azure, and Oracle Cloud, have emerged as a solution to meet these needs. However, optimizing a multi-cloud ecosystem requires a clear understanding of technical practices and tools, from container orchestration to advanced cost management and compliance frameworks. This white paper explores the technical terminologies and methodologies that commercial organizations can leverage to maximize their cloud investments. By focusing on interoperability, cost control, security, and advanced analytics, businesses can transform their operations while maintaining compliance and reducing complexity.
APA, Harvard, Vancouver, ISO, and other styles
28

Shah, Samarth, and Ujjawal Jain. "Comparison of Container Orchestration Engines." Integrated Journal for Research in Arts and Humanities 4, no. 6 (2024): 306–22. https://doi.org/10.55544/ijrah.4.6.24.

Full text
Abstract:
Container orchestration engines have become essential for managing containerized applications in modern cloud-native architectures. These tools automate the deployment, scaling, networking, and management of containers, enabling seamless application lifecycle management. With a growing number of orchestration solutions available, understanding their features, strengths, and limitations is crucial for selecting the right platform. This paper presents a comparative analysis of prominent container orchestration engines, highlighting their core functionalities, architectural design, and suitability for different use cases. Key areas of comparison include resource allocation, fault tolerance, scalability, and integration with DevOps workflows. The study explores how these platforms address challenges such as dynamic workload management, service discovery, and inter-container communication while maintaining high availability and system resilience. The analysis reveals that while some platforms excel in simplicity and ease of deployment, others provide advanced features tailored to complex, large-scale systems. Additionally, open-source orchestration tools are evaluated against proprietary solutions in terms of community support, customization capabilities, and total cost of ownership. This comparative study aims to assist organizations and developers in identifying the most suitable container orchestration engine based on their operational needs and technical constraints. By understanding the trade-offs and unique features of each platform, stakeholders can make informed decisions that optimize performance, reduce operational overhead, and support efficient application delivery in a rapidly evolving technology landscape. This abstract underscores the importance of aligning platform capabilities with organizational goals for successful containerized application management.
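The kind of platform selection this paper supports can be sketched as a weighted decision matrix; the engine names, criteria weights, and scores below are placeholders for illustration, not the paper's findings.

```python
# Weighted criteria for comparing orchestration engines (weights sum to 1.0).
CRITERIA = {"scalability": 0.4, "ease_of_deployment": 0.3, "community": 0.3}

def rank_engines(scores):
    """scores: {engine: {criterion: 0-10}} -> engine names, best first."""
    totals = {
        engine: sum(CRITERIA[c] * v for c, v in per_criterion.items())
        for engine, per_criterion in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

scores = {
    "engine_a": {"scalability": 9, "ease_of_deployment": 5, "community": 9},
    "engine_b": {"scalability": 5, "ease_of_deployment": 9, "community": 6},
}
print(rank_engines(scores))  # -> ['engine_a', 'engine_b']
```

Adjusting the weights to an organization's actual priorities (e.g., favoring ease of deployment for small teams) changes the ranking, which is precisely the trade-off analysis the paper advocates.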
APA, Harvard, Vancouver, ISO, and other styles
29

Syed, Ziaurrahman Ashraf. "Building Automated BI Platforms: From Data Ingestion to Visualization." INTERNATIONAL JOURNAL OF INNOVATIVE RESEARCH AND CREATIVE TECHNOLOGY 5, no. 2 (2019): 1–7. https://doi.org/10.5281/zenodo.13949502.

Full text
Abstract:
In an era where data drives decision-making, Business Intelligence (BI) platforms have evolved from static reports to automated systems delivering real-time insights. This paper outlines a comprehensive approach to building automated BI platforms, starting from data ingestion to the final visualization layer. The discussion covers data pipeline orchestration, real-time processing, cloud integration, and AI-enhanced visual analytics. Key technologies such as ETL frameworks, cloud data warehouses, and modern visualization tools are explored with technical illustrations to provide a deeper understanding of the architecture.
APA, Harvard, Vancouver, ISO, and other styles
30

Krishna Rao Vemula. "Advancements in Cloud-Native Applications: Innovative Tools and Research Frontiers." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 11, no. 1 (2025): 974–81. https://doi.org/10.32628/cseit251112100.

Full text
Abstract:
This article explores the cutting-edge developments in cloud-native applications, focusing on innovative tools and research frontiers that are shaping the future of digital infrastructure. It examines the evolving capabilities of orchestration platforms, the integration of IoT and edge computing, advancements in serverless architectures, and the growing importance of AI-driven optimization and sustainable computing practices. The article delves into case studies across e-commerce, healthcare, and financial services sectors, illustrating how cloud-native technologies are being leveraged to solve complex business challenges and drive innovation. Key tools such as Kubernetes, observability platforms, and microservices development frameworks are discussed, highlighting their role in enabling efficient, scalable, and resilient cloud-native applications. The article also considers the broader impact of these technologies on digital service delivery, emphasizing the benefits for businesses in terms of agility, user experience, and competitive advantage. Finally, it underscores the critical importance of collaboration among academia, industry, and open-source communities in driving future advancements in the cloud-native ecosystem, painting a picture of a dynamic and rapidly evolving technological landscape with far-reaching implications for the future of computing.
APA, Harvard, Vancouver, ISO, and other styles
31

Čilić, Ivan, Petar Krivić, Ivana Podnar Žarko, and Mario Kušek. "Performance Evaluation of Container Orchestration Tools in Edge Computing Environments." Sensors 23, no. 8 (2023): 4008. http://dx.doi.org/10.3390/s23084008.

Full text
Abstract:
Edge computing is a viable approach to improve service delivery and performance parameters by extending the cloud with resources placed closer to a given service environment. Numerous research papers in the literature have already identified the key benefits of this architectural approach. However, most results are based on simulations performed in closed network environments. This paper aims to analyze the existing implementations of processing environments containing edge resources, taking into account the targeted quality of service (QoS) parameters and the utilized orchestration platforms. Based on this analysis, the most popular edge orchestration platforms are evaluated in terms of their workflow that allows the inclusion of remote devices in the processing environment and their ability to adapt the logic of the scheduling algorithms to improve the targeted QoS attributes. The experimental results compare the performance of the platforms and show the current state of their readiness for edge computing in real network and execution environments. These findings suggest that Kubernetes and its distributions have the potential to provide effective scheduling across the resources on the network’s edge. However, some challenges still have to be addressed to completely adapt these tools for such a dynamic and distributed execution environment as edge computing implies.
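The latency-aware scheduling logic evaluated here can be sketched minimally, assuming invented node records rather than any platform's real API: choose the node that can satisfy the resource request with the lowest measured latency.

```python
# Hypothetical node records: free CPU in millicores and measured latency to
# the service environment. Real schedulers consume richer signals.

def schedule(pod_cpu_m, nodes):
    """nodes: [{'name', 'free_cpu_m', 'latency_ms'}]; returns a name or None."""
    feasible = [n for n in nodes if n["free_cpu_m"] >= pod_cpu_m]
    if not feasible:
        return None  # no node can host the workload
    return min(feasible, key=lambda n: n["latency_ms"])["name"]

nodes = [
    {"name": "cloud-1", "free_cpu_m": 4000, "latency_ms": 80},
    {"name": "edge-1", "free_cpu_m": 500, "latency_ms": 5},
    {"name": "edge-2", "free_cpu_m": 100, "latency_ms": 3},
]
print(schedule(250, nodes))  # -> edge-1: closest node with enough CPU
```

In Kubernetes terms this corresponds to replacing or extending the default scheduler's scoring phase with a latency-oriented score, one of the adaptations the paper identifies as necessary for edge environments.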
APA, Harvard, Vancouver, ISO, and other styles
32

Sinde, Sai Priya, Bhavika Thakkalapally, Meghamala Ramidi, and Sowmya Veeramalla. "Continuous Integration and Deployment Automation in AWS Cloud Infrastructure." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (2022): 1305–9. http://dx.doi.org/10.22214/ijraset.2022.44106.

Full text
Abstract:
The primary objective of this project is to determine how an application can be deployed automatically. Automating deployment requires a container orchestration platform, and Kubernetes is now a widely used container orchestration tool; its robustness lets users deploy, scale, and manage containerized applications. Continuous Integration and Continuous Delivery (CI/CD) is required in modern development, with a focus on building and executing tests on every commit so that the test environment is always up-to-date. From the integration and testing phases through the delivery and deployment phases, CI/CD brings automation and continuous monitoring of programs throughout their lifecycle. GitHub Actions is one of the CI/CD technologies available on the market. For teams using GitHub as their source code management tool, GitHub Actions is a natural continuous integration and delivery solution, as it is supplied by GitHub and requires no additional setup. As a result, incorporating these tools speeds up the deployment of an application. Keywords: GitHub Actions, Amazon Web Services, Continuous Integration and Continuous Deployment, Kubernetes, Workflows.
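GitHub Actions workflows are defined in YAML, so as a language-neutral illustration of the flow this abstract describes (stages run on every commit, halting at the first failure), here is a tiny hypothetical pipeline runner; the stage names are invented.

```python
# Minimal CI/CD stage runner: execute stages in order, stop at first failure,
# and keep a log so later steps (e.g., notifications) can inspect the outcome.

def run_pipeline(stages):
    """stages: [(name, callable -> bool)]; returns (succeeded, log)."""
    log = []
    for name, step in stages:
        ok = step()
        log.append((name, "pass" if ok else "fail"))
        if not ok:
            return False, log  # halt: later stages never run
    return True, log

stages = [
    ("build", lambda: True),
    ("test", lambda: True),
    ("deploy-to-eks", lambda: True),
]
print(run_pipeline(stages))
```

In an actual GitHub Actions workflow each stage would be a job or step, and the halt-on-failure behavior is what the platform provides by default between dependent jobs.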
APA, Harvard, Vancouver, ISO, and other styles
33

Mylsamy, Sekar, and Rupesh Kumar Mishra. "Cloud-Native Development and Deployment." International Journal of Research in Modern Engineering & Emerging Technology 13, no. 4 (2025): 279–89. https://doi.org/10.63345/ijrmeet.org.v13.i4.17.

Full text
Abstract:
Cloud-native development and deployment has revolutionized the modern software engineering landscape by embracing flexible, scalable, and resilient architectures. This approach leverages containerization, microservices, and orchestration frameworks to enable rapid iteration and continuous delivery. By decoupling application components, organizations achieve greater agility in responding to evolving market demands and technological advancements. Cloud-native methodologies prioritize the use of distributed systems that inherently scale, withstand failures, and facilitate seamless integration with other services. Modern development teams benefit from automated pipelines that streamline testing, deployment, and monitoring, ensuring that applications remain robust and secure across diverse cloud environments. This paradigm shift is further bolstered by an ecosystem of open-source tools and cloud platforms, which democratize access to sophisticated infrastructure capabilities without significant upfront investments. Cloud-native environments encourage innovation by reducing time-to-market and fostering a culture of continuous improvement. Emphasis on microservices allows developers to isolate issues rapidly and implement incremental enhancements without compromising the entire system. Additionally, orchestration solutions, such as Kubernetes, automate complex tasks and optimize resource management, thereby enhancing overall system performance and reliability. In summary, cloud-native development and deployment represent a transformative movement that empowers organizations to build, deploy, and maintain applications with unprecedented efficiency and resilience. This innovative approach not only streamlines the software development lifecycle but also supports dynamic scalability and improved fault tolerance, ultimately setting a new standard for digital transformation in a rapidly evolving technological landscape. 
By integrating cloud-native principles, businesses can achieve operational excellence and foster sustainable growth in today’s competitive environment. This evolution drives innovation.
APA, Harvard, Vancouver, ISO, and other styles
34

Nagaraju, Islavath. "Optimizing Hybrid Cloud Environments: A DevOps Approach to Managing Multi-Cloud Infrastructure." European Journal of Advances in Engineering and Technology 8, no. 3 (2021): 87–91. https://doi.org/10.5281/zenodo.13837443.

Full text
Abstract:
Organizations face a challenging task as hybrid cloud setups become more widely used: managing and optimizing resources across several cloud platforms. Hybrid and multi-cloud systems provide flexibility, scalability, and cost-effectiveness, but also raise challenges in security, monitoring, and operational consistency. A DevOps approach can greatly simplify the management of hybrid cloud infrastructures by combining automation, continuous delivery (CD), infrastructure as code (IaC), and reliable monitoring tools. This article investigates how multi-cloud infrastructures can be optimized through DevOps, guaranteeing smooth orchestration, lower downtime, and improved scalability. It also looks at how DevOps helps teams collaborate better, secures data in hybrid environments, and gives businesses the flexibility they need to adapt to changing business needs. The paper illustrates the efficacy of DevOps in optimizing hybrid cloud settings using real-world use cases, allowing companies to get the most out of their cloud technology investments while upholding operational excellence.
APA, Harvard, Vancouver, ISO, and other styles
35

Vissarapu, Srikanth. "Generative AI in Cloud-Native Development: Automating Code, Configs, and Deployment." European Journal of Computer Science and Information Technology 13, no. 38 (2025): 145–56. https://doi.org/10.37745/ejcsit.2013/vol13n38145156.

Full text
Abstract:
Generative AI is transforming cloud-native development through sophisticated automation capabilities across the software engineering lifecycle. By leveraging large language models and AI-powered tools, organizations can accelerate infrastructure provisioning, optimize application configurations, and enhance deployment reliability. This article explores how AI technologies are revolutionizing code generation, configuration management, and deployment orchestration in cloud environments. The integration of natural language processing, code understanding, and pattern recognition capabilities enables context-aware automation that reduces manual effort while improving system quality. Through examination of implementation patterns across financial services, e-commerce, healthcare, and telecommunications sectors, the article demonstrates how AI-powered cloud development delivers tangible business value through enhanced operational efficiency, accelerated innovation cycles, and improved system resilience.
APA, Harvard, Vancouver, ISO, and other styles
36

Sandobalín, Julio, and Carlos Iñiguez-Jarrín. "Modeling Cloud Infrastructure Provisioning: A Software-as-a-Service Approach." Revista Politécnica 52, no. 2 (2023): 87–98. http://dx.doi.org/10.33333/rp.vol52n2.09.

Full text
Abstract:
Provisioning means making an infrastructure element, such as a server or network device, ready for use. DevOps community leverages the Infrastructure as Code (IaC) approach to supply tools for cloud infrastructure provision. However, each provisioning tool has its scripting language, and managing different tools for several cloud providers is time-consuming and error-prone. In previous work, we presented a model-driven infrastructure provisioning tool called ARGON, which leverages the IaC approach using Model-Driven Engineering. ARGON provides a modeling language to specify cloud infrastructure resources and generates scripts to support cloud infrastructure provisioning orchestration. Since ARGON runs in the Eclipse Desktop IDE, we propose to migrate from an ARGON Desktop to an ARGON Cloud as a Software-as-a-Service approach. On the one hand, we developed a domain-specific modeling language using JavaScript Frameworks. On the other hand, we used a Model-to-Text transformation engine through a REST web service to generate scripts. Finally, we carried out an example by modeling infrastructure resources for Amazon Web Services and then generating a script for the Ansible tool.
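ARGON's Model-to-Text step can be illustrated with a toy transformation: an abstract infrastructure model (a plain dict here) rendered into an Ansible-style playbook string. The model schema and template are invented for this sketch and do not reproduce ARGON's actual metamodel or generator.

```python
# Toy Model-to-Text transformation: render each VM in the abstract model
# through a fixed playbook template. Real M2T engines use proper template
# languages; string formatting is a stand-in for illustration.

PLAYBOOK_TEMPLATE = """- hosts: localhost
  tasks:
    - name: Provision {name}
      amazon.aws.ec2_instance:
        name: {name}
        instance_type: {instance_type}
        image_id: {image_id}
"""

def model_to_text(model):
    """model: {'vms': [{'name', 'instance_type', 'image_id'}]} -> playbook."""
    return "".join(PLAYBOOK_TEMPLATE.format(**vm) for vm in model["vms"])

model = {"vms": [{"name": "web-1", "instance_type": "t3.micro",
                  "image_id": "ami-123456"}]}
print(model_to_text(model))
```

The design point, which ARGON makes at full scale, is that the cloud-provider-specific syntax lives entirely in the template, so the same abstract model can be regenerated for a different provisioning tool by swapping templates.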
APA, Harvard, Vancouver, ISO, and other styles
37

Chandrasehar, Amreth. "ML Powered Container Management Platform: Revolutionizing Digital Transformation through Containers and Observability." Journal of Artificial Intelligence & Cloud Computing 1, no. 1 (2022): 1–3. http://dx.doi.org/10.47363/jaicc/2023(1)130.

Full text
Abstract:
As companies adopt digital transformation, cloud-native applications become a critical part of their architecture and roadmap. Enterprise applications and tools developed using cloud-native architecture are containerized and deployed on container orchestration platforms. Containers have revolutionized application deployment by simplifying the management, scaling, and operation of workloads deployed on container platforms. However, platform operators face many issues, such as complexity in managing large-scale environments, security, networking, storage, observability, and cost. This paper discusses how to build a container management platform that uses monitoring data to implement AI and ML models, aiding the organization's digital transformation journey.
APA, Harvard, Vancouver, ISO, and other styles
38

Omoniyi David Olufemi. "AI-enhanced predictive maintenance systems for critical infrastructure: Cloud-native architectures approach." World Journal of Advanced Engineering Technology and Sciences 13, no. 2 (2024): 229–57. http://dx.doi.org/10.30574/wjaets.2024.13.2.0552.

Full text
Abstract:
Critical infrastructure (CI), such as power grids, transportation systems, and telecommunications networks, is becoming increasingly complex, requiring sophisticated maintenance strategies and procedures to guarantee optimal performance and system durability. This paper examines the transformational potential of AI-driven predictive maintenance systems, highlighting their ability to prevent system failures, minimize downtime, and enhance resource efficiency. Integrating machine learning algorithms with real-time data analytics allows predictive maintenance frameworks to accurately foresee equipment failures, facilitating timely interventions that reduce the risk of catastrophic infrastructure breakdowns. This study primarily examines the development of cloud-native architectures, which include containers, microservices, and orchestration tools like Kubernetes, to facilitate the scalability, flexibility, and resilience required for contemporary CI maintenance systems. These designs facilitate the seamless integration of predictive maintenance solutions across geographically dispersed infrastructure, enabling effective administration of extensive datasets produced by Internet of Things (IoT) sensors, operational logs, and edge computing nodes. The document examines the essential function of intelligent data orchestration in facilitating the prompt gathering, processing, and analysis of operational data, which is vital for AI models to provide precise predictions. The amalgamation of AI-driven predictive maintenance with 5G and forthcoming 6G networks is poised to transform real-time system monitoring, diminishing latency and enhancing decision-making efficacy. Utilizing AI and cloud-native technologies substantially enhances system reliability, cost-effectiveness, and comprehensive infrastructure optimization. 
This article thoroughly analyses how AI, cloud-native platforms, and intelligent data orchestration may be utilized to tackle the changing maintenance issues of critical infrastructure by examining real-world case studies from sectors like power grids, telecommunications, and transportation. Integrating AI, cloud computing, and IoT in predictive maintenance improves system reliability and prepares critical infrastructure for future autonomous management and optimization developments. The study finishes by discussing new trends, such as the integration of digital twins and the synergies between AI and cloud-native solutions, which will enhance predictive maintenance capabilities.
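The failure-prediction idea above can be reduced to its simplest form for illustration: flag a sensor stream whose recent readings drift beyond k standard deviations of a historical baseline. Real deployments would use trained ML models over IoT telemetry; this threshold rule is only a stand-in.

```python
from statistics import mean, stdev

# Simplified predictive-maintenance check: compare the recent window's mean
# against the historical baseline; a drift beyond k sigma triggers a ticket.

def needs_maintenance(history, recent, k=3.0):
    """history: baseline readings; recent: latest window; True if drifted."""
    mu, sigma = mean(history), stdev(history)
    return abs(mean(recent) - mu) > k * sigma

baseline = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.1]
print(needs_maintenance(baseline, [12.5, 12.8, 12.6]))  # large drift -> True
```

In the cloud-native architectures the article describes, a check like this would run as a containerized service per asset class, fed by the IoT ingestion pipeline and scaled out by the orchestrator as the fleet grows.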
APA, Harvard, Vancouver, ISO, and other styles
39

D. Jayadurga and A. Chandrabose. "Expanding the quantity of virtual machines utilized within an open-source cloud infrastructure." Scientific Temper 15, spl-1 (2024): 314–20. https://doi.org/10.58414/scientifictemper.2024.15.spl.37.

Full text
Abstract:
As cloud computing continues to evolve, the efficient management and scalability of virtual machines (VMs) have become pivotal for maximizing performance and resource utilization, particularly within open-source cloud infrastructures. This literature review investigates existing approaches and methodologies focused on expanding the number of VMs in open-source cloud environments. Key topics include the impact of VM scaling on resource allocation, load balancing, and energy efficiency, as well as the role of orchestration tools and hypervisor optimization in handling large-scale VM deployments. Furthermore, the review assesses the challenges related to VM density, network latency, and system reliability alongside emerging strategies for enhancing VM elasticity through containerization, microservices, and distributed computing models. This study aims to provide a comprehensive understanding of current trends, innovations, and limitations in VM expansion, offering insights into the future of scalable virtual infrastructures in open-source cloud systems.
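The VM elasticity theme of this review can be sketched with a simple utilization-driven scaling rule; the target utilization and bounds below are hypothetical values, not drawn from the reviewed studies.

```python
import math

# Illustrative elasticity rule: pick the VM count that brings projected
# average utilization back toward `target`, clamped to configured bounds.

def desired_vm_count(current_vms, avg_utilization, target=0.6,
                     min_vms=1, max_vms=50):
    """Return the VM count needed so utilization approaches `target`."""
    needed = math.ceil(current_vms * avg_utilization / target)
    return max(min_vms, min(max_vms, needed))

print(desired_vm_count(current_vms=4, avg_utilization=0.9))  # -> 6
```

Orchestration tools in open-source clouds typically evaluate a rule of this shape on a monitoring interval; the interesting research questions the review surveys (load balancing, energy efficiency, VM density) amount to choosing better inputs and targets for it.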
APA, Harvard, Vancouver, ISO, and other styles
40

Spiga, Daniele, Enol Fernandez, Vincenzo Spinoso, et al. "The DODAS Experience on the EGI Federated Cloud." EPJ Web of Conferences 245 (2020): 07033. http://dx.doi.org/10.1051/epjconf/202024507033.

Full text
Abstract:
The EGI Cloud Compute service offers a multi-cloud IaaS federation that brings together research clouds as a scalable computing platform for research, accessible with OpenID Connect Federated Identity. The federation is not limited to single sign-on; it also introduces features to facilitate the portability of applications across providers: i) a common VM image catalogue with VM image replication to ensure these images will be available at providers whenever needed; ii) a GraphQL information discovery API to understand the capacities and capabilities available at each provider; and iii) integration with orchestration tools (such as Infrastructure Manager) to abstract the federation and facilitate using heterogeneous providers. EGI also monitors the correct functioning of every provider and collects usage information across all the infrastructure. DODAS (Dynamic On Demand Analysis Service) is an open-source Platform-as-a-Service tool which allows software applications to be deployed over heterogeneous and hybrid clouds. DODAS is one of the so-called Thematic Services of the EOSC-hub project; it instantiates on-demand container-based clusters, offering a high level of abstraction that allows users to exploit distributed cloud infrastructures with very limited knowledge of the underlying technologies. This work presents a comprehensive overview of DODAS integration with the EGI Cloud Federation, reporting the experience of the integration with the CMS Experiment submission infrastructure system.
APA, Harvard, Vancouver, ISO, and other styles
41

Vijaya Kumar Katta. "Leveraging AWS cloud native services for scalable application architectures." World Journal of Advanced Research and Reviews 26, no. 2 (2025): 2108–20. https://doi.org/10.30574/wjarr.2025.26.2.1853.

Full text
Abstract:
AWS cloud-native services enable organizations to build scalable and resilient applications in today's transformed application development landscape. AWS has pioneered technologies that have become cornerstones of modern application architecture, offering comprehensive tools for implementing sophisticated solutions. The document examines serverless computing paradigms through AWS Lambda and API Gateway, highlighting their evolution, features, and best practices for implementation. It delves into container orchestration with Amazon ECS and EKS, comparing their capabilities and introducing Fargate as a serverless container execution option. Purpose-built database services including DynamoDB, Aurora Serverless, and ElastiCache are discussed alongside storage solutions like S3, EFS, and FSx, with emphasis on appropriate data access patterns and optimization techniques. Infrastructure automation through CloudFormation and CDK is explored, alongside the continuous integration and deployment pipelines that form the foundation of modern software development practices. Finally, the observability and monitoring tools essential for operating cloud-native systems effectively are examined, completing a comprehensive guide to leveraging AWS services for scalable application architectures.
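The Lambda plus API Gateway pairing described above follows a simple contract: API Gateway hands the function an event dictionary in its proxy-integration format, and the function returns a status code, headers, and a JSON body. A minimal sketch, invoked locally with a synthetic event (the query parameter name is a hypothetical example):

```python
import json

# Minimal sketch of the AWS Lambda + API Gateway proxy-integration pattern:
# the event dict mimics what API Gateway delivers, and the return value uses
# the proxy response shape. No AWS account is needed to run this locally.

def lambda_handler(event, context):
    """Echo-style handler using the API Gateway proxy event shape."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a synthetic event:
resp = lambda_handler({"queryStringParameters": {"name": "cloud"}}, None)
print(resp["statusCode"], resp["body"])  # prints: 200 {"message": "hello, cloud"}
```

Deployed behind API Gateway, the same handler would receive real HTTP requests; locally, passing a hand-built event dict is a common unit-testing technique for serverless code.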
APA, Harvard, Vancouver, ISO, and other styles
42

Mohna, Hosne Ara, Tonmoy Barua, Mohammad Mohiuddin, and Md Mostafizur Rahman. "AI-READY DATA ENGINEERING PIPELINES: A REVIEW OF MEDALLION ARCHITECTURE AND CLOUD-BASED INTEGRATION MODELS." American Journal of Scholarly Research and Innovation 01, no. 01 (2022): 319–50. https://doi.org/10.63125/51kxtf08.

Full text
Abstract:
This systematic review investigates AI-ready data engineering pipelines by analyzing 106 studies published between 2010 and 2022, focusing on Medallion Architecture, cloud-native integration models, metadata management, and lakehouse infrastructure. Following PRISMA guidelines, sources were retrieved from IEEE Xplore, Scopus, Web of Science, ScienceDirect, and Google Scholar. The review examines key architectural strategies, integration patterns, and governance mechanisms that support scalable and explainable AI workflows. Medallion Architecture was discussed in 42 studies, highlighting its tiered bronze-silver-gold design that supports modular transformations and data traceability. Case studies demonstrated reduced redundancy, enhanced reproducibility, and compatibility with MLOps practices, making it well-suited for use cases in fintech, retail, and predictive maintenance. Cloud-native tools such as AWS Glue, Azure Data Factory, and GCP Dataflow appeared in 58 articles. These platforms support real-time orchestration, autoscaling, and serverless execution. Studies reported a 30% reduction in deployment time when pipelines leveraged containerization, low-code orchestration, and cloud-native storage systems. Multi-cloud and hybrid models were noted for addressing data sovereignty, latency, and vendor lock-in concerns. Metadata and data lineage were central to 39 studies, which emphasized the importance of schema versioning, transformation tracking, and audit readiness. Tools like Apache Atlas, Amundsen, and Microsoft Purview were shown to enhance model explainability and reproducibility, reducing audit time and enabling ethical AI deployment. Thirty-six studies focused on lakehouse platforms such as Delta Lake and Apache Hudi. These systems combined the scalability of data lakes with the reliability of warehouses, enabling schema-on-read, real-time feature updates, and versioned data snapshots across training and serving pipelines. 
However, 31 studies noted challenges including metadata inconsistency in multi-region setups, storage overhead from versioning, and organizational gaps in MLOps responsibilities. These findings underscore the need for integrated governance, standardized roles, and cross-functional collaboration.
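The tiered bronze-silver-gold design that 42 of the reviewed studies discuss can be illustrated with a toy pipeline: raw records are kept as-is (bronze), cleaned and deduplicated (silver), then aggregated into a business-ready metric (gold). The record fields and cleaning rules below are hypothetical:

```python
# Illustrative bronze -> silver -> gold flow in the Medallion style. Bronze
# rows arrive untouched; silver cleans, normalises types, and deduplicates;
# gold aggregates for consumption. Field names are hypothetical.

def to_silver(bronze_rows):
    """Clean: drop malformed rows, normalise types, deduplicate by id."""
    seen, silver = set(), []
    for row in bronze_rows:
        if row.get("id") is None or row.get("amount") is None:
            continue  # quarantine malformed input instead of failing
        if row["id"] in seen:
            continue  # keep the first occurrence only
        seen.add(row["id"])
        silver.append({"id": row["id"], "amount": float(row["amount"])})
    return silver

def to_gold(silver_rows):
    """Aggregate: a single business-ready metric per run."""
    return {"row_count": len(silver_rows),
            "total_amount": sum(r["amount"] for r in silver_rows)}

bronze = [{"id": 1, "amount": "10.5"}, {"id": 1, "amount": "10.5"},
          {"id": 2, "amount": None}, {"id": 3, "amount": "4.0"}]
print(to_gold(to_silver(bronze)))  # prints: {'row_count': 2, 'total_amount': 14.5}
```

In a production lakehouse each tier would be a persisted, versioned table (e.g. in Delta Lake or Apache Hudi) rather than an in-memory list, which is what enables the traceability and audit readiness the review highlights.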
APA, Harvard, Vancouver, ISO, and other styles
43

Wettinger, Johannes, Tobias Binz, Uwe Breitenbücher, Oliver Kopp, and Frank Leymann. "Streamlining Cloud Management Automation by Unifying the Invocation of Scripts and Services Based on TOSCA." International Journal of Organizational and Collective Intelligence 4, no. 2 (2014): 45–63. http://dx.doi.org/10.4018/ijoci.2014040103.

Full text
Abstract:
Today, there is a huge variety of script-centric approaches, APIs, and tools available to implement automated provisioning, deployment, and management of applications in the Cloud. Automating all these aspects is key to reducing costs. However, most of these approaches are script-centric and provide proprietary solutions employing different invocation mechanisms, interfaces, and state models. Moreover, most Cloud providers offer proprietary APIs for provisioning and management purposes. Consequently, it is hard to create deployment and management plans that integrate several of these approaches. The goal of our work is an approach for unifying the invocation of scripts and services without handling each proprietary interface separately. A prototype realizes the presented approach in a standards-based manner using the Topology and Orchestration Specification for Cloud Applications (TOSCA).
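The paper's unification idea, a single invocation interface in front of heterogeneous script- and service-based management operations, can be sketched as an adapter registry. The artifact types, operations, and adapter signatures below are illustrative and not the actual TOSCA runtime API:

```python
# Sketch of unified invocation: callers use one invoke() entry point, and
# per-mechanism adapters (shell scripts, REST services, ...) hide each
# proprietary interface. Artifact types and operations are hypothetical.

class UnifiedInvoker:
    def __init__(self):
        self._adapters = {}

    def register(self, artifact_type, adapter):
        """Map an artifact type (e.g. 'shell', 'rest') to a callable adapter."""
        self._adapters[artifact_type] = adapter

    def invoke(self, artifact_type, operation, **params):
        """Single entry point, regardless of the underlying mechanism."""
        try:
            adapter = self._adapters[artifact_type]
        except KeyError:
            raise ValueError(f"no adapter for artifact type {artifact_type!r}")
        return adapter(operation, params)

invoker = UnifiedInvoker()
# Stub adapters; real ones would exec a script or issue an HTTP call.
invoker.register("shell", lambda op, p: f"ran script {op} with {p}")
invoker.register("rest", lambda op, p: f"called service {op} with {p}")

print(invoker.invoke("shell", "install.sh", port=8080))
print(invoker.invoke("rest", "deployVM", flavor="m1.small"))
```

The benefit mirrors the paper's argument: deployment plans call `invoke()` uniformly, and adding support for a new mechanism means registering one adapter rather than rewriting every plan.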
APA, Harvard, Vancouver, ISO, and other styles
44

Šatkauskas, Nerijus, and Algimantas Venčkauskas. "Multi-Agent Dynamic Fog Service Placement Approach." Future Internet 16, no. 7 (2024): 248. http://dx.doi.org/10.3390/fi16070248.

Full text
Abstract:
Fog computing was proposed as a paradigm more than a decade ago to address shortcomings of Cloud Computing: long transmission distances, growing data flows, data loss, latency, and energy consumption motivate providing services at the edge of the network. However, fog devices are mobile and heterogeneous; their resources can be limited and their availability can change constantly, so service placement must be optimized to meet QoS requirements. We propose a service placement orchestrator that functions as a multi-agent system. Fog computing services are represented by agents that can both work independently and cooperate. Service placement is carried out by a two-stage optimization method. Our service placement orchestrator is distributed, services are discovered dynamically, resources can be monitored, and communication messages among fog nodes can be signed and encrypted, addressing a known weakness of multi-agent systems: the lack of monitoring tools and security.
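A two-stage placement method of the kind proposed above can be sketched as a feasibility filter followed by an optimisation step. The node attributes, resource units, and latency objective are all hypothetical simplifications of what a real orchestrator would consider:

```python
# Toy two-stage fog service placement: stage 1 filters nodes that satisfy
# the service's resource demand; stage 2 optimises among the survivors
# (here: lowest latency wins). Node attributes are hypothetical.

def place_service(service, nodes):
    # Stage 1: feasibility filter on CPU and memory.
    feasible = [n for n in nodes
                if n["cpu_free"] >= service["cpu"]
                and n["mem_free"] >= service["mem"]]
    if not feasible:
        return None  # no node can host the service right now
    # Stage 2: optimise among feasible nodes.
    return min(feasible, key=lambda n: n["latency_ms"])["name"]

nodes = [
    {"name": "fog-a", "cpu_free": 2, "mem_free": 512, "latency_ms": 5},
    {"name": "fog-b", "cpu_free": 8, "mem_free": 4096, "latency_ms": 12},
    {"name": "fog-c", "cpu_free": 4, "mem_free": 2048, "latency_ms": 3},
]
print(place_service({"cpu": 4, "mem": 1024}, nodes))  # prints: fog-c
```

In the multi-agent setting the paper describes, each node would evaluate feasibility locally and agents would negotiate the stage-2 choice, rather than a central function iterating over a global node list.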
APA, Harvard, Vancouver, ISO, and other styles
45

Gite, Kundan. "Monolithic vs. Microservices Architecture: A Comparative Study of Software Development Paradigms." International Scientific Journal of Engineering and Management 04, no. 06 (2025): 1–9. https://doi.org/10.55041/isjem04468.

Full text
Abstract:
Software architecture is the backbone of any application, shaping its scalability, maintainability, and overall performance. Among the most widely adopted architectural patterns are monolithic and microservices architectures, each offering distinct benefits and trade-offs. A monolithic approach keeps all components tightly integrated into a single system, simplifying development and deployment but limiting flexibility as applications grow. In contrast, microservices break applications into smaller, independently deployable services, improving scalability and fault isolation while introducing challenges such as service orchestration and data consistency. This paper examines the core differences between these two architectures, exploring their advantages, limitations, and real-world applications. It also examines key factors that influence the decision to transition from monolithic to microservices, such as scalability demands, business agility, and operational complexity. Additionally, we discuss modern solutions, like service mesh technologies and cloud-native tools, that help address the challenges of microservices adoption. By providing a balanced perspective, this study aims to help businesses and developers choose the right architecture based on their specific needs and long-term goals. Keywords: Software Architecture, Monolithic Applications, Microservices, Scalability, Service Orchestration, Cloud Computing, Containerization, Fault Isolation.
APA, Harvard, Vancouver, ISO, and other styles
46

Sushil Prabhu Prabhakaran. "Integration Patterns in Unified AI and Cloud Platforms: A Systematic Review of Process Automation Technologies." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 6 (2024): 1932–40. https://doi.org/10.32628/cseit241061229.

Full text
Abstract:
This article comprehensively analyzes unified AI and cloud platforms, examining their role in transforming process automation and decision systems across industries. The article investigates the architectural frameworks and integration patterns that enable the convergence of AI tools, machine learning operations, and workflow orchestration within cloud-native environments. The article explores key innovations, including federated AI implementations, real-time data processing architectures, and multi-cloud integration patterns. It provides insights into their practical applications across finance, healthcare, retail, and manufacturing sectors. The article identifies critical success factors in platform implementation, including integrating MLOps frameworks, automated decision engines, and compliance tools for AI governance. Through case study analysis and architectural evaluation, we demonstrate how unified platforms address traditional challenges in AI deployment while enabling scalable, cost-efficient solutions. The findings reveal emerging patterns in platform architecture that facilitate seamless integration of edge computing, real-time analytics, and distributed AI systems, contributing to the broader understanding of enterprise AI implementation strategies. This article provides valuable insights for researchers and practitioners in cloud engineering, artificial intelligence, and systems integration while highlighting future directions for platform evolution and standardization.
APA, Harvard, Vancouver, ISO, and other styles
47

Tejasvi Nuthalapati. "How the cloud connects everything: Demystifying enterprise system integration." World Journal of Advanced Engineering Technology and Sciences 15, no. 2 (2025): 1584–94. https://doi.org/10.30574/wjaets.2025.15.2.0696.

Full text
Abstract:
Cloud-native integration has fundamentally transformed enterprise system architecture, breaking down traditional silos that have long plagued organizations. This comprehensive article examines how modern cloud technologies enable seamless connections between previously isolated systems—from HR platforms to finance tools and marketing systems. By leveraging APIs as universal connectors, event-driven architectures for real-time responsiveness, and sophisticated service orchestration for complex workflows, enterprises can create cohesive digital ecosystems that adapt to changing business needs. The article analyzes various integration patterns including point-to-point connections, hub-and-spoke models, and event mesh architectures, demonstrating their application through real-world retail scenarios. It highlights key benefits of cloud-native integration: scalability that handles fluctuating workloads, built-in resilience mechanisms, enhanced security frameworks, and cost-efficient consumption-based models. For organizations beginning their integration journey, the article provides strategic guidance on mapping existing ecosystems, identifying high-value integration opportunities, adopting cloud-native approaches, implementing incremental changes, and building internal integration competencies.
APA, Harvard, Vancouver, ISO, and other styles
48

DEETI, VINAY KUMAR. "A Full-Stack Cloud Computing Infrastructure For Machine Learning Tasks." INTERNATIONAL JOURNAL OF NOVEL RESEARCH AND DEVELOPMENT 6, no. 7 (2021): 37–45. https://doi.org/10.5281/zenodo.15128036.

Full text
Abstract:
The rapid expansion of ML applications has made scalable, efficient, and cost-effective cloud computing infrastructure a necessity. A complete cloud infrastructure for ML applications connects optimized frameworks to virtualized resources, including compute, storage, and networking components, to manage entire ML pipeline processes. The study analyzes a detailed cloud architecture that unites IaaS, PaaS, and SaaS to support smooth model development, training, deployment, management, and monitoring. A performance and cost-efficiency analysis examines three essential components: containerized environments, serverless computing, and distributed storage solutions. System reliability and scalability are further improved through automation and orchestration tools and the implementation of security measures. Tests and benchmark examples show that this complete approach boosts ML workload performance while optimizing resources and remaining adaptable. The proposed infrastructure creates an effective basis that enables both enterprises and researchers to streamline the deployment of ML solutions in cloud environments.
APA, Harvard, Vancouver, ISO, and other styles
49

Gopinath Govindarajan. "Building a strong foundation in data engineering: a comprehensive guide for aspiring data analysts." World Journal of Advanced Research and Reviews 26, no. 1 (2025): 3901–7. https://doi.org/10.30574/wjarr.2025.26.1.1508.

Full text
Abstract:
This comprehensive article explores the fundamental aspects of building a strong foundation in data engineering, focusing on the transformation of data processing and management in modern organizations. The article examines the evolution of data engineering practices, highlighting the integration of artificial intelligence, cloud technologies, and automated workflows in contemporary data architectures. It investigates core technical foundations, including database management, SQL optimization, and Python programming, while analyzing the impact of cloud-native services and distributed computing on data processing capabilities. The article also delves into automation and orchestration practices, examining how modern tools and frameworks have revolutionized data pipeline management. Additionally, the article addresses critical aspects of data security and governance, providing insights into emerging best practices and regulatory compliance frameworks in the data engineering landscape.
APA, Harvard, Vancouver, ISO, and other styles
50

Ganesh Vanam. "AI-Enhanced Cloud Automation: A Framework for Next-Generation Infrastructure Management." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 11, no. 1 (2025): 12–19. https://doi.org/10.32628/cseit25111204.

Full text
Abstract:
The integration of artificial intelligence with cloud automation represents a paradigm shift in infrastructure management, offering organizations unprecedented capabilities to optimize and maintain complex IT environments. This article examines the transformative impact of AI-driven cloud automation, focusing on three key innovations: predictive scaling mechanisms, autonomous remediation systems, and intelligent container orchestration. Through analysis of current implementations across major cloud platforms and industry-leading AIOps tools, the article explores how these technologies are revolutionizing resource management, incident response, and operational efficiency. The article draws insights from implementations in healthcare and financial services sectors, demonstrating tangible improvements in system reliability, cost optimization, and innovation acceleration. The article findings suggest that AI-driven cloud automation not only enhances traditional infrastructure management practices but also enables organizations to build more resilient, scalable, and intelligent systems that can adapt to dynamic workload requirements while minimizing human intervention. This article contributes to the growing body of knowledge on intelligent infrastructure management and provides practical insights for organizations seeking to leverage AI capabilities in their cloud environments.
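Predictive scaling, the first of the three innovations examined above, amounts to provisioning for a forecast of demand rather than its current value. A deliberately simple sketch using a linear trend fit; real AIOps platforms use far richer models, and the capacity-per-instance figure is an assumption:

```python
import math

# Sketch of predictive scaling: fit a least-squares linear trend to recent
# load samples, extrapolate one step ahead, and size the fleet for the
# forecast. The capacity-per-instance value is an illustrative assumption.

def forecast_next(samples):
    """Least-squares linear extrapolation one step ahead."""
    n = len(samples)
    mean_x, mean_y = (n - 1) / 2, sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    var = sum((x - mean_x) ** 2 for x in range(n))
    slope = cov / var if var else 0.0
    return mean_y + slope * (n - mean_x)

def instances_needed(samples, capacity_per_instance=100.0):
    """Round the forecast up to whole instances, never below one."""
    return max(1, math.ceil(forecast_next(samples) / capacity_per_instance))

load = [120, 150, 180, 210, 240]    # requests/sec, rising trend
print(instances_needed(load))       # provisions for the 270 rps forecast, not the current 240
```

Scaling on the forecast rather than the latest sample gives new capacity time to come online before the load actually arrives, which is the core advantage predictive scaling holds over purely reactive thresholds.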
APA, Harvard, Vancouver, ISO, and other styles