Journal articles on the topic 'Disaster Recovery Automation'

Consult the top 50 journal articles for your research on the topic 'Disaster Recovery Automation.'


1

Wang, Guang Hua, and Hui Zheng. "Application of Disaster Recovery Technology in Power Grid Dispatching Automation System." Advanced Materials Research 468-471 (February 2012): 55–59. http://dx.doi.org/10.4028/www.scientific.net/amr.468-471.55.

Abstract:
As the power system grows in scale and complexity, the demands on the operational reliability of the power grid dispatching automation system keep rising. Rapid data growth and requirements for high availability and security have driven the development of disaster recovery technology, which has become a key element of IT infrastructure. This article describes disaster recovery technology in detail, covering its measurement indices and level classification, and analyzes its implementation process and importance using disaster recovery in the power grid dispatching automation system as an example.
2

Hemanth, Kumar. "Efficient Cloud-Based Disaster Recovery Plans for Airline Management Systems." Journal of Advances in Developmental Research 16, no. 1 (2025): 1–5. https://doi.org/10.5281/zenodo.14851299.

Abstract:
Airline management systems are critical infrastructures that require high availability, reliability, and resilience against potential disasters, including cyberattacks, system failures, and natural calamities. Cloud-based disaster recovery (CBDR) strategies have emerged as an efficient approach to ensuring business continuity and minimizing operational disruptions. This study explores the effectiveness of cloud-based disaster recovery plans in the airline industry, analyzing best practices, challenges, and real-world implementations. The findings suggest that cloud-based solutions offer scalability, automation, and faster recovery times, enhancing system resilience. Case studies from leading airlines highlight the successful deployment of cloud-based recovery strategies, showcasing improvements in downtime reduction and cost efficiency. Despite security concerns and compliance challenges, cloud-based disaster recovery is essential for modern airline operations, ensuring seamless recovery and continuity in the face of disruptions.
3

Zein Samira, Yodit Wondaferew Weldegeorgise, Olajide Soji Osundare, Harrison Oke Ekpobimi, and Regina Coelis Kandekere. "Disaster recovery framework for ensuring SME business continuity on cloud platforms." Computer Science & IT Research Journal 5, no. 10 (2024): 2244–62. http://dx.doi.org/10.51594/csitrj.v5i10.1620.

Abstract:
Disaster recovery (DR) is a critical component of ensuring business continuity, especially for Small and Medium-sized Enterprises (SMEs) that rely heavily on cloud platforms for their operations. SMEs face unique challenges, including limited financial and technical resources, making it essential to develop a disaster recovery framework that is both cost-effective and robust. This paper proposes a disaster recovery framework that minimizes downtime and data loss, leveraging the capabilities of cloud platforms to ensure continuous business operations. The proposed framework focuses on three key objectives: reducing Recovery Point Objectives (RPO), minimizing Recovery Time Objectives (RTO), and ensuring scalability. RPO refers to the amount of data that can be lost before causing significant harm to the business, while RTO measures the time it takes to restore operations after a disaster. The framework achieves these objectives through cloud-based replication and automated backup systems. Data replication across geographically distributed data centers ensures that a copy of the data is always available, while incremental backups reduce the potential for data loss, ensuring that SMEs can recover recent transactions and information with minimal disruption. Automation plays a central role in the disaster recovery process. Using tools like AWS Elastic Disaster Recovery or Azure Site Recovery, SMEs can implement automated failover procedures that trigger in the event of an outage. This automation significantly reduces manual intervention, decreasing the likelihood of human error while improving recovery speed. Furthermore, periodic testing of disaster recovery plans is incorporated to ensure preparedness, with simulations identifying any vulnerabilities in the DR strategy. By using a pay-as-you-go model for cloud resources, SMEs can scale their disaster recovery solutions as their operations grow, optimizing costs while maintaining flexibility. This framework provides a comprehensive, affordable solution for SMEs to safeguard their business continuity, protecting them from the potentially devastating impacts of data loss and downtime. Keywords: Disaster Recovery, SME Business, Cloud Platforms, Review.
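As a concrete illustration of the automated failover the framework describes, the sketch below flips a DNS record to a standby region when the primary's health endpoint stops answering. This is a minimal sketch of the general pattern, not the paper's implementation; the boto3 Route 53 call is real, but the zone ID, record name, health URL, and standby address are hypothetical placeholders.

```python
# Minimal DNS-failover sketch (hypothetical names, not the paper's code).
import boto3
import urllib.request

HOSTED_ZONE_ID = "Z0000000EXAMPLE"       # hypothetical hosted zone
RECORD_NAME = "app.example-sme.com."     # hypothetical record
PRIMARY_URL = "https://primary.example-sme.com/health"
STANDBY_IP = "203.0.113.10"              # hypothetical standby address

def primary_healthy(timeout: int = 5) -> bool:
    """Return True if the primary region answers its health endpoint."""
    try:
        with urllib.request.urlopen(PRIMARY_URL, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

def fail_over_to_standby() -> None:
    """Point the application record at the standby region."""
    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Comment": "Automated DR failover",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": STANDBY_IP}],
                },
            }],
        },
    )

if __name__ == "__main__":
    if not primary_healthy():
        fail_over_to_standby()
```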
4

Abieba, Olumese Anthony, Chisom Elizabeth Alozie, and Olanrewaju Oluwaseun Ajayi. "Enhancing Disaster Recovery and Business Continuity in Cloud Environments through Infrastructure as Code." Journal of Engineering Research and Reports 27, no. 3 (2025): 127–36. https://doi.org/10.9734/jerr/2025/v27i31423.

Abstract:
This paper explores the transformative impact of Infrastructure as Code (IaC) on disaster recovery and business continuity in cloud environments. Infrastructure as Code is defined as the practice of managing and provisioning infrastructure through machine-readable code, facilitating automation, consistency, and scalability. The relevance of IaC in disaster recovery is highlighted, demonstrating how it enhances operational efficiency and resilience by automating key processes such as backup, failover, and restoration. Furthermore, the paper discusses the importance of business continuity, emphasizing IaC’s role in maintaining and quickly restoring critical services. The advantages of using IaC tools and practices to enforce continuity plans are examined, alongside a set of best practices for successful implementation. Ultimately, the paper concludes that adopting Infrastructure as Code is essential for organizations seeking to enhance their disaster recovery and business continuity strategies in an increasingly complex digital landscape.
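To make the IaC idea concrete, here is a minimal sketch of recovery-as-reprovisioning: the DR environment lives entirely in version-controlled templates, so restoring service amounts to re-running the provisioner. Terraform is used as one common IaC tool (an assumption; the paper does not prescribe it), and the stack directory and workspace name are hypothetical.

```python
# Hedged sketch: recover by re-applying version-controlled templates.
# Terraform is an assumed tool choice; directory/workspace names are placeholders.
import subprocess

def provision_recovery_stack(stack_dir: str = "./dr-stack") -> None:
    subprocess.run(["terraform", "init"], cwd=stack_dir, check=True)
    subprocess.run(["terraform", "workspace", "select", "dr"],
                   cwd=stack_dir, check=True)
    subprocess.run(["terraform", "apply", "-auto-approve"],
                   cwd=stack_dir, check=True)

if __name__ == "__main__":
    provision_recovery_stack()
```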
5

Wu, Minhao. "Robotics Applications in Natural Hazards." Highlights in Science, Engineering and Technology 43 (April 14, 2023): 273–79. http://dx.doi.org/10.54097/hset.v43i.7429.

Abstract:
Natural hazards cause not only fatalities but also substantial economic loss. Although governments have proposed well-developed policies to handle emergencies rapidly and organize recovery actions systematically, failures in emergency relief, such as ineffective rescue, can significantly increase the post-hazard death rate. With the advance of artificial intelligence, the use of robots for disaster management is a new trend in managing and assessing natural disasters. Disaster response robotics can assist and replace rescue teams working in dangerous scenarios, which not only alleviates labor intensity but also reduces the risks to rescue personnel. Research on construction automation has advanced, but extensive development is still required to reach fully autonomous construction in disaster management and post-disaster recovery. In the meantime, human-robot collaboration is promising and can effectively alleviate knowledge deficits and confusion. This article introduces the different functions of disaster response robotics and describes technical challenges and future improvements.
6

Kim, Bong-Hyun. "Design of Reliable Disaster Recovery System through Integrated Server Redundancy." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 6 (2021): 674–79. http://dx.doi.org/10.17762/turcomat.v12i6.2069.

Abstract:
Even before the September 11 terrorist attacks in the United States in 2001, information systems in Korea were poorly prepared for disasters. As various domestic and foreign incidents have occurred, however, the need for such preparation has been recognized, and each institution now maintains backup policies to protect its information and data in case of disaster. In this paper, we design a more stable and efficient disaster recovery system by building redundancy for servers operating in an integrated data center. To do this, we analyzed the redundancy design for the integrated disaster recovery server, designed the overall system configuration, and evaluated the design by testing web server and switch redundancy. The proposed approach to stabilizing the disaster recovery system is redundant construction of the integrated server and switch: the system comprises active and standby storage, with data stabilized through real-time mutual replication. Existing disaster recovery systems struggle to stabilize replication because they lack a monitoring system for internal replication between storage arrays. To solve this problem, we designed a system that replicates all data from active storage to standby storage in real time and monitors the replication status. Introducing automated service conversion from the main system to the disaster recovery system, as designed in this paper, improves the stability and reliability of local government services and enables a more efficient and advanced disaster recovery operation.
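As a rough illustration of the replication monitoring described above, the Python sketch below polls the write positions reported by both sides and alerts when the standby falls behind. It is an assumption-laden stand-in, not the paper's design; the position callables are hypothetical hooks into whatever storage API is in use.

```python
# Illustrative replication-lag monitor (hypothetical hooks, not the paper's code).
import time
from typing import Callable

def monitor_replication(
    active_position: Callable[[], int],
    standby_position: Callable[[], int],
    max_lag: int = 100,      # tolerated difference in replicated units
    interval_s: float = 5.0,
) -> None:
    while True:
        lag = active_position() - standby_position()
        if lag > max_lag:
            # A real deployment would page an operator or trigger the
            # automated service-conversion step described in the paper.
            print(f"ALERT: standby lags active storage by {lag} units")
        else:
            print(f"replication healthy, lag={lag}")
        time.sleep(interval_s)
```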
7

Neeli, Sethu Sesha Synam. "Empowering Database Administrators: The Essential Role of Automation in Modern Practices." International Scientific Journal of Engineering and Management 04, no. 01 (2025): 1–7. https://doi.org/10.55041/isjem01338.

Abstract:
Managing large servers and databases while maintaining the uninterrupted functioning of critical applications presents significant challenges for database administrators. Automating database administration has become a crucial component in enhancing high availability, reliability, efficiency, and scalability. To reduce mistakes, minimize manual interventions, and improve database process performance, automation is essential. Important components include disaster recovery plans, security monitoring, performance review, and automatic backups, all of which are intended to prevent data loss. This study explores how businesses use automated database administration and contemporary automation techniques to maintain a competitive edge in an increasingly data-centric environment. Keywords: Jenkins, data, ansible, automation, mistakes, CICD, access, backups, and consistency.
8

Suresh Kotha Naga Venkata Hanuma. "A single-click automation tool for azure site recovery: Enhancing disaster response efficiency." World Journal of Advanced Engineering Technology and Sciences 15, no. 2 (2025): 2713–21. https://doi.org/10.30574/wjaets.2025.15.2.0839.

Abstract:
A single-click automation tool for Azure Site Recovery (ASR) transforms disaster recovery operations from complex technical processes into streamlined business functions with predictable outcomes. This PowerShell-based solution addresses critical challenges in cross-region resource migration by implementing a multi-layered architecture that manages the entire migration lifecycle with minimal manual intervention. The automation framework encompasses resource discovery, dependency mapping, configuration transformation, and deployment orchestration, significantly reducing recovery times compared to traditional manual approaches. Performance analysis demonstrates consistent improvements across diverse application architectures, from simple web applications to complex microservices implementations. While the current implementation successfully addresses many historical barriers to effective disaster recovery, opportunities remain for enhancing application-specific validation, dynamic recovery sequencing, and cross-cloud capabilities. The tool's modular design enables organizations to achieve substantially improved resilience postures without corresponding increases in operational complexity or specialized technical requirements.
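The tool itself is PowerShell-based; purely to illustrate the staged lifecycle the abstract names (discovery, dependency mapping, configuration transformation, deployment orchestration), here is a hedged Python sketch in which every stage function is a hypothetical placeholder enriching shared state.

```python
# Sketch of a staged failover pipeline; all stages are hypothetical stubs.
from typing import Callable, Dict, List

Stage = Callable[[Dict], Dict]

def discover(state: Dict) -> Dict:
    state["resources"] = ["vm-web", "vm-db", "lb-front"]  # stub inventory
    return state

def map_dependencies(state: Dict) -> Dict:
    # Recover the database before the web tier that depends on it.
    state["order"] = ["vm-db", "vm-web", "lb-front"]
    return state

def transform_config(state: Dict) -> Dict:
    state["target_region"] = "secondary"  # rewrite region-specific settings
    return state

def deploy(state: Dict) -> Dict:
    for name in state["order"]:
        print(f"failing over {name} to {state['target_region']}")
    return state

PIPELINE: List[Stage] = [discover, map_dependencies, transform_config, deploy]

def single_click_failover() -> Dict:
    state: Dict = {}
    for stage in PIPELINE:
        state = stage(state)   # each stage enriches the shared state
    return state

if __name__ == "__main__":
    single_click_failover()
```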
9

Akula, Naga Venkata Chaitanya. "Optimizing Regional Disaster Recovery in OpenShift: A Multi-Cluster Approach with RHACM and ODF." International Journal of Computational Mathematical Ideas 17, no. 01 (2025): 7027–38. https://doi.org/10.70153/ijcmi/2025.17101.

Abstract:
In the current digital environment, it is crucial for businesses to implement robust disaster recovery (DR) strategies to counteract risks posed by cyber threats, hardware malfunctions, and natural calamities. This paper examines an improved Regional Disaster Recovery (Regional-DR) strategy within OpenShift, which incorporates Red Hat Advanced Cluster Management (RHACM) and OpenShift Data Foundation (ODF) to enable smooth application failover and data replication. The practical application of Regional-DR showcases how applications and storage can be synchronized across several geographically dispersed clusters. The solution achieved a Recovery Time Objective (RTO) of 5 minutes and a Recovery Point Objective (RPO) of 3 minutes, ensuring minimal downtime and data loss. Furthermore, best practices for maintaining business continuity and enhancing system resilience using OpenShift’s built-in DR features are analysed. The findings demonstrate how enterprises can utilize OpenShift’s multi-cluster disaster recovery capabilities for effective failover management and improved performance. Keywords: OpenShift Disaster Recovery, Regional Disaster Recovery (Regional-DR), Red Hat Advanced Cluster Management (RHACM), OpenShift Data Foundation (ODF), Multi-Cluster Management, Kubernetes Disaster Recovery, Failover and Failback Automation, Persistent Storage Replication, Recovery Time Objective (RTO), Recovery Point Objective (RPO), Cloud-Native Disaster Recovery, Data Synchronization, Resilient Infrastructure.
10

Somi, Vivek. "From Backup and Restore to Multi-Site Active: Evaluating the Spectrum of AWS Disaster Recovery Solutions." International Journal of Multidisciplinary Research and Growth Evaluation 6, no. 1 (2025): 2154–63. https://doi.org/10.54660/.ijmrge.2025.6.1.2154-2163.

Abstract:
A fundamental component of cloud architecture, disaster recovery (DR) guarantees business continuity in the event of failures. Based on Recovery Time Objective (RTO), Recovery Point Objective (RPO), cost, and complexity, this review looks at AWS disaster recovery techniques, from Backup and Restore to Multi-Site Active/Active. While Pilot Light and Warm Standby balance cost and recovery speed by retaining either minimal or reduced infrastructure, Backup and Restore offers a slow but reasonably priced recovery choice. Multi-Site Active/Active guarantees almost instantaneous failover but requires large operational overhead and financial outlay. Implementation of DR solutions depends heavily on AWS products such as AWS Elastic Disaster Recovery, Amazon S3, AWS CloudFormation, and Amazon Route 53. Key best practices to maximize DR impact are automation, testing, and monitoring. Smooth failover and restoration depend on addressing issues such as data synchronization, network configuration, and application dependencies. Comparative study shows that an organization's tolerance for downtime, financial restrictions, and compliance needs determine its ideal DR strategy. Cost-sensitive companies can depend on Backup and Restore, while those wanting faster recovery can choose Pilot Light or Warm Standby. Despite its great expense and complexity, Multi-Site Active/Active serves mission-critical systems needing the highest availability. Future trends in AWS DR, including increased multi-region replication, serverless failover, and AI-driven automation, will improve resilience and efficiency. Organizations can reduce risk, guarantee data integrity, and accomplish smooth recovery in disaster situations by implementing a clearly defined, scalable, tested DR strategy.
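The review's central trade-off, picking the cheapest pattern whose typical RTO and RPO meet the business targets, can be shown in a toy decision helper. The figures below are illustrative assumptions, not AWS guarantees.

```python
# Toy DR-strategy selector; the RTO/RPO numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    rto_minutes: float   # typical time to restore service
    rpo_minutes: float   # typical data-loss window
    relative_cost: int   # 1 = cheapest

STRATEGIES = [
    Strategy("Backup and Restore", rto_minutes=24 * 60, rpo_minutes=60, relative_cost=1),
    Strategy("Pilot Light",        rto_minutes=60,      rpo_minutes=10, relative_cost=2),
    Strategy("Warm Standby",       rto_minutes=15,      rpo_minutes=5,  relative_cost=3),
    Strategy("Multi-Site Active/Active", rto_minutes=1, rpo_minutes=0,  relative_cost=4),
]

def choose_strategy(target_rto: float, target_rpo: float) -> Strategy:
    feasible = [s for s in STRATEGIES
                if s.rto_minutes <= target_rto and s.rpo_minutes <= target_rpo]
    if not feasible:
        raise ValueError("no pattern meets the stated RTO/RPO targets")
    return min(feasible, key=lambda s: s.relative_cost)

print(choose_strategy(target_rto=120, target_rpo=15).name)  # -> Pilot Light
```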
11

Velmurugan, Dhakshnamoorthy. "Revolutionizing Disaster Recovery: Fully Automated Cloud Solutions." European Journal of Advances in Engineering and Technology 11, no. 12 (2024): 1–2. https://doi.org/10.5281/zenodo.14540055.

Abstract:
In today's digital landscape, seamless business continuity and disaster recovery (DR) are essential. This article explores an innovative DR solution implemented in Oracle Cloud Infrastructure (OCI) across Ashburn and Singapore regions, using Ansible and Jenkins for full automation. The solution allows for independent, on-demand DR switches with minimal manual intervention, achieving excellent recovery time objectives (RTO) and recovery point objectives (RPO). It includes real-time data synchronization using Oracle Data Guard, automated switch-over processes, and integration with third-party systems. The solution also provides significant cost savings with reduced capacity when either site is inactive, achieving near-zero RPO and an RTO of 1 hour, ensuring minimal downtime for Oracle applications.
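As a sketch of the switch-over step such an automation might wrap (the article orchestrates it with Ansible and Jenkins; this is not its code), the snippet drives Oracle Data Guard's broker CLI, dgmgrl, from Python. The connect string and standby name are hypothetical placeholders.

```python
# Hedged sketch: issue a Data Guard broker switchover via dgmgrl.
# Connect string and database names are hypothetical placeholders.
import subprocess

def dataguard_switchover(connect: str, standby_db: str) -> None:
    """Issue a Data Guard broker switchover to the named standby."""
    result = subprocess.run(
        ["dgmgrl", connect, f"SWITCHOVER TO {standby_db}"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(f"switchover failed: {result.stderr}")
    print(result.stdout)

# e.g. dataguard_switchover("sys/***@ashburn_prod", "singapore_stby")
```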
12

Lai, Horng Cherng, Ting Sung Yeh, Jen Huan Chang, et al. "A Developing Flood Automation System." Applied Mechanics and Materials 457-458 (October 2013): 1220–23. http://dx.doi.org/10.4028/www.scientific.net/amm.457-458.1220.

Abstract:
Typhoons are the disasters Taiwan most needs to prevent; hence the saying "good weather, good life." The government works across prevention, preparedness, response, and recovery, with particular attention to typhoon flood surveys. To support instant contingency handling under severe disaster conditions, the government aims to control a disaster immediately and calculate the resulting losses. The main purpose of this study is to build a flood automation system. Auto-recording water level monitors (for internal and external water) at water stations collect stage data, which are integrated with a Geographic Information System through terrain grid computing to obtain the instant scope of inundation and flood depth. Together with real-time rainfall data and forecasts, this achieves an effective flood watch or warning, providing instant information for anticipating flood disasters. Combined with the Flood Security Plan, it gives the Emergency Response Center a sound reference for decision-making. The study area is Yilan County, the region most sensitive to flooding, where an instant typhoon information system, the flood automation system, was set up. Through several investigations and post-disaster flood scar measurements, ground hydrological information was fully recorded and calibrated against water level gauges built by the Taiwan National Typhoon and Flood Research Center and the AWRSIP weather telemetry platform's synthetic aperture radar images of flood scope and depth. The flood automation system is based on the fact that the occurrence and distribution of floods is not merely a product of chance but the result of a combination of climatic, hydrologic, geologic, topographic, and soil-forming factors that together form an integrated dynamic system. By combining real-time rainfall forecasts, flood recording, automation, GIS, flood warning, instant data transmission, and the internet, an ideal typhoon information system can be set up that is highly accurate and easy to use in flooded areas, supporting their sustainable development.
13

Ajay Venkat Nagrale. "Cloud-based disaster recovery: An economic analysis of enterprise cost reduction strategies." World Journal of Advanced Engineering Technology and Sciences 15, no. 2 (2025): 672–79. https://doi.org/10.30574/wjaets.2025.15.2.0587.

Abstract:
This article examines the economic advantages of cloud-based disaster recovery solutions for enterprise organizations. Through a comprehensive analysis of infrastructure, operational, and labor cost factors, the article demonstrates how cloud disaster recovery architectures fundamentally transform traditional cost structures while maintaining or enhancing business resilience. The article synthesizes findings from multiple industry studies and provides a framework for quantifying cost reductions across capital expenditures, operational expenses, and potential downtime impacts. The article presents a methodological approach for enterprises to assess their current disaster recovery economics, select appropriate cloud providers, implement automation strategies, and establish ongoing cost optimization practices. The article contributes to the growing body of knowledge on cloud economics by specifically addressing the financial dimensions of disaster recovery planning and implementation, offering actionable insights for enterprise technology leaders navigating business continuity challenges in resource-constrained environments.
14

Naveen Karuturi. "High availability and disaster recovery strategies for cloud-based SAP systems." World Journal of Advanced Engineering Technology and Sciences 15, no. 2 (2025): 268–81. https://doi.org/10.30574/wjaets.2025.15.2.0517.

Abstract:
High availability and disaster recovery strategies form the backbone of resilient cloud-based SAP systems. This article explores how these complementary approaches address different levels of system resilience - with high availability focusing on component-level failures within a region through redundancy and automated failover, while disaster recovery handles catastrophic events affecting entire regions. The exploration covers infrastructure redundancy, database high availability, application-level redundancy, and automated failover mechanisms as key elements of comprehensive availability architectures. For disaster recovery, backup and recovery approaches, cross-region replication, recovery orchestration, and regular testing are essential components. The article also evaluates cloud provider-specific features from AWS, Azure, and Google Cloud Platform, highlighting their unique capabilities for SAP workloads. Implementation best practices emphasize business-driven requirements, layered defense strategies, automation, regular testing, thorough documentation, proactive monitoring, and continuous review processes to optimize resilience for these business-critical systems.
15

Anju, Bhole. "Cloud Computing for Disaster Recovery and Business Continuity." INTERNATIONAL JOURNAL OF INNOVATIVE RESEARCH AND CREATIVE TECHNOLOGY 9, no. 4 (2023): 1–12. https://doi.org/10.5281/zenodo.14598703.

Abstract:
The emergence of cloud computing has fundamentally altered the landscape of disaster recovery (DR) and business continuity (BC) strategies. With an increasing frequency of operational disruptions, including cyber threats and natural calamities, organizations must prioritize the seamless availability of key services and the protection of critical data. Traditional DR systems, often burdened by substantial initial costs and limited physical infrastructure, contrast sharply with the dynamic, scalable, and cost-effective solutions provided by cloud technology. By harnessing the advantages of redundancy, automation, and remote access, cloud computing enables businesses to bolster their resilience and minimize downtime during crises. This paper delves into the multifaceted role of cloud computing within disaster recovery and business continuity frameworks, examining the merits of public, private, and hybrid cloud models, while also addressing the associated challenges such as security, data privacy, and compliance. Through a thorough review of contemporary practices, emerging trends, and case studies, this research elucidates how cloud technologies are redefining disaster recovery strategies across global enterprises.
16

Researcher. "SIMPLIFYING AZURE SITE RECOVERY: A COMPREHENSIVE ANALYSIS OF CLOUD-BASED DISASTER RECOVERY SOLUTIONS." International Journal of Computer Engineering and Technology (IJCET) 15, no. 6 (2024): 1091–98. https://doi.org/10.5281/zenodo.14287093.

Abstract:
This article presents a comprehensive analysis of Azure Site Recovery as a cloud-based disaster recovery solution, examining its architecture, implementation frameworks, and real-world performance metrics in orchestrated failover scenarios. The article evaluates the platform's effectiveness across multiple dimensions, including replication mechanisms, manually initiated recovery processes, integration capabilities, and economic implications. Through extensive analysis of deployment scenarios and performance data, the research demonstrates significant improvements in recovery time objectives (RTO) and recovery point objectives (RPO) when proper procedures are followed, with organizations achieving recovery times as low as 15 minutes and near-zero data loss in synchronous replication scenarios after initiating failover. The article reveals that organizations implementing Azure Site Recovery, with well-defined failover procedures and trained personnel, experienced an average 76% reduction in system downtime and achieved cost savings between 40-60% compared to traditional disaster recovery solutions. The platform's architecture, featuring orchestrated failover capabilities requiring human validation, robust integration with the broader Azure ecosystem, and scalable resource management, provides a foundation for reliable and efficient disaster recovery operations when properly managed. The article also highlights critical success factors for implementation, including the importance of regular failover testing and staff training, while identifying emerging technological trends that will shape the future of cloud-based disaster recovery solutions. This article contributes valuable insights for organizations considering cloud-based disaster recovery solutions and provides a framework for evaluating and implementing Azure Site Recovery in enterprise environments, with particular emphasis on the importance of human oversight in the failover process.
17

Hu, Lei, Zhe Fang, Mingda Zhang, Liangcun Jiang, and Peng Yue. "Facilitating Typhoon-Triggered Flood Disaster-Ready Information Delivery Using SDI Services Approach—A Case Study in Hainan." Remote Sensing 14, no. 8 (2022): 1832. http://dx.doi.org/10.3390/rs14081832.

Abstract:
Natural disaster response and assessment are key elements of natural hazard monitoring and risk management. Currently, the existing systems are not able to meet the specific needs of many regional stakeholders worldwide; traditional approaches with field surveys are labor-intensive, time-consuming, and expensive, especially for severe disasters that affect a large geographic area. Recent studies have demonstrated that Earth observation (EO) data and technologies provide powerful support for the natural disaster emergency response. However, challenges still exist in support of the entire disaster lifecycle—preparedness, response, and recovery—which build the gaps between the disaster Spatial Data Infrastructure (SDI) already-in-place requirements and the EO capabilities. In order to tackle some of the above challenges, this paper demonstrates how to facilitate typhoon-triggered flood disaster-ready information delivery using an SDI services approach, and proposes a web-based remote sensing disaster decision support system to facilitate natural disaster response and impact assessment, which implements on-demand disaster resource acquisition, on-the-fly analysis, automatic thematic mapping, and decision report release. The system has been implemented with open specifications to facilitate interoperability. The typhoons and floods in Hainan Province, China, are used as typical scenarios to verify the system’s applicability and effectiveness. The system improves the automation level of the natural disaster emergency response service, and provides technical support for regional remote-sensing-based disaster mitigation in China.
18

Barbosa, Vandirleya, Arthur Sabino, Luiz Nelson Lima, et al. "Dependability Evaluation of a Smart Poultry Monitoring System with Disaster Recovery Mechanism." Journal of the Brazilian Computer Society 30, no. 1 (2024): 252–63. http://dx.doi.org/10.5753/jbcs.2024.3863.

Abstract:
The Internet of Things (IoT) has changed how poultry farming is carried out, offering various advantages to farmers. One notable benefit is the real-time monitoring of bird breeding tasks, ensuring the well-being of the animals. Farmers can enhance their operations through task automation by incorporating an edge server for local sensor data processing. Task automation enables farmers to make informed decisions, improving production efficiency, bird quality, and agribusiness profits. However, poultry farming faces challenges, with disaster recovery a critical concern. Potential events like fires, power outages, or equipment failures can significantly impact birds and production. Consequently, continuous monitoring of birds is vital, and any disruptions must be minimized to uphold system integrity. This study introduces Stochastic Petri Net (SPN) models to evaluate the availability and reliability of an intelligent bird breeding system. The system integrates a disaster recovery solution for uninterrupted operations. Furthermore, a sensitivity analysis is conducted on the components of the smart poultry system to pinpoint the component most relevant to the system's availability in the proposed architecture. This analysis can aid system architects in developing distributed architectures, considering points of failure and recovery measures. The study results demonstrate the system's high availability and reliability, enabling farmers to make informed decisions and improve the overall productivity of their farms.
19

Vaheedbasha, Shaik, and Kalyanasundaram Natarajan. "Assimilating sense into disaster recovery databases and judgement framing proceedings for the fastest recovery." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 4 (2023): 4234–45. https://doi.org/10.11591/ijece.v13i4.pp4234-4245.

Abstract:
The replication between the primary and secondary (standby) databases can be configured in either synchronous or asynchronous mode. In either mode, any lag between the primary and standby databases is referred to as out-of-sync. Previous research demonstrated the advantages of the asynchronous method over the synchronous method on highly transactional databases. However, the asynchronous method requires human intervention and a great deal of manual effort to configure disaster recovery database setups. Moreover, existing setups provide no accurate process for intelligently estimating the lag between the primary and standby databases in terms of sequences and time. To address these research gaps, the current work implements a self-image looping database link process and provides decision-making capabilities at the standby database. Those decisions always favor selecting the most efficient data retrieval method and staying in sync with the primary database. The purpose of this paper is to add intelligence and automation to the standby database so that it can make decisions based on the rate of transaction concurrency at the primary and the out-of-sync status at the standby.
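The lag estimate the paper discusses, expressed in both sequences and time, reduces to a simple computation once the primary and standby log positions are known. The sketch below is illustrative only; in practice the positions would be read from database views over the self-image looping link the paper introduces.

```python
# Illustrative lag computation; LogPosition and its sample values are hypothetical.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LogPosition:
    sequence: int
    timestamp: datetime

def replication_lag(primary: LogPosition, standby: LogPosition):
    seq_lag = primary.sequence - standby.sequence
    time_lag = (primary.timestamp - standby.timestamp).total_seconds()
    return seq_lag, time_lag

p = LogPosition(10542, datetime(2023, 4, 1, 12, 0, 30))
s = LogPosition(10537, datetime(2023, 4, 1, 11, 58, 0))
print(replication_lag(p, s))  # -> (5, 150.0): 5 sequences, 150 seconds behind
```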
20

Mahesh Babu Jalukuri. "The Evolution of Cloud Resilience: Observability, Automation, and High Availability." Journal of Computer Science and Technology Studies 7, no. 5 (2025): 48–55. https://doi.org/10.32996/jcsts.2025.7.5.7.

Abstract:
Cloud resilience has evolved from basic disaster recovery practices into a sophisticated discipline encompassing observability, automation, and distributed architecture patterns. This transformation addresses the increasing complexity of modern digital infrastructure and growing expectations for continuous availability across interconnected systems. The convergence of these three foundational pillars enables organizations to detect anomalies before service disruption, implement autonomous recovery mechanisms, and design architecturally resilient systems that gracefully handle component failures. Contemporary approaches have shifted focus from reactive recovery toward proactive resilience frameworks that anticipate potential failure modes and incorporate mitigation strategies directly into system design. The evolution continues with advancements in machine learning-based predictive recovery, continuous validation techniques, and sophisticated correlation analysis for identifying causality in complex failure scenarios. Organizations implementing comprehensive resilience practices report significant improvements in availability metrics while simultaneously enhancing development velocity through reduced operational complexity. As cloud adoption accelerates across industries, resilience capabilities increasingly determine competitive positioning in the digital marketplace, driving the need for dedicated teams responsible for developing cross-functional resilience frameworks that span development, operations, and business continuity domains.
21

Santosh, Pashikanti. "Designing Resilient Cloud Architectures: A Practical Guide to High Availability and Disaster Recovery." International Journal on Science and Technology 12, no. 2 (2021): 1–5. https://doi.org/10.5281/zenodo.14631522.

Abstract:
Resilience in cloud architecture is crucial to ensuring that business-critical applications remain available and recoverable, even during major failures. This white paper offers a practical, technical guide to designing cloud systems that achieve high availability (HA) and disaster recovery (DR), emphasizing fault tolerance, scalability, and automation. It includes architecture patterns, tools, and detailed implementation strategies suitable for enterprises and industries of all sizes.
22

Shaik, Vaheedbasha, and Natarajan Kalyanasundaram. "Assimilating sense into disaster recovery databases and judgement framing proceedings for the fastest recovery." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 4 (2023): 4234. http://dx.doi.org/10.11591/ijece.v13i4.pp4234-4245.

Abstract:
The replication between the primary and secondary (standby) databases can be configured in either synchronous or asynchronous mode. In either mode, any lag between the primary and standby databases is referred to as out-of-sync. Previous research demonstrated the advantages of the asynchronous method over the synchronous method on highly transactional databases. However, the asynchronous method requires human intervention and a great deal of manual effort to configure disaster recovery database setups. Moreover, existing setups provide no accurate process for intelligently estimating the lag between the primary and standby databases in terms of sequences and time. To address these research gaps, the current work implements a self-image looping database link process and provides decision-making capabilities at the standby database. Those decisions always favor selecting the most efficient data retrieval method and staying in sync with the primary database. The purpose of this paper is to add intelligence and automation to the standby database so that it can make decisions based on the rate of transaction concurrency at the primary and the out-of-sync status at the standby.
23

Asis Jamal, Sarah Javed, Arslan Akram, and Shahzaib Jamal. "Recovery Method for Disasters of Network Servers by Using POX controller in Software defined Networks." Lahore Garrison University Research Journal of Computer Science and Information Technology 3, no. 4 (2019): 45–52. http://dx.doi.org/10.54692/lgurjcsit.2019.030492.

Abstract:
The devices used for automation are electronic components collectively called IoT devices, and the harder task is to manage them effectively. If these devices cannot connect or share data correctly, they are effectively useless, so increasing their connectivity options increases their chance of surviving a failure. In a disaster, the gap between network software and hardware must be bridged by controlling data traffic intelligently; software-defined networking (SDN) makes this possible by separating the data plane from the control plane, providing greater programmability. When IoT devices are disconnected by a loss of internet connectivity, they must respond quickly. SDN can search for an alternative path for information transfer to restore the connection: it reroutes based on the routing information and flows it already holds, and it has a better understanding of communication pathways. This paper focuses on disaster-induced problems and aims to recover one or more network servers from link failure, traffic engineering issues, power outages, and packet rerouting. We propose a systematic approach for recovering these servers using software-defined networks. The separation of the control plane from the data plane provides programmability and makes the system flexible enough to restore connections quickly. The recovery uses the OpenFlow protocol employed by SDN, with the implementation in Mininet, a POX controller, and Lipsflow mapping for disaster management and recovery.
24

Researcher. "LEVERAGING AI IN DISASTER RECOVERY: THE FUTURE OF BUSINESS CONTINUITY." International Journal of Research In Computer Applications and Information Technology (IJRCAIT) 7, no. 2 (2024): 1675–87. https://doi.org/10.5281/zenodo.14244192.

Abstract:
This article examines the revolutionary significance of artificial intelligence in contemporary business continuity and disaster recovery planning. As organizations face increasingly complex digital infrastructures and evolving cyber threats, AI technologies are revolutionizing how businesses approach disaster prevention, recovery orchestration, and data protection. The article describes how AI-driven solutions improve data backup plans, automate recovery procedures, enhance predictive analytics for early threat identification, and support regulatory compliance. Real-world case studies from the financial and healthcare sectors demonstrate how AI integration significantly improves recovery times, reduces operational costs, and strengthens overall organizational resilience. The article also addresses implementation best practices, common pitfalls, and emerging trends in AI-powered disaster recovery, providing insights into the future of business continuity management.
25

Uchechukwu, Emejeamara, Nwoduh Udochukwu, and Madu Andrew. "EFFECTIVE METHOD FOR MANAGING AUTOMATION AND MONITORING IN MULTI-CLOUD COMPUTING: PANACEA FOR MULTI-CLOUD SECURITY SNAGS." International Journal of Network Security & Its Applications (IJNSA) 12, no. 4 (2020): 39–44. https://doi.org/10.5281/zenodo.3975757.

Abstract:
Multi-cloud is an advanced version of cloud computing that allows its users to utilize different cloud systems from several Cloud Service Providers (CSPs) remotely. Although it is a very efficient computing facility, threat detection, data protection, and vendor lock-in are the major security drawbacks of this infrastructure. These factors act as a catalyst in promoting serious cyber-crimes in the virtual world. This paper provides an overview of the privacy and safety issues of a multi-cloud environment. The objective of this research is to analyze some logical automation and monitoring provisions, such as monitoring Cyber-physical Systems (CPS), home automation, automation in Big Data Infrastructure (BDI), Disaster Recovery (DR), and secret protection. The results of this investigation indicate that it is possible to avoid the security snags of a multi-cloud interface by adopting these scientific solutions methodically.
26

Researcher. "LEVERAGING KUBERNETES AND AI FOR IMPROVED DISASTER RECOVERY IN CLOUD COMPUTING." International Journal of Computer Engineering and Technology (IJCET) 15, no. 6 (2024): 1160–67. https://doi.org/10.5281/zenodo.14330367.

Abstract:
This article presents a groundbreaking approach to disaster recovery in cloud computing by integrating Artificial Intelligence (AI) capabilities with Kubernetes container orchestration. The article introduces a novel multi-layered architecture that combines deep learning-based predictive analytics, automated recovery mechanisms, and intelligent resource optimization algorithms to enhance system resilience and minimize downtime. Our framework demonstrated remarkable improvements in key performance metrics through extensive testing across geographically distributed clusters, achieving a 73% reduction in Recovery Time Objective (RTO) and maintaining Recovery Point Objective (RPO) under 10 seconds for critical workloads. The implementation resulted in a 94% reduction in false positive failure predictions and a 78% increase in successful automated recoveries while reducing operational costs by 45%. The system's hybrid AI approach, combining supervised and unsupervised learning techniques, achieved 89% accuracy in failure prediction with a 15-minute warning window. This article provides comprehensive evidence that AI-enhanced Kubernetes orchestration represents a significant advancement in cloud infrastructure resilience, offering practical solutions for organizations requiring robust disaster recovery capabilities. The article demonstrates that this integrated approach improves system reliability and provides a cost-effective, scalable foundation for next-generation cloud computing disaster recovery strategies.
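A minimal stand-in for the recovery loop such a framework automates: using the official Kubernetes Python client, find pods that need recovery and delete them so their controllers reschedule replacements. The AI prediction layer the article describes is reduced here to a placeholder phase check.

```python
# Hedged sketch of automated pod recovery; `needs_recovery` stands in for
# the article's trained prediction model.
from kubernetes import client, config

def needs_recovery(pod) -> bool:
    # Placeholder for the AI prediction step described in the article.
    return pod.status.phase == "Failed"

def recover_failed_pods(namespace: str = "default") -> None:
    config.load_kube_config()          # or config.load_incluster_config()
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(namespace).items:
        if needs_recovery(pod):
            print(f"recycling {pod.metadata.name}")
            v1.delete_namespaced_pod(pod.metadata.name, namespace)

if __name__ == "__main__":
    recover_failed_pods()
```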
27

Sumanth Kadulla. "The evolution of cloud automation: From DevOps to autonomous infrastructure." World Journal of Advanced Engineering Technology and Sciences 15, no. 2 (2025): 496–503. https://doi.org/10.30574/wjaets.2025.15.2.0626.

Abstract:
The evolution of cloud automation represents a transformative journey from manual operations to autonomous systems capable of self-configuration and self-healing. This technical article explores the progression from early DevOps practices through infrastructure as code, container orchestration, and toward AI-driven autonomous operations. The DevOps revolution established the foundation through cultural transformation and basic automation, breaking down traditional silos between development and operations teams while introducing standardized processes. Infrastructure as Code further advanced this paradigm by bringing software development practices to infrastructure management, enabling version control, peer review, and automated testing of infrastructure changes. Container orchestration platforms emerged to manage the increasing complexity of distributed applications, with Kubernetes becoming the dominant solution for managing containerized workloads across hybrid environments. Cloud-native resilience practices introduced sophisticated disaster recovery, comprehensive observability, and reliability engineering principles that enable systems to recover from failures with minimal human intervention. Looking toward the future, machine learning and AIOps platforms promise to transform operations from reactive to predictive, with systems that can anticipate problems before they impact users and automatically implement remediation strategies. This technological progression reflects a fundamental shift in how organizations manage cloud infrastructure, moving from manual interventions to intelligent, autonomous systems that optimize themselves while freeing technical teams to focus on innovation.
28

Sainath, Muvva. "Dual ETL – Hadoop Cluster Auto Failover." International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences 7, no. 5 (2019): 1–5. https://doi.org/10.5281/zenodo.14280339.

Abstract:
This paper examines the design of data infrastructure for high-speed delivery, focusing on the 4 V's of big data and the importance of geographically separated primary and Disaster Recovery clusters. It explores the complexities of the failover process of Hadoop clusters, identifying challenges such as manual metadata updates and data quality checks. The research proposes automation solutions, including the use of DistCp for data replication and Hive commands for metadata updates, aiming to enhance data infrastructure resilience and reduce manual intervention during critical events.
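The two automation steps the paper identifies, data replication and metadata refresh, might look like the following sketch: DistCp copies the dataset to the DR cluster and a Hive statement re-registers partitions. Cluster URIs, paths, and the table name are hypothetical placeholders.

```python
# Hedged sketch of DistCp-based replication plus Hive metadata refresh;
# all names and URIs are hypothetical placeholders.
import subprocess

def replicate(src: str, dst: str) -> None:
    """Copy a dataset between clusters using Hadoop DistCp."""
    subprocess.run(["hadoop", "distcp", "-update", src, dst], check=True)

def refresh_hive_metadata(table: str) -> None:
    """Re-register the replicated partitions with Hive on the DR cluster."""
    subprocess.run(["hive", "-e", f"MSCK REPAIR TABLE {table};"], check=True)

if __name__ == "__main__":
    replicate("hdfs://primary-nn:8020/warehouse/events",
              "hdfs://dr-nn:8020/warehouse/events")
    refresh_hive_metadata("events")
```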
29

Priyanka Verma and Dr. Lalit Kumar. "Automation of Database Administration Tasks Using Ansible." International Journal for Research Publication and Seminar 16, no. 1 (2025): 481–501. https://doi.org/10.36676/jrps.v16.i1.210.

Abstract:
The automation of database administration (DBA) functions has become increasingly significant as organizations look to streamline their operations, minimize human errors, and maximize efficiency in database administration. Ansible, the open-source automation platform, has attracted considerable attention due to its capability to automate a broad spectrum of database administration functions, including installation, configuration, performance tuning, and disaster recovery. Nevertheless, despite the increasing body of literature, there are still gaps in comprehending the overall potential of Ansible for predictive database administration, integration with cloud-native databases, and artificial intelligence-based automation. Early research primarily concentrated on routine DBA functions such as backups, user management, and replication, whereas recent research has pointed to the potential of Ansible in cloud environments and its capability to communicate with machine learning models for predictive database administration. This paper endeavors to fill these research gaps through an analysis of the application of Ansible in contemporary database administration, with emphasis on its scalability, predictive features, and its utility in multi-cloud environments. Existing literature indicates that Ansible's integration with diverse technologies—such as cloud platforms, version control systems, and artificial intelligence—enables the automation of increasingly sophisticated DBA functions, thereby providing not only operational efficiency but also a decrease in errors and system downtime. The findings indicate that although Ansible has proven to be effective in automating conventional DBA functions, more research on its predictive features and integration with cloud-native solutions is necessary in order to fully leverage its potential in the rapidly evolving field of database administration. This research underscores the necessity of ongoing innovation in automation tools such as Ansible to keep pace with the evolving needs of contemporary database environments.
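Ansible playbooks themselves are YAML; to keep the examples in this listing in one language, the sketch below shows only how a scheduler or CI job might invoke a hypothetical backup playbook. The inventory file, playbook name, and variable are placeholders.

```python
# Hedged sketch: drive a (hypothetical) Ansible backup playbook from Python.
import subprocess

def run_nightly_backup(inventory: str = "inventory.ini") -> None:
    subprocess.run(
        ["ansible-playbook", "-i", inventory, "db_backup.yml",
         "--extra-vars", "retention_days=14"],
        check=True,  # raise if any backup task fails
    )

if __name__ == "__main__":
    run_nightly_backup()
```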
30

Annam, Sri Nikhil. "Optimizing IT Infrastructure for Business Continuity." Stallion Journal for Multidisciplinary Associated Research Studies 1, no. 5 (2022): 31–42. https://doi.org/10.55544/sjmars.1.5.7.

Abstract:
IT infrastructure is critical to ensuring business continuity. This study describes how to optimize IT infrastructure for resilience, scalability, security, and cost-effectiveness. Based on current adoption trends in hybrid cloud, edge computing, and automation, the research outlines approaches for improving disaster recovery and fault-tolerant systems and for applying emerging technologies in AI and IoT. It gives concrete recommendations and emphasizes performance metrics for sustainable IT infrastructure planning. The study draws on current technical information, with tables and code snippets illustrating the optimization methods.
31

Hanuma, Suresh Kotha Naga Venkata. "Automated Infrastructure Provisioning: An Integrated Approach to Cloud Resource Management and Monitoring." European Journal of Computer Science and Information Technology 13, no. 34 (2025): 40–48. https://doi.org/10.37745/ejcsit.2013/vol13n344048.

Abstract:
The integration of DevOps practices with Infrastructure-as-Code principles has revolutionized cloud resource management through automated provisioning and monitoring solutions. This comprehensive automation framework addresses critical challenges in infrastructure deployment, security compliance, and operational efficiency across enterprise environments. By incorporating advanced monitoring capabilities and intelligent alert systems, the framework enables proactive incident prevention and rapid response to potential issues. The implementation demonstrates substantial improvements in deployment consistency, disaster recovery capabilities, and compliance adherence across multiple industries, particularly in healthcare and financial sectors. Through automated resource provisioning and configuration management, organizations have achieved significant reductions in operational overhead while maintaining stringent security standards. The framework's integration with specialized monitoring tools and security platforms ensures comprehensive visibility across cloud environments while automating routine tasks and enforcement of organizational policies. These advancements have proven particularly valuable in regulated industries where system availability and compliance are paramount to operational success.
32

Oluwafemi Oloruntoba. "Business continuity in database systems: The role of data guard and oracle streams." World Journal of Advanced Research and Reviews 22, no. 3 (2024): 2266–85. https://doi.org/10.30574/wjarr.2024.22.3.1756.

Abstract:
In today’s data-driven business landscape, ensuring business continuity in database systems is critical for maintaining operational resilience, preventing downtime, and safeguarding enterprise data. Disruptions caused by hardware failures, cyber threats, and system crashes can lead to significant financial losses, reputational damage, and regulatory non-compliance. Traditional backup strategies, while essential, often fail to provide real-time data availability and rapid failover mechanisms necessary for modern, high-availability environments. To address these challenges, organizations are increasingly leveraging Oracle Data Guard and Oracle Streams, two powerful technologies designed to enhance database redundancy, fault tolerance, and disaster recovery capabilities. Oracle Data Guard provides automated standby database management, ensuring real-time synchronization and failover between primary and secondary systems, minimizing data loss during outages. It supports both physical and logical replication models, offering high availability, disaster recovery, and data integrity. Meanwhile, Oracle Streams enables multi-directional replication, facilitating real-time data distribution, transformation, and conflict resolution across geographically dispersed systems. By integrating these technologies, businesses can establish a robust continuity strategy, ensuring seamless transaction consistency, load balancing, and minimal disruption during database failures. This study explores the comparative advantages, implementation strategies, and best practices for deploying Oracle Data Guard and Streams to achieve business continuity, optimize disaster recovery, and enhance database performance. Additionally, key challenges such as latency, data consistency, and security vulnerabilities are examined, along with emerging trends in AI-driven automation for database resilience. The findings provide valuable insights for IT managers, database administrators, and business leaders seeking to fortify their database infrastructure against operational disruptions and cyber threats.
33

Baladari, Venkata. "Cloud Resiliency Engineering: Best Practices for Ensuring High Availability in Multi-Cloud Architectures." International Journal of Science and Research (IJSR) 11, no. 6 (2022): 2062–67. https://doi.org/10.21275/SR220610115023.

Abstract:
Ensuring cloud resiliency through engineering is essential for maintaining high availability, fault tolerance, and disaster recovery within contemporary cloud infrastructures. As more businesses move towards multi-cloud environments, maintaining system reliability and efficiency while also controlling costs takes centre stage. This study delves into optimal strategies for bolstering cloud reliability via automated failover systems, real-time data duplication, load distribution, and self-restoring networks. The analysis focuses on strategies for disaster recovery, cost-effective resource management, and enhancing security resilience to minimize potential risks. The report draws attention to the difficulties involved in integrating multiple cloud systems, maintaining data consistency, and dealing with cyber threats. It also explores the development of new technologies like AI-powered automation, edge computing, and predictive analytics for identifying potential failures. The study offers valuable insights into how to optimally configure cloud infrastructure to achieve the highest levels of efficiency and dependability, and it considers future developments in autonomous cloud systems, quantum encryption, and eco-friendly computing models that may further enhance cloud robustness. This paper provides a detailed guide for companies seeking to construct reliable cloud infrastructure that maintains operational stability and reduces the frequency of service interruptions.
34

Venkata, Baladari. "Cloud Without Borders: Software Development Strategies for Multi-Regional Applications." European Journal of Advances in Engineering and Technology 9, no. 3 (2022): 193–200. https://doi.org/10.5281/zenodo.15044485.

Abstract:
Multi-region cloud applications have become crucial for delivering uninterrupted performance, dependability, and adherence to local regulatory requirements. Deploying applications across multiple cloud regions boosts availability, disaster recovery, and user experience but raises complexities including data synchronization, network optimization, security, and cost management. This research examines the responsibilities of software developers in designing and overseeing multi-region cloud-based applications. The main goal is to identify and examine the essential factors and most effective methods that developers should adhere to when constructing reliable, expandable, and high-performance cloud-based applications, along with approaches to tackle these complexities, such as Infrastructure as Code (IaC) for automating resource allocation, Continuous Integration and Continuous Deployment (CI/CD) for streamlined software updates, and global load balancing for efficient traffic management. The importance of artificial intelligence, automation, and edge computing in streamlining cloud operations is also emphasized. By implementing these strategies, companies can establish cloud applications that are scalable, secure, and highly effective, and that run smoothly across geographic locations.
APA, Harvard, Vancouver, ISO, and other styles
35

Tripathi, Shailja. "Determinants of Digital Transformation in the Post-Covid-19 Business World." IJRDO - Journal of Business Management 7, no. 6 (2021): 75–83. http://dx.doi.org/10.53555/bm.v7i6.4312.

Full text
Abstract:
Digital technologies are playing a major role in overcoming the impact of the Covid-19 pandemic in business and society. In other words, the pandemic also brings a significant opportunity for digital technologies. Organizations are focusing on managing and adopting strategies for business resiliency and recovery in the post-Covid-19 situation through digital transformation. Business resilience needs processes and people to maintain unremitting business operations. Digital transformation ensures business continuity and helps in running core business operations with resiliency and disaster recovery. This study develops a conceptual framework to understand the determinants of digital transformation in the post-Covid-19 business world. The determinants are categorized based on human, organizational, and technology-related factors: employee health and safety, virtual collaboration, remote working, business resilience and recovery, business process automation, technology readiness, and cybersecurity risks. This study contributes a conceptual framework of digital transformation for business during the post-Covid-19 period based on the inclusion of human, organizational, and technological factors.
APA, Harvard, Vancouver, ISO, and other styles
36

Sandhya, Guduru. "Automated Vulnerability Scanning & Runtime Protection for Docker/Kubernetes: Integrating Trivy, Falco, and OPA." Journal of Scientific and Engineering Research 6, no. 2 (2019): 216–20. https://doi.org/10.5281/zenodo.15234550.

Full text
Abstract:
Securing Docker and Kubernetes environments is a critical challenge. Automated vulnerability scanning and runtime protection are essential to mitigate security risks while maintaining performance and compliance. Tools like Trivy, Falco, and Open Policy Agent (OPA) provide a powerful, automated security framework for detecting vulnerabilities, monitoring runtime behavior, and enforcing security policies in Kubernetes environments. This paper explores the security challenges inherent in containerized deployments, highlighting common vulnerabilities, compliance gaps, and runtime threats. It evaluates the role of Trivy for continuous vulnerability scanning, Falco for real-time threat detection, and OPA for policy enforcement within Kubernetes clusters. Additionally, the study assesses infrastructure-as-code (IaC) frameworks such as Terraform for state management, Ansible for automated recovery, and AWS CloudFormation for disaster recovery automation. The integration of Chaos Engineering tools like Gremlin enables testing of recovery point objectives (RPO) and recovery time objectives (RTO) under real-world failure conditions, while real-time replication technologies like DRBD and Ceph enhance system resilience. We propose a comprehensive security framework that integrates these tools to create a robust, automated security posture for Kubernetes environments.
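
A minimal version of the scanning stage could look like the following Python sketch, which shells out to the Trivy CLI and gates a pipeline on HIGH/CRITICAL findings. It assumes Trivy is installed locally; the image name is a placeholder, and the flags shown are standard Trivy options.

    # Sketch: gating a CI step on a Trivy image scan.
    import json
    import subprocess
    import sys

    def scan_image(image: str) -> int:
        proc = subprocess.run(
            ["trivy", "image", "--format", "json",
             "--severity", "HIGH,CRITICAL", image],
            capture_output=True, text=True, check=True,
        )
        report = json.loads(proc.stdout)
        findings = sum(len(r.get("Vulnerabilities") or [])
                       for r in (report.get("Results") or []))
        print(f"{image}: {findings} HIGH/CRITICAL vulnerabilities")
        return findings

    if __name__ == "__main__":
        # Non-zero exit fails the pipeline when findings exist.
        sys.exit(1 if scan_image("registry.example.com/app:latest") else 0)
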
APA, Harvard, Vancouver, ISO, and other styles
37

Sreeja Reddy Challa. "Infrastructure as Code (IaC) in Cloud Migration: Enhancing Automation, Security and Scalability in AWS." World Journal of Advanced Research and Reviews 26, no. 2 (2025): 3304–14. https://doi.org/10.30574/wjarr.2025.26.2.1989.

Full text
Abstract:
This article examines the strategic implementation of Infrastructure as Code (IaC) methodologies in enterprise AWS cloud migrations, demonstrating how organizations can enhance automation, security, and scalability throughout their infrastructure lifecycle. Through analysis of implementation patterns across diverse industry sectors, the article identifies critical success factors for effective IaC adoption, including standardization through Git-based repositories, embedding security controls directly in templates, integration with CI/CD pipelines, and implementation of resilient scaling and disaster recovery mechanisms. The article combines qualitative assessment of organizational practices with quantitative metrics analysis to provide a comprehensive evaluation of IaC benefits, challenges, and emerging trends. The article reveals that organizations implementing mature IaC approaches achieve substantial improvements in deployment efficiency, configuration consistency, security posture, and operational costs while establishing foundations for future automation capabilities. The article further explores emerging technologies including AI-assisted template generation, automated drift detection, policy-as-code frameworks, and predictive scaling algorithms that represent the evolving frontier of infrastructure automation. These insights provide technology leaders with actionable guidance for implementing IaC as a strategic capability that aligns infrastructure management with broader digital transformation objectives while maintaining the governance and compliance requirements essential in enterprise environments.
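
As one concrete flavour of such an IaC pipeline, the sketch below drives a CloudFormation deployment from Python with boto3. The stack name, template file, and tag values are hypothetical.

    # Sketch: deploying a CloudFormation stack and waiting for completion.
    import boto3

    def deploy_stack(stack_name: str, template_body: str) -> None:
        cfn = boto3.client("cloudformation")
        cfn.create_stack(
            StackName=stack_name,
            TemplateBody=template_body,
            Capabilities=["CAPABILITY_NAMED_IAM"],
            Tags=[{"Key": "managed-by", "Value": "iac-pipeline"}],
        )
        # Block until the stack reaches CREATE_COMPLETE (or fails).
        cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
        print(f"{stack_name} created")

    if __name__ == "__main__":
        with open("network.yaml") as f:  # hypothetical template file
            deploy_stack("demo-network", f.read())
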
APA, Harvard, Vancouver, ISO, and other styles
38

Smith, Donald P. "Inspection Technologies and Crisis Management: Field Automation lessons from the field." International Oil Spill Conference Proceedings 2014, no. 1 (2014): 300246. http://dx.doi.org/10.7901/2169-3358-2014-1-300246.1.

Full text
Abstract:
The task of capturing accurate information from the field and sharing it with response teams, incident commanders, command posts, regional offices, and internal/external agencies in a timely manner is a goal that has been difficult to achieve, especially on large-scale events. Growing fiscal constraints necessitate that solutions be part of a stable, reusable system that is easily used by responders/inspectors and easily expanded when additional support and complexity become necessary. However, the various types of events/activities can be very challenging for both responders and inspectors. Government agencies are not always homogeneous; their regional branches may collect different details on the same object, leading to incompatibilities between regional information. If the various agencies engaged in data collection and dissemination do not standardize their efforts, an information meltdown occurs. Ultimately, the data that were collected and disseminated become untrustworthy and unreliable. In some cases, this can compromise enforcement actions. Once information is collected, it also requires processing and distribution to different internal, external, non-profit, and private agencies. Recent mobile technologies available through smart phones and tablets offer solutions that allow quick customization, scalability, and low-cost alternatives for data collection/dissemination. The concept of ‘Bring Your Own Device’ (BYOD) helps agencies utilize existing equipment/infrastructure, standardize policies, and minimize training/software needs. These technologies can be utilized during all phases of Disaster Management – mitigation, preparedness, response, and recovery – as well as day-to-day inspection activities. This shifting paradigm offers opportunities for all field-deployed personnel to share data in a near real-time environment. Over the past few decades, we have been actively involved in developing field inspection applications and establishing an infrastructure for Disaster Response/inspection programs that has allowed EPA to successfully manage field data collection. This poster intends to share these efforts and to demonstrate their effectiveness. The following is a list of issues that will be covered by the application of field automation: cost sharing, rapid application deployment, ease of training, Internet sharing, data quality, and data dissemination.
APA, Harvard, Vancouver, ISO, and other styles
39

Gaurav Malik. "Business Continuity & Incident Response." Journal of Information Systems Engineering and Management 10, no. 45s (2025): 451–73. https://doi.org/10.52783/jisem.v10i45s.8891.

Full text
Abstract:
Today’s businesses must contend with cybersecurity threats that continue to grow in a connected world, and they require sound business continuity (BC) and incident response (IR) strategies. This paper discusses the importance of BC and IR in an organization’s cybersecurity governance framework and, ultimately, in operational resilience and speedy turnaround in response to disruptive events. IR focuses on cyber incident management, while BC addresses the organization’s capacity to operate during and after a disruption and to sustain key critical functions. Integrating frameworks such as NIST or ISO 27001 allows organizations to quantify and control the challenges posed by cybersecurity risks. Crucial topics of BC include responding to cyberattacks and natural disasters, disaster recovery, establishing crisis management, and contingency planning. The article highlights the rising demand for cybersecurity governance to ensure that security activities align with the organization’s objectives. Trends such as AI and automation are reshaping business continuity and incident response practices as they develop over time. This article advocates for proactive planning, regulatory compliance, and continual improvement that help organizations guard against evolving cyber threats and secure themselves for the future.
APA, Harvard, Vancouver, ISO, and other styles
40

Murali Natti. "Enhancing PostgreSQL Availability with Auto Failover: Implementing repmgr to Achieve Seamless Database Recovery." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 11, no. 1 (2025): 1097–101. https://doi.org/10.32628/cseit251112102.

Full text
Abstract:
In today’s high-availability environments, ensuring minimal database downtime is critical for maintaining uninterrupted business operations and ensuring customer satisfaction. For businesses relying on PostgreSQL, manual failover procedures—typically used to handle primary node failures—are often slow, error-prone, and require significant time for detection, diagnosis, and resolution. These delays can result in service disruptions ranging from 30 minutes to several hours, impacting both system reliability and user experience. As organizations increasingly depend on real-time data processing and mission-critical applications, the need for a more efficient, reliable, and automated failover mechanism has never been greater. This paper explores the impact of automating the failover process using repmgr, an open-source tool specifically designed to enhance PostgreSQL replication [10] and high availability. By implementing auto failover with repmgr, we were able to transform our organization’s failover strategy, reducing recovery time from several hours to just 30 seconds, even during complex failure events. This automation not only minimizes downtime but also ensures a faster, more consistent recovery process, which is crucial for maintaining high availability in modern enterprise environments. Finally, this paper emphasizes the broader benefits of automated failover for PostgreSQL environments, including improved disaster recovery, reduced operational costs, and greater scalability. By eliminating the need for manual intervention and reducing the potential for human error, repmgr offers a robust solution for maintaining high system uptime and ensuring seamless transitions during failover events. Through a detailed case study and performance metrics, we demonstrate how automated failover can drastically improve PostgreSQL database availability and resilience, empowering organizations to meet the high demands of today’s fast-paced business landscape.
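
To make the cluster-state side of this concrete, the sketch below reads repmgr's metadata from Python so an operator can watch role changes around an automatic failover. It assumes the standard repmgr.nodes schema created by the repmgr extension; the connection string is a placeholder.

    # Sketch: listing repmgr cluster nodes and their liveness.
    import psycopg2

    def show_cluster(conninfo: str) -> None:
        with psycopg2.connect(conninfo) as conn:
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT node_id, node_name, type, active "
                    "FROM repmgr.nodes ORDER BY node_id"
                )
                for node_id, name, node_type, active in cur.fetchall():
                    state = "up" if active else "down"
                    print(f"node {node_id} {name}: {node_type} ({state})")

    if __name__ == "__main__":
        show_cluster("host=db1 dbname=repmgr user=repmgr")
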
APA, Harvard, Vancouver, ISO, and other styles
41

Lilechi, Melvine Chepkoech. "Integrating Resilience, Agility, and Sustainability into Future Validated Supply Chains." Journal of Global Economy, Business and Finance 7, no. 1 (2025): 47–48. https://doi.org/10.53469/jgebf.2025.07(01).11.

Full text
Abstract:
Futuristic supply chain officers now have a unique opportunity to future-proof their supply chains by recognizing and integrating three new priorities: resilience, agility, and sustainability. Traditionally focused on cost, capital, quality, and service, supply chains must now also address the challenges of disruption, meet rapidly evolving customer needs, and support a clean, socially just economy. Historical examples, such as Toyota's rapid recovery post-disaster and Nike's technological advancements during the COVID-19 pandemic, illustrate the importance of these new priorities. Agile supply chains leverage digital technologies, smart automation, and skilled, flexible teams to enhance responsiveness and adaptability. Successful integration of these priorities requires a foundational redesign of supply chain strategies, with a shift in mindset towards comprehensive risk management, agility, and sustainability metrics alongside traditional performance indicators.
APA, Harvard, Vancouver, ISO, and other styles
42

Clement Praveen Xavier Pakkam Isaac. "Cloud digital twins: Redefining enterprise infrastructure management with predictive analytics and automation." World Journal of Advanced Engineering Technology and Sciences 15, no. 1 (2025): 1496–515. https://doi.org/10.30574/wjaets.2025.15.1.0341.

Full text
Abstract:
Cloud Digital Twins (CDTs) represent a paradigm shift in enterprise infrastructure management, offering organizations a revolutionary approach to simulate, optimize, and automate complex multi-cloud and hybrid environments. This comprehensive framework creates AI-powered virtual replicas of cloud infrastructure that mirror the behavior, configuration, and performance characteristics of production systems. Through a three-tier architecture encompassing Infrastructure Digital Twins, Policy Digital Twins, and Operational Digital Twins, organizations can anticipate system failures, optimize resource allocation, conduct pre-deployment impact analysis, and simulate disaster recovery scenarios without risking live environments. The significance of this model lies in its novel integration of technical, governance, and operational aspects into a unified framework, addressing the full spectrum of cloud management challenges that traditional approaches handle in isolation. The implementation methodology follows a structured approach: environment assessment, twin creation, integration with existing workflows, and continuous improvement. While offering significant operational benefits, CDTs introduce new security considerations around data synchronization and model drift prevention. As the technology matures, future directions include cross-provider optimization, autonomous operations through reinforcement learning, and edge-to-cloud continuity for unified management of distributed infrastructure. CDTs are becoming essential components of cloud governance strategies for enterprises seeking enhanced resilience, compliance, and operational efficiency.
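
The following toy sketch gestures at the Infrastructure Digital Twin tier described above: a mirrored inventory that can rehearse a region outage offline. All class and service names are invented for illustration, and real CDT platforms are, of course, far richer than this.

    # Conceptual sketch only: a toy infrastructure twin for offline
    # what-if analysis. Names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class ServiceTwin:
        name: str
        region: str
        replicas: int

    @dataclass
    class InfrastructureTwin:
        services: list = field(default_factory=list)

        def simulate_region_outage(self, region: str) -> list:
            # Report which services would lose all capacity if `region` failed.
            return [s.name for s in self.services
                    if s.region == region and s.replicas > 0]

    twin = InfrastructureTwin([
        ServiceTwin("checkout", "us-east-1", 3),
        ServiceTwin("search", "eu-west-1", 2),
    ])
    print("at risk in us-east-1:", twin.simulate_region_outage("us-east-1"))
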
APA, Harvard, Vancouver, ISO, and other styles
43

Kandula, Nagababu. "Innovative Fabrication of Advanced Robots Using The Waspas Method A New Era In Robotics Engineering." International Journal of Robotics and Machine Learning Technologies 1, no. 1 (2025): 1–13. https://doi.org/10.55124/ijrml.v1i1.235.

Full text
Abstract:
Introduction: The fabrication of advanced robots represents a pivotal intersection of cutting-edge materials science, artificial intelligence, and innovative manufacturing techniques. These robots are designed to perform complex tasks autonomously, from industrial automation and healthcare assistance to space exploration and disaster response. With breakthroughs in AI, 3D printing, and nanotechnology, modern robots are becoming more intelligent, agile, and capable than ever before. However, the rise of these machines also raises important questions about societal impacts, ethical considerations, and job displacement. The ongoing advancements in robot fabrication promise to reshape industries and redefine the role of automation in human life.
Research significance: The significance of research in the fabrication of advanced robots lies in its transformative potential across numerous sectors. It drives innovation in automation, improving efficiency and precision in industries like manufacturing, healthcare, and logistics. Advanced robots can address complex societal challenges, such as providing personalized healthcare, performing dangerous tasks, and enhancing disaster recovery. Research also enables the integration of cutting-edge technologies like AI, nanotechnology, and materials science, pushing the boundaries of robotics capabilities. Furthermore, it tackles ethical, social, and economic implications, guiding responsible innovation to ensure positive societal impact while mitigating job displacement and other risks.
Methodology: The fabrication of advanced robots involves a multi-disciplinary methodology combining materials science, manufacturing techniques, and artificial intelligence (AI). The process starts with designing robot structures using lightweight, durable materials like composites and metals. Additive manufacturing (3D printing) and precision machining are employed to create complex components. Sensors, actuators, and processors are integrated to enable movement and functionality. AI and machine learning models are embedded for autonomous decision-making, adapting robot behaviors to dynamic environments. Testing and iterative prototyping ensure performance, reliability, and safety. Finally, robots undergo optimization for energy efficiency, user interaction, and task-specific capabilities in their intended applications.
Alternatives: R-Alpha, R-Beta, R-Gamma, R-Delta, R-Epsilon.
Evaluation criteria: Precision (B1), Speed (B2), Durability (B3), Energy Efficiency (B4).
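
To make the scoring step of the WASPAS method tangible, the sketch below applies the standard formulation Q = l*WSM + (1-l)*WPM to the alternatives and criteria named in the abstract. The decision-matrix values, weights, and lambda are invented for illustration only; only the method itself follows the paper's topic.

    # Worked WASPAS sketch with invented scores and weights.
    ALTS = ["R-Alpha", "R-Beta", "R-Gamma", "R-Delta", "R-Epsilon"]
    CRITERIA = ["Precision", "Speed", "Durability", "Energy Efficiency"]
    WEIGHTS = [0.3, 0.25, 0.25, 0.2]   # assumed weights, sum to 1
    MATRIX = [                          # assumed benefit-type scores
        [80, 70, 90, 60],
        [75, 85, 70, 80],
        [90, 60, 80, 70],
        [65, 90, 75, 85],
        [85, 80, 65, 75],
    ]
    LAMBDA = 0.5

    # Normalize each benefit criterion by its column maximum.
    col_max = [max(row[j] for row in MATRIX) for j in range(len(CRITERIA))]
    norm = [[row[j] / col_max[j] for j in range(len(CRITERIA))] for row in MATRIX]

    for name, row in zip(ALTS, norm):
        wsm = sum(w * x for w, x in zip(WEIGHTS, row))   # weighted sum model
        wpm = 1.0
        for w, x in zip(WEIGHTS, row):                   # weighted product model
            wpm *= x ** w
        q = LAMBDA * wsm + (1 - LAMBDA) * wpm
        print(f"{name}: Q = {q:.4f}")

The alternative with the highest Q would be ranked first; varying LAMBDA trades off the additive and multiplicative aggregation.
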
APA, Harvard, Vancouver, ISO, and other styles
44

Varshini Choudary Nuvvula. "Scaling Cloud-Based Transaction Systems: How Modern Architectures Handle Growing Demand." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 6 (2024): 1427–38. https://doi.org/10.32628/cseit241061189.

Full text
Abstract:
This comprehensive article examines how modern cloud architectures handle the scaling challenges of transaction-based systems in today's digital economy. It explores the evolution from traditional monolithic architectures to sophisticated cloud-based solutions, analyzing various scaling approaches, including vertical scaling, horizontal scaling, and autoscaling mechanisms. The article investigates key components such as load balancing, distributed databases, and implementation strategies that enable organizations to maintain high performance and reliability under growing demands. Through a detailed examination of real-world implementations, the article demonstrates how advanced cloud architectures leverage intelligent automation, predictive analytics, and distributed processing to overcome traditional scaling limitations. Special attention is given to the financial technology and e-commerce sectors, where system performance directly impacts business success. The article also addresses critical aspects of fault tolerance, disaster recovery, and cost optimization in scaled cloud environments.
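
As one concrete autoscaling mechanism of the kind discussed, the sketch below attaches a target-tracking scaling policy to an AWS Auto Scaling group via boto3; the group name and CPU target are placeholders.

    # Sketch: target-tracking autoscaling on average CPU utilization.
    import boto3

    def attach_cpu_target_tracking(group: str, target_cpu: float) -> None:
        asg = boto3.client("autoscaling")
        asg.put_scaling_policy(
            AutoScalingGroupName=group,
            PolicyName="cpu-target-tracking",
            PolicyType="TargetTrackingScaling",
            TargetTrackingConfiguration={
                "PredefinedMetricSpecification": {
                    "PredefinedMetricType": "ASGAverageCPUUtilization"
                },
                "TargetValue": target_cpu,
            },
        )

    if __name__ == "__main__":
        attach_cpu_target_tracking("txn-api-asg", 55.0)
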
APA, Harvard, Vancouver, ISO, and other styles
45

Nagarajan, Fnu. "Ensuring Platform Reliability and Scaling Customer Support Infrastructure in Ride-Hailing Services." International Journal of Multidisciplinary Research and Growth Evaluation 5, no. 2 (2024): 1028–30. https://doi.org/10.54660/.ijmrge.2024.5.2.1028-1030.

Full text
Abstract:
The rapid expansion of ride-hailing services has led to significant challenges in maintaining efficient and scalable customer support systems. Ensuring platform reliability is critical to handling surges in customer support requests, particularly during peak hours, major events, and service disruptions. This paper explores the methodologies, technologies, and frameworks that enable ride-hailing platforms to scale customer support while maintaining service reliability. Case studies from leading ride-hailing companies, including Lyft, Uber, Grab, Didi, and Bolt, demonstrate best practices in implementing artificial intelligence (AI), machine learning (ML), predictive analytics, and automation to optimize support operations. The paper further examines the impact of surge pricing models on support demand, the role of AI-powered chatbots in enhancing customer experience, and strategies for managing workforce scalability. A discussion on infrastructure resilience and disaster recovery strategies provides insights into maintaining operational efficiency under high-demand conditions. The research concludes with future trends in AI-driven customer support and recommendations for ensuring sustained scalability and reliability in the ride-hailing industry.
APA, Harvard, Vancouver, ISO, and other styles
46

Krishna Anumula. "Enhancing Oracle Database Migration with Cutting-Edge Techniques." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 11, no. 2 (2025): 1022–34. https://doi.org/10.32628/cseit25112424.

Full text
Abstract:
This article presents a comprehensive framework for large-scale Oracle database migrations, focusing on enterprise environments with complex requirements exceeding traditional migration approaches. Through examination of advanced techniques, including Oracle GoldenGate real-time replication, optimized RMAN backup strategies, and cascaded standby methodologies for databases exceeding 65TB, we establish practical patterns for successful implementations. The article addresses critical challenges, including SAN to ASM transitions, high availability preservation during migration, and robust data integrity validation techniques. We further explore automation frameworks that enhance migration reliability while reducing operational overhead, alongside security considerations essential for maintaining regulatory compliance throughout the transition process. Performance analysis demonstrates that properly executed migrations yield measurable improvements in both system performance and operational resilience, particularly when implementing structured disaster recovery testing methodologies. By integrating technical excellence with business continuity planning, organizations can achieve successful migrations while minimizing risk and disruption. The article's findings provide database administrators and system architects with actionable strategies applicable across diverse industry sectors undertaking large-scale database modernization initiatives.
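
For the backup-strategy side, a Python wrapper around a standard RMAN incremental backup might look like the sketch below. The RMAN commands themselves are standard; feeding rman over stdin and the OS-authenticated connect string are assumptions for illustration, not the article's procedure.

    # Sketch: running an RMAN level-1 incremental backup from Python.
    import subprocess

    RMAN_SCRIPT = """
    CONNECT TARGET /
    BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;
    DELETE NOPROMPT OBSOLETE;
    EXIT;
    """

    def run_incremental_backup() -> None:
        proc = subprocess.run(
            ["rman"], input=RMAN_SCRIPT, text=True,
            capture_output=True, check=True,
        )
        print(proc.stdout)

    if __name__ == "__main__":
        run_incremental_backup()
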
APA, Harvard, Vancouver, ISO, and other styles
47

Myrhodskyy, A. V., O. V. Romanyuk, O. N. Romanyuk, and N. V. Titova. "Development of a high availability method for configuration management software." Optoelectronic Information-Power Technologies 46, no. 2 (2023): 64–75. http://dx.doi.org/10.31649/1681-7893-2023-46-2-64-75.

Full text
Abstract:
The article proposes its own method of providing high availability for configuration management software. The current state of the electronic resources management sphere was examined, and the reasons for the use of automation tools were provided. The advantages of using configuration management software were analyzed, and examples were given of using Infrastructure as Code and GitOps approaches to automate the deployment and scaling of electronic resources. The existing methods of ensuring high availability were analyzed, and a new method was then developed. The resulting method of providing high availability is based on the Raft consensus algorithm and the software-system clustering approach, and it extends them with the authors' own solutions. The algorithm of the proposed method was developed, and the resulting flowchart and individual steps of its implementation were described in detail. The efficiency of the developed method was evaluated via an a priori ranking of factors that assess the effectiveness of automatic recovery strategies and methods. The analysis of the results shows that the proposed method implements the factors most important to experts and, in terms of RTO and RPO, can work on a par with existing popular disaster recovery strategies.
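
A greatly simplified sketch of the Raft-style failure detection underlying such a method is shown below: a follower that starts an election when the leader's heartbeats stop. Real Raft adds terms, voting, and log replication; the timings here are illustrative, not taken from the article.

    # Simplified Raft-style heartbeat timeout; illustrative only.
    import random
    import time

    class FollowerNode:
        def __init__(self) -> None:
            self.last_heartbeat = time.monotonic()
            # Randomized timeout prevents simultaneous candidacies.
            self.election_timeout = random.uniform(0.15, 0.30)

        def on_heartbeat(self) -> None:
            self.last_heartbeat = time.monotonic()

        def leader_alive(self) -> bool:
            return time.monotonic() - self.last_heartbeat < self.election_timeout

        def tick(self) -> None:
            if not self.leader_alive():
                print("heartbeat timeout -> becoming candidate, requesting votes")
                # A full implementation would increment the term and solicit
                # votes from a majority before taking over as leader.
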
APA, Harvard, Vancouver, ISO, and other styles
48

Yenugula, Manideep, Sushil Kumar Sahoo, and Shankha Shubhra Goswami. "Cloud computing in supply chain management: Exploring the relationship." Management Science Letters 13, no. 3 (2023): 193–210. http://dx.doi.org/10.5267/j.msl.2023.4.003.

Full text
Abstract:
This research study addresses the advantages and difficulties of Cloud Computing (CC) in Supply Chain Management (SCM). An overview of the current state of SCM and the difficulties businesses in this sector confront is presented at the beginning of the article. It then explores how cloud-based solutions can address these challenges, such as through the use of real-time data analytics, collaborative platforms, and intelligent automation. Additionally, the paper investigates the potential risks and challenges associated with cloud-based SCM, including data security and privacy concerns, vendor lock-in, and the need for robust disaster recovery plans. To provide a comprehensive understanding of the topic, the paper includes a case study that illustrates how a company successfully implemented cloud-based SCM solutions to improve their operations. The paper concludes by highlighting the key takeaways and insights from the research, and by identifying potential future directions for research in this field. Overall, this study delivers insightful information about the function of CC in SCM and offers useful suggestions for companies looking to use this technology to enhance their supply chain operations.
APA, Harvard, Vancouver, ISO, and other styles
49

Diwan, Piyush Dhar. "Multi-Region Networking and Global Traffic Management for AWS." International Journal of Network and Communication Research 9, no. 1 (2025): 36–51. https://doi.org/10.37745/ijncr.16/vol9n13651.

Full text
Abstract:
Multi-region cloud networking on AWS has become essential for organizations building resilient, high-performance applications with global reach. This article explores the architecture, benefits, and implementation strategies for creating robust multi-region deployments on AWS. By distributing workloads and data across geographically diverse locations, businesses can enhance availability, reduce latency, ensure regulatory compliance, and strengthen disaster recovery capabilities. The article examines AWS's global networking foundation, including CloudWAN, Transit Gateway, and Global Accelerator, which form the backbone for multi-region architectures. It discusses global traffic management strategies through Route 53 Traffic Flow and CloudFront content delivery. The challenges of data consistency and replication are addressed through various database replication options and S3 Cross-Region Replication. The article emphasizes the importance of automation and observability through infrastructure as code and comprehensive health monitoring. Finally, it outlines best practices for implementing effective multi-region architectures, including establishing clear regional boundaries, implementing consistent tagging, centralizing identity management, designing for eventual consistency, testing failover scenarios, monitoring cross-region metrics, and optimizing for cost efficiency.
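
As a concrete instance of the failover-routing pattern described, the sketch below upserts a PRIMARY/SECONDARY record pair in Route 53 with boto3. The hosted zone ID, domain, IP addresses, and health-check ID are placeholders.

    # Sketch: DNS failover pair in Route 53.
    import boto3

    def upsert_failover_records(zone_id: str, name: str,
                                primary_ip: str, secondary_ip: str,
                                health_check_id: str) -> None:
        r53 = boto3.client("route53")
        changes = [
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": name, "Type": "A", "TTL": 60,
                "SetIdentifier": "primary", "Failover": "PRIMARY",
                "HealthCheckId": health_check_id,
                "ResourceRecords": [{"Value": primary_ip}]}},
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": name, "Type": "A", "TTL": 60,
                "SetIdentifier": "secondary", "Failover": "SECONDARY",
                "ResourceRecords": [{"Value": secondary_ip}]}},
        ]
        r53.change_resource_record_sets(
            HostedZoneId=zone_id, ChangeBatch={"Changes": changes})

    if __name__ == "__main__":
        upsert_failover_records("Z123EXAMPLE", "app.example.com.",
                                "198.51.100.10", "203.0.113.20", "hc-1234")

Route 53 serves the primary record while its health check passes and fails over to the secondary automatically when it does not.
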
APA, Harvard, Vancouver, ISO, and other styles
50

Shamshuddin Shaik. "SAP on AWS in Education: Transforming Digital Learning Environments." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 11, no. 1 (2025): 1560–68. https://doi.org/10.32628/cseit251112168.

Full text
Abstract:
Integrating SAP Enterprise Resource Planning (ERP) systems with Amazon Web Services (AWS) is transforming how educational institutions operate and deliver knowledge. This article discusses how cloud-based infrastructure benefits educational organizations in terms of scalability, cost optimization, and robust disaster recovery strategies. It assesses how SAP on AWS supports automation of administrative work, data-driven decision-making, and personalized learning experiences. The article addresses critical security and compliance considerations indispensable for protecting sensitive educational data while maintaining regulatory adherence. It covers best practices for implementation, from architecture planning through performance-optimization strategy to change-management approach. Finally, the article analyses the future scope of educational technology, focusing on the emerging themes of artificial intelligence, the Internet of Things, blockchain, and extended reality that are reshaping educational landscapes. The article provides insight into how educational institutions can use these technological advancements to enhance operational efficiency while maintaining security and compliance in an increasingly digital educational environment.
APA, Harvard, Vancouver, ISO, and other styles