Journal articles on the topic 'ML-Enhanced Encryption'

Consult the top 45 journal articles for your research on the topic 'ML-Enhanced Encryption.'


1

Researcher. "AI/ML for Data Privacy and Encryption in Cloud Computing." International Journal of Research in Computer Applications and Information Technology (IJRCAIT) 7, no. 2 (2024): 27–43. https://doi.org/10.5281/zenodo.13324415.

Full text
Abstract:
As cloud computing becomes increasingly pervasive, ensuring data privacy and security remains a critical concern. Artificial intelligence (AI) and machine learning (ML) offer promising solutions for enhancing data privacy and developing advanced encryption techniques in cloud environments. This review article explores how AI and ML are applied to improve data privacy, including the development of intelligent encryption methods, privacy-preserving algorithms, and automated data protection mechanisms. We examine various approaches, such as homomorphic encryption, secure multi-party computation, and differential privacy, and assess their integration with AI/ML technologies.  The article provides an overview of current research, evaluates the effectiveness of different techniques, and discusses the trade-offs involved. It concludes with a discussion on future trends and potential areas for further research in leveraging AI/ML for data privacy and encryption in cloud computing.
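Among the techniques this review covers, differential privacy is the most compact to illustrate. The sketch below is not drawn from the article itself; the dataset, bounds, and epsilon values are hypothetical. It releases an epsilon-differentially-private mean by clipping values and adding Laplace noise scaled to the query's sensitivity:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, lo, hi, epsilon):
    """Differentially private mean of values clipped to [lo, hi].

    Clipping caps each record's influence, so the sensitivity of the mean
    is (hi - lo) / n; Laplace noise with scale sensitivity / epsilon then
    gives epsilon-DP.
    """
    n = len(values)
    clipped = [min(max(v, lo), hi) for v in values]
    true_mean = sum(clipped) / n
    scale = (hi - lo) / (n * epsilon)
    return true_mean + laplace_noise(scale)
```

Smaller epsilon adds more noise and thus stronger privacy; the clipping bounds are what fix the sensitivity, which is the trade-off between utility and protection the abstract alludes to.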
APA, Harvard, Vancouver, ISO, and other styles
2

Mohamed, Tasnem Magdi Hassin, Bander Ali Saleh Al-rimy, and Sultan Ahmed Almalki. "A Ransomware Early Detection Model based on an Enhanced Joint Mutual Information Feature Selection Method." Engineering, Technology & Applied Science Research 14, no. 4 (2024): 15400–15407. http://dx.doi.org/10.48084/etasr.7092.

Full text
Abstract:
Crypto ransomware attacks pose a significant threat by encrypting users' data and demanding ransom payments, causing permanent data loss if not detected and mitigated before encryption occurs. The existing studies have faced challenges in the pre-encryption phase due to elusive attack patterns, insufficient data, and the lack of comprehensive information, often confusing the current detection techniques. Selecting appropriate features that effectively indicate an impending ransomware attack is a critical challenge. This research addresses this challenge by introducing an Enhanced Joint Mutual Information (EJMI) method that effectively assigns weights and ranks features based on their relevance while conducting contextual data analysis. The EJMI method employs a dual ranking system—TF for crypto APIs and TF-IDF for non-crypto APIs—to enhance the detection process and select the most significant features for training various Machine Learning (ML) classifiers. Furthermore, grid search is utilized for optimal classifier parameterization, aiming to detect ransomware efficiently and accurately in its pre-encryption phase. The proposed EJMI method has demonstrated a 4% improvement in detection accuracy compared to previous methods, highlighting its effectiveness in identifying and preventing crypto-ransomware attacks before data encryption occurs.
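The dual ranking the abstract describes (raw TF for crypto APIs, TF-IDF for everything else) can be sketched as follows. The API names and traces are hypothetical, and this shows only the weighting idea, not the full EJMI feature-selection method:

```python
import math
from collections import Counter

# Hypothetical subset of Windows crypto API names.
CRYPTO_APIS = {"CryptEncrypt", "CryptGenKey", "BCryptEncrypt"}

def dual_rank(traces):
    """Rank API calls across process traces: crypto APIs are weighted by
    raw term frequency, all other APIs by TF-IDF."""
    n_docs = len(traces)
    df = Counter()                      # document frequency per API
    for trace in traces:
        df.update(set(trace))
    scores = {}
    for trace in traces:
        tf = Counter(trace)
        for api, count in tf.items():
            t = count / len(trace)
            if api in CRYPTO_APIS:
                weight = t                              # TF only
            else:
                weight = t * math.log(n_docs / df[api])  # TF-IDF
            scores[api] = max(scores.get(api, 0.0), weight)
    return sorted(scores, key=scores.get, reverse=True)
```

Because crypto APIs skip the IDF discount, a call like `CryptEncrypt` stays highly ranked even when it appears in many traces, matching the intuition that crypto activity is the key pre-encryption signal.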
3

Chauhan, Guman Singh, and Rahul Jadon. "AI and ML-Powered CAPTCHA and advanced graphical passwords: Integrating the DROP methodology, AES encryption and neural network-based authentication for enhanced security." World Journal of Advanced Engineering Technology and Sciences 1, no. 1 (2020): 121–32. https://doi.org/10.30574/wjaets.2020.1.1.0027.

Full text
Abstract:
Background Information: Advanced automated attacks and unauthorized access are frequently not prevented by traditional CAPTCHA and password procedures. Combining encryption, graphical passwords, AI, and ML provides a strong solution to today's cybersecurity issues, improving security and usability. Objective: To create a thorough multi-layered authentication system that efficiently combats advanced cyberthreats by integrating AI-powered CAPTCHA, graphical passwords using the DROP approach, AES encryption, and neural network-based authentication. Methods: The solution incorporates neural networks for behavioral analysis and real-time threat detection, graphical passwords based on DROP for dynamic engagement, AES encryption for safe data transport, and AI-driven CAPTCHA for human verification. Results: The suggested approach outperforms conventional techniques in terms of speed, accuracy, and resistance to automated and brute-force attacks, achieving 96.8% accuracy, a false positive rate of 0.01%, and a security level of 9.5. Conclusion: The multi-layered strategy greatly improves authentication security, effectively thwarting sophisticated cyberthreats while maintaining a flawless user experience, which qualifies it for high-security settings.
4

Gowda, V. Dankan, Shivoham Singh, Pullela SVVSR Kumar, Krishna Kant Dave, Hemant Kothari, and T. Thiruvenkadam. "Optimizing homomorphic encryption for machine learning operations in cloud computing." Journal of Information and Optimization Sciences 46, no. 4-A (2025): 915–25. https://doi.org/10.47974/jios-1817.

Full text
Abstract:
This paper establishes how the integration of machine learning operations (MLO) into cloud computing has greatly enhanced data processing and analysis. However, data privacy and security have remained difficult to achieve. The paper presents a new approach to enhancing homomorphic encryption for MLO processes carried out in cloud systems. The proposed strategy is effective in restoring computational speed while still achieving data protection. The experimental assessment shows that the techniques covered in this paper reduce data-processing time and resource consumption while maintaining the authenticity of the encrypted information. This work therefore advances the subject of secure cloud ML by offering an efficient solution for computation over outsourced encrypted data.
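The additive property that makes homomorphic encryption attractive for outsourced ML aggregation can be seen in a toy Paillier scheme. This is an illustration only, not the paper's optimized construction: the primes below are far too small for real use, where 1024-bit-plus primes are required.

```python
import math

# Toy Paillier keypair with tiny fixed primes (illustration only).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse, Python 3.8+

def encrypt(m, r):
    # r must be coprime with n; callers pass fixed r values for determinism.
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic addition: multiplying ciphertexts adds the plaintexts,
# so a cloud host can sum encrypted values it cannot read.
c = (encrypt(20, 17) * encrypt(22, 23)) % n2
assert decrypt(c) == 42
```

The cost of that `pow(..., n2)` arithmetic on large moduli is exactly the computational overhead the paper's optimization targets.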
5

Gujarathi, Yash, and Yash Potekar. "Machine Learning in Network Traffic Analysis: Classification, Optimization, and Security." International Journal for Research in Applied Science and Engineering Technology 13, no. 4 (2025): 455–59. https://doi.org/10.22214/ijraset.2025.68216.

Full text
Abstract:
With the rapid expansion of digital networks, ensuring efficient and secure network traffic management has become a significant challenge. Traditional rule-based approaches struggle to handle evolving traffic patterns, particularly with the increasing use of encryption. Machine learning (ML) has emerged as a powerful alternative, providing enhanced capabilities for traffic classification, anomaly detection, and optimization. This paper presents a comprehensive review of ML-based techniques, including supervised learning, unsupervised learning, deep learning, and graph-based learning. Key challenges such as data imbalance, real-time processing, and computational overhead are explored. The study consolidates findings from multiple research papers, emphasizing the role of AI-driven models in improving cybersecurity, traffic prediction, and quality of service (QoS). Future research directions include hybrid models, federated learning, and the integration of ML with emerging networking paradigms such as Software-Defined Networking (SDN) and 5G.
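As a minimal instance of the supervised traffic classifiers the review surveys, a k-nearest-neighbour vote over two flow features might look like the sketch below; the feature choice and data are illustrative, not taken from any cited paper:

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_classify(sample, training, k=3):
    """Label a flow by majority vote among its k nearest labelled flows."""
    nearest = sorted(training, key=lambda t: euclidean(sample, t[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Hypothetical labelled flows: (mean packet size in bytes, inter-arrival ms).
flows = [
    ((1400, 2), "video"), ((1350, 3), "video"), ((1450, 1), "video"),
    ((90, 40), "chat"), ((110, 55), "chat"), ((80, 60), "chat"),
]
```

Classifiers of this kind work on statistical flow features rather than payload bytes, which is why they remain usable when the traffic itself is encrypted.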
6

Kalaiselvi, R. Caroline, and M. Suriakala. "Machine Learning-Based Reconnaissance Bee Colony with Custom AES & AE for Robust IoT Data Security and Transmission." Indian Journal Of Science And Technology 17, no. 46 (2024): 4828–41. https://doi.org/10.17485/ijst/v17i46.3644.

Full text
Abstract:
Objectives: To propose a robust method to enhance data security and transmission in IoT to address the challenges in real-time data transmission and create a comprehensive security solution. The new method leverages a machine learning-based bio-inspired approach to minimize data loss and maximize the end-to-end data transmission rate. Methods: Custom-AES with Avalanche Effect (AE) and Machine Learning-based Reconnaissance Bee Colony are employed to secure the data with enhanced encryption by creating a multi-layered IoT security solution. Key stretching is used to provide high resistance to cryptographic attacks. In order to optimize threat detection, Pareto-based selection and dynamic parameter tuning with ML-RBC are used. To improve attack resilience, rank and tournament selection methods are employed to target DDoS and MMIT attacks with a minimized encryption time. The NSL-KDD IoT network dataset is used to test the proposed CAES-AE with the ML-RBC model using a network simulator. To assess the performance of the proposed work, the results are compared with prevailing IoT data transmission techniques such as ERSA, QoS-BFT, AES, Custom AES, and RBC. Findings: The suggested CAES-AE with ML-RBC data security and transmission method attains promising results of a 97% authentication success rate, 6 seconds response time to security actions, 96% vulnerability detection speed, 97% data encryption strength, 5 seconds incident response time, and 97.8% network performance (energy efficiency & lifetime), which is higher than earlier methods. Novelty: This research provides a secured data transmission method in an IoT environment to address the real-time data transmission challenges faced in sensor-based networks. This will function as a multi-layered IoT security model that optimizes real-time attack detection and resilience. The performance metrics are evaluated with multiple packet transfer rates to boost the network performance.
Keywords: Machine Learning, Advanced Networking, Data Security, Secured Data Transmission, Bio­Inspired Algorithm
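The Avalanche Effect the authors build into their custom AES is conventionally quantified as the fraction of output bits that flip when a single input bit changes, with a strong primitive flipping close to 50%. Since the custom cipher itself is not public, the sketch below uses SHA-256 as a stand-in primitive to show only the measurement:

```python
import hashlib

def bit_diff_ratio(a: bytes, b: bytes) -> float:
    """Fraction of differing bits between two equal-length byte strings."""
    diff = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return diff / (8 * len(a))

def avalanche(data: bytes) -> float:
    """Flip one input bit and measure how many output bits change."""
    flipped = bytes([data[0] ^ 0x01]) + data[1:]
    d1 = hashlib.sha256(data).digest()
    d2 = hashlib.sha256(flipped).digest()
    return bit_diff_ratio(d1, d2)
```

A ratio far from 0.5 would indicate that input structure leaks into the output, which is exactly what the AE criterion is meant to rule out.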
8

Ayobami, Miss Adedeji. "Global Cybersecurity Resilience: Advanced Strategies and Emerging Technologies for Protecting Critical Digital Infrastructure." International Journal of Research and Innovation in Social Science VIII, no. VIII (2024): 1547–53. http://dx.doi.org/10.47772/ijriss.2024.8080112.

Full text
Abstract:
Ensuring the protection of those strategic systems and networks from constantly emerging threats in the contemporary context of globalization is crucial to maintaining states’ security and economies. This paper focuses on how organizational cybersecurity readiness can be improved and ways of putting up good defense mechanisms. These are Threat Intelligence Systems including AI &amp; ML Systems for detecting advanced threats as well as Endpoint Solutions including Endpoint Security and Networks that are fortified through Firewalls, VPN &amp; Zero Trust Architecture. Pivotal to these activities are information protection measures such as encryption and Data Loss Prevention (DLP), to safeguard the confidentiality of the data. IAM solutions that incorporate MFA and RBAC also help reduce threats of unauthorized access by controlling user account privileges. That is why adherence to the national and international cybersecurity standards with the help of governmental directions and legal laws such as NIST or GDPR makes organizational security positions more stable. Currently, new technology solutions like Artificial Intelligence, Quantum computing, and Blockchain provide better threat analytical systems and enhanced encryption methods which are necessary for safeguarding main digital assets in today’s environment.
9

Hamid, Dalia Ebrahim, Hanan M. Amer, Hossam El-Din Salah Moustafa, and Hanaa Salem Marie. "Empowering health data protection: machine learning-enabled diabetes classification in a secure cloud-based IoT framework." Indonesian Journal of Electrical Engineering and Computer Science 34, no. 2 (2024): 1110. http://dx.doi.org/10.11591/ijeecs.v34.i2.pp1110-1121.

Full text
Abstract:
Smart medical devices and the internet of things (IoT) have enhanced healthcare systems by allowing remote monitoring of patients' health. Because of the unexpected increase in the number of diabetes patients, it is critical to regularly evaluate patients' health conditions before any significant illness occurs. As a result of transmitting a large volume of sensitive medical data, dealing with IoT data security issues remains a difficult challenge. This paper presents a secure remote diabetes monitoring (SR-DM) model that uses hybrid encryption, combining the advanced encryption standard and elliptic curve cryptography (AES-ECC), to ensure the patients' sensitive data is protected in IoT platforms based on the cloud. The health statuses of patients are determined in this model by predicting critical situations using machine learning (ML) algorithms for analyzing medical data sensed by smart health IoT devices. The results reveal that the AES-ECC approach has a significant influence on cloud-based IoT systems and the random forest (RF) classification method outperforms with a high accuracy of 91.4%. As a consequence of the outcomes obtained, the proposed model effectively establishes a secure and efficient system for remote health monitoring.
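The hybrid pattern behind AES-ECC (asymmetric key agreement establishes a session key, then a symmetric cipher protects the bulk data) can be sketched with standard-library stand-ins: classic finite-field Diffie-Hellman over a toy prime in place of ECC, and an HMAC-SHA256 keystream in place of AES. The payload and parameters are illustrative only:

```python
import hashlib
import hmac
import secrets

P = (1 << 127) - 1  # Mersenne prime M127: a toy DH modulus, far too small for real use
G = 3

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data against an HMAC-SHA256 counter keystream (AES stand-in)."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()
        out.extend(x ^ y for x, y in zip(data[i:i + 32], block))
    return bytes(out)

# Key agreement: device and gateway derive the same session key without
# ever transmitting a secret.
a = secrets.randbelow(P - 2) + 1          # device's private exponent
b = secrets.randbelow(P - 2) + 1          # gateway's private exponent
shared_device = pow(pow(G, b, P), a, P)   # device combines gateway's public value
shared_gateway = pow(pow(G, a, P), b, P)  # gateway combines device's public value
session_key = hashlib.sha256(str(shared_device).encode()).digest()

ciphertext = keystream_xor(session_key, b"temp=36.6;hr=72")
plaintext = keystream_xor(session_key, ciphertext)  # XOR stream is its own inverse
```

The design motivation is the same as in the paper's AES-ECC pairing: the expensive asymmetric step runs once per session, while every sensor reading is protected by the cheap symmetric layer.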
11

Bharati, Subrato, and Prajoy Podder. "Machine and Deep Learning for IoT Security and Privacy: Applications, Challenges, and Future Directions." Security and Communication Networks 2022 (August 27, 2022): 1–41. http://dx.doi.org/10.1155/2022/8951961.

Full text
Abstract:
The integration of the Internet of Things (IoT) connects a number of intelligent devices that can interact with one another with minimal human interference. IoT is rapidly emerging in the areas of computer science. However, new security problems are posed by the cross-cutting design of the multidisciplinary elements and IoT systems involved in deploying such schemes. The implementation of security protocols for IoT systems, i.e., authentication, encryption, application security, and network access control, is often ineffective and suffers from essential security weaknesses. Current security approaches can also be improved to protect the IoT environment effectively. In recent years, deep learning (DL)/machine learning (ML) has progressed significantly in various critical implementations. Therefore, DL/ML methods are essential for moving IoT protection beyond simply enabling safe communication between IoT systems toward intelligent security systems. This review includes an extensive analysis of ML systems and state-of-the-art developments in DL methods to improve IoT device protection. Various new insights into machine and deep learning for IoT security illustrate how they could guide future research. IoT protection risks relating to emerging or essential threats are identified, as well as future IoT device attacks and the possible threats associated with each attack surface. We then carefully analyze DL and ML IoT protection approaches and present each approach's benefits, possibilities, and weaknesses. This review discusses a number of potential challenges and limitations. Future work, recommendations, and suggestions for DL/ML in IoT security are also included.
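A minimal instance of the ML-based protection the review surveys is statistical anomaly detection: flag a reading whose z-score against a learned baseline exceeds a threshold. The traffic figures below are hypothetical, and real detectors would use richer features and models:

```python
import statistics

class ZScoreDetector:
    """Flag values more than `threshold` standard deviations from the
    baseline mean, a stand-in for learned IoT intrusion detectors."""
    def __init__(self, baseline, threshold=3.0):
        self.mean = statistics.fmean(baseline)
        self.std = statistics.stdev(baseline)
        self.threshold = threshold

    def is_anomalous(self, value):
        return abs(value - self.mean) / self.std > self.threshold

# Baseline: packets per second observed from a device under normal load.
detector = ZScoreDetector([100, 105, 98, 102, 97, 103, 99, 101])
```

A sudden burst to 500 packets per second would be flagged, while values near the baseline pass, illustrating the shift from fixed rules to models fitted to observed behaviour.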
12

Aliyu Enemosah and Ogbonna George Ifeanyi. "Cloud security frameworks for protecting IoT devices and SCADA systems in automated environments." World Journal of Advanced Research and Reviews 22, no. 3 (2024): 2232–52. https://doi.org/10.30574/wjarr.2024.22.3.1485.

Full text
Abstract:
As automation increasingly relies on the Internet of Things (IoT) and Supervisory Control and Data Acquisition (SCADA) systems, cloud security frameworks have emerged as critical components for safeguarding data integrity and operational resilience. IoT devices and SCADA systems, widely deployed in industrial automation, energy management, and critical infrastructure, generate vast amounts of data and depend on real-time communication. However, their integration into cloud-based systems introduces significant cybersecurity challenges, including unauthorized access, data breaches, and vulnerabilities in communication protocols. Cloud security frameworks provide robust solutions by offering scalable and adaptive tools to protect data and system operations in automated environments. These frameworks leverage encryption, access control, and real-time monitoring to ensure secure data transmission and storage. Advanced solutions integrate machine learning (ML) and artificial intelligence (AI) for proactive threat detection, anomaly detection, and rapid response to cyberattacks. By analysing system behaviours and historical patterns, ML-driven security systems enhance the ability to identify vulnerabilities and prevent breaches before they escalate. This paper explores the role of cloud security in protecting IoT devices and SCADA systems, focusing on innovative security measures such as zero-trust architectures, intrusion detection systems, and ML-enhanced cybersecurity protocols. The paper also examines the challenges of implementing these frameworks, including scalability, compliance with regulatory standards, and maintaining operational efficiency in automated environments. Addressing these issues is essential for building resilient, secure, and efficient automated ecosystems.
14

Kamala Kannan Munusamy Ethirajan. "ServiceNow: Boosting Productivity and Innovation in Healthcare, Manufacturing, and Research." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 11, no. 1 (2025): 982–94. https://doi.org/10.32628/cseit251112101.

Full text
Abstract:
ServiceNow has emerged as a transformative force in enterprise digital transformation across healthcare, manufacturing, and research sectors. The platform's multi-instance cloud architecture enables robust workflow automation and system integration capabilities, fundamentally changing how organizations operate and deliver services. In healthcare, ServiceNow optimizes patient care through automated scheduling, EHR integration, and clinical pathway management, while ensuring regulatory compliance. The manufacturing sector benefits from enhanced production workflows, quality control automation, and predictive maintenance through IoT integration and real-time analytics. In research institutions, the platform streamlines grant management, resource allocation, and regulatory compliance while enabling secure collaboration across organizations. The implementation of comprehensive security measures, including end-to-end encryption and multi-factor authentication, ensures data protection across all sectors. Through phased implementation approaches and continuous improvement processes, organizations achieve significant operational efficiencies and cost reductions. Future developments in AI/ML integration and advanced connectivity options promise to further enhance the platform's capabilities in process automation and decision support.
15

Usiabulu, Ehizokhale Jude, Abel Onolunosen Abhadionmhen, and Husseni Iduku. "ML-Powered Privacy Preservation in Biomedical Data Sharing." African Journal of Medicine, Surgery and Public Health Research 2, no. 3 (2025): 389–407. https://doi.org/10.58578/ajmsphr.v2i3.6143.

Full text
Abstract:
The sharing of biomedical data is essential for accelerating healthcare research, fostering medical innovation, and improving patient outcomes. Such data encompasses a wide range of sensitive information, including electronic health records, genomic sequences, and clinical trial results. Despite its value, biomedical data sharing poses significant privacy risks, such as patient re-identification, unauthorized access, and regulatory non-compliance. These concerns necessitate advanced techniques that balance the need for data utility with stringent privacy protection. Machine learning (ML) has emerged as a powerful tool to facilitate privacy-preserving biomedical data sharing. This manuscript presents a comprehensive review of state-of-the-art ML-based privacy preservation methods, including differential privacy, federated learning, homomorphic encryption, secure multi-party computation, and synthetic data generation through generative models. Each technique offers unique mechanisms to protect sensitive information while enabling collaborative analysis and predictive modeling. These methods have been applied practically across various biomedical domains, including collaborative disease risk prediction and genomic research, clinical trial data analysis, remote patient monitoring, and public health surveillance. Additionally, we evaluate relevant privacy and utility metrics that assess the effectiveness of privacy guarantees and the impact on model performance. The review further examines limitations and challenges—including computational overhead, data heterogeneity, privacy-utility trade-offs, and ethical considerations—that must be addressed to ensure robust and scalable solutions. Looking forward, the manuscript highlights promising future directions, such as hybrid privacy frameworks, enhanced synthetic data generation, real-time privacy-preserving analytics, standardization of evaluation protocols, and interdisciplinary policy development. 
By integrating these advancements, biomedical research can achieve safer and more effective data sharing, ultimately fostering innovation while respecting patient confidentiality and trust.
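Of the techniques reviewed, federated learning has the most compact core step: sites train locally and only model weights leave each institution. This sketch implements the standard FedAvg weighted average; the weight vectors and site sizes are hypothetical, and real deployments would layer secure aggregation or differential privacy on top:

```python
def fed_avg(client_weights, client_sizes):
    """Aggregate per-site model weights without moving raw (e.g.
    biomedical) records off-site. Each site's contribution is weighted
    by its local dataset size, as in the standard FedAvg formulation."""
    total = sum(client_sizes)
    dims = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(dims)
    ]
```

For two sites holding 1 and 3 records, `fed_avg([[1.0, 2.0], [3.0, 4.0]], [1, 3])` weights the larger site three times as heavily, yielding `[2.5, 3.5]`.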
16

Manne, Tirumala Ashish Kumar. "Enhancing Security in Cloud Computing Using Artificial Intelligence (AI) Techniques." International Journal of Computing and Engineering 3, no. 1 (2022): 45–53. https://doi.org/10.47941/ijce.2764.

Full text
Abstract:
Cloud computing has revolutionized data storage, processing, and accessibility, but it also introduces significant security challenges, including data breaches, insider threats, unauthorized access, and distributed denial-of-service (DDoS) attacks. Traditional security approaches, such as rule-based firewalls and static access control mechanisms, struggle to counter increasingly sophisticated cyber threats. Artificial Intelligence (AI) has emerged as a transformative solution, leveraging machine learning (ML), deep learning (DL), and natural language processing (NLP) to enhance cloud security. AI-driven threat detection systems analyze vast datasets in real time, identifying anomalies and predicting potential attacks with high accuracy. AI-powered automated incident response mechanisms help mitigate security risks by proactively addressing vulnerabilities and adapting to evolving threats. This paper examines the integration of AI techniques into cloud security frameworks, highlighting applications such as intelligent intrusion detection, adaptive authentication, AI-enhanced encryption, and automated compliance monitoring, and outlines the advantages AI brings in reducing response time, improving threat intelligence, and optimizing resource allocation. AI’s application in cybersecurity also poses challenges, including adversarial AI attacks, data bias, and computational overhead. By leveraging AI, organizations can achieve a more resilient and proactive defense against emerging cyber threats in cloud environments.
17

Abdel Hakeem, Shimaa A., Hanan H. Hussein, and HyungWon Kim. "Security Requirements and Challenges of 6G Technologies and Applications." Sensors 22, no. 5 (2022): 1969. http://dx.doi.org/10.3390/s22051969.

Full text
Abstract:
After implementing 5G technology, academia and industry started researching 6th generation wireless network technology (6G). 6G is expected to be implemented around the year 2030. It will offer a significant experience for everyone by enabling hyper-connectivity between people and everything. In addition, it is expected to extend mobile communication possibilities where earlier generations could not have developed. Several potential technologies are predicted to serve as the foundation of 6G networks. These include upcoming and current technologies such as post-quantum cryptography, artificial intelligence (AI), machine learning (ML), enhanced edge computing, molecular communication, THz, visible light communication (VLC), and distributed ledger (DL) technologies such as blockchain. From a security and privacy perspective, these developments require a reconsideration of traditional security methods. Novel authentication, encryption, access control, communication, and malicious activity detection methods must satisfy the significantly higher requirements of future networks. In addition, new security approaches are necessary to ensure trustworthiness and privacy. This paper provides insights into the critical problems and difficulties related to the security, privacy, and trust issues of 6G networks. Moreover, the standard technologies and the security challenges for each technology are clarified. This paper introduces the 6G security architecture and improvements over the 5G architecture. We also introduce the security issues and challenges of the 6G physical layer. In addition, the AI/ML layers and the proposed security solution in each layer are studied. The paper summarizes the security evolution in legacy mobile networks and concludes with their security problems and the most essential 6G application services and their security requirements. Finally, this paper provides a complete discussion of 6G networks’ trustworthiness and solutions.
APA, Harvard, Vancouver, ISO, and other styles
18

Aravindsundeep Musunuri, (Dr.) Punit Goel, and A Renuka. "Innovations in Multicore Network Processor Design for Enhanced Performance." Innovative Research Thoughts 9, no. 3 (2023): 177–90. http://dx.doi.org/10.36676/irt.v9.i3.1460.

Full text
Abstract:
The rapid expansion of network traffic, driven by the proliferation of internet-connected devices and the growing demand for high-speed data transmission, has intensified the need for advanced network processing capabilities. Multicore network processors have emerged as a pivotal solution to address these challenges, offering significant enhancements in performance, scalability, and efficiency. This paper explores the innovations in multicore network processor design, focusing on the architectural advancements and optimization techniques that have been instrumental in elevating their performance. One of the key innovations in multicore network processor design is the shift from traditional single-core processors to multicore architectures. This transition has allowed for parallel processing, where multiple cores can simultaneously execute different tasks, significantly increasing throughput and reducing latency. The adoption of multicore architectures has also facilitated the handling of diverse and complex workloads, which is essential in modern networking environments that demand high performance and low power consumption. A major focus of recent innovations is the optimization of core interconnects and memory hierarchies. Efficient inter-core communication is critical for maintaining high performance in multicore processors. The development of advanced interconnect technologies, such as network-on-chip (NoC) and high-bandwidth interconnects, has minimized communication bottlenecks, enabling faster data exchange between cores. Additionally, improvements in memory hierarchies, including the integration of larger caches and the use of intelligent memory management techniques, have further enhanced data access speeds and reduced memory latency, contributing to overall performance gains. Another significant area of innovation is the implementation of specialized cores within multicore processors. 
These specialized cores are designed to handle specific network functions, such as encryption, compression, and deep packet inspection, more efficiently than general-purpose cores. By offloading these tasks to specialized cores, the overall processing load is balanced, leading to better performance and energy efficiency. Furthermore, the integration of hardware accelerators, such as field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs), has been a critical development, providing dedicated processing power for complex tasks and further enhancing the performance of multicore network processors. Power efficiency has also been a major consideration in the design of multicore network processors. Innovations in dynamic voltage and frequency scaling (DVFS) and power gating have enabled processors to adjust their power consumption based on workload demands, reducing energy usage without compromising performance. Additionally, advances in thermal management techniques, such as improved heat dissipation methods and adaptive cooling technologies, have ensured that multicore processors can operate at peak performance levels without overheating. The integration of machine learning (ML) and artificial intelligence (AI) in multicore network processor design represents another frontier of innovation. ML algorithms can optimize resource allocation, predict traffic patterns, and dynamically adjust processing tasks to enhance performance and efficiency. AI-driven management of network processors allows for more intelligent decision-making, enabling the processors to adapt to changing network conditions in real-time, which is crucial for maintaining high performance in dynamic environments. Moreover, the increasing complexity of network security has driven innovations in multicore network processor design. 
Enhancements in security features, such as hardware-based encryption and real-time threat detection, have been integrated into modern processors to safeguard against evolving cyber threats. The ability to handle security tasks at the hardware level not only improves performance but also provides a robust defense mechanism against attacks, ensuring the integrity and confidentiality of data. In conclusion, the ongoing innovations in multicore network processor design are pivotal in meeting the growing demands of modern networking environments. By advancing processor architectures, optimizing interconnects and memory hierarchies, integrating specialized cores, enhancing power efficiency, and incorporating AI and security features, these processors are well-equipped to deliver superior performance and efficiency. As network demands continue to evolve, these innovations will play a crucial role in shaping the future of network processing, enabling faster, more reliable, and secure data transmission across increasingly complex networks.
APA, Harvard, Vancouver, ISO, and other styles
19

Amos Nyombi, Wycliff Nagalila, Babrah Happy, Mark Sekinobe, and Jimmy Ampe. "Enhancing cybersecurity protocols in tax accounting practices: Strategies for protecting taxpayer information." World Journal of Advanced Research and Reviews 23, no. 3 (2024): 1788–98. http://dx.doi.org/10.30574/wjarr.2024.23.3.2838.

Full text
Abstract:
This paper highlights the importance of enhancing cybersecurity measures in tax accounting to protect taxpayer data. It proposes strategies to ensure compliance and enhance data security in the context of evolving cyber threats. This study investigated the cybersecurity landscape within the tax accounting sector, focusing on prevalent threats, the effectiveness of existing security measures, and areas for improvement. Utilizing a mixed-methods approach, the research combined qualitative interviews with cybersecurity experts and tax accounting professionals, quantitative surveys of 200 tax accounting firms, and detailed case studies of firms that experienced significant cyberattacks. The findings reveal that phishing attacks, ransomware, data breaches, malware, and insider threats are the most common cybersecurity challenges faced by tax accounting practices. While measures such as encryption, multi-factor authentication (MFA), firewalls, intrusion detection systems (IDS), regular security audits, and comprehensive employee training prove effective, their inconsistent implementation across the industry highlights the need for standardized protocols. The study identifies significant gaps in resource allocation, particularly for smaller firms and non-profits, and underscores the necessity for formal incident response plans. Recommendations include enhanced training programs, development of standardized security protocols, resource support for smaller firms, regular security audits, comprehensive incident response plans, and adoption of advanced technologies. The study calls for further exploration of emerging threats, cost-effective solutions for smaller firms, the impact of artificial intelligence (AI) and machine learning (ML), longitudinal studies on cybersecurity practices, and analysis of policy and regulatory impacts. 
These insights aim to enhance the cybersecurity posture of tax accounting practices, ensuring the protection of sensitive taxpayer information and overall industry resilience.
APA, Harvard, Vancouver, ISO, and other styles
20

Aktas, Fatma, Ibraheem Shayea, Mustafa Ergen, et al. "Routing Challenges and Enabling Technologies for 6G–Satellite Network Integration: Toward Seamless Global Connectivity." Technologies 13, no. 6 (2025): 245. https://doi.org/10.3390/technologies13060245.

Full text
Abstract:
The capabilities of 6G networks surpass those of existing networks, aiming to enable seamless connectivity between all entities and users at any given time. A critical aspect of achieving enhanced and ubiquitous mobile broadband, as promised by 6G networks, is merging satellite networks with land-based networks, which offers significant potential in terms of coverage area. Advanced routing techniques in next-generation network technologies, particularly when incorporating terrestrial and non-terrestrial networks, are essential for optimizing network efficiency and delivering promised services. However, the dynamic nature of the network, the heterogeneity and complexity of next-generation networks, and the relative distance and mobility of satellite networks all present challenges that traditional routing protocols struggle to address. This paper provides an in-depth analysis of 6G networks, addressing key enablers, technologies, commitments, satellite networks, and routing techniques in the context of 6G and satellite network integration. To ensure 6G fulfills its promises, the paper emphasizes necessary scenarios and investigates potential bottlenecks in routing techniques. Additionally, it explores satellite networks and identifies routing challenges within these systems. The paper highlights routing issues that may arise in the integration of 6G and satellite networks and offers a comprehensive examination of essential approaches, technologies, and visions required for future advancements in this area. 6G and satellite networks are associated with technical terms such as AI/ML, quantum computing, THz communication, beamforming, MIMO technology, ultra-wide band and multi-band antennas, hybrid channel models, and quantum encryption methods. These technologies will be utilized to enhance the performance, security, and sustainability of future networks.
APA, Harvard, Vancouver, ISO, and other styles
21

Amos, Nyombi, Nagalila Wycliff, Happy Babrah, Sekinobe Mark, and Ampe Jimmy. "Enhancing cybersecurity protocols in tax accounting practices: Strategies for protecting taxpayer information." World Journal of Advanced Research and Reviews 23, no. 3 (2024): 1788–98. https://doi.org/10.5281/zenodo.14950283.

Full text
Abstract:
This paper highlights the importance of enhancing cybersecurity measures in tax accounting to protect taxpayer data. It proposes strategies to ensure compliance and enhance data security in the context of evolving cyber threats. This study investigated the cybersecurity landscape within the tax accounting sector, focusing on prevalent threats, the effectiveness of existing security measures, and areas for improvement. Utilizing a mixed-methods approach, the research combined qualitative interviews with cybersecurity experts and tax accounting professionals, quantitative surveys of 200 tax accounting firms, and detailed case studies of firms that experienced significant cyberattacks. The findings reveal that phishing attacks, ransomware, data breaches, malware, and insider threats are the most common cybersecurity challenges faced by tax accounting practices. While measures such as encryption, multi-factor authentication (MFA), firewalls, intrusion detection systems (IDS), regular security audits, and comprehensive employee training prove effective, their inconsistent implementation across the industry highlights the need for standardized protocols. The study identifies significant gaps in resource allocation, particularly for smaller firms and non-profits, and underscores the necessity for formal incident response plans. Recommendations include enhanced training programs, development of standardized security protocols, resource support for smaller firms, regular security audits, comprehensive incident response plans, and adoption of advanced technologies. The study calls for further exploration of emerging threats, cost-effective solutions for smaller firms, the impact of artificial intelligence (AI) and machine learning (ML), longitudinal studies on cybersecurity practices, and analysis of policy and regulatory impacts. 
These insights aim to enhance the cybersecurity posture of tax accounting practices, ensuring the protection of sensitive taxpayer information and overall industry resilience.
APA, Harvard, Vancouver, ISO, and other styles
22

Omar, Omar, and Muhammad Jawad Ikram. "Advanced Threat Detection in Cyber-Physical Systems using Lemurs Optimization Algorithm with Deep Learning." Journal of Cybersecurity and Information Management 15, no. 02 (2025): 87–99. http://dx.doi.org/10.54216/jcim.150208.

Full text
Abstract:
Cyber-physical systems (CPS) are central to critical infrastructures such as smart grids and water management and are increasingly vulnerable to a wide range of evolving threats. Detecting threats to CPS is of the utmost importance, owing to their progressively frequent use in numerous critical assets. Traditional safeguards such as firewalls and encryption are often insufficient for CPS architectures; implementing Intrusion Detection Systems (IDSs) tailored to CPS is therefore a crucial strategy for protecting them. Artificial intelligence (AI) techniques have shown great promise in numerous areas of network security, mainly in network traffic monitoring and in the detection of unauthorized access, misuse, or denial of network resources. IDS in CPSs and other fields, such as the Internet of Things, is regularly built using deep learning (DL) and machine learning (ML). This manuscript presents the design of an Advanced Threat Detection using the Lemurs Optimization Algorithm with Deep Learning (ATD-LOADL) methodology for the CPS platform. The primary aim of the ATD-LOADL methodology is the recognition and classification of cyber threats in CPS. In the preliminary phase, the CPS data are pre-processed using a min-max scaler. To select an optimal set of features, the ATD-LOADL technique uses LOA as a feature selection approach. For threat detection, the ATD-LOADL algorithm uses a multi-head attention-based long short-term memory (MHA-LSTM) classifier. Finally, the detection results of the MHA-LSTM method are boosted by the shuffled frog leap algorithm (SFLA). The experimental outcomes of the ATD-LOADL approach are extensively investigated on a benchmark CPS dataset. The experimental results demonstrate the enhanced threat detection of the ATD-LOADL technique over other existing approaches.
APA, Harvard, Vancouver, ISO, and other styles
23

Ali, Syed Afraz. "Designing Secure and Robust E-Commerce Plaform for Public Cloud." Asian Bulletin of Big Data Management 3, no. 1 (2023): 164–89. http://dx.doi.org/10.62019/abbdm.v3i1.56.

Full text
Abstract:
The migration of e-commerce platforms to the public cloud has become a pivotal strategy for businesses seeking enhanced scalability, performance, and cost-efficiency. This paper explores the multifaceted design considerations critical to deploying robust e-commerce systems within the public cloud infrastructure. It delves into scalability, ensuring that platforms can handle varying loads with elasticity and grace. Security is examined as a paramount concern, addressing the need for stringent data protection, compliance with industry standards, and the implementation of best practices such as encryption and identity access management. Performance optimization is discussed, with a focus on leveraging content delivery networks and optimizing database operations to ensure swift customer experiences. The paper also covers reliability and availability, emphasizing the necessity of multi-regional deployment and sophisticated disaster recovery plans to guarantee uninterrupted service. Cost management is analyzed, highlighting the importance of understanding cloud pricing models and employing cost-effective resource utilization strategies. Data management is scrutinized, considering secure storage, privacy, and efficient data handling. User experience is identified as a critical component, with personalization and session management being key to customer satisfaction. The role of DevOps and automation in achieving efficient deployment cycles through continuous integration and delivery is outlined. The benefits of a microservices architecture are presented, along with the challenges of managing such distributed systems. Multi-tenancy and isolation are discussed in the context of security and resource optimization. Integration and APIs are explored for their role in facilitating extensibility and seamless third-party service incorporation. 
Compliance and legal considerations are addressed, underscoring the importance of data sovereignty and regular audits. Lastly, the paper touches on the incorporation of emerging technologies such as AI, ML, and IoT to stay at the forefront of innovation, and concludes with a discussion on environmental considerations for sustainable cloud practices. This comprehensive analysis provides a roadmap for businesses to navigate the complexities of cloud-based e-commerce, ensuring robust, secure, and efficient online retail operations.
APA, Harvard, Vancouver, ISO, and other styles
24

Temitope Oluwatosin Fatunmbi. "Advanced frameworks for fraud detection leveraging quantum machine learning and data science in fintech ecosystems." World Journal of Advanced Engineering Technology and Sciences 12, no. 1 (2024): 495–513. https://doi.org/10.30574/wjaets.2024.12.1.0057.

Full text
Abstract:
The rapid expansion of the fintech sector has brought with it an increasing demand for robust and sophisticated fraud detection systems capable of managing large volumes of financial transactions. Conventional machine learning (ML) approaches, while effective, often encounter limitations in terms of computational efficiency and the ability to model complex, high-dimensional data structures. Recent advancements in quantum computing have given rise to a promising paradigm known as quantum machine learning (QML), which leverages quantum mechanical principles to solve problems that are computationally infeasible for classical computers. The integration of QML with data science has opened new avenues for enhancing fraud detection frameworks by improving the accuracy and speed of transaction pattern analysis, anomaly detection, and risk mitigation strategies within fintech ecosystems. This paper aims to explore the potential of quantum-enhanced data science methodologies to bolster fraud detection and prevention mechanisms, providing a comparative analysis of QML techniques against classical ML models in the context of their application to financial data analysis. Fraud detection in fintech relies heavily on data-driven models to identify suspicious activities and prevent financial crimes such as identity theft, money laundering, and fraudulent transactions. Traditional ML approaches, such as decision trees, support vector machines, and deep learning, have laid the foundation for these systems. However, these approaches often fall short when faced with the challenges posed by high-dimensional, noisy, and complex financial data. Quantum machine learning, by leveraging quantum bits or qubits, possesses the unique ability to represent and process data in an exponentially larger state space, allowing for more efficient pattern recognition and computationally intensive analysis. 
Quantum algorithms such as the Quantum Support Vector Machine (QSVM), Quantum Principal Component Analysis (QPCA), and Quantum Neural Networks (QNNs) have been studied for their potential to outperform classical counterparts in specific problem domains, including fraud detection. This research delves into the theoretical foundations of quantum computing, outlining how quantum superposition, entanglement, and quantum interference can be harnessed to perform operations that exponentially accelerate data processing. Quantum algorithms are presented as capable of achieving faster data transformations and more nuanced pattern recognition through their ability to process all potential combinations of data simultaneously. The implementation of QML algorithms on quantum hardware, although still in its nascent stages, is beginning to demonstrate tangible benefits in terms of the speed and complexity of computations for fraud detection tasks. For example, quantum-enhanced anomaly detection can lead to the identification of rare, complex patterns that classical ML might overlook, contributing to a more proactive approach to fraud prevention. The paper also examines the integration of data science techniques with quantum-enhanced fraud detection, considering data preprocessing, feature engineering, and the application of quantum-enhanced statistical methods. Data preprocessing, a crucial step in building effective fraud detection models, involves the transformation and normalization of financial data to ensure that models can learn from relevant features without overfitting or underfitting. Quantum data structures offer the potential to represent data with a higher degree of complexity and interrelations, which is critical for capturing the multifaceted nature of financial transactions and detecting subtle signs of fraudulent activity. 
Quantum data encoding schemes such as Quantum Random Access Memory (QRAM) enable efficient storage and retrieval of data, providing a scalable solution for processing large datasets in real-time. A comprehensive analysis of case studies demonstrates the real-world applicability of quantum machine learning frameworks in fintech. The research highlights projects where quantum algorithms have been tested in controlled environments to detect anomalies in simulated transaction data, showcasing improvements in the identification of complex fraud scenarios over classical ML approaches. For instance, Quantum Support Vector Machines have been utilized to perform higher-dimensional classification tasks that are essential for distinguishing between legitimate and fraudulent transactions based on transaction history and user behavior. Furthermore, quantum algorithms that operate on hybrid systems, combining quantum and classical resources, are also explored to mitigate the limitations imposed by current quantum hardware, which is still constrained by issues such as noise and qubit coherence time. The paper also addresses key challenges and limitations associated with the integration of QML into practical fraud detection systems. Quantum hardware, although advancing rapidly, still faces significant challenges, including the need for error correction, qubit stability, and hardware scalability. Quantum computers with sufficient qubits and coherence time are necessary to implement complex algorithms for fraud detection effectively. Additionally, a practical approach to harnessing QML would require the development of quantum software frameworks and quantum programming languages that can operate in tandem with existing fintech systems and data infrastructure. Another area of focus is the synergy between quantum machine learning and classical machine learning models in creating hybrid systems that leverage the strengths of both methodologies. 
Quantum-enhanced feature extraction and dimensionality reduction can be combined with classical algorithms for final decision-making processes. This allows for a more comprehensive approach where quantum algorithms handle the computationally intensive parts of data analysis, while classical systems can be utilized for integrating real-time data and refining output for human interpretation. The paper discusses potential pathways for integrating these hybrid models, including considerations for API development, data interoperability, and the standardization of quantum-classical workflows. The discussion extends to the practical implications of implementing quantum-based fraud detection systems, particularly in terms of security and privacy. The use of quantum encryption and quantum key distribution can complement QML by ensuring that the data fed into fraud detection models is protected from external tampering. Quantum-resistant cryptography solutions are also explored, providing a comprehensive view of how quantum technologies could enhance the overall security posture of fintech ecosystems while promoting trust and compliance.
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, Jue, Usha Pendurthi, and L. Vijaya Mohan Rao. "Alterations in Sphingomyelin Metabolism Influences Tissue Factor Procoagulant Activity and the Release of TF-Positive Microvesicles." Blood 134, Supplement_1 (2019): 1103. http://dx.doi.org/10.1182/blood-2019-125933.

Full text
Abstract:
Tissue factor (TF), an integral membrane glycoprotein, is a cofactor for coagulation factor VIIa (FVIIa) and the primary cellular initiator of coagulation. Upon vascular injury or in disease conditions, blood comes in contact with TF, and the formation of the TF-FVIIa complex initiates activation of the coagulation cascade. While TF is critical for the maintenance of hemostasis, aberrant expression of TF activity could lead to thrombotic disorders. Typically, most of the TF on cell surfaces exists in a cryptic, coagulant-inactive state, and an "activation" step (decryption) is essential for the transformation of cryptic TF to prothrombotic TF. Our recent studies showed that sphingomyelin (SM) in the outer leaflet of the plasma membrane is responsible for maintaining TF in an encrypted state in resting cells. The hydrolysis of SM, by either bacterial sphingomyelinase (bSMase) or acid sphingomyelinase (ASMase) translocated from lysosomes to the outer leaflet in response to ATP, LPS, or cytokine stimulation, increased TF activity on intact cells without altering TF protein levels. SM hydrolysis also led to the release of TF+ microvesicles (MVs). Inhibition of ASMase by functional inhibitors blocked LPS-induced TF procoagulant activity without impairing LPS-induced TF antigen levels in both in vitro and in vivo model systems. SM levels in the plasma membrane are regulated primarily by SM-synthesizing enzymes, such as sphingomyelin synthases (SMS) 1 and 2, or SM-hydrolyzing enzymes, such as ASMase and neutral SMases (nSMases). Many disease conditions, including diabetes, ischemia/hypoxia, and cancer, alter SM metabolism by altering the activities of the above enzymes. These diseases are also known to carry increased thrombotic risk. 
To investigate the importance of SM metabolism in regulating TF procoagulant activity through TF encryption and decryption, we either overexpressed or silenced the enzymes involved in SM metabolism and determined their effect on TF procoagulant activity on intact cells and the release of TF+ MVs. Human monocyte-derived macrophages (MDMs) or human umbilical vein endothelial cells (HUVEC) were chosen as cell model systems. In the first set of experiments, MDMs were transfected with adenovirus encoding SMS1, SMS2, or both to overexpress SMS. Analysis of SM levels in the outer leaflet by confocal microscopy and flow cytometry using an SM-specific binding protein (lysenin) revealed that overexpression of SMS1 or SMS2 increased SM levels in the outer leaflet. Measurement of TF activity on intact cells showed that overexpression of either SMS1 or SMS2 reduced both basal TF activity and the extent of increased TF activity following ATP or bSMase treatment. Overexpression of SMS1 or SMS2 also decreased the release of TF+ MVs. Overexpression of SMS1 or SMS2 had no significant effect on TF antigen levels. In the next set of experiments, MDMs were transfected with control scrambled RNA (scRNA) or siRNA specific for ASMase, nSMase1, nSMase2, or nSMase3. As expected from our earlier studies, ASMase silencing attenuated both basal and ATP-induced increased TF activity in MDMs. In the case of nSMases, the knock-down of nSMase2 or nSMase3, but not nSMase1, reduced basal TF activity as well as ATP-induced TF decryption in MDMs. Analysis of SM levels in the outer leaflet showed that silencing of ASMase, nSMase2, or nSMase3 enhanced the SM content. The knock-down of either ASMase or nSMases did not affect TF antigen levels. In additional studies, HUVECs were transfected with control scRNA or siRNA specific for nSMase1, nSMase2, or nSMase3. Forty-eight hours post-transfection, HUVECs were stimulated with TNFα (10 ng/ml) plus IL-1β (10 ng/ml) for 6 h to induce TF expression. 
Analysis of cell surface TF activity showed that silencing nSMase2 or nSMase3, but not nSMase1, attenuated TNFα+IL-1β-induced TF procoagulant activity without decreasing TNFα+IL-1β-induced TF antigen levels. Overall, our data support the hypothesis that alterations in SM metabolism regulate TF procoagulant activity through encryption and decryption. Disclosures Rao: Takeda: Research Funding.
APA, Harvard, Vancouver, ISO, and other styles
26

Deepthi Kamidi. "Leveraging Artificial Intelligence for Enhanced Data Protection: A Comprehensive Review of Cloud Security amid Emerging threats." Journal of Information Systems Engineering and Management 10, no. 43s (2025): 16–26. https://doi.org/10.52783/jisem.v10i43s.8291.

Full text
Abstract:
Cloud computing offers scalable, adaptable, and affordable solutions that spur innovation across multiple industries, and it has fundamentally changed how those industries function. However, with this widespread adoption comes the growing challenge of protecting sensitive data, especially as more sophisticated cyberattacks become common. Advanced Persistent Threats (APTs), insider attacks, data breaches, and Distributed Denial of Service (DDoS) attacks are just a few of the challenges that modern cloud environments must contend with. These threats highlight flaws in conventional security paradigms. The integration of cutting-edge technologies like artificial intelligence (AI) and machine learning (ML) into cloud security is becoming increasingly important in response to these issues. These technologies are proving to be effective instruments for increasing prediction accuracy, automating threat detection, and enabling real-time encryption protocol modifications. We can improve cloud security by utilising AI and ML to detect anomalies, find zero-day vulnerabilities, and employ predictive models that assist in addressing problems before they become more serious. A thorough analysis of the present uses of AI and ML in cloud security is provided in this work, including how these tools are being used to enhance traditional methods like encryption and access control. It also evaluates the latest research in AI-driven threat detection, behavioral analysis, and adaptive encryption. Additionally, we highlight critical gaps in current AI/ML security frameworks, particularly in terms of scalability, false-positive rates, and the challenges of real-time implementation. 
The primary goals of this review are threefold: first, to systematically analyze the emerging threats to cloud data security; second, to propose the development of more adaptive and robust algorithms that use AI and ML to enhance cloud protection; and third, to present a framework for integrating these algorithms into existing cloud security infrastructures. Ultimately, we hope this review contributes valuable insights that can shape the future of AI/ML-driven cloud security, helping to tackle the evolving challenges that come with modern cloud computing.
APA, Harvard, Vancouver, ISO, and other styles
27

Sarker, Md Takbir Hossen, Ishtiaque Ahmed, and Md Atiqur Rahaman. "AI-BASED SMART TEXTILE WEARABLES FOR REMOTE HEALTH SURVEILLANCE AND CRITICAL EMERGENCY ALERTS: A SYSTEMATIC LITERATURE REVIEW." American Journal of Scholarly Research and Innovation 2, no. 02 (2023): 01–29. https://doi.org/10.63125/ceqapd08.

Full text
Abstract:
The integration of artificial intelligence (AI) in smart textile wearables has revolutionized healthcare by enabling real-time, non-invasive monitoring of physiological parameters, predictive analytics, and automated decision-making for early disease detection and intervention. This systematic review examines the advancements, challenges, and regulatory considerations surrounding AI-powered smart textile wearables by analyzing 244 peer-reviewed studies selected from an initial pool of 1,264 articles published before 2023. The study follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, ensuring a structured and transparent review process. Findings indicate that AI-enhanced biosensors integrated into smart textiles have significantly improved the accuracy and efficiency of health monitoring systems, particularly in areas such as cardiovascular health, diabetes management, neurological disorder detection, respiratory health surveillance, maternal health monitoring, and occupational safety applications. The review highlights that machine learning (ML) and deep learning (DL) models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have increased biosignal classification accuracy by up to 18%, reducing false-positive rates and enhancing clinical decision support. Furthermore, federated learning techniques have addressed algorithmic bias issues, improving the generalizability of AI-driven health assessments while preserving patient data privacy. However, despite these advancements, 32% of the reviewed studies reported challenges related to motion artifacts, environmental variability, and sensor calibration issues, which continue to impact data reliability in wearable medical textiles. 
Regulatory compliance remains a significant barrier, with 64% of studies highlighting the complexity of obtaining FDA pre-market approval (PMA) for AI-integrated medical wearables due to the evolving nature of AI models. Cybersecurity concerns also persist, as 22 reviewed studies identified risks associated with biometric data transmission and unauthorized access, reinforcing the need for stronger encryption protocols and standardized privacy frameworks. Despite these challenges, AI-driven smart textiles have demonstrated their effectiveness in reducing hospital readmissions, improving patient adherence to long-term health monitoring, and lowering overall healthcare costs by 19% through early disease detection and proactive medical intervention. As AI-powered smart textiles continue to evolve, addressing challenges related to sensor accuracy, regulatory oversight, cybersecurity, and interoperability with existing healthcare systems will be crucial to unlocking their full potential. This review underscores the transformative role of AI-integrated smart textile wearables in shaping the future of digital healthcare, enabling innovative, personalized, and data-driven healthcare solutions that optimize clinical workflows, enhance patient outcomes, and drive forward the next generation of intelligent health monitoring technologies.
APA, Harvard, Vancouver, ISO, and other styles
28

Raut, Vaishali, Prasanna Palsodkar, Mithun G. Aush, and Bhalchandra M. Hardas. "Machine Learning and Enhanced Encryption for Edge Computing in IoT and Wireless Networks." Journal of Electrical Systems 20, no. 1s (2024): 200–210. http://dx.doi.org/10.52783/jes.765.

Full text
Abstract:
Machine Learning (ML) combined with enhanced encryption methods is key to addressing the security and performance challenges that Edge Computing introduces in the Internet of Things (IoT) and Wireless Networks. The goal of this study is to improve the security of edge devices and networks so that critical data generated and processed at the edge remains private and secure. Anomaly detection, threat identification, and adaptive security mechanisms rely heavily on machine learning algorithms, which enable proactive defenses against constantly evolving cyber threats. The proposed system uses homomorphic encryption and quantum-resistant cryptography to strengthen data privacy and security. Even on resource-constrained edge devices, these techniques keep data transfer and storage safe. The combination of machine learning and stronger encryption not only protects the IoT environment but also optimizes resource use by adapting security measures on the fly as threats change. This study contributes to the development of safe and effective edge computing models, supporting wider adoption of IoT and wireless networks. The results apply to many settings, from smart cities to industrial robotics, ensuring that the advantages of edge computing can be realized without compromising the safety and privacy of the connected systems.
APA, Harvard, Vancouver, ISO, and other styles
29

J Merlin Florrence. "Integrating Cryptographic Techniques with Machine Learning Algorithms for Enhanced Data Privacy and Information Security: A Mathematical Framework." Communications on Applied Nonlinear Analysis 31, no. 8s (2024): 397–420. http://dx.doi.org/10.52783/cana.v31.1519.

Full text
Abstract:
Integrating machine learning into cryptographic schemes, notably homomorphic encryption (HE), is one prominent research direction that may hold the key to maintaining higher levels of data privacy and information security. However, almost all traditional data encryption models require the payload to be decrypted before any processing can take place. Such a decryption step is a backdoor waiting to be breached, especially for industries such as healthcare that have mandates to safeguard sensitive data. Reports further point out that traditional cryptographic mechanisms are inadequate and suggest lightweight cryptography to secure healthcare IoT-enabled devices, trading off security against the resource constraints of this class of small commodity sensors. HE allows encrypted data to be computed on without decrypting it, keeping sensitive information protected throughout processing. This is particularly important in healthcare, where Patient Health Records must be kept private. This study explores ML algorithms such as SVM and Random Forest within an HE setting for specific tasks like seizure detection or predicting alcoholic predisposition from EEG signals. The results show that predictions on encrypted data are almost as accurate as those on plaintext, though plaintext computation is usually faster and computations on encrypted data can be computationally expensive. This demonstrates the potential for privacy-preserving machine-learning applications, especially in healthcare systems such as the end-to-end solution implemented here. Future work should look into integrating HE with DL methods for higher-level data analysis and designing practical ML algorithms that can be deployed on resource-limited IoT devices.
APA, Harvard, Vancouver, ISO, and other styles
30

Rao, Ganga Rama Koteswara, Hayder M. A. Ghanimi, V. Ramachandran, Dokhyl Al-Qahtani, Pankaj Dadheech, and Sudhakar Sengan. "Enhanced security in federated learning by integrating homomorphic encryption for privacy-protected, collaborative model training." Journal of Discrete Mathematical Sciences and Cryptography 27, no. 2 (2024): 361–70. http://dx.doi.org/10.47974/jdmsc-1891.

Full text
Abstract:
A significant novel approach in distributed ML, Federated Learning (FL) enables multiple parties to collaboratively develop models while preserving the confidentiality of their individual datasets. FL still raises privacy concerns for the models being trained, because private information can be inferred from shared gradients or model updates. This investigation proposes SecureHE-Fed, a novel system that strengthens FL's defense against privacy attacks through the use of Homomorphic Encryption (HE) and Zero-Knowledge Proofs (ZKP). SecureHE-Fed encrypts client data before it enters the learning procedure, allowing computations on encrypted messages without revealing the underlying data. As an additional safeguard, ZKP is employed to verify that model updates are valid without disclosing their contents. By evaluating SecureHE-Fed against different FL techniques, the researchers demonstrate that it enhances confidentiality while maintaining model accuracy. The results validate SecureHE-Fed as a secure and scalable FL approach, recommended for applications where user confidentiality is essential.
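The idea of aggregating encrypted client updates can be illustrated with the Paillier cryptosystem, a standard additively homomorphic scheme often used in this setting (the abstract does not name the exact HE scheme, so this is a generic sketch, with deliberately tiny toy primes for readability; real deployments use 2048-bit or larger moduli):

```python
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

def keygen(p=293, q=433):
    # Toy primes for illustration only.
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1                      # standard simple choice of generator
    mu = pow(lam, -1, n)           # valid because g = n + 1
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n  # L(x) = (x - 1) / n
    return (L * mu) % n

pub, priv = keygen()
n2 = pub[0] ** 2
# Two clients encrypt their local updates; the server multiplies the
# ciphertexts, which corresponds to adding the plaintexts (additive
# homomorphism) without ever seeing either value.
c1, c2 = encrypt(pub, 42), encrypt(pub, 58)
agg = (c1 * c2) % n2
assert decrypt(pub, priv, agg) == 100
```

The server only performs the modular multiplication; the aggregated sum is recoverable solely by the holder of the private key, which is the property SecureHE-Fed-style designs rely on.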
APA, Harvard, Vancouver, ISO, and other styles
31

Chaganti, Krishna, and Pavan Paidy. "Strengthening Cryptographic Systems with AI-Enhanced Analytical Techniques." International Journal of Applied Mathematical Research 14, no. 1 (2025): 13–24. https://doi.org/10.14419/fh79gr07.

Full text
Abstract:
This paper uses advanced Artificial Intelligence (AI) analytical tools to enhance cryptographic systems and counter evolving security threats. The proposed approach integrates traditional cryptographic techniques with Machine Learning (ML) to improve key management, encryption algorithms, and overall system security. This methodology is further strengthened by integrating the Cyber Kill Chain (CKC) and the National Institute of Standards and Technology (NIST) Cybersecurity Framework. In the CKC's stage model, Reconnaissance, Weaponization, and Exploitation are related to the NIST functions of Identify, Protect, Detect, Respond, and Recover as a comprehensive cybersecurity plan. Bayesian networks, Markov Decision Processes, and Partial Differential Equations (PDE) are referenced for threat detection, temporal modeling of vulnerabilities, and mathematical correctness, respectively. Introducing such AI-driven optimizations into the CKC and NIST frameworks helps the proposed system achieve better flexibility, robustness, and extensibility. Additionally, reinforcement learning is explored to dynamically adjust security measures based on real-time threats. Experimental validation supports the efficiency of integrating AI-driven analytics into cryptographic frameworks. In this context, the work suggests a forward-looking plan for cybersecurity in contemporary society, mapping between theory development and applications that produce sound and secure cryptographic systems that neutralize cutting-edge security risks.
APA, Harvard, Vancouver, ISO, and other styles
32

Selvan, Thirumalai, S. Siva Shankar, S. Sri Nandhini Kowsalya, et al. "Modernizing cloud computing systems with integrating machine learning for multi-objective optimization in terms of planning and security." MATEC Web of Conferences 392 (2024): 01155. http://dx.doi.org/10.1051/matecconf/202439201155.

Full text
Abstract:
Cloud enterprises face challenges in managing large amounts of data and resources due to the fast expansion of the cloud computing environment, serving a wide range of customers, from individuals to large corporations. Poor resource management reduces the efficiency of cloud computing. This research proposes integrated resource allocation security with effective task planning in cloud computing, utilizing a Machine Learning (ML) approach to address these issues. The suggested ML-based Multi-Objective Optimization Technique (ML-MOOT) is outlined as follows: an enhanced optimization-based task planner aims to reduce make-span time and increase throughput; an ML-based optimization is developed for optimal resource allocation under design constraints such as capacity and resource demand; and a lightweight authentication scheme is suggested for encrypting data to enhance storage safety. The proposed ML-MOOT approach is tested in a separate simulation setting and compared with state-of-the-art techniques to demonstrate its usefulness. The findings indicate that ML-MOOT outperforms existing approaches in resource use, energy utilization, response time, and other factors.
APA, Harvard, Vancouver, ISO, and other styles
33

Anulekshmi, S. "Enhanced WSN security based on Trust-Based Secure Intelligent Opportunistic Routing Protocol in agriculture data transmission sector." RESEARCH REVIEW International Journal of Multidisciplinary 9, no. 12 (2024): 228–44. https://doi.org/10.31305/rrijm.2024.v09.n12.028.

Full text
Abstract:
In recent years, agriculture has become a developing environment for remote sensing, supported by Wireless Sensor Networks (WSN). Growing connectivity and the Internet of Things enable information sharing across all monitored fields. However, due to their open nature, WSNs are vulnerable to various security attacks, which can lead to serious consequences such as data integrity breaches, denial-of-service attacks, and identity theft. Machine Learning (ML) can improve WSN security by detecting and preventing such attacks: ML algorithms analyze the normal behavior of WSNs and flag anomalies that may indicate an attack. This work proposes TBSIOP (Trust-Based Secure Intelligent Opportunistic Routing Protocol) for improving security in WSN agriculture environments. Initially, a Weighted Trust Mechanism (WTM) evaluates the behavioral aspects of nodes in the communication medium. Data packets are then secured through Elliptic Diffie-Hellman Encryption Integrated Cryptography (EDHEIC) on each handover between communication nodes. In addition, Substitutive Padding Key Verification (SPKV) is applied to hand over data securely under a key authentication policy, which prevents non-legitimate users from accessing the network. Attacks may be carried out by flooding the network with packets, sending malicious code into the network, or overloading the network's resources. Finally, TBSIOP selects the best candidate nodes for routing to improve security. The proposed system achieves high performance in trust-based routing, node authentication, and secure data handover with proof of knowledge for data delivery, yielding a higher data transmission rate and a lower packet drop ratio with reduced time complexity compared to other systems.
APA, Harvard, Vancouver, ISO, and other styles
34

Ch., Sowjanya, and D. V. Nagarjana Devi. "Designing an Enhanced Elliptic Curve Cryptography Model for IoT Security in Smart Cities: Integrating Machine Learning for Cyber Attack Detection." May 2, 2025. https://doi.org/10.5281/zenodo.15372412.

Full text
Abstract:
The proliferation of Internet of Things (IoT) devices in smart cities has led to an explosion of interconnected systems, resulting in increased vulnerabilities to cyber-attacks. These devices often handle sensitive personal data, making them prime targets for malicious attacks. Elliptic Curve Cryptography (ECC) has been recognized for its efficient and secure encryption mechanism in IoT applications, but it still faces challenges in adapting to dynamic environments with the high computational demand and network congestion typical of smart cities. The core motivation of this research is to design an enhanced ECC model tailored to the security needs of IoT systems, focusing on optimizing security while maintaining low computational overhead. The primary objective of this study is to integrate ECC with machine learning (ML) techniques to develop a hybrid model that not only secures data communication but also detects and mitigates cyber-attacks in real time. The proposed methodology combines ECC's cryptographic strength with machine learning algorithms to create an intelligent security framework capable of both defending against attacks and optimizing quality of service (QoS) in IoT-enabled smart cities. Key achievements include a significant reduction in the time taken to detect and respond to attacks, coupled with a notable improvement in system throughput and QoS metrics.
Graphical Abstract Description: The graphical abstract depicts an IoT-enabled smart city with a network of interconnected devices, where secure communication is facilitated by an enhanced ECC model. The ECC encryption process is shown in combination with machine learning models used for cyber-attack detection. The diagram illustrates the dynamic adjustment of ECC parameters based on network conditions and the continuous monitoring of network traffic by ML algorithms. The visual also highlights the key benefits of the proposed system, such as attack detection, QoS optimization, and overall IoT security enhancement.
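The key agreement at the heart of ECC-secured IoT communication can be illustrated with a toy elliptic-curve Diffie-Hellman exchange. The curve y² = x³ + 2x + 3 over GF(97) with base point (3, 6) is a textbook toy, not the paper's model; real deployments use standardized curves such as P-256:

```python
# Toy ECDH over y^2 = x^3 + 2x + 3 (mod 97) with base point G = (3, 6).
P_MOD, A = 97, 2
G = (3, 6)
INF = None  # point at infinity

def point_add(p1, p2):
    if p1 is INF: return p2
    if p2 is INF: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                                   # inverse points cancel
    if p1 == p2:                                     # tangent-line doubling
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                            # chord through two points
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, point):
    # Double-and-add scalar multiplication.
    result, addend = INF, point
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

# Each side keeps its scalar secret and exchanges only curve points.
alice_priv, bob_priv = 2, 3
alice_pub = scalar_mult(alice_priv, G)
bob_pub = scalar_mult(bob_priv, G)
assert scalar_mult(alice_priv, bob_pub) == scalar_mult(bob_priv, alice_pub)
```

Both sides arrive at the same shared point, from which a symmetric session key would be derived; an eavesdropper sees only the public points, and recovering the scalars is the elliptic-curve discrete-logarithm problem.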
APA, Harvard, Vancouver, ISO, and other styles
35

S, Thumilvannan, and Balamanigandan R. "DI CVD TRI Layer CX Classifier for Secure IoT Enabled Risk Prediction Model." Journal of Machine and Computing, July 5, 2025, 1571–80. https://doi.org/10.53759/7669/jmc202505124.

Full text
Abstract:
This paper introduces a novel Di-CVD Tri-Layer CX Classifier, an IoT-integrated and machine learning (ML)-driven framework, to predict the individual and joint risk of diabetes (DB) and heart disease (HD). The proposed model comprises three phases: secure IoT-based data collection using Enhanced BGV encryption with Dynamic Distributed Hashing (DDH); a feature extraction (FE) phase leveraging the Information Gain Ratio (IGO) and disease-specific ranking; and a three-step classifier consisting of Cm-Ro feature selection (FS), hierarchical XGBoost classification, and synergistic prioritized risk scoring. By integrating multi-attribute features, rule-free optimization, and enhanced interoperability, the model addresses critical challenges such as heterogeneous data formats, poor feature relevance, and low interoperability in previous studies. Compared to conventional classifiers such as SVM and standard XGBoost, experimental evaluation on the NHANES dataset shows improved performance in terms of accuracy (ACC), recall (R), precision (P), and F1-score. The outcomes validate the framework's effectiveness in early, secure, and individualized risk prediction, offering substantial support for timely interventions and enhanced patient care.
APA, Harvard, Vancouver, ISO, and other styles
36

"The Impact of Machine Learning Algorithms and Big Data on Privacy in Data Collection and Analysis." Canadian Journal of Business and Information Studies, October 11, 2024, 93–103. http://dx.doi.org/10.34104/ajeit.024.0930103.

Full text
Abstract:
In the era of rapid technological advancements, machine learning (ML) and big data analytics have become pivotal in harnessing vast amounts of data for insights, efficiency, and innovation across various sectors. However, the widespread collection and analysis of data raise significant privacy concerns, highlighting the delicate balance between leveraging technology for societal benefits and safeguarding individual privacy. This article delves into the complexities of data collection and analysis practices, emphasizing the potential for privacy breaches through methods such as location tracking, browsing habits analysis, and the creation of detailed personal profiles. It discusses the implications of ML algorithms capable of de-anonymizing data, despite measures like data anonymization and encryption aimed at protecting privacy. The article also examines the existing legal frameworks, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), designed to enhance privacy protection, alongside the ethical considerations for developers and companies in using ML and big data. Furthermore, it explores future outlooks, including developments in technologies like federated learning and differential privacy, that promise enhanced privacy protection. The conclusion calls for a concerted effort among policymakers, technologists, and the public to engage in ongoing dialogue and develop solutions that ensure the ethical use of ML and big data while upholding privacy rights.
APA, Harvard, Vancouver, ISO, and other styles
37

ISRAT JAHAN, SHOHONI MAHABUB, and MD RUSSEL HOSSAIN. "Optimizing Data analysis and security of Electronic Health Records (EHR): Role in Revolutionization of Machine Learning for Usability interface." Nanotechnology Perceptions, November 5, 2024, 4010–22. https://doi.org/10.62441/nano-ntp.vi.3713.

Full text
Abstract:
The digitization of healthcare has led to an exponential increase in the generation of Electronic Health Records (EHRs), presenting opportunities and challenges for optimizing data analysis and security. The integration of machine learning (ML) into EHR systems has revolutionized usability interfaces, enabling effective data management, enhanced patient care, and improved decision-making. However, the sensitivity of EHRs demands robust security mechanisms to protect against breaches and unauthorized access. This study investigates the optimization of data analysis and security in EHR systems, focusing on machine learning's role in usability interface enhancement. The proposed research develops a hybrid approach combining advanced ML models with encryption techniques to ensure data integrity and security without compromising user experience. The study emphasizes interpretability and scalability, addressing challenges in processing large datasets while ensuring compliance with regulatory frameworks such as HIPAA. By integrating real-time analytics, anomaly detection, and user-friendly interfaces, this research aims to bridge the gap between data usability and robust security measures, paving the way for secure and efficient EHR management systems.
APA, Harvard, Vancouver, ISO, and other styles
38

Kang, Seongkweon, Doojin Hong, Biswajit Das, et al. "Ferroelectric Stochasticity in 2D CuInP2S6 and Its Application for True Random Number Generator." Advanced Materials, July 16, 2024. http://dx.doi.org/10.1002/adma.202406850.

Full text
Abstract:
True random number generators (TRNGs), which create cryptographically secure random bitstreams, hold great promise in addressing security concerns regarding hardware, communication, and authentication in the Internet of Things (IoT) realm. Recently, TRNGs based on nanoscale materials have gained considerable attention for avoiding conventional and predictable hardware circuitry designs that can be vulnerable to machine learning (ML) attacks. In this article, a low-power, low-cost TRNG developed by exploiting stochastic ferroelectric polarization switching in 2D ferroelectric CuInP2S6 (CIPS)-based capacitive structures is reported. The stochasticity arises from the probabilistic switching of independent electrical dipoles. The TRNG exhibits enhanced stochastic variability with near-ideal entropy, uniformity, uniqueness, Hamming distance, and independence from autocorrelation variations. Its unclonability is systematically examined using device-to-device variations. The generated cryptographic bitstreams pass the National Institute of Standards and Technology (NIST) randomness tests. This nanoscale CIPS-based TRNG is circuit-integrable and exhibits potential for hardware security in edge devices with advanced data encryption.
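The NIST randomness tests mentioned in the abstract include the frequency (monobit) test from SP 800-22; a minimal sketch of just that one test follows (a real evaluation runs the full battery, since a balanced stream alone does not prove randomness):

```python
import math

def monobit_p_value(bits):
    # NIST SP 800-22 frequency (monobit) test: a uniform random bitstream
    # should contain roughly equal numbers of ones and zeros.
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)          # map 0 -> -1, 1 -> +1 and sum
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))    # p >= 0.01 passes at the usual level

balanced = [0, 1] * 500          # perfectly balanced stream of 1000 bits
biased = [1] * 900 + [0] * 100   # heavily biased stream
print(monobit_p_value(balanced) >= 0.01, monobit_p_value(biased) >= 0.01)  # True False
```

Note that the alternating stream above passes this test but would fail the suite's runs test, which is exactly why the full battery matters for certifying a TRNG.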
APA, Harvard, Vancouver, ISO, and other styles
39

Ponniah, Krishna Kumar, and Bharathi Retnaswamy. "A novel deep learning based intrusion detection system for the IoT-Cloud platform with blockchain and data encryption mechanisms." Journal of Intelligent & Fuzzy Systems, October 10, 2023, 1–18. http://dx.doi.org/10.3233/jifs-221873.

Full text
Abstract:
The Internet of Things (IoT) integrated Cloud (IoT-Cloud) has received much attention in the past decade. This technology's rapid growth makes securing it even more critical. As a result, it has become essential to protect data from attackers to maintain its integrity, confidentiality, and privacy, along with the procedures required to handle it. Existing methods for detecting network anomalies are typically based on traditional machine learning (ML) models such as linear regression (LR), support vector machine (SVM), and so on. Although these methods can produce some outstanding results, they have low accuracy and rely heavily on manual traffic feature design, which has become obsolete in the age of big data. To overcome such drawbacks in intrusion detection (ID), this paper proposes a new deep learning (DL) model, namely Morlet Wavelet Kernel Function included Long Short-Term Memory (MWKF-LSTM), to recognize intrusions in the IoT-Cloud environment. Initially, to maintain user privacy in the network, a SHA-512 hashing mechanism incorporated into a blockchain authentication (SHABA) model is developed to check the authenticity of every device/user in the network before data is uploaded to the cloud. After successful authentication, the data is transmitted to the cloud through various gateways. Then the intrusion detection system (IDS) using MWKF-LSTM is implemented to identify the type of intrusions present in the received IoT data. The MWKF-LSTM classifier is paired with a Differential Evaluation based Dragonfly Algorithm (DEDFA) optimal feature selection (FS) model to increase classification performance. After ID, the non-attacked data is encrypted and stored securely in the cloud utilizing the Enhanced Elliptical Curve Cryptography (E2CC) mechanism. Finally, in the data retrieval phase, the user's authentication is checked again to ensure user privacy and prevent intruders from accessing the encrypted data in the cloud.
Simulations and statistical analysis are performed, and the outcomes prove the superior performance of the presented approach over existing models.
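The SHA-512-based blockchain authentication idea can be sketched as a simple hash chain. This toy (block fields and payloads are illustrative, not the SHABA design) shows the core property: each block commits to its predecessor's hash, so tampering with any record breaks verification of the chain:

```python
import hashlib

def sha512_hex(data: bytes) -> str:
    return hashlib.sha512(data).hexdigest()

def make_block(prev_hash: str, payload: str) -> dict:
    # Each block's hash covers both its payload and the previous hash.
    return {"prev": prev_hash, "payload": payload,
            "hash": sha512_hex((prev_hash + payload).encode())}

def verify_chain(chain) -> bool:
    for i, block in enumerate(chain):
        # Recompute each block's hash and check the link to its predecessor.
        if block["hash"] != sha512_hex((block["prev"] + block["payload"]).encode()):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("0" * 128, "device-42 registered")
chain = [genesis, make_block(genesis["hash"], "sensor reading: 21.5C")]
assert verify_chain(chain)
chain[0]["payload"] = "device-99 registered"   # tamper with the first record
assert not verify_chain(chain)
```

A production scheme would additionally sign blocks and distribute the ledger, but the hash-chain linkage above is what makes recorded authentications tamper-evident.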
APA, Harvard, Vancouver, ISO, and other styles
40

Patel, Vishva, Hitasvi Shukla, and Aashka Raval. "Enhancing Botnet Detection With Machine Learning And Explainable AI: A Step Towards Trustworthy AI Security." International Journal For Multidisciplinary Research 7, no. 2 (2025). https://doi.org/10.36948/ijfmr.2025.v07i02.39353.

Full text
Abstract:
The rapid proliferation of botnets, armies of compromised machines controlled by malicious actors remotely, has played a pivotal role in the increase in cyber-attacks, such as Distributed Denial-of-Service (DDoS) attacks, credential theft, data exfiltration, command-and-control (C2) activity, and automated exploitation of vulnerabilities. Legacy botnet detection methods, founded on signature matching and deep packet inspection (DPI), are rapidly becoming a relic of the past because of the prevalence of encryption schemes like TLS 1.3, DNS-over-HTTPS (DoH), and encrypted VPN tunneling. These encryption mechanisms conceal packet payloads, making traditional network monitoring technology unsuitable for botnet detection. Faced with the challenge, ML-based botnet detection mechanisms have risen to the top. Existing ML-based approaches, however, are marred by two inherent weaknesses: (1) Lack of granularity in detection because most models are based on binary classification, with no distinction of botnet attack variants, and (2) Uninterpretability, where high-performing AI models behave like black-box mechanisms, which limits trust in security automation and leads to high false positives, thereby making threat analysis difficult for security practitioners. To overcome these challenges, this study proposes an AI-based, multi-class classification botnet detection system for encrypted network traffic that includes Explainable AI (XAI) techniques for improving model explainability and decision transparency. Two datasets, CICIDS-2017 and CTU-NCC, are used in this study, where a systematic data preprocessing step was employed to maximise data quality, feature representation, and model performance. Preprocessing included duplicate record removal, missing and infinite value imputation, categorical feature transformation, and removal of highly correlated and zero-variance features to minimise model bias. 
Dimensionality reduction was performed using Principal Component Analysis (PCA), lowering the CICIDS-2017 feature set from 70 to 34 and the CTU-NCC set from 17 to 4 to maximise computational efficiency. Additionally, to deal with skewed class distributions, the Synthetic Minority Over-Sampling Technique (SMOTE) was employed to synthesise minority class samples and provide balanced representation of botnet attack types. For CICIDS-2017, we used three machine learning algorithms: Random Forest (RF) with cross-validation (0.98 accuracy, 100K samples per class), eXtreme Gradient Boosting (XGB) with Bayesian optimisation (0.997 accuracy, 180K samples per class), and our recently introduced Hybrid K-Nearest Neighbours (KNN) + Random Forest (RF) model, resulting in state-of-the-art accuracy of 0.99 (180K samples per class). The CTU-NCC dataset was divided across three network sensors and processed separately. Random Forest (RF), Decision Tree (DT), and KNN models were trained independently for each sensor, and to enhance performance, ensemble learning methods such as stacking and voting were applied to combine the results from the sensors. The resulting accuracies were: Random Forest (stacking: 99.38%, voting: 99.35%), Decision Tree (stacking: 99.68%, voting: 91.65%), and KNN (stacking: 97.53%, voting: 97.11%). Explainable AI (XAI) techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) were integrated to provide enhanced interpretability for eXtreme Gradient Boosting and our Hybrid KNN + Random Forest model, offering explanations for model decisions and enhancing analyst confidence in the system's predictions. Our key contribution is the Hybrid KNN + Random Forest system with 0.99 accuracy and built-in explainability. We present an accurate, scalable, and deployable AI-based solution for botnet attacks.
Our experimentation shows that the multi-class classification method greatly assists in botnet attack discrimination, and Explainable AI (XAI) helps enhance clarity and is thus a strong, practical solution in the real case of botnet detection in an encrypted network scenario.
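The SMOTE step used in the pipeline above can be sketched in miniature. This pure-Python toy (function names and the 2-D sample points are illustrative, not the paper's features) synthesizes a new minority sample on the line segment between a real sample and one of its k nearest minority neighbours:

```python
import math
import random

def smote(minority, n_new, k=2, rng=random):
    # Core SMOTE idea: new sample = x + gap * (neighbour - x), gap in [0, 1).
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: math.dist(x, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()
        synthetic.append(tuple(xi + gap * (ni - xi) for xi, ni in zip(x, nb)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.3)]
new_points = smote(minority, n_new=4)
assert len(new_points) == 4
# Interpolation keeps every synthetic point inside the minority class's
# bounding box, unlike naive duplication or random noise.
for px, py in new_points:
    assert 0.8 <= px <= 1.2 and 0.9 <= py <= 1.3
```

In practice one would use a library implementation (e.g. imbalanced-learn's `SMOTE`), but the interpolation above is the mechanism that balances rare botnet-attack classes before training.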
APA, Harvard, Vancouver, ISO, and other styles
41

BOPPANA, VENKAT RAVITEJA. "Future Trends in Cloud-based CRM Solutions for Healthcare." EPH-International Journal of Business & Management Science 9, no. 2 (2023). http://dx.doi.org/10.53555/eijbms.v9i2.177.

Full text
Abstract:
Cloud-based Customer Relationship Management (CRM) solutions have revolutionized numerous industries, and healthcare is no exception. As healthcare organizations strive to enhance patient care, streamline operations, and improve data management, the adoption of advanced CRM systems is becoming increasingly vital. This paper delves into the emerging trends shaping the future of cloud-based CRM solutions within the healthcare sector, offering insights into how these innovations are set to transform the landscape. One significant trend is the integration of artificial intelligence (AI) and machine learning (ML) into CRM platforms. These technologies enable healthcare providers to analyze vast amounts of patient data, predict health outcomes, and personalize patient interactions more effectively. AI-driven CRM systems can identify patterns and trends that might be missed by human analysts, leading to more proactive and preventive care strategies. Another trend is the emphasis on enhanced data security and compliance. With the growing concern over data breaches and stringent regulatory requirements, healthcare CRM solutions are increasingly incorporating robust security measures. These include advanced encryption, regular security audits, and compliance with standards such as HIPAA, ensuring that patient information remains confidential and secure. Interoperability is also a key focus area, as seamless integration with other healthcare systems becomes essential. Future CRM solutions will likely offer improved compatibility with electronic health records (EHRs), telehealth platforms, and other critical systems, facilitating a more cohesive and comprehensive approach to patient care. Additionally, the shift towards patient-centered care is driving the development of CRM features that enhance patient engagement and experience. 
Tools such as patient portals, personalized communication, and remote monitoring capabilities are becoming standard, empowering patients to take a more active role in their healthcare journey.
APA, Harvard, Vancouver, ISO, and other styles
42

Bellamkonda, Srikanth. "Enhancing Network Security in Healthcare Institutions: Addressing Connectivity and Data Protection Challenges." International Journal of Innovative Research in Computer and Communication Engineering 07, no. 02 (2019). https://doi.org/10.15680/ijircce.2019.0702165.

Full text
Abstract:
The rapid adoption of digital technologies in healthcare has revolutionized patient care, enabling seamless data sharing, remote consultations, and enhanced medical record management. However, this digital transformation has also introduced significant challenges to network security and data protection. Healthcare institutions face a dual challenge: ensuring uninterrupted connectivity for critical operations and safeguarding sensitive patient information from cyber threats. These challenges are exacerbated by the increased use of interconnected devices, electronic health records (EHRs), and cloud-based solutions, which, while enhancing efficiency, expand the attack surface for malicious actors. This research focuses on addressing the pressing need for robust network security in healthcare institutions. It examines the unique vulnerabilities of healthcare networks, including the risks posed by outdated infrastructure, insufficient encryption protocols, and the lack of standardized security practices across systems. The paper highlights the critical role of advanced cybersecurity frameworks, such as zero-trust architectures and realtime threat detection systems, in mitigating risks. Additionally, it explores the integration of artificial intelligence (AI) and machine learning (ML) in enhancing the predictive capabilities of security tools, enabling institutions to proactively identify and neutralize potential threats. Connectivity remains a cornerstone of modern healthcare operations, facilitating collaboration among healthcare providers and ensuring timely access to patient data. However, maintaining connectivity without compromising security is a complex task. This research delves into strategies for achieving this balance, such as implementing secure Virtual Private Networks (VPNs), multi-factor authentication (MFA), and end-to-end encryption. 
It also emphasizes the importance of regular network assessments, employee training programs, and the adoption of compliance standards like HIPAA to ensure a secure and resilient network environment. The study includes case analyses of healthcare institutions that successfully navigated cybersecurity challenges, providing actionable insights into effective practices and lessons learned. These examples underscore the importance of integrating technological solutions with a culture of security awareness, emphasizing collaboration between IT professionals and medical staff. As cyber threats continue to evolve, healthcare institutions must remain vigilant and adaptive, embracing innovative solutions to safeguard their networks and protect patient data. This paper contributes to the broader discourse on cybersecurity in healthcare by proposing a comprehensive approach that addresses connectivity and data protection challenges while fostering operational efficiency. By prioritizing network security, healthcare institutions can build trust with patients, comply with regulatory requirements, and ensure the uninterrupted delivery of quality care. This research provides a roadmap for healthcare organizations seeking to strengthen their cybersecurity posture, highlighting the necessity of adopting a proactive, multi-layered approach to combat emerging threats. It calls for continuous investment in advanced technologies and emphasizes the role of collaboration among stakeholders to create a secure and connected healthcare ecosystem.
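The multi-factor authentication the abstract recommends is most often deployed as time-based one-time passwords. As a minimal sketch of that mechanism (the standard RFC 6238 TOTP algorithm, not anything specific to this paper), using only the Python standard library:

```python
import base64
import hmac
import struct

def totp(secret_b32: str, at_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    key = base64.b32decode(secret_b32)
    counter = at_time // step                      # number of elapsed time steps
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", T = 59 s, 8 digits
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", 59, digits=8))  # → "94287082"
```

The server and the user's device share only the secret; both derive the same short-lived code, so an intercepted code is useless after the 30-second window expires.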
APA, Harvard, Vancouver, ISO, and other styles
43

Kabade, Satish, and Akshay Sharma. "Securing Pension Systems with AI-Driven Risk Analytics and Cloud-Native Machine Learning Architectures." April 15, 2024. https://doi.org/10.5281/zenodo.15596984.

Full text
Abstract:
Pension systems are vital financial facilities that must be adequately protected against fraud, data threats, and other financial perils for the benefit of their stakeholders. Traditional methods of risk assessment are limited, especially for handling cybersecurity threats, compliance processes, and fraud prevention in real time. Implementing AI and ML in cloud-based architectures can therefore be an effective solution for improving the security of pension systems, automating risk analysis across all kinds of threats, and detecting various types of fraud. Drawing on supervised and unsupervised learning approaches, this paper examines the potential of AI to identify fraudulent activities and evaluate risks in pension systems. It also discusses architectures that allow scalability, and explores current architectures for real-time monitoring of the operating system and enhanced data encryption in the cloud. Utilizing AI in pension administration focuses on effectively identifying new challenges and employing predictive analytics to prevent or address them before they harm the pension fund adversely. The paper also discusses real cases and experiments demonstrating the feasibility of using Autoencoders and LSTMs to identify suspicious transactions and irregular pension transfers. In addition, it highlights issues including data privacy, interpretability of results, and AI bias in decision-making. We suggest future work in the following directions: federated learning to train secure AI models, and the adoption of ethical frameworks to improve model interpretability and fairness.
The significance of the issue, the analyses made, the conclusions drawn, and the measures recommended all suggest that applying AI to pension funds and pension management presents opportunities for enhanced pension security, fraud prevention, and legal compliance at the scale and complexity of cloud environments.
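The autoencoder-based fraud detection described above rests on one principle: learn a compact model of normal behavior, then flag inputs the model reconstructs poorly. A toy stand-in for that principle (a per-feature mean-reconstruction baseline rather than a real autoencoder; the feature values and the 3-sigma threshold are illustrative assumptions, not the paper's setup):

```python
import math

def fit_scores(train):
    """Fit a trivial 'reconstruction' model (per-feature mean of normal data)
    and derive an anomaly threshold from the training reconstruction errors."""
    n, d = len(train), len(train[0])
    mean = [sum(row[j] for row in train) / n for j in range(d)]
    errs = [sum((row[j] - mean[j]) ** 2 for j in range(d)) for row in train]
    mu = sum(errs) / n
    sd = math.sqrt(sum((e - mu) ** 2 for e in errs) / n)
    return mean, mu + 3 * sd  # model + 3-sigma anomaly threshold

def is_suspicious(x, mean, threshold):
    """Flag a transaction whose reconstruction error exceeds the threshold."""
    err = sum((xi - mi) ** 2 for xi, mi in zip(x, mean))
    return err > threshold

# Hypothetical feature vectors: [transfer amount, transfers per day]
normal = [[100 + i % 5, 1.0] for i in range(50)]   # routine pension transfers
mean, thr = fit_scores(normal)
print(is_suspicious([102, 1.0], mean, thr))   # typical transfer → False
print(is_suspicious([5000, 1.0], mean, thr))  # irregular transfer → True
```

An actual autoencoder or LSTM replaces the mean with a learned nonlinear compression, but the scoring logic (reconstruction error against a threshold calibrated on normal data) is the same.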
APA, Harvard, Vancouver, ISO, and other styles
44

Jia, Jingna, Dongyang Wang, Xuwen Gao, Yuqi Xu, Xiaoxuan Ren, and Guizheng Zou. "Alkaline-Earth-Metal-Ion Blending Enhanced Mechanoluminescence of Lanthanide Ion in MZnOS Host for Stress Sensing and Anticounterfeiting." Journal of Materials Chemistry C, 2023. http://dx.doi.org/10.1039/d2tc05541d.

Full text
Abstract:
Mechanoluminescence (ML) materials with highly efficient and multicolored emission are strongly anticipated in the fields of stress sensing and information encryption. Here, an alkaline-earth-metal-ion blending in host and lanthanide-ion doping-in-growth combined...
APA, Harvard, Vancouver, ISO, and other styles
45

T, Thilagam, and Aruna R. "LM-GA: A Novel IDS with AES and Machine Learning Architecture for Enhanced Cloud Storage Security." Journal of Machine and Computing, April 5, 2023, 69–79. http://dx.doi.org/10.53759/7669/jmc202303008.

Full text
Abstract:
Cloud Computing (CC) is a relatively new technology that allows for widespread access and storage on the internet. Despite its low cost and numerous benefits, cloud technology still confronts several obstacles, including data loss, quality concerns, and data-security threats such as recurring hacking. The security of data stored in the cloud has become a major worry for both Cloud Service Providers (CSPs) and users. As a result, a powerful Intrusion Detection System (IDS) must be set up to detect and prevent possible cloud threats at an early stage. Intending to develop a novel IDS, this paper introduces a new optimization concept named the Lion Mutated-Genetic Algorithm (LM-GA), hybridized with Machine Learning (ML) algorithms such as the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). Initially, the input text data is preprocessed and balanced to avoid redundant and vague data. The preprocessed data is then subjected to the hybrid Deep Learning (DL) model, namely the CNN-LSTM model, to obtain the IDS output. The intruded data is then discarded, and the non-intruded data is secured using the Advanced Encryption Standard (AES) encryption model. Optimal key selection is performed by the proposed LM-GA model, and the ciphertext is further secured via a steganography approach. NSL-KDD and UNSW-NB15 are the datasets used to verify the performance of the proposed LM-GA-based IDS in terms of average intrusion detection rate, accuracy, precision, recall, and F-Score.
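The abstract does not specify the LM-GA internals, but its key-selection step is a genetic-algorithm search over candidate keys. A hedged sketch of such a loop, where the bit-balance fitness function and the single-bit "mutation" step are purely illustrative placeholders for the paper's lion-mutation operator, and the 32-bit key size is a toy stand-in for an AES key:

```python
import random

KEY_BITS = 32  # toy size; AES-256 would use 256 bits

def fitness(key: int) -> int:
    # Placeholder fitness: a balanced bit count as a crude randomness proxy.
    # The actual LM-GA fitness criterion is not given in the abstract.
    ones = bin(key).count("1")
    return -abs(ones - KEY_BITS // 2)  # 0 is best

def evolve(pop_size: int = 20, generations: int = 50, seed: int = 0) -> int:
    rng = random.Random(seed)
    pop = [rng.getrandbits(KEY_BITS) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            mask = rng.getrandbits(KEY_BITS)
            child = (a & mask) | (b & ~mask)        # uniform crossover
            child ^= 1 << rng.randrange(KEY_BITS)   # mutation: flip one random bit
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best_key = evolve()
print(fitness(best_key))  # near 0: evolved toward a balanced bit pattern
```

In the paper's pipeline this search would feed the selected key into AES, with the resulting ciphertext then hidden via steganography.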
APA, Harvard, Vancouver, ISO, and other styles