
Journal articles on the topic 'Cloud-Based Machine Learning'


Consult the top 50 journal articles for your research on the topic 'Cloud-Based Machine Learning.'


You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Kumar Thopalle, Praveen. "Enhancing Security in Cloud-Based Storage Systems Using Machine Learning." International Journal of Science and Research (IJSR) 12, no. 11 (2023): 2216–22. http://dx.doi.org/10.21275/sr24905010155.

2

Chmielecki, Przemysław. "Machine Learning Based on Cloud Solutions." Edukacja – Technika – Informatyka 27, no. 1 (2019): 132–38. http://dx.doi.org/10.15584/eti.2019.1.17.

3

Han, Bo, and Rongli Zhang. "Virtual Machine Allocation Strategy Based on Statistical Machine Learning." Mathematical Problems in Engineering 2022 (July 5, 2022): 1–6. http://dx.doi.org/10.1155/2022/8190296.

Abstract:
At present, big data cloud computing is widely used in many enterprises and serves tens of millions of users. One of the core technologies of big data cloud services is computer virtualization. The reasonable allocation of virtual machines on available hosts is of great significance to the performance optimization of cloud computing. With the continuous development of information technology and the growing number of computer users, the variety of virtualization technologies and the increasing number of virtual machines in the network make the effective allocation of virtualization resources more and more difficult. To address and optimize this problem, we propose a virtual machine allocation algorithm based on statistical machine learning. According to the resource requirements of each virtual machine in the cloud service, a corresponding comprehensive performance analysis model is established, and a reasonable virtual machine allocation algorithm for the hosts in the resource pool is realized according to the virtualization technology type or mode provided by the model. Experiments show that this method has advantages in overall performance, load balancing, and support for different types of virtualization.
4

Singh, Davinder Pal. "Cloud-Based Machine Learning: Opportunities and Challenges." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 6 (2024): 264–70. http://dx.doi.org/10.32628/cseit24106177.

Abstract:
This comprehensive article explores the transformative impact of cloud-based machine learning (ML) on modern enterprises, examining both opportunities and challenges in implementation. The article investigates the rapidly growing cloud computing market and its ML segment, revolutionizing how organizations approach data analytics and business intelligence. Through detailed analysis of enterprise implementations, the article demonstrates how cloud ML solutions have democratized access to advanced analytics, significantly reducing operational costs while improving data processing efficiency. The article examines key aspects, including scalability advantages, cost efficiencies, and technical complexities, while providing evidence-based best practices for successful implementation. Drawing from multiple industry studies and real-world deployments, the article presents a framework for organizations to navigate challenges in data privacy, vendor dependencies, and skill requirements while maximizing the benefits of cloud-based ML solutions.
5

Chkirbene, Zina, Aiman Erbad, Ridha Hamila, Ala Gouissem, Amr Mohamed, and Mounir Hamdi. "Machine Learning Based Cloud Computing Anomalies Detection." IEEE Network 34, no. 6 (2020): 178–83. http://dx.doi.org/10.1109/mnet.011.2000097.

6

Sengupta, Nandita, and Ramya Chinnasamy. "Machine Learning Based Medicinal Care in Cloud." International Journal of Computer Trends and Technology 47, no. 4 (2017): 219–26. http://dx.doi.org/10.14445/22312803/ijctt-v47p135.

7

Subramanian, E. K., and Latha Tamilselvan. "A focus on future cloud: machine learning-based cloud security." Service Oriented Computing and Applications 13, no. 3 (2019): 237–49. http://dx.doi.org/10.1007/s11761-019-00270-0.

8

Gupta, Manish, Ihtiram Raza Khan, B. Gomathy, and Ansuman Samal. "Hybrid Multi-User Based Cloud Data Security for Medical Decision Learning Patterns." ECS Transactions 107, no. 1 (2022): 2559–73. http://dx.doi.org/10.1149/10701.2559ecst.

Abstract:
Machine learning plays a vital role in real-time cloud-based medical computing systems. However, most computing servers lack a data security and recovery scheme across multiple virtual machines due to high computing cost and time, and these cloud-based medical applications rely on static security parameters for cloud data security. Cloud-based medical applications require multiple servers to store medical records or machine learning patterns for decision making. Due to high computational memory and time, these cloud systems require an efficient data security framework to provide strong data access control among multiple users. In this paper, a hybrid cloud data security framework is developed to improve data security for large machine learning patterns in a real-time cloud computing environment. The work is implemented in two phases: a data replication phase and a multi-user data access security phase. Initially, machine decision patterns are replicated among multiple servers for the data recovery phase. In the multi-access cloud data security framework, a hybrid multi-access key based data encryption and decryption model is applied to large machine learning medical patterns for the data recovery and security process. Experimental results show that the proposed two-phase data recovery and security framework has better computational efficiency than conventional approaches on large medical decision patterns.
9

Majjaru, Chandrababu, and Senthil Kumar K. "Proficient Machine Learning Techniques for a Secured Cloud Environment." International Journal of Engineering and Advanced Technology (IJEAT) 11, no. 6 (2022): 74–81. https://doi.org/10.35940/ijeat.F3730.0811622.

Abstract:
Many different checks, rules, processes, and technologies work together to keep cloud-based applications and infrastructure safe and secure against cyberattacks. Data security, customer privacy, regulatory enforcement, and device and user authentication regulations are all protected by these safety measures. Insecure access points, DDoS attacks, data breaches, and data loss are the most pressing issues in cloud security. In the cloud computing context, researchers have looked at several methods for detecting intrusions. Cloud security best practices such as host and middleware security, infrastructure and virtualization security, and application system and data security make up the bulk of these approaches, which are based on more traditional means of detecting abuse and anomalies. Machine learning-based strategies for securing cloud infrastructure are the topic of this work, and a number of unresolved research issues are outlined to be addressed in the future.
10

Talwani, Suruchi, Jimmy Singla, Gauri Mathur, et al. "Machine-Learning-Based Approach for Virtual Machine Allocation and Migration." Electronics 11, no. 19 (2022): 3249. http://dx.doi.org/10.3390/electronics11193249.

Abstract:
Due to its ability to supply reliable, robust and scalable computational power, cloud computing is becoming increasingly popular in industry, government, and academia. High-speed networks connect both virtual and real machines in cloud computing data centres. The system's dynamic provisioning environment depends on the requirements of end-user computer resources. Hence, the operational costs of a particular data center are relatively high. To meet service level agreements (SLAs), it is essential to assign an appropriate maximum number of resources. Virtualization is a fundamental technology used in cloud computing. It assists cloud providers to manage data centre resources effectively and, hence, improves resource usage by creating several virtual machine (VM) instances. Furthermore, VMs can be dynamically integrated into a few physical nodes based on current resource requirements using live migration, while meeting SLAs. As a result, unoptimised and inefficient VM consolidation can reduce performance when an application is exposed to varying workloads. This paper introduces a new machine-learning-based approach for dynamically integrating VMs based on adaptive predictions of usage thresholds to achieve acceptable service-level agreement (SLA) standards. Dynamic data was generated during runtime to validate the efficiency of the proposed technique compared with other machine learning algorithms.
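The consolidation loop this abstract describes can be illustrated with a few lines of Python. This is a hypothetical sketch, not the authors' implementation: the window size, the linear trend model, and the `should_migrate` rule are all our own illustrative assumptions.

```python
# Hypothetical sketch of adaptive-threshold VM consolidation (not the paper's code).
# A regressor predicts a host's next CPU utilization from a sliding window of
# past samples; hosts predicted to exceed the threshold trigger VM migration.
import numpy as np
from sklearn.linear_model import LinearRegression

def predict_next_utilization(history, window=5):
    """Fit a simple trend model on the last `window` samples, extrapolate one step."""
    y = np.asarray(history[-window:], dtype=float)
    X = np.arange(len(y)).reshape(-1, 1)
    model = LinearRegression().fit(X, y)
    return float(model.predict([[len(y)]])[0])

def should_migrate(history, upper=0.8, safety=0.05):
    """Adaptive rule: tighten the threshold when recent utilization is volatile."""
    threshold = upper - safety * np.std(history[-5:])
    return predict_next_utilization(history) > threshold

cpu_history = [0.55, 0.60, 0.68, 0.74, 0.81]   # toy utilization trace
print(should_migrate(cpu_history))              # True -> migrate VMs off this host
```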
11

Dongre, Yashwant. "Optimizing Resource Allocation in Cloud-Based Information Systems through Machine Learning Algorithms." Journal of Information Systems Engineering and Management 10, no. 1s (2024): 443–54. https://doi.org/10.52783/jisem.v10i1s.227.

Abstract:
Efficient resource sharing has become essential for improving system performance and lowering running costs as cloud-based information systems continue to serve a wide range of large-scale applications. Traditional ways of allocating resources do not always work well when workloads change, which wastes time and money. This article proposes a machine-learning-based framework for making the best use of cloud resources, focusing on features that can predict and adapt to changes in workload in real time. Our method uses both supervised and unsupervised learning to predict resource needs accurately and to find the best way to distribute resources across virtual machines with the least delay and the highest cost-effectiveness. The study used real-world cloud task data to run many models that compared standard heuristic methods to machine-learning-based distribution. According to the results, the system is much more efficient, with up to 35% less wasted resources and 25% faster response times. We discuss how choosing the right model (decision trees, neural networks, and support vector machines) affects the accuracy of predictions and the amount of work that needs to be done. The study also discusses how the machine learning system can be scaled up or down, showing that it can work with different cloud platforms and types of applications. The proposed method lowers the need for human work by automating resource sharing, letting cloud providers handle resources better and making users happier overall. This study adds to the growing field of cloud resource optimization and shows how important machine learning methods will be in designing future cloud infrastructure. The results show that machine learning is a good, scalable way to handle resources in cloud settings that are becoming more complicated all the time.
12

Cherukuri, Bangar Raju. "Quantum machine learning: Transforming cloud-based AI solutions." International Journal of Science and Research Archive 1, no. 1 (2020): 110–22. https://doi.org/10.30574/ijsra.2020.1.1.0041.

Abstract:
This study examines the feasibility of placing quantum computing technology into cloud ML systems to make QML far faster and more scalable. Quantum computers tackle standard ML performance challenges through their special traits, including superposition and entanglement. Implementing QML on cloud-based platforms unlocks the specific advantages of scalability and accessibility while providing the required flexibility. Cloud-based systems can better predict results with faster performance when they use quantum algorithms to process machine learning tasks. This research examines how QML connects to cloud computing technology while showing how these industries can use it to handle limited processing power and improve overall system performance.
13

Amarnath, Raveendra N., and Gurumoorthi Gurulakshmanan. "Cloud-based machine learning algorithms for anomalies detection." Indonesian Journal of Electrical Engineering and Computer Science 35, no. 1 (2024): 156–64. http://dx.doi.org/10.11591/ijeecs.v35.i1.pp156-164.

Abstract:
Gradient boosting machines harness the inherent capabilities of decision trees and meticulously correct their errors in a sequential fashion, culminating in remarkably precise predictions. Word2Vec, a prominent word embedding technique, occupies a pivotal role in natural language processing (NLP) tasks. Its proficiency lies in capturing intricate semantic relationships among words, thereby facilitating applications such as sentiment analysis, document classification, and machine translation to discern subtle nuances present in textual data. Bayesian networks introduce probabilistic modeling capabilities, predominantly in contexts marked by uncertainty. Their versatile applications encompass risk assessment, fault diagnosis, and recommendation systems. Gated recurrent units (GRU), a variant of recurrent neural networks, emerge as a formidable asset in modeling sequential data. Both training and testing are crucial to the success of an intrusion detection system (IDS). During the training phase, several models are created, each of which can distinguish typical from anomalous patterns within a given dataset. To acquire passwords and credit card details, "phishing" usually entails impersonating a trusted company. Predictions of student performance on academic tasks are improved by hyperparameter optimization of the gradient boosting regression tree using the grid search approach.
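The closing remark about grid-search tuning of a gradient boosting regression tree maps onto a standard scikit-learn pattern. The sketch below is illustrative only: the parameter grid and the synthetic data are our assumptions, not the study's setup.

```python
# Illustrative grid search over a gradient boosting regressor, mirroring the
# hyperparameter-optimization step mentioned in the abstract.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=8, noise=0.1, random_state=0)
param_grid = {
    "n_estimators": [50, 100, 200],
    "learning_rate": [0.05, 0.1],
    "max_depth": [2, 3],
}
search = GridSearchCV(GradientBoostingRegressor(random_state=0),
                      param_grid, cv=5, scoring="r2")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```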
15

Jo, Sungbae, Kijun Han, and Dongkyun Kim. "Machine Learning Based FaaS Cloud Region Selection Method." KIISE Transactions on Computing Practices 27, no. 7 (2021): 325–30. http://dx.doi.org/10.5626/ktcp.2021.27.7.325.

16

Nasser, Ahmed R., and Ali M. Mahmood. "Cloud-Based Parkinson’s Disease Diagnosis Using Machine Learning." Mathematical Modelling of Engineering Problems 8, no. 6 (2021): 915–22. http://dx.doi.org/10.18280/mmep.080610.

Abstract:
Parkinson's disease (PD) harms the human brain's nervous system and can affect the patient's life. However, diagnosing PD in its early stages can lead to early treatment and save costs. In this paper, a cloud-based machine learning intelligent diagnosis system is proposed for PD based on the patient's voice. The proposed system is composed of two stages. In the first stage, two machine learning approaches, Random Forest (RF) and Long Short-Term Memory (LSTM), are applied to generate a model that can be used for early treatment of PD. In this stage, a feature selection method is used to choose the minimum subset of the best features, which can be utilized later to generate the classification model. In the second stage, the best diagnosis model is deployed in cloud computing, and an Android application is designed to provide the interface to the diagnosis model. The performance evaluation of the diagnosis model is conducted based on the F-score accuracy measurement. The result shows that the LSTM model has superior accuracy, with an F-score of 95%, compared with the RF model. Therefore, the LSTM model is selected for implementing a cloud-based PD diagnosis application using Python and Java.
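The model-comparison step the abstract describes can be sketched as follows. Synthetic data stands in for the voice features, and only the Random Forest branch is shown; the LSTM branch would be scored with the same F-measure.

```python
# Sketch of the evaluation described above: train a classifier on (stand-in)
# voice features and score it with the F-measure used to pick the best model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("RF F-score:", round(f1_score(y_te, rf.predict(X_te)), 3))
```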
17

Jaiswal, Rahul, and Gobi N. "Security Challenges and Solutions in Cloud-Based Machine Learning Systems for Big Data." International Journal of Research Publication and Reviews 5, no. 3 (2024): 928–31. http://dx.doi.org/10.55248/gengpi.5.0324.0635.

18

Devi, K. S. Saraswathi. "Machine Learning for Cloud Resource Allocation." Journal of Scholastic Engineering Science and Management 2, no. 9 (2023): 31–36. https://doi.org/10.5281/zenodo.8311288.

Abstract:
Cloud resource allocation is a critical challenge in cloud computing. Traditional resource allocation schemes are often inefficient and can lead to over-provisioning or under-provisioning of resources. Machine learning can be used to develop more intelligent resource allocation schemes that can predict workload demand and optimize resource allocation accordingly. In this paper, we review the recent advances in machine learning for cloud resource allocation. We discuss the different types of machine learning algorithms that have been used for cloud resource allocation, as well as the challenges and limitations of these algorithms. We also present a case study on the use of machine learning for cloud resource allocation in a real-world application. The results of our study show that machine learning can be a promising approach for improving the efficiency and effectiveness of cloud resource allocation. However, there are still a number of challenges that need to be addressed, such as the need for more accurate and reliable machine learning models, as well as the need to consider the security and privacy implications of machine learning-based resource allocation schemes.
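The core idea this survey reviews, predicting workload demand and provisioning accordingly, can be made concrete with a small sketch. The hour-of-day feature, the regressor, and the 15% headroom rule are our invented placeholders, not a scheme from the paper.

```python
# Minimal demand-forecasting sketch: fit a regressor on past workload, predict
# the next period's demand, and provision capacity with a safety headroom.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
hours = np.arange(168)                               # one week of hourly samples
demand = 50 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, 168)

X = (hours % 24).reshape(-1, 1)                      # hour-of-day feature
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, demand)

forecast = model.predict([[9]])[0]                   # expected demand at 09:00
allocation = int(np.ceil(forecast * 1.15))           # 15% headroom vs under-provisioning
print(f"forecast={forecast:.1f}, provisioned capacity={allocation}")
```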
19

Purohit, Abhishek. "AI and Machine Learning in The Cloud: This Involves Using AI and Machine Learning in Cloud Computing." International Journal of Scientific Research in Engineering and Management 09, no. 01 (2025): 1–9. https://doi.org/10.55041/ijsrem40753.

Abstract:
Artificial intelligence (AI) and machine learning (ML) have emerged as disruptive technologies that are redefining cloud computing. The incorporation of AI and machine learning into cloud platforms improves productivity, scalability, and flexibility, allowing organizations to handle massive amounts of data and make intelligent choices in real time. This study investigates the symbiotic relationship between AI/ML and cloud computing, focusing on how cloud architecture provides the computational capacity needed for AI and ML models while AI improves cloud resource allocation. The article discusses key breakthroughs such as AI-driven automation in cloud operations, predictive analytics, and intelligent application deployment. The report emphasizes the democratization of AI capabilities via cloud services, which make them available to small and medium-sized organizations (SMEs) without requiring considerable infrastructure investment. It also investigates the security hurdles, ethical concerns, and compliance issues that come with data-intensive AI systems on the cloud. Real-world applications include AI-powered recommendation systems, fraud detection, and tailored consumer experiences, demonstrating the strength of this synergy. The report indicates that incorporating AI and ML into cloud computing is critical for expanding technological landscapes, driving creativity, and helping enterprises to remain competitive. However, addressing data privacy, ethical AI usage, and fair access to cloud-based AI resources are crucial for long-term progress. This study provides insights for researchers and practitioners on leveraging AI/ML in the cloud to meet evolving technological and business needs.
20

Laxkar, Pradeep, and Nilesh Jain. "A Review of Scalable Machine Learning Architectures in Cloud Environments: Challenges and Innovations." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 11, no. 2 (2025): 2907–16. https://doi.org/10.32628/cseit25112764.

Abstract:
As the demand for machine learning (ML) and data analysis grows across industries, the need for scalable and efficient cloud-based architectures becomes critical. The increase in data generation, along with the increasing demand for advanced analytics and machine learning (ML), has necessitated the development of scalable architectures in cloud environments. Cloud computing provides a flexible and scalable solution, allowing organizations to efficiently process large datasets and deploy complex ML models without traditional hardware limitations. The review paper explores various cloud-based machine learning (ML) architectures, highlighting the scalability features of cloud platforms such as AWS, Azure, and GCP. This study also discusses emerging technologies like serverless computing, automated machine learning (AutoML), and microservices-based architectures that enhance the scalability of the cloud environment. Furthermore, challenges such as data security, talent gaps, and resource allocation inefficiencies are also considered. The paper concludes by evaluating innovative approaches that drive scalable ML in cloud environments, providing insights into the future landscape of cloud-based machine learning. In conclusion, this scalable cloud-based architecture provides a robust and flexible solution for organizations looking to implement machine learning and data analysis workflows. By leveraging distributed computing, containerization, and serverless technologies, the architecture can efficiently manage large datasets and complex models while maintaining cost-efficiency, security, and adaptability to future needs.
21

"A Review of Scalable Machine Learning Architectures in Cloud Environments: Challenges and Innovations." Journal of Global Research in Electronics and Communications 1, no. 4 (2025): 7–11. https://doi.org/10.5281/zenodo.15115138.

Abstract:
As the demand for machine learning (ML) and data analysis grows across industries, the need for scalable and efficient cloud-based architectures becomes critical. The increase in data generation, along with the increasing demand for advanced analytics and machine learning (ML), has necessitated the development of scalable architectures in cloud environments. Cloud computing provides a flexible and scalable solution, allowing organizations to efficiently process large datasets and deploy complex ML models without traditional hardware limitations. The review paper explores various cloud-based machine learning (ML) architectures, highlighting the scalability features of cloud platforms such as AWS, Azure, and GCP. This study also discusses emerging technologies like serverless computing, automated machine learning (AutoML), and microservices-based architectures that enhance the scalability of the cloud environment. Furthermore, challenges such as data security, talent gaps, and resource allocation inefficiencies are also considered. The paper concludes by evaluating innovative approaches that drive scalable ML in cloud environments, providing insights into the future landscape of cloud-based machine learning. In conclusion, this scalable cloud-based architecture provides a robust and flexible solution for organizations looking to implement machine learning and data analysis workflows. By leveraging distributed computing, containerization, and serverless technologies, the architecture can efficiently manage large datasets and complex models while maintaining cost-efficiency, security, and adaptability to future needs.
22

Manne, Tirumala Ashish Kumar. "Machine Learning for Intrusion Detection in Cloud-Based Systems." International Journal of Computing and Engineering 3, no. 1 (2022): 54–62. https://doi.org/10.47941/ijce.2765.

Abstract:
The proliferation of cloud computing has transformed data storage and processing but also introduced complex security challenges. Traditional Intrusion Detection Systems (IDS) often struggle in dynamic cloud environments due to scalability, adaptability, and the high rate of false positives. Machine Learning (ML) has emerged as a powerful tool to enhance IDS by enabling systems to learn from vast datasets, identify anomalous behavior, and adapt to evolving threats. This paper investigates the application of ML techniques such as supervised, unsupervised, and deep learning to intrusion detection in cloud-based systems. It reviews key methodologies, evaluates performance across widely used benchmark datasets (NSL-KDD, CICIDS2017), and highlights real-world implementations in commercial cloud platforms. The study also addresses critical challenges including data privacy, adversarial ML, real-time detection, and scalability. Through a comprehensive analysis, we identify promising research directions such as federated learning, explainable AI, and hybrid cloud-edge IDS architectures.
23

Tsai, Yao-Hong, Dong-Meau Chang, and Tse-Chuan Hsu. "Edge Computing Based on Federated Learning for Machine Monitoring." Applied Sciences 12, no. 10 (2022): 5178. http://dx.doi.org/10.3390/app12105178.

Abstract:
This paper focused on providing a general solution, based on edge computing and cloud computing in IoT, for machine monitoring in the manufacturing operations of small and medium-sized factories. For real-time operation, edge computing and cloud computing models cooperated seamlessly to perform information capture, event detection, and adaptive learning. The proposed IoT system processed regional low-level features for detection and recognition in edge nodes. Cloud computing, including fog computing, was responsible for mid- and high-level features by using the federated learning network. The system fully utilized all resources in the integrated deep learning network to achieve high-performance operation. The edge node was implemented by a simple camera embedded on a Terasic DE2-115 board to monitor machines and process data locally. Learning-based features were generated by cloud computing from the data sent by the edge, and the identification results could be obtained by combining mid- and high-level features with the nonlinear classifier. Therefore, each factory could monitor the real-time condition of its machines without operators and keep its data private. Experimental results showed the efficiency of the proposed method when compared with other methods.
24

Yadav, Apurv Singh. "Keyword Recognition Device Cloud Based." International Journal for Research in Applied Science and Engineering Technology 9, no. VIII (2021): 87–89. http://dx.doi.org/10.22214/ijraset.2021.37296.

Abstract:
Over the past few decades, speech recognition has been researched and developed tremendously. In the past few years, however, use of the Internet of Things has increased significantly, and with it the value of efficient speech recognition is greater than ever. With the significant improvements in machine learning and deep learning, speech recognition has become more efficient and applicable. This paper focuses on developing an efficient speech recognition system using deep learning.
25

Gong, Fanghai. "Workflow Scheduling Based on Mobile Cloud Computing Machine Learning." Wireless Communications and Mobile Computing 2021 (July 5, 2021): 1–13. http://dx.doi.org/10.1155/2021/9923326.

Abstract:
In recent years, cloud workflow task scheduling has been an important research topic in the business world. Cloud workflow task scheduling means that the workflow tasks submitted by users are allocated to appropriate computing resources for execution, and the corresponding fees are paid in real time according to the usage of resources. Most ordinary users are mainly concerned with the two service quality indicators of workflow task completion time and execution cost. Therefore, how cloud service providers design a scheduling algorithm to optimize task completion time and cost is a very important issue. This paper proposes research on workflow scheduling based on mobile cloud computing machine learning, conducted using literature research, experimental analysis, and other methods. Mobile cloud computing, machine learning, task scheduling, and other related theories are studied in depth, and a workflow task scheduling system model is established based on mobile cloud computing machine learning. Using different algorithms for task completion time, task service cost, task scheduling, and resource usage, the influence of different tasks on the experimental results is analyzed in many aspects. The algorithm in this paper speeds up scheduling time by about 7% under different numbers of tasks and reduces scheduling cost by about 2% compared with other algorithms, and it is clearly optimized in time scheduling and task scheduling.
26

Girish, L., and Sridhar K. N. Rao. "Quantifying Sensitivity and Performance Degradation of Virtual Machines Using Machine Learning." Journal of Computational and Theoretical Nanoscience 17, no. 9 (2020): 4055–60. http://dx.doi.org/10.1166/jctn.2020.9019.

Abstract:
Virtualized data centers bring a lot of benefits by reducing the heavy usage of physical hardware. Nowadays, the usage of cloud infrastructures is rapidly increasing in all fields to provide proper services on demand. In a cloud data center, achieving efficient resource sharing between virtual machines and physical machines is very important. To achieve efficient resource sharing, the performance degradation of a virtual machine and the sensitivity of a virtual machine must be modeled and predicted correctly. In this work we use machine learning techniques such as decision tree, K-nearest neighbor, and logistic regression to calculate the sensitivity of a virtual machine. The dataset used for the experiment was collected from an OpenStack cloud environment. We execute two scenarios in this experiment to evaluate the performance of the three mentioned classifiers based on precision, recall, sensitivity, and specificity. We achieved good results using the decision tree classifier, with precision of 88.8%, recall of 80%, and accuracy of 97.30%.
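The three-classifier comparison reported above follows a routine scikit-learn pattern; the sketch below uses synthetic data in place of the OpenStack measurements and is our illustration, not the authors' code.

```python
# Compare decision tree, k-nearest neighbours, and logistic regression on
# precision and recall, as in the abstract's evaluation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
for name, clf in [("decision tree", DecisionTreeClassifier(random_state=1)),
                  ("knn", KNeighborsClassifier()),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name, round(precision_score(y_te, pred), 3), round(recall_score(y_te, pred), 3))
```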
27

Meera, A., and S. Swamynathan. "Queue Based Q-Learning for Efficient Resource Provisioning in Cloud Data Centers." International Journal of Intelligent Information Technologies 11, no. 4 (2015): 37–54. http://dx.doi.org/10.4018/ijiit.2015100103.

Abstract:
Cloud computing is a novel paradigm that offers virtual resources on demand through the internet. Due to the rapid demand for cloud resources, it is difficult to estimate users' demand. As a result, the complexity of resource provisioning increases, which leads to the requirement for adaptive resource provisioning. In this paper, the authors address the problem of efficient resource provisioning through a queue-based Q-learning algorithm using a reinforcement learning agent. Reinforcement learning has been proven in various domains for automatic control and resource provisioning. In the absence of a complete environment model, reinforcement learning can be used to define optimal allocation policies. The proposed queue-based Q-learning agent analyses the CPU utilization of all active virtual machines (VMs) and detects the least-loaded virtual machine for resource provisioning. It detects the least-loaded virtual machines through the interquartile range, and using the queue size of virtual machines it looks ahead by one time step to find the optimal virtual machine for provisioning.
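The interquartile-range step can be sketched directly; the 1.5-IQR fence below is a common outlier rule and our own reading of the abstract, not necessarily the authors' exact criterion.

```python
# Flag the least-loaded VMs: those whose CPU utilization falls below
# Q1 - 1.5*IQR. A Q-learning agent would then choose among these candidates.
import numpy as np

def least_loaded_vms(cpu_utilization):
    q1, q3 = np.percentile(cpu_utilization, [25, 75])
    lower_fence = q1 - 1.5 * (q3 - q1)
    return [i for i, u in enumerate(cpu_utilization) if u < lower_fence]

# Toy utilizations for six active VMs; VM 4 is a clear low outlier.
print(least_loaded_vms([0.62, 0.58, 0.65, 0.60, 0.05, 0.63]))  # -> [4]
```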
28

Hassan, M. K., A. Babiker, M. B. M. Amien, and Hamad. "SLA Management For Virtual Machine Live Migration Using Machine Learning with Modified Kernel and Statistical Approach." Engineering, Technology & Applied Science Research 8, no. 1 (2018): 2459–63. https://doi.org/10.5281/zenodo.1207258.

Abstract:
Application of cloud computing is rising substantially due to its capability to deliver scalable computational power. The system attempts to allocate a maximum number of resources in a manner that ensures that all service level agreements (SLAs) are maintained. Virtualization is considered a core technology of cloud computing. Virtual machine (VM) instances allow cloud providers to utilize datacenter resources more efficiently. Moreover, by using dynamic VM consolidation with live migration, VMs can be placed according to their current resource requirements on the minimal number of physical nodes, consequently maintaining SLAs. Accordingly, non-optimized and inefficient VM consolidation may lead to performance degradation. Therefore, to ensure acceptable quality of service (QoS) and SLAs, a machine learning technique with a modified kernel for VM live migration based on adaptive prediction of utilization thresholds is presented. The efficiency of the proposed technique is validated with different workload patterns from PlanetLab servers.
29

Kanaker, Hasan, Nader Abdel Karim, Samer A.B. Awwad, Nurul H.A. Ismail, Jamal Zraqou, and Abdulla M. F. Al ali. "Trojan Horse Infection Detection in Cloud Based Environment Using Machine Learning." International Journal of Interactive Mobile Technologies (iJIM) 16, no. 24 (2022): 81–106. http://dx.doi.org/10.3991/ijim.v16i24.35763.

Abstract:
Cloud computing technology is known as a distributed computing network, which consists of a large number of servers connected via the internet. This technology involves many worthwhile resources, such as applications, services, and large database storage. Users have the ability to access cloud services and resources through web services. Cloud computing provides a considerable number of benefits, such as effective virtualized resources, cost efficiency, self-service access, flexibility, and scalability. However, many security issues are present in the cloud computing environment. One of the most common security challenges in the cloud computing environment is the trojan horse. Trojan horses can disrupt cloud computing services and damage the resources, applications, or virtual machines in the cloud structure. Trojan horse attacks are dangerous, complicated, and very difficult to detect. In this research, eight machine learning classifiers for trojan horse detection in a cloud-based environment have been investigated. The accuracy of the cloud trojan horse detection rate has been investigated using dynamic analysis, the Cuckoo sandbox, and the Weka data mining tool. Based on the conducted experiments, SMO and Multilayer Perceptron were found to be the best classifiers for trojan horse detection in a cloud-based environment. Although SMO and Multilayer Perceptron both achieved the highest accuracy rate of 95.86%, Multilayer Perceptron outperformed SMO in terms of Receiver Operating Characteristic (ROC) area.
30

Ahmed, Qazi Omair. "Machine Learning for Intrusion Detection in Cloud Environments: A Comparative Study." Journal of Artificial Intelligence General Science (JAIGS) 6, no. 1 (2024): 550–63. https://doi.org/10.60087/jaigs.v6i1.287.

Abstract:
The rapid growth of cloud computing has led to an increased demand for security mechanisms to safeguard sensitive data and resources from cyber threats. Intrusion detection systems (IDS) play a crucial role in identifying unauthorized access or malicious activities within cloud environments. This paper presents a comparative study of machine learning (ML) techniques used in intrusion detection for cloud computing platforms. Various ML algorithms, including decision trees, support vector machines, k-nearest neighbors, and neural networks, are evaluated based on their performance in detecting different types of attacks. The study assesses the accuracy, efficiency, and scalability of these techniques in cloud environments, highlighting their strengths and limitations. The findings provide valuable insights into the selection of appropriate machine learning models for effective intrusion detection in dynamic and scalable cloud systems.
31

Jimmy, FNU. "Machine Learning for Intrusion Detection in Cloud Environments: A Comparative Study." Journal of Artificial Intelligence General Science (JAIGS) 5, no. 1 (2024): 501–12. https://doi.org/10.60087/jaigs.v5i1.283.

Abstract:
The rapid growth of cloud computing has led to an increased demand for security mechanisms to safeguard sensitive data and resources from cyber threats. Intrusion detection systems (IDS) play a crucial role in identifying unauthorized access or malicious activities within cloud environments. This paper presents a comparative study of machine learning (ML) techniques used in intrusion detection for cloud computing platforms. Various ML algorithms, including decision trees, support vector machines, k-nearest neighbors, and neural networks, are evaluated based on their performance in detecting different types of attacks. The study assesses the accuracy, efficiency, and scalability of these techniques in cloud environments, highlighting their strengths and limitations. The findings provide valuable insights into the selection of appropriate machine learning models for effective intrusion detection in dynamic and scalable cloud systems.
32

Mahajan, Seema, and Bhavin Fataniya. "Cloud detection methodologies: variants and development—a review." Complex & Intelligent Systems 6, no. 2 (2019): 251–61. http://dx.doi.org/10.1007/s40747-019-00128-0.

Abstract:
Cloud detection is an essential and important process in satellite remote sensing. Researchers proposed various methods for cloud detection. This paper reviews recent literature (2004–2018) on cloud detection. Literature reported various techniques to detect the cloud using remote-sensing satellite imagery. Researchers explored various forms of cloud detection like Cloud/No cloud, Snow/Cloud, and Thin Cloud/Thick Cloud using various approaches of machine learning and classical algorithms. Machine learning methods learn from training data, while classical algorithm approaches are implemented using a threshold of different image parameters. Threshold-based methods have poor universality as the values change as per the location. Validation on ground-based estimates is not included in many models. The hybrid approach using machine learning, physical parameter retrieval, and ground-based validation is recommended for model improvement.
33

Kumar, Lingamallu Raghu, C. Ashokkumar, Purnendu Shekhar Pandey, Sathish Kumar Kannaiah, Balajee J, and M. I. Thariq Hussan. "Security Enhancement in Surveillance Cloud Using Machine Learning Techniques." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 3s (2023): 46–55. http://dx.doi.org/10.17762/ijritcc.v11i3s.6154.

Abstract:
Most industries are now switching from traditional modes to cloud environments and cloud-based services. It is essential to create a secure environment for the cloud space in order to provide consumers with a safe and protected environment for cloud-based transactions. Here, we discuss the suggested approaches for creating a reliable and safe environment for a surveillance cloud. When assessing the security of vital locations, surveillance data is crucial. We implement machine learning methods to improve cloud security; to classify image pixels more precisely, we make use of Support Vector Machines (SVM) and Fuzzy C-means Clustering (FCM). We also extend the conventional two-tiered design by adding a third level, the CloudSec module, to lower the risk of potential disclosure of surveillance data. In our work we evaluate how well our proposed model (FCM-SVM) performed against contemporary models like ANN, KNN, SVD, and Naive Bayes. Comparing our model to other cutting-edge models, we found that it performed better, with an average accuracy of 94.4%.
34

Thafzy, V. M. "A Review on the Integration of Machine Learning in Cloud Computing Resource Management." International Journal of Scientific Research and Technology 2, no. 1 (2025): 18–21. https://doi.org/10.5281/zenodo.14592545.

Abstract:
Efficiently managing resources in cloud computing poses a critical challenge. Over-provisioning inflates costs for both providers and customers, while under-provisioning spikes application latency and risks breaching service level agreements, leading providers to lose customers and revenue. Consequently, researchers are actively pursuing optimal resource management approaches in cloud environments, exploring container placement, job scheduling, and multi-resource scheduling. Machine learning plays a pivotal role in these endeavors. This paper offers an extensive survey of machine learning-based solutions for resource management in cloud computing projects, concluding with a comparative analysis of these initiatives. Additionally, it outlines future directions to steer researchers towards further advancements in this domain.
35

Li, Huixi, Yinhao Xiao, and YongLuo Shen. "Learning-Based Virtual Machine Selection in Cloud Server Consolidation." Mathematical Problems in Engineering 2022 (September 22, 2022): 1–11. http://dx.doi.org/10.1155/2022/6853196.

Abstract:
In a cloud data center (CDC), reducing energy consumption while maintaining performance has always been a hot issue. In server consolidation, the traditional solution is to divide the problem into multiple small problems such as host overloading detection, virtual machine (VM) selection, and VM placement, and solve them step by step. However, the design of host overloading detection strategies and VM selection strategies cannot be directly linked to the ultimate goal of reducing energy consumption and ensuring performance. This paper proposes a learning-based VM selection strategy that selects appropriate VMs for migration without direct host overloading detection, thereby reducing the generation of service-level agreement violations (SLAV), ensuring performance, and reducing the energy consumption of the CDC. Simulations driven by real VM workload traces show that our method outperforms existing methods in reducing SLAV generation and CDC energy consumption.
36

Saran, Munish, Rajan Kumar Yadav, and Upendra Nath Tripathi. "Machine Learning based Security for Cloud Computing: A Survey." International Journal of Applied Engineering Research 17, no. 4 (2022): 338–44. http://dx.doi.org/10.37622/ijaer/17.4.2022.338-344.

37

Talabani, Hardi Sabah. "Machine learning-based cloud computing and IOT: A review." International Journal of Cloud Computing and Database Management 5, no. 2 (2024): 77–84. http://dx.doi.org/10.33545/27075907.2024.v5.i2b.72.

38

Kumar, Nand, Vishal Naranje, and Sachin Salunkhe. "Cement strength prediction using cloud-based machine learning techniques." Journal of Structural Integrity and Maintenance 5, no. 4 (2020): 244–51. http://dx.doi.org/10.1080/24705314.2020.1783122.

39

Attou, Hanaa, Azidine Guezzaz, Said Benkirane, Mourade Azrour, and Yousef Farhaoui. "Cloud-Based Intrusion Detection Approach Using Machine Learning Techniques." Big Data Mining and Analytics 6, no. 3 (2023): 311–20. http://dx.doi.org/10.26599/bdma.2022.9020038.

40

Pintye, Istvan, Jozsef Kovacs, and Robert Lovas. "Enhancing Machine Learning-Based Autoscaling for Cloud Resource Orchestration." Journal of Grid Computing 22 (October 19, 2024): 68. https://doi.org/10.1007/s10723-024-09783-1.

Abstract:
Performance and cost-effectiveness are sustained by efficient management of resources in cloud computing. Current autoscaling approaches, when trying to balance between the consumption of resources and QoS requirements, usually fall short, end up being inefficient, and lead to service disruptions. The existing literature has primarily focused on static metrics and/or proactive scaling approaches, which do not align with dynamically changing tasks, jobs, or service calls. The key concept of our approach is the use of statistical analysis to select the most relevant metrics for the specific application being scaled. We demonstrated that different applications require different metrics to accurately estimate the necessary resources, highlighting that what is critical for one application may not be for another. The proper selection of metrics for the control mechanism that regulates the required resources of an application is described in this study. The introduced selection mechanism enables us to improve previously designed autoscalers by allowing them to react more quickly to sudden load changes, use fewer resources, and maintain more stable service QoS thanks to more accurate machine learning models. We compared our method with previous approaches through a carefully designed series of experiments, and the results showed that this approach brings significant improvements, such as reducing QoS violations by up to 80% and reducing VM usage by 3% to 50%. Testing and measurements were conducted on the Hungarian Research Network (HUN-REN) Cloud, which supports the operation of over 300 scientific projects.
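The statistical metric-selection step can be approximated with a correlation ranking; this is our reading of the abstract, and the metric names and data below are invented.

```python
# Rank candidate metrics by absolute correlation with the resource signal the
# autoscaler must track, keeping the most informative ones for the ML model.
import numpy as np

rng = np.random.default_rng(42)
n = 200
target_load = rng.normal(0.5, 0.1, n)            # resource usage to be estimated
metrics = {
    "requests_per_sec": target_load * 2 + rng.normal(0, 0.02, n),  # strongly related
    "queue_length":     target_load + rng.normal(0, 0.1, n),       # weakly related
    "disk_io":          rng.normal(0.3, 0.05, n),                  # unrelated
}
ranked = sorted(metrics, key=lambda m: -abs(np.corrcoef(metrics[m], target_load)[0, 1]))
print(ranked)  # most informative metric first
```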
41

Pineng, Martina, Eko Suripto Pasinggi, Lantana Dioren Rumpa, and Exzelen Tri Suharpania. "Prediksi Kenaikan Awan Di Wisata Lolai Berbasis Machine Learning [Machine Learning-Based Prediction of Cloud Rise at the Lolai Tourist Site]." Jurnal Mosfet 4, no. 1 (2024): 01–11. http://dx.doi.org/10.31850/jmosfet.v4i1.2907.

Abstract:
The increase in cloud cover is an important indicator in predicting upcoming weather. However, manual observations of cloud cover are still limited and time-consuming. Therefore, this research aims to develop a cloud cover classification model based on measurement data in Lolai using the Naive Bayes machine learning method. In this study, data on cloud cover, temperature, and humidity measurements were collected directly in Lolai for 30 days and using online BMKG data. Then, the data was processed and divided into training and testing datasets. The Naive Bayes model was applied to the training data and its accuracy was tested on the testing data. The research results show that the cloud cover classification model based on Naive Bayes has varying accuracy levels depending on the data source. For direct measurement data, the model achieved an accuracy rate of 63%, while for online BMKG data, the model achieved an accuracy rate of 80%. In testing on the testing data, the model successfully classified cloud cover based on temperature and humidity data. This research contributes to identifying the relationship between temperature, humidity, and cloud conditions and evaluates the performance of the Naive Bayes model in determining the influence of air temperature and humidity on cloud conditions. It is expected that this research can serve as a basis for the development of weather prediction systems in the future.
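The Naive Bayes classifier the study applies reduces to a few lines with scikit-learn; the temperature and humidity values below are invented placeholders, not the Lolai or BMKG measurements.

```python
# Gaussian Naive Bayes on (temperature in deg C, relative humidity %) -> cloud class.
from sklearn.naive_bayes import GaussianNB

X = [[24.0, 85], [27.5, 60], [22.0, 92], [29.0, 55], [23.5, 88], [28.0, 58]]
y = ["cloudy", "clear", "cloudy", "clear", "cloudy", "clear"]
model = GaussianNB().fit(X, y)
print(model.predict([[23.0, 90]]))  # -> ['cloudy']
```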
42

Kumar, Hemanth. "AI and Machine Learning Integration into Cloud-Based Fintech Platforms." International Journal of Scientific Research in Engineering and Management 08, no. 10 (2024): 1–8. https://doi.org/10.55041/ijsrem37825.

Abstract:
The integration of Artificial Intelligence (AI) and Machine Learning (ML) in cloud-based Fintech platforms is transforming the financial industry, enhancing automation, security, scalability, and decision-making processes. AI-driven innovations such as predictive analytics, fraud detection, robo-advisory services, and risk assessment have significantly improved the efficiency and accuracy of financial transactions. Cloud computing further facilitates these advancements by offering on-demand infrastructure, storage, and computational power, making AI solutions more accessible and cost-effective for Fintech firms. However, despite its advantages, the adoption of AI in cloud-based Fintech platforms presents challenges related to data security, compliance, and interoperability. This paper explores the benefits, challenges, best practices, and case studies of AI and ML integration into cloud-based Fintech platforms, providing insights into how these technologies shape the future of digital finance.
Keywords: Artificial Intelligence, Machine Learning, Fintech, Cloud Computing, Financial Technology, Predictive Analytics, Fraud Detection, Risk Assessment, Blockchain, Cybersecurity
44

Madhu, B. R., K. R. Vaishnavi, Dushyanth N. Gowda, Tushar Jain, and Sohan Chopdekar. "IoT Based Home Automation System over Cloud." International Journal of Trend in Scientific Research and Development 3, no. 4 (2019): 966–68. https://doi.org/10.31142/ijtsrd24005.

Abstract:
The Internet of Things (IoT) is a system of interrelated computing devices where all things, including every physical object, can be connected, making those objects intelligent, programmable, and capable of interacting with humans. As more and more data are generated each day, IoT and its potential to transform how we communicate with machines and each other can change the world. Users have operated smart home devices year in and year out, producing masses of operation data, but these data have not been well utilized in the past. This project focuses on the development of a home automation system based on the Internet of Things which allows the user to automate all the devices and appliances of the home and merge them to provide seamless control over every side of their home. The data can be used to predict the user's customary behavior with the development of a machine learning algorithm, and the prediction results can then be employed to enhance the intelligence of a smart home system. The designed system not only gives the sensor data but also processes it according to the requirement, for example switching on the light when it gets dark, and it allows the user to control household devices from anywhere. The sensor data is sent to the cloud through a Wi-Fi module, and then a decision tree is implemented which decides the output of the electronic devices; the cloud is also used to achieve power control and local data exchange, to provide the user interface, to store all the information corresponding to the specific house, and to query the function information of an individual home appliance.
45

Chen, Shuguang. "An Adaptive Deployment Algorithm for IaaS Cloud Virtual Machines Based on Q Learning Mechanism." Journal of Sensors 2021 (October 31, 2021): 1–7. http://dx.doi.org/10.1155/2021/2119993.

Abstract:
When deploying infrastructure-as-a-service (IaaS) cloud virtual machines using existing algorithms, the deployment process cannot be simplified, and the algorithms are difficult to apply. This leads to the problems of high energy consumption, a high number of migrations, and a high average service-level agreement (SLA) violation rate. In order to solve these problems, an adaptive deployment algorithm for IaaS cloud virtual machines based on a Q-learning mechanism is proposed in this research. Based on the deployment principle, the deployment characteristics of IaaS cloud virtual machines are analyzed. The virtual machine scheduling problem is modeled as a Markov process. The multistep Q-learning algorithm is used to schedule the virtual machines based on the Q-learning mechanism to complete the adaptive deployment of the IaaS cloud virtual machines. Experimental results show that the proposed algorithm has low energy consumption, a small number of migrations, and a low average SLA violation rate.
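A toy Q-learning loop makes the mechanism concrete. This sketches plain one-step Q-learning, not the paper's multistep variant, and the state space, reward, and transitions are all invented.

```python
# Toy Q-learning for VM placement: states are discretized host-load levels,
# actions are candidate hosts, and the reward penalizes poor placements.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_hosts = 4, 3
Q = np.zeros((n_states, n_hosts))
alpha, gamma, eps = 0.1, 0.9, 0.2

def reward(state, host):
    # Invented reward: favour the mid-loaded host, penalize placing onto the
    # busiest load level (a stand-in for energy and SLA-violation costs).
    return 1.0 - 0.3 * abs(host - 1) - (0.5 if state == n_states - 1 else 0.0)

state = 0
for _ in range(5000):
    host = rng.integers(n_hosts) if rng.random() < eps else int(np.argmax(Q[state]))
    r = reward(state, host)
    next_state = int(rng.integers(n_states))      # toy environment transition
    Q[state, host] += alpha * (r + gamma * Q[next_state].max() - Q[state, host])
    state = next_state

print(np.argmax(Q, axis=1))  # learned host choice for each load level
```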
APA, Harvard, Vancouver, ISO, and other styles
46

Huang, He, Quan Wang, Chao Liu, and Chen Zhou. "Optimal estimation of cloud properties from thermal infrared observations with a combination of deep learning and radiative transfer simulation." Atmospheric Measurement Techniques 17, no. 24 (2024): 7129–41. https://doi.org/10.5194/amt-17-7129-2024.

Full text
Abstract:
While traditional thermal infrared retrieval algorithms based on radiative transfer models (RTMs) could not effectively retrieve the cloud optical thickness of thick clouds, machine-learning-based algorithms were found to be able to provide reasonable estimations for both daytime and nighttime. Nevertheless, stand-alone machine learning algorithms are occasionally criticized for the lack of explicit physical processes. In this study, RTM simulations and a machine learning algorithm are synergistically utilized using the optimal estimation (OE) method to retrieve cloud properties from thermal infrared radiometry measured by the Moderate Resolution Imaging Spectroradiometer (MODIS). In the new algorithm, retrievals from a machine learning algorithm are used to provide a priori states for the iterative process of the OE method, and an RTM is used to create radiance lookup tables that are used in the iteration processes. Compared with stand-alone OE, the cloud properties retrieved by the new algorithm show an overall better performance by using statistical a priori information obtained by the machine learning algorithm. Compared with the stand-alone machine-learning-based algorithm, the radiances simulated based on retrievals from the new method align more closely with observations, and physical radiative processes are handled explicitly in the new algorithm. Therefore, the new method combines the advantages of RTM-based cloud retrieval methods and machine learning models. These findings highlight the potential for machine-learning-based algorithms to enhance the efficacy of conventional remote sensing techniques.
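The optimal-estimation iteration the abstract refers to is, in generic form, the standard Gauss–Newton update with the ML retrieval supplying the a priori state. The sketch below substitutes a toy linear forward model for the radiance lookup tables; all symbols, values, and the two-element cloud state are illustrative assumptions, not the paper's configuration.

```python
# Generic Gauss-Newton optimal-estimation (OE) update in the style of Rodgers (2000).
# The ML retrieval supplies the a priori state x_a; a toy linear forward model
# stands in for the radiative-transfer lookup table.
import numpy as np

def forward(x):
    """Stand-in forward model: maps the cloud state to simulated radiances."""
    K = np.array([[1.0, 0.3], [0.2, 1.5], [0.5, 0.8]])
    return K @ x, K  # simulated observation and its Jacobian

def oe_retrieve(y, x_a, S_a, S_e, n_iter=10):
    """Iterate x_{i+1} = x_a + G [y - F(x_i) + K (x_i - x_a)]."""
    x = x_a.copy()
    Se_inv, Sa_inv = np.linalg.inv(S_e), np.linalg.inv(S_a)
    for _ in range(n_iter):
        Fx, K = forward(x)
        gain = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv) @ K.T @ Se_inv
        x = x_a + gain @ (y - Fx + K @ (x - x_a))
    return x

# A priori from the ML retrieval (assumed), covariances, and a synthetic observation.
x_a = np.array([2.0, 10.0])           # e.g. cloud optical thickness, effective radius
S_a = np.diag([1.0, 4.0])             # prior uncertainty from ML statistics
S_e = 0.01 * np.eye(3)                # measurement-error covariance
y, _ = forward(np.array([2.5, 9.0]))  # synthetic "true" observation
print(oe_retrieve(y, x_a, S_a, S_e))
```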
APA, Harvard, Vancouver, ISO, and other styles
47

Srivallop, Atikom. "A Comparative Analysis of Cloud-Based Healthcare Platforms through Effective Machine Learning Approaches." Journal of Information Technology and Digital World 6, no. 3 (2024): 228–38. http://dx.doi.org/10.36548/jitdw.2024.3.002.

Full text
Abstract:
The integration of cloud computing and machine learning in healthcare platforms has revolutionized the delivery of medical services, offering scalable solutions for data storage, processing, and analysis. This study presents an overview of various cloud-based healthcare platforms, focusing on the effectiveness of machine learning approaches in enhancing patient care and operational efficiency, and compares the performance of different machine learning models employed in the platforms for diverse healthcare applications. The findings provide insights into the strengths and limitations of existing cloud-based healthcare solutions, guiding healthcare providers and policymakers in selecting optimal platforms for improved patient outcomes and resource utilization.
APA, Harvard, Vancouver, ISO, and other styles
48

Siwan, Sabreen J., Waleed F. Shareef, and Ahmed R. Nasser. "Machine Learning in AWS for IoT-based Oil Pipeline Monitoring System." Webology 19, no. 1 (2022): 3169–83. http://dx.doi.org/10.14704/web/v19i1/web19209.

Full text
Abstract:
The world's economy is dominated by the oil export business, which is heavily reliant on oil pipelines. Due to the length of the pipes and the harsh environments through which they pass, continuous structural health monitoring of pipelines using conventional methods is difficult and expensive. In this paper, an IoT system integrated with cloud services is proposed for oil pipeline structure monitoring. The system is based on collecting data from sensor nodes attached to the pipeline structure, which collectively form a network of IoT devices connected to the AWS cloud. Measurements from the sensor nodes are collected, stored, and filtered in the AWS cloud. Measurements are also made accessible to users over the internet in real time using the Python web framework Flask, with alarms sent by email in real time. The performance of the system is evaluated by applying damaging events (hard knocking) to the oil pipeline at several distances. The IoT data are analyzed with machine learning classification algorithms: SVM, Random Forest, and Decision Tree classifiers are applied and compared to determine the best one, which is then deployed on an EC2 Linux instance in AWS to analyze the measurements and classify new events according to their distances from the sensor nodes. The proposed system is tested on field measurements collected at the Al-Mussaib Gas Turbine Power Station in Baghdad. Among the three classifiers, Random Forest achieved a 90% classification rate.
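The three-way classifier comparison the abstract reports can be reproduced in outline with scikit-learn. The sketch below substitutes synthetic data for the field measurements, so the feature layout and any scores it prints are assumptions, not the study's results.

```python
# Sketch of the SVM / Random Forest / Decision Tree comparison described above,
# using synthetic stand-in data for knock events labelled by distance band.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for sensor-node features; three distance-band classes.
X, y = make_classification(n_samples=500, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)

models = {
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```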
APA, Harvard, Vancouver, ISO, and other styles
49

Mozo, Alberto, Amit Karamchandani, Luis de la Cal, Sandra Gómez-Canaval, Antonio Pastor, and Lluis Gifre. "A Machine-Learning-Based Cyberattack Detector for a Cloud-Based SDN Controller." Applied Sciences 13, no. 8 (2023): 4914. http://dx.doi.org/10.3390/app13084914.

Full text
Abstract:
The rapid evolution of network infrastructure through the softwarization of network elements has led to an exponential increase in the attack surface, thereby increasing the complexity of threat protection. In light of this pressing concern, European Telecommunications Standards Institute (ETSI) TeraFlowSDN (TFS), an open-source microservice-based cloud-native Software-Defined Networking (SDN) controller, integrates robust Machine-Learning components to safeguard its network and infrastructure against potential malicious actors. This work presents a comprehensive study of the integration of these Machine-Learning components in a distributed scenario to provide secure end-to-end protection against cyber threats occurring at the packet level of the telecom operator’s Virtual Private Network (VPN) services configured with that feature. To illustrate the effectiveness of this integration, a real-world emerging attack vector (the cryptomining malware attack) is used as a demonstration. Furthermore, to address the pressing challenge of energy consumption in the telecom industry, we harness the full potential of state-of-the-art Green Artificial Intelligence techniques to optimize the size and complexity of Machine-Learning models in order to reduce their energy usage while maintaining their ability to accurately detect potential cyber threats. Additionally, to enhance the integrity and security of TeraFlowSDN’s cybersecurity components, Machine-Learning models are safeguarded from sophisticated adversarial attacks that attempt to deceive them by subtly perturbing input data. To accomplish this goal, Machine-Learning models are retrained with high-quality adversarial examples generated using a Generative Adversarial Network.
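Adversarial retraining of the kind the abstract describes is often illustrated with a simple gradient-based perturbation standing in for the GAN generator. The PyTorch sketch below does exactly that; the model architecture, the packet-feature layout, and the FGSM substitution are assumptions for illustration, not the paper's method.

```python
# Illustrative adversarial-retraining loop. The paper generates adversarial
# examples with a GAN; here a fast-gradient-sign (FGSM) perturbation stands in
# for that generator, so this sketches the retraining idea only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm(x, y, eps=0.05):
    """Perturb packet-level features along the loss gradient sign."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

# Synthetic flow features and benign/cryptomining labels (assumed layout).
x = torch.randn(256, 20)
y = torch.randint(0, 2, (256,))

for _ in range(20):  # retrain on a mix of clean and adversarial batches
    x_adv = fgsm(x, y)
    for batch_x in (x, x_adv):
        optim.zero_grad()
        loss_fn(model(batch_x), y).backward()
        optim.step()
```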
APA, Harvard, Vancouver, ISO, and other styles