Academic literature on the topic 'Cloud-Based Machine Learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Cloud-Based Machine Learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Cloud-Based Machine Learning"

1. Kumar Thopalle, Praveen. "Enhancing Security in Cloud-Based Storage Systems Using Machine Learning." International Journal of Science and Research (IJSR) 12, no. 11 (2023): 2216–22. http://dx.doi.org/10.21275/sr24905010155.

2. Chmielecki, Przemysław. "Machine Learning Based on Cloud Solutions." Edukacja – Technika – Informatyka 27, no. 1 (2019): 132–38. http://dx.doi.org/10.15584/eti.2019.1.17.

3. Han, Bo, and Rongli Zhang. "Virtual Machine Allocation Strategy Based on Statistical Machine Learning." Mathematical Problems in Engineering 2022 (July 5, 2022): 1–6. http://dx.doi.org/10.1155/2022/8190296.

Abstract:
Big data cloud computing is now widely used in many enterprises and serves tens of millions of users. One of its core technologies is computer virtualization, and the reasonable allocation of virtual machines to available hosts is of great significance to the performance optimization of cloud computing. With the continuous development of information technology and the growing number of computer users, the variety of virtualization technologies and the increasing number of virtual machines in the network make the effective allocation of virtualization resources more and more difficult. To address and optimize this problem, we propose a virtual machine allocation algorithm based on statistical machine learning. According to the resource requirements of each virtual machine in the cloud service, a comprehensive performance analysis model is established, and a reasonable allocation of virtual machines to hosts in the resource pool is derived from the virtualization technology type or mode identified by the model. Experiments show that this method offers advantages in overall performance, load balancing, and support for different types of virtualization.
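
To make the allocation idea concrete, here is a minimal sketch under stated assumptions: a regression model learned from hypothetical host and VM metrics scores each candidate placement, and the VM goes to the best-scoring host. This illustrates ML-guided placement in general, not the authors' statistical model.

```python
# Minimal sketch of ML-guided VM placement (illustrative only; not the
# authors' algorithm). A regression model trained on historical host
# metrics predicts a performance score for each candidate placement,
# and the VM goes to the highest-scoring host.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: [host_cpu_free, host_mem_free, vm_cpu_req, vm_mem_req]
X_train = rng.uniform(0, 1, size=(500, 4))
# Toy target: placements score well when free capacity exceeds the request.
y_train = (X_train[:, 0] - X_train[:, 2]) + (X_train[:, 1] - X_train[:, 3])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

def place_vm(vm_req, hosts):
    """Return the index of the host with the best predicted score for vm_req."""
    candidates = np.array([[h[0], h[1], vm_req[0], vm_req[1]] for h in hosts])
    scores = model.predict(candidates)
    return int(np.argmax(scores))

hosts = [(0.8, 0.6), (0.3, 0.9), (0.5, 0.5)]   # (cpu_free, mem_free) per host
print(place_vm((0.4, 0.4), hosts))             # -> index of the chosen host
```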

4. Davinder Pal Singh. "Cloud-Based Machine Learning: Opportunities and Challenges." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 6 (2024): 264–70. http://dx.doi.org/10.32628/cseit24106177.

Abstract:
This comprehensive article explores the transformative impact of cloud-based machine learning (ML) on modern enterprises, examining both opportunities and challenges in implementation. The article investigates the rapidly growing cloud computing market and its ML segment, revolutionizing how organizations approach data analytics and business intelligence. Through detailed analysis of enterprise implementations, the article demonstrates how cloud ML solutions have democratized access to advanced analytics, significantly reducing operational costs while improving data processing efficiency. The article examines key aspects, including scalability advantages, cost efficiencies, and technical complexities, while providing evidence-based best practices for successful implementation. Drawing from multiple industry studies and real-world deployments, the article presents a framework for organizations to navigate challenges in data privacy, vendor dependencies, and skill requirements while maximizing the benefits of cloud-based ML solutions.

5. Chkirbene, Zina, Aiman Erbad, Ridha Hamila, Ala Gouissem, Amr Mohamed, and Mounir Hamdi. "Machine Learning Based Cloud Computing Anomalies Detection." IEEE Network 34, no. 6 (2020): 178–83. http://dx.doi.org/10.1109/mnet.011.2000097.

6. Sengupta, Nandita, and Ramya Chinnasamy. "Machine Learning Based Medicinal Care in Cloud." International Journal of Computer Trends and Technology 47, no. 4 (2017): 219–26. http://dx.doi.org/10.14445/22312803/ijctt-v47p135.

7. Subramanian, E. K., and Latha Tamilselvan. "A Focus on Future Cloud: Machine Learning-Based Cloud Security." Service Oriented Computing and Applications 13, no. 3 (2019): 237–49. http://dx.doi.org/10.1007/s11761-019-00270-0.

8. Gupta, Manish, Ihtiram Raza Khan, B. Gomathy, and Ansuman Samal. "Hybrid Multi-User Based Cloud Data Security for Medical Decision Learning Patterns." ECS Transactions 107, no. 1 (2022): 2559–73. http://dx.doi.org/10.1149/10701.2559ecst.

Abstract:
Machine learning plays a vital role in real-time cloud-based medical computing systems. However, most computing servers lack a data security and recovery scheme across multiple virtual machines because of high computing cost and time, and cloud-based medical applications typically rely on static security parameters for cloud data security. Such applications require multiple servers to store medical records or machine learning patterns for decision making, so these cloud systems need an efficient data security framework that provides strong data access control among multiple users. In this paper, a hybrid cloud data security framework is developed to improve the security of large machine learning patterns in a real-time cloud computing environment. The work is implemented in two phases: a data replication phase and a multi-user data access security phase. First, machine decision patterns are replicated among multiple servers for data recovery. Then, in the multi-access data security phase, a hybrid multi-access key-based encryption and decryption model is applied to the large machine learning medical patterns for data recovery and security. Experimental results show that the two-phase framework achieves better computational efficiency than conventional approaches on large medical decision patterns.
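
As a rough illustration of the two-phase idea (encrypt for access control, then replicate for recovery), the sketch below uses symmetric Fernet encryption from the third-party cryptography package as a stand-in; the paper's hybrid multi-access key model is more elaborate, and all names and data here are hypothetical.

```python
# Minimal sketch of the encrypt-then-replicate flow (illustrative only;
# the paper's hybrid multi-access key scheme is more elaborate).
# Requires the third-party 'cryptography' package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # data-access key, held by authorized users
cipher = Fernet(key)

record = b'{"patient": "anon-17", "decision_pattern": [0.12, 0.87]}'
token = cipher.encrypt(record)       # encrypt before the record leaves the client

# Replication phase: store the ciphertext on multiple storage servers.
servers = [{} for _ in range(3)]
for server in servers:
    server["record-17"] = token

# Recovery: any surviving replica decrypts with the shared key.
restored = cipher.decrypt(servers[1]["record-17"])
assert restored == record
```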

9. Majjaru, Chandrababu, and Senthil Kumar K. "Proficient Machine Learning Techniques for a Secured Cloud Environment." International Journal of Engineering and Advanced Technology (IJEAT) 11, no. 6 (2022): 74–81. https://doi.org/10.35940/ijeat.F3730.0811622.

Abstract:
Many different checks, rules, processes, and technologies work together to keep cloud-based applications and infrastructure safe and secure against cyberattacks. Data security, customer privacy, regulatory enforcement, and device and user authentication policies are all protected by these safety measures. Insecure access points, DDoS attacks, data breach, and data loss are the most pressing issues in cloud security. In the cloud computing context, researchers have examined several methods for detecting intrusions. Cloud security best practices, such as host and middleware security, infrastructure and virtualization security, and application system and data security, make up the bulk of these approaches, which are based on more traditional means of detecting abuse and anomalies. This work surveys machine learning-based strategies for securing cloud infrastructure, along with the open issues in ongoing research; a number of unresolved problems remain to be addressed in future work.

10. Talwani, Suruchi, Jimmy Singla, Gauri Mathur, et al. "Machine-Learning-Based Approach for Virtual Machine Allocation and Migration." Electronics 11, no. 19 (2022): 3249. http://dx.doi.org/10.3390/electronics11193249.

Abstract:
Due to its ability to supply reliable, robust, and scalable computational power, cloud computing is becoming increasingly popular in industry, government, and academia. High-speed networks connect both virtual and real machines in cloud computing data centres. The system's dynamic provisioning environment depends on the requirements of end-user computing resources, so the operational costs of a particular data centre are relatively high. To meet service-level agreements (SLAs), it is essential to assign an appropriate maximum amount of resources. Virtualization is a fundamental technology of cloud computing: it helps cloud providers manage data centre resources effectively and, hence, improves resource usage by creating several virtual machine (VM) instances. Furthermore, VMs can be dynamically consolidated onto a few physical nodes based on current resource requirements using live migration, while still meeting SLAs. As a result, unoptimised and inefficient VM consolidation can reduce performance when an application is exposed to varying workloads. This paper introduces a new machine-learning-based approach for dynamically consolidating VMs, based on adaptive predictions of usage thresholds, to achieve acceptable service-level agreement (SLA) standards. Dynamic data was generated during runtime to validate the efficiency of the proposed technique compared with other machine learning algorithms.
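
A minimal sketch of the consolidation trigger follows, assuming a simple linear extrapolation of CPU utilization and made-up thresholds; the paper's adaptive-threshold predictor is more sophisticated.

```python
# Minimal sketch of threshold-based VM consolidation with a predicted
# utilization (illustrative; the paper's adaptive-threshold model is richer).
import numpy as np

def predict_next_utilization(history):
    """Fit a line to the recent CPU-utilization window and extrapolate one step."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    return slope * len(history) + intercept

UPPER, LOWER = 0.80, 0.20            # assumed utilization thresholds

def migration_decision(history):
    u = predict_next_utilization(history)
    if u > UPPER:
        return "overloaded: migrate a VM away"
    if u < LOWER:
        return "underloaded: migrate VMs off and switch the host to sleep"
    return "keep VMs in place"

print(migration_decision([0.55, 0.62, 0.70, 0.78]))  # trending up -> migrate away
```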

Dissertations / Theses on the topic "Cloud-Based Machine Learning"

1. You, Yantian. "Cloud Auto-Scaling Control Engine Based on Machine Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-239066.

Abstract:
With the development of modern data centers and networks, many service providers have moved most of their computing functions to the cloud. Given the limits of network bandwidth and of hardware or virtual resources, managing virtual resources in a cloud environment so as to achieve better resource allocation is a significant problem. Although some cloud infrastructures provide simple default auto-scaling and orchestration mechanisms, such as the OpenStack Heat service, these usually depend on a single parameter, such as CPU utilization, and cannot respond to network changes in a timely manner. This thesis investigates different auto-scaling mechanisms and designs an online control engine that cooperates with different OpenStack service APIs based on various network resource data. Two auto-scaling engines, a Heat-orchestration-based engine and a machine-learning-based online control engine, were developed and compared under different client request patterns. Two machine learning methods, a neural network and linear regression, were considered for generating a control signal from real-time network data. The thesis also shows the network's non-linear behavior under heavy traffic and proposes a scaling policy based on deep network analysis. The results show that for offline training, the neural network and linear regression provide 81.5% and 84.8% accuracy respectively. However, for online testing with different client request patterns, the neural network results differed from what we expected, while linear regression provided much better results. The model comparison showed that the two auto-scaling mechanisms behave similarly under a SMOOTH-load pattern. Under a SPIKEY-load pattern, however, the linear-regression-based online control engine responded faster to network changes, while the Heat orchestration service showed some delay. Compared with the proposed scaling policy, which uses fewer web servers while keeping response latency acceptable, both auto-scaling models waste network resources.
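
The linear-regression control signal can be illustrated with a small sketch: extrapolate the recent request rate one step ahead and size the web-server pool for the predicted load. The capacity constant and the load window are assumptions; the real engine drives OpenStack service APIs rather than printing a number.

```python
# Minimal sketch of the linear-regression scaling signal (illustrative;
# the thesis engine acts on real network data via OpenStack APIs).
import numpy as np

CAPACITY_PER_SERVER = 100.0          # assumed requests/s one web server can absorb

def desired_replicas(request_rates):
    """Extrapolate the request rate one step ahead and size the pool for it."""
    t = np.arange(len(request_rates))
    slope, intercept = np.polyfit(t, request_rates, 1)
    predicted = max(slope * len(request_rates) + intercept, 0.0)
    return max(1, int(np.ceil(predicted / CAPACITY_PER_SERVER)))

window = [180, 220, 260, 310, 370]   # recent requests/s (made-up SPIKEY-ish load)
print(desired_replicas(window))       # -> 5 (scale out ahead of the spike)
```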

2. Zheng, Yao. "Privacy Preservation for Cloud-Based Data Sharing and Data Analytics." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73796.

Abstract:
Data privacy is a globally recognized human right for individuals to control the access to their personal information, and bar the negative consequences from the use of this information. As communication technologies progress, the means to protect data privacy must also evolve to address new challenges as they come into view. Our research goal in this dissertation is to develop privacy protection frameworks and techniques suitable for the emerging cloud-based data services, in particular privacy-preserving algorithms and protocols for cloud-based data sharing and data analytics services. Cloud computing has enabled users to store, process, and communicate their personal information through third-party services. It has also raised privacy issues regarding losing control over data, mass harvesting of information, and un-consented disclosure of personal content. Above all, the main concern is the lack of understanding about data privacy in cloud environments. Currently, cloud service providers either advocate the principle of third-party doctrine and deny users' rights to protect their data stored in the cloud, or rely on the notice-and-choice framework and present users with ambiguous, incomprehensible privacy statements without any meaningful privacy guarantee. In this regard, our research has three main contributions. First, to capture users' privacy expectations in cloud environments, we conceptually divide personal data into two categories, i.e., visible data and invisible data. The visible data refer to information users intentionally create, upload to, and share through the cloud; the invisible data refer to users' information retained in the cloud that is aggregated, analyzed, and repurposed without their knowledge or understanding. Second, to address users' privacy concerns raised by cloud computing, we propose two privacy protection frameworks, namely individual control and use limitation. The individual control framework emphasizes users' capability to govern access to the visible data stored in the cloud. The use limitation framework emphasizes users' expectation to remain anonymous when the invisible data are aggregated and analyzed by cloud-based data services. Finally, we investigate various techniques to accommodate the new privacy protection frameworks in the context of four cloud-based data services: personal health record sharing, location-based proximity test, link recommendation for social networks, and face tagging in photo management applications. For the first case, we develop a key-based protection technique to enforce fine-grained access control to users' digital health records. For the second case, we develop a key-less protection technique to achieve location-specific user selection. For the latter two cases, we develop distributed learning algorithms to prevent large-scale data harvesting. We further combine these algorithms with query regulation techniques to achieve user anonymity. The picture that is emerging from the above works is a bleak one. Regarding personal data, the reality is that we can no longer control them all. As communication technologies evolve, the scope of personal data has expanded beyond local, discrete silos and become integrated into the Internet. The traditional understanding of privacy must be updated to reflect these changes. In addition, because privacy is a particularly nuanced problem that is governed by context, there is no one-size-fits-all solution. While some cases can be salvaged either by cryptography or by other means, in others a rethinking of the trade-offs between utility and privacy appears to be necessary.

3. Lindeman, Victor. "An Analysis of Cloud-Based Machine Learning Models for Traffic-Sign Classification." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160022.

Abstract:
Deep neural networks are a machine learning method commonly used for artificial intelligence applications such as speech recognition, robotics, and computer vision. They often achieve very good accuracy; the downside is the complexity of the computations. To use deep neural network models on devices with less computing power, such as smartphones, the model can run in the cloud and send the results to the device. This thesis evaluates the possibility of using a smartphone as a camera unit together with Google's open-source neural network Inception to identify traffic signs. It analyzes whether the computation can be moved to the cloud while still using the system for real-time applications, compared to running the image model on the edge (the device itself). The accuracy of the model, as well as how estimates of future 5G mobile networks would affect the system's quality of service, is also analyzed. The model achieved an accuracy of 88.0% on the German Traffic Sign Benchmark data set and 97.6% on a newly created data set (data sets of images on which the neural network model is tested). The total time for the system, from sending the image to receiving the result, is > 2 s; because of this, it cannot be used for any application affecting traffic safety. Estimated improvements from future 5G mobile networks include reduced communication delay, ultra-reliable communication, and, with the higher bandwidth available, higher capacity if required, e.g., for sending higher-quality images.
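
The "> 2 s" figure is a round-trip measurement; below is a minimal sketch of how such a measurement can be taken. The endpoint URL and the JSON response shape are hypothetical placeholders, not the thesis setup.

```python
# Minimal sketch of measuring cloud-inference round-trip time
# (illustrative; endpoint and response format are hypothetical).
import time
import requests

CLOUD_ENDPOINT = "https://example.com/classify"   # placeholder, not a real service

def classify_via_cloud(image_path):
    with open(image_path, "rb") as f:
        payload = f.read()
    start = time.perf_counter()
    response = requests.post(CLOUD_ENDPOINT, data=payload, timeout=10)
    elapsed = time.perf_counter() - start
    return response.json(), elapsed       # e.g., ({"sign": "stop"}, 2.3)

# label, seconds = classify_via_cloud("sign.jpg")
# print(f"predicted {label} in {seconds:.1f} s")
```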

4. Olsson, Fredrik. "Feature Based Learning for Point Cloud Labeling and Grasp Point Detection." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-150785.

Abstract:
Robotic bin picking is the problem of emptying a bin of randomly distributed objects through a robotic interface. This thesis examines an SVM approach to extract grasping points for a vacuum-type gripper. The SVM is trained on synthetic data and used to classify the points of a non-synthetic 3D-scanned point cloud as either graspable or non-graspable. The classified points are then clustered into graspable regions from which the grasping points are extracted. The SVM models and the algorithm as a whole are trained and evaluated against cubic and cylindrical objects. Separate SVM models are trained for each type of object, in addition to one model trained on a dataset containing both types of objects. It is shown that the performance of the SVM in terms of accuracy is dependent on the objects and their geometrical properties. Further, it is shown that the algorithm is reasonably robust in terms of successfully picking objects, regardless of the scale of the objects.
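
A minimal sketch of the classification step follows, with synthetic two-dimensional per-point features and a toy labeling rule standing in for the thesis's real features computed from 3D scans.

```python
# Minimal sketch of SVM point labeling for grasping (illustrative;
# the thesis uses richer per-point features from real 3D scans).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic per-point features: [local flatness, distance to object edge].
X = rng.uniform(0, 1, size=(400, 2))
# Toy labeling rule: flat points away from edges are graspable by suction.
y = ((X[:, 0] > 0.6) & (X[:, 1] > 0.3)).astype(int)

clf = SVC(kernel="rbf").fit(X, y)

scan = rng.uniform(0, 1, size=(5, 2))      # stand-in for a scanned point cloud
print(clf.predict(scan))                    # 1 = graspable, 0 = non-graspable
```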

5. Nordlund, Fredrik Hans. "Enabling Network-Aware Cloud Networked Robots with Robot Operating System: A Machine Learning-Based Approach." Thesis, KTH, Radio Systems Laboratory (RS Lab), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-160877.

Abstract:
During recent years, a new area called Cloud Networked Robotics (CNR) has evolved from conventional robotics, thanks to the increasing availability of cheap robot systems and steady improvements in the area of cloud computing. Cloud networked robots are robots that can offload computation-heavy modules to a cloud, in order to make use of storage, scalable computation power, and other functionalities enabled by a cloud, such as knowledge shared between robots on a global level. However, these cloud robots face a problem with the reachability and QoS of crucial modules offloaded to the cloud when operating in unstable network environments. Under such conditions, the robots might lose the connection to the cloud at any moment, in the worst case leaving the robots 'brain-dead'. This thesis project proposes a machine learning-based, network-aware framework for a cloud robot that can choose the most efficient module placement based on location, task, and network condition. The proposed solution was implemented on a cloud robot prototype based on the TurtleBot 2 robot development kit running Robot Operating System (ROS). A continuous experiment was conducted in which the cloud robot was ordered to execute a simple task in the laboratory corridor under various network conditions. The proposed solution was evaluated by comparing the results from the continuous experiment with measurements taken from the same robot doing the same task with all modules placed locally. The results show that the proposed framework can potentially decrease battery consumption by 10% while improving the efficiency of the task by 2.4 seconds (2.8%). However, there is an inherent bottleneck in the proposed solution: each new robot would need two months to accumulate enough data for the training set in order to show good performance. The proposed solution could benefit the area of CNR if connected to a shared-knowledge platform that enables new robots to skip the training phase by downloading existing knowledge from the cloud.

6. Minelli, Michele. "Fully Homomorphic Encryption for Machine Learning." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE056/document.

Abstract:
Fully homomorphic encryption enables computation on encrypted data without leaking any information about the underlying data. In short, a party can encrypt some input data, while another party, which does not have access to the decryption key, can blindly perform some computation on this encrypted input. The final result is also encrypted, and it can be recovered only by the party that possesses the secret key. In this thesis, we present new techniques and designs for FHE that are motivated by applications to machine learning, with particular attention to the problem of homomorphic inference, i.e., the evaluation of already trained cognitive models on encrypted data. First, we propose a novel FHE scheme that is tailored to evaluating neural networks on encrypted inputs. Our scheme achieves complexity that is essentially independent of the number of layers in the network, whereas the efficiency of previously proposed schemes strongly depends on the topology of the network. Second, we present a new technique for achieving circuit privacy for FHE. This allows us to hide the computation that is performed on the encrypted data, as is necessary to protect proprietary machine learning algorithms. Our mechanism incurs very small computational overhead while keeping the same security parameters. Together, these results strengthen the foundations of efficient FHE for machine learning, and pave the way towards practical privacy-preserving deep learning. Finally, we present and implement a protocol based on homomorphic encryption for the problem of private information retrieval, i.e., the scenario where a party wants to query a database held by another party without revealing the query itself.
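
To illustrate the encrypt-compute-decrypt workflow (though not the lattice-based FHE schemes the thesis actually builds), here is a toy Paillier-style additively homomorphic scheme with deliberately tiny, insecure parameters.

```python
# Toy additively homomorphic encryption in the spirit of the workflow above
# (Paillier-style, with tiny insecure parameters; the thesis concerns
# lattice-based FHE, which this does not implement).
from math import gcd

p, q = 61, 53                  # demo primes only; never use sizes like this
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # decryption helper constant

def encrypt(m, r):
    assert 0 < r < n and gcd(r, n) == 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(20, 17), encrypt(22, 29)
c_sum = (c1 * c2) % n2          # the server adds blindly, on ciphertexts only
print(decrypt(c_sum))            # -> 42, recovered by the key holder
```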

7. Wiren, Jakob. "Data Storage Cost Optimization Based on Electricity Price Forecasting with Machine Learning in a Multi-Geographical Cloud Environment." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-152250.

Abstract:
As increased demand for cloud computing leads to increased electricity costs for cloud providers, there is an incentive to investigate new methods to lower electricity costs in data centers. Electricity price markets suffer from sudden price spikes as well as irregularities between different geographical markets. This thesis investigates whether it is possible to leverage these volatilities and irregularities between electricity price markets to offload or move storage in order to reduce the electricity costs of data storage. By forecasting four different electricity price markets, it was possible to predict sudden price spikes and leverage these forecasts in a simple optimization model that offloads data storage between data centers, successfully reducing electricity costs for data storage.
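
A minimal sketch of the storage-placement decision follows, with hypothetical forecasted prices and costs; the thesis pairs a decision like this with actual ML forecasts for four electricity markets.

```python
# Minimal sketch of the optimization step (illustrative; prices, energy
# figures, and migration cost are all hypothetical).
forecast = {"us-east": 0.11, "eu-north": 0.07, "eu-west": 0.13, "ap-south": 0.09}

STORAGE_KWH = 1200.0        # assumed energy to keep the dataset served, per day
MIGRATION_COST = 15.0       # assumed one-off cost ($) of moving the data

def cheapest_region(current, prices, kwh, move_cost):
    """Move only if the forecasted saving beats the migration cost."""
    best = min(prices, key=prices.get)
    saving = (prices[current] - prices[best]) * kwh
    return best if saving > move_cost else current

print(cheapest_region("us-east", forecast, STORAGE_KWH, MIGRATION_COST))
# -> 'eu-north' (saving 0.04 * 1200 = $48 > $15)
```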

8. Mohammed, Bashir. "A Framework for Efficient Management of Fault Tolerance in Cloud Data Centres and High-Performance Computing Systems: An Investigation and Performance Analysis of a Cloud Based Virtual Machine Success and Failure Rate in a Typical Cloud Computing Environment and Prediction Methods." Thesis, University of Bradford, 2019. http://hdl.handle.net/10454/17400.

Abstract:
Cloud computing is attracting increasing attention in both academic research and industry and has been widely used to solve advanced computation problems. As cloud datacentres continue to grow in scale and complexity, the risk of failure of virtual machines (VMs) and hosts running several jobs and processing large amounts of user requests increases, and it consequently becomes even more difficult to predict potential failures within a datacentre. Even though fault tolerance continues to be an issue of growing concern in cloud and HPC systems, mitigating the impact of failure and providing accurate predictions with enough lead time remains a difficult research problem. Traditional fault-tolerance strategies such as regular checkpoint/restart and replication are not adequate due to emerging complexities in the systems, and they do not scale well in the cloud due to resource sharing and distributed system networks. In this thesis, a new reliable fault-tolerance scheme using an intelligent optimal strategy is presented to ensure high system availability, reduced task completion time, and an efficient VM allocation process. Specifically, (i) a generic fault-tolerance algorithm for cloud data centres and HPC systems in the cloud was developed; (ii) a verification process was developed for fully dimensional VM specification during allocation in the presence of faults; in comparison to existing approaches, the results obtained show an increase in the success rate of the VMs, a reduction in the response time of VM allocation, and improved overall performance; and (iii) a failure prediction model was further developed, and the predictive capabilities of machine learning were explored by applying several algorithms to improve the accuracy of prediction. Experimental results indicate that the average prediction accuracy of the proposed model when predicting failure is about 90%, compared to existing algorithms, which implies that the approach can effectively predict potential system and application failures.
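
A minimal sketch of the failure-prediction step on synthetic VM telemetry follows; the thesis applies several algorithms to real traces, and the features and labeling rule here are made up.

```python
# Minimal sketch of ML-based failure prediction (illustrative; features,
# labels, and the chosen algorithm are stand-ins for the thesis's study).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)

# Synthetic VM telemetry: [cpu_load, mem_load, io_wait, temperature_norm].
X = rng.uniform(0, 1, size=(2000, 4))
# Toy ground truth: hot, saturated hosts fail more often.
y = ((X[:, 0] + X[:, 3] + rng.normal(0, 0.15, 2000)) > 1.4).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)
clf = RandomForestClassifier(n_estimators=200, random_state=7).fit(X_tr, y_tr)

print(f"failure-prediction accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```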

9. Goutierre, Emmanuel. "Machine Learning-Based Particle Accelerator Modeling." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG106.

Abstract:
Particle accelerators rely on high-precision simulations to optimize beam dynamics. These simulations are computationally expensive, making real-time analysis impractical. This thesis seeks to address this limitation by exploring the potential of machine learning to develop surrogate models for particle accelerator simulations. The focus is on ThomX, a compact Compton source, for which two surrogate models are introduced: LinacNet and Implicit Neural ODE (INODE). These models are trained on a comprehensive database developed in this thesis that captures a wide range of operating conditions to ensure robustness and generalizability. LinacNet provides a comprehensive representation of the particle cloud by predicting all coordinates of the macro-particles, rather than focusing solely on beam observables. This detailed modeling, coupled with a sequential approach that accounts for cumulative particle dynamics throughout the accelerator, ensures consistency and enhances model interpretability. INODE, based on the Neural Ordinary Differential Equation (NODE) framework, seeks to learn the implicit governing dynamics of particle systems without the need for explicit ODE solving during training. Unlike traditional NODEs, which struggle with discontinuities, INODE is theoretically designed to handle them more effectively. Together, LinacNet and INODE serve as surrogate models for ThomX, demonstrating their ability to approximate particle dynamics. This work lays the groundwork for developing and improving the reliability of machine learning-based models in accelerator physics.

10. Bellafqira, Reda. "Chiffrement homomorphe et recherche par le contenu sécurisé de données externalisées et mutualisées : Application à l'imagerie médicale et l'aide au diagnostic." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0063.

Abstract:
Cloud computing has emerged as a successful paradigm allowing individuals and companies to store and process large amounts of data without a need to purchase and maintain their own networks and computer systems. In healthcare, for example, different initiatives aim at sharing medical images and personal health records (PHR) between health professionals or hospitals with the help of the cloud. In such an environment, data security (confidentiality, integrity, and traceability) is a major issue. It is in this context that this thesis was conducted; it concerns in particular the securing of content-based image retrieval (CBIR) techniques and machine learning (ML), which are at the heart of diagnostic decision support systems. These techniques make it possible to find images similar to an image not yet interpreted. The goal is to define approaches that can exploit secure externalized data and enable a cloud to provide diagnostic support. Several mechanisms allow the processing of encrypted data, but most are dependent on interactions between different entities (the user, the cloud, or a trusted third party) and must be combined judiciously so as not to leak information. During these three years of thesis work, we initially focused on securing an outsourced CBIR system under the constraint of no interaction between the users and the service provider (cloud). In a second step, we developed a secure machine learning approach based on the multilayer perceptron (MLP), whose learning phase can be outsourced in a secure way, the challenge being to ensure the convergence of the MLP. All the data and parameters of the model are encrypted using homomorphic encryption. Because these systems need to use information from multiple sources, each of which outsources its encrypted data under its own key, we are interested in the problem of sharing encrypted data, a problem addressed by 'Proxy Re-Encryption' (PRE) schemes. In this context, we proposed the first PRE scheme that allows both the sharing and the processing of encrypted data. We also worked on a watermarking scheme over encrypted data in order to trace and verify the integrity of data in this shared environment. The embedded message is accessible whether or not the image is encrypted and provides several security services based on watermarking.

Books on the topic "Cloud-Based Machine Learning"

1. Gift, Noah. Pragmatic AI: An Introduction to Cloud-Based Machine Learning. Pearson Education, Limited, 2018.

2. Dickerson, Craig. Introduction to Artificial Intelligence: Age of Machine Learning and Cloud-Based Technologies. Independently Published, 2018.

3. Gift, Noah. Pragmatic AI: An Introduction to Cloud-Based Machine Learning (Addison Wesley Data & Analytics). Addison-Wesley Professional, 2018.

4. Design and Deploy Microsoft Defender for IoT: Leveraging Cloud-Based Analytics and Machine Learning Capabilities. Apress L. P., 2024.

Book chapters on the topic "Cloud-Based Machine Learning"

1. Agarwal, Pankaj, Abolfazl Mehbodniya, Julian L. Webber, Radha Raman Chandan, Mohit Tiwari, and Dhiraj Kapila. "Machine Learning-based Cloud Optimization System." In Computer Science Engineering and Emerging Technologies. CRC Press, 2024. http://dx.doi.org/10.1201/9781003405580-111.

2. Rani Roopha Devi, K. G., R. Murugesan, and R. Mahendra Chozhan. "Cloud-Based CVD Identification for Periodontal Disease." In Machine Learning and Autonomous Systems. Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7996-4_43.

3. Bouchemal, Nardjes, and Naila Bouchemal. "Intelligent ERP Based Multi Agent Systems and Cloud Computing." In Machine Learning for Networking. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-19945-6_27.

4. Gaikwad, Vidya S., Nilesh P. Sable, Disha S. Wankhede, et al. "Securing Cloud-Based IoT." In Internet of Things Enabled Machine Learning for Biomedical Application. CRC Press, 2024. http://dx.doi.org/10.1201/9781003487647-15.

5. Kuvvarapu, Anil, Anusha Nagina Kunuku, and Gopi Krishna Saggurthi. "Data Mining on Cloud-Based Big Data." In Cybernetics, Cognition and Machine Learning Applications. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-1632-0_6.

6. Youn, Chan-Hyun, Min Chen, and Patrizio Dazzi. "Machine-Learning Based Approaches for Cloud Brokering." In KAIST Research Series. Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-5071-8_8.

7. Singh, Kiran Deep, and Prabh Deep Singh. "Enhancing IoT with Cloud-Based Machine Learning." In Integration of Cloud Computing and IoT. Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/9781032656694-19.

8. Rele, Mayur, and Dipti Patil. "Machine Learning-Powered Cloud-Based Text Summarization." In Evolutionary Artificial Intelligence. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-8438-1_4.

9. ElShawi, Radwa. "Distributed and Cloud-Based Automated Machine Learning." In Encyclopedia of Big Data Technologies. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-319-63962-8_338-1.

10. Pranav, Prashant, Naela Rizvi, Naghma Khatoon, and Sharmistha Roy. "A Review on SLA-Based Resource Provisioning in Cloud." In Machine Learning and Information Processing. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-1884-3_41.

Conference papers on the topic "Cloud-Based Machine Learning"

1. Semwal, Arpit, Xiaofeng Yue, Yuzhe Shen, and Michal Aibin. "Cloud Resource Allocation Recommendation Based on Machine Learning." In 2024 24th International Conference on Transparent Optical Networks (ICTON). IEEE, 2024. http://dx.doi.org/10.1109/icton62926.2024.10647993.

2. Rachani, Raj Mahendrakumar, and Roopa Ravish. "Cloud Based IDS-Anomaly Detection Using Machine Learning." In 2025 1st International Conference on AIML-Applications for Engineering & Technology (ICAET). IEEE, 2025. https://doi.org/10.1109/icaet63349.2025.10932232.

3. Anas, Mohd, Shahadat Hussain, and Shahnawaz Ahmad. "Secure Cloud-Based Diabetes Classification with Machine Learning." In 2024 International Conference on Artificial Intelligence and Emerging Technology (Global AI Summit). IEEE, 2024. https://doi.org/10.1109/globalaisummit62156.2024.10947906.

4. Sathyanathan, S., S. Nithika Sree, F. Sophiya Theresa, S. Vaishali, and S. Vanathi. "Cloud Based Private Authorisation Scheme Oriented Service Providers." In 2025 International Conference on Machine Learning and Autonomous Systems (ICMLAS). IEEE, 2025. https://doi.org/10.1109/icmlas64557.2025.10968046.

5. Malviya, Ashwini, Raman Batra, and Vyshnavi A. "Extreme Learning Machine for Cloud-Based Breast Cancer Diagnosis." In 2024 1st International Conference on Sustainable Computing and Integrated Communication in Changing Landscape of AI (ICSCAI). IEEE, 2024. https://doi.org/10.1109/icscai61790.2024.10866618.

6. Zeng, Shou-Zhen, Jia-Xing Gu, and Shyi-Ming Chen. "Optimizing Overseas Warehouse Site Selection Based on Cloud-Evidence Theory-Topsis Model." In 2024 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2024. https://doi.org/10.1109/icmlc63072.2024.10935135.

7. Sherpa, Lincoln, Sima Attar-Khorasani, Rajasekar Sankar, Ralph Müller-Pfefferkorn, and Siavash Ghiasvand. "KEENsight: Cloud Based Collaborative Environment for Streamlining Machine Learning Development." In 2024 15th International Conference on Information, Intelligence, Systems & Applications (IISA). IEEE, 2024. https://doi.org/10.1109/iisa62523.2024.10786688.

8. Harish, G. N., and H. S. Annapurna. "Survey on Machine Learning Based Anomaly Detection in Cloud Networks." In 2024 International Conference on Knowledge Engineering and Communication Systems (ICKECS). IEEE, 2024. http://dx.doi.org/10.1109/ickecs61492.2024.10617173.

9. Salem, Mohammad Sadegh. "Predicting Student Academic Success Using Cloud-Based Machine Learning Algorithms." In 19th International Technology, Education and Development Conference. IATED, 2025. https://doi.org/10.21125/inted.2025.1944.

10. Dattangire, Rahul, Rushikesh Burle, Divya Biradar, and Leelkanth Dewangan. "Machine Learning-Based Security for Cloud Computing Challenges and Implications." In 2024 IEEE North Karnataka Subsection Flagship International Conference (NKCon). IEEE, 2024. https://doi.org/10.1109/nkcon62728.2024.10774633.

Reports on the topic "Cloud-Based Machine Learning"

1. Papadakis, Stamatios, Арнольд Юхимович Ків, Hennadiy M. Kravtsov, et al. Revolutionizing Education: Using Computer Simulation and Cloud-Based Smart Technology to Facilitate Successful Open Learning. Криворізький державний педагогічний університет, 2023. http://dx.doi.org/10.31812/123456789/7375.

Abstract:
The article presents the proceedings of two workshops: Cloud-based Smart Technologies for Open Education Workshop (CSTOE 2022) and Illia O. Teplytskyi Workshop on Computer Simulation in Education (CoSinE 2022) held in Kyiv, Ukraine, on December 22, 2022. The CoSinE workshop focuses on computer simulation in education, including topics such as computer simulation in STEM education, AI in education, and modeling systems in education. The CSTOE workshop deals with cloud-based learning resources, platforms, and infrastructures, with topics including personalized learning and research environment design, big data and smart data in open education and research, machine learning for open education and research, and more. The article includes a summary of successful cases and provides directions for future research in each workshop’s respective topics of interest. The proceedings consist of several peer-reviewed papers that present a state-of-the-art overview and provide guidelines for future research. The joint program committee consisted of members from universities and research institutions worldwide.

2. Pasupuleti, Murali Krishna. Securing AI-driven Infrastructure: Advanced Cybersecurity Frameworks for Cloud and Edge Computing Environments. National Education Services, 2025. https://doi.org/10.62311/nesx/rrv225.

Abstract:
The rapid adoption of artificial intelligence (AI) in cloud and edge computing environments has transformed industries by enabling large-scale automation, real-time analytics, and intelligent decision-making. However, the increasing reliance on AI-powered infrastructures introduces significant cybersecurity challenges, including adversarial attacks, data privacy risks, and vulnerabilities in AI model supply chains. This research explores advanced cybersecurity frameworks tailored to protect AI-driven cloud and edge computing environments. It investigates AI-specific security threats, such as adversarial machine learning, model poisoning, and API exploitation, while analyzing AI-powered cybersecurity techniques for threat detection, anomaly prediction, and zero-trust security. The study also examines the role of cryptographic solutions, including homomorphic encryption, federated learning security, and post-quantum cryptography, in safeguarding AI models and data integrity. By integrating AI with cutting-edge cybersecurity strategies, this research aims to enhance resilience, compliance, and trust in AI-driven infrastructures. Future advancements in AI security, blockchain-based authentication, and quantum-enhanced cryptographic solutions will be critical in securing next-generation AI applications in cloud and edge environments.
Keywords: AI security, adversarial machine learning, cloud computing security, edge computing security, zero-trust AI, homomorphic encryption, federated learning security, post-quantum cryptography, blockchain for AI security, AI-driven threat detection, model poisoning attacks, anomaly prediction, cyber resilience, decentralized AI security, secure multi-party computation (SMPC).

3. Hovakimyan, Naira, Hunmin Kim, Wenbin Wan, and Chuyuan Tao. Safe Operation of Connected Vehicles in Complex and Unforeseen Environments. Illinois Center for Transportation, 2022. http://dx.doi.org/10.36501/0197-9191/22-016.

Abstract:
Autonomous vehicles (AVs) have great potential to transform the way we live and work, significantly reducing traffic accidents and harmful emissions on the one hand and enhancing travel efficiency and fuel economy on the other. Nevertheless, the safe and efficient control of AVs is still challenging because AVs operate in dynamic environments with unforeseen challenges. This project aimed to advance the state of the art by designing a proactive/reactive adaptation and learning architecture for connected vehicles, unifying techniques in spatiotemporal data fusion, machine learning, and robust adaptive control. On the proactive level, by leveraging data shared over a cloud network available to all entities, vehicles adapted to new environments, coping with large-scale environmental changes. On the reactive level, control-barrier-function-based robust adaptive control with machine learning improved performance around nominal models, providing performance and control certificates. The proposed research laid a robust foundation for autonomous driving on cloud-connected highways of the future.

4. Ma, Po Lun, and Panagiotis Stinis. Developing a Simulator-Based Satellite Dataset for Using Machine Learning Techniques to Derive Aerosol-Cloud-Precipitation Interactions in Models and Observations in a Consistent Framework. Office of Scientific and Technical Information (OSTI), 2020. http://dx.doi.org/10.2172/1984697.

5. Nguyen, Kim, and Jonathan Hambur. Adoption of Emerging Digital General-Purpose Technologies: Determinants and Effects. Reserve Bank of Australia, 2023. http://dx.doi.org/10.47688/rdp2023-10.

Abstract:
This paper examines the factors associated with the adoption of cloud computing and artificial intelligence/machine learning, two emerging digital general-purpose technologies (GPT), as well as firms' post-adoption outcomes. To do so we identify adoption of GPT based on references to these technologies in listed company reports, and merge this with data on their Board of Directors, their hiring activities and their financial performance. We find that firms that have directors with relevant technological backgrounds, or female representation on their Board, are more likely to profitably adopt GPT, with the former being particularly important. Worker skills also appear important, with firms that adopt GPT, particularly those that do so profitably, being more likely to hire skilled staff following adoption. Finally, while early adopters of GPT experience a dip in profitability following adoption, this is not evident for more recent adopters. This suggests that GPT may have become easier to adopt over time, potentially due to changes in the technologies or the availability of relevant skills, which is encouraging in terms of future productivity outcomes.
APA, Harvard, Vancouver, ISO, and other styles
6

Kong, Zhihao, and Na Lu. Field Implementation of Concrete Strength Sensor to Determine Optimal Traffic Opening Time. Purdue University, 2024. http://dx.doi.org/10.5703/1288284317724.

Full text
Abstract:
In the fast-paced and time-sensitive fields of construction and concrete production, real-time monitoring of concrete strength is crucial. Traditional testing methods, such as hydraulic compression (ASTM C 39) and maturity methods (ASTM C 1074), are often laborious and challenging to implement on-site. Building on prior research (SPR 4210 and SPR 4513), we have advanced the electromechanical impedance (EMI) technique for in-situ concrete strength monitoring, essential for determining safe traffic opening times. These projects have made significant strides in technology, including the development of an IoT-based hardware system for wireless data collection and a cloud-based platform for efficient data processing. A key innovation is the integration of machine learning tools, which not only enhance immediate strength predictions but also facilitate long-term projections vital for maintenance and asset management. To bring this technology to practical use, we collaborated with third-party manufacturers to set up a production line for the sensor and datalogger assembly. The system was extensively tested in various field scenarios, including pavements, patches, and bridge decks. Our refined signal-processing algorithms achieve a mean absolute percentage error (MAPE) of 16%, comparable to the 14% interlaboratory variance of ASTM C39, demonstrating reliable accuracy. Additionally, we have developed a comprehensive user manual to aid field engineers in deploying, connecting, and maintaining the sensing system, paving the way for broader implementation in real-world construction settings.
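As a point of reference for the accuracy claim, MAPE, the metric the abstract benchmarks at 16% against ASTM C39's ~14% interlaboratory variance, is computed as in the sketch below; the strength values are invented for illustration, not taken from the study.

```python
# Minimal sketch: mean absolute percentage error (MAPE), the accuracy metric
# cited in the abstract. The example strengths (MPa) are invented.
import numpy as np

def mape(y_true, y_pred) -> float:
    """MAPE = (100/n) * sum(|y_i - yhat_i| / |y_i|), in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs(y_true - y_pred) / np.abs(y_true))

measured = np.array([18.5, 22.1, 27.9, 31.4])   # ASTM C39 cylinder breaks
predicted = np.array([20.3, 21.0, 25.1, 33.8])  # EMI/ML-based estimates

print(f"MAPE = {mape(measured, predicted):.1f}%")  # ~8.1% in this toy example
```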
APA, Harvard, Vancouver, ISO, and other styles
7

Kong, Zhihao, and Na Lu. Determining Optimal Traffic Opening Time Through Concrete Strength Monitoring: Wireless Sensing. Purdue University, 2023. http://dx.doi.org/10.5703/1288284317613.

Full text
Abstract:
Construction and concrete production are time-sensitive and fast-paced; as such, it is crucial to monitor the in-place strength development of concrete structures in real time. Existing concrete strength testing methods, such as the traditional hydraulic compression method specified by ASTM C 39 and the maturity method specified by ASTM C 1074, are labor-intensive, time-consuming, and difficult to implement in the field. INDOT's previous research (SPR-4210) on the electromechanical impedance (EMI) technique established its feasibility for monitoring in-situ concrete strength to determine the optimal traffic opening time. However, limitations of the data acquisition and communication systems significantly hindered the technology's adoption for practical applications, and the packaging of the piezoelectric sensor needed improvement to enable robust performance and better signal quality. In this project, a wireless concrete sensor with a data transmission system was developed, comprising an innovative EMI sensor and a miniaturized datalogger with both a wireless transmission and a USB module. A cloud-based platform for data storage and computation was established, providing real-time data visualization for general users and data access for machine learning and data mining developers. Furthermore, field implementations were performed to demonstrate the functionality of the EMI sensor and wireless sensing system for real-time, in-place concrete strength monitoring. This project will benefit DOTs in areas such as construction, operation, maintenance scheduling, and asset management by delivering applicable concrete strength monitoring solutions.
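The wireless data path the abstract describes, a datalogger pushing EMI readings to a cloud platform for storage and downstream machine learning, might look like the following sketch; the endpoint URL, payload fields, and device ID are hypothetical, since the report does not publish its API.

```python
# Minimal sketch of the datalogger-to-cloud path the abstract describes:
# an EMI reading is posted to a cloud platform for storage and ML processing.
# The endpoint URL, payload fields, and device ID are hypothetical.
import time
import requests

INGEST_URL = "https://example-emi-cloud.org/api/v1/readings"  # hypothetical

def upload_reading(device_id: str, frequency_hz: list, conductance_s: list) -> bool:
    payload = {
        "device_id": device_id,
        "timestamp": time.time(),
        "frequency_hz": frequency_hz,      # EMI sweep frequencies
        "conductance_s": conductance_s,    # measured conductance signature
    }
    try:
        resp = requests.post(INGEST_URL, json=payload, timeout=10)
        return resp.status_code == 200
    except requests.RequestException:
        return False  # a real datalogger would retry or buffer locally

if __name__ == "__main__":
    ok = upload_reading("sensor-001", [10_000, 20_000, 30_000], [0.012, 0.015, 0.011])
    print("uploaded" if ok else "buffered for retry")
```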
APA, Harvard, Vancouver, ISO, and other styles
8

Modlo, Yevhenii O., Serhiy O. Semerikov, Ruslan P. Shajda, Stanislav T. Tolmachev, and Oksana M. Markova. Methods of using mobile Internet devices in the formation of the general professional component of bachelor in electromechanics competency in modeling of technical objects. [б. в.], 2020. http://dx.doi.org/10.31812/123456789/3878.

Full text
Abstract:
The article describes the components of the methods of using mobile Internet devices in forming the general professional component of the bachelor of electromechanics' competency in modeling technical objects: using various methods of representing models, solving professional problems using ICT, competence in electric machines, and critical thinking. Drawing on the content of the academic disciplines "Higher Mathematics", "Automatic Control Theory", "Modeling of Electromechanical Systems", and "Electrical Machines", the article shows how Scilab, SageCell, Google Sheets, and Xcos on Cloud can be used in forming this component. It concludes that the following software for mobile Internet devices is advisable: cloud-based spreadsheets as modeling tools (including for neural networks), visual modeling systems as a means of structural modeling of technical objects, a mobile computer mathematical system usable at all stages of modeling, and mobile communication tools for organizing joint modeling activities.
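To make the modeling activity concrete: a DC-motor model of the kind these disciplines cover, and that Xcos or SageCell would host, can be sketched in a few lines. The parameter values below are illustrative, not taken from the article.

```python
# Minimal sketch: DC motor model of the kind the abstract's disciplines
# ("Electrical Machines", "Modeling of Electromechanical Systems") have
# students build in Xcos/SageCell. Parameter values are illustrative.
import numpy as np

# Motor parameters: resistance, inductance, torque and back-EMF constants,
# rotor inertia, viscous friction.
R, L, Kt, Ke, J, b = 1.0, 0.5, 0.01, 0.01, 0.01, 0.1
V = 12.0                      # applied voltage (step input)
dt, steps = 1e-4, 50_000      # Euler time step and horizon (5 s)

i, omega = 0.0, 0.0           # armature current (A), shaft speed (rad/s)
for _ in range(steps):
    di = (V - R * i - Ke * omega) / L       # electrical dynamics
    domega = (Kt * i - b * omega) / J       # mechanical dynamics
    i, omega = i + dt * di, omega + dt * domega

print(f"steady-state speed ≈ {omega:.2f} rad/s")  # ≈ Kt*V / (b*R + Kt*Ke)
```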
APA, Harvard, Vancouver, ISO, and other styles
9

Modlo, Yevhenii O., Serhiy O. Semerikov, Stanislav L. Bondarevskyi, Stanislav T. Tolmachev, Oksana M. Markova, and Pavlo P. Nechypurenko. Methods of using mobile Internet devices in the formation of the general scientific component of bachelor in electromechanics competency in modeling of technical objects. [б. в.], 2020. http://dx.doi.org/10.31812/123456789/3677.

Full text
Abstract:
An analysis of the experience of professional training of bachelors of electromechanics in Ukraine and abroad shows that one of the leading trends in its modernization is the synergistic integration of various engineering branches (mechanical, electrical, electronic engineering, and automation) into mechatronics, for the design, manufacture, operation, and maintenance of electromechanical equipment. Teaching mechatronics calls for the meaningful integration of the various disciplines of professional and practical training of bachelors of electromechanics around the concept of modeling, and for the technological integration of various organizational forms and teaching methods around the concept of mobility. Within this approach, the leading learning tools for bachelors of electromechanics are mobile Internet devices (MIDs): multimedia mobile devices that provide wireless access to information and communication Internet services for collecting, organizing, storing, processing, transmitting, and presenting all kinds of messages and data.

The authors outline the main possibilities of using MIDs in learning: ensuring equal access to education, personalizing learning, providing instant feedback and evaluation of learning outcomes, enabling mobile learning, making productive use of time spent in classrooms, creating mobile learning communities, supporting situated learning, developing continuous seamless learning, bridging the gap between formal and informal learning, minimizing educational disruption in conflict and disaster areas, assisting learners with disabilities, improving the quality of communication and institutional management, and maximizing cost-efficiency.

A bachelor of electromechanics' competency in modeling technical objects is a personal and vocational ability comprising a system of knowledge, skills, and experience in learning and research activities on modeling mechatronic systems, together with a positive value attitude towards it; the bachelor of electromechanics should be ready and able to use methods and software/hardware modeling tools for analyzing processes, synthesizing systems, and evaluating their reliability and effectiveness in solving practical problems in the professional field. The competency structure for modeling technical objects comprises three groups of competencies: general scientific, general professional, and specialized professional. The technique of using MIDs in teaching bachelors of electromechanics to model technical objects is implemented through partial methods for using MIDs in forming the general scientific component of this competency, illustrated with the academic disciplines "Higher Mathematics", "Computers and Programming", "Engineering Mechanics", and "Electrical Machines". The leading tools for forming this component are mobile augmented reality tools (to visualize objects' structure and modeling results), mobile computer mathematical systems (universal tools used at all stages of modeling learning), cloud-based spreadsheets and text editors (as modeling tools and for writing the program description of a model), mobile computer-aided design systems (to create and view the physical properties of models of technical objects), and mobile communication tools (to organize joint modeling activities).
APA, Harvard, Vancouver, ISO, and other styles