
Journal articles on the topic 'Cloud GPUs'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Cloud GPUs.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Jo, Heeseung, Jinkyu Jeong, Myoungho Lee, and Dong Hoon Choi. "Exploiting GPUs in Virtual Machine for BioCloud." BioMed Research International 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/939460.

Abstract:
Recently, biological applications have started to be reimplemented to exploit the many cores of GPUs for better computational performance. Therefore, by providing virtualized GPUs to VMs in a cloud computing environment, many biological applications will willingly move into the cloud to enhance their computational performance and utilize near-infinite cloud computing resources while reducing computation expenses. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Because much of the previous research has focused on the sha
2

Munk, Rasmus, David Marchant, and Brian Vinter. "Cloud enabling educational platforms with corc." CTE Workshop Proceedings 8 (March 19, 2021): 438–57. http://dx.doi.org/10.55056/cte.299.

Abstract:
In this paper, it is shown how teaching platforms at educational institutions can utilize cloud platforms to scale a particular service, or gain access to compute instances with accelerator capability such as GPUs. Specifically at the University of Copenhagen (UCPH), it is demonstrated how the internal JupyterHub service, named Data Analysis Gateway (DAG), could utilize compute resources in the Oracle Cloud Infrastructure (OCI). This is achieved by utilizing the introduced Cloud Orchestrator (corc) framework, in conjunction with the novel JupyterHub spawner named MultipleSpawner. Through this
3

Anzalone, Anna, Antonio Pagliaro, and Antonio Tutone. "An Introduction to Machine and Deep Learning Methods for Cloud Masking Applications." Applied Sciences 14, no. 7 (2024): 2887. http://dx.doi.org/10.3390/app14072887.

Abstract:
Cloud cover assessment is crucial for meteorology, Earth observation, and environmental monitoring, providing valuable data for weather forecasting, climate modeling, and remote sensing activities. Depending on the specific purpose, identifying and accounting for pixels affected by clouds is essential in spectral remote sensing imagery. In applications such as land monitoring and various remote sensing activities, detecting/removing cloud-contaminated pixels is crucial to ensuring the accuracy of advanced processing of satellite imagery. Typically, the objective of cloud masking is to produce
4

Shi, X. "ELASTIC CLOUD COMPUTING ARCHITECTURE AND SYSTEM FOR HETEROGENEOUS SPATIOTEMPORAL COMPUTING." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-4/W2 (October 19, 2017): 115–19. http://dx.doi.org/10.5194/isprs-annals-iv-4-w2-115-2017.

Abstract:
Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful on certain kind of
5

Lin, Yu-Shiang, Chun-Yuan Lin, Hsiao-Chieh Chi, and Yeh-Ching Chung. "Multiple Sequence Alignments with Regular Expression Constraints on a Cloud Service System." International Journal of Grid and High Performance Computing 5, no. 3 (2013): 55–64. http://dx.doi.org/10.4018/jghpc.2013070105.

Abstract:
Multiple sequence alignments with constraints are of priority concern in computational biology. Constrained sequence alignment incorporates the domain knowledge of biologists into sequence alignments such that the user-specified residues/segments are aligned together according to the alignment results. A series of constrained multiple sequence alignment tools have been developed in the literature over the recent decade. GPU-REMuSiC is the most advanced method supporting regular expression constraints, in which graphics processing units (GPUs) with CUDA are used. GPU-REMuSiC can achieve a spe
6

Lahiff, Andrew, Shaun de Witt, Miguel Caballer, Giuseppe La Rocca, Stanislas Pamela, and David Coster. "Running HTC and HPC applications opportunistically across private, academic and public clouds." EPJ Web of Conferences 245 (2020): 07032. http://dx.doi.org/10.1051/epjconf/202024507032.

Abstract:
The Fusion Science Demonstrator in the European Open Science Cloud for Research Pilot Project aimed to demonstrate that the fusion community can make use of distributed cloud resources. We developed a platform, Prominence, which enables users to transparently exploit idle cloud resources for running scientific workloads. In addition to standard HTC jobs, HPC jobs such as multi-node MPI are supported. All jobs are run in containers to ensure they will reliably run anywhere and are reproducible. Cloud infrastructure is invisible to users, as all provisioning, including extensive failure handlin
7

Lulla, Karan, Reena Chandra, and Karthik Sirigiri. "Proxy-Based Thermal and Acoustic Evaluation of Cloud GPUs for AI Training Workloads." American Journal of Applied Sciences 7, no. 07 (2025): 111–27. https://doi.org/10.37547/tajas/volume07issue07-12.

Abstract:
The use of cloud-based Graphics Processing Units (GPUs) to train and deploy Deep Learning models has grown rapidly in importance, along with the demand to learn more about their thermal and acoustic behavior under real-world workloads. A typical cloud environment does not expose direct telemetry such as temperature, fan speed, or acoustic emissions. To overcome such shortcomings, this study quantifies the thermal and acoustic output of GPU workloads with a proxy-based model derived from available metrics such as GPU utilization, memory provisioning, power consumption, and empirical Thermal Design Power (TDP) values. They com
8

Liang, Tyng-Yeu, and You-Jie Li. "A Mobile Cloud Computing System for Mathematical Computation." International Journal of Grid and High Performance Computing 7, no. 3 (2015): 36–53. http://dx.doi.org/10.4018/ijghpc.2015070103.

Abstract:
This paper proposes a mobile cloud computing system called M2C (Mobile Math Cloud). This system provides users with an app to accelerate the execution of MATLAB instructions and scripts on their Android-based mobile devices by taking advantage of the diverse processors, including CPUs and GPUs, available in clouds. On the other hand, it supports time-sharing license management to reduce the time users wait for system services and to increase the resource utilization of clouds. Moreover, it supports parallel computing and optimal resource configurations for maximizing the performance of us
9

Chevuri, Rajeev Reddy. "The Role of GPUs in Accelerating Machine Learning Workloads." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 11, no. 2 (2025): 2676–84. https://doi.org/10.32628/cseit251127424.

Abstract:
This article presents a comprehensive overview of Graphics Processing Units (GPUs) and their transformative role in accelerating machine learning workloads. Starting with an explanation of the fundamental architectural differences between GPUs and CPUs, the article explores how the parallel processing capabilities of GPUs enable dramatic improvements in training deep learning models. The discussion covers GPU applications across convolutional neural networks, transformer architectures, and multi-GPU training strategies. Beyond training, the article examines GPU acceleration in inference, scien
10

Nguyen, Anh, Abraham Monrroy Cano, Masato Edahiro, and Shinpei Kato. "GPU-Accelerated 3D Normal Distributions Transform." Journal of Robotics and Mechatronics 35, no. 2 (2023): 445–59. http://dx.doi.org/10.20965/jrm.2023.p0445.

Abstract:
The three-dimensional (3D) normal distributions transform (NDT) is a popular scan registration method for 3D point cloud datasets. It has been widely used in sensor-based localization and mapping applications. However, the NDT cannot entirely utilize the computing power of modern many-core processors, such as graphics processing units (GPUs), because of the NDT’s linear nature. In this study, we investigated the use of NVIDIA’s GPUs and their programming platform called compute unified device architecture (CUDA) to accelerate the NDT algorithm. We proposed a design and implementation of our GP
11

Lulla, Karan. "Designing Fault-Tolerant Test Infrastructure for Large-Scale GPU Manufacturing." International Journal of Signal Processing, Embedded Systems and VLSI Design 5, no. 1 (2025): 35–61. https://doi.org/10.55640/ijvsli-05-01-04.

Abstract:
In the modern digital economy, the computational requirements of high-stakes industries such as finance, real estate, retail, and cloud computing must be met by Graphics Processing Units (GPUs). The reliability and performance of such GPUs are critical, as small failures can cause large-scale business disruptions and financial losses. This paper examines the architectural and methodological models for designing a fault-tolerant test infrastructure in the large-scale production of GPUs. It highlights the requirements of redundancy, modularity, real-time monitoring, and automated error check prototypi
12

Kwon, Seokmin, and Hyokyung Bahn. "Enhanced Scheduling of AI Applications in Multi-Tenant Cloud Using Genetic Optimizations." Applied Sciences 14, no. 11 (2024): 4697. http://dx.doi.org/10.3390/app14114697.

Abstract:
The artificial intelligence (AI) industry is increasingly integrating with diverse sectors such as smart logistics, FinTech, entertainment, and cloud computing. This expansion has led to the coexistence of heterogeneous applications within multi-tenant systems, presenting significant scheduling challenges. This paper addresses these challenges by exploring the scheduling of various machine learning workloads in large-scale, multi-tenant cloud systems that utilize heterogeneous GPUs. Traditional scheduling strategies often struggle to achieve satisfactory results due to low GPU utilization in t
13

Lee, Sheng-Ta, Chun-Yuan Lin, and Che Lun Hung. "GPU-Based Cloud Service for Smith-Waterman Algorithm Using Frequency Distance Filtration Scheme." BioMed Research International 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/721738.

Abstract:
As the conventional means of analyzing the similarity between a query sequence and database sequences, the Smith-Waterman algorithm is feasible for a database search owing to its high sensitivity. However, this algorithm is still quite time consuming. CUDA programming can improve computational efficiency by using the computational power of massively parallel hardware such as graphics processing units (GPUs). This work presents a novel Smith-Waterman algorithm with a frequency-based filtration method on GPUs rather than merely accelerating the comparisons yet expending computational resources to hand
14

Mielikainen, J., B. Huang, H. L. A. Huang, M. D. Goldberg, and A. Mehta. "Speeding Up the Computation of WRF Double-Moment 6-Class Microphysics Scheme with GPU." Journal of Atmospheric and Oceanic Technology 30, no. 12 (2013): 2896–906. http://dx.doi.org/10.1175/jtech-d-12-00218.1.

Abstract:
The Weather Research and Forecasting model (WRF) double-moment 6-class microphysics scheme (WDM6) implements a double-moment bulk microphysical parameterization of clouds and precipitation and is applicable in mesoscale and general circulation models. WDM6 extends the WRF single-moment 6-class microphysics scheme (WSM6) by incorporating the number concentrations for cloud and rainwater along with a prognostic variable of cloud condensation nuclei (CCN) number concentration. Moreover, it predicts the mixing ratios of six water species (water vapor, cloud droplets, cloud ice, snow, rain
15

Garg, Yugansh, and Bhavana Gupta. "Performance Analysis and Comparison of the Micro Virtual Machines Provided by the Top Cloud Vendors." International Journal of Research Publication and Reviews 04, no. 02 (2023): 1326–33. http://dx.doi.org/10.55248/gengpi.2023.4229.

Abstract:
Public clouds are the backbone of the modern IT industry. Modern tech start-ups, too, are heavily dependent on public clouds to save capital expenditure. Public clouds offer a wide range of virtual infrastructure, from single-core 0.25 GB RAM virtual machines to 128-core 3904 GB RAM virtual servers aided with GPUs. Apart from cost, the performance of the virtual machines can impact the services offered. The paper analyses the performance of the light virtual machines provided by the different cloud vendors and maps their ability to handle a certain set of tasks.
16

Dziekan, Piotr, and Piotr Zmijewski. "University of Warsaw Lagrangian Cloud Model (UWLCM) 2.0: adaptation of a mixed Eulerian–Lagrangian numerical model for heterogeneous computing clusters." Geoscientific Model Development 15, no. 11 (2022): 4489–501. http://dx.doi.org/10.5194/gmd-15-4489-2022.

Abstract:
A numerical cloud model with Lagrangian particles coupled to an Eulerian flow is adapted for distributed memory systems. Eulerian and Lagrangian calculations can be done in parallel on CPUs and GPUs, respectively. The fraction of time when CPUs and GPUs work simultaneously is maximized at around 80 % for an optimal ratio of CPU and GPU workloads. The optimal ratio of workloads is different for different systems because it depends on the relation between computing performance of CPUs and GPUs. GPU workload can be adjusted by changing the number of Lagrangian particles, which is limite
17

Kang, Jihun, Hwamin Lee, and Daewon Lee. "Pod Placement Techniques to Avoid Job Failures Due to Low GPU Memory in a Kubernetes Environment with Shared GPUs." International Journal on Advanced Science, Engineering and Information Technology 14, no. 5 (2024): 1626–32. http://dx.doi.org/10.18517/ijaseit.14.5.11589.

Abstract:
In a container-based cloud environment, GPUs have the advantage of providing high-performance computation to multiple users, and through GPU sharing, more GPU container users can be accommodated than the number of physical GPUs. This increases resource utilization and minimizes idle time. However, the extended resources used to share GPUs in Kubernetes do not partition GPU resources or limit usage; they only logically increase the number of GPUs that can be recognized. Therefore, usage limits and equal use of GPU resources cannot be guaranteed among pods sharing GPUs. Additionally, GPU memory genera
18

Alnori, Abdulaziz, Karim Djemame, and Yousef Alsenani. "Agnostic Energy Consumption Models for Heterogeneous GPUs in Cloud Computing." Applied Sciences 14, no. 6 (2024): 2385. http://dx.doi.org/10.3390/app14062385.

Abstract:
The adoption of cloud computing has grown significantly among individuals and organizations. In line with this growth, cloud service providers have continuously expanded and updated cloud-computing infrastructures, which have become more heterogeneous. Managing these heterogeneous resources in cloud infrastructures while ensuring Quality of Service (QoS) and minimizing energy consumption is a prominent challenge. Therefore, unifying energy consumption models to deal with heterogeneous cloud environments is essential in order to efficiently manage these resources. This paper deeply analyzes
19

Mahmoudi, Sidi Ahmed, Mohammed Amin Belarbi, El Wardani Dadi, Saïd Mahmoudi, and Mohammed Benjelloun. "Cloud-Based Image Retrieval Using GPU Platforms." Computers 8, no. 2 (2019): 48. http://dx.doi.org/10.3390/computers8020048.

Abstract:
The process of image retrieval presents an interesting tool for different domains related to computer vision, such as multimedia retrieval, pattern recognition, medical imaging, video surveillance and movement analysis. Visual characteristics of images such as color, texture and shape are used to identify the content of images. However, the retrieval process becomes very challenging due to the difficulty of managing large databases in terms of storage, computational complexity, temporal performance and similarity representation. In this paper, we propose a cloud-based platform in which we integrate
20

Ibrahim, Ahmed Hosny, Hossam El Deen Mostafa Faheem, Youssef Bassyouni Mahdy, and Abdel Rahman Hedar. "Resource allocation algorithm for GPUs in a private cloud." International Journal of Cloud Computing 5, no. 1/2 (2016): 45. http://dx.doi.org/10.1504/ijcc.2016.075094.

21

Nguyen, Anh, Abraham Monrroy Cano, Masato Edahiro, and Shinpei Kato. "Fast Euclidean Cluster Extraction Using GPUs." Journal of Robotics and Mechatronics 32, no. 3 (2020): 548–60. http://dx.doi.org/10.20965/jrm.2020.p0548.

Abstract:
Clustering is the task of dividing an input dataset into groups of objects based on their similarity. This process is frequently required in many applications. However, it is computationally expensive when running on traditional CPUs due to the large number of connections and objects the system needs to inspect. In this paper, we investigate the use of NVIDIA graphics processing units and their programming platform CUDA in the acceleration of the Euclidean clustering (EC) process in autonomous driving systems. We propose GPU-accelerated algorithms for the EC problem on point cloud datasets, op
22

Kakade, Manoj Subhash, Anupama Karuppiah, Mayank Mathur, et al. "Multitask Scheduling on Distributed Cloudlet System Built Using SoCs." Journal of Systemics, Cybernetics and Informatics 21, no. 1 (2023): 61–72. http://dx.doi.org/10.54808/jsci.21.01.61.

Abstract:
With the emergence of IoT, new computing paradigms have also emerged. Initial IoT systems had all the computing happening on the cloud. With the emergence of Industry 4.0 and IoT being the major building block, clouds are not the only solution for data storage and analytics. Cloudlet, Fog Computing, Edge Computing, and Dew Computing models are now available, providing similar capabilities as the cloud. The term cloudlet was introduced first in 2011, but research in this area has picked up only over the past five years. Unlike clouds, which are built with powerful server-class machines and GPUs
23

Expósito, Roberto R., Guillermo L. Taboada, Sabela Ramos, Juan Touriño, and Ramón Doallo. "General-purpose computation on GPUs for high performance cloud computing." Concurrency and Computation: Practice and Experience 25, no. 12 (2012): 1628–42. http://dx.doi.org/10.1002/cpe.2845.

24

Timokhin, P. Yu, and M. V. Mikhailyuk. "The Method to Order Point Clouds for Visualization on the Ray Tracing Pipeline." Programmirovanie, no. 3 (November 28, 2024): 42–53. http://dx.doi.org/10.31857/s0132347424030054.

Abstract:
Currently, the digitization of environment objects (vegetation, terrain, architectural structures, etc.) in the form of point clouds is actively developing. The integration of such digitized objects into virtual environment systems allows the quality of the modeled environment to be improved, but requires efficient methods and algorithms for real-time visualization of large point volumes. In this paper the solution of this task on modern multicore GPUs with support of hardware-accelerated ray tracing is researched. A modified method is proposed where the original unordered point cloud is split
25

Charan, Shankar Kummarapurugu. "GPU Acceleration Techniques for Optimizing AI-ML Inference in the Cloud." INTERNATIONAL JOURNAL OF INNOVATIVE RESEARCH AND CREATIVE TECHNOLOGY 8, no. 6 (2022): 1–7. https://doi.org/10.5281/zenodo.14183964.

Abstract:
The demand for real-time Artificial Intelligence (AI) and Machine Learning (ML) inference in cloud environments has grown substantially in recent years. However, delivering high-performance inference at scale remains a challenge due to the computational intensity of AI/ML workloads. General-purpose CPUs often struggle to meet the latency and throughput requirements of modern AI/ML applications. This paper explores the application of Graphics Processing Units (GPUs) to accelerate inference tasks, particularly in cloud environments, where dynamic and scalable resources are essential. We rev
26

Anderlini, Lucio, Tommaso Boccali, Stefano Dal Pra, et al. "ML_INFN project: Status report and future perspectives." EPJ Web of Conferences 295 (2024): 08013. http://dx.doi.org/10.1051/epjconf/202429508013.

Abstract:
The ML_INFN initiative (“Machine Learning at INFN”) is an effort to foster Machine Learning (ML) activities at the Italian National Institute for Nuclear Physics (INFN). In recent years, artificial intelligence inspired activities have flourished bottom-up in many efforts in Physics, both at the experimental and theoretical level. Many researchers have procured desktop-level devices, with consumer-oriented GPUs, and have trained themselves in a variety of ways, from webinars, books, and tutorials. ML_INFN aims to help and systematize such effort, in multiple ways: by offering state-of-the-art
27

Jegadeesh, Abirami Dasu, and Gaurav Samdani. "Artificial Intelligence joining forces with cloud computing: Pros and pitfalls." World Journal of Advanced Engineering Technology and Sciences 10, no. 1 (2023): 235–43. https://doi.org/10.30574/wjaets.2023.10.1.0269.

Abstract:
Artificial intelligence (AI) joining hands with cloud computing is shaking things up in tech, changing the game for companies and the way they do business. AI is super good at handling big piles of data, making tricky jobs run on their own, and figuring out what could happen next. It's getting a helping hand from cloud computing, which brings the kind of big-time infrastructure and brainpower needed for AI's heavy lifting. When you put them together, it's like a turbo boost for coming up with new ideas, making work smoother, and helping businesses find all sorts of extra value. AI's not
28

Vandebon, Jessica, Jose G. F. Coutinho, and Wayne Luk. "Scheduling Hardware-Accelerated Cloud Functions." Journal of Signal Processing Systems 93, no. 12 (2021): 1419–31. http://dx.doi.org/10.1007/s11265-021-01695-7.

Abstract:
This paper presents a Function-as-a-Service (FaaS) approach for deploying managed cloud functions onto heterogeneous cloud infrastructures. Current FaaS systems, such as AWS Lambda, allow domain-specific functionality, such as AI, HPC and image processing, to be deployed in the cloud while abstracting users from infrastructure and platform concerns. Existing approaches, however, use a single type of resource configuration to execute all function requests. In this paper, we present a novel FaaS approach that allows cloud functions to be effectively executed across heterogeneous compute
29

Mahender, Mula. "A Deep Learning Approach for Detecting Unusual Celestial Phenomena in Astronomical Datasets." International Scientific Journal of Engineering and Management 04, no. 06 (2025): 1–9. https://doi.org/10.55041/isjem04013.

Abstract:
Detection of Unusual Celestial Phenomena in Astronomical Datasets is a deep learning-powered project implemented using ResNet50, a Convolutional Neural Network (CNN) model, and TensorFlow. Utilizing these, it efficiently scans big open-source datasets such as NASA (National Aeronautics and Space Administration), SDSS (Sloan Digital Sky Survey), ESA (European Space Agency), etc., to identify asteroids, galaxies, exoplanets, supernovae, and other irregularities. Conventional human analysis is time-consuming and prone to errors, whereas current automated solutions are not affordable and
30

Xia, Jing. "Parallel Computing Mode in Homomorphic Encryption Using GPUs Acceleration in Cloud." Journal of Computers 14, no. 7 (2019): 451–69. http://dx.doi.org/10.17706/jcp.14.7.451-469.

31

Vernardos, G., and C. J. Fluke. "Adventures in the microlensing cloud: Large datasets, eResearch tools, and GPUs." Astronomy and Computing 6 (October 2014): 1–18. http://dx.doi.org/10.1016/j.ascom.2014.05.002.

32

Cardoso, Renato, Dejan Golubovic, Ignacio Peluaga Lozada, Ricardo Rocha, João Fernandes, and Sofia Vallecorsa. "Accelerating GAN training using highly parallel hardware on public cloud." EPJ Web of Conferences 251 (2021): 02073. http://dx.doi.org/10.1051/epjconf/202125102073.

Abstract:
With the increasing number of Machine and Deep Learning applications in High Energy Physics, easy access to dedicated infrastructure represents a requirement for fast and efficient R&D. This work explores different types of cloud services to train a Generative Adversarial Network (GAN) in a parallel environment, using the TensorFlow data-parallel strategy. More specifically, we parallelize the training process on multiple GPUs and Google Tensor Processing Units (TPUs), and we compare two algorithms: the TensorFlow built-in logic and a custom loop, optimised to have higher control of the elements
33

An, SangWoo, and Seog Chung Seo. "Highly Efficient Implementation of Block Ciphers on Graphic Processing Units for Massively Large Data." Applied Sciences 10, no. 11 (2020): 3711. http://dx.doi.org/10.3390/app10113711.

Abstract:
With the advent of IoT and cloud computing service technology, the size of user data to be managed and file data to be transmitted has increased significantly. To protect users' personal information, it is necessary to encrypt it in a secure and efficient way. Since servers handling a number of clients or IoT devices have to encrypt a large amount of data without compromising service capabilities in real time, Graphics Processing Units (GPUs) have been considered a proper candidate for a crypto accelerator for processing a huge amount of data in this situation. In this paper, we present h
34

Gettelman, Andrew, Hugh Morrison, Trude Eidhammer, et al. "Importance of ice nucleation and precipitation on climate with the Parameterization of Unified Microphysics Across Scales version 1 (PUMASv1)." Geoscientific Model Development 16, no. 6 (2023): 1735–54. http://dx.doi.org/10.5194/gmd-16-1735-2023.

Abstract:
Cloud microphysics is critical for weather and climate prediction. In this work, we document updates and corrections to the cloud microphysical scheme used in the Community Earth System Model (CESM) and other models. These updates include a new nomenclature for the scheme, now called Parameterization of Unified Microphysics Across Scales (PUMAS), and the ability to run the scheme on graphics processing units (GPUs). The main science changes include refactoring an ice number limiter and associated changes to ice nucleation, adding vapor deposition onto snow, and introducing an implici
35

Juvela, Mika. "SOC program for dust continuum radiative transfer." Astronomy & Astrophysics 622 (January 31, 2019): A79. http://dx.doi.org/10.1051/0004-6361/201834354.

Abstract:
Context. Thermal dust emission carries information on physical conditions and dust properties in many astronomical sources. Because observations represent a sum of emission along the line of sight, their interpretation often requires radiative transfer (RT) modelling. Aims. We describe a new RT program, SOC, for computations of dust emission, and examine its performance in simulations of interstellar clouds with external and internal heating. Methods. SOC implements the Monte Carlo RT method as a parallel program for shared-memory computers. It can be used to study dust extinction, scattering,
36

Jonnakuti, Srikanth. "Scalable NLP in the Enterprise: Training Transformer Models on Distributed Cloud GPUs." Journal of Science & Technology 2, no. 1 (2021): 444–55. https://doi.org/10.5281/zenodo.15347227.

Abstract:
This paper explores the large-scale deployment of transformer-based models, specifically BERT and its variants, for enterprise applications in customer service automation and legal document processing. It presents an in-depth analysis of strategies for training such models on distributed cloud-based GPU infrastructures, highlighting optimizations in data parallelism, model parallelism, and input pipeline design. Leveraging frameworks such as TensorFlow and PyTorch, along with orchestration via Kubernetes and Horovod, the paper examines techniques to achieve scalability, fault tolerance, and ef
37

Smith, Mary. "Call for Papers: EduPar 2024." ACM SIGCSE Bulletin 56, no. 1 (2024): 4–5. http://dx.doi.org/10.1145/3643836.3643839.

Abstract:
Parallel and Distributed Computing (PDC) is widespread in modern computing devices, such as PCs, laptops, and handheld devices, featuring multiple cores and GPUs. The dependence on web and cloud services and the rising demand for PDC solutions in addressing data-intensive challenges like Big Data highlight the importance of integrating PDC into computing curricula. The rapid advancements in PDC-related technologies present ongoing challenges in curriculum development, emphasizing the need to integrate PDC into existing and new courses seamlessly. This integration is essential to prepare stude
APA, Harvard, Vancouver, ISO, and other styles
38

Tang, Shidi, Ruiqi Chen, Mengru Lin, et al. "Accelerating AutoDock Vina with GPUs." Molecules 27, no. 9 (2022): 3041. http://dx.doi.org/10.3390/molecules27093041.

Full text
Abstract:
AutoDock Vina is one of the most popular molecular docking tools. In the latest benchmark CASF-2016 for comparative assessment of scoring functions, AutoDock Vina won the best docking power among all the docking tools. Modern drug discovery is facing a common scenario of large virtual screening of drug hits from huge compound databases. Due to the seriality characteristic of the AutoDock Vina algorithm, there is no successful report on its parallel acceleration with GPUs. Current acceleration of AutoDock Vina typically relies on the stack of computing power as well as the allocation of resourc
APA, Harvard, Vancouver, ISO, and other styles
39

Mamchych, Oleksandr, and Maksym Volk. "ESTIMATION OF POWER CONSUMPTION OF MOBILE DEVICES IN CLOUD COMPUTING." Innovative Technologies and Scientific Solutions for Industries, no. 1 (23) (April 20, 2023): 72–82. http://dx.doi.org/10.30837/itssi.2023.23.072.

Full text
Abstract:
Modern computing tasks require an increase in computing power. This necessitates the creation and production of new equipment for cloud computing. At the same time, the number of personal mobile devices is already measured in billions, and even their partial use could reduce production requirements. In addition, mobile hardware is more energy efficient, which contributes to significant energy savings. The article investigates the issue of qualitative and quantitative assessment of the efficiency of using mobile devices for computing compared to traditional stationary solutions. The purpose of
APA, Harvard, Vancouver, ISO, and other styles
40

Saupi Teri, S., I. A. Musliman, and A. Abdul Rahman. "GPU UTILIZATION IN GEOPROCESSING BIG GEODATA: A REVIEW." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-4/W3-2021 (January 11, 2022): 295–304. http://dx.doi.org/10.5194/isprs-archives-xlvi-4-w3-2021-295-2022.

Full text
Abstract:
The expansion of data collection from remote sensing and other geographic data sources, as well as from technologies such as cloud, sensors, mobile, and social media, has made mapping and analysis more complex. Some geospatial applications continue to rely on conventional geospatial processing, where limited computational capability often prevents meaningful data interpretation. In recent years, GPU processing has improved far more GIS applications than using the CPU alone. As a result, numerous researchers have begun utilising GPUs for scientific, geometric, and dat
APA, Harvard, Vancouver, ISO, and other styles
41

Um, Taegeon, Byungsoo Oh, Byeongchan Seo, Minhyeok Kweun, Goeun Kim, and Woo-Yeon Lee. "FastFlow: Accelerating Deep Learning Model Training with Smart Offloading of Input Data Pipeline." Proceedings of the VLDB Endowment 16, no. 5 (2023): 1086–99. http://dx.doi.org/10.14778/3579075.3579083.

Full text
Abstract:
When training a deep learning (DL) model, input data are pre-processed on CPUs and transformed into tensors, which are then fed into GPUs for gradient computations of model training. Expensive GPUs must be fully utilized during training to accelerate the training speed. However, intensive CPU operations for input data preprocessing (input pipeline) often lead to CPU bottlenecks; correspondingly, various DL training jobs suffer from GPU under-utilization. We propose FastFlow, a DL training system that automatically mitigates the CPU bottleneck by offloading (scaling out) input pipelines to remo
APA, Harvard, Vancouver, ISO, and other styles
42

R., Gnana Bharathy. "Framework of Object Detection and Classification High Performance Using Video Analytics in Cloud." International Journal of Trend in Scientific Research and Development 3, no. 1 (2018): 1–8. https://doi.org/10.31142/ijtsrd18906.

Full text
Abstract:
The detection performance of a video analytics framework is evaluated in the cloud. Object detection and classification are the basic tasks in video analytics and form the starting point for other, more complex applications. Traditional video analytics approaches are manual and time-consuming, limitations that stem largely from the involvement of the human factor. This paper presents a cloud-based video analytics framework for scalable and robust analysis of video streams. The framework assists an operative by automating the object detection and classification process from recorded video streams. An operative only s
APA, Harvard, Vancouver, ISO, and other styles
43

Binoy, Kurikaparambil Revi. "Cloud Computing: Kubernetes Application Performance Improvement Using In-Memory Database." INTERNATIONAL JOURNAL OF INNOVATIVE RESEARCH AND CREATIVE TECHNOLOGY 8, no. 3 (2022): 1–5. https://doi.org/10.5281/zenodo.14951871.

Full text
Abstract:
With remarkable advancements in computing power and the emergence of Kubernetes clusters, application deployment in the cloud has effectively addressed numerous challenges related to accessibility, availability, security, and scalability. This transformation is particularly evident for complex applications that demand substantial computing resources, which can now run efficiently on platforms like Azure Kubernetes Service. This service provides access to high-performance GPUs and CPUs, allowing applications to harness this computing power for their computational needs. As powerful computing resour
APA, Harvard, Vancouver, ISO, and other styles
44

Golubovic, Dejan, and Ricardo Rocha. "Training and Serving ML workloads with Kubeflow at CERN." EPJ Web of Conferences 251 (2021): 02067. http://dx.doi.org/10.1051/epjconf/202125102067.

Full text
Abstract:
Machine Learning (ML) has been growing in popularity in multiple areas and groups at CERN, covering fast simulation, tracking, anomaly detection, among many others. We describe a new service available at CERN, based on Kubeflow and managing the full ML lifecycle: data preparation and interactive analysis, large scale distributed model training and model serving. We cover specific features available for hyper-parameter tuning and model metadata management, as well as infrastructure details to integrate accelerators and external resources. We also present results and a cost evaluation from scali
APA, Harvard, Vancouver, ISO, and other styles
45

Sai Prasad Mukala. "Edge Computing Revolution: Architecting the Future of Distributed Infrastructure." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 11, no. 2 (2025): 1346–54. https://doi.org/10.32628/cseit25112490.

Full text
Abstract:
VMware's Edge Computing and Emerging Technologies are transforming modern IT infrastructure by enabling organizations to process data closer to the source, reducing latency and enhancing performance for distributed workloads. This comprehensive article encompasses the Edge Compute Stack (ECS), which provides a lightweight, scalable solution for deploying virtual machines at the edge across industries such as retail, manufacturing, and telecommunications. The integration of AI and machine learning capabilities, powered by GPUs, enables advanced analytics while maintaining data sovereignty in co
APA, Harvard, Vancouver, ISO, and other styles
46

Arifin, Oki, Fauzan Azim, Yuli Hartati, Dewi Kania Widyawati, and Ahmad Luqman Ahmad Kamal Ariffin. "Comparative Study on the Efficiency of Deep Learning Model Training in Cloud Environments: Google Colab vs AWS." Decode: Jurnal Pendidikan Teknologi Informasi 5, no. 2 (2025): 324–32. https://doi.org/10.51454/decode.v5i2.1197.

Full text
Abstract:
Deep learning has become a major foundation in the development of modern artificial intelligence technologies, especially in the applications of image recognition, natural language processing, and recommendation systems. However, the training process of deep learning models requires large and efficient computing resources. This study aims to evaluate the efficiency of training deep learning models on two popular cloud platforms, namely Google Colab and Amazon Web Services (AWS). The method used is a comparative experiment with a simple Convolutional Neural Network (CNN) model trained using the
APA, Harvard, Vancouver, ISO, and other styles
47

Espinosa-Aranda, Jose, Noelia Vallez, Jose Rico-Saavedra, et al. "Smart Doll: Emotion Recognition Using Embedded Deep Learning." Symmetry 10, no. 9 (2018): 387. http://dx.doi.org/10.3390/sym10090387.

Full text
Abstract:
Computer vision and deep learning are clearly demonstrating a capability to create engaging cognitive applications and services. However, these applications have been mostly confined to powerful Graphic Processing Units (GPUs) or the cloud due to their demanding computational requirements. Cloud processing has obvious bandwidth, energy consumption and privacy issues. The Eyes of Things (EoT) is a powerful and versatile embedded computer vision platform which allows the user to develop artificial vision and deep learning applications that analyse images locally. In this article, we use the deep
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Tong, Xiao Feng, Jing Jia, Jihao Li, and Songwei Xu. "Research on the Application of Resource Virtualization Management Soft for Spaceborne Edge Computing Node." Journal of Big Data and Computing 2, no. 2 (2024): 120–28. http://dx.doi.org/10.62517/jbdc.202401216.

Full text
Abstract:
Due to the characteristics of multiple nodes, large load differences, and unbalanced network computing resource distribution, this paper explores efficient cloud technology for heterogeneous node resources, such as CPUs and GPUs, in spaceborne distributed edge nodes. A virtual resource pool with low overhead and real-time reconstruction is constructed to enable the unified encapsulation and management of on-orbit edge computing resources. This approach provides fine-grained supply of computing, storage, network, and other multi-dimensional resources, as well as an efficient cloud-edge collabor
APA, Harvard, Vancouver, ISO, and other styles
49

Huntemann, Marcus, Georg Heygster, and Gang Hong. "Discrete dipole approximation simulations on GPUs using OpenCL—Application on cloud ice particles." Journal of Computational Science 2, no. 3 (2011): 262–71. http://dx.doi.org/10.1016/j.jocs.2011.05.011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Tanwar, Jaswinder, Tajinder Kumar, Ahmed A. Mohamed, et al. "Project Management for Cloud Compute and Storage Deployment: B2B Model." Processes 11, no. 1 (2022): 7. http://dx.doi.org/10.3390/pr11010007.

Full text
Abstract:
This paper explains the project’s objectives, identifies the key stakeholders, defines the project manager’s authority and provides a preliminary breakdown of roles and responsibilities. For the project’s future, it acts as a source of authority. This paper’s objective is to record the justifications for starting the project, its goals, limitations, solution instructions and the names of the principal stakeholders. This manuscript is meant to be used as a “Project Management Plan Light” for small and medium-sized projects when it would be uneconomical to prepare an entire collection of documen
APA, Harvard, Vancouver, ISO, and other styles