Academic literature on the topic 'Nvidia CUDA'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Nvidia CUDA.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Nvidia CUDA"

1

Nangla, Siddhante. "GPU Programming using NVIDIA CUDA." International Journal for Research in Applied Science and Engineering Technology 6, no. 6 (June 30, 2018): 79–84. http://dx.doi.org/10.22214/ijraset.2018.6016.

2

Pogorilyy, S. D., D. Yu Vitel, and O. A. Vereshchynsky. "Новітні архітектури відеоадаптерів. Технологія GPGPU. Частина 2." Реєстрація, зберігання і обробка даних 15, no. 1 (April 4, 2013): 71–81. http://dx.doi.org/10.35681/1560-9189.2013.15.1.103367.

Abstract:
The basic principles of working with shared and distributed memory in the NVidia CUDA technology are considered in detail. Thread interaction patterns and the problem of global synchronization are described. A comparative analysis is carried out of the main technologies used in the GPGPU approach: Nvidia CUDA, OpenCL, and DirectCompute.
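
The block-level pattern described in this abstract can be illustrated with a short sketch (illustrative kernel and variable names, not code from the article): threads within a block cooperate through shared memory and synchronize with __syncthreads(), while grid-wide synchronization still requires a separate kernel launch or cooperative groups.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each block reverses its own 256-element tile in shared memory.
// __syncthreads() is block-level only; synchronizing the whole grid
// requires a new kernel launch or cooperative groups.
__global__ void reverseTile(int *data, int n) {
    __shared__ int tile[256];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) tile[threadIdx.x] = data[i];
    __syncthreads();                          // wait until the tile is fully loaded
    int j = blockDim.x - 1 - threadIdx.x;     // mirrored index inside the block
    if (i < n) data[i] = tile[j];
}

int main() {
    const int n = 1024;
    int *data;
    cudaMallocManaged(&data, n * sizeof(int));
    for (int i = 0; i < n; ++i) data[i] = i;
    reverseTile<<<n / 256, 256>>>(data, n);
    cudaDeviceSynchronize();
    printf("%d %d\n", data[0], data[255]);    // first tile reversed: 255 0
    cudaFree(data);
    return 0;
}
```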
3

Hurman, Ivan, Kira Bobrovnikova, Leonid Bedratyuk, and Hanna Bedratyuk. "Approach for Code Analysis to Estimate Power Consumption of CUDA Core." Herald of Khmelnytskyi National University. Technical Sciences 217, no. 1 (February 23, 2023): 67–73. http://dx.doi.org/10.31891/2307-5732-2023-317-1-67-73.

Abstract:
The graphics processing unit is a popular computing device for achieving exascale performance in high-performance computing programs; it is used not only for graphics tasks but also for computational tasks such as machine learning, scientific computing, and cryptography. With a graphics processor, significant speed and performance gains can be achieved compared to the central processing unit. CUDA (Compute Unified Device Architecture), a software development platform for graphics processing units, allows developers to use the high-performance computing capabilities of graphics processing units to solve problems traditionally handled by central processing units. Although the graphics processing unit offers a relatively high performance-to-power ratio, it consumes a significant amount of power during computation. The paper proposes an approach for code analysis to estimate the power consumption of CUDA cores and thereby improve the power efficiency of applications focused on GPU computing. The proposed approach makes it possible to estimate the power consumption of such applications without the need to run them on physical devices. It is based on static analysis of the CUDA program and machine learning methods. To evaluate its effectiveness, three graphics processing unit architectures were used: NVIDIA Pascal, NVIDIA Turing, and NVIDIA Ampere. The experimental results showed that for the NVIDIA Ampere architecture, the proposed approach using decision trees achieves a coefficient of determination of 0.9173. The results obtained confirm the effectiveness of the proposed code analysis method for estimating the power consumption of CUDA cores. The method can be useful for CUDA developers who want to improve the efficiency and power efficiency of their programs.
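
As a complementary, measurement-based check on such estimates, board power can be sampled at runtime through NVML. The following is a minimal sketch, assuming NVML is installed and device index 0 is the GPU of interest; it is not part of the paper's static-analysis approach.

```cuda
#include <cstdio>
#include <nvml.h>

// Sample the current board power draw with NVML (link with -lnvidia-ml).
// Device index 0 is an assumption; adjust for multi-GPU systems.
int main() {
    if (nvmlInit() != NVML_SUCCESS) { printf("NVML init failed\n"); return 1; }
    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS) {
        unsigned int milliwatts = 0;
        if (nvmlDeviceGetPowerUsage(dev, &milliwatts) == NVML_SUCCESS)
            printf("current board power: %.1f W\n", milliwatts / 1000.0);
    }
    nvmlShutdown();
    return 0;
}
```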
4

Ahmed, Rafid, Md Sazzadul Islam, and Jia Uddin. "Optimizing Apple Lossless Audio Codec Algorithm using NVIDIA CUDA Architecture." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 1 (February 1, 2018): 70. http://dx.doi.org/10.11591/ijece.v8i1.pp70-75.

Abstract:
As the majority of compression algorithms are implemented for CPU architectures, the primary focus of our work was to exploit the opportunities for GPU parallelism in audio compression. This paper presents an implementation of the Apple Lossless Audio Codec (ALAC) algorithm using the NVIDIA Compute Unified Device Architecture (CUDA) framework. The core idea was to identify the areas where data parallelism could be applied and to use the CUDA parallel programming model to execute the identified parallel components on its Single Instruction Multiple Thread (SIMT) model. The dataset was retrieved from the European Broadcasting Union Sound Quality Assessment Material (SQAM). Faster execution of the algorithm reduced execution time when applied to the coding of large audio files. The paper also reports the reduction in power usage obtained by running the parallel components on the GPU. Experimental results reveal that we achieve about 80-90% speedup through CUDA on the identified components over the CPU implementation while saving CPU power consumption.
5

Kim, Youngtae, and Gyuhyeon Hwang. "Efficient Parallel CUDA Random Number Generator on NVIDIA GPUs." Journal of KIISE 42, no. 12 (December 15, 2015): 1467–73. http://dx.doi.org/10.5626/jok.2015.42.12.1467.

6

Semenenko, Julija, and Dmitrij Šešok. "Lygiagretūs skaičiavimai su CUDA." Jaunųjų mokslininkų darbai 47, no. 1 (July 3, 2017): 87–93. http://dx.doi.org/10.21277/jmd.v47i1.135.

Abstract:
The paper presents the operating principles of the NVIDIA CUDA computing technology and the particulars of working with CUDA. Two numerical experiments, array addition and matrix multiplication, were carried out on GeForce and Quadro graphics cards and on the CPU, together with matrix multiplication optimizations (shared memory, resolution of bank conflicts, instruction-level parallelism). Execution time results are analyzed for the int, float, and double data types and for different data sizes.
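
The matrix-multiplication optimization mentioned in this abstract (shared-memory tiling) is commonly written as below. This is a generic textbook-style sketch, not the authors' code; the tile size of 16 and the test values are arbitrary assumptions.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

#define TILE 16

// C = A * B for n x n matrices using shared-memory tiles to reduce global
// memory traffic. Padding the tiles (e.g. [TILE][TILE + 1]) is a common way
// to avoid the shared-memory bank conflicts the paper analyzes.
__global__ void matMulTiled(const float *A, const float *B, float *C, int n) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];
    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;
    for (int t = 0; t < n; t += TILE) {
        As[threadIdx.y][threadIdx.x] =
            (row < n && t + threadIdx.x < n) ? A[row * n + t + threadIdx.x] : 0.0f;
        Bs[threadIdx.y][threadIdx.x] =
            (col < n && t + threadIdx.y < n) ? B[(t + threadIdx.y) * n + col] : 0.0f;
        __syncthreads();
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    if (row < n && col < n) C[row * n + col] = acc;
}

int main() {
    const int n = 512;
    float *A, *B, *C;
    cudaMallocManaged(&A, n * n * sizeof(float));
    cudaMallocManaged(&B, n * n * sizeof(float));
    cudaMallocManaged(&C, n * n * sizeof(float));
    for (int i = 0; i < n * n; ++i) { A[i] = 1.0f; B[i] = 2.0f; }
    dim3 block(TILE, TILE), grid((n + TILE - 1) / TILE, (n + TILE - 1) / TILE);
    matMulTiled<<<grid, block>>>(A, B, C, n);
    cudaDeviceSynchronize();
    printf("C[0] = %.1f (expected %.1f)\n", C[0], 2.0f * n);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```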
7

Popov, S. E. "Improved phase unwrapping algorithm based on NVIDIA CUDA." Programming and Computer Software 43, no. 1 (January 2017): 24–36. http://dx.doi.org/10.1134/s0361768817010054.

8

Gonzalez Clua, Esteban Walter, and Marcelo Panaro Zamith. "Programming in CUDA for Kepler and Maxwell Architecture." Revista de Informática Teórica e Aplicada 22, no. 2 (November 21, 2015): 233. http://dx.doi.org/10.22456/2175-2745.56384.

Abstract:
Since the first version of CUDA was launched, many improvements have been made in GPU computing. Every new CUDA version has included important novel features, bringing this architecture closer and closer to a typical parallel high-performance language. This tutorial presents the GPU architecture and CUDA principles, and conceptualizes novel features introduced by NVIDIA, such as dynamic parallelism, unified memory, and concurrent kernels. The text also includes some optimization remarks for CUDA programs.
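
Two of the features the tutorial conceptualizes, unified memory and concurrent kernels, can be demonstrated in a few lines. The sketch below uses cudaMallocManaged and two CUDA streams; kernel names, sizes, and scaling factors are illustrative assumptions, not material from the tutorial.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *x, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMallocManaged(&a, n * sizeof(float));   // unified memory: visible to CPU and GPU
    cudaMallocManaged(&b, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);
    // Independent kernels launched into different streams may run concurrently.
    scale<<<(n + 255) / 256, 256, 0, s1>>>(a, 3.0f, n);
    scale<<<(n + 255) / 256, 256, 0, s2>>>(b, 4.0f, n);
    cudaDeviceSynchronize();

    printf("%f %f\n", a[0], b[0]);   // 3.0 8.0
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a); cudaFree(b);
    return 0;
}
```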
9

Маханьков, Алексей Владимирович, Максим Олегович Кузнецов, and Анатолий Дмитриевич Панферов. "Efficiency of using NVIDIA coprocessors in modeling the behavior of charge carriers in graphene." Program Systems: Theory and Applications 12, no. 1 (March 23, 2021): 115–28. http://dx.doi.org/10.25209/2079-3316-2021-12-1-115-128.

Abstract:
Specialized hardware solutions play an important role in the development of supercomputer technologies. At present, most top-performance computing systems use mathematical coprocessors of various types. For this reason, application software intended to realize the potential of modern computing platforms must make effective use of hardware accelerators. While developing a software system for modeling the behavior of charge carriers in graphene, it was necessary to support such accelerators and to study the efficiency of the resulting solution. Given the current situation and the outlook for the coming years, NVIDIA accelerators and the CUDA software technology were chosen. Because the hardware architecture of NVIDIA accelerators differs fundamentally from the CPU architecture, and the CUDA-adapted mathematical libraries do not support the full range of algorithms used in the original version of the program, new solutions had to be found and their efficiency evaluated. The paper presents the specifics of implementing CUDA support and the results of comparative testing of the resulting solution on a problem with realistic characteristics.
10

Liu, Zhi Yuan, and Xue Zhang Zhao. "Research and Implementation of Image Rotation Based on CUDA." Advanced Materials Research 216 (March 2011): 708–12. http://dx.doi.org/10.4028/www.scientific.net/amr.216.708.

Abstract:
GPU technology relieves the CPU of burdensome graphics computing tasks. NVIDIA, a leading GPU producer, has added CUDA technology to its newer GPU models, which greatly enhances GPU capability and offers considerable advantages for complex matrix computations. This paper introduces general image rotation algorithms and the structure of CUDA. An example of rotating an image with HALCON, based on CPU instruction extensions and on CUDA technology, demonstrates the advantage of CUDA by comparing the two results.
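
A generic CUDA image-rotation kernel of the kind discussed in this paper might look as follows: nearest-neighbour inverse mapping over a grayscale image. This is an illustrative sketch under those assumptions, not the HALCON-based implementation from the article.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Nearest-neighbour rotation about the image centre using inverse mapping:
// each output pixel fetches the source pixel it came from, so threads stay
// independent and the rotated image has no holes.
__global__ void rotateNN(const unsigned char *src, unsigned char *dst,
                         int w, int h, float angle) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    float cx = w * 0.5f, cy = h * 0.5f;
    float c = cosf(angle), s = sinf(angle);
    float sx =  c * (x - cx) + s * (y - cy) + cx;   // rotate back into the source
    float sy = -s * (x - cx) + c * (y - cy) + cy;
    int ix = __float2int_rn(sx), iy = __float2int_rn(sy);
    dst[y * w + x] = (ix >= 0 && ix < w && iy >= 0 && iy < h) ? src[iy * w + ix] : 0;
}

int main() {
    const int w = 256, h = 256;
    unsigned char *src, *dst;
    cudaMallocManaged(&src, w * h);
    cudaMallocManaged(&dst, w * h);
    for (int i = 0; i < w * h; ++i) src[i] = (unsigned char)(i % 255);
    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    rotateNN<<<grid, block>>>(src, dst, w, h, 0.5f);   // rotate by ~28.6 degrees
    cudaDeviceSynchronize();
    printf("dst[0] = %u\n", dst[0]);
    cudaFree(src); cudaFree(dst);
    return 0;
}
```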

Dissertations / Theses on the topic "Nvidia CUDA"

1

Zajíc, Jiří. "Překladač jazyka C# do jazyka Nvidia CUDA." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236439.

Abstract:
This master's thesis focuses on GPU-accelerated calculations on NVidia graphics cards. CUDA technology is used and brought to the .NET platform. The problem is solved as a compiler from the C# programming language to the NVidia CUDA language, using expression attributes of the C# language while preserving the same semantics of the operations. The application is implemented in C# and uses NRefactory, an open-source library.
2

Savioli, Nicolo'. "Parallelization of the algorithm WHAM with NVIDIA CUDA." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/6377/.

Abstract:
The aim of my thesis is to parallelize the Weighted Histogram Analysis Method (WHAM), a popular algorithm used to calculate the free energy of a molecular system in Molecular Dynamics simulations. WHAM works in post-processing in cooperation with another algorithm called Umbrella Sampling, whose purpose is to add a bias to the potential energy of the system in order to force it to sample a specific region of configurational space. N independent simulations are performed to sample the whole region of interest, and the WHAM algorithm is then used to estimate the original system energy from the N atomic trajectories. The parallelization of WHAM has been performed with CUDA, a platform for programming the inherently parallel GPUs of NVIDIA graphics cards. The parallel implementation can considerably speed up WHAM execution compared to previous serial CPU implementations; the WHAM CPU code, however, becomes time-critical for very high numbers of interactions. The algorithm was written in C++ and executed on UNIX systems equipped with NVIDIA graphics cards. The results were satisfactory, with a performance increase when the model was executed on graphics cards of higher compute capability. Nonetheless, the GPUs used to test the algorithm were quite old and not designed for scientific computing, and a further performance increase is likely if the algorithm is executed on GPU clusters with high computational efficiency. The thesis is organized as follows: I first describe the mathematical formulation of Umbrella Sampling and the WHAM algorithm together with their applications to the study of ionic channels and molecular docking (Chapter 1); I then present the CUDA architectures used to implement the model (Chapter 2); finally, the results obtained on model systems are presented (Chapter 3).
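
WHAM itself is an iterative reweighting scheme, but a GPU port of a histogram-based method rests on basic data-parallel building blocks such as binning trajectory samples. The following is a minimal, hypothetical sketch of that building block using atomicAdd; it is not the thesis code, and the sample data and bin count are invented for illustration.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Bin sampled values in parallel; each thread handles one sample and
// atomicAdd resolves collisions between threads hitting the same bin.
__global__ void histogram(const float *samples, int n,
                          unsigned int *bins, int nbins,
                          float lo, float hi) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int b = (int)((samples[i] - lo) / (hi - lo) * nbins);
    if (b >= 0 && b < nbins) atomicAdd(&bins[b], 1u);
}

int main() {
    const int n = 1 << 20, nbins = 64;
    float *x;
    unsigned int *bins;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&bins, nbins * sizeof(unsigned int));
    for (int i = 0; i < n; ++i) x[i] = (float)i / n;   // placeholder "trajectory" data
    cudaMemset(bins, 0, nbins * sizeof(unsigned int));
    histogram<<<(n + 255) / 256, 256>>>(x, n, bins, nbins, 0.0f, 1.0f);
    cudaDeviceSynchronize();
    printf("bin[0] = %u (roughly n / nbins)\n", bins[0]);
    cudaFree(x); cudaFree(bins);
    return 0;
}
```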
3

Ikeda, Patricia Akemi. "Um estudo do uso eficiente de programas em placas gráficas." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-25042012-212956/.

Abstract:
Initially designed for graphics processing, graphics cards (GPUs) have evolved into high-performance general-purpose parallel coprocessors. Due to the huge potential they offer to many research and commercial areas, NVIDIA was a pioneer in launching the CUDA architecture (compatible with many of its cards), an environment that takes advantage of this computational power while being easier to program. To make use of the full capacity of the GPU, some practices must be followed; one of them is to keep the hardware as busy as possible. This work proposes a practical and extensible tool that helps the programmer choose the best configuration to achieve this goal.
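
One way to keep the hardware as busy as possible is to query the CUDA occupancy API for a launch configuration, which is the kind of guidance such a tool can build on. A minimal sketch using the runtime occupancy calculator follows; the saxpy kernel is only a placeholder, not part of the thesis.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(float *y, const float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] += a * x[i];
}

int main() {
    int minGridSize = 0, blockSize = 0;
    // Ask the runtime for the block size that maximizes occupancy for this kernel.
    cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, saxpy, 0, 0);

    int numBlocksPerSm = 0;
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&numBlocksPerSm, saxpy, blockSize, 0);
    float occupancy = (numBlocksPerSm * blockSize) /
                      (float)prop.maxThreadsPerMultiProcessor;

    printf("suggested block size: %d, min grid size: %d, occupancy: %.2f\n",
           blockSize, minGridSize, occupancy);
    return 0;
}
```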
4

Rivera-Polanco, Diego Alejandro. "COLLECTIVE COMMUNICATION AND BARRIER SYNCHRONIZATION ON NVIDIA CUDA GPU." Lexington, Ky. : [University of Kentucky Libraries], 2009. http://hdl.handle.net/10225/1158.

Note:
Thesis (M.S.)--University of Kentucky, 2009.
Title from document title page (viewed on May 18, 2010). Document formatted into pages; contains: ix, 88 p. : ill. Includes abstract and vita. Includes bibliographical references (p. 86-87).
5

Harvey, Jesse Patrick. "GPU acceleration of object classification algorithms using NVIDIA CUDA /." Online version of thesis, 2009. http://hdl.handle.net/1850/10894.

6

Lerchundi, Osa Gorka. "Fast Implementation of Two Hash Algorithms on nVidia CUDA GPU." Thesis, Norwegian University of Science and Technology, Department of Telematics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9817.

Abstract:

User needs increase as time passes. We started with room-sized computers, where perforated cards played the role that machine code plays today, and we have now reached a point where the number of processors in our graphics device is not enough for our requirements. A change in the evolution of computing is looming: we are in a transition in which sequential computation is losing ground to distributed computation. This trend did not begin with the arrival of easily accessible GPUs; long before, it was used in projects such as SETI@Home, fightAIDS@Home, and ClimatePrediction, which heralded what was to come. Grid computing was its formal name. Until now it was linked only to systems distributed over the network, but as the technology evolves it will take on a different meaning. With CUDA, nVidia has been one of the first companies to make this kind of software package noteworthy: rather than a proof of concept, it is a real tool, and the true artist is the programmer who uses it and achieves performance increases. As with many innovations, a worldwide community has grown behind this software package, each member doing their bit; notably, soon after the release of CUDA many software developments appeared, such as the cracking of the hitherto insurmountable WPA. The same can be said of the Sony-Toshiba-IBM (STI) alliance: it has a great community and great software (IBM is the company in charge of maintenance). Unlike nVidia's platform it is not as accessible, but IBM is powerful enough to enter the home supercomputing market. In this case, after IBM released the PS3 SDK, a well-known application named Folding@Home was created using the benefits of parallel computing; its purpose is, among other things, to find a cure for cancer. In short, this is only the beginning, and this thesis sizes up the possibility of using this technology to accelerate cryptographic hash algorithms. BLUE MIDNIGHT WISH, the hash algorithm under study, undergoes an environment change and is adapted to parallel-capable code in order to obtain empirical measurements that can be compared with the current sequential implementations, answering questions that have not been answered until now. BLUE MIDNIGHT WISH is a candidate hash function for the next NIST standard SHA-3, designed by professor Danilo Gligoroski from NTNU and Vlastimil Klima, an independent cryptographer from the Czech Republic. So far, from the speed point of view, BLUE MIDNIGHT WISH is at the top of the charts (generally in second place, right behind EDON-R, another hash function from professor Danilo Gligoroski). One part of the work in this thesis was to investigate whether it is possible to achieve faster processing of Blue Midnight Wish when the computations are distributed among the cores of a CUDA device card. My numerous experiments give a clear answer: no. Although the answer is negative, it still has significant scientific value: the work supports the viewpoint of a part of the cryptographic community that doubts that cryptographic primitives will benefit from being executed in parallel on many cores of one processing unit. Indeed, my experiments show that the communication costs between cores in CUDA outweigh by a big margin the computational costs done inside one core.

7

Virk, Bikram. "Implementing method of moments on a GPGPU using Nvidia CUDA." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33980.

Abstract:
This thesis concentrates on the algorithmic aspects of the Method of Moments (MoM) and the Locally Corrected Nyström (LCN) numerical methods in electromagnetics. The data dependencies in each step of the algorithm are analyzed in order to implement a parallel version that can harness the processing power of a General Purpose Graphics Processing Unit (GPGPU). The GPGPU programming model provided by NVIDIA's Compute Unified Device Architecture (CUDA) is described to introduce the software tools that enable C code to be implemented on the GPGPU. Optimizations such as partial updates at every iteration, inter-block synchronization, and the use of shared memory yield an overall speedup of approximately 10. The study also brings out the strengths and weaknesses of implementing methods such as Crout's LU decomposition and triangular matrix inversion on a GPGPU architecture. The results suggest future directions of study for different algorithms and their effectiveness in a parallel processor environment, and the collected performance data show how different features of the GPGPU architecture can be exploited to yield higher speedups.
8

Sreenibha, Reddy Byreddy. "Performance Metrics Analysis of GamingAnywhere with GPU accelerated Nvidia CUDA." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16846.

Abstract:
The modern world has opened the gates to many advancements in cloud computing, particularly in the field of cloud gaming. The most recent development in this area is the open-source cloud gaming system called GamingAnywhere. The relationship between the CPU and the GPU is the main focus of this thesis. Graphics processing unit (GPU) performance plays a vital role in analyzing and enhancing the playing experience of GamingAnywhere. This thesis concentrates on GPU virtualization and suggests that accelerating this unit with NVIDIA CUDA is the key to better performance when using GamingAnywhere. After extensive research, gVirtuS was chosen as the technique for providing NVIDIA CUDA. An experimental study was conducted to evaluate the feasibility and performance of VMware GPU solutions in the cloud gaming scenarios provided by GamingAnywhere. Performance is measured in terms of bitrate, packet loss, jitter, and frame rate. Different game resolutions are considered in our empirical research, and our results show that frame rate and bitrate increase across resolutions when the GPU is accelerated with NVIDIA CUDA.
9

Bourque, Donald. "CUDA-Accelerated ORB-SLAM for UAVs." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/882.

Abstract:
"The use of cameras and computer vision algorithms to provide state estimation for robotic systems has become increasingly popular, particularly for small mobile robots and unmanned aerial vehicles (UAVs). These algorithms extract information from the camera images and perform simultaneous localization and mapping (SLAM) to provide state estimation for path planning, obstacle avoidance, or 3D reconstruction of the environment. High resolution cameras have become inexpensive and are a lightweight and smaller alternative to laser scanners. UAVs often have monocular camera or stereo camera setups since payload and size impose the greatest restrictions on their flight time and maneuverability. This thesis explores ORB-SLAM, a popular Visual SLAM method that is appropriate for UAVs. Visual SLAM is computationally expensive and normally offloaded to computers in research environments. However, large UAVs with greater payload capacity may carry the necessary hardware for performing the algorithms. The inclusion of general-purpose GPUs on many of the newer single board computers allows for the potential of GPU-accelerated computation within a small board profile. For this reason, an NVidia Jetson board containing an NVidia Pascal GPU was used. CUDA, NVidia’s parallel computing platform, was used to accelerate monocular ORB-SLAM, achieving onboard Visual SLAM on a small UAV. Committee members:"
10

Subramoniapillai, Ajeetha Saktheesh. "Architectural Analysis and Performance Characterization of NVIDIA GPUs using Microbenchmarking." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1344623484.


Books on the topic "Nvidia CUDA"

1

Dagg, Michael. NVIDIA GPU Programming: Massively Parallel Programming with CUDA. Wiley & Sons, Incorporated, John, 2013.

2

Dagg, Michael. NVIDIA GPU Programming: Massively Parallel Programming with CUDA. Wiley & Sons, Incorporated, John, 2012.


Book chapters on the topic "Nvidia CUDA"

1

Klapka, Ondrej, and Antonin Slaby. "nVidia CUDA Platform in Graph Visualization." In Advances in Intelligent Systems and Computing, 511–20. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-27478-2_38.

2

Palomar, Rafael, José M. Palomares, José M. Castillo, Joaquín Olivares, and Juan Gómez-Luna. "Parallelizing and Optimizing LIP-Canny Using NVIDIA CUDA." In Trends in Applied Intelligent Systems, 389–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13033-5_40.

3

Miletić, Vedran, Martina Holenko Dlab, and Nataša Hoić-Božić. "Optimizing ELARS Algorithms Using NVIDIA CUDA Heterogeneous Parallel Programming Platform." In ICT Innovations 2014, 135–44. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-09879-1_14.

4

Xu, Yanyan, Hui Chen, Reinhard Klette, Jiaju Liu, and Tobi Vaudrey. "Belief Propagation Implementation Using CUDA on an NVIDIA GTX 280." In AI 2009: Advances in Artificial Intelligence, 180–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10439-8_19.

5

Dyakonova, Tatyana, Alexander Khoperskov, and Sergey Khrapov. "Numerical Model of Shallow Water: The Use of NVIDIA CUDA Graphics Processors." In Communications in Computer and Information Science, 132–45. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-55669-7_11.

6

Pala, Artur, and Jan Sadecki. "Application of the Nvidia CUDA Technology to Solve the System of Ordinary Differential Equations." In Biomedical Engineering and Neuroscience, 207–17. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75025-5_19.

7

Masada, Tomonari, Tsuyoshi Hamada, Yuichiro Shibata, and Kiyoshi Oguri. "Accelerating Collapsed Variational Bayesian Inference for Latent Dirichlet Allocation with Nvidia CUDA Compatible Devices." In Next-Generation Applied Intelligence, 491–500. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02568-6_50.

8

Luo, Ruiyi, and Qian Yin. "A Novel Parallel Clustering Algorithm Based on Artificial Immune Network Using nVidia CUDA Framework." In Human-Computer Interaction. Design and Development Approaches, 598–607. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21602-2_65.

9

Vingelmann, Péter, and Frank H. P. Fitzek. "Implementation of Random Linear Network Coding Using NVIDIA’s CUDA Toolkit." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 131–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-11733-6_14.

10

Rokos, Georgios, Gerard Gorman, and Paul H. J. Kelly. "Accelerating Anisotropic Mesh Adaptivity on nVIDIA’s CUDA Using Texture Interpolation." In Euro-Par 2011 Parallel Processing, 387–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23397-5_38.


Conference papers on the topic "Nvidia CUDA"

1

Buck, Ian. "GPU computing with NVIDIA CUDA." In ACM SIGGRAPH 2007 courses. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1281500.1281647.

2

Luo, Yuancheng, and Ramani Duraiswami. "Canny edge detection on NVIDIA CUDA." In 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops). IEEE, 2008. http://dx.doi.org/10.1109/cvprw.2008.4563088.

3

Colic, Aleksandar, Hari Kalva, and Borko Furht. "Exploring NVIDIA-CUDA for video coding." In the first annual ACM SIGMM conference. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1730836.1730839.

4

González, David, Christian Sánchez, Ricardo Veguilla, Nayda G. Santiago, Samuel Rosario-Torres, and Miguel Vélez-Reyes. "Abundance estimation algorithms using NVIDIA CUDA technology." In SPIE Defense and Security Symposium, edited by Sylvia S. Shen and Paul E. Lewis. SPIE, 2008. http://dx.doi.org/10.1117/12.777890.

5

Harris, Mark. "Many-core GPU computing with NVIDIA CUDA." In the 22nd annual international conference. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1375527.1375528.

6

Mazanec, Tomas, Antonin Hermanek, and Jan Kamenicky. "Blind image deconvolution algorithm on NVIDIA CUDA platform." In 2010 IEEE 13th International Symposium on Design and Diagnostics of Electronic Circuits & Systems (DDECS). IEEE, 2010. http://dx.doi.org/10.1109/ddecs.2010.5491803.

7

Langdon, W. B., and M. Harman. "Evolving a CUDA kernel from an nVidia template." In 2010 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2010. http://dx.doi.org/10.1109/cec.2010.5585922.

8

Kirk, David. "NVIDIA CUDA software and GPU parallel computing architecture." In the 6th international symposium. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1296907.1296909.

9

Fredj, Amira Hadj, and Jihene Malek. "Real time ultrasound image denoising using NVIDIA CUDA." In 2016 2nd International Conference on Advanced Technologies for Signal and Image Processing (ATSIP). IEEE, 2016. http://dx.doi.org/10.1109/atsip.2016.7523083.

10

Shams, Ramtin, and Nick Barnes. "Speeding up Mutual Information Computation Using NVIDIA CUDA Hardware." In 9th Biennial Conference of the Australian Pattern Recognition Society on Digital Image Computing Techniques and Applications (DICTA 2007). IEEE, 2007. http://dx.doi.org/10.1109/dicta.2007.4426846.


Reports on the topic "Nvidia CUDA"

1

Lippuner, Jonas. NVIDIA CUDA. Office of Scientific and Technical Information (OSTI), July 2019. http://dx.doi.org/10.2172/1532687.

