Academic literature on the topic 'Nvidia CUDA'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Nvidia CUDA.'
Journal articles on the topic "Nvidia CUDA"
Nangla, Siddhante. "GPU Programming using NVIDIA CUDA." International Journal for Research in Applied Science and Engineering Technology 6, no. 6 (June 30, 2018): 79–84. http://dx.doi.org/10.22214/ijraset.2018.6016.
Pogorilyy, S. D., D. Yu Vitel, and O. A. Vereshchynsky. "Новітні архітектури відеоадаптерів. Технологія GPGPU. Частина 2" [Novel Video Adapter Architectures. GPGPU Technology. Part 2]. Реєстрація, зберігання і обробка даних [Data Recording, Storage and Processing] 15, no. 1 (April 4, 2013): 71–81. http://dx.doi.org/10.35681/1560-9189.2013.15.1.103367.
Hurman, Ivan, Kira Bobrovnikova, Leonid Bedratyuk, and Hanna Bedratyuk. "Approach for Code Analysis to Estimate Power Consumption of CUDA Core." Herald of Khmelnytskyi National University. Technical sciences 217, no. 1 (February 23, 2023): 67–73. http://dx.doi.org/10.31891/2307-5732-2023-317-1-67-73.
Ahmed, Rafid, Md Sazzadul Islam, and Jia Uddin. "Optimizing Apple Lossless Audio Codec Algorithm using NVIDIA CUDA Architecture." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 1 (February 1, 2018): 70. http://dx.doi.org/10.11591/ijece.v8i1.pp70-75.
Kim, Youngtae, and Gyuhyeon Hwang. "Efficient Parallel CUDA Random Number Generator on NVIDIA GPUs." Journal of KIISE 42, no. 12 (December 15, 2015): 1467–73. http://dx.doi.org/10.5626/jok.2015.42.12.1467.
Semenenko, Julija, and Dmitrij Šešok. "Lygiagretūs skaičiavimai su CUDA" [Parallel Computing with CUDA]. Jaunųjų mokslininkų darbai [Journal of Young Scientists] 47, no. 1 (July 3, 2017): 87–93. http://dx.doi.org/10.21277/jmd.v47i1.135.
Popov, S. E. "Improved phase unwrapping algorithm based on NVIDIA CUDA." Programming and Computer Software 43, no. 1 (January 2017): 24–36. http://dx.doi.org/10.1134/s0361768817010054.
Gonzalez Clua, Esteban Walter, and Marcelo Panaro Zamith. "Programming in CUDA for Kepler and Maxwell Architecture." Revista de Informática Teórica e Aplicada 22, no. 2 (November 21, 2015): 233. http://dx.doi.org/10.22456/2175-2745.56384.
Маханьков, Алексей Владимирович, Максим Олегович Кузнецов, and Анатолий Дмитриевич Панферов. "Efficiency of using NVIDIA coprocessors in modeling the behavior of charge carriers in graphene." Program Systems: Theory and Applications 12, no. 1 (March 23, 2021): 115–28. http://dx.doi.org/10.25209/2079-3316-2021-12-1-115-128.
Liu, Zhi Yuan, and Xue Zhang Zhao. "Research and Implementation of Image Rotation Based on CUDA." Advanced Materials Research 216 (March 2011): 708–12. http://dx.doi.org/10.4028/www.scientific.net/amr.216.708.
Full textDissertations / Theses on the topic "Nvidia CUDA"
Zajíc, Jiří. "Překladač jazyka C# do jazyka Nvidia CUDA" [A C# to Nvidia CUDA Compiler]. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236439.
Savioli, Nicolo'. "Parallelization of the algorithm WHAM with NVIDIA CUDA." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/6377/.
Ikeda, Patricia Akemi. "Um estudo do uso eficiente de programas em placas gráficas" [A Study of the Efficient Use of Programs on Graphics Cards]. Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-25042012-212956/.
Full textInitially designed for graphical processing, the graphic cards (GPUs) evolved to a high performance general purpose parallel coprocessor. Due to huge potencial that graphic cards offer to several research and commercial areas, NVIDIA was the pioneer lauching of CUDA architecture (compatible with their several cards), an environment that take advantage of computacional power combined with an easier programming. In an attempt to make use of all capacity of GPU, some practices must be followed. One of them is to maximizes hardware utilization. This work proposes a practical and extensible tool that helps the programmer to choose the best configuration and achieve this goal.
Rivera-Polanco, Diego Alejandro. "Collective Communication and Barrier Synchronization on NVIDIA CUDA GPU." Lexington, Ky. : [University of Kentucky Libraries], 2009. http://hdl.handle.net/10225/1158.
Title from document title page (viewed on May 18, 2010). Document formatted into pages; contains: ix, 88 p. : ill. Includes abstract and vita. Includes bibliographical references (p. 86–87).
Harvey, Jesse Patrick. "GPU acceleration of object classification algorithms using NVIDIA CUDA." Online version of thesis, 2009. http://hdl.handle.net/1850/10894.
Full textLerchundi, Osa Gorka. "Fast Implementation of Two Hash Algorithms on nVidia CUDA GPU." Thesis, Norwegian University of Science and Technology, Department of Telematics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9817.
User needs increase as time passes. We started with computers the size of a room, where perforated cards served the function that machine code objects do today, and at present we are at a point where the number of processors within our graphics device is not enough for our requirements. A change in the evolution of computing is looming: we are in a transition where sequential computation is losing ground to distributed computation. This trend is not new with the arrival of easily accessible GPUs; long before, it was used in projects such as SETI@Home, fightAIDS@Home, and ClimatePrediction, which heralded what was to come. Grid computing was its formal name. Until now it was linked only to distributed systems over the network, but as this technology evolves it will take on a different meaning. With CUDA, NVIDIA has been one of the first companies to make this kind of software package noteworthy: instead of a proof of concept, it is a real tool, one in which the true artist is the programmer who uses it and achieves performance increases. As with many innovations, a community distributed worldwide has grown behind this software package, each member doing their bit. Notably, after the release of CUDA, many software developments appeared, such as the cracking of the hitherto insurmountable WPA. The same could be said of the Sony-Toshiba-IBM (STI) alliance: it has a great community and great software (IBM is the company in charge of maintenance). Unlike CUDA it is not as accessible, but IBM is powerful enough to enter the home supercomputing market. In this case, after IBM released the PS3 SDK, a well-known application named Folding@Home was created using the benefits of parallel computing; its purpose is, among other things, to find a cure for cancer.
To sum up, this is only the beginning, and this thesis sizes up the possibility of using this technology to accelerate cryptographic hash algorithms. BLUE MIDNIGHT WISH (the hash algorithm under examination) undergoes an environment change, being adapted to parallel-capable code in order to produce empirical measurements that can be compared against current sequential implementations. It answers questions that until now had not been answered. BLUE MIDNIGHT WISH is a candidate hash function for the next NIST standard, SHA-3, designed by professor Danilo Gligoroski from NTNU and Vlastimil Klima, an independent cryptographer from the Czech Republic. So far, from the speed point of view, BLUE MIDNIGHT WISH is at the top of the charts (generally in second place, right behind EDON-R, another hash function from professor Danilo Gligoroski). One part of the work in this thesis was to investigate whether it is possible to achieve faster processing of Blue Midnight Wish when the computations are distributed among the cores of a CUDA device card. My numerous experiments give a clear answer: no. Although the answer is negative, it still has significant scientific value: my work confirms the viewpoints and standings of a part of the cryptographic community that doubts cryptographic primitives will benefit from parallel execution across many cores. Indeed, my experiments show that the communication costs between cores in CUDA outweigh by a big margin the computational costs done inside one core (processor) unit.
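The conclusion quoted above, that inter-core communication costs outweigh per-core computation, can be illustrated with a toy cost model (my own sketch; all constants are assumed for illustration, not measurements from the thesis):

```python
# Toy cost model of intra-hash parallelism (illustrative, not from the thesis).
# Splitting ONE hash computation across n cores only pays off while the
# per-synchronization communication cost stays far below the compute saved.

def parallel_time(t_compute, t_comm_per_sync, n_cores, n_syncs):
    """Idealized runtime: compute divided across cores, plus sync overhead."""
    return t_compute / n_cores + t_comm_per_sync * n_syncs

# Sequential baseline: 100 units of compute, no synchronization needed.
sequential = parallel_time(100.0, 0.0, 1, 0)

# Fine-grained intra-hash parallelism: assume each of 64 compression rounds
# forces a synchronization costing 5 units (both numbers are assumptions).
parallel = parallel_time(100.0, 5.0, 32, 64)

print(sequential)  # -> 100.0
print(parallel)    # -> 323.125, slower despite 32 cores
```

Under these assumed constants the sync term dominates, which mirrors the thesis's negative result; batch parallelism (hashing many independent messages, one per thread) avoids the sync term entirely.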
Virk, Bikram. "Implementing method of moments on a GPGPU using Nvidia CUDA." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33980.
Sreenibha, Reddy Byreddy. "Performance Metrics Analysis of GamingAnywhere with GPU accelerated Nvidia CUDA." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16846.
Bourque, Donald. "CUDA-Accelerated ORB-SLAM for UAVs." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/882.
Subramoniapillai, Ajeetha Saktheesh. "Architectural Analysis and Performance Characterization of NVIDIA GPUs using Microbenchmarking." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1344623484.
Full textBooks on the topic "Nvidia CUDA"
Dagg, Michael. NVIDIA GPU Programming: Massively Parallel Programming with CUDA. Wiley & Sons, Incorporated, John, 2013.
Dagg, Michael. NVIDIA GPU Programming: Massively Parallel Programming with CUDA. Wiley & Sons, Incorporated, John, 2012.
Find full textBook chapters on the topic "Nvidia CUDA"
Klapka, Ondrej, and Antonin Slaby. "nVidia CUDA Platform in Graph Visualization." In Advances in Intelligent Systems and Computing, 511–20. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-27478-2_38.
Palomar, Rafael, José M. Palomares, José M. Castillo, Joaquín Olivares, and Juan Gómez-Luna. "Parallelizing and Optimizing LIP-Canny Using NVIDIA CUDA." In Trends in Applied Intelligent Systems, 389–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13033-5_40.
Miletić, Vedran, Martina Holenko Dlab, and Nataša Hoić-Božić. "Optimizing ELARS Algorithms Using NVIDIA CUDA Heterogeneous Parallel Programming Platform." In ICT Innovations 2014, 135–44. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-09879-1_14.
Xu, Yanyan, Hui Chen, Reinhard Klette, Jiaju Liu, and Tobi Vaudrey. "Belief Propagation Implementation Using CUDA on an NVIDIA GTX 280." In AI 2009: Advances in Artificial Intelligence, 180–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10439-8_19.
Dyakonova, Tatyana, Alexander Khoperskov, and Sergey Khrapov. "Numerical Model of Shallow Water: The Use of NVIDIA CUDA Graphics Processors." In Communications in Computer and Information Science, 132–45. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-55669-7_11.
Pala, Artur, and Jan Sadecki. "Application of the Nvidia CUDA Technology to Solve the System of Ordinary Differential Equations." In Biomedical Engineering and Neuroscience, 207–17. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75025-5_19.
Masada, Tomonari, Tsuyoshi Hamada, Yuichiro Shibata, and Kiyoshi Oguri. "Accelerating Collapsed Variational Bayesian Inference for Latent Dirichlet Allocation with Nvidia CUDA Compatible Devices." In Next-Generation Applied Intelligence, 491–500. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02568-6_50.
Luo, Ruiyi, and Qian Yin. "A Novel Parallel Clustering Algorithm Based on Artificial Immune Network Using nVidia CUDA Framework." In Human-Computer Interaction. Design and Development Approaches, 598–607. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21602-2_65.
Vingelmann, Péter, and Frank H. P. Fitzek. "Implementation of Random Linear Network Coding Using NVIDIA’s CUDA Toolkit." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 131–38. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-11733-6_14.
Rokos, Georgios, Gerard Gorman, and Paul H. J. Kelly. "Accelerating Anisotropic Mesh Adaptivity on nVIDIA’s CUDA Using Texture Interpolation." In Euro-Par 2011 Parallel Processing, 387–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23397-5_38.
Full textConference papers on the topic "Nvidia CUDA"
Buck, Ian. "GPU computing with NVIDIA CUDA." In ACM SIGGRAPH 2007 courses. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1281500.1281647.
Luo, Yuancheng, and Ramani Duraiswami. "Canny edge detection on NVIDIA CUDA." In 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops). IEEE, 2008. http://dx.doi.org/10.1109/cvprw.2008.4563088.
Colic, Aleksandar, Hari Kalva, and Borko Furht. "Exploring NVIDIA-CUDA for video coding." In the first annual ACM SIGMM conference. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1730836.1730839.
González, David, Christian Sánchez, Ricardo Veguilla, Nayda G. Santiago, Samuel Rosario-Torres, and Miguel Vélez-Reyes. "Abundance estimation algorithms using NVIDIA CUDA technology." In SPIE Defense and Security Symposium, edited by Sylvia S. Shen and Paul E. Lewis. SPIE, 2008. http://dx.doi.org/10.1117/12.777890.
Harris, Mark. "Many-core GPU computing with NVIDIA CUDA." In the 22nd annual international conference. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1375527.1375528.
Mazanec, Tomas, Antonin Hermanek, and Jan Kamenicky. "Blind image deconvolution algorithm on NVIDIA CUDA platform." In 2010 IEEE 13th International Symposium on Design and Diagnostics of Electronic Circuits & Systems (DDECS). IEEE, 2010. http://dx.doi.org/10.1109/ddecs.2010.5491803.
Langdon, W. B., and M. Harman. "Evolving a CUDA kernel from an nVidia template." In 2010 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2010. http://dx.doi.org/10.1109/cec.2010.5585922.
Kirk, David. "NVIDIA CUDA software and GPU parallel computing architecture." In the 6th international symposium. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1296907.1296909.
Fredj, Amira Hadj, and Jihene Malek. "Real time ultrasound image denoising using NVIDIA CUDA." In 2016 2nd International Conference on Advanced Technologies for Signal and Image Processing (ATSIP). IEEE, 2016. http://dx.doi.org/10.1109/atsip.2016.7523083.
Shams, Ramtin, and Nick Barnes. "Speeding up Mutual Information Computation Using NVIDIA CUDA Hardware." In 9th Biennial Conference of the Australian Pattern Recognition Society on Digital Image Computing Techniques and Applications (DICTA 2007). IEEE, 2007. http://dx.doi.org/10.1109/dicta.2007.4426846.
Full textReports on the topic "Nvidia CUDA"
Lippuner, Jonas. NVIDIA CUDA. Office of Scientific and Technical Information (OSTI), July 2019. http://dx.doi.org/10.2172/1532687.