Academic literature on the topic 'Sparse Vector Vector Multiplication'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sparse Vector Vector Multiplication.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of an academic publication as a PDF and read its abstract online whenever these are available in the metadata.
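
As background for the listings that follow: the core kernel of the topic, the product of two sparse vectors, is usually computed by merging their sorted (index, value) representations so that only indices present in both vectors are multiplied. A minimal Python sketch (the function name and storage format are illustrative assumptions, not taken from any listed work):

```python
def sparse_dot(a, b):
    """Dot product of two sparse vectors given as lists of (index, value)
    pairs sorted by index. Only indices present in both vectors contribute."""
    i, j = 0, 0
    total = 0.0
    while i < len(a) and j < len(b):
        ia, va = a[i]
        ib, vb = b[j]
        if ia == ib:          # index present in both vectors: multiply
            total += va * vb
            i += 1
            j += 1
        elif ia < ib:         # advance whichever pointer lags behind
            i += 1
        else:
            j += 1
    return total

# x = (0, 0, 3, 0, 2) and y = (1, 0, 4, 0, 0) overlap only at index 2
print(sparse_dot([(2, 3.0), (4, 2.0)], [(0, 1.0), (2, 4.0)]))  # 12.0
```

This two-pointer merge is the sequential baseline that several of the works below (for example, the merge-based SpMSpV papers) parallelize and tune for specific hardware.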

Journal articles on the topic "Sparse Vector Vector Multiplication"

1. Tao, Yuan, Yangdong Deng, Shuai Mu, et al. "GPU accelerated sparse matrix-vector multiplication and sparse matrix-transpose vector multiplication." Concurrency and Computation: Practice and Experience 27, no. 14 (2014): 3771–89. http://dx.doi.org/10.1002/cpe.3415.

2. Filippone, Salvatore, Valeria Cardellini, Davide Barbieri, and Alessandro Fanfarillo. "Sparse Matrix-Vector Multiplication on GPGPUs." ACM Transactions on Mathematical Software 43, no. 4 (2017): 1–49. http://dx.doi.org/10.1145/3017994.

3. Erhel, Jocelyne. "Sparse Matrix Multiplication on Vector Computers." International Journal of High Speed Computing 2, no. 2 (1990): 101–16. http://dx.doi.org/10.1142/s012905339000008x.

4. Haque, Sardar Anisul, Shahadat Hossain, and M. Moreno Maza. "Cache friendly sparse matrix-vector multiplication." ACM Communications in Computer Algebra 44, no. 3/4 (2011): 111–12. http://dx.doi.org/10.1145/1940475.1940490.

5. Bienz, Amanda, William D. Gropp, and Luke N. Olson. "Node aware sparse matrix–vector multiplication." Journal of Parallel and Distributed Computing 130 (August 2019): 166–78. http://dx.doi.org/10.1016/j.jpdc.2019.03.016.

6. Heath, L. S., C. J. Ribbens, and S. V. Pemmaraju. "Processor-efficient sparse matrix-vector multiplication." Computers & Mathematics with Applications 48, no. 3-4 (2004): 589–608. http://dx.doi.org/10.1016/j.camwa.2003.06.009.

7. Yang, Xintian, Srinivasan Parthasarathy, and P. Sadayappan. "Fast sparse matrix-vector multiplication on GPUs." Proceedings of the VLDB Endowment 4, no. 4 (2011): 231–42. http://dx.doi.org/10.14778/1938545.1938548.

8. Romero, L. F., and E. L. Zapata. "Data distributions for sparse matrix vector multiplication." Parallel Computing 21, no. 4 (1995): 583–605. http://dx.doi.org/10.1016/0167-8191(94)00087-q.

9. Thomas, Rajesh, Victor DeBrunner, and Linda S. DeBrunner. "A Sparse Algorithm for Computing the DFT Using Its Real Eigenvectors." Signals 2, no. 4 (2021): 688–705. http://dx.doi.org/10.3390/signals2040041.

Abstract: Direct computation of the discrete Fourier transform (DFT) and its FFT computational algorithms requires multiplication (and addition) of complex numbers. Complex number multiplication requires four real-valued multiplications and two real-valued additions, or three real-valued multiplications and five real-valued additions, as well as the requisite added memory for temporary storage. In this paper, we present a method for computing a DFT via a natively real-valued algorithm that is computationally equivalent to an N = 2^k-length DFT (where k is a positive integer), and is substantially more effic…
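
The operation counts cited in this abstract can be checked directly: the textbook complex product uses four real multiplications and two additions, while the classic three-multiplication rearrangement trades one multiplication for three extra additions. A small Python sketch (illustrative only; this is not the paper's algorithm):

```python
def cmul3(a, b, c, d):
    """Compute (a + bi) * (c + di) with 3 real multiplications and
    5 real additions, instead of the standard 4 multiplications and
    2 additions for (ac - bd) + (ad + bc)i."""
    k1 = c * (a + b)   # multiplication 1, addition 1
    k2 = a * (d - c)   # multiplication 2, addition 2
    k3 = b * (c + d)   # multiplication 3, addition 3
    return k1 - k3, k1 + k2   # additions 4 and 5: (real, imaginary)

# (1 + 2i)(3 + 4i) = -5 + 10i
print(cmul3(1.0, 2.0, 3.0, 4.0))  # (-5.0, 10.0)
```

Whether the 3-multiply form wins depends on the hardware's relative cost of multiplies versus adds, which is part of the motivation for the real-valued formulation the paper proposes.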

10. Liu, Sheng, Yasong Cao, and Shuwei Sun. "Mapping and Optimization Method of SpMV on Multi-DSP Accelerator." Electronics 11, no. 22 (2022): 3699. http://dx.doi.org/10.3390/electronics11223699.

Abstract: Sparse matrix-vector multiplication (SpMV) computes the product of a sparse matrix and a dense vector, and the sparseness of a sparse matrix is often more than 90%. Usually, the sparse matrix is compressed to save storage resources, but this causes irregular access to the dense vector in the algorithm, which takes a lot of time and degrades the SpMV performance of the system. In this study, we design a dedicated channel in the DMA to implement an indirect memory access process to speed up the SpMV operation. On this basis, we propose six SpMV algorithm schemes and map them to optimize the performance…
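
The compression versus irregular-access trade-off described in this abstract can be illustrated with the common Compressed Sparse Row (CSR) layout, where every nonzero forces an indirect load from the dense vector. A minimal Python sketch (the CSR format choice and function name are assumptions for illustration; the paper itself targets a multi-DSP accelerator):

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x with A stored in Compressed Sparse Row (CSR) form.

    CSR keeps only the nonzeros (values), their column numbers (col_idx),
    and per-row offsets into those arrays (row_ptr). The gather
    x[col_idx[k]] is the irregular, data-dependent access the abstract
    refers to."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]  # indirect load from dense x
    return y

# A = [[5, 0, 0],
#      [0, 2, 3],
#      [0, 4, 0]]
print(spmv_csr([5.0, 2.0, 3.0, 4.0], [0, 1, 2, 1], [0, 1, 3, 4],
               [1.0, 2.0, 3.0]))  # [5.0, 13.0, 8.0]
```

Hardware approaches like the paper's dedicated DMA channel aim to hide exactly the latency of that `x[col_idx[k]]` gather.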

Dissertations / Theses on the topic "Sparse Vector Vector Multiplication"

1. Ashari, Arash. "Sparse Matrix-Vector Multiplication on GPU." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1417770100.

2. Ramachandran, Shridhar. "Incremental PageRank acceleration using Sparse Matrix-Sparse Vector Multiplication." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1462894358.

3. Balasubramanian, Deepan Karthik. "Efficient Sparse Matrix Vector Multiplication for Structured Grid Representation." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339730490.

4. Mansour, Ahmad. "Sparse Matrix-Vector Multiplication Based on Network-on-Chip." München: Verlag Dr. Hut, 2015. http://d-nb.info/1075409470/34.

5. Singh, Kunal. "High-Performance Sparse Matrix-Multi Vector Multiplication on Multi-Core Architecture." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524089757826551.

6. El-Kurdi, Yousef M. "Sparse Matrix-Vector floating-point multiplication with FPGAs for finite element electromagnetics." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98958.

Abstract: The Finite Element Method (FEM) is a computationally intensive scientific and engineering analysis tool that has diverse applications ranging from structural engineering to electromagnetic simulation. Field Programmable Gate Arrays (FPGAs) have been shown to have higher peak floating-point performance than general purpose CPUs, and the trends are moving in favor of FPGAs. We present an architecture and implementation of an FPGA-based Sparse Matrix-Vector Multiplier (SMVM) for use in the iterative solution of large, sparse systems of equations arising from FEM applications. Our architecture exp…

7. Godwin, Jeswin Samuel. "High-Performance Sparse Matrix-Vector Multiplication on GPUs for Structured Grid Computations." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1357280824.

8. Pantawongdecha, Payut. "Autotuning divide-and-conquer matrix-vector multiplication." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105968.

Abstract: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 73-75). Divide and conquer is an important concept in computer science. It is used ubiquitously to simplify and speed up programs. However, it needs to be optimized, with respect to parameter settings for example, in orde…

9. Hopkins, T. M. "The design of a sparse vector processor." Thesis, University of Edinburgh, 1993. http://hdl.handle.net/1842/14094.

Abstract: This thesis describes the development of a new vector processor architecture capable of high efficiency when computing with very sparse vector and matrix data, of irregular structure. Two applications are identified as of particular importance: sparse Gaussian elimination, and Linear Programming, and the algorithmic steps involved in the solution of these problems are analysed. Existing techniques for sparse vector computation, which are only able to achieve a small fraction of the arithmetic performance commonly expected on dense matrix problems, are critically examined. A variety of new tech…

10. Belgin, Mehmet. "Structure-based Optimizations for Sparse Matrix-Vector Multiply." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/30260.

Abstract: This dissertation introduces two novel techniques, OSF and PBR, to improve the performance of Sparse Matrix-vector Multiply (SMVM) kernels, which dominate the runtime of iterative solvers for systems of linear equations. SMVM computations that use sparse formats typically achieve only a small fraction of peak CPU speeds because they are memory bound due to their low flops:byte ratio, they access memory irregularly, and exhibit poor ILP due to inefficient pipelining. We particularly focus on improving the flops:byte ratio, which is the main limiter on performance, by exploiting recurring struct…
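
The flops:byte argument in this abstract is easy to quantify for a CSR-style kernel: each nonzero costs two flops but streams roughly twelve bytes of matrix data. A back-of-envelope Python sketch (8-byte values and 4-byte indices are assumed, and dense-vector traffic is ignored):

```python
def csr_flops_per_byte(nnz, n_rows, val_bytes=8, idx_bytes=4):
    """Rough flops:byte ratio for one CSR SpMV pass, counting matrix
    traffic only. Each nonzero costs one multiply and one add (2 flops)
    while streaming one stored value and one column index; row pointers
    add one index per row. Ignoring vector traffic makes this an
    optimistic (upper) estimate."""
    flops = 2 * nnz
    matrix_bytes = nnz * (val_bytes + idx_bytes) + (n_rows + 1) * idx_bytes
    return flops / matrix_bytes

# A matrix averaging 10 nonzeros per row yields about 0.16 flops per byte,
# far below the compute-to-bandwidth balance point of modern processors.
print(round(csr_flops_per_byte(10_000_000, 1_000_000), 3))  # 0.161
```

Techniques like the dissertation's block-structure exploitation raise this ratio by storing fewer indices per nonzero.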

Books on the topic "Sparse Vector Vector Multiplication"

1. Andersen, J. The scheduling of sparse matrix-vector multiplication on a massively parallel DAP computer. Brunel University, Department of Mathematics and Statistics, 1991.

2. Itai, Yad-Shalom, and Langley Research Center, eds. Fast multiresolution algorithms for matrix-vector multiplication. National Aeronautics and Space Administration, Langley Research Center, 1992.
3

M¨uhlherr, Bernhard, Holger P. Petersson, and Richard M. Weiss. Quadratic Forms of Type F4. Princeton University Press, 2017. http://dx.doi.org/10.23943/princeton/9780691166902.003.0009.

Full text
Abstract:
This chapter presents various results about quadratic forms of type F₄. The Moufang quadrangles of type F₄ were discovered in the course of carrying out the classification of Moufang polygons and gave rise to the notion of a quadratic form of type F₄. The chapter begins with the notation stating that a quadratic space Λ‎ = (K, L, q) is of type F₄ if char(K) = 2, q is anisotropic and: for some separable quadratic extension E/K with norm N; for some subfield F of K containing K² viewed as a vector space over K with respect to the scalar multiplication (t, s) ↦ t²s for all (t, s) ∈ K x F; and for
APA, Harvard, Vancouver, ISO, and other styles

4. Bisseling, Rob H. Parallel Scientific Computation. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198788348.001.0001.

Abstract: This book explains how to use the bulk synchronous parallel (BSP) model to design and implement parallel algorithms in the areas of scientific computing and big data. Furthermore, it presents a hybrid BSP approach towards new hardware developments such as hierarchical architectures with both shared and distributed memory. The book provides a full treatment of core problems in scientific computing and big data, starting from a high-level problem description, via a sequential solution algorithm to a parallel solution algorithm and an actual parallel program written in the communication library B…

5. Mann, Peter. Legendre Transforms. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198822370.003.0033.

Abstract: This chapter introduces vector calculus to the reader from the very basics to a level appropriate for studying classical mechanics. However, it provides only the vector calculus required to understand some of the operations performed in the text and perhaps support self-learning in more advanced topics, so the analysis is not definitive. The chapter begins by examining the axioms of vector algebra, vector multiplication and vector differentiation, and then tackles the gradient, divergence and curl and other elements of vector integration. Topics discussed include contour integrals, …
6

Algebraic And Geometric Aspects Of Integrable Systems And Random Matrices Ams Special Session Algebraic And Geometric Aspects Of Integrable Systems And Random Matrices January 67 2012 Boston Ma. American Mathematical Society, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Sparse Vector Vector Multiplication"

1. Vassiliadis, Stamatis, Sorin Cotofana, and Pyrrhos Stathis. "Vector ISA Extension for Sparse Matrix-Vector Multiplication." In Euro-Par'99 Parallel Processing. Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48311-x_100.

2. Maeda, Hiroshi, and Daisuke Takahashi. "Parallel Sparse Matrix-Vector Multiplication Using Accelerators." In Computational Science and Its Applications – ICCSA 2016. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-42108-7_1.

3. Plaksa, Sergiy A., and Vitalii S. Shpakivskyi. "Differentiation in Vector Spaces." In Monogenic Functions in Spaces with Commutative Multiplication and Applications. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-32254-9_2.

4. Hishinuma, Toshiaki, Hidehiko Hasegawa, and Teruo Tanaka. "SIMD Parallel Sparse Matrix-Vector and Transposed-Matrix-Vector Multiplication in DD Precision." In High Performance Computing for Computational Science – VECPAR 2016. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61982-8_4.

5. Monakov, Alexander, and Arutyun Avetisyan. "Implementing Blocked Sparse Matrix-Vector Multiplication on NVIDIA GPUs." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03138-0_32.

6. AlAhmadi, Sarah, Thaha Muhammed, Rashid Mehmood, and Aiiad Albeshri. "Performance Characteristics for Sparse Matrix-Vector Multiplication on GPUs." In Smart Infrastructure and Applications. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-13705-2_17.

7. Çatalyürek, Ümit V., and Cevdet Aykanat. "Decomposing irregularly sparse matrices for parallel matrix-vector multiplication." In Parallel Algorithms for Irregularly Structured Problems. Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/bfb0030098.

8. Wellein, Gerhard, Georg Hager, Achim Basermann, and Holger Fehske. "Fast Sparse Matrix-Vector Multiplication for TeraFlop/s Computers." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36569-9_18.

9. Monakov, Alexander, Anton Lokhmotov, and Arutyun Avetisyan. "Automatically Tuning Sparse Matrix-Vector Multiplication for GPU Architectures." In High Performance Embedded Architectures and Compilers. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-11515-8_10.

10. Vuduc, Richard W., and Hyun-Jin Moon. "Fast Sparse Matrix-Vector Multiplication by Exploiting Variable Block Structure." In High Performance Computing and Communications. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11557654_91.

Conference papers on the topic "Sparse Vector Vector Multiplication"

1. Zhuo, Ling, and Viktor K. Prasanna. "Sparse Matrix-Vector multiplication on FPGAs." In the 2005 ACM/SIGDA 13th international symposium. ACM Press, 2005. http://dx.doi.org/10.1145/1046192.1046202.

2. Haque, Sardar Anisul, Shahadat Hossain, and Marc Moreno Maza. "Cache friendly sparse matrix-vector multiplication." In the 4th International Workshop. ACM Press, 2010. http://dx.doi.org/10.1145/1837210.1837238.

3. Li, Haoran, Harumichi Yokoyama, and Takuya Araki. "Merge-Based Parallel Sparse Matrix-Sparse Vector Multiplication with a Vector Architecture." In 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS). IEEE, 2018. http://dx.doi.org/10.1109/hpcc/smartcity/dss.2018.00038.

4. Shah, Monika. "Sparse Matrix Sparse Vector Multiplication - A Novel Approach." In 2015 44th International Conference on Parallel Processing Workshops (ICPPW). IEEE, 2015. http://dx.doi.org/10.1109/icppw.2015.18.

5. Buluç, Aydin, Jeremy T. Fineman, Matteo Frigo, John R. Gilbert, and Charles E. Leiserson. "Parallel sparse matrix-vector and matrix-transpose-vector multiplication using compressed sparse blocks." In the twenty-first annual symposium. ACM Press, 2009. http://dx.doi.org/10.1145/1583991.1584053.

6. Wang, Zhuowei, Xianbin Xu, Wuqing Zhao, Yuping Zhang, and Shuibing He. "Optimizing sparse matrix-vector multiplication on CUDA." In 2010 2nd International Conference on Education Technology and Computer (ICETC 2010). IEEE, 2010. http://dx.doi.org/10.1109/icetc.2010.5529724.

7. Pinar, Ali, and Michael T. Heath. "Improving performance of sparse matrix-vector multiplication." In the 1999 ACM/IEEE conference. ACM Press, 1999. http://dx.doi.org/10.1145/331532.331562.

8. Sun, Junqing, Gregory Peterson, and Olaf Storaasli. "Sparse Matrix-Vector Multiplication Design on FPGAs." In 15th Annual IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM 2007). IEEE, 2007. http://dx.doi.org/10.1109/fccm.2007.56.

9. Merrill, Duane, and Michael Garland. "Merge-Based Parallel Sparse Matrix-Vector Multiplication." In SC16: International Conference for High Performance Computing, Networking, Storage and Analysis. IEEE, 2016. http://dx.doi.org/10.1109/sc.2016.57.

10. Quang Anh, Pham Nguyen, Rui Fan, and Yonggang Wen. "Reducing Vector I/O for Faster GPU Sparse Matrix-Vector Multiplication." In 2015 IEEE International Parallel and Distributed Processing Symposium (IPDPS). IEEE, 2015. http://dx.doi.org/10.1109/ipdps.2015.100.

Reports on the topic "Sparse Vector Vector Multiplication"

1. Vuduc, R., and H. Moon. Fast sparse matrix-vector multiplication by exploiting variable block structure. Office of Scientific and Technical Information (OSTI), 2005. http://dx.doi.org/10.2172/891708.

2. Calahan, D. A. Sparse Elimination on Vector Multiprocessors. Defense Technical Information Center, 1988. http://dx.doi.org/10.21236/ada204321.

3. Calahan, D. A. Sparse Elimination on Vector Multiprocessors. Defense Technical Information Center, 1985. http://dx.doi.org/10.21236/ada158274.

4. Calahan, D. A. Sparse Elimination on Vector Multiprocessors. Defense Technical Information Center, 1986. http://dx.doi.org/10.21236/ada175121.

5. Hendrickson, B., R. Leland, and S. Plimpton. An efficient parallel algorithm for matrix-vector multiplication. Office of Scientific and Technical Information (OSTI), 1993. http://dx.doi.org/10.2172/6519330.

6. Liberty, Edo, and Steven W. Zucker. The Mailman Algorithm: A Note on Matrix Vector Multiplication. Defense Technical Information Center, 2008. http://dx.doi.org/10.21236/ada481737.

7. Simon, Horst D. Ordering Methods for Sparse Matrices and Vector Computers. Defense Technical Information Center, 1986. http://dx.doi.org/10.21236/ada186350.

8. Lewis, John G. Ordering Methods for Sparse Matrices and Vector Computers. Defense Technical Information Center, 1988. http://dx.doi.org/10.21236/ada198291.

9. Gropp, W. D., D. K. Kaushik, M. Minkoff, and B. F. Smith. Improving the performance of tensor matrix vector multiplication in quantum chemistry codes. Office of Scientific and Technical Information (OSTI), 2008. http://dx.doi.org/10.2172/928654.

10. Tolleson, Blayne, Matthew Marinella, Christopher Bennett, Hugh Barnaby, Donald Wilson, and Jesse Short. Vector-Matrix Multiplication Engine for Neuromorphic Computation with a CBRAM Crossbar Array. Office of Scientific and Technical Information (OSTI), 2022. http://dx.doi.org/10.2172/1846087.