Academic literature on the topic 'Sparse matrix multiplication'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sparse matrix multiplication.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Sparse matrix multiplication"

1

Seo, Juwon, and Joonho Kong. "VerSA: Versatile Systolic Array Architecture for Sparse and Dense Matrix Multiplications." Electronics 13, no. 8 (April 15, 2024): 1500. http://dx.doi.org/10.3390/electronics13081500.

Abstract:
A key part of modern deep neural network (DNN) applications is matrix multiplication. As DNN applications are becoming more diverse, there is a need for both dense and sparse matrix multiplications to be accelerated by hardware. However, most hardware accelerators are designed to accelerate either dense or sparse matrix multiplication. In this paper, we propose VerSA, a versatile systolic array architecture for both dense and sparse matrix multiplications. VerSA employs intermediate paths and SRAM buffers between the rows of the systolic array (SA), thereby enabling an early termination in sparse matrix multiplication with a negligible performance overhead when running dense matrix multiplication. When running sparse matrix multiplication, 256 × 256 VerSA brings performance (i.e., an inverse of execution time) improvement and energy saving by 1.21×–1.60× and 7.5–30.2%, respectively, when compared to the conventional SA. When running dense matrix multiplication, VerSA results in only a 0.52% performance overhead compared to the conventional SA.
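As a rough software-level illustration of the early-termination idea in this abstract (not the VerSA hardware or its datapath), the Python sketch below only counts multiply-accumulate steps for a single output element: a dense datapath spends one step per operand pair, while a sparsity-aware one spends steps only where both operands are nonzero. The vector length and density are arbitrary choices for the example.

```python
import numpy as np

def dot_steps(a_row, b_col):
    """Count multiply-accumulate steps for one output element.

    A counting analogy only: the 'dense' path spends one step per operand
    pair, while the 'sparse' path spends steps only where both operands are
    nonzero, which is the saving an early-terminating row can exploit.
    """
    dense_steps = len(a_row)
    sparse_steps = int(np.count_nonzero(a_row * b_col))
    return dense_steps, sparse_steps

rng = np.random.default_rng(0)
a = rng.random(256) * (rng.random(256) < 0.05)   # roughly 95% zeros
b = rng.random(256)
print(dot_steps(a, b))                            # e.g. (256, 13)
```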
2

Briggs, Preston. "Sparse matrix multiplication." ACM SIGPLAN Notices 31, no. 11 (November 1996): 33–37. http://dx.doi.org/10.1145/240964.240970.

3

Yuster, Raphael, and Uri Zwick. "Fast sparse matrix multiplication." ACM Transactions on Algorithms 1, no. 1 (July 2005): 2–13. http://dx.doi.org/10.1145/1077464.1077466.

4

Park, S. C., J. P. Draayer, and S. Q. Zheng. "Fast sparse matrix multiplication." Computer Physics Communications 70, no. 3 (July 1992): 557–68. http://dx.doi.org/10.1016/0010-4655(92)90116-g.

5

Tao, Yuan, Yangdong Deng, Shuai Mu, Zhenzhong Zhang, Mingfa Zhu, Limin Xiao, and Li Ruan. "GPU accelerated sparse matrix-vector multiplication and sparse matrix-transpose vector multiplication." Concurrency and Computation: Practice and Experience 27, no. 14 (October 7, 2014): 3771–89. http://dx.doi.org/10.1002/cpe.3415.

6

Borna, Keivan, and Sohrab Fard. "A note on the multiplication of sparse matrices." Open Computer Science 4, no. 1 (January 1, 2014): 1–11. http://dx.doi.org/10.2478/s13537-014-0201-x.

Abstract:
We present a practical algorithm for multiplication of two sparse matrices. In fact, if A and B are two matrices of size n with m₁ and m₂ non-zero elements respectively, then our algorithm performs O(min{m₁n, m₂n, m₁m₂}) multiplications and O(k) additions, where k is the number of non-zero elements in the tiny matrices that are obtained by the columns-times-rows matrix multiplication method. Note that in the useful case, k ≤ m₂n. However, in Proposition 3.3 and Proposition 3.4 we obtain tight upper bounds for the complexity of additions. We also study the complexity of multiplication in a practical case where the non-zero elements of A (resp. B) are distributed independently and uniformly among its columns (resp. rows), and show that the expected number of multiplications is O(m₁m₂/n). Finally, a comparison of the number of required multiplications in naïve matrix multiplication, Strassen's method, and our algorithm is given.
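To make the columns-times-rows scheme concrete, here is a minimal Python sketch of the general outer-product formulation (an illustration only, not the authors' algorithm or code): C is accumulated as the sum over k of column k of A times row k of B, so only pairs of nonzero entries generate multiplications. The matrix size and densities below are arbitrary.

```python
import numpy as np
from collections import defaultdict

def outer_product_spgemm(A, B):
    """Sparse matrix product by the columns-times-rows (outer product) scheme:
    C = sum over k of A[:, k] (outer) B[k, :].

    Only pairs of nonzero entries ever meet, so the multiplication count is
    governed by the nonzero structure rather than by n^3. Dense NumPy arrays
    are used purely for readability; this is an illustrative sketch.
    """
    C = defaultdict(float)
    mults = 0
    for k in range(A.shape[1]):
        col = np.nonzero(A[:, k])[0]
        row = np.nonzero(B[k, :])[0]
        for i in col:                     # each (i, j) pair costs one multiplication
            for j in row:
                C[(i, j)] += A[i, k] * B[k, j]
                mults += 1
    return C, mults

rng = np.random.default_rng(1)
n = 200
A = rng.random((n, n)) * (rng.random((n, n)) < 0.02)   # about 2% nonzeros
B = rng.random((n, n)) * (rng.random((n, n)) < 0.02)
C, mults = outer_product_spgemm(A, B)
print(mults, "multiplications versus", n**3, "for the naive algorithm")
```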
7

Ahmed, Md Salman, Jennifer Houser, Mohammad A. Hoque, Rezaul Raju, and Phil Pfeiffer. "Reducing Inter-Process Communication Overhead in Parallel Sparse Matrix-Matrix Multiplication." International Journal of Grid and High Performance Computing 9, no. 3 (July 2017): 46–59. http://dx.doi.org/10.4018/ijghpc.2017070104.

Abstract:
Parallel sparse matrix-matrix multiplication algorithms (PSpGEMM) spend most of their running time on inter-process communication. In the case of distributed matrix-matrix multiplications, much of this time is spent on interchanging the partial results that are needed to calculate the final product matrix. This overhead can be reduced with a one-dimensional distributed algorithm for parallel sparse matrix-matrix multiplication that uses a novel accumulation pattern whose cost is logarithmic in the number of processors (i.e., O(log P), where P is the number of processors). This algorithm's MPI communication overhead and execution time were evaluated on an HPC cluster, using randomly generated sparse matrices with dimensions up to one million by one million. The results showed a reduction of inter-process communication overhead for matrices with larger dimensions compared to another one-dimensional parallel algorithm with a higher run-time complexity for accumulating the results.
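The accumulation pattern is easier to see in a toy form. The sketch below (plain Python, not the authors' MPI implementation) merges per-process partial result dictionaries pairwise in rounds, so P partials are combined in about log2(P) rounds instead of P - 1 sequential merge steps; the process count and values are made up for the example.

```python
def merge(c1, c2):
    """Accumulate two sparse partial products stored as {(i, j): value} dicts."""
    out = dict(c1)
    for key, val in c2.items():
        out[key] = out.get(key, 0.0) + val
    return out

def tree_accumulate(partials):
    """Combine per-process partial results in ceil(log2(P)) pairwise rounds.

    A conceptual sketch of a logarithmic accumulation pattern; a naive scheme
    in which a single process merges all P partials one after another needs
    P - 1 sequential merge steps instead.
    """
    while len(partials) > 1:
        merged = [merge(a, b) for a, b in zip(partials[0::2], partials[1::2])]
        if len(partials) % 2:             # an odd one out advances to the next round
            merged.append(partials[-1])
        partials = merged
    return partials[0]

# Eight notional processes, each holding one entry of the product matrix.
parts = [{(p % 3, p % 2): float(p)} for p in range(8)]
print(tree_accumulate(parts))
```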
8

Wang, Ying, and Korhan Cengiz. "Implementation of the Spark technique in a matrix distributed computing algorithm." Journal of Intelligent Systems 31, no. 1 (January 1, 2022): 660–71. http://dx.doi.org/10.1515/jisys-2022-0051.

Abstract:
This article analyzes Spark engine performance strategies in order to implement the Spark technique in a distributed matrix computation algorithm, using sparse matrix multiplication as the operational test model. The dimensions of the two input sparse matrices were fixed at 30,000 × 30,000, and the density of the input matrices was varied. The experimental results show that when the density reaches about 0.3, the original dense matrix multiplication can outperform sparse-sparse matrix multiplication, which is basically consistent with the relationship observed between the sparse matrix multiplication implementation in the single-machine sparse matrix test and the computational performance of the local native library. When the density of the fixed sparse matrix is 0.01, distributed sparse matrix multiplication outperforms the same computation using dense matrix storage, and the acceleration ratio increases from 1.88× to 5.71× as the dimension grows. The overall performance of distributed operations is improved.
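The sparse-versus-dense crossover can be probed on a single machine with SciPy, as a small stand-in for the Spark experiment (the paper uses 30,000 × 30,000 distributed matrices; the size and densities below, and the density at which dense BLAS starts to win, are assumptions that depend on the platform).

```python
import time

import numpy as np
from scipy import sparse

def compare(n=2000, density=0.05, seed=0):
    """Time sparse-sparse vs dense-dense multiplication at one density."""
    rng = np.random.default_rng(seed)
    A = sparse.random(n, n, density=density, format="csr", random_state=rng)
    B = sparse.random(n, n, density=density, format="csr", random_state=rng)

    t0 = time.perf_counter()
    _ = A @ B                      # sparse-sparse product in CSR format
    t_sparse = time.perf_counter() - t0

    Ad, Bd = A.toarray(), B.toarray()
    t0 = time.perf_counter()
    _ = Ad @ Bd                    # dense product via BLAS
    t_dense = time.perf_counter() - t0
    return t_sparse, t_dense

for d in (0.01, 0.1, 0.3):
    print(d, compare(density=d))
```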
9

Bank, Randolph E., and Craig C. Douglas. "Sparse matrix multiplication package (SMMP)." Advances in Computational Mathematics 1, no. 1 (February 1993): 127–37. http://dx.doi.org/10.1007/bf02070824.

10

Král, Daniel, Pavel Neogrády, and Vladimir Kellö. "Simple sparse matrix multiplication algorithm." Computer Physics Communications 85, no. 2 (February 1995): 213–16. http://dx.doi.org/10.1016/0010-4655(94)00120-q.


Dissertations / Theses on the topic "Sparse matrix multiplication"

1

Kunchum, Rakshith. "On Improving Sparse Matrix-Matrix Multiplication on GPUs." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1492694387445938.

2

Ashari, Arash. "Sparse Matrix-Vector Multiplication on GPU." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1417770100.

3

Ramachandran, Shridhar. "Incremental PageRank acceleration using Sparse Matrix-Sparse Vector Multiplication." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1462894358.

4

Balasubramanian, Deepan Karthik. "Efficient Sparse Matrix Vector Multiplication for Structured Grid Representation." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339730490.

5

Thumma, Vineeth Reddy. "Optimizing Sparse Matrix-Matrix Multiplication for Graph Computations on GPUs and Multi-Core Systems." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524113772955789.

6

Mansour, Ahmad [author]. "Sparse Matrix-Vector Multiplication Based on Network-on-Chip / Ahmad Mansour." München: Verlag Dr. Hut, 2015. http://d-nb.info/1075409470/34.

7

Singh, Kunal. "High-Performance Sparse Matrix-Multi Vector Multiplication on Multi-Core Architecture." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524089757826551.

8

El-Kurdi, Yousef M. "Sparse Matrix-Vector floating-point multiplication with FPGAs for finite element electromagnetics." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98958.

Abstract:
The Finite Element Method (FEM) is a computationally intensive scientific and engineering analysis tool that has diverse applications ranging from structural engineering to electromagnetic simulation. Field Programmable Gate Arrays (FPGAs) have been shown to have higher peak floating-point performance than general purpose CPUs, and the trends are moving in favor of FPGAs. We present an architecture and implementation of an FPGA-based Sparse Matrix-Vector Multiplier (SMVM) for use in the iterative solution of large, sparse systems of equations arising from FEM applications. Our architecture exploits the FEM matrix sparsity structure to achieve a balance between performance and hardware resource requirements. The architecture is based on a pipelined linear array of Processing Elements (PEs). A hardware-oriented matrix "striping" scheme is developed which reduces the number of required processing elements. The implemented SMVM-pipeline prototype contains 8 PEs and is clocked at 110 MHz obtaining a peak performance of 1.76 GFLOPS. For 8 GB/s of memory bandwidth typical of recent FPGA reconfigurable systems, this architecture can achieve 1.5 GFLOPS sustained performance. A single pipeline uses 30% of the logic resources and 40% of the memory resources of a Stratix S80 FPGA. Using multiple instances of the pipeline, linear scaling of the peak and sustained performance can be achieved. Our stream-through architecture provides the added advantage of enabling an iterative implementation of the SMVM computation required by iterative solvers such as the conjugate gradient method, avoiding initialization time due to data loading and setup inside the FPGA internal memory.
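For reference, the kernel that such a pipeline streams through its processing elements is ordinary sparse matrix-vector multiplication. A plain CSR version in Python (a software reference only, with none of the striping or pipelining described above) looks like this:

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """Compute y = A @ x for a matrix stored in CSR form.

    indptr marks where each row starts and ends in indices/data; each output
    entry is an inner product over that row's stored nonzeros.
    """
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# 3x3 example matrix: [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
indptr  = np.array([0, 2, 3, 5])
indices = np.array([0, 2, 1, 0, 2])
data    = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
x       = np.array([1.0, 1.0, 1.0])
print(csr_spmv(indptr, indices, data, x))   # [3. 3. 9.]
```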
9

Godwin, Jeswin Samuel. "High-Performance Sparse Matrix-Vector Multiplication on GPUs for Structured Grid Computations." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1357280824.

10

Kuang, Da. "Nonnegative matrix factorization for clustering." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52299.

Abstract:
This dissertation shows that nonnegative matrix factorization (NMF) can be extended to a general and efficient clustering method. Clustering is one of the fundamental tasks in machine learning. It is useful for unsupervised knowledge discovery in a variety of applications such as text mining and genomic analysis. NMF is a dimension reduction method that approximates a nonnegative matrix by the product of two lower rank nonnegative matrices, and has shown great promise as a clustering method when a data set is represented as a nonnegative data matrix. However, challenges in the widespread use of NMF as a clustering method lie in its correctness and efficiency: First, we need to know why and when NMF could detect the true clusters and guarantee to deliver good clustering quality; second, existing algorithms for computing NMF are expensive and often take more time than other clustering methods. We show that the original NMF can be improved from both aspects in the context of clustering. Our new NMF-based clustering methods can achieve better clustering quality and run orders of magnitude faster than the original NMF and other clustering methods. Like other clustering methods, NMF places an implicit assumption on the cluster structure. Thus, the success of NMF as a clustering method depends on whether the representation of data in a vector space satisfies that assumption. Our approach to extending the original NMF to a general clustering method is to switch from the vector space representation of data points to a graph representation. The new formulation, called Symmetric NMF, takes a pairwise similarity matrix as an input and can be viewed as a graph clustering method. We evaluate this method on document clustering and image segmentation problems and find that it achieves better clustering accuracy. In addition, for the original NMF, it is difficult but important to choose the right number of clusters. We show that the widely used consensus NMF in genomic analysis for choosing the number of clusters has critical flaws and can produce misleading results. We propose a variation of the prediction strength measure arising from statistical inference to evaluate the stability of clusters and select the right number of clusters. Our measure shows promising performance in artificial simulation experiments. Large-scale applications bring substantial efficiency challenges to existing algorithms for computing NMF. An important example is topic modeling, where users want to uncover the major themes in a large text collection. Our strategy of accelerating NMF-based clustering is to design algorithms that better suit the computer architecture and exploit the computing power of parallel platforms such as graphics processing units (GPUs). A key observation is that applying rank-2 NMF, which partitions a data set into two clusters, in a recursive manner is much faster than applying the original NMF to obtain a flat clustering. We take advantage of a special property of rank-2 NMF and design an algorithm that runs faster than existing algorithms due to continuous memory access. Combined with a criterion to stop the recursion, our hierarchical clustering algorithm runs significantly faster and achieves even better clustering quality than existing methods. Another bottleneck of NMF algorithms, which is also a common bottleneck in many other machine learning applications, is to multiply a large sparse data matrix with a tall-and-skinny dense matrix. We use GPUs to accelerate this routine for sparse matrices with an irregular sparsity structure. Overall, our algorithm shows significant improvement over popular topic modeling methods such as latent Dirichlet allocation, and runs more than 100 times faster on data sets with millions of documents.
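The bottleneck named at the end of the abstract, multiplying a large sparse data matrix by a tall-and-skinny dense matrix, can be written in a few lines with SciPy; the shapes and density below are small stand-ins for illustration, and this CPU routine is what the dissertation replaces with a GPU kernel for irregular sparsity.

```python
import numpy as np
from scipy import sparse

# A large, very sparse term-document style matrix times a tall-and-skinny
# dense factor (e.g. 16 topics). SciPy dispatches this to a CSR-times-dense
# kernel and returns a dense result.
rng = np.random.default_rng(0)
A = sparse.random(100_000, 5_000, density=1e-4, format="csr", random_state=rng)
W = rng.random((5_000, 16))          # tall-and-skinny dense matrix
H = A @ W                            # dense result of shape (100000, 16)
print(H.shape)
```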

Books on the topic "Sparse matrix multiplication"

1

United States. National Aeronautics and Space Administration. Scientific and Technical Information Division, ed. An efficient sparse matrix multiplication scheme for the CYBER 205 computer. [Washington, DC]: National Aeronautics and Space Administration, Scientific and Technical Information Division, 1988.

2

Andersen, J. The scheduling of sparse matrix-vector multiplication on a massively parallel DAP computer. Uxbridge: Brunel University, Department of Mathematics and Statistics, 1991.

3

Bisseling, Rob H. Parallel Scientific Computation. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198788348.001.0001.

Abstract:
This book explains how to use the bulk synchronous parallel (BSP) model to design and implement parallel algorithms in the areas of scientific computing and big data. Furthermore, it presents a hybrid BSP approach towards new hardware developments such as hierarchical architectures with both shared and distributed memory. The book provides a full treatment of core problems in scientific computing and big data, starting from a high-level problem description, via a sequential solution algorithm to a parallel solution algorithm and an actual parallel program written in the communication library BSPlib. Numerical experiments are presented for parallel programs on modern parallel computers ranging from desktop computers to massively parallel supercomputers. The introductory chapter of the book gives a complete overview of BSPlib, so that the reader already at an early stage is able to write his/her own parallel programs. Furthermore, it treats BSP benchmarking and parallel sorting by regular sampling. The next three chapters treat basic numerical linear algebra problems such as linear system solving by LU decomposition, sparse matrix-vector multiplication (SpMV), and the fast Fourier transform (FFT). The final chapter explores parallel algorithms for big data problems such as graph matching. The book is accompanied by a software package BSPedupack, freely available online from the author’s homepage, which contains all programs of the book and a set of test programs.
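As a toy illustration of the owner-computes idea behind a bulk synchronous SpMV (not the book's BSPlib algorithm, which uses a genuine two-dimensional distribution and communicates only the needed vector components), one can split the rows over p notional processes and concatenate the local results; the matrix size, density, and p below are arbitrary.

```python
import numpy as np
from scipy import sparse

def block_row_spmv(A, x, p=4):
    """Toy bulk-synchronous SpMV: p 'processes' each own a block of rows,
    every process holds the whole vector x, and the per-block results are
    concatenated in one synchronization step."""
    n = A.shape[0]
    bounds = np.linspace(0, n, p + 1, dtype=int)
    parts = [A[bounds[i]:bounds[i + 1], :] @ x for i in range(p)]  # local supersteps
    return np.concatenate(parts)                                   # 'communication' step

rng = np.random.default_rng(0)
A = sparse.random(1000, 1000, density=0.01, format="csr", random_state=rng)
x = rng.random(1000)
print(np.allclose(block_row_spmv(A, x), A @ x))   # True
```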

Book chapters on the topic "Sparse matrix multiplication"

1

Yuster, Raphael, and Uri Zwick. "Fast Sparse Matrix Multiplication." In Algorithms – ESA 2004, 604–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30140-0_54.

2

Dusefante, Matteo, and Riko Jacob. "Cache Oblivious Sparse Matrix Multiplication." In LATIN 2018: Theoretical Informatics, 437–47. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77404-6_32.

3

Simic, Vladimir, Vladimir Ciric, Nikola Savic, and Ivan Milentijevic. "Sparse Matrix Multiplication on Dataflow Engines." In Parallel Processing and Applied Mathematics, 23–30. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-32149-3_3.

4

Patwary, Md Mostofa Ali, Nadathur Rajagopalan Satish, Narayanan Sundaram, Jongsoo Park, Michael J. Anderson, Satya Gautam Vadlamudi, Dipankar Das, Sergey G. Pudov, Vadim O. Pirogov, and Pradeep Dubey. "Parallel Efficient Sparse Matrix-Matrix Multiplication on Multicore Platforms." In Lecture Notes in Computer Science, 48–57. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20119-1_4.

5

Maeda, Hiroshi, and Daisuke Takahashi. "Parallel Sparse Matrix-Vector Multiplication Using Accelerators." In Computational Science and Its Applications – ICCSA 2016, 3–18. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-42108-7_1.

6

Greiner, Gero, and Riko Jacob. "The I/O Complexity of Sparse Matrix Dense Matrix Multiplication." In LATIN 2010: Theoretical Informatics, 143–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12200-2_14.

7

Pagh, Rasmus, and Morten Stöckel. "The Input/Output Complexity of Sparse Matrix Multiplication." In Algorithms – ESA 2014, 750–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-44777-2_62.

8

Peng, Richard, and Santosh Vempala. "Solving Sparse Linear Systems Faster than Matrix Multiplication." In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), 504–21. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2021. http://dx.doi.org/10.1137/1.9781611976465.31.

9

Vassiliadis, Stamatis, Sorin Cotofana, and Pyrrhos Stathis. "Vector ISA Extension for Sparse Matrix-Vector Multiplication." In Euro-Par’99 Parallel Processing, 708–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48311-x_100.

10

Abboud, Amir, Karl Bringmann, Nick Fischer, and Marvin Künnemann. "The Time Complexity of Fully Sparse Matrix Multiplication." In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 4670–703. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2024. http://dx.doi.org/10.1137/1.9781611977912.167.


Conference papers on the topic "Sparse matrix multiplication"

1

Zhang, Zhekai, Hanrui Wang, Song Han, and William J. Dally. "SpArch: Efficient Architecture for Sparse Matrix Multiplication." In 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2020. http://dx.doi.org/10.1109/hpca47549.2020.00030.

2

Hong, Changwan, Aravind Sukumaran-Rajam, Israt Nisa, Kunal Singh, and P. Sadayappan. "Adaptive sparse tiling for sparse matrix multiplication." In PPoPP '19: 24th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3293883.3295712.

3

Matam, Kiran, Siva Rama Krishna Bharadwaj Indarapu, and Kishore Kothapalli. "Sparse matrix-matrix multiplication on modern architectures." In 2012 19th International Conference on High Performance Computing (HiPC). IEEE, 2012. http://dx.doi.org/10.1109/hipc.2012.6507483.

4

Jain-Mendon, Shweta, and Ron Sass. "Performance evaluation of Sparse Matrix-Matrix Multiplication." In 2013 23rd International Conference on Field Programmable Logic and Applications (FPL). IEEE, 2013. http://dx.doi.org/10.1109/fpl.2013.6645561.

5

Ishiguro, Fumiya, Takahiro Katagiri, Satoshi Ohshima, and Toru Nagai. "Performance Evaluation of Accurate Matrix-Matrix Multiplication on GPU Using Sparse Matrix Multiplications." In 2020 Eighth International Symposium on Computing and Networking Workshops (CANDARW). IEEE, 2020. http://dx.doi.org/10.1109/candarw51189.2020.00044.

6

Gleinig, Niels, Maciej Besta, and Torsten Hoefler. "I/O-Optimal Cache-Oblivious Sparse Matrix-Sparse Matrix Multiplication." In 2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS). IEEE, 2022. http://dx.doi.org/10.1109/ipdps53621.2022.00013.

7

Shah, Monika. "Sparse Matrix Sparse Vector Multiplication - A Novel Approach." In 2015 44th International Conference on Parallel Processing Workshops (ICPPW). IEEE, 2015. http://dx.doi.org/10.1109/icppw.2015.18.

8

Winter, Martin, Daniel Mlakar, Rhaleb Zayer, Hans-Peter Seidel, and Markus Steinberger. "Adaptive sparse matrix-matrix multiplication on the GPU." In PPoPP '19: 24th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3293883.3295701.

9

Ballard, Grey, Alex Druinsky, Nicholas Knight, and Oded Schwartz. "Hypergraph Partitioning for Parallel Sparse Matrix-Matrix Multiplication." In SPAA '15: 27th ACM Symposium on Parallelism in Algorithms and Architectures. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2755573.2755613.

10

Kurt, Sureyya Emre, Aravind Sukumaran-Rajam, Fabrice Rastello, and P. Sadayappan. "Efficient Tiled Sparse Matrix Multiplication through Matrix Signatures." In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis. IEEE, 2020. http://dx.doi.org/10.1109/sc41405.2020.00091.


Reports on the topic "Sparse matrix multiplication"

1

Deveci, Mehmet, Christian Robert Trott, and Sivasankaran Rajamanickam. Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures. Office of Scientific and Technical Information (OSTI), January 2018. http://dx.doi.org/10.2172/1417260.

2

Nusbaum, Kurtis Lee. Optimizing Tpetra's sparse matrix-matrix multiplication routine. Office of Scientific and Technical Information (OSTI), August 2011. http://dx.doi.org/10.2172/1029781.

3

Deveci, Mehmet, Simon David Hammond, Michael M. Wolf, and Sivasankaran Rajamanickam. Sparse Matrix-Matrix Multiplication on Multilevel Memory Architectures: Algorithms and Experiments. Office of Scientific and Technical Information (OSTI), April 2018. http://dx.doi.org/10.2172/1435688.

4

Vuduc, R., and H. Moon. Fast sparse matrix-vector multiplication by exploiting variable block structure. Office of Scientific and Technical Information (OSTI), July 2005. http://dx.doi.org/10.2172/891708.

5

Ballard, Grey Malone, Jonathan Joseph Hu, and Christopher Siefert. Reducing Communication Costs for Sparse Matrix Multiplication within Algebraic Multigrid. Office of Scientific and Technical Information (OSTI), September 2015. http://dx.doi.org/10.2172/1504845.

