Academic literature on the topic 'Sparse Matrix Vector Multiplications'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sparse Matrix Vector Multiplications.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Sparse Matrix Vector Multiplications"

1

Tao, Yuan, Yangdong Deng, Shuai Mu, et al. "GPU accelerated sparse matrix-vector multiplication and sparse matrix-transpose vector multiplication." Concurrency and Computation: Practice and Experience 27, no. 14 (2014): 3771–89. http://dx.doi.org/10.1002/cpe.3415.

2

Chen, Donglin, Jianbin Fang, Chuanfu Xu, Shizhao Chen, and Zheng Wang. "Characterizing Scalability of Sparse Matrix–Vector Multiplications on Phytium FT-2000+." International Journal of Parallel Programming 48, no. 1 (2019): 80–97. http://dx.doi.org/10.1007/s10766-019-00646-x.

3

Burkhardt, Paul. "Optimal Algebraic Breadth-First Search for Sparse Graphs." ACM Transactions on Knowledge Discovery from Data 15, no. 5 (2021): 1–19. http://dx.doi.org/10.1145/3446216.

Abstract:
There has been a rise in the popularity of algebraic methods for graph algorithms given the development of the GraphBLAS library and other sparse matrix methods. An exemplar for these approaches is Breadth-First Search (BFS). The algebraic BFS algorithm is simply a recurrence of matrix-vector multiplications with the n × n adjacency matrix, but the many redundant operations over nonzeros ultimately lead to suboptimal performance. Therefore an optimal algebraic BFS should be of keen interest especially if it is easily integrated with existing matrix methods. Current methods, notably in the Grap
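(A minimal Python sketch of this matrix-vector BFS recurrence is given after this list of journal articles.)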
4

Erhel, Jocelyne. "Sparse Matrix Multiplication on Vector Computers." International Journal of High Speed Computing 2, no. 2 (1990): 101–16. http://dx.doi.org/10.1142/s012905339000008x.

5

Bienz, Amanda, William D. Gropp, and Luke N. Olson. "Node aware sparse matrix–vector multiplication." Journal of Parallel and Distributed Computing 130 (August 2019): 166–78. http://dx.doi.org/10.1016/j.jpdc.2019.03.016.

6

Filippone, Salvatore, Valeria Cardellini, Davide Barbieri, and Alessandro Fanfarillo. "Sparse Matrix-Vector Multiplication on GPGPUs." ACM Transactions on Mathematical Software 43, no. 4 (2017): 1–49. http://dx.doi.org/10.1145/3017994.

7

Haque, Sardar Anisul, Shahadat Hossain, and M. Moreno Maza. "Cache friendly sparse matrix-vector multiplication." ACM Communications in Computer Algebra 44, no. 3/4 (2011): 111–12. http://dx.doi.org/10.1145/1940475.1940490.

8

Heath, L. S., C. J. Ribbens, and S. V. Pemmaraju. "Processor-efficient sparse matrix-vector multiplication." Computers & Mathematics with Applications 48, no. 3-4 (2004): 589–608. http://dx.doi.org/10.1016/j.camwa.2003.06.009.

9

Chen, Donglin, Jianbin Fang, Shizhao Chen, Chuanfu Xu, and Zheng Wang. "Optimizing Sparse Matrix–Vector Multiplications on an ARMv8-based Many-Core Architecture." International Journal of Parallel Programming 47, no. 3 (2019): 418–32. http://dx.doi.org/10.1007/s10766-018-00625-8.

10

Yang, Xintian, Srinivasan Parthasarathy, and P. Sadayappan. "Fast sparse matrix-vector multiplication on GPUs." Proceedings of the VLDB Endowment 4, no. 4 (2011): 231–42. http://dx.doi.org/10.14778/1938545.1938548.

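Entry 3 above (Burkhardt) describes algebraic BFS as a recurrence of matrix-vector multiplications with the n × n adjacency matrix. The following Python/SciPy fragment is only a minimal sketch of that textbook recurrence, not the optimal algorithm of the cited paper; the function name algebraic_bfs and the use of scipy.sparse are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp

def algebraic_bfs(A, source):
    """BFS levels as a recurrence of sparse matrix-vector products.

    A is an n x n adjacency matrix in CSR form (A[i, j] != 0 iff edge i -> j).
    Returns BFS levels, with -1 for unreachable vertices. This naive recurrence
    keeps multiplying over nonzeros of already-visited vertices, which is the
    redundancy the cited paper aims to eliminate.
    """
    n = A.shape[0]
    levels = np.full(n, -1, dtype=np.int64)
    levels[source] = 0
    frontier = np.zeros(n)
    frontier[source] = 1.0
    level = 0
    while frontier.any():
        level += 1
        reached = (A.T @ frontier) != 0        # one SpMV expands the frontier by one hop
        new_frontier = reached & (levels < 0)  # keep only vertices not seen at earlier levels
        levels[new_frontier] = level
        frontier = new_frontier.astype(np.float64)
    return levels

# Example: a directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0.
A = sp.csr_matrix((np.ones(4), ([0, 1, 2, 3], [1, 2, 3, 0])), shape=(4, 4))
print(algebraic_bfs(A, 0))  # [0 1 2 3]
```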

Dissertations / Theses on the topic "Sparse Matrix Vector Multiplications"

1

Ashari, Arash. "Sparse Matrix-Vector Multiplication on GPU." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1417770100.

2

Ramachandran, Shridhar. "Incremental PageRank acceleration using Sparse Matrix-Sparse Vector Multiplication." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1462894358.

3

Balasubramanian, Deepan Karthik. "Efficient Sparse Matrix Vector Multiplication for Structured Grid Representation." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339730490.

4

Mansour, Ahmad. "Sparse Matrix-Vector Multiplication Based on Network-on-Chip." München: Verlag Dr. Hut, 2015. http://d-nb.info/1075409470/34.

5

Singh, Kunal. "High-Performance Sparse Matrix-Multi Vector Multiplication on Multi-Core Architecture." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524089757826551.

6

El-Kurdi, Yousef M. "Sparse Matrix-Vector floating-point multiplication with FPGAs for finite element electromagnetics." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98958.

Abstract:
The Finite Element Method (FEM) is a computationally intensive scientific and engineering analysis tool that has diverse applications ranging from structural engineering to electromagnetic simulation. Field Programmable Gate Arrays (FPGAs) have been shown to have higher peak floating-point performance than general purpose CPUs, and the trends are moving in favor of FPGAs. We present an architecture and implementation of an FPGA-based Sparse Matrix-Vector Multiplier (SMVM) for use in the iterative solution of large, sparse systems of equations arising from FEM applications. Our architecture exp
7

Godwin, Jeswin Samuel. "High-Performance Sparse Matrix-Vector Multiplication on GPUs for Structured Grid Computations." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1357280824.

8

DeLorimier, Michael. "Floating-point sparse matrix-vector multiply for FPGAs." Diss., Pasadena, Calif.: California Institute of Technology, 2005. http://resolver.caltech.edu/CaltechETD:etd-05132005-144347.

9

Belgin, Mehmet. "Structure-based Optimizations for Sparse Matrix-Vector Multiply." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/30260.

Abstract:
This dissertation introduces two novel techniques, OSF and PBR, to improve the performance of Sparse Matrix-vector Multiply (SMVM) kernels, which dominate the runtime of iterative solvers for systems of linear equations. SMVM computations that use sparse formats typically achieve only a small fraction of peak CPU speeds because they are memory bound due to their low flops:byte ratio, they access memory irregularly, and exhibit poor ILP due to inefficient pipelining. We particularly focus on improving the flops:byte ratio, which is the main limiter on performance, by exploiting recurring struct
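(A back-of-the-envelope illustration of the CSR flops:byte argument is given after this list of dissertations.)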
10

Flegar, Goran. "Sparse Linear System Solvers on GPUs: Parallel Preconditioning, Workload Balancing, and Communication Reduction." Doctoral thesis, Universitat Jaume I, 2019. http://hdl.handle.net/10803/667096.

Abstract:
With the breakdown of Dennard scaling in the mid-2000s and the end of Moore's law on the horizon, the high performance computing community is turning its attention towards unconventional accelerator hardware to ensure the continued growth of computational capacity. This dissertation presents several contributions related to the iterative solution of sparse linear systems on the most widely used general purpose accelerator - the Graphics Processing Unit (GPU). Specifically, it accelerates the major building blocks of Krylov solvers, and describes their realization as part of a software library
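Entry 9 above (Belgin) attributes poor SMVM efficiency to a low flops:byte ratio. As a rough illustration only, assuming 8-byte double values and 4-byte column indices (the function below is a generic reference kernel, not code from any of the cited theses), a plain CSR matrix-vector product performs 2 flops per stored nonzero while streaming about 12 bytes of matrix data, so its arithmetic intensity stays well below 1 flop per byte even before counting vector and row-pointer traffic:

```python
import numpy as np

def csr_spmv(row_ptr, col_idx, values, x):
    """Reference (unoptimized) CSR product y = A @ x.

    Per stored nonzero: 1 multiply + 1 add = 2 flops versus roughly
    8 bytes (value) + 4 bytes (column index) = 12 bytes of matrix traffic,
    i.e. about 0.17 flops per byte -- the memory-bound regime described
    in the abstract above.
    """
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows, dtype=np.result_type(values, x))
    for i in range(n_rows):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# Example: the 2 x 2 matrix [[1, 2], [0, 3]] in CSR, multiplied by [1, 1].
row_ptr = np.array([0, 2, 3])
col_idx = np.array([0, 1, 1])
values = np.array([1.0, 2.0, 3.0])
print(csr_spmv(row_ptr, col_idx, values, np.array([1.0, 1.0])))  # [3. 3.]
```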

Books on the topic "Sparse Matrix Vector Multiplications"

1

Andersen, J. The scheduling of sparse matrix-vector multiplication on a massively parallel DAP computer. Brunel University, Department of Mathematics and Statistics, 1991.

2

Bisseling, Rob H. Parallel Scientific Computation. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198788348.001.0001.

Abstract:
This book explains how to use the bulk synchronous parallel (BSP) model to design and implement parallel algorithms in the areas of scientific computing and big data. Furthermore, it presents a hybrid BSP approach towards new hardware developments such as hierarchical architectures with both shared and distributed memory. The book provides a full treatment of core problems in scientific computing and big data, starting from a high-level problem description, via a sequential solution algorithm to a parallel solution algorithm and an actual parallel program written in the communication library B

Book chapters on the topic "Sparse Matrix Vector Multiplications"

1

Vassiliadis, Stamatis, Sorin Cotofana, and Pyrrhos Stathis. "Vector ISA Extension for Sparse Matrix-Vector Multiplication." In Euro-Par’99 Parallel Processing. Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48311-x_100.

2

Maeda, Hiroshi, and Daisuke Takahashi. "Parallel Sparse Matrix-Vector Multiplication Using Accelerators." In Computational Science and Its Applications – ICCSA 2016. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-42108-7_1.

3

Schubert, Gerald, Georg Hager, and Holger Fehske. "Performance Limitations for Sparse Matrix-Vector Multiplications on Current Multi-Core Environments." In High Performance Computing in Science and Engineering, Garching/Munich 2009. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13872-0_2.

4

Hishinuma, Toshiaki, Hidehiko Hasegawa, and Teruo Tanaka. "SIMD Parallel Sparse Matrix-Vector and Transposed-Matrix-Vector Multiplication in DD Precision." In High Performance Computing for Computational Science – VECPAR 2016. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61982-8_4.

5

Katagiri, Takahiro, Takao Sakurai, Mitsuyoshi Igai, et al. "Control Formats for Unsymmetric and Symmetric Sparse Matrix–Vector Multiplications on OpenMP Implementations." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38718-0_24.

6

Çatalyürek, Ümit V., and Cevdet Aykanat. "Decomposing irregularly sparse matrices for parallel matrix-vector multiplication." In Parallel Algorithms for Irregularly Structured Problems. Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/bfb0030098.

7

Wellein, Gerhard, Georg Hager, Achim Basermann, and Holger Fehske. "Fast Sparse Matrix-Vector Multiplication for TeraFlop/s Computers." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36569-9_18.

8

Monakov, Alexander, Anton Lokhmotov, and Arutyun Avetisyan. "Automatically Tuning Sparse Matrix-Vector Multiplication for GPU Architectures." In High Performance Embedded Architectures and Compilers. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-11515-8_10.

9

AlAhmadi, Sarah, Thaha Muhammed, Rashid Mehmood, and Aiiad Albeshri. "Performance Characteristics for Sparse Matrix-Vector Multiplication on GPUs." In Smart Infrastructure and Applications. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-13705-2_17.

10

Monakov, Alexander, and Arutyun Avetisyan. "Implementing Blocked Sparse Matrix-Vector Multiplication on NVIDIA GPUs." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03138-0_32.


Conference papers on the topic "Sparse Matrix Vector Multiplications"

1

Ichimura, Shuntaro, Takahiro Katagiri, Katsuhisa Ozaki, Takeshi Ogita, and Toru Nagai. "Threaded Accurate Matrix-Matrix Multiplications with Sparse Matrix-Vector Multiplications." In 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2018. http://dx.doi.org/10.1109/ipdpsw.2018.00168.

2

Keklikian, Thalie, J. M. Pierre Langlois, and Yvon Savaria. "A memory transaction model for Sparse Matrix-Vector multiplications on GPUs." In 2014 IEEE 12th International New Circuits and Systems Conference (NEWCAS). IEEE, 2014. http://dx.doi.org/10.1109/newcas.2014.6934044.

3

Buluç, Aydin, Jeremy T. Fineman, Matteo Frigo, John R. Gilbert, and Charles E. Leiserson. "Parallel sparse matrix-vector and matrix-transpose-vector multiplication using compressed sparse blocks." In Proceedings of the Twenty-First Annual ACM Symposium on Parallelism in Algorithms and Architectures (SPAA '09). ACM Press, 2009. http://dx.doi.org/10.1145/1583991.1584053.

4

Shah, Monika. "Sparse Matrix Sparse Vector Multiplication - A Novel Approach." In 2015 44th International Conference on Parallel Processing Workshops (ICPPW). IEEE, 2015. http://dx.doi.org/10.1109/icppw.2015.18.

5

Haque, Sardar Anisul, Shahadat Hossain, and Marc Moreno Maza. "Cache friendly sparse matrix-vector multiplication." In the 4th International Workshop. ACM Press, 2010. http://dx.doi.org/10.1145/1837210.1837238.

6

Zhuo, Ling, and Viktor K. Prasanna. "Sparse Matrix-Vector multiplication on FPGAs." In Proceedings of the 2005 ACM/SIGDA 13th International Symposium on Field-Programmable Gate Arrays (FPGA '05). ACM Press, 2005. http://dx.doi.org/10.1145/1046192.1046202.

7

Jamroz, Ben, and Paul Mullowney. "Performance of Parallel Sparse Matrix-Vector Multiplications in Linear Solves on Multiple GPUs." In 2012 Symposium on Application Accelerators in High Performance Computing (SAAHPC). IEEE, 2012. http://dx.doi.org/10.1109/saahpc.2012.27.

8

Wang, Zhuowei, Xianbin Xu, Wuqing Zhao, Yuping Zhang, and Shuibing He. "Optimizing sparse matrix-vector multiplication on CUDA." In 2010 2nd International Conference on Education Technology and Computer (ICETC 2010). IEEE, 2010. http://dx.doi.org/10.1109/icetc.2010.5529724.

9

Sun, Junqing, Gregory Peterson, and Olaf Storaasli. "Sparse Matrix-Vector Multiplication Design on FPGAs." In 15th Annual IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM 2007). IEEE, 2007. http://dx.doi.org/10.1109/fccm.2007.56.

10

Pinar, Ali, and Michael T. Heath. "Improving performance of sparse matrix-vector multiplication." In Proceedings of the 1999 ACM/IEEE Conference on Supercomputing (SC '99). ACM Press, 1999. http://dx.doi.org/10.1145/331532.331562.


Reports on the topic "Sparse Matrix Vector Multiplications"

1

Vuduc, R., and H. Moon. Fast sparse matrix-vector multiplication by exploiting variable block structure. Office of Scientific and Technical Information (OSTI), 2005. http://dx.doi.org/10.2172/891708.

2

Hammond, Simon David, and Christian Robert Trott. Optimizing the Performance of Sparse-Matrix Vector Products on Next-Generation Processors. Office of Scientific and Technical Information (OSTI), 2017. http://dx.doi.org/10.2172/1528773.

3

Ruiz, Pablo, Craig Perry, Alejando Garcia, et al. The Everglades National Park and Big Cypress National Preserve vegetation mapping project: Interim report—Northwest Coastal Everglades (Region 4), Everglades National Park (revised with costs). National Park Service, 2020. http://dx.doi.org/10.36967/nrr-2279586.

Abstract:
The Everglades National Park and Big Cypress National Preserve vegetation mapping project is part of the Comprehensive Everglades Restoration Plan (CERP). It is a cooperative effort between the South Florida Water Management District (SFWMD), the United States Army Corps of Engineers (USACE), and the National Park Service’s (NPS) Vegetation Mapping Inventory Program (VMI). The goal of this project is to produce a spatially and thematically accurate vegetation map of Everglades National Park and Big Cypress National Preserve prior to the completion of restoration efforts associated with CERP. T