Click this link to see other types of publications on this topic: Sparse Vector Vector Multiplication.

Journal articles on the topic "Sparse Vector Vector Multiplication"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles.


Check out the 50 best journal articles on the topic "Sparse Vector Vector Multiplication."

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf and read its abstract online, whenever those details are available in the work's metadata.

Browse journal articles from a wide variety of disciplines and compile your bibliography correctly.

1

Tao, Yuan, Yangdong Deng, Shuai Mu, et al. "GPU accelerated sparse matrix-vector multiplication and sparse matrix-transpose vector multiplication." Concurrency and Computation: Practice and Experience 27, no. 14 (2014): 3771–89. http://dx.doi.org/10.1002/cpe.3415.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Filippone, Salvatore, Valeria Cardellini, Davide Barbieri, and Alessandro Fanfarillo. "Sparse Matrix-Vector Multiplication on GPGPUs." ACM Transactions on Mathematical Software 43, no. 4 (2017): 1–49. http://dx.doi.org/10.1145/3017994.

3

Erhel, Jocelyne. "Sparse Matrix Multiplication on Vector Computers." International Journal of High Speed Computing 2, no. 2 (1990): 101–16. http://dx.doi.org/10.1142/s012905339000008x.

4

Haque, Sardar Anisul, Shahadat Hossain, and M. Moreno Maza. "Cache friendly sparse matrix-vector multiplication." ACM Communications in Computer Algebra 44, no. 3/4 (2011): 111–12. http://dx.doi.org/10.1145/1940475.1940490.

5

Bienz, Amanda, William D. Gropp, and Luke N. Olson. "Node aware sparse matrix–vector multiplication." Journal of Parallel and Distributed Computing 130 (August 2019): 166–78. http://dx.doi.org/10.1016/j.jpdc.2019.03.016.

6

Heath, L. S., C. J. Ribbens, and S. V. Pemmaraju. "Processor-efficient sparse matrix-vector multiplication." Computers & Mathematics with Applications 48, no. 3-4 (2004): 589–608. http://dx.doi.org/10.1016/j.camwa.2003.06.009.

7

Yang, Xintian, Srinivasan Parthasarathy, and P. Sadayappan. "Fast sparse matrix-vector multiplication on GPUs." Proceedings of the VLDB Endowment 4, no. 4 (2011): 231–42. http://dx.doi.org/10.14778/1938545.1938548.

8

Romero, L. F., and E. L. Zapata. "Data distributions for sparse matrix vector multiplication." Parallel Computing 21, no. 4 (1995): 583–605. http://dx.doi.org/10.1016/0167-8191(94)00087-q.

9

Thomas, Rajesh, Victor DeBrunner, and Linda S. DeBrunner. "A Sparse Algorithm for Computing the DFT Using Its Real Eigenvectors." Signals 2, no. 4 (2021): 688–705. http://dx.doi.org/10.3390/signals2040041.

Abstract:
Direct computation of the discrete Fourier transform (DFT) and its FFT computational algorithms requires multiplication (and addition) of complex numbers. Complex number multiplication requires four real-valued multiplications and two real-valued additions, or three real-valued multiplications and five real-valued additions, as well as the requisite added memory for temporary storage. In this paper, we present a method for computing a DFT via a natively real-valued algorithm that is computationally equivalent to an N = 2^k-length DFT (where k is a positive integer), and is substantially more efficient…
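The operation counts quoted in this abstract can be checked directly. Below is a minimal sketch (not from the paper; function names are invented for illustration) of both complex-multiplication schemes: the schoolbook form with four real multiplications and two additions, and the rearranged form with three multiplications and five additions/subtractions.

```python
def cmul_4mul(a, b, c, d):
    # Schoolbook (a + bi)(c + di): 4 real multiplications, 2 real additions.
    return (a * c - b * d, a * d + b * c)

def cmul_3mul(a, b, c, d):
    # Rearranged form: 3 real multiplications, 5 real additions/subtractions,
    # at the cost of temporaries k1, k2, k3 (the "added memory" noted above).
    k1 = c * (a + b)   # ac + bc
    k2 = a * (d - c)   # ad - ac
    k3 = b * (c + d)   # bc + bd
    return (k1 - k3, k1 + k2)  # (ac - bd, ad + bc)
```

Both functions return the same pair; trading one multiplication for three additions pays off on hardware where multiplies are the expensive operation.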
10

Liu, Sheng, Yasong Cao, and Shuwei Sun. "Mapping and Optimization Method of SpMV on Multi-DSP Accelerator." Electronics 11, no. 22 (2022): 3699. http://dx.doi.org/10.3390/electronics11223699.

Abstract:
Sparse matrix-vector multiplication (SpMV) computes the product of a sparse matrix and a dense vector; the sparsity of such a matrix is often more than 90%. Usually, the sparse matrix is compressed to save storage resources, but this causes irregular accesses to the dense vector in the algorithm, which take a lot of time and degrade the SpMV performance of the system. In this study, we design a dedicated channel in the DMA to implement an indirect memory access process to speed up the SpMV operation. On this basis, we propose six SpMV algorithm schemes and map them to optimize the performance…
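For readers new to the compressed storage this abstract refers to, here is a minimal sketch of a sparse matrix-vector product over the common compressed sparse row (CSR) layout. The function name and example matrix are invented for illustration; the indirect read `x[indices[k]]` is exactly the irregular access pattern the abstract discusses.

```python
def csr_spmv(indptr, indices, data, x):
    # y = A @ x for A stored in CSR form: only nonzeros are kept in `data`,
    # their column positions in `indices`, and indptr[i]:indptr[i+1]
    # delimits row i's slice of both arrays.
    n = len(indptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]  # indirect, data-dependent read of x
    return y

# A = [[1, 0, 2],
#      [0, 3, 0],
#      [4, 0, 5]]  -- 5 nonzeros stored out of 9 entries
indptr = [0, 2, 3, 5]
indices = [0, 2, 1, 0, 2]
data = [1.0, 2.0, 3.0, 4.0, 5.0]
print(csr_spmv(indptr, indices, data, [1.0, 2.0, 3.0]))  # prints [7.0, 6.0, 19.0]
```

Because `indices` depends on the matrix's sparsity pattern, the reads of `x` are scattered and cache-unfriendly, which is why so many of the works listed here target this access pattern.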
11

Sun, C. C., J. Götze, H. Y. Jheng, and S. J. Ruan. "Sparse matrix-vector multiplication on network-on-chip." Advances in Radio Science 8 (December 22, 2010): 289–94. http://dx.doi.org/10.5194/ars-8-289-2010.

Abstract:
In this paper, we present an idea for performing matrix-vector multiplication by using a Network-on-Chip (NoC) architecture. In traditional IC design, on-chip communications have been designed with dedicated point-to-point interconnections. Therefore, regular local data transfer is the major concept of many parallel implementations. However, when dealing with the parallel implementation of sparse matrix-vector multiplication (SMVM), which is the main step of all iterative algorithms for solving systems of linear equations, the required data transfers depend on the sparsity structure of the matrix…
12

Isupov, Konstantin. "Multiple-precision sparse matrix–vector multiplication on GPUs." Journal of Computational Science 61 (May 2022): 101609. http://dx.doi.org/10.1016/j.jocs.2022.101609.

13

Zou, Dan, Yong Dou, Song Guo, and Shice Ni. "High performance sparse matrix-vector multiplication on FPGA." IEICE Electronics Express 10, no. 17 (2013): 20130529. http://dx.doi.org/10.1587/elex.10.20130529.

14

Gao, Jiaquan, Yifei Xia, Renjie Yin, and Guixia He. "Adaptive diagonal sparse matrix-vector multiplication on GPU." Journal of Parallel and Distributed Computing 157 (November 2021): 287–302. http://dx.doi.org/10.1016/j.jpdc.2021.07.007.

15

Yzelman, A. N., and Rob H. Bisseling. "Two-dimensional cache-oblivious sparse matrix–vector multiplication." Parallel Computing 37, no. 12 (2011): 806–19. http://dx.doi.org/10.1016/j.parco.2011.08.004.

16

Yilmaz, Buse, Barış Aktemur, María J. Garzarán, Sam Kamin, and Furkan Kiraç. "Autotuning Runtime Specialization for Sparse Matrix-Vector Multiplication." ACM Transactions on Architecture and Code Optimization 13, no. 1 (2016): 1–26. http://dx.doi.org/10.1145/2851500.

17

Mukaddes, Abul Mukid Mohammad, Masao Ogino, and Ryuji Shioya. "Performance Evaluation of Domain Decomposition Method with Sparse Matrix Storage Schemes in Modern Supercomputer." International Journal of Computational Methods 11, supp01 (2014): 1344007. http://dx.doi.org/10.1142/s0219876213440076.

Abstract:
The use of proper data structures with corresponding algorithms is critical to achieving good performance in scientific computing. The need for sparse matrix-vector multiplication in each iteration of the iterative domain decomposition method has led to the implementation of a variety of sparse matrix storage formats. Many storage formats have been proposed to represent sparse matrices and integrated into the method. In this paper, the storage efficiency of those sparse matrix storage formats is evaluated and compared. The performance results of the sparse matrix-vector multiplication used in the domain decomposition…
18

He, Guixia, and Jiaquan Gao. "A Novel CSR-Based Sparse Matrix-Vector Multiplication on GPUs." Mathematical Problems in Engineering 2016 (2016): 1–12. http://dx.doi.org/10.1155/2016/8471283.

Abstract:
Sparse matrix-vector multiplication (SpMV) is an important operation in scientific computations. Compressed sparse row (CSR) is the most frequently used format to store sparse matrices. However, CSR-based SpMVs on graphics processing units (GPUs), for example, CSR-scalar and CSR-vector, usually have poor performance due to irregular memory access patterns. This motivates us to propose a perfect CSR-based SpMV on the GPU that is called PCSR. PCSR involves two kernels and accesses CSR arrays in a fully coalesced manner by introducing a middle array, which greatly alleviates the deficiencies of CSR…
19

Jao, Nicholas, Akshay Krishna Ramanathan, John Sampson, and Vijaykrishnan Narayanan. "Sparse Vector-Matrix Multiplication Acceleration in Diode-Selected Crossbars." IEEE Transactions on Very Large Scale Integration (VLSI) Systems 29, no. 12 (2021): 2186–96. http://dx.doi.org/10.1109/tvlsi.2021.3114186.

20

Kamin, Sam, María Jesús Garzarán, Barış Aktemur, Danqing Xu, Buse Yılmaz, and Zhongbo Chen. "Optimization by runtime specialization for sparse matrix-vector multiplication." ACM SIGPLAN Notices 50, no. 3 (2015): 93–102. http://dx.doi.org/10.1145/2775053.2658773.

21

Fernandez, D. M., D. Giannacopoulos, and W. J. Gross. "Efficient Multicore Sparse Matrix-Vector Multiplication for FE Electromagnetics." IEEE Transactions on Magnetics 45, no. 3 (2009): 1392–95. http://dx.doi.org/10.1109/tmag.2009.2012640.

22

Shantharam, Manu, Anirban Chatterjee, and Padma Raghavan. "Exploiting dense substructures for fast sparse matrix vector multiplication." International Journal of High Performance Computing Applications 25, no. 3 (2011): 328–41. http://dx.doi.org/10.1177/1094342011414748.

23

Gao, Jiaquan, Panpan Qi, and Guixia He. "Efficient CSR-Based Sparse Matrix-Vector Multiplication on GPU." Mathematical Problems in Engineering 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/4596943.

Abstract:
Sparse matrix-vector multiplication (SpMV) is an important operation in computational science and needs to be accelerated because it often represents the dominant cost in many widely used iterative methods and eigenvalue problems. We achieve this objective by proposing a novel SpMV algorithm based on the compressed sparse row (CSR) format on the GPU. Our method dynamically assigns different numbers of rows to each thread block and executes different optimization implementations on the basis of the number of rows it involves for each block. The process of accessing the CSR arrays is fully coalesced, and…
24

Maggioni, Marco, and Tanya Berger-Wolf. "Optimization techniques for sparse matrix–vector multiplication on GPUs." Journal of Parallel and Distributed Computing 93-94 (July 2016): 66–86. http://dx.doi.org/10.1016/j.jpdc.2016.03.011.

25

Geus, Roman, and Stefan Röllin. "Towards a fast parallel sparse symmetric matrix–vector multiplication." Parallel Computing 27, no. 7 (2001): 883–96. http://dx.doi.org/10.1016/s0167-8191(01)00073-4.

26

Zardoshti, Pantea, Farshad Khunjush, and Hamid Sarbazi-Azad. "Adaptive sparse matrix representation for efficient matrix–vector multiplication." Journal of Supercomputing 72, no. 9 (2015): 3366–86. http://dx.doi.org/10.1007/s11227-015-1571-0.

27

Zhang, Jilin, Enyi Liu, Jian Wan, Yongjian Ren, Miao Yue, and Jue Wang. "Implementing Sparse Matrix-Vector Multiplication with QCSR on GPU." Applied Mathematics & Information Sciences 7, no. 2 (2013): 473–82. http://dx.doi.org/10.12785/amis/070207.

28

Feng, Xiaowen, Hai Jin, Ran Zheng, Zhiyuan Shao, and Lei Zhu. "A segment-based sparse matrix-vector multiplication on CUDA." Concurrency and Computation: Practice and Experience 26, no. 1 (2012): 271–86. http://dx.doi.org/10.1002/cpe.2978.

29

Neves, Samuel, and Filipe Araujo. "Straight-line programs for fast sparse matrix-vector multiplication." Concurrency and Computation: Practice and Experience 27, no. 13 (2014): 3245–61. http://dx.doi.org/10.1002/cpe.3211.

30

Nastea, Sorin G., Ophir Frieder, and Tarek El-Ghazawi. "Load-Balanced Sparse Matrix–Vector Multiplication on Parallel Computers." Journal of Parallel and Distributed Computing 46, no. 2 (1997): 180–93. http://dx.doi.org/10.1006/jpdc.1997.1361.

31

Moldovyan, N. A., and A. S. Petrenko. "ALGEBRAIC SIGNATURE ALGORITHMS WITH TWO HIDDEN GROUPS." Voprosy kiberbezopasnosti 3, no. 67 (2025): 8–20. https://doi.org/10.21681/2311-3456-2025-3-8-20.

Abstract:
The purpose of this work is to improve the performance of post-quantum algebraic signature algorithms based on the computational difficulty of solving large systems of power equations. Research methods: the use of two hidden commutative groups, the elements of one of which do not commute with those of the other, to ensure sufficient completeness of signature randomization in algebraic signature schemes whose security is based on the computational difficulty of solving large systems of power equations over the ground finite field GF(p). Calculation of the fitting signature in the form of a vector S depends…
32

Yzelman, A. N., and Rob H. Bisseling. "Cache-Oblivious Sparse Matrix–Vector Multiplication by Using Sparse Matrix Partitioning Methods." SIAM Journal on Scientific Computing 31, no. 4 (2009): 3128–54. http://dx.doi.org/10.1137/080733243.

33

Liu, Yongchao, and Bertil Schmidt. "LightSpMV: Faster CUDA-Compatible Sparse Matrix-Vector Multiplication Using Compressed Sparse Rows." Journal of Signal Processing Systems 90, no. 1 (2017): 69–86. http://dx.doi.org/10.1007/s11265-016-1216-4.

34

Giannoula, Christina, Ivan Fernandez, Juan Gómez-Luna, Nectarios Koziris, Georgios Goumas, and Onur Mutlu. "Towards Efficient Sparse Matrix Vector Multiplication on Real Processing-In-Memory Architectures." ACM SIGMETRICS Performance Evaluation Review 50, no. 1 (2022): 33–34. http://dx.doi.org/10.1145/3547353.3522661.

Abstract:
Several manufacturers have already started to commercialize near-bank Processing-In-Memory (PIM) architectures, after decades of research efforts. Near-bank PIM architectures place simple cores close to the DRAM banks. Recent research demonstrates that they can yield significant performance and energy improvements in parallel applications by alleviating data access costs. Real PIM systems can provide high levels of parallelism, large aggregate memory bandwidth, and low memory access latency, thereby being a good fit to accelerate the Sparse Matrix Vector Multiplication (SpMV) kernel. SpMV has been…
35

Karsavuran, M. Ozan, Kadir Akbudak, and Cevdet Aykanat. "Locality-Aware Parallel Sparse Matrix-Vector and Matrix-Transpose-Vector Multiplication on Many-Core Processors." IEEE Transactions on Parallel and Distributed Systems 27, no. 6 (2016): 1713–26. http://dx.doi.org/10.1109/tpds.2015.2453970.

36

Dubois, David, Andrew Dubois, Thomas Boorman, Carolyn Connor, and Steve Poole. "Sparse Matrix-Vector Multiplication on a Reconfigurable Supercomputer with Application." ACM Transactions on Reconfigurable Technology and Systems 3, no. 1 (2010): 1–31. http://dx.doi.org/10.1145/1661438.1661440.

37

Catalyurek, U. V., and C. Aykanat. "Hypergraph-partitioning-based decomposition for parallel sparse-matrix vector multiplication." IEEE Transactions on Parallel and Distributed Systems 10, no. 7 (1999): 673–93. http://dx.doi.org/10.1109/71.780863.

38

Toledo, S. "Improving the memory-system performance of sparse-matrix vector multiplication." IBM Journal of Research and Development 41, no. 6 (1997): 711–25. http://dx.doi.org/10.1147/rd.416.0711.

39

Williams, Samuel, Leonid Oliker, Richard Vuduc, John Shalf, Katherine Yelick, and James Demmel. "Optimization of sparse matrix–vector multiplication on emerging multicore platforms." Parallel Computing 35, no. 3 (2009): 178–94. http://dx.doi.org/10.1016/j.parco.2008.12.006.

40

Peters, Alexander. "Sparse matrix vector multiplication techniques on the IBM 3090 VF." Parallel Computing 17, no. 12 (1991): 1409–24. http://dx.doi.org/10.1016/s0167-8191(05)80007-9.

41

Li, ShiGang, ChangJun Hu, JunChao Zhang, and YunQuan Zhang. "Automatic tuning of sparse matrix-vector multiplication on multicore clusters." Science China Information Sciences 58, no. 9 (2015): 1–14. http://dx.doi.org/10.1007/s11432-014-5254-x.

42

Dehn, T., M. Eiermann, K. Giebermann, and V. Sperling. "Structured sparse matrix-vector multiplication on massively parallel SIMD architectures." Parallel Computing 21, no. 12 (1995): 1867–94. http://dx.doi.org/10.1016/0167-8191(95)00055-0.

43

Zeiser, Andreas. "Fast Matrix-Vector Multiplication in the Sparse-Grid Galerkin Method." Journal of Scientific Computing 47, no. 3 (2010): 328–46. http://dx.doi.org/10.1007/s10915-010-9438-2.

44

Yang, Bing, Shuo Gu, Tong-Xiang Gu, Cong Zheng, and Xing-Ping Liu. "Parallel Multicore CSB Format and Its Sparse Matrix Vector Multiplication." Advances in Linear Algebra & Matrix Theory 04, no. 01 (2014): 1–8. http://dx.doi.org/10.4236/alamt.2014.41001.

45

Ahmad, Khalid, Hari Sundar, and Mary Hall. "Data-driven Mixed Precision Sparse Matrix Vector Multiplication for GPUs." ACM Transactions on Architecture and Code Optimization 16, no. 4 (2020): 1–24. http://dx.doi.org/10.1145/3371275.

46

Tao, Yuan, and Huang Zhi-Bin. "Shuffle Reduction Based Sparse Matrix-Vector Multiplication on Kepler GPU." International Journal of Grid and Distributed Computing 9, no. 10 (2016): 99–106. http://dx.doi.org/10.14257/ijgdc.2016.9.10.09.

47

Dehnavi, Maryam Mehri, David M. Fernandez, and Dennis Giannacopoulos. "Finite-Element Sparse Matrix Vector Multiplication on Graphic Processing Units." IEEE Transactions on Magnetics 46, no. 8 (2010): 2982–85. http://dx.doi.org/10.1109/tmag.2010.2043511.

48

Liang, Yun, Wai Teng Tang, Ruizhe Zhao, Mian Lu, Huynh Phung Huynh, and Rick Siow Mong Goh. "Scale-Free Sparse Matrix-Vector Multiplication on Many-Core Architectures." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 36, no. 12 (2017): 2106–19. http://dx.doi.org/10.1109/tcad.2017.2681072.

49

Aktemur, Barış. "A sparse matrix-vector multiplication method with low preprocessing cost." Concurrency and Computation: Practice and Experience 30, no. 21 (2018): e4701. http://dx.doi.org/10.1002/cpe.4701.

50

Chen, Xinhai, Peizhen Xie, Lihua Chi, Jie Liu, and Chunye Gong. "An efficient SIMD compression format for sparse matrix-vector multiplication." Concurrency and Computation: Practice and Experience 30, no. 23 (2018): e4800. http://dx.doi.org/10.1002/cpe.4800.
