Academic literature on the topic 'Sparse computation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sparse computation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of an academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Sparse computation"

1. Ghorbani, Mahdi, Mathieu Huot, Shideh Hashemian, and Amir Shaikhha. "Compiling Structured Tensor Algebra." Proceedings of the ACM on Programming Languages 7, OOPSLA2 (2023): 204–33. http://dx.doi.org/10.1145/3622804.

Abstract: Tensor algebra is essential for data-intensive workloads in various computational domains. Computational scientists face a trade-off between the degree of specialization provided by dense tensor algebra and the algorithmic efficiency that leverages the structure provided by sparse tensors. This paper presents StructTensor, a framework that symbolically computes structure at compilation time. This is enabled by the Structured Tensor Unified Representation (STUR), an intermediate language that can capture tensor computations as well as their sparsity and redundancy structures. Through a mathematical vi…
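The trade-off this abstract describes, between generic dense kernels and code specialized to a known structure, can be illustrated with a deliberately simple sketch (my own illustration, not StructTensor's API): once a matrix is known ahead of time to be diagonal, its matrix-vector product collapses from O(n²) to O(n).

```python
import numpy as np

def dense_matvec(A, x):
    """Baseline: generic O(n^2) dense matrix-vector product."""
    return A @ x

def diagonal_matvec(diag, x):
    """Structure-specialized version: when A is known in advance to be
    diagonal, only its n diagonal entries are touched, so the product
    costs O(n) and the zeros are never stored or read."""
    return diag * x

n = 4
diag = np.array([1.0, 2.0, 3.0, 4.0])
A = np.diag(diag)   # the same operator, materialized densely
x = np.ones(n)

# Both paths agree on the result.
assert np.allclose(dense_matvec(A, x), diagonal_matvec(diag, x))
```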
2. Voisin, Frédérique, and Guy-René Perrin. "Sparse Computation with PEI." International Journal of Foundations of Computer Science 10, no. 4 (1999): 425–42. http://dx.doi.org/10.1142/s0129054199000307.

Abstract: The PEI formalism was designed for reasoning about and developing parallel programs in the context of data parallelism. In this paper, we focus on the use of PEI to transform a program involving dense matrices into a new program involving sparse matrices, using the example of the matrix-vector product.
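The dense-to-sparse program transformation this abstract mentions can be illustrated independently of PEI. A minimal Python sketch (my own illustration, not the paper's notation): the dense matrix is rewritten as a list of (row, column, value) triples, and the matrix-vector product then visits only the stored non-zeros.

```python
def dense_to_coo(A):
    """Extract (row, col, value) triples for the non-zeros of a dense matrix."""
    return [(i, j, v) for i, row in enumerate(A)
                      for j, v in enumerate(row) if v != 0]

def coo_matvec(triples, x, n_rows):
    """Matrix-vector product that only visits stored non-zeros."""
    y = [0.0] * n_rows
    for i, j, v in triples:
        y[i] += v * x[j]
    return y

A = [[4.0, 0.0, 0.0],
     [0.0, 0.0, 2.0],
     [1.0, 0.0, 3.0]]
x = [1.0, 2.0, 3.0]
triples = dense_to_coo(A)      # only 4 of the 9 entries are stored
y = coo_matvec(triples, x, 3)  # [4.0, 6.0, 10.0]
```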
3. D’Ambra, Pasqua, Fabio Durastante, and Salvatore Filippone. "Parallel Sparse Computation Toolkit." Software Impacts 15 (March 2023): 100463. http://dx.doi.org/10.1016/j.simpa.2022.100463.

4. Harris, David G., Ehab Morsy, Gopal Pandurangan, Peter Robinson, and Aravind Srinivasan. "Efficient Computation of Sparse Structures." Random Structures & Algorithms 49, no. 2 (2016): 322–44. http://dx.doi.org/10.1002/rsa.20653.
5. Zhang, Ruohao, Yijie Lu, and Zhengfei Song. "YOLO Sparse Training and Model Pruning for Street View House Numbers Recognition." Journal of Physics: Conference Series 2646, no. 1 (2023): 012025. http://dx.doi.org/10.1088/1742-6596/2646/1/012025.

Abstract: This paper proposes a YOLO (You Only Look Once) sparse training and model pruning technique for recognizing house numbers in street view images. YOLO is a popular object detection algorithm that has achieved state-of-the-art performance in various computer vision tasks. However, its large model size and computational complexity limit its deployment on resource-constrained devices such as smartphones and embedded systems. To address this issue, we use a sparse training technique that trains YOLO with L1 norm regularization to encourage the network to learn sparse representations. This…
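L1-regularized sparse training of the kind this abstract mentions can be sketched in a few lines. This is a generic proximal-gradient illustration, not the paper's YOLO pipeline: after the ordinary gradient step, soft-thresholding shrinks every weight and sets the small ones exactly to zero.

```python
import numpy as np

def l1_prox_step(w, grad, lr, lam):
    """One proximal gradient step with an L1 penalty: a plain gradient
    descent step followed by soft-thresholding, which drives weights
    with magnitude below lr*lam exactly to zero."""
    w = w - lr * grad
    return np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)

w = np.array([0.5, -0.02, 0.3, 0.01])
grad = np.zeros_like(w)   # zero loss gradient: penalty effect only
w_new = l1_prox_step(w, grad, lr=1.0, lam=0.05)
# small weights (|w| <= lr*lam) are zeroed; large ones shrink slightly
```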
6

WANG, Miao, Shengbing ZHANG, and Meng ZHANG. "Exploring non-zero position constraints: algorithm-hardware co-designed DNN sparse training method." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 43, no. 1 (2025): 119–27. https://doi.org/10.1051/jnwpu/20254310119.

Full text
Abstract:
On-device learning enables edge devices to continuously adapt to new data for AI applications. Leveraging sparsity to eliminate redundant computation and storage usage during training is a key approach to improving the learning efficiency of edge deep neural network(DNN). However, due to the lack of assumptions about non-zero positions, expensive runtime identification and allocation of zero positions and load balancing of irregular computations are often required, making it difficult for existing sparse training works to approach the ideal speedup. This paper points out that if the non-zero p
APA, Harvard, Vancouver, ISO, and other styles
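One common way to constrain non-zero positions, shown here purely as an illustration of the general idea (the paper's own scheme may differ), is an N:M pattern such as 2-of-4: every group of four weights keeps only its two largest-magnitude entries, so the hardware knows the non-zero layout in advance and load balancing is trivial compared to unstructured sparsity.

```python
import numpy as np

def prune_2_of_4(w):
    """In every group of 4 weights, keep the 2 with largest magnitude
    and zero the rest, producing a fixed 2-of-4 non-zero pattern."""
    w = w.reshape(-1, 4).copy()
    # indices of the 2 smallest-magnitude entries per group
    drop = np.argsort(np.abs(w), axis=1)[:, :2]
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(-1)

w = np.array([0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.01, 0.6])
pruned = prune_2_of_4(w)
# each group of 4 now has exactly 2 non-zeros
```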
7. Chen, Xi, Chang Gao, Zuowen Wang, et al. "Exploiting Symmetric Temporally Sparse BPTT for Efficient RNN Training." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (2024): 11399–406. http://dx.doi.org/10.1609/aaai.v38i10.29020.

Abstract: Recurrent Neural Networks (RNNs) are useful in temporal sequence tasks. However, training RNNs involves dense matrix multiplications, which require hardware that can support a large number of arithmetic operations and memory accesses. Implementing online training of RNNs on the edge calls for optimized algorithms for efficient deployment on hardware. Inspired by the spiking neuron model, the Delta RNN exploits temporal sparsity during inference by skipping the update of hidden states for inactivated neurons whose change of activation across two timesteps is below a defined thresh…
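The Delta RNN's temporal-sparsity rule can be sketched in a few lines of numpy (an illustration of the thresholding idea only, not the paper's training algorithm): only neurons whose activation changed by at least a threshold propagate their new value; the rest keep their previous state, and downstream matrix products can skip the corresponding columns.

```python
import numpy as np

def delta_update(h_prev, h_new, threshold):
    """Delta-style update: neurons whose activation changed by less
    than the threshold keep their previous value, so only the 'active'
    neurons need to be propagated further."""
    delta = h_new - h_prev
    active = np.abs(delta) >= threshold
    return np.where(active, h_new, h_prev), active

h_prev = np.array([0.50, 0.10, 0.80])
h_new  = np.array([0.51, 0.40, 0.79])
h, active = delta_update(h_prev, h_new, threshold=0.05)
# only the middle neuron changed enough to propagate
```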
8. Huckle, Thomas K. "Efficient Computation of Sparse Approximate Inverses." Numerical Linear Algebra with Applications 5, no. 1 (1998): 57–71. http://dx.doi.org/10.1002/(sici)1099-1506(199801/02)5:1<57::aid-nla129>3.0.co;2-c.
9. Ahmad, Muhammad, Sardar Usman, Ameer Hamza, Muhammad Muzamil, and Ildar Batyrshin. "Elegante+: A Machine Learning-Based Optimization Framework for Sparse Matrix–Vector Computations on the CPU Architecture." Information 16, no. 7 (2025): 553. https://doi.org/10.3390/info16070553.

Abstract: Sparse matrix–vector multiplication (SpMV) plays a significant role in the computational costs of many scientific applications, such as 2D/3D robotics, power network problems, and computer vision. Numerous implementations using different sparse matrix formats have been introduced to optimize this kernel on CPUs and GPUs. However, due to the sparsity patterns of matrices and the diverse configurations of hardware, accurately modeling the performance of SpMV remains a complex challenge. SpMV computation is often time-consuming because of its sparse matrix structure. To address this, we…
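For readers unfamiliar with the kernel, here is the SpMV loop in the common CSR (compressed sparse row) format, one of the formats such work typically compares; this is a generic textbook sketch, not code from the paper.

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """SpMV in CSR format: row_ptr[i]:row_ptr[i+1] delimits row i's
    non-zeros, so the work is proportional to nnz rather than n*n."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# CSR encoding of the matrix
# [[4, 0, 0],
#  [0, 0, 2],
#  [1, 0, 3]]
values  = [4.0, 2.0, 1.0, 3.0]
col_idx = [0, 2, 0, 2]
row_ptr = [0, 1, 2, 4]
y = csr_matvec(values, col_idx, row_ptr, [1.0, 2.0, 3.0])  # [4.0, 6.0, 10.0]
```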
10. Gotsman, Craig, and Sivan Toledo. "On the Computation of Null Spaces of Sparse Rectangular Matrices." SIAM Journal on Matrix Analysis and Applications 30, no. 2 (2008): 445–63. http://dx.doi.org/10.1137/050638369.

Dissertations / Theses on the topic "Sparse computation"

1. Goyal, Mini. "Graph Coloring in Sparse Derivative Matrix Computation." Thesis, University of Lethbridge, Faculty of Arts and Science, 2005. http://hdl.handle.net/10133/260.

Abstract: There has been extensive research activity in the last couple of years on efficiently determining large sparse Jacobian matrices. It is now well known that the estimation of Jacobian matrices can be posed as a graph coloring problem. Unidirectional coloring by Coleman and Moré [9] and bidirectional coloring, independently proposed by Hossain and Steihaug [23] and by Coleman and Verma [12], are techniques that employ graph-theoretic ideas. In this thesis we present heuristic and exact bidirectional coloring techniques. For bidirectional heuristic techniques we have implemented variants of largest fi…
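The column-coloring idea behind sparse Jacobian estimation can be sketched with a greedy heuristic (a simplified, unidirectional Curtis-Powell-Reid-style illustration, not the thesis's bidirectional algorithms): columns that share a non-zero row must receive different colors, and each color class can then be recovered from a single finite-difference evaluation.

```python
def greedy_column_coloring(sparsity):
    """Greedy coloring for Jacobian column compression: assign each
    column the smallest color not used by an earlier column that
    shares a non-zero row with it."""
    n_cols = len(sparsity[0])
    # for each column, the set of rows where it has a non-zero
    rows_of = [set(i for i, row in enumerate(sparsity) if row[j])
               for j in range(n_cols)]
    colors = [-1] * n_cols
    for j in range(n_cols):
        forbidden = {colors[k] for k in range(j) if rows_of[j] & rows_of[k]}
        c = 0
        while c in forbidden:
            c += 1
        colors[j] = c
    return colors

# Tridiagonal sparsity pattern: three colors suffice.
S = [[1, 1, 0, 0],
     [1, 1, 1, 0],
     [0, 1, 1, 1],
     [0, 0, 1, 1]]
colors = greedy_column_coloring(S)
```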
2. Hawkins, Stuart C. "The Computation of Eigenvalues of Large Sparse Matrices." Thesis, University of Bath, 1999. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299659.

3. Lu, Xuebin. "Fast Computation of Sparse Data Cubes in Its Applications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0009/MQ61455.pdf.

4. Pasha, Mirjeta. "Krylov Subspace Type Methods for the Computation of Non-Negative or Sparse Solutions of Ill-Posed Problems." Thesis, Kent State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=kent1586459362313778.

5. Hong, Chao (洪潮). "Parallel Processing in Power Systems Computation on a Distributed Memory Message Passing Multicomputer." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B3124032X.

6. Hong, Chao. "Parallel Processing in Power Systems Computation on a Distributed Memory Message Passing Multicomputer." Hong Kong: University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22050383.
7. Haessig, Germain. "Neuromorphic Computation Using Event-Based Sensors: From Algorithms to Hardware Implementations." Thesis, Sorbonne Université, 2018. http://www.theses.fr/2018SORUS422/document.

Abstract (translated from French): This thesis concerns the implementation of event-based algorithms, first using data from an artificial retina that mimics the working of the human retina, and then extending to event-based signals of all kinds. These event-based signals arise from a paradigm shift in signal representation, offering a wide dynamic operating range, high temporal resolution, and native signal compression. In particular, we study the design of a device for building monocular depth maps…
8. Nonnenmacher, Marcel. "Learning about Neural Computation from Sparse Recordings." Doctoral thesis (supervisor: Jakob H. Macke; reviewers: Jakob H. Macke and Julijana Gjorgjieva). München: Universitätsbibliothek der TU München, 2020. http://d-nb.info/1229086536/34.
9. Clemens, Jan. "Neural Computation in Small Sensory Systems: Lessons on Sparse and Adaptive Coding." Doctoral thesis (reviewers: Bernhard Ronacher, Jan Benda, and Martin Nawrot). Berlin: Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I, 2012. http://d-nb.info/1025112423/34.
10. Cunial, Fabio. "Analysis of the Subsequence Composition of Biosequences." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44716.

Abstract: Measuring the amount of information and of shared information in biological strings, as well as relating information to structure, function and evolution, are fundamental computational problems in the post-genomic era. Classical analyses of the information content of biosequences are grounded in Shannon's statistical telecommunication theory, while the recent focus is on suitable specializations of the notions introduced by Kolmogorov, Chaitin and Solomonoff, based on data compression and compositional redundancy. Symmetrically, classical estimates of mutual information based on string editing…

Books on the topic "Sparse computation"

1. George, Alan, John R. Gilbert, and Joseph W. H. Liu, eds. Graph Theory and Sparse Matrix Computation. Springer New York, 1993. http://dx.doi.org/10.1007/978-1-4613-8369-7.

2. George, Alan, John R. Gilbert, and Joseph W. H. Liu, eds. Graph Theory and Sparse Matrix Computation. Springer-Verlag, 1993.

3. Chang, Lijun, and Lu Qin. Cohesive Subgraph Computation over Large Sparse Graphs. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03599-0.

4. Coleman, Thomas F. The Efficient Computation of Sparse Jacobian Matrices Using Automatic Differentiation. Cornell Theory Center, Cornell University, 1995.

5. Zlatev, Zahari. Computational Methods for General Sparse Matrices. Springer Netherlands, 1991. http://dx.doi.org/10.1007/978-94-017-1116-6.

6. Zlatev, Zahari. Computational Methods for General Sparse Matrices. Kluwer Academic, 1991.

7. Zlatev, Zahari. Computational Methods for General Sparse Matrices. Springer Netherlands, 1991.

8. Research Institute for Advanced Computer Science (U.S.), ed. SPARSKIT: A Basic Toolkit for Sparse Matrix Computations. Research Institute for Advanced Computer Science, NASA Ames Research Center, 1990.

9. Soman, S. A., S. A. Khaparde, and Shubha Pandit. Computational Methods for Large Sparse Power Systems Analysis. Springer US, 2002. http://dx.doi.org/10.1007/978-1-4615-0823-6.

10. Jenkin, Michael. Visual Stereoscopic Computation. University of Toronto, Dept. of Computer Science, 1988.

Book chapters on the topic "Sparse computation"

1. Zippel, Richard. "Sparse Hensel Algorithms." In Effective Polynomial Computation. Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-3188-3_17.

2. Ueberhuber, Christoph W. "Large, Sparse Linear Systems." In Numerical Computation 2. Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/978-3-642-59109-9_7.

3. Giesbrecht, Mark, Daniel S. Roche, and Hrushikesh Tilak. "Computing Sparse Multiples of Polynomials." In Algorithms and Computation. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17517-6_25.

4. Aronov, Boris, Mark de Berg, Otfried Cheong, Joachim Gudmundsson, Herman Haverkort, and Antoine Vigneron. "Sparse Geometric Graphs with Small Dilation." In Algorithms and Computation. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11602613_7.

5. Muthu Lekshmi, V. S., K. Harish Kumar, and N. Venkateswaran. "Efficient Computation of Sparse Spectra Using Sparse Fourier Transform." In Emerging Trends in Computing and Expert Technology. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-32150-5_85.

6. Sabbagh, Harold A., R. Kim Murphy, Elias H. Sabbagh, Liming Zhou, and Russell Wincheski. "High-Dimension Model Representation via Sparse Grid Techniques." In Scientific Computation. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67956-9_9.

7. Soman, S. A., S. A. Khaparde, and Shubha Pandit. "Data Structure for Sparse Matrix Computation." In Computational Methods for Large Sparse Power Systems Analysis. Springer US, 2002. http://dx.doi.org/10.1007/978-1-4615-0823-6_3.

8. Delaplace, Franck, and Didier Remy. "paradeis: An Object Library for Parallel Sparse Array Computation." In Parallel Computation. Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-49164-3_15.

9. Kaski, Petteri, Mikko Koivisto, and Jesper Nederlof. "Homomorphic Hashing for Sparse Coefficient Extraction." In Parameterized and Exact Computation. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33293-7_15.

10. Komusiewicz, Christian, and Manuel Sorge. "Finding Dense Subgraphs of Sparse Graphs." In Parameterized and Exact Computation. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33293-7_23.

Conference papers on the topic "Sparse computation"

1. Liu, Fangxin, Shiyuan Huang, Ning Yang, Zongwu Wang, Haomin Li, and Li Jiang. "CROSS: Compiler-Driven Optimization of Sparse DNNs Using Sparse/Dense Computation Kernels." In 2025 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2025. https://doi.org/10.1109/hpca61900.2025.00076.

2. Pei, Soo-Chang, and Kuo-Wei Chang. "Fast Sparse DFT Computation for Arbitrary Length by Circular Convolution." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10890724.

3. Ye, Zixiang, Houjun Sun, and Yi Zhang. "Fast Sidelobe Computation for Arbitrary Two-Dimensional Array for Array Sparse." In 2024 International Conference on Microwave and Millimeter Wave Technology (ICMMT). IEEE, 2024. http://dx.doi.org/10.1109/icmmt61774.2024.10672363.

4. Ong, Frank, Sameer Pawar, and Kannan Ramchandran. "Fast Sparse 2-D DFT Computation Using Sparse-Graph Alias Codes." In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016. http://dx.doi.org/10.1109/icassp.2016.7472440.

5. Chang, Lijun, and Lu Qin. "Cohesive Subgraph Computation over Large Sparse Graphs." In 2019 IEEE 35th International Conference on Data Engineering (ICDE). IEEE, 2019. http://dx.doi.org/10.1109/icde.2019.00241.

6. Liang, Faming. "Consistent Sparse Deep Learning: Theory and Computation." In 3rd International Conference on Statistics: Theory and Applications (ICSTA'21). Avestia Publishing, 2021. http://dx.doi.org/10.11159/icsta21.004.

7. Hochbaum, Dorit S., and Philipp Baumann. "Sparse Computation for Large-Scale Data Mining." In 2014 IEEE International Conference on Big Data (Big Data). IEEE, 2014. http://dx.doi.org/10.1109/bigdata.2014.7004252.

8. Esser, Steve K., Anthony Ndirango, and Dharmendra S. Modha. "Binding Sparse Spatiotemporal Patterns in Spiking Computation." In 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, 2010. http://dx.doi.org/10.1109/ijcnn.2010.5596925.

9. Chamberlain, Bradford L., and Lawrence Snyder. "Array Language Support for Parallel Sparse Computation." In Proceedings of the 15th International Conference on Supercomputing. ACM Press, 2001. http://dx.doi.org/10.1145/377792.377820.

10. Asgari, Bahar, Ramyad Hadidi, Tushar Krishna, Hyesoon Kim, and Sudhakar Yalamanchili. "ALRESCHA: A Lightweight Reconfigurable Sparse-Computation Accelerator." In 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2020. http://dx.doi.org/10.1109/hpca47549.2020.00029.

Reports on the topic "Sparse computation"

1. Kevorkian, A. K. Decomposition of Large Sparse Symmetric Systems for Parallel Computation. Part 1: Theoretical Foundations. Defense Technical Information Center, 1993. http://dx.doi.org/10.21236/ada267144.

2. Kevorkian, A. K. Decomposition of Large Sparse Symmetric Systems for Parallel Computation. Part 2: Parallelization Tool Roadmap. Defense Technical Information Center, 1993. http://dx.doi.org/10.21236/ada267072.

3. Pothen, Alex, and Jesse L. Barlow. Large Sparse Stable Matrix Computations. Defense Technical Information Center, 1990. http://dx.doi.org/10.21236/ada229837.

4. Liu, J., E. Ng, and B. Peyton. On Finding Supernodes for Sparse Matrix Computations. Office of Scientific and Technical Information (OSTI), 1990. http://dx.doi.org/10.2172/6756314.

5. Bader, Brett William, and Tamara Gibson Kolda. Efficient MATLAB Computations with Sparse and Factored Tensors. Office of Scientific and Technical Information (OSTI), 2006. http://dx.doi.org/10.2172/897641.

6. Sameh, Ahmed H., Alicia Klinvex, and Yao Zhu. A Computing Platform for Parallel Sparse Matrix Computations. Defense Technical Information Center, 2016. http://dx.doi.org/10.21236/ad1007434.

7. Anderson, Bradley E. Readiness Spares Package Non-Optimized (NOP) Item Computation Analysis. Defense Technical Information Center, 1999. http://dx.doi.org/10.21236/adb243484.

8. Tropp, Joel A., and Stephen J. Wright. Computational Methods for Sparse Solution of Linear Inverse Problems. Defense Technical Information Center, 2009. http://dx.doi.org/10.21236/ada633835.

9. Pothen, A. Parallel Sparse Matrix Computations: Wavefront Minimization of Sparse Matrices (final report for the period ending June 14, 1998). Office of Scientific and Technical Information (OSTI), 1999. http://dx.doi.org/10.2172/329566.
10. Lutz, Carsten, and Frank Wolter. Modal Logics of Topological Relations. Technische Universität Dresden, 2004. http://dx.doi.org/10.25368/2022.142.

Abstract: The eight topological RCC8 (or Egenhofer-Franzosa) relations between spatial regions play a fundamental role in spatial reasoning, spatial and constraint databases, and geographical information systems. In analogy with Halpern and Shoham's modal logic of time intervals based on the Allen relations, we introduce a family of modal logics equipped with eight modal operators that are interpreted by the RCC8 relations. The semantics is based on region spaces induced by standard topological spaces, in particular the real plane. We investigate the expressive power and computational complexity of the…