
Journal articles on the topic 'Sparse computation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Sparse computation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Ghorbani, Mahdi, Mathieu Huot, Shideh Hashemian, and Amir Shaikhha. "Compiling Structured Tensor Algebra." Proceedings of the ACM on Programming Languages 7, OOPSLA2 (2023): 204–33. http://dx.doi.org/10.1145/3622804.

Abstract:
Tensor algebra is essential for data-intensive workloads in various computational domains. Computational scientists face a trade-off between the specialization degree provided by dense tensor algebra and the algorithmic efficiency that leverages the structure provided by sparse tensors. This paper presents StructTensor, a framework that symbolically computes structure at compilation time. This is enabled by Structured Tensor Unified Representation (STUR), an intermediate language that can capture tensor computations as well as their sparsity and redundancy structures. …
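
The structure-aware specialization this abstract describes can be made concrete with a hand-written runtime analogue; StructTensor performs the specialization symbolically at compile time, so the diagonal matrix below and the NumPy setting are purely an assumed illustration:

```python
import numpy as np

n = 1000
d = np.random.rand(n)    # diagonal entries of a structured (diagonal) matrix A
B = np.random.rand(n, n)

# Dense tensor algebra ignores the structure: materialize A, full matmul, O(n^3).
C_dense = np.diag(d) @ B

# Structure-aware code specializes A @ B for diagonal A to row scaling, O(n^2),
# never touching the zero entries that the dense kernel multiplies and adds.
C_structured = d[:, None] * B

assert np.allclose(C_dense, C_structured)
```
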
2

Voisin, Frédérique, and Guy-René Perrin. "Sparse Computation with PEI." International Journal of Foundations of Computer Science 10, no. 4 (1999): 425–42. http://dx.doi.org/10.1142/s0129054199000307.

Abstract:
The PEI formalism has been designed for reasoning about and developing parallel programs in the context of data parallelism. In this paper, we focus on the use of PEI to transform a program involving dense matrices into a new program involving sparse matrices, using the example of the matrix-vector product.
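
The dense-to-sparse transformation of the matrix-vector product that this paper carries out inside PEI can be illustrated outside the formalism. A minimal Python sketch, assuming a CSR layout (the helper names are illustrative, and this is not PEI itself):

```python
import numpy as np

def dense_to_csr(A):
    """Convert a dense matrix to CSR arrays (values, column indices, row pointers)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in A:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return np.array(values), np.array(col_idx), np.array(row_ptr)

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x computed over the stored non-zeros only."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

A = np.array([[4.0, 0.0, 0.0], [0.0, 0.0, 2.0], [1.0, 0.0, 3.0]])
x = np.array([1.0, 2.0, 3.0])
vals, cols, ptrs = dense_to_csr(A)
assert np.allclose(csr_matvec(vals, cols, ptrs, x), A @ x)
```
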
3

D’Ambra, Pasqua, Fabio Durastante, and Salvatore Filippone. "Parallel Sparse Computation Toolkit." Software Impacts 15 (March 2023): 100463. http://dx.doi.org/10.1016/j.simpa.2022.100463.

4

Harris, David G., Ehab Morsy, Gopal Pandurangan, Peter Robinson, and Aravind Srinivasan. "Efficient computation of sparse structures." Random Structures & Algorithms 49, no. 2 (2016): 322–44. http://dx.doi.org/10.1002/rsa.20653.

5

Zhang, Ruohao, Yijie Lu, and Zhengfei Song. "YOLO sparse training and model pruning for street view house numbers recognition." Journal of Physics: Conference Series 2646, no. 1 (2023): 012025. http://dx.doi.org/10.1088/1742-6596/2646/1/012025.

Abstract:
This paper proposes a YOLO (You Only Look Once) sparse training and model pruning technique for recognizing house numbers in street view images. YOLO is a popular object detection algorithm that has achieved state-of-the-art performance in various computer vision tasks. However, its large model size and computational complexity limit its deployment on resource-constrained devices such as smartphones and embedded systems. To address this issue, we use a sparse training technique that trains YOLO with L1 norm regularization to encourage the network to learn sparse representations. …
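
A minimal sketch of the L1-regularized sparse-training step the abstract describes, in PyTorch. The two-layer stand-in model, the loss, and the penalty weight l1_lambda are placeholder assumptions; the paper applies the idea to YOLO and follows it with pruning, which is not shown:

```python
import torch
import torch.nn as nn

# Stand-in model; the paper uses YOLO, assumed here to be any nn.Module.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 10, 3, padding=1)
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()
l1_lambda = 1e-4  # hypothetical penalty strength

x = torch.randn(2, 3, 32, 32)
target = torch.randn(2, 10, 32, 32)

for step in range(10):
    optimizer.zero_grad()
    task_loss = criterion(model(x), target)
    # L1 norm regularization: lambda * sum(|w|) has a constant-magnitude
    # subgradient, so it drives small weights toward exactly zero.
    l1_penalty = sum(p.abs().sum() for p in model.parameters())
    (task_loss + l1_lambda * l1_penalty).backward()
    optimizer.step()
```
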
6

Wang, Miao, Shengbing Zhang, and Meng Zhang. "Exploring Non-Zero Position Constraints: Algorithm-Hardware Co-Designed DNN Sparse Training Method." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 43, no. 1 (2025): 119–27. https://doi.org/10.1051/jnwpu/20254310119.

Abstract:
On-device learning enables edge devices to continuously adapt to new data for AI applications. Leveraging sparsity to eliminate redundant computation and storage usage during training is a key approach to improving the learning efficiency of edge deep neural networks (DNNs). However, due to the lack of assumptions about non-zero positions, expensive runtime identification and allocation of zero positions and load balancing of irregular computations are often required, making it difficult for existing sparse training works to approach the ideal speedup. …
7

Chen, Xi, Chang Gao, Zuowen Wang, et al. "Exploiting Symmetric Temporally Sparse BPTT for Efficient RNN Training." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (2024): 11399–406. http://dx.doi.org/10.1609/aaai.v38i10.29020.

Abstract:
Recurrent Neural Networks (RNNs) are useful in temporal sequence tasks. However, training RNNs involves dense matrix multiplications which require hardware that can support a large number of arithmetic operations and memory accesses. Implementing online training of RNNs on the edge calls for optimized algorithms for an efficient deployment on hardware. Inspired by the spiking neuron model, the Delta RNN exploits temporal sparsity during inference by skipping over the update of hidden states from those inactivated neurons whose change of activation across two timesteps is below a defined threshold. …
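
The Delta RNN mechanism summarized above, transmitting a neuron's state only when it has changed by at least a threshold since the last transmitted value, can be sketched in NumPy. The function, the threshold value, and the random-walk activations are illustrative assumptions, not the paper's training algorithm:

```python
import numpy as np

def delta_updates(activations, theta):
    """Encode per-timestep activations (T x N) as thresholded deltas.

    A neuron sends an update only when its activation has drifted at least
    theta from the last value it transmitted; everything else stays zero,
    which is what makes the recurrent matrix-vector work sparse.
    """
    T, N = activations.shape
    deltas = np.zeros_like(activations)
    deltas[0] = activations[0]          # initial full transmission
    last_sent = activations[0].copy()
    for t in range(1, T):
        change = activations[t] - last_sent
        fire = np.abs(change) >= theta
        deltas[t, fire] = change[fire]
        last_sent[fire] = activations[t, fire]
    return deltas

acts = np.cumsum(0.02 * np.random.randn(100, 64), axis=0)  # slowly varying states
d = delta_updates(acts, theta=0.5)
print("fraction of non-zero updates:", (d != 0).mean())
```
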
8

Huckle, Thomas K. "Efficient computation of sparse approximate inverses." Numerical Linear Algebra with Applications 5, no. 1 (1998): 57–71. http://dx.doi.org/10.1002/(sici)1099-1506(199801/02)5:1<57::aid-nla129>3.0.co;2-c.

9

Ahmad, Muhammad, Sardar Usman, Ameer Hamza, Muhammad Muzamil, and Ildar Batyrshin. "Elegante+: A Machine Learning-Based Optimization Framework for Sparse Matrix–Vector Computations on the CPU Architecture." Information 16, no. 7 (2025): 553. https://doi.org/10.3390/info16070553.

Abstract:
Sparse matrix–vector multiplication (SpMV) plays a significant role in the computational costs of many scientific applications such as 2D/3D robotics, power network problems, and computer vision. Numerous implementations using different sparse matrix formats have been introduced to optimize this kernel on CPUs and GPUs. However, due to the sparsity patterns of matrices and the diverse configurations of hardware, accurately modeling the performance of SpMV remains a complex challenge. SpMV computation is often a time-consuming process because of its sparse matrix structure. …
10

Gotsman, Craig, and Sivan Toledo. "On the Computation of Null Spaces of Sparse Rectangular Matrices." SIAM Journal on Matrix Analysis and Applications 30, no. 2 (2008): 445–63. http://dx.doi.org/10.1137/050638369.

11

Sun, Dou, Bo Pang, Shiqi Xing, Yongzhen Li, and Xuesong Wang. "Direct 3-D Sparse Imaging Using Non-Uniform Samples Without Data Interpolation." Electronics 9, no. 2 (2020): 321. http://dx.doi.org/10.3390/electronics9020321.

Abstract:
As an emerging technique, sparse imaging from three-dimensional (3-D) and non-uniform samples provides an attractive approach to obtain high resolution 3-D images along with great convenience in data acquisition, especially in the case of targets consisting of strong isolated scatterers. Although data interpolation in k-space and fast Fourier transform have been employed in the existing 3-D sparse imaging methods to reduce the computational complexity, the data-gridding errors induced by local interpolation may usually result in poor imaging performance. …
12

Son, Won, Yong-Tae Park, Yu Kyeong Kim, and Johan Lim. "Sparse Matrix Computation in Mixed Effects Model." Korean Journal of Applied Statistics 28, no. 2 (2015): 281–88. http://dx.doi.org/10.5351/kjas.2015.28.2.281.

13

Hochbaum, Dorit S., and Philipp Baumann. "Sparse Computation for Large-Scale Data Mining." IEEE Transactions on Big Data 2, no. 2 (2016): 151–74. http://dx.doi.org/10.1109/tbdata.2016.2576470.

14

Codenotti, B., and G. Resta. "Computation of sparse circulant permanents via determinants." Linear Algebra and its Applications 355, no. 1-3 (2002): 15–34. http://dx.doi.org/10.1016/s0024-3795(02)00330-0.

15

Narayanan, Sri Hari Krishna, Boyana Norris, Paul Hovland, Duc C. Nguyen, and Assefaw H. Gebremedhin. "Sparse Jacobian Computation Using ADIC2 and ColPack." Procedia Computer Science 4 (2011): 2115–23. http://dx.doi.org/10.1016/j.procs.2011.04.231.

16

Lee, Jon, and Daphne Skipper. "Volume computation for sparse Boolean quadric relaxations." Discrete Applied Mathematics 275 (March 2020): 79–94. http://dx.doi.org/10.1016/j.dam.2018.10.038.

17

Xing, Shiqi, Shaoqiu Song, Sinong Quan, Dou Sun, Junpeng Wang, and Yongzhen Li. "Near-Field 3D Sparse SAR Direct Imaging with Irregular Samples." Remote Sensing 14, no. 24 (2022): 6321. http://dx.doi.org/10.3390/rs14246321.

Abstract:
Sparse imaging is widely used in synthetic aperture radar (SAR) imaging. Compared with the traditional matched filtering (MF) methods, sparse SAR imaging can directly image the scattered points of a target and effectively reduce the sidelobes and clutter in irregular samples. However, in view of the large-scale computational complexity of sparse reconstruction with raw echo data, traditional sparse reconstruction algorithms often require huge computational expense. To solve the above problems, in this paper, we propose a 3D near-field sparse SAR direct imaging algorithm for irregular trajectories …
18

Anzt, Hartwig, Stanimire Tomov, and Jack Dongarra. "On the performance and energy efficiency of sparse linear algebra on GPUs." International Journal of High Performance Computing Applications 31, no. 5 (2016): 375–90. http://dx.doi.org/10.1177/1094342016672081.

Abstract:
In this paper we unveil some performance and energy efficiency frontiers for sparse computations on GPU-based supercomputers. We compare the resource efficiency of different sparse matrix–vector products (SpMV) taken from libraries such as cuSPARSE and MAGMA for GPU and Intel’s MKL for multicore CPUs, and develop a GPU sparse matrix–matrix product (SpMM) implementation that handles the simultaneous multiplication of a sparse matrix with a set of vectors in block-wise fashion. …
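
The block-wise SpMM idea in this abstract amounts to reusing each loaded non-zero of the sparse matrix across every vector in the block. A rough SciPy illustration (matrix size, density, and block width are arbitrary assumptions; the paper's GPU kernel is not reproduced here):

```python
import numpy as np
import scipy.sparse as sp

A = sp.random(10_000, 10_000, density=1e-3, format="csr")
X = np.random.rand(10_000, 32)  # a block of 32 vectors

# SpMM: one pass over A's non-zeros serves all 32 vectors, so every matrix
# entry fetched from memory is reused 32 times.
Y_blocked = A @ X

# Repeated SpMV: reloads every non-zero of A once per vector.
Y_loop = np.column_stack([A @ X[:, j] for j in range(X.shape[1])])

assert np.allclose(Y_blocked, Y_loop)
```
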
19

Xia, Haojun, Zhen Zheng, Yuchao Li, et al. "Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity." Proceedings of the VLDB Endowment 17, no. 2 (2023): 211–24. http://dx.doi.org/10.14778/3626292.3626303.

Abstract:
With the fast growth of parameter size, it becomes increasingly challenging to deploy large generative models as they typically require large GPU memory consumption and massive computation. Unstructured model pruning has been a common approach to reduce both GPU memory footprint and the overall computation while retaining good model accuracy. However, the existing solutions do not provide efficient support for handling unstructured sparsity on modern GPUs, especially on the highly-structured tensor core hardware. Therefore, we propose Flash-LLM for enabling low-cost and highly efficient large generative model inference …
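
Unstructured sparsity of the kind Flash-LLM targets typically comes from magnitude pruning, which zeroes weights wherever they are small, with no block or pattern constraint. A small NumPy/SciPy sketch of that setup (the sparsity level and sizes are hypothetical, and Flash-LLM's tensor-core kernels are far more involved than a CSR product):

```python
import numpy as np
import scipy.sparse as sp

def magnitude_prune(W, sparsity=0.8):
    """Zero out the smallest-magnitude entries so ~`sparsity` of W becomes zero."""
    k = int(W.size * sparsity)
    threshold = np.partition(np.abs(W).ravel(), k)[k]
    return np.where(np.abs(W) >= threshold, W, 0.0)

W = np.random.randn(1024, 1024).astype(np.float32)
W_sparse = sp.csr_matrix(magnitude_prune(W))
x = np.random.randn(1024).astype(np.float32)
y = W_sparse @ x  # memory traffic and FLOPs drop with the skipped zeros
print("stored fraction of weights:", W_sparse.nnz / W.size)
```
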
20

Uchizawa, Kei, Rodney Douglas, and Wolfgang Maass. "On the Computational Power of Threshold Circuits with Sparse Activity." Neural Computation 18, no. 12 (2006): 2994–3008. http://dx.doi.org/10.1162/neco.2006.18.12.2994.

Abstract:
Circuits composed of threshold gates (McCulloch-Pitts neurons, or perceptrons) are simplified models of neural circuits with the advantage that they are theoretically more tractable than their biological counterparts. However, when such threshold circuits are designed to perform a specific computational task, they usually differ in one important respect from computations in the brain: they require very high activity. On average every second threshold gate fires (sets a 1 as output) during a computation. By contrast, the activity of neurons in the brain is much sparser, with only about 1% of neurons …
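
The activity measure this abstract discusses (how many gates output 1 during a computation) is easy to make concrete. A toy sketch, assuming a standard two-level majority circuit rather than any construction from the paper:

```python
import numpy as np
from itertools import product

def gate(weights, bias, inputs):
    """McCulloch-Pitts threshold gate: fires (outputs 1) iff the weighted sum reaches bias."""
    return int(np.dot(weights, inputs) >= bias)

def majority3(x):
    """3-input majority from three AND gates feeding an OR gate; also count activity."""
    ands = [gate([1, 1], 2, [x[i], x[j]]) for i, j in [(0, 1), (0, 2), (1, 2)]]
    out = gate([1, 1, 1], 1, ands)
    return out, sum(ands) + out  # output, number of the 4 gates that fired

for bits in product([0, 1], repeat=3):
    out, active = majority3(bits)
    print(bits, "majority =", out, "| gates firing:", active, "of 4")
```
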
21

Ahmad, Muhammad, Usman Sardar, Ildar Batyrshin, Muhammad Hasnain, Khan Sajid, and Grigori Sidorov. "Elegante: A Machine Learning-Based Threads Configuration Tool for SpMV Computations on Shared Memory Architecture." Information 15, no. 11 (2024): 685. http://dx.doi.org/10.3390/info15110685.

Abstract:
The sparse matrix–vector product (SpMV) is a fundamental computational kernel utilized in a diverse range of scientific and engineering applications. It is commonly used to solve linear and partial differential equations. The parallel computation of the SpMV product is a challenging task. Existing solutions often employ a fixed assignment of threads to rows based on empirical formulas, leading to sub-optimal configurations and significant performance losses. Elegante, our proposed machine learning-powered tool, utilizes a data-driven approach to identify the optimal thread configuration …
22

Thomas, Rajesh, Victor DeBrunner, and Linda S. DeBrunner. "A Sparse Algorithm for Computing the DFT Using Its Real Eigenvectors." Signals 2, no. 4 (2021): 688–705. http://dx.doi.org/10.3390/signals2040041.

Abstract:
Direct computation of the discrete Fourier transform (DFT) and its FFT computational algorithms require multiplication (and addition) of complex numbers. Complex number multiplication requires four real-valued multiplications and two real-valued additions, or three real-valued multiplications and five real-valued additions, as well as the requisite added memory for temporary storage. In this paper, we present a method for computing a DFT via a natively real-valued algorithm that is computationally equivalent to an N=2^k-length DFT (where k is a positive integer), and is substantially more efficient …
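
The operation counts quoted above for complex multiplication can be checked directly. Below is the standard three-multiplication identity the abstract refers to (a generic identity, not the paper's DFT algorithm):

```python
def cmul3(a, b, c, d):
    """(a + bi) * (c + di) using three real multiplications and five additions."""
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    return k1 - k3, k1 + k2  # (real part, imaginary part)

# Check against the direct four-multiplication formula (ac - bd, ad + bc).
assert cmul3(2.0, 3.0, 4.0, 5.0) == (2 * 4 - 3 * 5, 2 * 5 + 3 * 4)
```
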
23

Beyer, Kevin, and Raghu Ramakrishnan. "Bottom-up computation of sparse and Iceberg CUBE." ACM SIGMOD Record 28, no. 2 (1999): 359–70. http://dx.doi.org/10.1145/304181.304214.

24

Abusalah, A., O. Saad, J. Mahseredjian, U. Karaagac, and I. Kocar. "Accelerated Sparse Matrix-Based Computation of Electromagnetic Transients." IEEE Open Access Journal of Power and Energy 7 (2020): 13–21. http://dx.doi.org/10.1109/oajpe.2019.2952776.

25

Arikati, Srinivasa R., Anil Maheshwari, and Christos D. Zaroliagis. "Efficient computation of implicit representations of sparse graphs." Discrete Applied Mathematics 78, no. 1-3 (1997): 1–16. http://dx.doi.org/10.1016/s0166-218x(97)00007-3.

26

Mezher, Dany, and Bernard Philippe. "Parallel computation of pseudospectra of large sparse matrices." Parallel Computing 28, no. 2 (2002): 199–221. http://dx.doi.org/10.1016/s0167-8191(01)00136-3.

27

Feng, Yu-cai, Chang-qing Chen, Jian-lin Feng, and Long-gang Xiang. "Fast Computation of Sparse Data Cubes with Constraints." Wuhan University Journal of Natural Sciences 9, no. 2 (2004): 167–72. http://dx.doi.org/10.1007/bf02830596.

28

Mega, A., M. Belkacemi, and J. M. Kauffmann. "Sparse Computation of Power System Fault Impedance Matrices." Electric Power Components and Systems 34, no. 6 (2006): 681–87. http://dx.doi.org/10.1080/15325000500419243.

29

Del Corso, Gianna M., Antonio Gullí, and Francesco Romani. "Fast PageRank Computation via a Sparse Linear System." Internet Mathematics 2, no. 3 (2005): 251–73. http://dx.doi.org/10.1080/15427951.2005.10129108.

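
The reformulation in this paper's title, computing PageRank by solving a sparse linear system instead of an eigenproblem, can be shown on a toy graph with SciPy. The damping factor and the four-page link graph are illustrative assumptions:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}  # page i links to links[i]
n, alpha = 4, 0.85

# Column-stochastic link matrix P: entry (j, i) = 1/outdegree(i) when i -> j.
rows = [j for i in links for j in links[i]]
cols = [i for i in links for _ in links[i]]
vals = [1.0 / len(links[i]) for i in links for _ in links[i]]
P = sp.csc_matrix((vals, (rows, cols)), shape=(n, n))

# PageRank x satisfies x = alpha * P x + (1 - alpha)/n, i.e. the sparse
# linear system (I - alpha * P) x = ((1 - alpha)/n) * 1.
x = spsolve(sp.identity(n, format="csc") - alpha * P, np.full(n, (1 - alpha) / n))
print(x, x.sum())  # the scores already sum to 1 for a stochastic P
```
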
30

Jandhyala, Vikram, Eric Michielssen, and Raj Mittra. "A sparse multiresolution technique for fast capacitance computation." Microwave and Optical Technology Letters 11, no. 5 (1996): 242–47. http://dx.doi.org/10.1002/(sici)1098-2760(19960405)11:5<242::aid-mop2>3.0.co;2-e.

31

Bittracher, Andreas, Mattes Mollenhauer, Péter Koltai, and Christof Schütte. "Optimal Reaction Coordinates: Variational Characterization and Sparse Computation." Multiscale Modeling & Simulation 21, no. 2 (2023): 449–88. http://dx.doi.org/10.1137/21m1448367.

32

Xu, Jian, and Delu Zeng. "Sparse Variational Student-t Processes." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (2024): 16156–63. http://dx.doi.org/10.1609/aaai.v38i14.29549.

Abstract:
The theory of Bayesian learning incorporates the use of Student-t Processes to model heavy-tailed distributions and datasets with outliers. However, despite Student-t Processes having a similar computational complexity as Gaussian Processes, there has been limited emphasis on the sparse representation of this model. This is mainly due to the increased difficulty in modeling and computation compared to previous sparse Gaussian Processes. Our motivation is to address the need for a sparse representation framework that reduces computational complexity, allowing Student-t Processes to be more flexible …
33

Laird, Avery, Bangtian Liu, Nikolaj Bjørner, and Maryam Mehri Dehnavi. "SpEQ: Translation of Sparse Codes using Equivalences." Proceedings of the ACM on Programming Languages 8, PLDI (2024): 1680–703. http://dx.doi.org/10.1145/3656445.

Abstract:
We present SpEQ, a quick and correct strategy for detecting semantics in sparse codes and enabling automatic translation to high-performance library calls or domain-specific languages (DSLs). When sparse linear algebra codes contain implicit preconditions about how data is stored that hamper direct translation, SpEQ identifies the high-level computation along with storage details and related preconditions. A run-time check guards the translation and ensures that required preconditions are met. We implement SpEQ using the LLVM framework, the Z3 solver, and the egglog library, and correctly translate …
34

van der Hoeven, Joris, and Grégoire Lecerf. "On sparse interpolation of rational functions and gcds." ACM Communications in Computer Algebra 55, no. 1 (2021): 1–12. http://dx.doi.org/10.1145/3466895.3466896.

Abstract:
In this note, we present a variant of a probabilistic algorithm by Cuyt and Lee for the sparse interpolation of multivariate rational functions. We also present an analogous method for the computation of sparse gcds.
35

Peng, Lijun, Xiaojun Duan, and Jubo Zhu. "A New Sparse Gauss-Hermite Cubature Rule Based on Relative-Weight-Ratios for Bearing-Ranging Target Tracking." Modelling and Simulation in Engineering 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/2783781.

Abstract:
A new sparse Gauss-Hermite cubature rule is designed to avoid dimension explosion caused by the traditional full tensor-product based Gauss-Hermite cubature rule. Although Smolyak’s quadrature rule can successfully generate sparse cubature points for high dimensional integrals, it has a potential drawback that some cubature points generated by Smolyak’s rule have negative weights, which may result in instability for the computation. A relative-weight-ratio criterion based sparse Gauss-Hermite rule is presented in this paper, in which cubature points are kept symmetric in the input space …
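
The dimension explosion that motivates sparse rules comes from the full tensor product: an n-point one-dimensional Gauss-Hermite rule becomes n**d points in d dimensions. A short NumPy illustration of the one-dimensional rule and the point-count growth (the paper's relative-weight-ratio construction itself is not shown):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# n-point Gauss-Hermite rule: integral of f(t) * exp(-t^2) dt ~ sum w_i f(x_i).
# For X ~ N(0, 1) this gives E[f(X)] ~ (1/sqrt(pi)) * sum w_i f(sqrt(2) x_i).
n = 5
x, w = hermgauss(n)
approx = (w * np.cos(np.sqrt(2.0) * x)).sum() / np.sqrt(np.pi)
print(approx, np.exp(-0.5))  # E[cos X] = exp(-1/2); already close at n = 5

# A full tensor-product rule in d dimensions needs n**d points:
for d in (2, 5, 10):
    print(f"d = {d}: {n ** d} tensor-product points")
```
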
36

Shu, Ruiwen, Jingwei Hu, and Shi Jin. "A Stochastic Galerkin Method for the Boltzmann Equation with Multi-Dimensional Random Inputs Using Sparse Wavelet Bases." Numerical Mathematics: Theory, Methods and Applications 10, no. 2 (2017): 465–88. http://dx.doi.org/10.4208/nmtma.2017.s12.

Abstract:
We propose a stochastic Galerkin method using sparse wavelet bases for the Boltzmann equation with multi-dimensional random inputs. The method uses locally supported piecewise polynomials as an orthonormal basis of the random space. By a sparse approach, only a moderate number of basis functions is required to achieve good accuracy in multi-dimensional random spaces. We discover a sparse structure of a set of basis-related coefficients, which allows us to accelerate the computation of the collision operator. …
37

Yu, Nan. "Rapid reconstruction of sparse multiband signals based on MMV-SWACGP algorithm." SHS Web of Conferences 166 (2023): 01071. http://dx.doi.org/10.1051/shsconf/202316601071.

Abstract:
In this paper, an MMV-SWACGP algorithm is proposed to solve the problem of pseudo-inverse computation in the iterative process of the OMPMMV algorithm for multi-band signal reconstruction by compressed sensing. The algorithm reduces the complexity and computation of the OMPMMV reconstruction algorithm, which matters for the fast reconstruction of sparse multiband signals. Theoretical analysis and simulation results show that the proposed algorithm has faster computation speed and better noise stability.
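
The pseudo-inverse bottleneck the abstract mentions sits in the least-squares step that OMP-style recovery performs on a growing support. For orientation, a baseline single-measurement-vector OMP sketch (this is not the proposed MMV-SWACGP; the sizes and seed are illustrative):

```python
import numpy as np

def omp(A, y, k):
    """Baseline Orthogonal Matching Pursuit: greedy k-sparse solution of y ~ A x.

    The lstsq call is the pseudo-inverse step whose per-iteration cost
    faster variants try to avoid.
    """
    residual, support, coef = y.copy(), [], np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256)
x_true[[10, 99, 200]] = [1.0, -2.0, 0.5]
print(np.nonzero(omp(A, A @ x_true, k=3))[0])  # typically recovers {10, 99, 200}
```
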
38

Yang, Chao, Padma Raghavan, Lloyd Arrowood, Donald W. Noid, Bobby G. Sumpter, and Robert E. Tuzun. "Large-Scale Normal Coordinate Analysis on Distributed Memory Parallel Systems." International Journal of High Performance Computing Applications 16, no. 4 (2002): 409–24. http://dx.doi.org/10.1177/109434200201600404.

Abstract:
A parallel computational scheme for analyzing large-scale molecular vibration on distributed memory computing platforms is presented in this paper. This method combines the implicitly restarted Lanczos algorithm with a state-of-the-art parallel sparse direct solver to compute a set of low frequency vibrational modes for molecular systems containing tens of thousands of atoms. Although the original motivation for developing such a scheme was to overcome memory limitations on traditional sequential and shared memory machines, our computational experiments show that with a careful parallel design …
39

Geiser, Jürgen. "Embedded Zassenhaus Expansion to Splitting Schemes: Theory and Multiphysics Applications." International Journal of Differential Equations 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/314290.

Abstract:
We present some operator splitting methods improved by the use of the Zassenhaus product and designed for applications to multiphysics problems. We treat iterative splitting methods that can be improved by means of the Zassenhaus product formula, which is a sequential splitting scheme. The main idea for reducing the computation time needed by the iterative scheme is to embed fast and cheap Zassenhaus product schemes: the computation of the commutators involved is very cheap, since we are dealing with nilpotent matrices. We discuss the coupling ideas of iterative and sequential splitting …
40

Yang, Laurence Tianruo. "The Improved Parallel ICGS Method for Large and Sparse Unsymmetric Linear Systems." Parallel Processing Letters 15, no. 4 (2005): 459–67. http://dx.doi.org/10.1142/s0129626405002374.

Abstract:
For the solutions of large and sparse linear systems of equations with unsymmetric coefficient matrices, we propose an improved version of the Conjugate Gradient Squared (ICGS) method. The algorithm is derived such that all inner products, matrix-vector multiplications and vector updates of a single iteration step are independent, and the communication time required for inner products can be overlapped efficiently with the computation time of vector updates. Therefore, the cost of global communication on parallel distributed memory computers can be significantly reduced. The resulting ICGS algorithm …
41

"Estimating Primaries by Sparse Inversion with Cost-Effective Computation." Communications in Computational Physics 28, no. 1 (2020): 477–97. http://dx.doi.org/10.4208/cicp.oa-2018-0065.

42

Hima, Thottippully, V. Ellappan, and B. Priestly Shan. "An Efficient CFA Image Reconstruction Algorithm using Sparse Computation." Asian Journal of Research in Social Sciences and Humanities 6, no. 7 (2016): 132. http://dx.doi.org/10.5958/2249-7315.2016.00415.9.

43

Du, Zhenyu, Fangzheng Liu, and Xuehu Yan. "Sparse Adversarial Video Attacks via Superpixel-Based Jacobian Computation." Sensors 22, no. 10 (2022): 3686. http://dx.doi.org/10.3390/s22103686.

Abstract:
Adversarial examples have aroused great attention during the past years owing to their threat to deep neural networks (DNNs). Recently, they have been successfully extended to video models. Compared with image cases, the sparse adversarial perturbations in videos can not only reduce the computation complexity, but also guarantee the crypticity of adversarial examples. In this paper, we propose an efficient attack to generate adversarial video perturbations with large sparsity in both the temporal (inter-frames) and spatial (intra-frames) domains. Specifically, we select the key frames …
44

da Silva Maciel, Luiz Maurílio, and Marcelo Bernardes Vieira. "Sparse Optical Flow Computation Using Wave Equation-Based Energy." International Journal of Image and Graphics 20, no. 04 (2020): 2050027. http://dx.doi.org/10.1142/s0219467820500278.

Abstract:
Identification of motion in videos is a fundamental task for several computer vision problems. One of the main tools for motion identification is optical flow, which estimates the projection of the 3D velocity of the objects onto the plane of the camera. In this work, we propose a differential optical flow method based on the wave equation. The optical flow is computed by minimizing a functional energy composed of two terms: a data term based on brightness constancy and a regularization term based on the energy of the wave. Flow is determined by solving a system of linear equations. …
45

Collins, Michael J. "Efficient secure multiparty computation of sparse vector dot products." Journal of Discrete Mathematical Sciences and Cryptography 21, no. 5 (2018): 1107–17. http://dx.doi.org/10.1080/09720529.2018.1453623.

46

Mittal, R. C., and Ahmad Al-Kurdi. "Efficient computation of the permanent of a sparse matrix." International Journal of Computer Mathematics 77, no. 2 (2001): 189–99. http://dx.doi.org/10.1080/00207160108805061.

47

Harding, Brendan, Markus Hegland, Jay Larson, and James Southern. "Fault Tolerant Computation with the Sparse Grid Combination Technique." SIAM Journal on Scientific Computing 37, no. 3 (2015): C331–C353. http://dx.doi.org/10.1137/140964448.

48

Xu, Yuan, Xin Xu, and Robert J. Adams. "A Sparse Factorization for Fast Computation of Localizing Modes." IEEE Transactions on Antennas and Propagation 58, no. 9 (2010): 3044–49. http://dx.doi.org/10.1109/tap.2010.2052549.

49

Pfeiffer, Pia, Andreas Alfons, and Peter Filzmoser. "Efficient computation of sparse and robust maximum association estimators." Computational Statistics & Data Analysis 207 (July 2025): 108133. https://doi.org/10.1016/j.csda.2025.108133.

50

Hemaspaandra, Lane A., and Jörg Rothe. "Unambiguous Computation: Boolean Hierarchies and Sparse Turing-Complete Sets." SIAM Journal on Computing 26, no. 3 (1997): 634–53. http://dx.doi.org/10.1137/s0097539794261970.
