
Journal articles on the topic 'Sparse Tensors'



Consult the top 50 journal articles for your research on the topic 'Sparse Tensors.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Chou, Stephen, and Saman Amarasinghe. "Compilation of dynamic sparse tensor algebra." Proceedings of the ACM on Programming Languages 6, OOPSLA2 (2022): 1408–37. http://dx.doi.org/10.1145/3563338.

Abstract:
Many applications, from social network graph analytics to control flow analysis, compute on sparse data that evolves over the course of program execution. Such data can be represented as dynamic sparse tensors and efficiently stored in formats (data layouts) that utilize pointer-based data structures like block linked lists, binary search trees, B-trees, and C-trees among others. These specialized formats support fast in-place modification and are thus better suited than traditional, array-based data structures like CSR for storing dynamic sparse tensors. However, different dynamic sparse tens
2

Zhang, Genghan, Olivia Hsu, and Fredrik Kjolstad. "Compilation of Modular and General Sparse Workspaces." Proceedings of the ACM on Programming Languages 8, PLDI (2024): 1213–38. http://dx.doi.org/10.1145/3656426.

Abstract:
Recent years have seen considerable work on compiling sparse tensor algebra expressions. This paper addresses a shortcoming in that work, namely how to generate efficient code (in time and space) that scatters values into a sparse result tensor. We address this shortcoming through a compiler design that generates code that uses sparse intermediate tensors (sparse workspaces) as efficient adapters between compute code that scatters and result tensors that do not support random insertion. Our compiler automatically detects sparse scattering behavior in tensor expressions and inserts necessary in
3

Fang, Jingzhi, Yanyan Shen, Yue Wang, and Lei Chen. "STile: Searching Hybrid Sparse Formats for Sparse Deep Learning Operators Automatically." Proceedings of the ACM on Management of Data 2, no. 1 (2024): 1–26. http://dx.doi.org/10.1145/3639323.

Abstract:
Sparse operators, i.e., operators that take sparse tensors as input, are of great importance in deep learning models. Due to the diverse sparsity patterns in different sparse tensors, it is challenging to optimize sparse operators by seeking an optimal sparse format, i.e., leading to the lowest operator latency. Existing works propose to decompose a sparse tensor into several parts and search for a hybrid of sparse formats to handle diverse sparse patterns. However, they often make a trade-off between search space and search time: their search spaces are limited in some cases, resulting in lim
4

Liu, Peiming, Alexander J. Root, Anlun Xu, Yinying Li, Fredrik Kjolstad, and Aart J. C. Bik. "Compiler Support for Sparse Tensor Convolutions." Proceedings of the ACM on Programming Languages 8, OOPSLA2 (2024): 275–303. http://dx.doi.org/10.1145/3689721.

Abstract:
This paper extends prior work on sparse tensor algebra compilers to generate asymptotically efficient code for tensor expressions with affine subscript expressions. Our technique enables compiler support for a wide range of sparse computations, including sparse convolutions and pooling that are widely used in ML and graphics applications. We propose an approach that gradually rewrites compound subscript expressions to simple subscript expressions with loops that exploit the sparsity pattern of the input sparse tensors. As a result, the time complexity of the generated kernels is bounded by the
5

Hackbusch, W. "A Note on Nonclosed Tensor Formats." Vietnam Journal of Mathematics 48, no. 4 (2019): 621–31. http://dx.doi.org/10.1007/s10013-019-00372-4.

Abstract:
Various tensor formats exist which allow a data-sparse representation of tensors. Some of these formats are not closed. The consequences are (i) possible non-existence of best approximations and (ii) divergence of the representing parameters when a tensor within the format tends to a border tensor outside. The paper tries to describe the nature of this divergence. A particular question is whether the divergence is uniform for all border tensors.
6

Ahrens, Willow, Teodoro Fields Collin, Radha Patel, Kyle Deeds, Changwan Hong, and Saman Amarasinghe. "Finch: Sparse and Structured Tensor Programming with Control Flow." Proceedings of the ACM on Programming Languages 9, OOPSLA1 (2025): 1042–72. https://doi.org/10.1145/3720473.

Abstract:
From FORTRAN to NumPy, tensors have revolutionized how we express computation. However, tensors in these, and almost all prominent systems, can only handle dense rectilinear integer grids. Real world tensors often contain underlying structure, such as sparsity, runs of repeated values, or symmetry. Support for structured data is fragmented and incomplete. Existing frameworks limit the tensor structures and program control flow they support to better simplify the problem. In this work, we propose a new programming language, Finch, which supports both flexible control flow and diverse data struc
7

Deeds, Kyle, Willow Ahrens, Magdalena Balazinska, and Dan Suciu. "Galley: Modern Query Optimization for Sparse Tensor Programs." Proceedings of the ACM on Management of Data 3, no. 3 (2025): 1–24. https://doi.org/10.1145/3725301.

Abstract:
The tensor programming abstraction is a foundational paradigm which allows users to write high performance programs via a high-level imperative interface. Recent work on sparse tensor compilers has extended this paradigm to sparse tensors (i.e., tensors where most entries are not explicitly represented). With these systems, users define the semantics of the program and the algorithmic decisions in a concise language that can be compiled to efficient low-level code. However, these systems still require users to make complex decisions about program structure and memory layouts to write efficient
8

Wang, Shuangyue, and Ziyan Luo. "Sparse Support Tensor Machine with Scaled Kernel Functions." Mathematics 11, no. 13 (2023): 2829. http://dx.doi.org/10.3390/math11132829.

Abstract:
As one of the supervised tensor learning methods, the support tensor machine (STM) for tensorial data classification is receiving increasing attention in machine learning and related applications, including remote sensing imaging, video processing, fault diagnosis, etc. Existing STM approaches lack consideration for support tensors in terms of data reduction. To address this deficiency, we built a novel sparse STM model to control the number of support tensors in the binary classification of tensorial data. The sparsity is imposed on the dual variables in the context of the feature space, whic
9

Tang, Tao, and Gangyao Kuang. "SAR Image Reconstruction of Vehicle Targets Based on Tensor Decomposition." Electronics 11, no. 18 (2022): 2859. http://dx.doi.org/10.3390/electronics11182859.

Abstract:
Due to the imaging mechanism of Synthetic Aperture Radars (SARs), the target shape on an SAR image is sensitive to the radar incidence angle and target azimuth, but there is strong correlation and redundancy between adjacent azimuth images of SAR targets. This paper studies multi-angle SAR image reconstruction based on non-negative Tucker decomposition using adjacent azimuth images reconstructed to form a sparse tensor. Sparse tensors are used to perform non-negative Tucker decomposition, resulting in non-negative core tensors and factor matrices. The reconstruction tensor is obtained by calcu
10

Friedland, Shmuel, Qun Li, and Dan Schonfeld. "Compressive Sensing of Sparse Tensors." IEEE Transactions on Image Processing 23, no. 10 (2014): 4438–47. http://dx.doi.org/10.1109/tip.2014.2348796.

11

Qiu, Yuning, Guoxu Zhou, Andong Wang, Zhenhao Huang, and Qibin Zhao. "Towards Multi-Mode Outlier Robust Tensor Ring Decomposition." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (2024): 14713–21. http://dx.doi.org/10.1609/aaai.v38i13.29389.

Abstract:
Conventional Outlier Robust Tensor Decomposition (ORTD) approaches generally represent sparse outlier corruption within a specific mode. However, such an assumption, which may hold for matrices, proves inadequate when applied to high-order tensors. In the tensor domain, the outliers are prone to be corrupted in multiple modes simultaneously. Addressing this limitation, this study proposes a novel ORTD approach by recovering low-rank tensors contaminated by outliers spanning multiple modes. In particular, we conceptualize outliers within high-order tensors as latent tensor group sparsity by dec
12

Zhou, Ruofei, Gang Wang, Bo Li, Jinlong Wang, Tianzhu Liu, and Chungang Liu. "Key-Frame Detection and Super-Resolution of Hyperspectral Video via Sparse-Based Cumulative Tensor Factorization." Mathematical Problems in Engineering 2020 (July 14, 2020): 1–20. http://dx.doi.org/10.1155/2020/9548749.

Abstract:
Thanks to the rapid development of hyperspectral sensors, hyperspectral videos (HSV) can now be collected with high temporal and spectral resolutions and utilized to handle invisible dynamic monitoring missions, such as chemical gas plume tracking. However, using such sequential large-scale data effectively is challenging, because directly processing these data imposes huge demands in terms of computational load and memory. This paper presents a key-frame and target-detecting algorithm based on cumulative tensor CANDECOMP/PARAFAC (CP) factorization (CTCF) to select the frames where the targ
13

Li, N., N. Pfeifer, and C. Liu. "AIRBORNE LIDAR POINTS CLASSIFICATION BASED ON TENSOR SPARSE REPRESENTATION." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W4 (September 13, 2017): 107–14. http://dx.doi.org/10.5194/isprs-annals-iv-2-w4-107-2017.

Abstract:
The common statistical methods for supervised classification usually require a large amount of training data to achieve reasonable results, which is time-consuming and inefficient. This paper proposes a tensor sparse representation classification (SRC) method for airborne LiDAR points. The LiDAR points are represented as tensors to keep attributes in their spatial space. Then only a small amount of training data is used for dictionary learning, and the sparse tensor is calculated based on the tensor OMP algorithm. The point label is determined by the minimal reconstruction residuals. Experiments are carried
14

Zhou, Jing, Anirban Bhattacharya, Amy H. Herring, and David B. Dunson. "Bayesian Factorizations of Big Sparse Tensors." Journal of the American Statistical Association 110, no. 512 (2015): 1562–76. http://dx.doi.org/10.1080/01621459.2014.983233.

15

Wang, Yao, Deyu Meng, and Ming Yuan. "Sparse recovery: from vectors to tensors." National Science Review 5, no. 5 (2017): 756–67. http://dx.doi.org/10.1093/nsr/nwx069.

Abstract:
Recent advances in various fields such as telecommunications, biomedicine and economics, among others, have created enormous amounts of data that are often characterized by their huge size and high dimensionality. It has become evident, from research in the past couple of decades, that sparsity is a flexible and powerful notion when dealing with these data, both from empirical and theoretical viewpoints. In this survey, we review some of the most popular techniques to exploit sparsity, for analyzing high-dimensional vectors, matrices and higher-order tensors.
16

Ghorbani, Mahdi, Mathieu Huot, Shideh Hashemian, and Amir Shaikhha. "Compiling Structured Tensor Algebra." Proceedings of the ACM on Programming Languages 7, OOPSLA2 (2023): 204–33. http://dx.doi.org/10.1145/3622804.

Abstract:
Tensor algebra is essential for data-intensive workloads in various computational domains. Computational scientists face a trade-off between the specialization degree provided by dense tensor algebra and the algorithmic efficiency that leverages the structure provided by sparse tensors. This paper presents StructTensor, a framework that symbolically computes structure at compilation time. This is enabled by Structured Tensor Unified Representation (STUR), an intermediate language that can capture tensor computations as well as their sparsity and redundancy structures. Through a mathematical vi
17

Lee, Geunseop. "Accelerated Tensor Robust Principal Component Analysis via Factorized Tensor Norm Minimization." Applied Sciences 15, no. 14 (2025): 8114. https://doi.org/10.3390/app15148114.

Abstract:
In this paper, we aim to develop an efficient algorithm for solving the Tensor Robust Principal Component Analysis (TRPCA) problem, which focuses on obtaining a low-rank approximation of a tensor by separating sparse and impulse noise. A common approach is to minimize the convex surrogate of the tensor rank by shrinking its singular values. Due to the existence of various definitions of tensor ranks and their corresponding convex surrogates, numerous studies have explored optimal solutions under different formulations. However, many of these approaches suffer from computational inefficiency pr
18

Kuznetsov, Maxim A., and Ivan V. Oseledets. "Tensor Train Spectral Method for Learning of Hidden Markov Models (HMM)." Computational Methods in Applied Mathematics 19, no. 1 (2019): 93–99. http://dx.doi.org/10.1515/cmam-2018-0027.

Abstract:
We propose a new algorithm for spectral learning of Hidden Markov Models (HMM). In contrast to the standard approach, we do not estimate the parameters of the HMM directly, but construct an estimate for the joint probability distribution. The idea is based on the representation of a joint probability distribution as an N-th-order tensor with low ranks represented in the tensor train (TT) format. Using TT-format, we get an approximation by minimizing the Frobenius distance between the empirical joint probability distribution and tensors with low TT-ranks with core tensors normalization
19

Mørup, Morten, Lars Kai Hansen, and Sidse M. Arnfred. "Algorithms for Sparse Nonnegative Tucker Decompositions." Neural Computation 20, no. 8 (2008): 2112–31. http://dx.doi.org/10.1162/neco.2008.11-06-407.

Abstract:
There is an increasing interest in the analysis of large-scale multiway data. The concept of multiway data refers to arrays of data with more than two dimensions, that is, taking the form of tensors. To analyze such data, decomposition techniques are widely used. The two most common decompositions for tensors are the Tucker model and the more restricted PARAFAC model. Both models can be viewed as generalizations of the regular factor analysis to data of more than two modalities. Nonnegative matrix factorization (NMF), in conjunction with sparse coding, has recently been given much attention due to
20

Liu, Zhenjiao, Xinhua Wang, Tianlai Li, and Lei Guo. "Personalized Recommendation Based on Contextual Awareness and Tensor Decomposition." Journal of Electronic Commerce in Organizations 16, no. 3 (2018): 39–51. http://dx.doi.org/10.4018/jeco.2018070104.

Abstract:
In order to solve the rating sparsity problem existing in present recommender systems, this article proposes a personalized recommendation algorithm based on contextual awareness and tensor decomposition. With this algorithm, two third-order tensors were first constructed to represent six types of entities, including the user-user-item contexts and the item-item-user contexts. Then, this article uses a high-order singular value decomposition method to mine the potential semantic association of the two third-order tensors above. Finally, the resulting tensors were combined to reach the
21

Xue, Zhaohui, Sirui Yang, Hongyan Zhang, and Peijun Du. "Coupled Higher-Order Tensor Factorization for Hyperspectral and LiDAR Data Fusion and Classification." Remote Sensing 11, no. 17 (2019): 1959. http://dx.doi.org/10.3390/rs11171959.

Abstract:
Hyperspectral and light detection and ranging (LiDAR) data fusion and classification has been an active research topic, and intensive studies have been made based on mathematical morphology. However, matrix-based concatenation of morphological features may not be so distinctive, compact, and optimal for classification. In this work, we propose a novel Coupled Higher-Order Tensor Factorization (CHOTF) model for hyperspectral and LiDAR data classification. The innovative contributions of our work are that we model different features as multiple third-order tensors, and we formulate a CHOTF model
22

Dias, Adhitha, Logan Anderson, Kirshanthan Sundararajah, Artem Pelenitsyn, and Milind Kulkarni. "SparseAuto: An Auto-scheduler for Sparse Tensor Computations using Recursive Loop Nest Restructuring." Proceedings of the ACM on Programming Languages 8, OOPSLA2 (2024): 527–56. http://dx.doi.org/10.1145/3689730.

Abstract:
Automated code generation and performance enhancements for sparse tensor algebra have become essential in many real-world applications, such as quantum computing, physical simulations, computational chemistry, and machine learning. General sparse tensor algebra compilers are not always versatile enough to generate asymptotically optimal code for sparse tensor contractions. This paper shows how to generate asymptotically better schedules for complex sparse tensor expressions using kernel fission and fusion. We present generalized loop restructuring transformations to reduce asymptotic time comp
23

Zhang, Zhao, Cheng Ding, Zhisheng Gao, and Chunzhi Xie. "ANLPT: Self-Adaptive and Non-Local Patch-Tensor Model for Infrared Small Target Detection." Remote Sensing 15, no. 4 (2023): 1021. http://dx.doi.org/10.3390/rs15041021.

Abstract:
Infrared small target detection is widely used for early warning, aircraft monitoring, ship monitoring, and so on, which requires the small target and its background to be represented and modeled effectively to achieve their complete separation. Low-rank sparse decomposition based on the structural features of infrared images has attracted much attention among many algorithms because of its good interpretability. Based on our study, we found some shortcomings in existing baseline methods, such as redundancy of constructing tensors and fixed compromising factors. A self-adaptive low-rank sparse
24

Tao, Zerui, Toshihisa Tanaka, and Qibin Zhao. "Efficient Nonparametric Tensor Decomposition for Binary and Count Data." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (2024): 15319–27. http://dx.doi.org/10.1609/aaai.v38i14.29456.

Abstract:
In numerous applications, binary reactions or event counts are observed and stored within high-order tensors. Tensor decompositions (TDs) serve as a powerful tool to handle such high-dimensional and sparse data. However, many traditional TDs are explicitly or implicitly designed based on the Gaussian distribution, which is unsuitable for discrete data. Moreover, most TDs rely on predefined multi-linear structures, such as CP and Tucker formats. Therefore, they may not be effective enough to handle complex real-world datasets. To address these issues, we propose ENTED, an Efficient Nonparametri
25

Deng, Shangju, and Jiwei Qin. "Matrix factorization completed multicontext data for tensor-enhanced recommendation." Journal of Intelligent & Fuzzy Systems 41, no. 6 (2021): 6727–38. http://dx.doi.org/10.3233/jifs-210641.

Abstract:
Tensors have been explored to share latent user-item relations and have been shown to be effective for recommendation. Tensors suffer from sparsity and cold start problems in real recommendation scenarios; therefore, researchers and engineers usually use matrix factorization to address these issues and improve the performance of recommender systems. In this paper, we propose a matrix factorization completed multicontext data for tensor-enhanced recommendation algorithm, using matrix factorization combined with a multicontext data method. To take advantage of existing user-it
26

Ortiz-Jimenez, Guillermo, Mario Coutino, Sundeep Prabhakar Chepuri, and Geert Leus. "Sparse Sampling for Inverse Problems With Tensors." IEEE Transactions on Signal Processing 67, no. 12 (2019): 3272–86. http://dx.doi.org/10.1109/tsp.2019.2914879.

27

Sun, Le, Qihao Cheng, and Zhiguo Chen. "Hyperspectral Image Super-Resolution Method Based on Spectral Smoothing Prior and Tensor Tubal Row-Sparse Representation." Remote Sensing 14, no. 9 (2022): 2142. http://dx.doi.org/10.3390/rs14092142.

Abstract:
Due to the limited hardware conditions, hyperspectral image (HSI) has a low spatial resolution, while multispectral image (MSI) can gain higher spatial resolution. Therefore, derived from the idea of fusion, we reconstructed HSI with high spatial resolution and spectral resolution from HSI and MSI and put forward an HSI Super-Resolution model based on Spectral Smoothing prior and Tensor tubal row-sparse representation, termed SSTSR. Foremost, nonlocal priors are applied to refine the super-resolution task into reconstructing each nonlocal clustering tensor. Then per nonlocal cluster tensor is
28

Grasedyck, Lars, and Wolfgang Hackbusch. "An Introduction to Hierarchical (H-) Rank and TT-Rank of Tensors with Examples." Computational Methods in Applied Mathematics 11, no. 3 (2011): 291–304. http://dx.doi.org/10.2478/cmam-2011-0016.

Abstract:
We review two similar concepts of hierarchical rank of tensors (which extend the matrix rank to higher order tensors): the TT-rank and the H-rank (hierarchical or H-Tucker rank). Based on this notion of rank, one can define a data-sparse representation of tensors involving O(dnk + dk^3) data for order d tensors with mode sizes n and rank k. Simple examples underline the differences and similarities between the different formats and ranks. Finally, we derive rank bounds for tensors in one of the formats based on the ranks in the other format.
29

Trunschke, Philipp, Martin Eigel, and Anthony Nouy. "Weighted sparsity and sparse tensor networks for least squares approximation." SMAI Journal of computational mathematics 11 (May 5, 2025): 289–333. https://doi.org/10.5802/smai-jcm.126.

Abstract:
Approximation of high-dimensional functions is a problem in many scientific fields that is only feasible if advantageous structural properties, such as sparsity in a given basis, can be exploited. A relevant tool for analysing sparse approximations is Stechkin’s lemma. In its standard form, however, this lemma does not explain convergence rates for a wide range of relevant function classes. This work presents a new weighted version of Stechkin’s lemma that improves the best n-term rates for weighted ℓp-spaces and associated function classes such as Sobolev or Besov spaces. For the cl
30

Afshar, Ardavan, Kejing Yin, Sherry Yan, et al. "SWIFT: Scalable Wasserstein Factorization for Sparse Nonnegative Tensors." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (2021): 6548–56. http://dx.doi.org/10.1609/aaai.v35i8.16811.

Abstract:
Existing tensor factorization methods assume that the input tensor follows some specific distribution (i.e. Poisson, Bernoulli, and Gaussian), and solve the factorization by minimizing some empirical loss functions defined based on the corresponding distribution. However, it suffers from several drawbacks: 1) In reality, the underlying distributions are complicated and unknown, making it infeasible to be approximated by a simple distribution. 2) The correlation across dimensions of the input tensor is not well utilized, leading to sub-optimal performance. Although heuristics were proposed to i
31

Bader, Brett W., and Tamara G. Kolda. "Efficient MATLAB Computations with Sparse and Factored Tensors." SIAM Journal on Scientific Computing 30, no. 1 (2008): 205–31. http://dx.doi.org/10.1137/060676489.

32

Yuan, Jianjun. "MRI denoising via sparse tensors with reweighted regularization." Applied Mathematical Modelling 69 (May 2019): 552–62. http://dx.doi.org/10.1016/j.apm.2019.01.011.

33

Wang, Xiaofei, and Carmeliza Navasca. "Low-rank approximation of tensors via sparse optimization." Numerical Linear Algebra with Applications 25, no. 2 (2017): e2136. http://dx.doi.org/10.1002/nla.2136.

34

Chen, Xi’ai, Zhen Wang, Kaidong Wang, Huidi Jia, Zhi Han, and Yandong Tang. "Multi-Dimensional Low-Rank with Weighted Schatten p-Norm Minimization for Hyperspectral Anomaly Detection." Remote Sensing 16, no. 1 (2023): 74. http://dx.doi.org/10.3390/rs16010074.

Abstract:
Hyperspectral anomaly detection is an important unsupervised binary classification problem that aims to effectively distinguish between background and anomalies in hyperspectral images (HSIs). In recent years, methods based on low-rank tensor representations have been proposed to decompose HSIs into low-rank background and sparse anomaly tensors. However, current methods neglect the low-rank information in the spatial dimension and rely heavily on the background information contained in the dictionary. Furthermore, these algorithms show limited robustness when the dictionary information is mis
35

Sidiropoulos, N. D., and A. Kyrillidis. "Multi-Way Compressed Sensing for Sparse Low-Rank Tensors." IEEE Signal Processing Letters 19, no. 11 (2012): 757–60. http://dx.doi.org/10.1109/lsp.2012.2210872.

36

Yang, Shuyuan, Quanwei Gao, and Shigang Wang. "Learning a Deep Representative Saliency Map With Sparse Tensors." IEEE Access 7 (2019): 117861–70. http://dx.doi.org/10.1109/access.2019.2931921.

37

Jiang, Yuanxiang, Qixiang Zhang, Zhanjiang Yuan, and Chen Wang. "Convex Robust Recovery of Corrupted Tensors via Tensor Singular Value Decomposition and Local Low-Rank Approximation." Journal of Physics: Conference Series 2670, no. 1 (2023): 012026. http://dx.doi.org/10.1088/1742-6596/2670/1/012026.

Abstract:
This paper discusses the recovery of tensor data corrupted by random noise. Our approach assumes that the potential structure of data is a linear combination of several low-rank tensor subspaces. The goal is to recover exactly these local low-rank tensors and remove random noise as much as possible. Non-parametric kernel smoothing technique is employed to establish an effective mathematical notion of local models. After that, each local model can be robustly separated into a low-rank tensor and a sparse tensor. The low-rank tensor can be recovered by minimizing a weighted combination
38

Dong, Le, and Yuan Yuan. "Sparse Constrained Low Tensor Rank Representation Framework for Hyperspectral Unmixing." Remote Sensing 13, no. 8 (2021): 1473. http://dx.doi.org/10.3390/rs13081473.

Abstract:
Recently, non-negative tensor factorization (NTF) as a very powerful tool has attracted the attention of researchers. It is used in the unmixing of hyperspectral images (HSI) due to its excellent expression ability without any information loss when describing data. However, most of the existing unmixing methods based on NTF fail to fully explore the unique properties of data, for example, low rank, that exists in both the spectral and spatial domains. To explore this low-rank structure, in this paper we learn the different low-rank representations of HSI in the spectral, spatial and non-local
39

Bai, Qipeng, Sidao Ni, Risheng Chu, and Zhe Jia. "gCAPjoint, A Software Package for Full Moment Tensor Inversion of Moderately Strong Earthquakes with Local and Teleseismic Waveforms." Seismological Research Letters 91, no. 6 (2020): 3550–62. http://dx.doi.org/10.1785/0220200031.

Abstract:
Earthquake moment tensors and focal depths are crucial to assessing seismic hazards and studying active tectonic and volcanic processes. Although less powerful than strong earthquakes (M 7+), moderately strong earthquakes (M 5–6.5) occur more frequently and extensively, which can cause severe damage in populated areas. The inversion of moment tensors is usually affected by insufficient local waveform data (epicentral distance <5°) in sparse seismic networks. It would be necessary to combine local and teleseismic data (epicentral distance 30°–90°) for a joint inversion. In this
40

Wang, Jiang, Cheng Zhu, Yun Zhou, and Weiming Zhang. "Vessel Spatio-temporal Knowledge Discovery with AIS Trajectories Using Co-clustering." Journal of Navigation 70, no. 6 (2017): 1383–400. http://dx.doi.org/10.1017/s0373463317000406.

Abstract:
Large volumes of data collected by the Automatic Identification System (AIS) provide opportunities for studying both single vessel motion behaviours and collective mobility patterns on the sea. Understanding these behaviours or patterns is of great importance to maritime situational awareness applications. In this paper, we leveraged AIS trajectories to discover vessel spatio-temporal co-occurrence patterns, which distinguish vessel behaviours simultaneously in terms of space, time and other dimensions (such as ship type, speed, width etc.). To this end, available AIS data were processed to ge
41

Kaya, Oguz, and Bora Uçar. "Parallel Candecomp/Parafac Decomposition of Sparse Tensors Using Dimension Trees." SIAM Journal on Scientific Computing 40, no. 1 (2018): C99–C130. http://dx.doi.org/10.1137/16m1102744.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Yiju, Manman Dong, and Yi Xu. "A sparse rank-1 approximation algorithm for high-order tensors." Applied Mathematics Letters 102 (April 2020): 106140. http://dx.doi.org/10.1016/j.aml.2019.106140.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Assoweh, Mohamed Ibrahim, Stéphane Chrétien, and Brahim Tamadazte. "Spectrally Sparse Tensor Reconstruction in Optical Coherence Tomography Using Nuclear Norm Penalisation." Mathematics 8, no. 4 (2020): 628. http://dx.doi.org/10.3390/math8040628.

Full text
Abstract:
Reconstruction of 3D objects in various tomographic measurements is an important problem which can be naturally addressed within the mathematical framework of 3D tensors. In Optical Coherence Tomography, the reconstruction problem can be recast as a tensor completion problem. Following the seminal work of Candès et al., the approach followed in the present work is based on the assumption that the rank of the object to be reconstructed is naturally small, and we leverage this property by using a nuclear norm-type penalisation. In this paper, a detailed study of nuclear norm penalised reconstruc
APA, Harvard, Vancouver, ISO, and other styles
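The abstract above describes tensor completion with a nuclear-norm-type penalty. The proximal operator of the nuclear norm is singular value thresholding (SVT), and a minimal sketch of the idea, applied here to a 2-D unfolding of the tensor rather than the paper's full tensorial formulation, might look as follows (`svt` and `complete_unfolding` are hypothetical names, not the authors' code):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete_unfolding(X, mask, tau=0.1, n_iters=100):
    """Toy completion of a 2-D unfolding: shrink the singular values of the
    current estimate, then re-impose the observed entries, and repeat."""
    Y = np.where(mask, X, 0.0)
    for _ in range(n_iters):
        Z = svt(Y, tau)
        Y = np.where(mask, X, Z)  # observed entries stay fixed
    return Y
```

On a low-rank matrix with most entries observed, the iteration fills the missing entries with values consistent with a small-nuclear-norm completion; the paper's actual estimator and its analysis are considerably more refined.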
44

Chang, Jingya, Yannan Chen, and Liqun Qi. "Computing Eigenvalues of Large Scale Sparse Tensors Arising from a Hypergraph." SIAM Journal on Scientific Computing 38, no. 6 (2016): A3618–A3643. http://dx.doi.org/10.1137/16m1060224.

Full text
APA, Harvard, Vancouver, ISO, and other styles
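The title of this entry concerns eigenvalues of large sparse tensors, which are typically computed through repeated sparse tensor-vector products. As a hedged illustration only (an NQZ-style power iteration for a nonnegative tensor, not the shifted method a paper like this would use), a COO-format sparse tensor apply and H-eigenvalue estimate could be sketched as:

```python
import numpy as np

def spt_apply(indices, values, x):
    """y_i = (T x^{m-1})_i for a sparse tensor in COO form: each stored
    entry T[i, j, k, ...] contributes val * x_j * x_k * ... to y_i."""
    y = np.zeros_like(x)
    for idx, val in zip(indices, values):
        prod = val
        for j in idx[1:]:
            prod *= x[j]
        y[idx[0]] += prod
    return y

def largest_h_eigenvalue(indices, values, n, order=3, n_iters=100):
    """NQZ-style power iteration for the largest H-eigenvalue of a
    nonnegative tensor (illustrative sketch, hypothetical helper)."""
    x = np.full(n, 1.0 / n)
    for _ in range(n_iters):
        y = np.power(spt_apply(indices, values, x), 1.0 / (order - 1))
        x = y / y.sum()
    y = spt_apply(indices, values, x)
    # At a fixed point, T x^{m-1} = lambda * x^{[m-1]} entrywise.
    return float(np.max(y / np.power(x, order - 1)))
```

For a diagonal third-order tensor with entries T[i,i,i] = d_i, the H-eigenvalues are exactly the d_i, which gives a simple sanity check for the sketch.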
45

Wang, Liang (汪亮). "A Sparse Factorization Strategy for Third-Order Tensors and Its Application." Advances in Applied Mathematics 07, no. 08 (2018): 1119–26. http://dx.doi.org/10.12677/aam.2018.78129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Khamis, Mahmoud Abo, Hung Q. Ngo, Xuanlong Nguyen, Dan Olteanu, and Maximilian Schleich. "Learning Models over Relational Data Using Sparse Tensors and Functional Dependencies." ACM Transactions on Database Systems 45, no. 2 (2020): 1–66. http://dx.doi.org/10.1145/3375661.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Fang, Shuangkang, Weixin Xu, Heng Wang, Yi Yang, Yufeng Wang, and Shuchang Zhou. "One Is All: Bridging the Gap between Neural Radiance Fields Architectures with Progressive Volume Distillation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (2023): 597–605. http://dx.doi.org/10.1609/aaai.v37i1.25135.

Full text
Abstract:
Neural Radiance Fields (NeRF) methods have proved effective as compact, high-quality and versatile representations for 3D scenes, and enable downstream tasks such as editing, retrieval, navigation, etc. Various neural architectures are vying for the core structure of NeRF, including the plain Multi-Layer Perceptron (MLP), sparse tensors, low-rank tensors, hashtables and their compositions. Each of these representations has its particular set of trade-offs. For example, the hashtable-based representations admit faster training and rendering but their lack of clear geometric meaning hampers down
APA, Harvard, Vancouver, ISO, and other styles
48

Wei, Wenyan, Tao Ma, Meihui Li, and Haorui Zuo. "Infrared Dim and Small Target Detection Based on Superpixel Segmentation and Spatiotemporal Cluster 4D Fully-Connected Tensor Network Decomposition." Remote Sensing 16, no. 1 (2023): 34. http://dx.doi.org/10.3390/rs16010034.

Full text
Abstract:
The detection of infrared dim and small targets in complex backgrounds is very challenging because of the low signal-to-noise ratio of targets and the drastic changes in background. Low-rank sparse decomposition based on the structural characteristics of infrared images has attracted the attention of many scholars because of its good interpretability. To address the sensitivity to sliding window size, the insufficient use of time-series information, and the inaccurate tensor rank estimation in existing methods, a four-dimensional tensor model based on superpixel segmentation and statistical clust
APA, Harvard, Vancouver, ISO, and other styles
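The low-rank sparse decomposition this abstract mentions models an infrared frame as a low-rank background plus a sparse target component. A toy alternating scheme in that spirit, standing in for (not reproducing) the paper's 4-D tensor-network method, shrinks singular values for the background and entries for the targets (`lowrank_sparse_split` is a hypothetical name):

```python
import numpy as np

def soft(M, tau):
    """Entrywise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def lowrank_sparse_split(D, tau_l=2.0, tau_s=0.5, n_iters=50):
    """Toy alternating split of an image patch D into a low-rank background L
    and a sparse target component S (an RPCA-style sketch, not the paper's
    spatiotemporal 4-D tensor decomposition)."""
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = U @ np.diag(np.maximum(s - tau_l, 0.0)) @ Vt  # shrink singular values
        S = soft(D - L, tau_s)                            # shrink entries
    return L, S
```

On a smooth rank-1 background with a single bright point added, the sparse component concentrates at the injected point, which is the behaviour small-target detectors exploit.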
49

He, Jingfei, Qiegen Liu, Anthony G. Christodoulou, Chao Ma, Fan Lam, and Zhi-Pei Liang. "Accelerated High-Dimensional MR Imaging With Sparse Sampling Using Low-Rank Tensors." IEEE Transactions on Medical Imaging 35, no. 9 (2016): 2119–29. http://dx.doi.org/10.1109/tmi.2016.2550204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Finkbeiner, Jan, Thomas Gmeinder, Mark Pupilli, Alexander Titterton, and Emre Neftci. "Harnessing Manycore Processors with Distributed Memory for Accelerated Training of Sparse and Recurrent Models." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (2024): 11996–2005. http://dx.doi.org/10.1609/aaai.v38i11.29087.

Full text
Abstract:
Current AI training infrastructure is dominated by single instruction multiple data (SIMD) and systolic array architectures, such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs), that excel at accelerating parallel workloads and dense vector-matrix multiplications. Potentially more efficient neural network models utilizing sparsity and recurrence cannot leverage the full power of SIMD processors and are thus at a severe disadvantage compared to today's prominent parallel architectures like Transformers and CNNs, thereby hindering the path towards more sustainable AI. To o
APA, Harvard, Vancouver, ISO, and other styles