Journal articles on the topic 'Nonnegative matrix factorization (NMF)'

Consult the top 50 journal articles for your research on the topic 'Nonnegative matrix factorization (NMF).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Qiao, Maoying, Jun Yu, Tongliang Liu, Xinchao Wang, and Dacheng Tao. "Diversified Bayesian Nonnegative Matrix Factorization." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5420–27. http://dx.doi.org/10.1609/aaai.v34i04.5991.

Abstract:
Nonnegative matrix factorization (NMF) has been widely employed in a variety of scenarios due to its capability of inducing semantic part-based representation. However, because of the non-convexity of its objective, the factorization is generally not unique and may inaccurately discover intrinsic “parts” from the data. In this paper, we approach this issue using a Bayesian framework. We propose to assign a diversity prior to the parts of the factorization to induce correctness based on the assumption that useful parts should be distinct and thus well-spread. A Bayesian framework including this diversity prior is then established. This framework aims at inducing factorizations embracing both good data fitness from maximizing likelihood and large separability from the diversity prior. Specifically, the diversity prior is formulated with determinantal point processes (DPP) and is seamlessly embedded into a Bayesian NMF framework. To carry out the inference, a Markov chain Monte Carlo (MCMC) based procedure is derived. Experiments conducted on a synthetic dataset and a real-world MULAN dataset for the multi-label learning (MLL) task demonstrate the superiority of the proposed method.
2

Liu, Kai, Xiangyu Li, Zhihui Zhu, Lodewijk Brand, and Hua Wang. "Factor-Bounded Nonnegative Matrix Factorization." ACM Transactions on Knowledge Discovery from Data 15, no. 6 (May 19, 2021): 1–18. http://dx.doi.org/10.1145/3451395.

Abstract:
Nonnegative Matrix Factorization (NMF) is broadly used to determine class membership in a variety of clustering applications. From movie recommendations and image clustering to visual feature extractions, NMF has applications to solve a large number of knowledge discovery and data mining problems. Traditional optimization methods, such as the Multiplicative Updating Algorithm (MUA), solve the NMF problem by utilizing an auxiliary function to ensure that the objective monotonically decreases. Although the objective in MUA converges, there exists no proof to show that the learned matrix factors converge as well. Without this rigorous analysis, the clustering performance and stability of the NMF algorithms cannot be guaranteed. To address this knowledge gap, in this article, we study the factor-bounded NMF problem and provide a solution algorithm with proven convergence by rigorous mathematical analysis, which ensures that both the objective and matrix factors converge. In addition, we show the relationship between MUA and our solution followed by an analysis of the convergence of MUA. Experiments on both toy data and real-world datasets validate the correctness of our proposed method and its utility as an effective clustering algorithm.
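
To make the multiplicative updating algorithm (MUA) discussed above concrete, the following is a minimal sketch of the classical Lee-Seung updates for the Frobenius-norm NMF objective (the function name and defaults are illustrative; the factor-bounded algorithm proposed in the paper adds bound constraints and uses different updates):

import numpy as np

def nmf_multiplicative_updates(V, r, iters=200, eps=1e-10):
    # Classical Lee-Seung multiplicative updates for
    # min ||V - W H||_F^2  subject to  W >= 0, H >= 0.
    m, n = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H; stays nonnegative
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W; stays nonnegative
    return W, H

Because the updates are multiplicative and the initial factors are nonnegative, nonnegativity is preserved automatically and the objective is nonincreasing; whether the factors W and H themselves converge is, as the abstract points out, the harder question.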
3

Groetzner, Patrick. "A projective approach to nonnegative matrix factorization." Electronic Journal of Linear Algebra 37 (September 13, 2021): 583–97. http://dx.doi.org/10.13001/ela.2021.5067.

Abstract:
In data science and machine learning, the method of nonnegative matrix factorization (NMF) is a powerful tool that enjoys great popularity. Depending on the concrete application, there exist several subclasses, each of which performs an NMF under certain constraints. Consider a given square matrix $A$. The symmetric NMF aims for a nonnegative low-rank approximation $A\approx XX^T$ to $A$, where $X$ is entrywise nonnegative and of given order. Considering a rectangular input matrix $A$, the general NMF again aims for a nonnegative low-rank approximation to $A$ which is now of the type $A\approx XY$ for entrywise nonnegative matrices $X,Y$ of given order. In this paper, we introduce a new heuristic method to tackle the exact nonnegative matrix factorization problem (of type $A=XY$), based on projection approaches to solve a certain feasibility problem.
4

Liu, Fudong, Zheng Shan, and Yihang Chen. "Parallel Nonnegative Matrix Factorization with Manifold Regularization." Journal of Electrical and Computer Engineering 2018 (2018): 1–10. http://dx.doi.org/10.1155/2018/6270816.

Abstract:
Nonnegative matrix factorization (NMF) decomposes a high-dimensional nonnegative matrix into the product of two reduced-dimensional nonnegative matrices. However, conventional NMF is not suited to large-scale datasets, since it keeps all data in memory, and it does not preserve the geometrical structure of the data, which is needed in some practical tasks. In this paper, we propose a parallel NMF with manifold regularization method (PNMF-M) to overcome the aforementioned deficiencies by parallelizing manifold-regularized NMF on a distributed computing system. In particular, PNMF-M distributes both data samples and factor matrices to multiple computing nodes instead of loading the whole dataset on a single node and updates both factor matrices locally on each node. In this way, PNMF-M relieves the memory pressure of large-scale datasets and speeds up the computation through parallelization. For constructing the adjacency matrix in manifold regularization, we propose a two-step distributed graph construction method, which is proved to be equivalent to the batch construction method. Experimental results on popular text corpora and image datasets demonstrate that PNMF-M significantly improves both the scalability and the time efficiency of conventional NMF thanks to parallelization on a distributed computing system; meanwhile, it significantly enhances the representation ability of conventional NMF thanks to the incorporated manifold regularization.
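
For context, the manifold (graph) regularized NMF objective that PNMF-M parallelizes typically takes the following standard form (a sketch; the paper's notation and exact regularizer may differ):

$$\min_{W \ge 0,\, H \ge 0}\ \|V - WH\|_F^2 \;+\; \lambda\,\mathrm{Tr}\!\left(H L H^{\top}\right),$$

where each column of $H$ is the low-dimensional representation of one sample, $L = D - S$ is the graph Laplacian built from the adjacency (similarity) matrix $S$ with degree matrix $D$, and $\lambda \ge 0$ trades off reconstruction accuracy against smoothness of the representations over the data graph.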
5

Schmidt, Mikkel N., and Hans Laurberg. "Nonnegative Matrix Factorization with Gaussian Process Priors." Computational Intelligence and Neuroscience 2008 (2008): 1–10. http://dx.doi.org/10.1155/2008/361705.

Abstract:
We present a general method for including prior knowledge in a nonnegative matrix factorization (NMF), based on Gaussian process priors. We assume that the nonnegative factors in the NMF are linked by a strictly increasing function to an underlying Gaussian process specified by its covariance function. This allows us to find NMF decompositions that agree with our prior knowledge of the distribution of the factors, such as sparseness, smoothness, and symmetries. The method is demonstrated with an example from chemical shift brain imaging.
6

Lin, Chih-Jen. "Projected Gradient Methods for Nonnegative Matrix Factorization." Neural Computation 19, no. 10 (October 2007): 2756–79. http://dx.doi.org/10.1162/neco.2007.19.10.2756.

Abstract:
Nonnegative matrix factorization (NMF) can be formulated as a minimization problem with bound constraints. Although bound-constrained optimization has been studied extensively in both theory and practice, so far no study has formally applied its techniques to NMF. In this letter, we propose two projected gradient methods for NMF, both of which exhibit strong optimization properties. We discuss efficient implementations and demonstrate that one of the proposed methods converges faster than the popular multiplicative update approach. A simple Matlab code is also provided.
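
To illustrate the projected-gradient idea on one of the two NMF subproblems, here is a minimal sketch (the function name and the fixed step size are illustrative; Lin's algorithms choose the step by a backtracking line search and alternate between the W- and H-subproblems):

import numpy as np

def projected_gradient_H(V, W, H, inner_iters=50, step=1e-3):
    # Projected gradient on  min_H 0.5 * ||V - W H||_F^2  with  H >= 0.
    WtW, WtV = W.T @ W, W.T @ V
    for _ in range(inner_iters):
        grad = WtW @ H - WtV                   # gradient with respect to H
        H = np.maximum(H - step * grad, 0.0)   # project back onto H >= 0
    return H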
7

Ang, Andersen Man Shun, and Nicolas Gillis. "Accelerating Nonnegative Matrix Factorization Algorithms Using Extrapolation." Neural Computation 31, no. 2 (February 2019): 417–39. http://dx.doi.org/10.1162/neco_a_01157.

Abstract:
We propose a general framework to significantly accelerate algorithms for nonnegative matrix factorization (NMF). This framework is inspired by the extrapolation scheme used to accelerate gradient methods in convex optimization and by the method of parallel tangents. However, the use of extrapolation in the context of exact coordinate descent algorithms tackling nonconvex NMF problems is novel. We illustrate the performance of this approach on two state-of-the-art NMF algorithms: accelerated hierarchical alternating least squares and alternating nonnegative least squares, using synthetic, image, and document data sets.
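
A minimal sketch of the extrapolation step that such a framework is built around is shown below (illustrative only; the paper tunes the extrapolation weight adaptively and restarts when the error grows):

import numpy as np

def extrapolate_factor(X_new, X_old, beta):
    # Momentum-style extrapolation of an NMF factor, projected back onto
    # the nonnegative orthant; beta in [0, 1) is the extrapolation weight.
    return np.maximum(X_new + beta * (X_new - X_old), 0.0)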
8

Flatz, Markus, and Marián Vajteršic. "Parallel Nonnegative Matrix Factorization via Newton Iteration." Parallel Processing Letters 26, no. 03 (September 2016): 1650014. http://dx.doi.org/10.1142/s0129626416500146.

Abstract:
The goal of Nonnegative Matrix Factorization (NMF) is to represent a large nonnegative matrix in an approximate way as a product of two significantly smaller nonnegative matrices. This paper shows in detail how an NMF algorithm based on Newton iteration can be derived using the general Karush-Kuhn-Tucker (KKT) conditions for first-order optimality. This algorithm is suited for parallel execution on systems with shared memory and also with message passing. Both versions were implemented and tested, delivering satisfactory speedup results.
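
For reference, the first-order Karush-Kuhn-Tucker conditions from which such Newton-type NMF schemes are derived can be stated, for $f(W,H) = \tfrac{1}{2}\|V - WH\|_F^2$, as

$$W \ge 0, \qquad \nabla_W f = (WH - V)H^{\top} \ge 0, \qquad W \odot \nabla_W f = 0,$$

and analogously for $H$ with $\nabla_H f = W^{\top}(WH - V)$, where $\odot$ denotes the entrywise product.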
9

Wei, Jiang, Li Min, and Zhang Yongqing. "Neighborhood Preserving Convex Nonnegative Matrix Factorization." Mathematical Problems in Engineering 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/154942.

Abstract:
The convex nonnegative matrix factorization (CNMF) is a variation of nonnegative matrix factorization (NMF) in which each cluster is expressed by a linear combination of the data points and each data point is represented by a linear combination of the cluster centers. When there exists nonlinearity in the manifold structure, both NMF and CNMF are incapable of characterizing the geometric structure of the data. This paper introduces a neighborhood preserving convex nonnegative matrix factorization (NPCNMF), which imposes an additional constraint on CNMF that each data point can be represented as a linear combination of its neighbors. Thus, our method is able to reap the benefits of both nonnegative data factorization and the preservation of the manifold structure. An efficient multiplicative updating procedure is derived, and its convergence is guaranteed theoretically. The feasibility and effectiveness of NPCNMF are verified on several standard data sets with promising results.
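
For readers unfamiliar with convex NMF, the formulation referred to above restricts the basis to combinations of the data themselves (a sketch in the style of Ding, Li, and Jordan; notation may differ from the paper):

$$V \approx (VW)\,G^{\top}, \qquad W \ge 0,\ G \ge 0,$$

so each cluster center, a column of $VW$, is a nonnegative combination of the data points, and each data point is in turn approximated by a combination of those centers; NPCNMF adds the further requirement that each data point be reconstructed from its neighbors.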
10

Le Thi, Hoai An, Xuan Thanh Vo, and Tao Pham Dinh. "Efficient Nonnegative Matrix Factorization by DC Programming and DCA." Neural Computation 28, no. 6 (June 2016): 1163–216. http://dx.doi.org/10.1162/neco_a_00836.

Abstract:
In this letter, we consider the nonnegative matrix factorization (NMF) problem and several NMF variants. Two approaches based on DC (difference of convex functions) programming and DCA (DC algorithm) are developed. The first approach follows the alternating framework that requires solving, at each iteration, two nonnegativity-constrained least squares subproblems for which DCA-based schemes are investigated. The convergence property of the proposed algorithm is carefully studied. We show that with suitable DC decompositions, our algorithm generates most of the standard methods for the NMF problem. The second approach directly applies DCA on the whole NMF problem. Two algorithms—one computing all variables and one deploying a variable selection strategy—are proposed. The proposed methods are then adapted to solve various NMF variants, including the nonnegative factorization, the smooth regularization NMF, the sparse regularization NMF, the multilayer NMF, the convex/convex-hull NMF, and the symmetric NMF. We also show that our algorithms include several existing methods for these NMF variants as special versions. The efficiency of the proposed approaches is empirically demonstrated on both real-world and synthetic data sets. It turns out that our algorithms compete favorably with five state-of-the-art alternating nonnegative least squares algorithms.
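
As background, the generic DCA iteration underlying both approaches writes the objective as a difference of convex functions, $f = g - h$, and alternates a subgradient step on $h$ with a convex minimization (a sketch of the standard scheme; the DC decompositions used for NMF in the paper are more specific):

$$y^{k} \in \partial h\!\left(x^{k}\right), \qquad x^{k+1} \in \arg\min_{x}\ \left\{\, g(x) - \left\langle x,\, y^{k} \right\rangle \right\}.$$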
11

Chen, Wen-Sheng, Jingmin Liu, Binbin Pan, and Yugao Li. "Block kernel nonnegative matrix factorization for face recognition." International Journal of Wavelets, Multiresolution and Information Processing 17, no. 01 (January 2019): 1850059. http://dx.doi.org/10.1142/s0219691318500595.

Abstract:
Nonnegative matrix factorization (NMF) is a linear approach for extracting localized features of facial images. However, NMF may fail to process data points that are nonlinearly separable. The kernel extension of NMF, named kernel NMF (KNMF), can model the nonlinear relationship among data points and extract nonlinear features of facial images. KNMF is an unsupervised method and thus does not utilize supervision information. Moreover, the features extracted by KNMF are not sparse enough. To overcome these limitations, this paper proposes a supervised KNMF called block kernel NMF (BKNMF). A novel objective function is established by incorporating the intra-class information. The algorithm is derived by making use of the block strategy and kernel theory. Our BKNMF has some merits for face recognition, such as highly sparse features and orthogonal features from different classes. We theoretically analyze the convergence of the proposed BKNMF. Compared with some state-of-the-art methods, our BKNMF achieves superior performance in face recognition.
12

Chen, Wen-Sheng, Qian Wang, Binbin Pan, and Bo Chen. "Nonnegative matrix factorization with manifold structure for face recognition." International Journal of Wavelets, Multiresolution and Information Processing 17, no. 02 (March 2019): 1940006. http://dx.doi.org/10.1142/s021969131940006x.

Abstract:
Nonnegative matrix factorization (NMF) is a promising method to represent facial images using nonnegative features under a low-rank nonnegative basis-image matrix. The facial images usually reside on a low-dimensional manifold due to the variations of illumination, pose and facial expression. However, NMF has no ability to uncover the manifold structure of data embedded in a high-dimensional Euclidean space, while the manifold structure contains both local and nonlocal intrinsic features. These two kinds of features are of benefit to class discrimination. To enhance the discriminative power of NMF, this paper proposes a novel NMF algorithm with manifold structure (Mani-NMF). Two quantities related to the adjacent graph and the non-adjacent graph are incorporated into the objective function, which is minimized by solving two convex suboptimization problems. Based on the gradient descent method and the auxiliary function technique, we derive the update rules of Mani-NMF and theoretically prove the convergence of the proposed Mani-NMF algorithm. Three publicly available face databases, namely the Yale, pain expression, and CMU databases, are selected for evaluation. Experimental results show that our algorithm achieves better performance than some state-of-the-art algorithms.
13

Chen, Wen-Sheng, Binbin Pan, Bin Fang, Ming Li, and Jianliang Tang. "Incremental Nonnegative Matrix Factorization for Face Recognition." Mathematical Problems in Engineering 2008 (2008): 1–17. http://dx.doi.org/10.1155/2008/410674.

Abstract:
Nonnegative matrix factorization (NMF) is a promising approach for local feature extraction in face recognition tasks. However, there are two major drawbacks in almost all existing NMF-based methods. One shortcoming is that the computational cost is expensive for large matrix decomposition. The other is that it must conduct repetitive learning when the training samples or classes are updated. To overcome these two limitations, this paper proposes a novel incremental nonnegative matrix factorization (INMF) for face representation and recognition. The proposed INMF approach is based on a novel constraint criterion and our previous block strategy. It thus has some good properties, such as low computational complexity and a sparse coefficient matrix. Also, the coefficient column vectors between different classes are orthogonal. In particular, it can be applied to incremental learning. Two face databases, namely the FERET and CMU PIE face databases, are selected for evaluation. Compared with PCA and some state-of-the-art NMF-based methods, our INMF approach gives the best performance.
14

Li, Bingfeng, Yandong Tang, and Zhi Han. "Robust Structure Preserving Nonnegative Matrix Factorization for Dimensionality Reduction." Mathematical Problems in Engineering 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/7474839.

Abstract:
As a linear dimensionality reduction method, nonnegative matrix factorization (NMF) has been widely used in many fields, such as machine learning and data mining. However, there are still two major drawbacks for NMF: (a) NMF can only perform semantic factorization in Euclidean space, and it fails to discover the intrinsic geometrical structure of high-dimensional data distribution. (b) NMF suffers from noisy data, which are commonly encountered in real-world applications. To address these issues, in this paper, we present a new robust structure preserving nonnegative matrix factorization (RSPNMF) framework. In RSPNMF, a local affinity graph and a distant repulsion graph are constructed to encode the geometrical information, and the influence of noisy data is alleviated by characterizing the data reconstruction term of NMF with the l2,1-norm instead of the l2-norm. With incorporation of the local and distant structure preservation regularization terms into the robust NMF framework, our algorithm can discover a low-dimensional embedding subspace with the nature of structure preservation. RSPNMF is formulated as an optimization problem and solved by an effective iterative multiplicative update algorithm. Experimental results on clustering of facial image datasets show significant performance improvement of RSPNMF in comparison with the state-of-the-art algorithms.
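
For clarity, the l2,1-norm used for the data reconstruction term sums the Euclidean norms of the per-sample residuals, so a noisy sample contributes linearly rather than quadratically to the loss (a sketch with samples stored as columns; the row-wise convention is also common):

$$\|V - WH\|_{2,1} \;=\; \sum_{j=1}^{n} \left( \sum_{i=1}^{m} \left(V - WH\right)_{ij}^{2} \right)^{1/2}.$$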
15

Zhang, Junying, Le Wei, Xuerong Feng, Zhen Ma, and Yue Wang. "Pattern Expression Nonnegative Matrix Factorization: Algorithm and Applications to Blind Source Separation." Computational Intelligence and Neuroscience 2008 (2008): 1–10. http://dx.doi.org/10.1155/2008/168769.

Abstract:
Independent component analysis (ICA) is a widely applicable and effective approach in blind source separation (BSS), with the limitation that the sources must be statistically independent. However, a more common situation is blind source separation for the nonnegative linear model (NNLM), where the observations are nonnegative linear combinations of nonnegative sources and the sources may be statistically dependent. We propose a pattern expression nonnegative matrix factorization (PE-NMF) approach from the viewpoint of using basis vectors most effectively to express patterns. Two regularization or penalty terms are added to the original loss function of standard nonnegative matrix factorization (NMF) for effective expression of patterns with basis vectors in PE-NMF. A learning algorithm is presented, and the convergence of the algorithm is proved theoretically. Three illustrative examples on blind source separation, including heterogeneity correction for gene microarray data, indicate that the sources can be successfully recovered with the proposed PE-NMF when the two parameters are suitably chosen from prior knowledge of the problem.
16

Jiang, Qin, Yifei Dong, Jiangtao Peng, Mei Yan, and Yi Sun. "Maximum Likelihood Estimation Based Nonnegative Matrix Factorization for Hyperspectral Unmixing." Remote Sensing 13, no. 13 (July 5, 2021): 2637. http://dx.doi.org/10.3390/rs13132637.

Abstract:
Hyperspectral unmixing (HU) is a research hotspot of hyperspectral remote sensing technology. As a classical HU method, the nonnegative matrix factorization (NMF) unmixing method can decompose an observed hyperspectral data matrix into the product of two nonnegative matrices, i.e., endmember and abundance matrices. Because the objective function of NMF is the traditional least-squares function, NMF is sensitive to noise. In order to improve the robustness of NMF, this paper proposes a maximum likelihood estimation (MLE) based NMF model (MLENMF) for unmixing of hyperspectral images (HSIs), which substitutes the least-squares objective function in traditional NMF by a robust MLE-based loss function. Experimental results on a simulated and two widely used real hyperspectral data sets demonstrate the superiority of our MLENMF over existing NMF methods.
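
As background, NMF-based unmixing rests on the linear mixing model, in which each observed pixel spectrum is a nonnegative combination of endmember spectra (a standard statement; notation and constraints may differ slightly from the paper):

$$Y \approx EA, \qquad E \ge 0,\quad A \ge 0,\quad \mathbf{1}^{\top} A = \mathbf{1}^{\top},$$

where the columns of $E$ are endmember signatures, the columns of $A$ are per-pixel abundance vectors, and the last condition is the usual abundance sum-to-one constraint.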
17

CICHOCKI, ANDRZEJ, and RAFAL ZDUNEK. "MULTILAYER NONNEGATIVE MATRIX FACTORIZATION USING PROJECTED GRADIENT APPROACHES." International Journal of Neural Systems 17, no. 06 (December 2007): 431–46. http://dx.doi.org/10.1142/s0129065707001275.

Abstract:
The most popular algorithms for Nonnegative Matrix Factorization (NMF) belong to the class of multiplicative Lee-Seung algorithms, which usually have relatively low complexity but are characterized by slow convergence and the risk of getting stuck in local minima. In this paper, we present and compare the performance of additive algorithms based on three different variations of a projected gradient approach. Additionally, we discuss a novel multilayer approach to NMF algorithms combined with a multi-start initialization procedure, which in general considerably improves the performance of all the NMF algorithms. We demonstrate that this approach (the multilayer system with projected gradient algorithms) can usually give much better performance than standard multiplicative algorithms, especially if the data are ill-conditioned, badly scaled, and/or the number of observations is only slightly greater than the number of nonnegative hidden components. Our new implementations of NMF are demonstrated with simulations performed on Blind Source Separation (BSS) data.
18

Yousefi, Ibarra-Castanedo, and Maldague. "Infrared Non-Destructive Testing via Semi-Nonnegative Matrix Factorization." Proceedings 27, no. 1 (September 20, 2019): 13. http://dx.doi.org/10.3390/proceedings2019027013.

Abstract:
Detection of subsurface defects is undeniably a growing subfield of infrared non-destructive testing (IR-NDT). There are many algorithms used for this purpose, where non-negative matrix factorization (NMF) is considered to be an interesting alternative to principal component analysis (PCA) because it has no negative basis in the matrix decomposition. Here, an application of semi non-negative matrix factorization (Semi-NMF) in IR-NDT is presented to determine the subsurface defects of an aluminum plate specimen through an active thermographic method. As a benchmark, the defect detection accuracy and computational load of the Semi-NMF approach are compared to state-of-the-art thermography processing approaches such as principal component thermography (PCT), candid covariance-free incremental principal component thermography (CCIPCT), sparse PCT, sparse NMF, and standard NMF with gradient descent (GD) and non-negative least squares (NNLS). The results show 86% accuracy with 27.5 s of computation time for Semi-NMF, which conclusively indicates the promising performance of the approach in the field of IR-NDT.
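
For readers unfamiliar with Semi-NMF, it relaxes the nonnegativity requirement on one factor, which is what makes it applicable to data that are not sign-constrained (a sketch of the usual formulation; notation may differ from the paper):

$$X \approx F\,G^{\top}, \qquad G \ge 0, \quad F \ \text{and}\ X \ \text{unconstrained in sign}.$$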
19

Pan, Ji-Yuan, and Jiang-She Zhang. "Relationship Matrix Nonnegative Decomposition for Clustering." Mathematical Problems in Engineering 2011 (2011): 1–15. http://dx.doi.org/10.1155/2011/864540.

Abstract:
Nonnegative matrix factorization (NMF) is a popular tool for analyzing the latent structure of nonnegative data. For a positive pairwise similarity matrix, symmetric NMF (SNMF) and weighted NMF (WNMF) can be used to cluster the data. However, neither of them is very effective for an ill-structured pairwise similarity matrix. In this paper, a novel model, called relationship matrix nonnegative decomposition (RMND), is proposed to discover the latent clustering structure from the pairwise similarity matrix. The RMND model is derived from the nonlinear NMF algorithm. RMND decomposes a pairwise similarity matrix into a product of three low-rank nonnegative matrices. The pairwise similarity matrix is represented as a transformation of a positive semidefinite matrix which reveals the latent clustering structure. We develop a learning procedure based on multiplicative update rules and the steepest descent method to calculate the nonnegative solution of RMND. Experimental results on four different databases show that the proposed RMND approach achieves higher clustering accuracy.
20

Févotte, Cédric, and Jérôme Idier. "Algorithms for Nonnegative Matrix Factorization with the β-Divergence." Neural Computation 23, no. 9 (September 2011): 2421–56. http://dx.doi.org/10.1162/neco_a_00168.

Abstract:
This letter describes algorithms for nonnegative matrix factorization (NMF) with the β-divergence (β-NMF). The β-divergence is a family of cost functions parameterized by a single shape parameter β that takes the Euclidean distance, the Kullback-Leibler divergence, and the Itakura-Saito divergence as special cases (β = 2, 1, 0 respectively). The proposed algorithms are based on a surrogate auxiliary function (a local majorization of the criterion function). We first describe a majorization-minimization algorithm that leads to multiplicative updates, which differ from standard heuristic multiplicative updates by a β-dependent power exponent. The monotonicity of the heuristic algorithm can, however, be proven for β ∈ (0, 1) using the proposed auxiliary function. Then we introduce the concept of the majorization-equalization (ME) algorithm, which produces updates that move along constant level sets of the auxiliary function and lead to larger steps than MM. Simulations on synthetic and real data illustrate the faster convergence of the ME approach. The letter also describes how the proposed algorithms can be adapted to two common variants of NMF: penalized NMF (when a penalty function of the factors is added to the criterion function) and convex NMF (when the dictionary is assumed to belong to a known subspace).
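
For reference, the scalar β-divergence that is applied entrywise and summed in β-NMF is

$$d_{\beta}(x \mid y) =
\begin{cases}
\dfrac{x^{\beta} + (\beta - 1)\,y^{\beta} - \beta\, x\, y^{\beta - 1}}{\beta(\beta - 1)}, & \beta \neq 0, 1,\\[2mm]
x \log \dfrac{x}{y} - x + y, & \beta = 1 \ \text{(Kullback-Leibler)},\\[2mm]
\dfrac{x}{y} - \log \dfrac{x}{y} - 1, & \beta = 0 \ \text{(Itakura-Saito)},
\end{cases}$$

which for β = 2 reduces to half the squared Euclidean distance, consistent with the special cases listed above.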
21

Lin, Chuang, and Meng Pang. "Graph Regularized Nonnegative Matrix Factorization with Sparse Coding." Mathematical Problems in Engineering 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/239589.

Abstract:
In this paper, we propose a sparseness-constrained NMF method, named graph regularized nonnegative matrix factorization with sparse coding (GRNMF_SC). By combining manifold learning and sparse coding techniques, GRNMF_SC can efficiently extract basis vectors from the data space that preserve the intrinsic manifold structure as well as the local features of the original data. The objective function of our method is easy to state, but the solution procedure is nontrivial; we give a detailed derivation of the solution to the objective function and a rigorous proof of its convergence, which is a key contribution of the paper. Compared with sparseness-constrained NMF and GNMF algorithms, GRNMF_SC can learn a much sparser representation of the data and can also preserve the geometrical structure of the data, which endows it with powerful discriminating ability. Furthermore, GRNMF_SC is generalized into supervised and unsupervised models to meet different demands. Experimental results demonstrate encouraging performance of GRNMF_SC on image recognition and clustering when compared with other state-of-the-art NMF methods.
22

Zeng, Xianhua, Zhengyi He, Hong Yu, and Shengwei Qu. "Bidirectional Nonnegative Deep Model and Its Optimization in Learning." Journal of Optimization 2016 (2016): 1–8. http://dx.doi.org/10.1155/2016/5975120.

Abstract:
Nonnegative matrix factorization (NMF) has been successfully applied in signal processing as a simple two-layer nonnegative neural network. Projective NMF (PNMF), with fewer parameters, was proposed to project high-dimensional nonnegative data onto a lower-dimensional nonnegative subspace. Although PNMF overcomes the out-of-sample problem of NMF, it does not consider the nonlinear characteristics of data and remains only a narrow signal decomposition method. In this paper, we combine PNMF with deep learning and nonlinear fitting to propose a bidirectional nonnegative deep learning (BNDL) model and its optimization learning algorithm, which can obtain nonlinear multilayer deep nonnegative feature representations. Experiments show that the proposed model can not only solve the out-of-sample problem of NMF but also learn hierarchical nonnegative feature representations with better clustering performance than classical NMF, PNMF, and Deep Semi-NMF algorithms.
23

Shang, Ronghua, Chiyang Liu, Yang Meng, Licheng Jiao, and Rustam Stolkin. "Nonnegative Matrix Factorization with Rank Regularization and Hard Constraint." Neural Computation 29, no. 9 (September 2017): 2553–79. http://dx.doi.org/10.1162/neco_a_00995.

Abstract:
Nonnegative matrix factorization (NMF) is well known to be an effective tool for dimensionality reduction in problems involving big data. For this reason, it frequently appears in many areas of scientific and engineering literature. This letter proposes a novel semisupervised NMF algorithm for overcoming a variety of problems associated with NMF algorithms, including poor use of prior information, negative impact on manifold structure of the sparse constraint, and inaccurate graph construction. Our proposed algorithm, nonnegative matrix factorization with rank regularization and hard constraint (NMFRC), incorporates label information into data representation as a hard constraint, which makes full use of prior information. NMFRC also measures pairwise similarity according to geodesic distance rather than Euclidean distance. This results in more accurate measurement of pairwise relationships, resulting in more effective manifold information. Furthermore, NMFRC adopts rank constraint instead of norm constraints for regularization to balance the sparseness and smoothness of data. In this way, the new data representation is more representative and has better interpretability. Experiments on real data sets suggest that NMFRC outperforms four other state-of-the-art algorithms in terms of clustering accuracy.
24

Wang, Wenhong, and Hongfu Liu. "Deep Nonnegative Dictionary Factorization for Hyperspectral Unmixing." Remote Sensing 12, no. 18 (September 5, 2020): 2882. http://dx.doi.org/10.3390/rs12182882.

Abstract:
As a powerful blind source separation tool, Nonnegative Matrix Factorization (NMF) with effective regularizations has shown significant superiority in spectral unmixing of hyperspectral remote sensing images (HSIs) owing to its good physical interpretability and data adaptability. However, the majority of existing NMF-based spectral unmixing methods only adopt the single layer factorization, which is not favorable for exploiting the complex and structured representation relationship of endmembers implied in HSIs. In order to overcome such an issue, we propose a novel two-stage Deep Nonnegative Dictionary Factorization (DNDF) approach with a sparseness constraint and self-supervised regularization for HSI unmixing. Beyond simply extending one-layer factorization to multi-layer, DNDF conducts fuzzy clustering to tackle the mixed endmembers of HSIs. Moreover, self-supervised regularization is integrated into our DNDF model to impose an effective constraint on the endmember matrix. Experimental results on three real HSIs demonstrate the superiority of DNDF over several state-of-the-art methods.
25

Sun, Li, Congying Han, and Ziwen Liu. "Active Set Type Algorithms for Nonnegative Matrix Factorization in Hyperspectral Unmixing." Mathematical Problems in Engineering 2019 (November 21, 2019): 1–10. http://dx.doi.org/10.1155/2019/9609302.

Abstract:
Hyperspectral unmixing is a powerful remote sensing image mining method that identifies the constituent materials and estimates the corresponding fractions from the mixture. We consider the application of nonnegative matrix factorization (NMF) for the mining and analysis of spectral data. In this paper, we develop two effective active set type NMF algorithms for hyperspectral unmixing. Because the factor matrices used in unmixing have sparse features, the active set strategy helps reduce the computational cost. These active set type algorithms for NMF are based on alternating nonnegativity-constrained least squares (ANLS) and achieve a quadratic convergence rate under reasonable assumptions. Finally, numerical tests demonstrate that these algorithms work well and that the function values decrease faster than those obtained with other algorithms.
26

Heinrich, Kevin E., Michael W. Berry, and Ramin Homayouni. "Gene Tree Labeling Using Nonnegative Matrix Factorization on Biomedical Literature." Computational Intelligence and Neuroscience 2008 (2008): 1–12. http://dx.doi.org/10.1155/2008/276535.

Abstract:
Identifying functional groups of genes is a challenging problem for biological applications. Text mining approaches can be used to build hierarchical clusters or trees from the information in the biological literature. In particular, nonnegative matrix factorization (NMF) is examined as one approach to label hierarchical trees. A generic labeling algorithm as well as an evaluation technique is proposed, and the effects of different NMF parameters with regard to convergence and labeling accuracy are discussed. The primary goals of this study are to provide a qualitative assessment of the NMF and its various parameters and initialization, to provide an automated way to classify biomedical data, and to provide a method for evaluating labeled data assuming a static input tree. As a byproduct, a method for generating gold standard trees is proposed.
27

Li, Xiangli, Wen Zhang, Xiaoliang Dong, and Juanjuan Shi. "An Inexact Update Method with Double Parameters for Nonnegative Matrix Factorization." Mathematical Problems in Engineering 2016 (2016): 1–6. http://dx.doi.org/10.1155/2016/2173914.

Abstract:
Nonnegative matrix factorization (NMF) has been used as a powerful data representation tool in the real world, because the nonnegativity of matrices is usually required. In recent years, many new methods have become available to solve NMF in addition to the multiplicative update algorithm, such as gradient descent algorithms, the active set method, and alternating nonnegative least squares (ANLS). In this paper, we propose an inexact update method with two parameters, which ensures that the objective function always decreases before the optimal solution is found. Experimental results show that the proposed method is effective.
28

Wu, Jing, Bin Chen, and Tao Han. "Two Efficient Algorithms for Orthogonal Nonnegative Matrix Factorization." Mathematical Problems in Engineering 2021 (July 1, 2021): 1–13. http://dx.doi.org/10.1155/2021/8490147.

Abstract:
Nonnegative matrix factorization (NMF) is a popular method for the multivariate analysis of nonnegative data. It involves decomposing a data matrix into a product of two factor matrices with all entries restricted to being nonnegative. Orthogonal nonnegative matrix factorization (ONMF) has been introduced recently. This method has demonstrated remarkable performance in clustering tasks, such as gene expression classification. In this study, we introduce two convergence methods for solving ONMF. First, we design a convergent orthogonal algorithm based on the Lagrange multiplier method. Second, we propose an approach that is based on the alternating direction method. Finally, we demonstrate that the two proposed approaches tend to deliver higher-quality solutions and perform better in clustering tasks compared with a state-of-the-art ONMF.
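
For reference, orthogonal NMF augments the standard factorization with an orthogonality constraint on one factor, which is what ties it to clustering (a sketch; the constraint can equivalently be placed on the other factor):

$$\min_{W \ge 0,\, H \ge 0}\ \|V - WH\|_F^{2} \qquad \text{subject to} \qquad H H^{\top} = I,$$

so the rows of $H$ act approximately as mutually exclusive cluster indicators.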
29

Jiang, Wei, Qian Lv, Chenggang Yan, Kewei Tang, and Jie Zhang. "Robust Semisupervised Nonnegative Local Coordinate Factorization for Data Representation." Complexity 2018 (August 1, 2018): 1–16. http://dx.doi.org/10.1155/2018/7963210.

Abstract:
Obtaining an optimum data representation is a challenging issue that arises in many intellectual data processing techniques such as data mining, pattern recognition, and gene clustering. Many existing methods formulate this problem as a nonnegative matrix factorization (NMF) approximation problem. The standard NMF uses the least square loss function, which is not robust to outlier points and noises and fails to utilize prior label information to enhance the discriminability of representations. In this study, we develop a novel matrix factorization method called robust semisupervised nonnegative local coordinate factorization by integrating robust NMF, a robust local coordinate constraint, and local spline regression into a unified framework. We use the l2,1 norm for the loss function of the NMF and a local coordinate constraint term to make our method insensitive to outlier points and noises. In addition, we exploit the local and global consistencies of sample labels to guarantee that data representation is compact and discriminative. An efficient multiplicative updating algorithm is deduced to solve the novel loss function, followed by a strict proof of the convergence. Several experiments conducted in this study on face and gene datasets clearly indicate that the proposed method is more effective and robust compared to the state-of-the-art methods.
30

Qian, Bin, Lei Tong, Zhenmin Tang, and Xiaobo Shen. "Nonnegative matrix factorization with region sparsity learning for hyperspectral unmixing." International Journal of Wavelets, Multiresolution and Information Processing 15, no. 06 (November 2017): 1750063. http://dx.doi.org/10.1142/s0219691317500631.

Abstract:
Hyperspectral unmixing is one of the most important techniques in remote sensing image analysis tasks. In recent decades, nonnegative matrix factorization (NMF) has been shown to be effective for hyperspectral unmixing due to its strong ability to discover the latent structure. Most NMF methods put emphasis on the spectral information but ignore the spatial information, which is crucial for analyzing hyperspectral data. In this paper, we propose an improved NMF method, namely NMF with region sparsity learning (RSLNMF), to simultaneously consider both spectral and spatial information. RSLNMF defines a new sparsity learning model based on small homogeneous regions obtained via the graph cut algorithm. Thus RSLNMF is able to explore the relationship of spatially neighboring pixels within each region. An efficient optimization scheme is developed for the proposed RSLNMF, and its convergence is theoretically guaranteed. Experiments on both synthetic and real hyperspectral data validate the superiority of the proposed method over several state-of-the-art unmixing approaches.
31

Févotte, Cédric, Nancy Bertin, and Jean-Louis Durrieu. "Nonnegative Matrix Factorization with the Itakura-Saito Divergence: With Application to Music Analysis." Neural Computation 21, no. 3 (March 2009): 793–830. http://dx.doi.org/10.1162/neco.2008.04-08-771.

Abstract:
This letter presents theoretical, algorithmic, and experimental results about nonnegative matrix factorization (NMF) with the Itakura-Saito (IS) divergence. We describe how IS-NMF is underlaid by a well-defined statistical model of superimposed gaussian components and is equivalent to maximum likelihood estimation of variance parameters. This setting can accommodate regularization constraints on the factors through Bayesian priors. In particular, inverse-gamma and gamma Markov chain priors are considered in this work. Estimation can be carried out using a space-alternating generalized expectation-maximization (SAGE) algorithm; this leads to a novel type of NMF algorithm, whose convergence to a stationary point of the IS cost function is guaranteed. We also discuss the links between the IS divergence and other cost functions used in NMF, in particular, the Euclidean distance and the generalized Kullback-Leibler (KL) divergence. As such, we describe how IS-NMF can also be performed using a gradient multiplicative algorithm (a standard algorithm structure in NMF) whose convergence is observed in practice, though not proven. Finally, we report a furnished experimental comparative study of Euclidean-NMF, KL-NMF, and IS-NMF algorithms applied to the power spectrogram of a short piano sequence recorded in real conditions, with various initializations and model orders. Then we show how IS-NMF can successfully be employed for denoising and upmix (mono to stereo conversion) of an original piece of early jazz music. These experiments indicate that IS-NMF correctly captures the semantics of audio and is better suited to the representation of music signals than NMF with the usual Euclidean and KL costs.
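
A compact statement of the Gaussian composite model referred to above, up to terms that do not depend on the factors, is the following (a sketch in the usual IS-NMF notation; the paper gives the full derivation):

$$x_{fn} = \sum_{k=1}^{K} c_{k,fn}, \quad c_{k,fn} \sim \mathcal{N}_c\!\left(0,\ w_{fk} h_{kn}\right)
\;\Longrightarrow\;
-\log p(X \mid W, H) = \sum_{f,n} d_{IS}\!\left(|x_{fn}|^{2} \,\middle|\, [WH]_{fn}\right) + \text{const},$$

so maximum likelihood estimation of $W$ and $H$ under this model is exactly NMF of the power spectrogram with the Itakura-Saito divergence.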
32

Li, Xinqi, Jun Wang, and Sam Kwong. "A Discrete-Time Neurodynamic Approach to Sparsity-Constrained Nonnegative Matrix Factorization." Neural Computation 32, no. 8 (August 2020): 1531–62. http://dx.doi.org/10.1162/neco_a_01294.

Abstract:
Sparsity is a desirable property in many nonnegative matrix factorization (NMF) applications. Although some level of sparseness of NMF solutions can be achieved by using regularization, the resulting sparsity depends highly on the regularization parameter to be valued in an ad hoc way. In this letter we formulate sparse NMF as a mixed-integer optimization problem with sparsity as binary constraints. A discrete-time projection neural network is developed for solving the formulated problem. Sufficient conditions for its stability and convergence are analytically characterized by using Lyapunov's method. Experimental results on sparse feature extraction are discussed to substantiate the superiority of this approach to extracting highly sparse features.
33

Tang, Bing, Linyao Kang, Li Zhang, Feiyan Guo, and Haiwu He. "Collaborative Filtering Recommendation Using Nonnegative Matrix Factorization in GPU-Accelerated Spark Platform." Scientific Programming 2021 (January 4, 2021): 1–15. http://dx.doi.org/10.1155/2021/8841133.

Abstract:
Nonnegative matrix factorization (NMF) has been introduced as an efficient way to reduce the complexity of data compression, and thanks to its capability of extracting highly interpretable parts from data sets it has also been applied to various fields, such as recommendations, image analysis, and text clustering. However, as the size of the matrix increases, the processing speed of nonnegative matrix factorization becomes very slow. To solve this problem, this paper proposes a GPU-based parallel algorithm for NMF on the Spark platform, which makes full use of the advantages of the in-memory computation mode and GPU acceleration. The new GPU-accelerated NMF on the Spark platform is evaluated in a 4-node Spark heterogeneous cluster using Google Compute Engine by configuring each node with an NVIDIA K80 CUDA device, and experimental results indicate that it is competitive in terms of computational time against existing solutions on a variety of matrix orders. Furthermore, a GPU-accelerated NMF-based parallel collaborative filtering (CF) algorithm is also proposed, utilizing the advantages of data dimensionality reduction and feature extraction of NMF, as well as the multicore parallel computing mode of CUDA. Using real MovieLens data sets, experimental results show that the parallelization of NMF-based collaborative filtering on the Spark platform effectively outperforms traditional user-based and item-based CF with a higher processing speed and higher recommendation accuracy.
34

Wang, Dong-xia, Mao-song Jiang, Fang-lin Niu, Yu-dong Cao, and Cheng-xu Zhou. "Speech Enhancement Control Design Algorithm for Dual-Microphone Systems Using β-NMF in a Complex Environment." Complexity 2018 (September 9, 2018): 1–13. http://dx.doi.org/10.1155/2018/6153451.

Abstract:
Single-microphone speech enhancement algorithms based on nonnegative matrix factorization can only exploit the temporal and spectral diversity of the received signal, so their noise suppression performance degrades rapidly in a complex environment. Microphone arrays offer spatial selectivity and high signal gain, so they are suited to adverse noise conditions. In this paper, we present a new algorithm for speech enhancement based on two microphones with nonnegative matrix factorization. The adopted method can model the interchannel characteristics of each nonnegative matrix factorization basis, such as the amplitude ratios and the phase differences between channels. The results of the experiment confirm that the proposed algorithm is superior to other dual-microphone speech enhancement algorithms.
35

Zhou, Dan, Hai Yan Gao, and Yun Jie Zhang. "A Decorrelation-Based Nonnegative Matrix Factorization Algorithm for Face Recognition." Advanced Materials Research 651 (January 2013): 858–63. http://dx.doi.org/10.4028/www.scientific.net/amr.651.858.

Abstract:
Nonnegative Matrix Factorization (NMF) is among the most popular subspace methods, widely used in a variety of image processing problems. However, this approach is very time-consuming in face recognition due to the extremely high dimensionality of the original matrix. To remedy this limitation, this paper presents a Decorrelation-based NMF (DNMF) method. The proposed algorithm first reduces the dimension of the original matrix through a decorrelation preprocessing step in the spatial domain, and then uses a nearest neighbor classifier on the reduced subspace. The developed algorithm has been applied to the standard ORL face image database. Experimental results demonstrate the validity of this method.
36

Tong, Lei, Jing Yu, Chuangbai Xiao, and Bin Qian. "Hyperspectral unmixing via deep matrix factorization." International Journal of Wavelets, Multiresolution and Information Processing 15, no. 06 (November 2017): 1750058. http://dx.doi.org/10.1142/s0219691317500588.

Abstract:
Hyperspectral unmixing is one of the most important techniques in hyperspectral remote sensing image analysis. During the past decades, many models have been widely used in hyperspectral unmixing, such as the nonnegative matrix factorization (NMF) model, the sparse regression model, etc. Most recently, a new matrix factorization model, deep matrix factorization, was proposed and has shown good performance in the face recognition area. In this paper, we apply the deep matrix factorization (DMF) method to hyperspectral unmixing. Compared with traditional NMF-based unmixing methods, DMF can extract more information with its multiple-layer structure. An optimization algorithm is also proposed for DMF with two designed processes. Results on both synthetic and real data validate the effectiveness of this method and show that it outperforms several state-of-the-art unmixing approaches.
37

Esposito, Flavia. "A Review on Initialization Methods for Nonnegative Matrix Factorization: Towards Omics Data Experiments." Mathematics 9, no. 9 (April 29, 2021): 1006. http://dx.doi.org/10.3390/math9091006.

Abstract:
Nonnegative Matrix Factorization (NMF) has acquired a relevant role in the panorama of knowledge extraction, thanks to the peculiarity that non-negativity applies to both bases and weights, which allows meaningful interpretations and is consistent with the natural human part-based learning process. Nevertheless, most NMF algorithms are iterative, so initialization methods affect convergence behaviour, the quality of the final solution, and NMF performance in terms of the residual of the cost function. Studies on the impact of NMF initialization techniques have been conducted for text or image datasets, but very few considerations can be found in the literature when biological datasets are studied, even though NMFs have largely demonstrated their usefulness in better understanding biological mechanisms with omic datasets. This paper aims to present the state of the art of NMF initialization schemes along with some initial considerations on the impact of initialization methods when microarrays (a simple instance of omic data) are evaluated with NMF mechanisms. Using a series of measures to qualitatively examine the biological information extracted by a given NMF scheme, it preliminarily appears that some information (e.g., represented by genes) can be extracted regardless of the initialization scheme used.
38

LEE, HYEKYOUNG, YONG-DEOK KIM, ANDRZEJ CICHOCKI, and SEUNGJIN CHOI. "NONNEGATIVE TENSOR FACTORIZATION FOR CONTINUOUS EEG CLASSIFICATION." International Journal of Neural Systems 17, no. 04 (August 2007): 305–17. http://dx.doi.org/10.1142/s0129065707001159.

Abstract:
In this paper we present a method for continuous EEG classification, where we employ nonnegative tensor factorization (NTF) to determine discriminative spectral features and use the Viterbi algorithm to continuously classify multiple mental tasks. This is an extension of our previous work on the use of nonnegative matrix factorization (NMF) for EEG classification. Numerical experiments with two data sets from the BCI competition confirm the useful behavior of the method for continuous EEG classification.
39

Zdunek, Rafal, and Andrzej Cichocki. "Fast Nonnegative Matrix Factorization Algorithms Using Projected Gradient Approaches for Large-Scale Problems." Computational Intelligence and Neuroscience 2008 (2008): 1–13. http://dx.doi.org/10.1155/2008/939567.

Abstract:
Recently, a considerable growth of interest in projected gradient (PG) methods has been observed due to their high efficiency in solving large-scale convex minimization problems subject to linear constraints. Since the minimization problems underlying nonnegative matrix factorization (NMF) of large matrices well match this class of minimization problems, we investigate and test some recent PG methods in the context of their applicability to NMF. In particular, the paper focuses on the following modified methods: projected Landweber, Barzilai-Borwein gradient projection, projected sequential subspace optimization (PSESOP), interior-point Newton (IPN), and sequential coordinate-wise. The proposed and implemented NMF PG algorithms are compared with respect to their performance in terms of signal-to-interference ratio (SIR) and elapsed time, using a simple benchmark of mixed partially dependent nonnegative signals.
40

Liu, Chang, Kun He, Ji Liu Zhou, and Yan Li Zhu. "Facial Expression Recognition Based on Orthogonal Nonnegative CP Factorization." Advanced Materials Research 143-144 (October 2010): 111–15. http://dx.doi.org/10.4028/www.scientific.net/amr.143-144.111.

Abstract:
Facial expression recognition based on Non-negative Matrix Factorization (NMF) requires that the object images be vectorized. The vectorization leads to information loss, since the local structure of the images is lost. Moreover, NMF cannot guarantee the uniqueness of the decomposition. In order to remedy these limitations, the facial expression image is considered as a high-order tensor, and an Orthogonal Non-negative CP Factorization algorithm (ONNCP) is proposed. With the orthogonality constraint, the low-dimensional representations of samples are non-negative in ONNCP. The convergence of the algorithm is proved. The experiments indicate that, compared with other non-negative factorization algorithms, the algorithm proposed in this paper reduces the redundancy of the basis images and achieves a better recognition rate in facial expression recognition.
41

Hao, Yong-Jing, Ying-Lian Gao, Mi-Xiao Hou, Ling-Yun Dai, and Jin-Xing Liu. "Hypergraph Regularized Discriminative Nonnegative Matrix Factorization on Sample Classification and Co-Differentially Expressed Gene Selection." Complexity 2019 (August 19, 2019): 1–12. http://dx.doi.org/10.1155/2019/7081674.

Abstract:
Nonnegative Matrix Factorization (NMF) is a significant big data analysis technique. However, standard NMF regularized by a simple graph has no discriminative power, and traditional graph models cannot accurately capture the complex geometric relationships among data. To solve the above problems, this paper proposes a new method called Hypergraph Regularized Discriminative Nonnegative Matrix Factorization (HDNMF), which captures intrinsic geometry by constructing hypergraphs rather than simple graphs. The introduction of the hypergraph method allows high-order relationships between samples to be considered, and the introduction of label information enables the method to have a discriminative effect. Both the hypergraph Laplacian and the discriminative label information are utilized to learn the projection matrix in the standard method. In addition, we offer a corresponding multiplicative update solution for the optimization. Experiments indicate that the proposed method is more effective than earlier methods.
42

Laurberg, Hans, Mads Græsbøll Christensen, Mark D. Plumbley, Lars Kai Hansen, and Søren Holdt Jensen. "Theorems on Positive Data: On the Uniqueness of NMF." Computational Intelligence and Neuroscience 2008 (2008): 1–9. http://dx.doi.org/10.1155/2008/764206.

Abstract:
We investigate the conditions for which nonnegative matrix factorization (NMF) is unique and introduce several theorems which can determine whether the decomposition is in fact unique or not. The theorems are illustrated by several examples showing the use of the theorems and their limitations. We have shown that corruption of a unique NMF matrix by additive noise leads to a noisy estimation of the noise-free unique solution. Finally, we use a stochastic view of NMF to analyze which characterization of the underlying model will result in an NMF with small estimation errors.
43

Devarajan, Karthik, and Vincent C. K. Cheung. "On Nonnegative Matrix Factorization Algorithms for Signal-Dependent Noise with Application to Electromyography Data." Neural Computation 26, no. 6 (June 2014): 1128–68. http://dx.doi.org/10.1162/neco_a_00576.

Full text
Abstract:
Nonnegative matrix factorization (NMF) by the multiplicative updates algorithm is a powerful machine learning method for decomposing a high-dimensional nonnegative matrix V into two nonnegative matrices, W and H, where V ≈ WH. It has been successfully applied in the analysis and interpretation of large-scale data arising in neuroscience, computational biology, and natural language processing, among other areas. A distinctive feature of NMF is its nonnegativity constraints, which allow only additive linear combinations of the data, thus enabling it to learn parts that have distinct physical representations in reality. In this letter, we describe an information-theoretic approach to NMF for signal-dependent noise based on the generalized inverse Gaussian model. Specifically, we propose three novel algorithms in this setting, each based on multiplicative updates, and prove monotonicity of the updates using the EM algorithm. In addition, we develop algorithm-specific measures to evaluate their goodness of fit on data. Our methods are demonstrated using experimental data from electromyography studies, as well as simulated data in the extraction of muscle synergies, and are compared with existing algorithms for signal-dependent noise.
APA, Harvard, Vancouver, ISO, and other styles
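The entry above starts from the standard multiplicative-update factorization V ≈ WH before developing its signal-dependent-noise variants. For orientation, here is a minimal sketch of the baseline Lee–Seung updates for the Euclidean cost; the function name, random initialisation, and toy data are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-10, seed=0):
    """Classic Lee-Seung multiplicative updates minimizing ||V - W H||_F^2.

    This is only the baseline scheme the entry above builds on; the
    paper's signal-dependent-noise variants use different update rules.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # H <- H * (W^T V) / (W^T W H)
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # W <- W * (V H^T) / (W H H^T)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Tiny usage example on synthetic nonnegative data.
V = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = nmf_multiplicative(V, rank=3)
print(np.linalg.norm(V - W @ H))
```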
44

Ma, Yehao, Changcheng Shi, Jialin Xu, Sijia Ye, Huilin Zhou, and Guokun Zuo. "A Novel Muscle Synergy Extraction Method Used for Motor Function Evaluation of Stroke Patients: A Pilot Study." Sensors 21, no. 11 (June 1, 2021): 3833. http://dx.doi.org/10.3390/s21113833.

Full text
Abstract:
In this paper, we present a novel muscle synergy extraction method based on multivariate curve resolution–alternating least squares (MCR-ALS) to overcome the limitation of the nonnegative matrix factorization (NMF) method in extracting non-sparse muscle synergies, and we study its potential application for evaluating the motor function of stroke survivors. NMF is the most widely used method for muscle synergy extraction. However, NMF is susceptible to the sparseness of the components and usually provides inferior reliability, which significantly limits the wider application of muscle synergy analysis. In this study, MCR-ALS was employed to extract muscle synergies from electromyography (EMG) data. Its performance was compared with two other matrix factorization algorithms, NMF and self-modeling mixture analysis (SMMA). Simulated data sets were used to explore the influence of sparseness and noise on the extracted synergies. The synergies estimated by MCR-ALS were the most similar to the true synergies compared with SMMA and NMF. MCR-ALS was then used to analyze the muscle synergy characteristics of upper limb movements performed by healthy (n = 11) and stroke (n = 5) subjects. Repeatability and intra-subject consistency were used to evaluate the performance of MCR-ALS. MCR-ALS provided much higher repeatability and intra-subject consistency than NMF, which is important for the reliability of motor function evaluation. The stroke subjects had lower intra-subject consistency and seemingly more synergies than the healthy subjects. Thus, MCR-ALS is a promising muscle synergy analysis method for the motor function evaluation of stroke patients.
APA, Harvard, Vancouver, ISO, and other styles
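Since the entry above contrasts MCR-ALS with NMF on EMG envelopes, the sketch below shows a plain alternating nonnegative least squares loop of the kind both families build on; the function name and the normalisation step are illustrative assumptions, not the MCR-ALS procedure used in the paper.

```python
import numpy as np
from scipy.optimize import nnls

def alternating_nnls(X, n_synergies, n_iter=50, seed=0):
    """Plain alternating nonnegative least squares, X ~= W @ H.

    X is a muscles-by-samples EMG envelope matrix (nonnegative). This is
    only a generic stand-in for the ALS core shared by NMF-type and
    MCR-ALS-type methods; the constraints and initialisation that make
    MCR-ALS different are not reproduced here.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, n_synergies))
    for _ in range(n_iter):
        # With W fixed, each column of H is a small NNLS problem, and vice versa.
        H = np.column_stack([nnls(W, X[:, j])[0] for j in range(n)])
        W = np.column_stack([nnls(H.T, X[i, :])[0] for i in range(m)]).T
        # Normalise the synergy vectors so that scaling lives in the activations.
        norms = np.linalg.norm(W, axis=0) + 1e-12
        W /= norms
        H *= norms[:, None]
    return W, H
```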
45

Zdunek, Rafał. "Regularized nonnegative matrix factorization: Geometrical interpretation and application to spectral unmixing." International Journal of Applied Mathematics and Computer Science 24, no. 2 (June 26, 2014): 233–47. http://dx.doi.org/10.2478/amcs-2014-0017.

Full text
Abstract:
Nonnegative Matrix Factorization (NMF) is an important tool in data spectral analysis. However, when the mixing matrix or the sources are not sufficiently sparse, NMF of an observation matrix is not unique. Many numerical optimization algorithms, which assure fast convergence for specific problems, may easily get stuck in unfavorable local minima of an objective function, resulting in very low performance. In this paper, we discuss the Tikhonov-regularized version of the Fast Combinatorial NonNegative Least Squares (FC-NNLS) algorithm (proposed by Benthem and Keenan in 2004), where the regularization parameter starts from a large value and decreases gradually with the iterations. A geometrical analysis and justification of this approach are presented. Numerical experiments, carried out on various benchmarks of spectral signals, demonstrate that this kind of regularization, when applied to the FC-NNLS algorithm, is essential for obtaining good performance.
APA, Harvard, Vancouver, ISO, and other styles
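The entry above describes a Tikhonov-regularized NNLS scheme whose regularization parameter starts large and decays over the iterations. The sketch below illustrates that schedule with a plain column-wise NNLS solver; the parameters alpha0 and decay are illustrative assumptions, and the FC-NNLS combinatorial machinery of the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import nnls

def tikhonov_nnls_nmf(V, rank, alpha0=10.0, decay=0.7, n_iter=30, seed=0):
    """NMF by alternating Tikhonov-regularized NNLS, V ~= W @ H.

    Each subproblem min ||V - W H||^2 + alpha ||H||^2 is solved through
    an augmented least-squares system; alpha shrinks geometrically, which
    discourages early convergence to poor local minima.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    alpha = alpha0
    for _ in range(n_iter):
        # Regularized NNLS for H, one column at a time.
        A = np.vstack([W, np.sqrt(alpha) * np.eye(rank)])
        B = np.vstack([V, np.zeros((rank, n))])
        H = np.column_stack([nnls(A, B[:, j])[0] for j in range(n)])
        # Same regularized subproblem for W on the transposed system.
        A = np.vstack([H.T, np.sqrt(alpha) * np.eye(rank)])
        B = np.vstack([V.T, np.zeros((rank, m))])
        W = np.column_stack([nnls(A, B[:, i])[0] for i in range(m)]).T
        alpha *= decay  # gradually relax the regularization
    return W, H
```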
46

Lai, Yeuntyng, Morihiro Hayashida, and Tatsuya Akutsu. "Survival Analysis by Penalized Regression and Matrix Factorization." Scientific World Journal 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/632030.

Full text
Abstract:
Because every disease has its unique survival pattern, it is necessary to find a suitable model to simulate follow-ups. DNA microarray is a useful technique for detecting thousands of gene expressions at one time and is usually employed to classify different types of cancer. We propose methods combining penalized regression models and nonnegative matrix factorization (NMF) for predicting survival. We tried L1 (lasso), L2 (ridge), and combined L1-L2 (elastic net) penalized regression on diffuse large B-cell lymphoma (DLBCL) patients' microarray data and found that the combined L1-L2 method predicts survival best, with the smallest log-rank P value. Furthermore, 80% of the selected genes have been reported to correlate with carcinogenesis or lymphoma. Through NMF we found that DLBCL patients can be divided clearly into 4 groups, implying that DLBCL may have 4 subtypes with slightly different survival patterns. Next, we excluded some patients who were indicated as hard to classify by NMF and ran the three penalized regression models again. The performance of survival prediction improved, with lower log-rank P values. Therefore, we conclude that, after preselection of patients by NMF, penalized regression models can predict DLBCL patients' survival successfully.
APA, Harvard, Vancouver, ISO, and other styles
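To make the two-step pipeline in the entry above concrete, the sketch below groups patients by their dominant NMF metagene and then fits a penalized regression on the expression matrix; the toy data are synthetic, and ElasticNet on log survival time is only a simplified stand-in for the penalized survival models used in the paper.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import ElasticNet

# Hypothetical toy data: rows = patients, columns = genes (nonnegative expression).
rng = np.random.default_rng(0)
X = rng.random((60, 200))
survival_time = rng.exponential(scale=24.0, size=60)  # stand-in outcome

# Step 1: NMF groups patients by the dominant metagene in the coefficient matrix.
model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)          # patients x metagenes
groups = W.argmax(axis=1)           # crude cluster assignment per patient

# Step 2: penalized regression on the expression features. The paper uses
# penalized Cox-type survival models; ElasticNet on log survival time is
# only a simplified placeholder for illustration.
reg = ElasticNet(alpha=0.1, l1_ratio=0.5, max_iter=10000)
reg.fit(X, np.log(survival_time))
selected = np.flatnonzero(reg.coef_)  # genes retained by the L1 part
print(len(selected), "genes selected; group sizes:", np.bincount(groups))
```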
47

Gillis, Nicolas, and François Glineur. "Accelerated Multiplicative Updates and Hierarchical ALS Algorithms for Nonnegative Matrix Factorization." Neural Computation 24, no. 4 (April 2012): 1085–105. http://dx.doi.org/10.1162/neco_a_00256.

Full text
Abstract:
Nonnegative matrix factorization (NMF) is a data analysis technique used in a great variety of applications such as text mining, image processing, hyperspectral data analysis, computational biology, and clustering. In this letter, we consider two well-known algorithms designed to solve NMF problems: the multiplicative updates of Lee and Seung and the hierarchical alternating least squares of Cichocki et al. We propose a simple way to significantly accelerate these schemes, based on a careful analysis of the computational cost needed at each iteration, while preserving their convergence properties. This acceleration technique can also be applied to other algorithms, which we illustrate on the projected gradient method of Lin. The efficiency of the accelerated algorithms is empirically demonstrated on image and text data sets and compares favorably with a state-of-the-art alternating nonnegative least squares algorithm.
APA, Harvard, Vancouver, ISO, and other styles
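The acceleration idea in the entry above amounts to reusing the expensive matrix products over several cheap inner updates. The sketch below applies that idea to a basic HALS loop; the function name and the fixed inner-iteration count n_inner are illustrative assumptions rather than the paper's dynamic stopping rule.

```python
import numpy as np

def hals_nmf(V, rank, n_outer=100, n_inner=3, eps=1e-10, seed=0):
    """Hierarchical ALS for V ~= W @ H with a few cheap inner passes.

    The products V @ H.T and H @ H.T are computed once per outer
    iteration and reused over several inexpensive column-wise updates,
    which is the gist of the acceleration discussed in the entry above.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_outer):
        # Update W column by column.
        VHt, HHt = V @ H.T, H @ H.T
        for _ in range(n_inner):
            for k in range(rank):
                W[:, k] = np.maximum(
                    eps, W[:, k] + (VHt[:, k] - W @ HHt[:, k]) / (HHt[k, k] + eps))
        # Update H row by row on the transposed problem.
        WtV, WtW = W.T @ V, W.T @ W
        for _ in range(n_inner):
            for k in range(rank):
                H[k, :] = np.maximum(
                    eps, H[k, :] + (WtV[k, :] - WtW[k, :] @ H) / (WtW[k, k] + eps))
    return W, H
```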
48

Jiang, Lin Cheng, Wen Tang Tan, Zhen Wen Wang, Feng Jing Yin, Bin Ge, and Wen Dong Xiao. "Improved Nonnegative Matrix Factorization Based Feature Selection for High Dimensional Data Analysis." Applied Mechanics and Materials 347-350 (August 2013): 2344–48. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.2344.

Full text
Abstract:
Feature selection has become a focus of research in applications with high-dimensional data. Nonnegative matrix factorization (NMF) is a good method for dimensionality reduction, but it cannot select an optimal feature subset because it is a feature extraction method. In this paper, a two-step strategy based on improved NMF is proposed. The first step is to obtain the basis of each category in the dataset by NMF. Added constraints guarantee that these bases are sparse and mostly distinct from each other, which contributes to classification. An auxiliary function is used to prove that the algorithm converges. In the second step, the classic ReliefF algorithm is used to weight each feature with respect to all the basis vectors and to choose the optimal feature subset. The experimental results reveal that the proposed method can select a representative and relevant feature subset that is effective in improving the performance of the classifier.
APA, Harvard, Vancouver, ISO, and other styles
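As a rough illustration of the two-step strategy in the entry above, the sketch below fits one NMF per class and then scores features against the stacked basis vectors; the scoring rule is a crude stand-in for ReliefF, and the data, function name, and rank are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

def class_basis_feature_scores(X, y, rank=5, seed=0):
    """Two-step sketch: per-class NMF bases, then a simple feature score.

    The entry above weights features with ReliefF over all basis vectors;
    here a much cruder score (spread of a feature's loadings across the
    class-specific bases) is used purely to illustrate the pipeline.
    X: samples x features (nonnegative), y: integer class labels.
    """
    bases = []
    for c in np.unique(y):
        model = NMF(n_components=rank, init="nndsvda", max_iter=500, random_state=seed)
        model.fit(X[y == c])
        bases.append(model.components_)       # rank x n_features basis for class c
    B = np.vstack(bases)                       # (n_classes*rank) x n_features
    # Score each feature by how unevenly it loads across all basis vectors.
    return B.max(axis=0) - B.mean(axis=0)

# Usage sketch on hypothetical data: keep the 20 highest-scoring features.
rng = np.random.default_rng(0)
X, y = rng.random((100, 50)), rng.integers(0, 3, size=100)
scores = class_basis_feature_scores(X, y)
selected = np.argsort(scores)[::-1][:20]
```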
49

Yamamoto, Naoki, Jun Murakami, Kei Fujii, Chiharu Okuma, Satoko Saito, Takashi Izumi, and Nozomi Hayashida. "Measurement and Analysis of the Functional Independence Measure Data by Using Nonnegative Matrix Factorization Method." Advanced Materials Research 718-720 (July 2013): 630–35. http://dx.doi.org/10.4028/www.scientific.net/amr.718-720.630.

Full text
Abstract:
In this paper, we describe how to adapt the nonnegative matrix factorization (NMF) method to medical data, especially functional independence measure (FIM) data, and report experimental results. From the results obtained by applying the method to medical data actually measured in a hospital, we confirmed that the NMF method is effective for analyzing patients' characteristics related to disability and recovery tendency.
APA, Harvard, Vancouver, ISO, and other styles
50

Lai, Shu-Zhen, Hou-Biao Li, and Zu-Tao Zhang. "A Symmetric Rank-One Quasi-Newton Method for Nonnegative Matrix Factorization." ISRN Applied Mathematics 2014 (January 22, 2014): 1–11. http://dx.doi.org/10.1155/2014/846483.

Full text
Abstract:
As is well known, nonnegative matrix factorization (NMF) is a dimension reduction method that has been widely used in image processing, text compression, signal processing, and so forth. In this paper, an algorithm for nonnegative matrix approximation is proposed. The method is mainly based on a relaxed active set and a quasi-Newton type algorithm, using symmetric rank-one and negative-curvature direction techniques to approximate the Hessian matrix. The method improves on some recent results. In addition, numerical experiments are presented on synthetic data, image processing, and text clustering. Compared with six other nonnegative matrix approximation methods, this method is more robust in almost all cases.
APA, Harvard, Vancouver, ISO, and other styles
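The entry above relies on a symmetric rank-one (SR1) approximation of the Hessian inside a quasi-Newton scheme. The snippet below shows just the SR1 update formula with its usual skip safeguard; the active-set strategy and negative-curvature handling of the paper are not reproduced.

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-one (SR1) update of a Hessian approximation B.

    s = x_new - x_old, y = grad_new - grad_old. This is only the update
    formula at the core of SR1 quasi-Newton schemes; how it is embedded
    in the NMF subproblems is specific to the paper.
    """
    r = y - B @ s
    denom = r @ s
    # Standard safeguard: skip the update when the denominator is tiny.
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom
```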