Academic literature on the topic 'Subspace approach'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Subspace approach.'


Journal articles on the topic "Subspace approach"

1. Vijendra, Singh, and Sahoo Laxman. "Subspace Clustering of High-Dimensional Data: An Evolutionary Approach." Applied Computational Intelligence and Soft Computing 2013 (2013): 1–12. http://dx.doi.org/10.1155/2013/863146.

Abstract:
Clustering high-dimensional data has been a major challenge due to the inherent sparsity of the points. Most existing clustering algorithms become substantially inefficient if the required similarity measure is computed between data points in the full-dimensional space. In this paper, we present a robust multi-objective subspace clustering (MOSCL) algorithm for the challenging problem of high-dimensional clustering. The first phase of MOSCL performs subspace relevance analysis by detecting dense and sparse regions and their locations in the data set. After detecting dense regions, it eliminates outliers. MOSCL discovers subspaces in dense regions of the data set and produces subspace clusters. In thorough experiments on synthetic and real-world data sets, we demonstrate that MOSCL is superior to the PROCLUS clustering algorithm for subspace clustering. Additionally, we investigate the effect of the first phase's dense-region detection on the results of subspace clustering. Our results indicate that removing outliers improves the accuracy of subspace clustering. The clustering results are validated by the clustering error (CE) distance on various data sets. MOSCL can discover the clusters in all subspaces with high quality, and it also outperforms PROCLUS in efficiency.
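The first phase's dense-region idea can be illustrated with a toy sketch: flag a dimension as relevant when one histogram bin holds far more points than a uniform spread would predict. The criterion, thresholds, and data below are illustrative assumptions, not MOSCL's actual procedure:

```python
import numpy as np

def dense_dimensions(X, n_bins=10, density_ratio=2.0):
    """Flag dimensions containing a dense region: a bin is 'dense' if its
    count exceeds density_ratio times the expected uniform count.
    (Hypothetical criterion for illustration only.)"""
    n, d = X.shape
    expected = n / n_bins
    relevant = []
    for j in range(d):
        counts, _ = np.histogram(X[:, j], bins=n_bins)
        if counts.max() > density_ratio * expected:
            relevant.append(j)
    return relevant

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
X[:250, 0] = rng.normal(0.5, 0.02, size=250)  # plant a dense region in dimension 0
print(dense_dimensions(X))  # dimension 0 is flagged; the uniform dimensions are not
```

A real subspace relevance analysis would also have to locate the dense regions and handle correlated dimensions; this sketch only shows the density-contrast test.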
2. Blajer, W. "A Projection Method Approach to Constrained Dynamic Analysis." Journal of Applied Mechanics 59, no. 3 (1992): 643–49. http://dx.doi.org/10.1115/1.2893772.

Abstract:
The paper presents a unified approach to the dynamic analysis of mechanical systems subject to (ideal) holonomic and/or nonholonomic constraints. The approach is based on the projection of the initial (constraint reaction-containing) dynamical equations into the orthogonal and tangent subspaces: the orthogonal subspace is spanned by the constraint vectors, and the tangent subspace complements it in the system’s configuration space. The tangential projection gives the reaction-free (or purely kinetic) equations of motion, whereas the orthogonal projection determines the constraint reactions. Simplifications due to the use of independent variables are indicated, and examples illustrating the concepts are included.
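The two projections can be sketched numerically. Given a mass matrix M, applied forces f, constraint Jacobian J, and the constraint acceleration right-hand side b (so that J q̈ = b), the orthogonal part yields the constraint reactions and the remainder gives reaction-consistent accelerations. A minimal sketch in the spirit of the projection approach, not Blajer's exact formulation, using a point mass on a circular constraint as the example:

```python
import numpy as np

def project_dynamics(M, f, J, b):
    """Split M qdd = f + J^T lam, subject to J qdd = b, into the
    orthogonal part (constraint reactions lam) and the resulting
    reaction-consistent accelerations qdd."""
    Minv = np.linalg.inv(M)
    # orthogonal projection: solve for the constraint multipliers
    lam = np.linalg.solve(J @ Minv @ J.T, b - J @ Minv @ f)
    # accelerations once the reactions are accounted for
    qdd = Minv @ (f + J.T @ lam)
    return qdd, lam

# Point mass on the circular constraint x^2 + y^2 = L^2 (planar pendulum)
m, g = 1.0, 9.81
q = np.array([0.0, -1.0])    # at the bottom of the circle
qd = np.array([1.0, 0.0])    # moving horizontally with unit speed
M = m * np.eye(2)
f = np.array([0.0, -m * g])                 # gravity
J = np.array([[2.0 * q[0], 2.0 * q[1]]])    # constraint Jacobian
b = np.array([-2.0 * (qd @ qd)])            # from differentiating J(q) qd = 0
qdd, lam = project_dynamics(M, f, J, b)
print(qdd)  # [0, 1]: purely centripetal v^2/L upward; gravity is carried by the reaction
```

At the bottom of the circle gravity is entirely radial, so the tangential (reaction-free) acceleration is zero and the remaining acceleration is the centripetal term v²/L.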
3. Fatehi, Kavan, Mohsen Rezvani, Mansoor Fateh, and Mohammad-Reza Pajoohan. "Subspace Clustering for High-Dimensional Data Using Cluster Structure Similarity." International Journal of Intelligent Information Technologies 14, no. 3 (2018): 38–55. http://dx.doi.org/10.4018/ijiit.2018070103.

Abstract:
Because of the curse of dimensionality in high-dimensional data, a significant amount of research has recently been conducted on subspace clustering, which aims at discovering clusters embedded in any possible combination of attributes. The main goal of subspace clustering algorithms is to find all clusters in all subspaces. Previous studies have mostly generated redundant subspace clusters, leading to a loss of clustering accuracy and increased running time. This article suggests a bottom-up density-based approach in which the cluster structure serves as a similarity measure to generate the optimal subspaces, raising the accuracy of the subspace clustering. Based on this idea, the algorithm discovers similar subspaces by considering similarity in their cluster structure, combines them, and clusters the data in the new subspaces again. Finally, the algorithm determines all the subspaces and finds all clusters within them. Experiments on various synthetic and real datasets show that the results of the proposed approach are significantly better in quality and runtime than the state of the art in clustering high-dimensional data.
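One simple stand-in for comparing the cluster structure of two candidate subspaces is pairwise label agreement (the Rand index) between the clusterings each subspace produces; the paper's actual similarity measure may differ, but the idea of merging subspaces with matching structure can be sketched with it:

```python
from itertools import combinations

def rand_index(a, b):
    """Fraction of point pairs on which two clusterings agree: both place
    the pair in the same cluster, or both place it in different clusters."""
    pairs = list(combinations(range(len(a)), 2))
    agree = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
    return agree / len(pairs)

# Clusterings found in two candidate subspaces: identical structure up to
# relabelling, so the subspaces would be considered similar and merged.
print(rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0
# Disagreeing structure: the subspaces would be kept apart.
print(rand_index([0, 0, 1, 1], [0, 1, 0, 1]))
```

Because the index compares pair relations rather than raw labels, it is invariant to how clusters are numbered in each subspace.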
4. Wang, Xing, Jun Wang, Carlotta Domeniconi, Guoxian Yu, Guoqiang Xiao, and Maozu Guo. "Multiple Independent Subspace Clusterings." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5353–60. http://dx.doi.org/10.1609/aaai.v33i01.33015353.

Abstract:
Multiple clustering aims at discovering diverse ways of organizing data into clusters. Despite the progress made, it is still a challenge for users to analyze and understand the distinctive structure of each output clustering. To ease this process, we consider diverse clusterings embedded in different subspaces, and analyze the embedding subspaces to shed light on the structure of each clustering. To this end, we provide a two-stage approach called MISC (Multiple Independent Subspace Clusterings). In the first stage, MISC uses independent subspace analysis to seek multiple statistically independent (i.e., non-redundant) subspaces, and determines the number of subspaces via the minimum description length principle. In the second stage, to account for the intrinsic geometric structure of samples embedded in each subspace, MISC performs graph-regularized semi-nonnegative matrix factorization to explore clusters. It additionally integrates the kernel trick into the matrix factorization to handle non-linearly separable clusters. Experimental results on synthetic datasets show that MISC can find different interesting clusterings from the sought independent subspaces, and it also outperforms other related and competitive approaches on real-world datasets.
5. Sia, Florence, and Rayner Alfred. "Tree-based mining contrast subspace." International Journal of Advances in Intelligent Informatics 5, no. 2 (2019): 169. http://dx.doi.org/10.26555/ijain.v5i2.359.

Abstract:
All existing contrast subspace mining methods employ a density-based likelihood contrast scoring function to measure the likelihood of a query object belonging to a target class rather than the other class in a subspace. However, density tends to decrease as the dimensionality of a subspace increases, which leads these methods to identify inaccurate contrast subspaces for a given query object. This paper proposes a novel contrast subspace mining method that employs a tree-based likelihood contrast scoring function, which is not affected by subspace dimensionality. The tree-based measure recursively binary-partitions the subspace so that objects belonging to the target class are grouped together and separated from objects of the other class. In a contrast subspace, the query object should fall in a group containing more objects of the target class than of the other class. The method incorporates a feature selection approach to find a subset of one-dimensional subspaces with high likelihood contrast scores with respect to the query object; contrast subspaces are then searched through this selected subset. An experiment is conducted to evaluate the effectiveness of the tree-based method in terms of classification accuracy. The results show that the proposed method achieves higher classification accuracy and outperforms the existing method on several real-world data sets.
6. Ratilal, Purnima, Peter Gerstoft, and Joo Thiam Goh. "Subspace Approach to Inversion by Genetic Algorithms Involving Multiple Frequencies." Journal of Computational Acoustics 06, no. 01n02 (1998): 99–115. http://dx.doi.org/10.1142/s0218396x98000090.

Abstract:
Based on waveguide physics, a subspace inversion approach is proposed. It is observed that the ability to estimate a given parameter depends on its sensitivity to the acoustic wavefield, and this sensitivity depends on frequency. At low frequencies the bottom parameters are the most sensitive, and at high frequencies the geometric parameters are the most sensitive. Thus, the parameter vector to be determined is split into two subspaces, and only the part of the data that is most influenced by the parameters in each subspace is used. The data sets from the Geoacoustic Inversion Workshop (June 1997) are inverted to demonstrate the approach. In each subspace, Genetic Algorithms are used for the optimization; this provides the flexibility to search over a wide range of parameters and also helps in selecting the data sets to be used in the inversion. During optimization, the responses from many environmental parameter sets are computed in order to estimate the a posteriori probabilities of the model parameters. Thus the uniqueness and uncertainty of the model parameters are assessed. Using data from several frequencies to estimate a smaller subspace of parameters iteratively provides stability and greater accuracy in the estimated parameters.
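The iterative two-subspace idea can be sketched with a toy forward model in which low frequencies respond mainly to one parameter and high frequencies to the other. Random search stands in for the paper's Genetic Algorithm, and the model, parameter names, and values are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
true = {"bottom": 1.5, "geom": 0.3}   # hypothetical 'true' environment

def forward(freq, bottom, geom):
    # Toy forward model: low frequencies respond mainly to the bottom
    # parameter, high frequencies mainly to the geometric parameter.
    return bottom / (1.0 + freq) + geom * freq

low_f, high_f = 0.1, 10.0
obs_low = forward(low_f, **true)
obs_high = forward(high_f, **true)

def random_search(misfit, lo, hi, n=2000):
    # Random search as a stand-in for the Genetic Algorithm optimizer.
    cand = rng.uniform(lo, hi, n)
    return cand[np.argmin([misfit(c) for c in cand])]

est = {"bottom": 1.0, "geom": 1.0}
for _ in range(3):   # alternate between the two parameter subspaces
    est["bottom"] = random_search(
        lambda b: abs(forward(low_f, b, est["geom"]) - obs_low), 0.0, 3.0)
    est["geom"] = random_search(
        lambda g: abs(forward(high_f, est["bottom"], g) - obs_high), 0.0, 1.0)
print(est)   # converges toward bottom = 1.5, geom = 0.3
```

Each subspace is inverted only against the data band where its parameters dominate, which is what stabilizes the alternation.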
7. Wang, Junpeng, Xiaotong Liu, and Han-Wei Shen. "High-dimensional data analysis with subspace comparison using matrix visualization." Information Visualization 18, no. 1 (2017): 94–109. http://dx.doi.org/10.1177/1473871617733996.

Abstract:
Due to the intricate relationship between different dimensions of high-dimensional data, subspace analysis is often conducted to decompose dimensions and give prominence to certain subsets of dimensions, i.e. subspaces. Exploring and comparing subspaces are important to reveal the underlying features of subspaces, as well as to portray the characteristics of individual dimensions. To date, most of the existing high-dimensional data exploration and analysis approaches rely on dimensionality reduction algorithms (e.g. principal component analysis and multi-dimensional scaling) to project high-dimensional data, or their subspaces, to two-dimensional space and employ scatterplots for visualization. However, the dimensionality reduction algorithms are sometimes difficult to fine-tune and scatterplots are not effective for comparative visualization, making subspace comparison hard to perform. In this article, we aggregate high-dimensional data or their subspaces by computing pair-wise distances between all data items and showing the distances with matrix visualizations to present the original high-dimensional data or subspaces. Our approach enables effective visual comparisons among subspaces, which allows users to further investigate the characteristics of individual dimensions by studying their behaviors in similar subspaces. Through subspace comparisons, we identify dominant, similar, and conforming dimensions in different subspace contexts of synthetic and real-world high-dimensional data sets. Additionally, we present a prototype that integrates parallel coordinates plot and matrix visualization for high-dimensional data exploration and incremental dimensionality analysis, which also allows users to further validate the dimension characterization results derived from the subspace comparisons.
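The core computation behind the matrix views, a pairwise distance matrix restricted to a chosen subspace, is straightforward to sketch (the data and subspace choice below are illustrative):

```python
import numpy as np

def subspace_distance_matrix(X, dims):
    """Pairwise Euclidean distances between all data items, computed only
    over the chosen subset of dimensions (the subspace); this matrix is
    what the matrix visualization then displays."""
    S = X[:, dims]
    diff = S[:, None, :] - S[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

X = np.array([[0.0, 0.0, 5.0],
              [3.0, 4.0, 5.0],
              [0.0, 0.0, 9.0]])
D = subspace_distance_matrix(X, [0, 1])   # compare items within subspace {0, 1}
print(D)
```

Note that items 0 and 2 coincide in subspace {0, 1} even though they differ in the full space; comparing such matrices across subspaces is exactly what reveals which dimensions dominate where.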
8. Zhang, Yingwei, Lingjun Zhang, and Hailong Zhang. "Fault Detection for Industrial Processes." Mathematical Problems in Engineering 2012 (2012): 1–18. http://dx.doi.org/10.1155/2012/757828.

Abstract:
A new fault-relevant KPCA algorithm is proposed, and a fault detection approach is built on it. The proposed method further decomposes both the KPCA principal space and the residual space into two subspaces. Compared with traditional statistical techniques, the fault subspace is separated based on the fault-relevant influence. The method can find fault-relevant principal directions and principal components of the systematic subspace and the residual subspace for process monitoring. The proposed monitoring approach is applied to the Tennessee Eastman process and a penicillin fermentation process. The simulation results show the effectiveness of the proposed method.
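The principal/residual-subspace monitoring idea can be sketched with plain linear PCA, a simplified stand-in for the paper's fault-relevant KPCA, using Hotelling's T² for the principal subspace and the Q statistic (squared prediction error) for the residual subspace. The data, sizes, and fault below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
# Normal-operation training data: 2 latent factors driving 5 measured variables
T = rng.normal(size=(200, 2))
W = rng.normal(size=(2, 5))
X = T @ W + 0.05 * rng.normal(size=(200, 5))
mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd

# Split the variable space into principal and residual subspaces
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 2
P = Vt[:k].T                     # principal directions (systematic subspace)
var = s[:k] ** 2 / (len(X) - 1)  # score variance along each direction

def t2_q(x):
    xs = (x - mu) / sd
    t = xs @ P                                # scores in the principal subspace
    t2 = float((t ** 2 / var).sum())          # Hotelling's T^2
    q = float(((xs - t @ P.T) ** 2).sum())    # Q: energy left in the residual subspace
    return t2, q

normal_sample = T[0] @ W                                   # in-control sample
faulty_sample = normal_sample + np.array([0.0, 0.0, 3.0, 0.0, 0.0])  # sensor bias
t2n, qn = t2_q(normal_sample)
t2f, qf = t2_q(faulty_sample)
print(qn, qf)
```

The sensor fault leaves the learned correlation structure, so it inflates the residual-subspace statistic Q far more than the in-control sample's value; the paper's contribution is to further split each of these two spaces by fault relevance.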
9. Ying Han, Pang, Andrew Teoh Beng Jin, and Lim Heng Siong. "Eigenvector Weighting Function in Face Recognition." Discrete Dynamics in Nature and Society 2011 (2011): 1–15. http://dx.doi.org/10.1155/2011/521935.

Abstract:
Graph-based subspace learning is a class of dimensionality reduction techniques used in face recognition. The technique reveals the local manifold structure of face data that is hidden in the image space via a linear projection. However, real-world face data may be too complex to measure due to both external imaging noise and the intra-class variations of the face images. Hence, features extracted by the graph-based technique can be noisy. An appropriate weight should be imposed on the data features for better data discrimination. In this paper, a piecewise weighting function, known as the Eigenvector Weighting Function (EWF), is proposed and implemented in two graph-based subspace learning techniques, namely Locality Preserving Projection and Neighbourhood Preserving Embedding. Specifically, the computed projection subspace of the learning approach is decomposed into three partitions: a subspace due to intra-class variations, an intrinsic face subspace, and a subspace attributed to imaging noise. Projected data features are weighted differently in these subspaces to emphasize the intrinsic face subspace while penalizing the other two subspaces. Experiments on the FERET and FRGC databases are conducted to show the promising performance of the proposed technique.
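The three-partition weighting idea can be sketched as scaling the projected features differently in each partition. The basis, split points, and weights below are illustrative assumptions, not EWF's actual piecewise function:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(8, 12))                     # 8 faces, 12-dim features (toy sizes)
V, _ = np.linalg.qr(rng.normal(size=(12, 12)))   # stand-in for a learned projection basis

# Hypothetical partition of the projection subspace: leading eigenvectors
# dominated by intra-class variation, middle ones intrinsic, trailing ones noise.
k_intra, k_face, k_noise = 3, 6, 3
w = np.concatenate([np.full(k_intra, 0.5),   # suppress intra-class variation
                    np.full(k_face, 1.0),    # keep the intrinsic face subspace
                    np.full(k_noise, 0.1)])  # heavily penalise the noise subspace

Y = (X @ V) * w   # project, then weight each eigenvector's feature differently
print(Y.shape)
```

Matching or a classifier would then operate on the weighted features Y, where the intrinsic partition dominates the distance computation.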
10. Sun, Chengli, Jianxiao Xie, and Yan Leng. "A Signal Subspace Speech Enhancement Approach Based on Joint Low-Rank and Sparse Matrix Decomposition." Archives of Acoustics 41, no. 2 (2016): 245–54. http://dx.doi.org/10.1515/aoa-2016-0024.

Abstract:
Subspace-based methods have been effectively used to estimate enhanced speech from noisy speech samples. In traditional subspace approaches, a critical step is the splitting of the two invariant subspaces associated with signal and noise via subspace decomposition, which is often performed by singular value decomposition or eigenvalue decomposition. However, these decomposition algorithms are highly sensitive to the presence of large corruptions, resulting in a large amount of residual noise within the enhanced speech in low signal-to-noise ratio (SNR) situations. In this paper, a subspace method based on joint low-rank and sparse matrix decomposition (JLSMD) is proposed for speech enhancement. In the proposed method, we first structure the corrupted data as a Toeplitz matrix and estimate its effective rank value for the underlying clean speech matrix. Then the subspace decomposition is performed by means of JLSMD, where the decomposed low-rank part corresponds to the enhanced speech and the sparse part corresponds to the noise signal. An extensive set of experiments has been carried out for both white Gaussian noise and real-world noise. Experimental results show that the proposed method performs better than conventional methods in many types of strong-noise conditions, in terms of yielding less residual noise and lower speech distortion.
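A generic low-rank + sparse split can be sketched by alternating a truncated SVD (the low-rank part) with soft-thresholding of the remainder (the sparse part). This illustrates the decomposition idea only; it is not the paper's exact JLSMD algorithm, and the synthetic "speech" and "noise" matrices are assumptions:

```python
import numpy as np

def lowrank_sparse(D, rank, thresh, iters=50):
    """Alternate a rank-limited SVD fit with soft-thresholding of the
    residual: L collects the low-rank structure, S the sparse corruptions.
    (Generic sketch of the low-rank + sparse idea, not JLSMD itself.)"""
    S = np.zeros_like(D)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # truncated-SVD low-rank fit
        r = D - L
        S = np.sign(r) * np.maximum(np.abs(r) - thresh, 0.0)  # soft-threshold
    return L, S

rng = np.random.default_rng(5)
clean = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 20))   # rank-2 'speech'
spikes = np.zeros((30, 20))
spikes[rng.random((30, 20)) < 0.05] = 8.0                     # sparse 'noise'
L, S = lowrank_sparse(clean + spikes, rank=2, thresh=1.0)
print(np.abs(L - clean).max())   # the low-rank part approximates the clean signal
```

Because the sparse part absorbs the large isolated corruptions, the recovered subspace is far less distorted than a plain SVD of the noisy matrix, which is the failure mode the paper targets at low SNR.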