
Journal articles on the topic 'Data Dimensionality Reduction'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Data Dimensionality Reduction.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Bhargavi, K. "Data Dimensionality Reduction Techniques: Review." International Journal of Engineering Technology and Management Sciences 4, no. 4 (2020): 62–65. http://dx.doi.org/10.46647/ijetms.2020.v04i04.010.

Abstract:
Data science is the study of data. It involves developing methods of recording, storing, and analyzing data to effectively extract useful information. The goal of data science is to gain insights and knowledge from any type of data — both structured and unstructured. Data science is related to computer science, but is a separate field. Computer science involves creating programs and algorithms to record and process data, while data science covers any type of data analysis, which may or may not use computers. Data science is more closely related to the mathematical field of statistics, which …
2

Nagabhushan, P., K. Chidananda Gowda, and Edwin Diday. "Dimensionality reduction of symbolic data." Pattern Recognition Letters 16, no. 2 (1995): 219–23. http://dx.doi.org/10.1016/0167-8655(94)00085-h.

3

Ashfaq Ahmed, K., and Shaheda Akthar. "Ridge Regression based Missing Data Estimation with Dimensionality Reduction: Microarray Gene Expression Data." Webology 19, no. 1 (2022): 4113–28. http://dx.doi.org/10.14704/web/v19i1/web19271.

Abstract:
Data is considered to be a key element in the fields of Data Science and Machine Learning. The performance of Machine Learning and Data Mining algorithms is greatly influenced by the characteristics of the data, including missing values. These algorithms perform better and give accurate results when the data is complete, without missing values, so the dataset's missing values should be filled in before the algorithms are applied. Numerous methods have been proposed to impute such missing values. In this paper we used microarray …
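The idea in entry 3, estimating missing values by regressing the incomplete column on the complete ones with ridge regularisation, can be sketched in a few lines of NumPy. This is an illustrative sketch only; the function name, toy data, and the intercept-free closed form are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def ridge_impute(X, target_col, alpha=1.0):
    """Fill NaNs in one column by ridge regression on the other columns.
    Minimal sketch of ridge-based imputation, assuming roughly centred data."""
    mask = np.isnan(X[:, target_col])
    predictors = np.delete(X, target_col, axis=1)
    A = predictors[~mask]              # rows where the target is observed
    y = X[~mask, target_col]
    # Closed-form ridge solution: w = (A^T A + alpha * I)^-1 A^T y
    n_feat = A.shape[1]
    w = np.linalg.solve(A.T @ A + alpha * np.eye(n_feat), A.T @ y)
    X_filled = X.copy()
    X_filled[mask, target_col] = predictors[mask] @ w
    return X_filled

# Toy example: column 2 is roughly 2*col0 + col1, with one value missing.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
X[:, 2] = 2 * X[:, 0] + X[:, 1] + 0.01 * rng.normal(size=50)
X[5, 2] = np.nan
filled = ridge_impute(X, target_col=2)
```

A real pipeline would iterate over every column with missing entries and tune `alpha` by cross-validation.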
4

Mahadev, Preeti, and P. Nagabhushan. "Incremental Dimensionality Reduction in Hyperspectral Data." International Journal of Computer Applications 163, no. 7 (2017): 21–34. http://dx.doi.org/10.5120/ijca2017913575.

5

Sanguinetti, Guido. "Dimensionality Reduction of Clustered Data Sets." IEEE Transactions on Pattern Analysis and Machine Intelligence 30, no. 3 (2008): 535–40. http://dx.doi.org/10.1109/tpami.2007.70819.

6

Villalón, Elena. "High-Dimensionality Data Reduction with Java." Computing in Science & Engineering 10, no. 5 (2008): 64–69. http://dx.doi.org/10.1109/mcse.2008.134.

7

Khelukar, Smita J. "High Dimensionality Reduction on Graphical Data." International Journal of Research in Engineering and Technology 04, no. 11 (2015): 177–79. http://dx.doi.org/10.15623/ijret.2015.0411029.

8

Gámez, A. J., C. S. Zhou, A. Timmermann, and J. Kurths. "Nonlinear dimensionality reduction in climate data." Nonlinear Processes in Geophysics 11, no. 3 (2004): 393–98. http://dx.doi.org/10.5194/npg-11-393-2004.

Abstract:
Linear methods of dimensionality reduction are useful tools for handling and interpreting high-dimensional data. However, the cumulative variance explained by each of the subspaces into which the data space is decomposed may show a slow convergence that makes the selection of a proper minimum number of subspaces for successfully representing the variability of the process ambiguous. The use of nonlinear methods can improve the embedding of multivariate data into lower-dimensional manifolds. In this article, a nonlinear method for dimensionality reduction, Isomap, is applied to the sea …
9

Gisbrecht, Andrej, and Barbara Hammer. "Data visualization by nonlinear dimensionality reduction." Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 5, no. 2 (2015): 51–73. http://dx.doi.org/10.1002/widm.1147.

10

Li, Hongda, Jian Cui, Xinle Zhang, Yongqi Han, and Liying Cao. "Dimensionality Reduction and Classification of Hyperspectral Remote Sensing Image Feature Extraction." Remote Sensing 14, no. 18 (2022): 4579. http://dx.doi.org/10.3390/rs14184579.

Abstract:
Terrain classification is an important research direction in the field of remote sensing. Hyperspectral remote sensing image data contain a large amount of rich ground-object information. However, such data have high feature dimensionality, strong data correlation, high data redundancy, and long processing times, which make image classification difficult. A data dimensionality reduction algorithm can transform the data into low-dimensional data with strong features and then classify the reduced data. However, most classification methods …
11

He, Guanghui, Zhaowei Shang, and Hengxin Chen. "Distance-Ratio Learning for Data Visualization." International Journal of Wavelets, Multiresolution and Information Processing 10, no. 06 (2012): 1250055. http://dx.doi.org/10.1142/s0219691312500555.

Abstract:
Most dimensionality reduction methods depend significantly on the distance measure used to compute distances between different examples, so a good distance metric is essential to many dimensionality reduction algorithms. In this paper, we present a new dimensionality reduction method for data visualization, called Distance-ratio Preserving Embedding (DrPE), which preserves the ratios between pairwise distances. This is achieved by minimizing the mismatch between the distance ratios derived from the input and output spaces. The proposed method can preserve the relational structures among …
12

Rafieian, Bardia, Pedro Hermosilla, and Pere-Pau Vázquez. "Improving Dimensionality Reduction Projections for Data Visualization." Applied Sciences 13, no. 17 (2023): 9967. http://dx.doi.org/10.3390/app13179967.

Abstract:
In data science and visualization, dimensionality reduction techniques have been extensively employed for exploring large datasets. These techniques transform high-dimensional data into reduced versions, typically in 2D, with the aim of preserving significant properties of the original data. Many dimensionality reduction algorithms exist, and nonlinear approaches such as t-SNE (t-Distributed Stochastic Neighbor Embedding) and UMAP (Uniform Manifold Approximation and Projection) have gained popularity in the field of information visualization. In this paper, we introduce …
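Entry 12 builds on t-SNE and UMAP projections. For reference, a typical 2-D projection with scikit-learn might look like the following sketch; synthetic clusters stand in for real data, UMAP lives in the separate umap-learn package and is omitted, and the parameter values are illustrative, not the paper's.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(4)
# Two synthetic clusters standing in for a high-dimensional dataset
X = np.vstack([rng.normal(0, 1, (25, 50)), rng.normal(4, 1, (25, 50))])

X2_pca = PCA(n_components=2).fit_transform(X)       # linear baseline
X2_tsne = TSNE(n_components=2, perplexity=10,
               init="pca", random_state=0).fit_transform(X)
```

Plotting `X2_pca` and `X2_tsne` side by side is the usual way to compare how each projection separates the clusters.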
13

Popolizio, Marina, Alberto Amato, Rita Dario, and Vincenzo Di Lecce. "Data Dimensionality Reduction Algorithms for Machine Learning." International Journal of Mathematics and Computer Research 11, no. 09 (2023): 3713–15. https://doi.org/10.5281/zenodo.8320889.

Abstract:
The rise of machine learning yields remarkable outcomes across fields. Phrases like big data, AI, and cloud computing are becoming commonplace. Yet data abundance doesn't assure success. Numerous works address data preprocessing for information discovery. This study assesses three techniques on unsupervised clustering, spotlighting SPQR's novel application. Findings stress preprocessing's impact on the data …
14

Ahmad, Noor, and Ali Bou Nassif. "Dimensionality Reduction: Challenges and Solutions." ITM Web of Conferences 43 (2022): 01017. http://dx.doi.org/10.1051/itmconf/20224301017.

Abstract:
The use of dimensionality reduction techniques is a keystone for analyzing and interpreting high-dimensional data. These techniques capture several data features of interest, such as dynamical structure, input-output relationships, correlations between data sets, covariance, etc. Dimensionality reduction entails mapping a set of high-dimensional data features onto low-dimensional data. Motivated by the poor performance of learning models on high-dimensional data, this study examines five distinct dimensionality reduction methods. In addition, a comparison between reduced-dimensional …
15

Omkar, S., V. Sivaranjani, J. Senthilnath, and Suman Mukherjee. "Dimensionality Reduction and Classification of Hyperspectral Data." International Journal of Aerospace Innovations 2, no. 3 (2010): 157–64. http://dx.doi.org/10.1260/1757-2258.2.3.157.

16

Lehrmann, Andreas, Michael Huber, Aydin C. Polatkan, Albert Pritzkau, and Kay Nieselt. "Visualizing dimensionality reduction of systems biology data." Data Mining and Knowledge Discovery 27, no. 1 (2012): 146–65. http://dx.doi.org/10.1007/s10618-012-0268-8.

17

Kaski, Samuel, and Jaakko Peltonen. "Dimensionality Reduction for Data Visualization [Applications Corner]." IEEE Signal Processing Magazine 28, no. 2 (2011): 100–104. http://dx.doi.org/10.1109/msp.2010.940003.

18

Taveira De Souza, Jovani, Antonio Carlos De Francisco, and Dayana Carla De Macedo. "Dimensionality Reduction in Gene Expression Data Sets." IEEE Access 7 (2019): 61136–44. http://dx.doi.org/10.1109/access.2019.2915519.

19

Delicado, P. "Dimensionality reduction when data are density functions." Computational Statistics & Data Analysis 55, no. 1 (2011): 401–20. http://dx.doi.org/10.1016/j.csda.2010.05.008.

20

Aboud, Imad H., and Qassim M. Jameel. "Dimensionality reduction in data from LASER applications." Journal of University of Anbar for Pure Science 3, no. 1 (2009): 71–74. http://dx.doi.org/10.37652/juaps.2009.15450.

21

Sai, Kalyana Pranitha Buddiga. "Navigating the Complexity of Big Data: Exploring Dimensionality Reduction Methods." Journal of Scientific and Engineering Research 7, no. 8 (2020): 220–23. https://doi.org/10.5281/zenodo.11216323.

Abstract:
In the era of big data, the exponential growth of data volume and dimensionality poses significant challenges for data analysis and interpretation. Dimensionality reduction techniques play a crucial role in managing the complexity of high-dimensional datasets by extracting essential features while preserving the inherent structure and information. This paper provides a comprehensive overview of dimensionality reduction methods, ranging from classical techniques like Principal Component Analysis (PCA) to advanced nonlinear methods such as t-Distributed Stochastic Neighbor Embedding (t-SNE) and …
22

Chava, Mojesh Chowdary. "Feature Extraction and Selection Techniques for High-Dimensional Data." International Journal of Innovative Science and Research Technology 7, no. 10 (2022): 729–33. https://doi.org/10.5281/zenodo.7278660.

Abstract:
As a preprocessing step, dimensionality reduction of high-dimensional data helps remove unnecessary data, enhance learning accuracy, and improve result comprehensibility. However, the recent growth in data dimensionality poses a serious challenge to the efficiency and efficacy of many existing feature selection and feature extraction approaches. Dimensionality reduction is an essential topic in machine learning and pattern recognition, and numerous algorithms have been proposed. In this research, certain commonly used feature selection and feature extraction approaches are examined to see …
23

Remesh, Reshma, and Pattabiraman V. "A Survey on the Cures for the Curse of Dimensionality in Big Data." Asian Journal of Pharmaceutical and Clinical Research 10, no. 13 (2017): 355. http://dx.doi.org/10.22159/ajpcr.2017.v10s1.19755.

Abstract:
Dimensionality reduction techniques are used to reduce the complexity of analyzing high-dimensional data sets. The raw input data set may have many dimensions, and analysis may be time-consuming and lead to wrong predictions if unnecessary data attributes are considered. Using dimensionality reduction techniques, one can reduce the dimensions of the input data for accurate prediction at lower cost. In this paper, the different machine learning approaches used for dimensionality reduction, such as PCA, SVD, LDA, Kernel Principal Component Analysis, and Artificial Neural Networks, have …
24

Marjan, Md. Abu, Md. Rashedul Islam, Md. Palash Uddin, Masud Ibn Afjal, and Md. Al Mamun. "PCA-based dimensionality reduction for face recognition." TELKOMNIKA (Telecommunication, Computing, Electronics and Control) 19, no. 5 (2021): 1622–29. https://doi.org/10.12928/telkomnika.v19i5.19566.

Abstract:
In this paper, we conduct a comprehensive study of dimensionality reduction (DR) techniques and discuss the most widely used statistical DR technique, principal component analysis (PCA), in detail with a view to addressing the classical face recognition problem. We then propose a solution to both typical-face and individual-face recognition based on the principal components, which are constructed using PCA on the face images. We simulate the proposed solution with several training and test sets of manually captured face images and also with the popular Olivetti Research …
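Entry 24 projects face images onto principal components. The PCA step reduces to an SVD of the centred data matrix, as in this minimal NumPy sketch; random vectors stand in for face images, and a real pipeline would add preprocessing and a classifier.

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components.
    Minimal sketch of the PCA ("eigenfaces") projection step."""
    mu = X.mean(axis=0)
    Xc = X - mu                            # centre the data
    # SVD of the centred matrix: rows of Vt are the principal axes
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                    # top-k directions
    return Xc @ components.T, components, mu

rng = np.random.default_rng(1)
faces = rng.normal(size=(20, 64))          # 20 "images", 64 pixels each
Z, comps, mu = pca_reduce(faces, k=5)      # 64-D -> 5-D representation
```

Recognition then proceeds by comparing new images, projected with the same `comps` and `mu`, to the stored low-dimensional representations.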
25

Vats, Deepak, and Avinash Sharma. "Dimensionality Reduction Techniques: Comparative Analysis." Journal of Computational and Theoretical Nanoscience 17, no. 6 (2020): 2684–88. http://dx.doi.org/10.1166/jctn.2020.8967.

Abstract:
Real-world data has seen exponential growth in dimensionality. Examples of high-dimensional data include speech signals, sensor data, medical data, criminal data, and data related to recommendation systems for fields like news, movies (Netflix), and e-commerce. To improve learning accuracy in machine learning and enhance mining performance, one needs to remove redundant and irrelevant features from such high-dimensional datasets. There exist many supervised and unsupervised methodologies in the literature to perform …
26

Chen, Xiao Zhou. "LTSA Algorithm for Dimension Reduction of Microarray Data." Advanced Materials Research 645 (January 2013): 192–95. http://dx.doi.org/10.4028/www.scientific.net/amr.645.192.

Abstract:
Dimension reduction is an important issue in understanding microarray data. In this study, we propose an efficient approach for dimensionality reduction of microarray data that applies a manifold learning algorithm. The intra-/inter-category distances were used as criteria to quantitatively evaluate the effects of data dimensionality reduction. Colon cancer and leukaemia gene expression datasets were selected for our investigation. When the neighborhood parameter was effectively set, all the intrinsic dimension numbers of …
27

Zhao, Xingyu. "Comparison of Data Visualization, Outlier Detection and Data Dimensionality Reduction Methods." Highlights in Science, Engineering and Technology 85 (March 13, 2024): 1141–49. http://dx.doi.org/10.54097/wgchmc87.

Abstract:
As the digital age deepens, people's daily data grows ever larger and becomes harder to quantify and process, so data processing methods become particularly important. This paper compares and analyzes methods ranging from data visualization to data dimensionality reduction to outlier detection. Two different types of datasets, ModelNet40 and red wine quality, are used to introduce the visualization method of Farthest Point Sampling (FPS). This method can have a clear visual effect on data of varying dimension and scale …
28

Kolluru, P., K. Pandey, and H. Padalia. "A Unified Framework for Dimensionality Reduction and Classification of Hyperspectral Data." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-8 (November 28, 2014): 447–53. http://dx.doi.org/10.5194/isprsarchives-xl-8-447-2014.

Abstract:
The processing of hyperspectral remote sensing data for information retrieval is challenging due to its high dimensionality. Machine learning algorithms such as the Support Vector Machine (SVM) are preferably applied to classify high-dimensional data. A single-step unified framework is needed that can determine the intrinsic dimensionality of the data and achieve higher classification accuracy using SVM. This work presents the development of an SVM-based dimensionality reduction and classification (SVMDRC) framework for hyperspectral data. The proposed unified framework was tested …
29

Sun, Yu-Yin, Michael Ng, and Zhi-Hua Zhou. "Multi-Instance Dimensionality Reduction." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (2010): 587–92. http://dx.doi.org/10.1609/aaai.v24i1.7700.

Abstract:
Multi-instance learning deals with problems that treat bags of instances as training examples. In single-instance learning problems, dimensionality reduction is an essential step for high-dimensional data analysis and has been studied for years. The curse of dimensionality also exists in multi-instance learning tasks, yet dimensionality reduction there has not been studied before. Directly applying existing single-instance dimensionality reduction objectives to multi-instance learning tasks may not work well, since it ignores the characteristic of multi-instance learning that the labels of bags are known …
30

Shen, Zilin. "Comparison and Evaluation of Classical Dimensionality Reduction Methods." Highlights in Science, Engineering and Technology 70 (November 15, 2023): 411–18. http://dx.doi.org/10.54097/hset.v70i.13890.

Abstract:
As a task of unsupervised learning, data dimensionality reduction faces a lack of evaluation methods. On this basis, three classical dimensionality reduction methods, PCA, t-SNE, and UMAP, were selected as the research objects of this paper. Five three-class datasets were selected and reduced using the three methods mentioned above. 3D scatter plots of the reduced data were drawn to analyze how well the reduced data separate the different categories of the target variable. Then the data after dimensionality reduction …
31

Tang, Yunbo, Dan Chen, and Xiaoli Li. "Dimensionality Reduction Methods for Brain Imaging Data Analysis." ACM Computing Surveys 54, no. 4 (2021): 1–36. http://dx.doi.org/10.1145/3448302.

Abstract:
The past century has witnessed the grand success of brain imaging technologies, such as electroencephalography and magnetic resonance imaging, in probing cognitive states and pathological brain dynamics for neuroscience research and neurology practice. The human brain is "the most complex object in the universe," and brain imaging data (BID) routinely have multiple/many attributes and are highly non-stationary. This follows from the nature of BID as recordings of the evolving processes of the brain(s) under examination from various views. Driven by the increasingly high demands for precise …
32

Paul, Ipsita. "Dimensionality Reduction for Microarray Data: An Analytical Survey." International Journal of Darshan Institute on Engineering Research & Emerging Technology 11, no. 1 (2022): 61–65. http://dx.doi.org/10.32692/ijdi-eret/11.1.2022.2210.

33

Wan, Xiaoji, Hailin Li, Liping Zhang, and Yenchun Jim Wu. "Dimensionality reduction for multivariate time-series data mining." Journal of Supercomputing 78, no. 7 (2022): 9862–78. http://dx.doi.org/10.1007/s11227-021-04303-4.

34

Hsu, Chung-Chian, and Jhen-Wei Wu. "Visualized mixed-type data analysis via dimensionality reduction." Intelligent Data Analysis 22, no. 5 (2018): 981–1007. http://dx.doi.org/10.3233/ida-173480.

35

Belkin, Mikhail, and Partha Niyogi. "Laplacian Eigenmaps for Dimensionality Reduction and Data Representation." Neural Computation 15, no. 6 (2003): 1373–96. http://dx.doi.org/10.1162/089976603321780317.

Abstract:
One of the central problems in machine learning and pattern recognition is to develop appropriate representations for complex data. We consider the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space. Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on the manifold, and connections to the heat equation, we propose a geometrically motivated algorithm for representing high-dimensional data. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction …
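The algorithm in entry 35 builds a neighbourhood graph, forms its Laplacian, and embeds the data with the bottom non-trivial eigenvectors. A simplified NumPy sketch, using 0/1 kNN weights rather than the paper's heat-kernel option, with illustrative parameter values:

```python
import numpy as np

def laplacian_eigenmaps(X, k_neighbors=5, dim=2):
    """Embed rows of X via the graph Laplacian (simplified sketch of
    Belkin & Niyogi's method with binary kNN weights)."""
    n = X.shape[0]
    # Pairwise squared distances between all rows
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:k_neighbors + 1]   # skip self
        W[i, idx] = 1.0
    W = np.maximum(W, W.T)                           # symmetrise the graph
    L = np.diag(W.sum(1)) - W                        # graph Laplacian L = D - W
    # Generalised eigenproblem L y = lambda D y, via D^{-1/2} normalisation
    Dm = np.diag(1.0 / np.sqrt(W.sum(1)))
    vals, vecs = np.linalg.eigh(Dm @ L @ Dm)
    # Drop the trivial constant eigenvector, keep the next `dim`
    return Dm @ vecs[:, 1:dim + 1]

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 10))
Y = laplacian_eigenmaps(X, k_neighbors=5, dim=2)     # 10-D -> 2-D embedding
```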
36

Han, Yahong, Fei Wu, Dacheng Tao, Jian Shao, Yueting Zhuang, and Jianmin Jiang. "Sparse Unsupervised Dimensionality Reduction for Multiple View Data." IEEE Transactions on Circuits and Systems for Video Technology 22, no. 10 (2012): 1485–96. http://dx.doi.org/10.1109/tcsvt.2012.2202075.

37

Rangarajan, Lalitha, and P. Nagabhushan. "Dimensionality reduction of multidimensional temporal data through regression." Pattern Recognition Letters 25, no. 8 (2004): 899–910. http://dx.doi.org/10.1016/j.patrec.2004.02.003.

38

Nowakowska, Ewa, Jacek Koronacki, and Stan Lipovetsky. "Dimensionality reduction for data of unknown cluster structure." Information Sciences 330 (February 2016): 74–87. http://dx.doi.org/10.1016/j.ins.2015.10.009.

39

Houari, Rima, Ahcène Bounceur, M.-Tahar Kechadi, A.-Kamel Tari, and Reinhardt Euler. "Dimensionality reduction in data mining: A Copula approach." Expert Systems with Applications 64 (December 2016): 247–60. http://dx.doi.org/10.1016/j.eswa.2016.07.041.

40

Sokolovska, Nataliya, Karine Clément, and Jean-Daniel Zucker. "Deep kernel dimensionality reduction for scalable data integration." International Journal of Approximate Reasoning 74 (July 2016): 121–32. http://dx.doi.org/10.1016/j.ijar.2016.03.008.

41

Yin, Hujun. "Nonlinear dimensionality reduction and data visualization: A review." International Journal of Automation and Computing 4, no. 3 (2007): 294–303. http://dx.doi.org/10.1007/s11633-007-0294-y.

42

Joswiak, Mark, You Peng, Ivan Castillo, and Leo H. Chiang. "Dimensionality reduction for visualizing industrial chemical process data." Control Engineering Practice 93 (December 2019): 104189. http://dx.doi.org/10.1016/j.conengprac.2019.104189.

43

Lin, Wei-Chao, Chih-Fong Tsai, and Shih-Wen Ke. "Dimensionality and data reduction in telecom churn prediction." Kybernetes 43, no. 5 (2014): 737–49. http://dx.doi.org/10.1108/k-03-2013-0045.

Abstract:
Purpose – Churn prediction is a very important task for successful customer relationship management. In general, churn prediction can be achieved by many data mining techniques. However, during data mining, dimensionality reduction (or feature selection) and data reduction are two important preprocessing steps. In particular, the aims of feature selection and data reduction are to filter out irrelevant features and noisy data samples, respectively. The purpose of this paper, in performing these preprocessing tasks, is to make the mining algorithm produce good-quality mining results. …
44

Reddy, G. Thippa, M. Praveen Kumar Reddy, Kuruva Lakshmanna, et al. "Analysis of Dimensionality Reduction Techniques on Big Data." IEEE Access 8 (2020): 54776–88. http://dx.doi.org/10.1109/access.2020.2980942.

45

Yang, Xiang, Chen Xiaojun, and Luo Cheng. "A Dimensionality Reduction Model for Complex Data Grouping." Journal of Physics: Conference Series 1229 (May 2019): 012052. http://dx.doi.org/10.1088/1742-6596/1229/1/012052.

46

Kushwaha, Neetu, and Millie Pant. "Textual data dimensionality reduction - a deep learning approach." Multimedia Tools and Applications 79, no. 15-16 (2018): 11039–50. http://dx.doi.org/10.1007/s11042-018-6900-x.

47

Giraldo, Luis Felipe, Fernando Lozano, and Nicanor Quijano. "Foraging theory for dimensionality reduction of clustered data." Machine Learning 82, no. 1 (2009): 71–90. http://dx.doi.org/10.1007/s10994-009-5156-0.

48

Brahma Rao, K. B. V., R. Krishnam Raju Indukuri, P. Suresh Varma, and M. V. Rama Sundari. "Evaluation of Various DR Techniques in Massive Patient Datasets using HDFS." International Journal of Recent Technology and Engineering (IJRTE) 10, no. 4 (2021): 1–6. https://doi.org/10.35940/ijrte.D6508.1110421.

Abstract:
The objective of comparing various dimensionality reduction techniques is to reduce feature sets in order to group attributes effectively with less computational processing time and memory utilization. The various reduction algorithms can decrease the dimensionality of a dataset consisting of a huge number of interrelated variables while retaining the dissimilarity present in the dataset as much as possible. In this paper we use Standard Deviation, Variance, Principal Component Analysis, Linear Discriminant Analysis, Factor Analysis, Positive Region, Information Entropy, and Independent Component Analysis …
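Among the criteria compared in entry 48, variance-based selection is the simplest: keep the columns with the largest variance. A toy NumPy sketch, with column scales invented for illustration:

```python
import numpy as np

def variance_filter(X, top_k):
    """Keep the top_k highest-variance columns of X.
    Illustrative sketch of variance-based feature selection."""
    keep = np.sort(np.argsort(X.var(axis=0))[::-1][:top_k])
    return X[:, keep], keep

rng = np.random.default_rng(3)
# Six features with very different scales; columns 1, 3, 5 vary most.
X = rng.normal(size=(100, 6)) * np.array([0.1, 5.0, 0.2, 3.0, 0.1, 1.0])
Xr, kept = variance_filter(X, top_k=3)
```

Unlike PCA or LDA, this criterion looks at each feature in isolation and ignores correlations between features, which is why the paper compares it against the projection-based methods.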
49

Gao, Yun-Long, Si-Zhe Luo, Zhi-Hao Wang, Chih-Cheng Chen, and Jin-Yan Pan. "Locality Sensitive Discriminative Unsupervised Dimensionality Reduction." Symmetry 11, no. 8 (2019): 1036. http://dx.doi.org/10.3390/sym11081036.

Abstract:
Graph-based embedding methods receive much attention due to their use of graph and manifold information. However, conventional graph-based embedding methods may not always be effective if the data have high dimensionality and complex distributions. First, the similarity matrix considers only local distance measurement in the original space, which cannot reflect a wide variety of data structures. Second, the separation of graph construction from dimensionality reduction means the similarity matrix cannot be fully relied on, because the original data usually contain many noise samples and feature …
50

Vizárraga, Jorge, Roberto Casas, Álvaro Marco, and J. David Buldain. "Dimensionality Reduction for Smart IoT Sensors." Electronics 9, no. 12 (2020): 2035. http://dx.doi.org/10.3390/electronics9122035.

Abstract:
Smart IoT sensors are characterized by their ability to sense and process signals, producing high-level information that is usually sent wirelessly while minimising energy consumption and maximising communication efficiency. Systems are getting smarter, meaning that they provide ever richer information from the same raw data. This increasing intelligence can occur at various levels, including in the sensor itself, at the edge, and in the cloud. As sending one byte of data is several orders of magnitude more energy-expensive than processing it, data must be handled as near as possible to …