
Journal articles on the topic 'Feature Extraction and Classification'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Feature Extraction and Classification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Marnur, Akshata M. "Feature Extraction and Image classification." International Journal for Research in Applied Science and Engineering Technology 6, no. 6 (June 30, 2018): 637–49. http://dx.doi.org/10.22214/ijraset.2018.6099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rana, M., and S. Kharel. "FEATURE EXTRACTION FOR URBAN AND AGRICULTURAL DOMAINS USING ECOGNITION DEVELOPER." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W6 (July 26, 2019): 609–15. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w6-609-2019.

Abstract:
Feature extraction has always been a challenging task in geo-spatial studies, both in urban and in agricultural areas. Since the advent of eCognition Developer, segmentation techniques and classification algorithms that help automate feature extraction have been developed, which has been a boon for scientists and researchers in the field of geomatics. This research depicts the potential of eCognition Developer for extracting features in agricultural as well as urban areas using various classification techniques. Rule-based and SVM classification techniques were used for feature extraction in urban areas, whereas Feature Space Optimization and K-Nearest Neighbor were used for classifying agricultural features. Results show that rule-based classification yields more accurate results for urban areas, whereas Feature Space Optimization along with object-based classification gave higher accuracy for agricultural areas.
3

Suhaidi, Mustazzihim, Rabiah Abdul Kadir, and Sabrina Tiun. "A REVIEW OF FEATURE EXTRACTION METHODS ON MACHINE LEARNING." Journal of Information System and Technology Management 6, no. 22 (September 1, 2021): 51–59. http://dx.doi.org/10.35631/jistm.622005.

Abstract:
Extracting features from input data is vital for successful classification and machine learning tasks. Classification is the process of declaring an object into one of the predefined categories. Many different feature selection and feature extraction methods exist, and they are being widely used. Feature extraction, obviously, is a transformation of large input data into a low dimensional feature vector, which is an input to classification or a machine learning algorithm. The task of feature extraction has major challenges, which will be discussed in this paper. The challenge is to learn and extract knowledge from text datasets to make correct decisions. The objective of this paper is to give an overview of methods used in feature extraction for various applications, with a dataset containing a collection of texts taken from social media.
4

Kusuma, Arya, De Rosal Ignatius Moses Setiadi, and M. Dalvin Marno Putra. "Tomato Maturity Classification using Naive Bayes Algorithm and Histogram Feature Extraction." Journal of Applied Intelligent System 3, no. 1 (August 27, 2018): 39–48. http://dx.doi.org/10.33633/jais.v3i1.1988.

Abstract:
Tomatoes have nutritional content that is very beneficial for human health and are one source of vitamins and minerals. Tomato classification plays an important role in many ways related to the distribution and sales of tomatoes. Classification can be done on images by extracting features and then classifying them with certain methods. This research proposes a classification technique using histogram feature extraction and a Naïve Bayes classifier. Histogram feature extraction is widely used and plays a role in the classification results. Naïve Bayes is proposed because it has high accuracy and high computational speed when applied to a large number of databases, is robust to isolated noise points, and only requires a small amount of training data to estimate the parameters needed for classification. The proposed classification is divided into three classes, namely raw, mature, and rotten. Based on the results of the experiment using 75 training data and 25 testing data, an accuracy of 76% was obtained.
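As a rough illustration of the pipeline this abstract describes, the following is a minimal numpy sketch of per-channel histogram features fed to a small Gaussian Naive Bayes classifier. The bin count, colour layout, and classifier details here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def histogram_features(img, bins=8):
    """Per-channel intensity histograms, concatenated and normalised.
    `img` is an HxWxC uint8 array; 8 bins per channel is an assumption."""
    counts = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
              for c in range(img.shape[-1])]
    v = np.concatenate(counts).astype(float)
    return v / v.sum()

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means and variances."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.theta_ = np.array([X[y == c].mean(0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(0) + 1e-9 for c in self.classes_])
        self.prior_ = np.array([(y == c).mean() for c in self.classes_])
        return self

    def predict(self, X):
        # log P(c) plus the summed log-Gaussian likelihood of each feature
        ll = (np.log(self.prior_)
              - 0.5 * (((X[:, None, :] - self.theta_) ** 2) / self.var_
                       + np.log(2 * np.pi * self.var_)).sum(-1))
        return self.classes_[ll.argmax(1)]
```

In the paper's setting, `X` would hold histogram vectors for raw, mature, and rotten tomato images.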
5

Ge, Zixian, Guo Cao, Hao Shi, Youqiang Zhang, Xuesong Li, and Peng Fu. "Compound Multiscale Weak Dense Network with Hybrid Attention for Hyperspectral Image Classification." Remote Sensing 13, no. 16 (August 20, 2021): 3305. http://dx.doi.org/10.3390/rs13163305.

Abstract:
Recently, hyperspectral image (HSI) classification has become a popular research direction in remote sensing. The emergence of convolutional neural networks (CNNs) has greatly promoted the development of this field and demonstrated excellent classification performance. However, due to the particularity of HSIs, redundant information and limited samples pose huge challenges for extracting strong discriminative features. In addition, addressing how to fully mine the internal correlation of the data or features based on the existing model is also crucial in improving classification performance. To overcome the above limitations, this work presents a strong feature extraction neural network with an attention mechanism. Firstly, the original HSI is weighted by means of the hybrid spectral–spatial attention mechanism. Then, the data are input into a spectral feature extraction branch and a spatial feature extraction branch, composed of multiscale feature extraction modules and weak dense feature extraction modules, to extract high-level semantic features. These two features are compressed and fused using the global average pooling and concat approaches. Finally, the classification results are obtained by using two fully connected layers and one Softmax layer. A performance comparison shows the enhanced classification performance of the proposed model compared to the current state of the art on three public datasets.
6

Yu, Gang, Ying Zi Lin, and Sagar Kamarthi. "Wavelets-Based Feature Extraction for Texture Classification." Advanced Materials Research 97-101 (March 2010): 1273–76. http://dx.doi.org/10.4028/www.scientific.net/amr.97-101.1273.

Abstract:
Texture classification is a necessary task in a wide variety of application areas such as manufacturing, textiles, and medicine. In this paper, we propose a novel wavelet-based feature extraction method for robust, scale-invariant and rotation-invariant texture classification. The method divides the 2-D wavelet coefficient matrices into 2-D clusters and then computes features from the energies inherent in these clusters. The features that contain the information effective for classifying texture images are computed from the energy content of the clusters, and these feature vectors are input to a neural network for texture classification. The results show that the discrimination performance obtained with the proposed cluster-based feature extraction method is superior to that obtained using conventional feature extraction methods, and robust for rotation- and scale-invariant texture classification.
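A minimal numpy sketch of the idea, assuming a single-level Haar transform and square block "clusters"; the paper's actual wavelet, decomposition depth, and cluster geometry are not specified here, and the neural-network stage is omitted.

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar transform: returns LL, LH, HL, HH subbands."""
    a = (x[0::2] + x[1::2]) / 2.0   # vertical pairs: average
    d = (x[0::2] - x[1::2]) / 2.0   # vertical pairs: detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def cluster_energies(band, k=2):
    """Split a coefficient matrix into k x k blocks ("clusters") and
    return the mean energy of each block as one feature."""
    h, w = band.shape
    bh, bw = h // k, w // k
    return np.array([(band[i*bh:(i+1)*bh, j*bw:(j+1)*bw] ** 2).mean()
                     for i in range(k) for j in range(k)])

def texture_features(img, k=2):
    # energies of the three detail subbands; LL mostly carries illumination
    _, LH, HL, HH = haar2d(img.astype(float))
    return np.concatenate([cluster_energies(b, k) for b in (LH, HL, HH)])
```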
7

Yang, Xiao Li, Qiong He, and Fen Yang. "Feature Extraction for Classification of Proteomic Profile." Advanced Materials Research 756-759 (September 2013): 4576–80. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.4576.

Abstract:
This work studies feature extraction for classification of proteomic profiles. We evaluated four methods: principal component analysis (PCA), independent component analysis (ICA), locally linear embedding (LLE) and weighted maximum margin criterion (WMMC). PCA, ICA and LLE extract features based on traditional low-dimensional mapping techniques. Comparatively, WMMC extracts features according to the classification goal. To study the classification performance of PCA, ICA, LLE and WMMC in detail, we used two well-known classification methods, support vector machine (SVM) and Fisher discriminant analysis (FDA), to classify profiles. The results show WMMC having relatively good performance due to its prediction accuracy, sensitivity and specificity for diagnosis; it can correctly identify features with high discrimination ability from high-dimensional proteomic profiles. When the feature set size was reduced to less than 10, PCA, ICA and LLE lost a lot of classification information, and the prediction accuracies were less than 90%. However, WMMC can extract most classification information; its prediction accuracies, sensitivities and specificities are more than 95%. Obviously, WMMC is more suitable for proteomic profile classification. As for the classifier, FDA is sensitive to feature extraction.
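Of the four methods compared, PCA is the most compact to sketch; a minimal SVD-based version is shown below (ICA, LLE, and WMMC are not reproduced here).

```python
import numpy as np

def pca_features(X, n_components=2):
    """Project the rows of X onto the top principal components.
    Uses the SVD of the mean-centred data matrix."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

For a proteomic profile matrix of shape (samples, m/z features), the output would feed an SVM or FDA classifier.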
8

Fu, Yun, Shuicheng Yan, and Thomas S. Huang. "Classification and Feature Extraction by Simplexization." IEEE Transactions on Information Forensics and Security 3, no. 1 (2008): 91–100. http://dx.doi.org/10.1109/tifs.2007.916280.

9

Kuo, Bor-Chen, and D. A. Landgrebe. "Nonparametric weighted feature extraction for classification." IEEE Transactions on Geoscience and Remote Sensing 42, no. 5 (May 2004): 1096–105. http://dx.doi.org/10.1109/tgrs.2004.825578.

10

Zheng, Wenming. "Heteroscedastic Feature Extraction for Texture Classification." IEEE Signal Processing Letters 16, no. 9 (September 2009): 766–69. http://dx.doi.org/10.1109/lsp.2009.2023939.

11

Das, Manab Kumar, and Samit Ari. "ECG Beats Classification Using Mixture of Features." International Scholarly Research Notices 2014 (September 17, 2014): 1–12. http://dx.doi.org/10.1155/2014/178436.

Abstract:
Classification of electrocardiogram (ECG) signals plays an important role in clinical diagnosis of heart disease. This paper proposes the design of an efficient system for classification of the normal beat (N), ventricular ectopic beat (V), supraventricular ectopic beat (S), fusion beat (F), and unknown beat (Q) using a mixture of features. In this paper, two different feature extraction methods are proposed for classification of ECG beats: (i) S-transform based features along with temporal features and (ii) a mixture of ST and WT based features along with temporal features. The extracted feature set is independently classified using a multilayer perceptron neural network (MLPNN). The performances are evaluated on several normal and abnormal ECG signals from 44 recordings of the MIT-BIH arrhythmia database. In this work, the performances of three feature extraction techniques with the MLPNN classifier are compared using the five classes of ECG beat recommended by AAMI (Association for the Advancement of Medical Instrumentation) standards. The average sensitivity performances of the proposed feature extraction technique for N, S, F, V, and Q are 95.70%, 78.05%, 49.60%, 89.68%, and 33.89%, respectively. The experimental results demonstrate that the proposed feature extraction techniques show better performances compared to other existing feature extraction techniques.
12

Li, Guoqi, Changyun Wen, Wei Wei, Yi Xu, Jie Ding, Guangshe Zhao, and Luping Shi. "Trace Ratio Criterion for Feature Extraction in Classification." Mathematical Problems in Engineering 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/725204.

Abstract:
A generalized linear discriminant analysis based on trace ratio criterion algorithm (GLDA-TRA) is derived to extract features for classification. With the proposed GLDA-TRA, a set of orthogonal features can be extracted in succession. Each newly extracted feature is the optimal feature that maximizes the trace ratio criterion function in the subspace orthogonal to the space spanned by the previous extracted features.
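The criterion itself is compact; below is a minimal numpy sketch of the scatter matrices and the trace ratio tr(WᵀSbW) / tr(WᵀSwW) that GLDA-TRA maximises (the iterative orthogonal extraction procedure is not reproduced).

```python
import numpy as np

def scatter_matrices(X, y):
    """Between-class (Sb) and within-class (Sw) scatter matrices."""
    m = X.mean(0)
    Sb = np.zeros((X.shape[1],) * 2)
    Sw = np.zeros_like(Sb)
    for c in np.unique(y):
        Xc = X[y == c]
        d = (Xc.mean(0) - m)[:, None]
        Sb += len(Xc) * (d @ d.T)          # class-mean spread
        Sw += (Xc - Xc.mean(0)).T @ (Xc - Xc.mean(0))  # within-class spread
    return Sb, Sw

def trace_ratio(W, Sb, Sw):
    """The criterion maximised when extracting each new feature direction."""
    return np.trace(W.T @ Sb @ W) / np.trace(W.T @ Sw @ W)
```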
13

Gao, Zhenyi, Jiayang Sun, Haotian Yang, Jiarui Tan, Bin Zhou, Qi Wei, and Rong Zhang. "Exploration and Research of Human Identification Scheme Based on Inertial Data." Sensors 20, no. 12 (June 18, 2020): 3444. http://dx.doi.org/10.3390/s20123444.

Abstract:
The identification work based on inertial data is not limited by space, and has high flexibility and concealment. Previous research has shown that inertial data contains information related to behavior categories. This article discusses whether inertial data contains information related to human identity. The classification experiment, based on the neural network feature fitting function, achieves 98.17% accuracy on the test set, confirming that the inertial data can be used for human identification. The accuracy of the classification method without feature extraction on the test set is only 63.84%, which further indicates the need for extracting features related to human identity from the changes in inertial data. In addition, the research on classification accuracy based on statistical features discusses the effect of different feature extraction functions on the results. The article also discusses the dimensionality reduction processing and visualization results of the collected data and the extracted features, which helps to intuitively assess the existence of features and the quality of different feature extraction effects.
14

Guo, Guang Nan, Mei Chu, Xiao Hua Wang, Xiao Bo Huang, and Zheng Wei. "Design of Image Classification System Based on Feature Extraction." Key Engineering Materials 474-476 (April 2011): 1859–64. http://dx.doi.org/10.4028/www.scientific.net/kem.474-476.1859.

Abstract:
Image classification is an image data mining method to classify different targets based on different features reflected in image information. The paper designed an image classification system based on feature selection, which utilizes feature selection and feature weighting to optimize the features and obtain features that reflect the essence of classification, so as to improve image classification accuracy. Meanwhile, the paper gave a concrete implementation method for the main modules of the image classification system. An image classification experiment based on the system proves the effectiveness of the designed system.
15

Kamalapriya, S. P., S. Pathur Nisha, and V. S. Thangarasu. "Enhanced Image Patch Approximation For Lung Tissue Classification Using Feature Based Extraction." Paripex - Indian Journal Of Research 3, no. 2 (January 15, 2012): 226–28. http://dx.doi.org/10.15373/22501991/feb2014/77.

16

Misaki, Daigo, Shigeru Aomura, and Noriyuki Aoyama. "Pattern Recognition by Hierarchical Feature Extraction." Journal of Robotics and Mechatronics 15, no. 3 (June 20, 2003): 278–85. http://dx.doi.org/10.20965/jrm.2003.p0278.

Abstract:
We discuss effective pattern recognition for contour images by hierarchical feature extraction. When pattern recognition is done for an unlimited object, it is effective to first view the object in a broad perspective and then examine it in detail. General features are used for rough classification and local features are used for a more detailed classification. D-P matching is applied for classification against a typical contour image of each individual class, which contains selected points called "landmarks", and rough classification is done. Features between these landmarks are analyzed and used as input data of neural networks for more detailed classification. We apply this to an illustrated reference book of insects, in which much information is classified hierarchically, to verify the proposed method. By introducing landmarks, a neural network can be used effectively for pattern recognition of contour images.
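The "D-P matching" step is a classic dynamic-programming alignment; below is a minimal sketch for 1-D feature sequences (the paper's contour features and landmark handling are not reproduced).

```python
import numpy as np

def dp_match(a, b):
    """Dynamic-programming (DTW-style) alignment cost between two
    feature sequences; lower cost means a better rough match."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three predecessor alignments
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Rough classification would then pick the class whose typical contour gives the lowest alignment cost.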
17

Siregar, Alda Cendekia, and Barry Ceasar Octariadi. "Classification of Sambas Traditional Fabric “Kain Lunggi” Using Texture Feature." IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 13, no. 4 (October 31, 2019): 389. http://dx.doi.org/10.22146/ijccs.49782.

Abstract:
Traditional fabric is a cultural heritage that has to be preserved. Kain Lunggi is a Sambas traditional fabric that saw a decline in its crafters. To introduce Kain Lunggi to a broader national and global society in order to preserve it, a digital image processing based system to perform Kain Lunggi pattern recognition needs to be built. Feature extraction is an important part of digital image processing. Visual features that do not represent the character of an object will affect the accuracy of a recognition system. The purpose of this research is to perform feature selection on sets of features to determine the best features that can increase recognition accuracy. This research was conducted in several steps: image acquisition of Kain Lunggi patterns, preprocessing to reduce image noise, feature extraction to obtain image features, and feature selection. GLCM is implemented as the feature extraction method. The feature extraction result is used in a feature selection process using the CFS (Correlation-based Feature Selection) method. The features selected by the CFS process are Angular Second Moment, Contrast, and Correlation. The selected features are evaluated by calculating classification accuracy with the KNN method. Classification accuracy prior to feature selection is 85.18% with K=1; meanwhile, the accuracy increases to 88.89% after feature selection. The highest accuracy improvement of 20.74% in KNN occurred when using K=4.
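A minimal numpy sketch of the three selected GLCM features (Angular Second Moment, Contrast, Correlation) for a single horizontal displacement; the quantisation level and displacement are assumptions, and the KNN stage is omitted.

```python
import numpy as np

def glcm(img, levels=8):
    """Symmetric, normalised grey-level co-occurrence matrix for the
    (dx=1, dy=0) displacement; `img` is a uint8 grey-scale array."""
    q = np.minimum(img.astype(int) * levels // 256, levels - 1)  # quantise
    P = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        P[a, b] += 1
        P[b, a] += 1  # make the matrix symmetric
    return P / P.sum()

def glcm_features(P):
    """ASM, contrast, and correlation from a normalised GLCM."""
    i, j = np.indices(P.shape)
    asm = (P ** 2).sum()                      # angular second moment
    contrast = (P * (i - j) ** 2).sum()
    mu_i, mu_j = (P * i).sum(), (P * j).sum()
    si = np.sqrt((P * (i - mu_i) ** 2).sum())
    sj = np.sqrt((P * (j - mu_j) ** 2).sum())
    corr = (P * (i - mu_i) * (j - mu_j)).sum() / (si * sj + 1e-12)
    return np.array([asm, contrast, corr])
```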
18

Mohsin Al-juboori, Ali. "An Efficient Method for Texture Feature Extraction and Recognition based on Contourlet Transform and Canonical Correlation Analysis." Journal of Education College Wasit University 1, no. 29 (January 16, 2018): 498–511. http://dx.doi.org/10.31185/eduj.vol1.iss29.167.

Abstract:
Feature extraction is an important processing step in texture classification. For feature extraction in the contourlet domain, statistical features for blocks of subbands are computed. In this paper, we present an efficient feature vector extraction method for texture classification. For more discriminative features, a canonical correlation analysis method is proposed to fuse feature vectors from different samples of texture in the same cluster. The KNN (K-Nearest Neighbor) classifier is utilized to perform texture classification.
19

Rana, Bharti, Akanksha Juneja, and Ramesh Kumar Agrawal. "Relevant Feature Subset Selection from Ensemble of Multiple Feature Extraction Methods for Texture Classification." International Journal of Computer Vision and Image Processing 5, no. 1 (January 2015): 48–65. http://dx.doi.org/10.4018/ijcvip.2015010103.

Abstract:
Performance of texture classification for a given set of texture patterns depends on the choice of feature extraction technique. Integration of features from various feature extraction methods not only eliminates the risk of method selection but also brings benefits from the participating methods, which play complementary roles among themselves in representing the underlying texture pattern. However, it comes at the cost of a large feature vector which may contain redundant features. The presence of such redundant features leads to high computation time and memory requirements and may deteriorate the performance of the classifier. In this research work, in the first phase, a pool of texture features is constructed by integrating features from seven well-known feature extraction methods. In the second phase, a few popular feature subset selection techniques are investigated to determine a minimal subset of relevant features from this pool of features. In order to check the efficacy of the proposed approach, performance is evaluated on the publicly available Brodatz dataset in terms of classification error. Experimental results demonstrate substantial improvement in classification performance over existing feature extraction techniques. Furthermore, ranking and statistical tests also strengthen the results.
20

Girish Baabu, M. C., and Padma M. C. "Semantic feature extraction method for hyperspectral crop classification." Indonesian Journal of Electrical Engineering and Computer Science 23, no. 1 (July 1, 2021): 387. http://dx.doi.org/10.11591/ijeecs.v23.i1.pp387-395.

Abstract:
Hyperspectral imaging (HSI) is composed of several hundred narrow bands (NB) with high spectral correlation and is widely used in crop classification; this induces time and space complexity, resulting in high computational overhead and the Hughes phenomenon when processing these images. Dimensionality reduction techniques such as band selection and feature extraction play an important part in enhancing the performance of hyperspectral image classification. However, existing methods are not efficient in noisy and mixed-pixel environments with dynamic illumination and climatic conditions. The proposed Semantic Feature Representation based HSI (SFR-HSI) crop classification method first employs an image fusion (IF) method for finding meaningful features from raw HSI spectrally, and second extracts inherent features that keep a spatially meaningful representation of different crops by eliminating shading elements. The meaningful feature set is then used for training a support vector machine (SVM). Experimental outcomes show the proposed HSI crop classification model achieves much better accuracy and Kappa coefficient performance.
21

GHOSH, ANIL KUMAR, and SMARAJIT BOSE. "FEATURE EXTRACTION FOR CLASSIFICATION USING STATISTICAL NETWORKS." International Journal of Pattern Recognition and Artificial Intelligence 21, no. 07 (November 2007): 1103–26. http://dx.doi.org/10.1142/s0218001407005855.

Abstract:
In a classification problem, quite often the dimension of the measurement vector is large. Some of these measurements may not be important for separating the classes. Removal of these measurement variables not only reduces the computational cost but also leads to better understanding of class separability. There are some methods in the existing literature for reducing the dimensionality of a classification problem without losing much of the separability information. However, these dimension reduction procedures usually work well for linear classifiers. In the case where competing classes are not linearly separable, one has to look for ideal "features", which could be transformations of one or more measurements. In this paper, we make an attempt to tackle both the problems of dimension reduction and feature extraction by considering a projection pursuit regression model. The single hidden layer perceptron model and some other popular models can be viewed as special cases of this model. An iterative algorithm based on backfitting is proposed to select the features dynamically, and the cross-validation method is used to select the ideal number of features. We carry out an extensive simulation study to show the effectiveness of this fully automatic method.
22

Podgorelec, David, and Borut Žalik. "Feature Extraction and Classification from Boundary Representation." Journal of Computing and Information Technology 11, no. 1 (2003): 41. http://dx.doi.org/10.2498/cit.2003.01.03.

23

Pethkar, Sneha. "Classification of Soil Image using Feature Extraction." International Journal for Research in Applied Science and Engineering Technology 6, no. 7 (July 31, 2018): 819–23. http://dx.doi.org/10.22214/ijraset.2018.7138.

24

Simone, G., F. C. Morabito, R. Polikar, P. Ramuhalli, L. Udpa, and S. Udpa. "Feature extraction techniques for ultrasonic signal classification." International Journal of Applied Electromagnetics and Mechanics 15, no. 1-4 (December 21, 2002): 291–94. http://dx.doi.org/10.3233/jae-2002-462.

25

Liu, Yang, Feiping Nie, Quanxue Gao, Xinbo Gao, Jungong Han, and Ling Shao. "Flexible unsupervised feature extraction for image classification." Neural Networks 115 (July 2019): 65–71. http://dx.doi.org/10.1016/j.neunet.2019.03.008.

26

Han, Euihwan, and Hyungtai Cha. "Audio Feature Extraction for Effective Emotion Classification." IEIE Transactions on Smart Processing & Computing 8, no. 2 (April 30, 2019): 100–107. http://dx.doi.org/10.5573/ieiespc.2019.8.2.100.

27

Deepika, Chandupatla. "Speech Emotion recognition feature Extraction and Classification." International Journal of Advanced Trends in Computer Science and Engineering 9, no. 2 (April 25, 2020): 1257–61. http://dx.doi.org/10.30534/ijatcse/2020/54922020.

28

Lee, C., and D. A. Landgrebe. "Decision boundary feature extraction for nonparametric classification." IEEE Transactions on Systems, Man, and Cybernetics 23, no. 2 (1993): 433–44. http://dx.doi.org/10.1109/21.229456.

29

Guo, H., and S. B. Gelfand. "Classification trees with neural network feature extraction." IEEE Transactions on Neural Networks 3, no. 6 (1992): 923–33. http://dx.doi.org/10.1109/72.165594.

30

Lu, Nan, Jihong Wang, Isobel McDermott, Steve Thornton, Manu Vatish, and Harpal Randeva. "Uterine electromyography signal feature extraction and classification." International Journal of Modelling, Identification and Control 6, no. 2 (2009): 136. http://dx.doi.org/10.1504/ijmic.2009.024330.

31

Andayani, Relly, and Syarifudin Madenda. "Concrete Slump Classification Using GLCM Feature Extraction." Advanced Science, Engineering and Medicine 8, no. 10 (October 1, 2016): 800–803. http://dx.doi.org/10.1166/asem.2016.1934.

32

Mallet, Y., D. Coomans, J. Kautsky, and O. De Vel. "Classification using adaptive wavelets for feature extraction." IEEE Transactions on Pattern Analysis and Machine Intelligence 19, no. 10 (1997): 1058–66. http://dx.doi.org/10.1109/34.625106.

33

Yu, Xuchu, Ruirui Wang, Bing Liu, and Anzhu Yu. "Salient feature extraction for hyperspectral image classification." Remote Sensing Letters 10, no. 6 (February 22, 2019): 553–62. http://dx.doi.org/10.1080/2150704x.2019.1579936.

34

Sun, Qiaoqiao, and Salah Bourennane. "Hyperspectral image classification with unsupervised feature extraction." Remote Sensing Letters 11, no. 5 (February 25, 2020): 475–84. http://dx.doi.org/10.1080/2150704x.2020.1731769.

35

Nyongesa, H. O., S. Al-Khayatt, S. M. Mohamed, and M. Mahmoud. "Fast Robust Fingerprint Feature Extraction and Classification." Journal of Intelligent and Robotic Systems 40, no. 1 (May 2004): 103–12. http://dx.doi.org/10.1023/b:jint.0000034344.58449.fd.

36

Mellinger, David K. "Acoustic feature extraction and classification in Ishmael." Journal of the Acoustical Society of America 134, no. 5 (November 2013): 3986. http://dx.doi.org/10.1121/1.4830526.

37

Blevins, Matthew G., Steven L. Bunkley, Edward T. Nykaza, Anton Netchaev, and Gordon Ochi. "Improved feature extraction for environmental acoustic classification." Journal of the Acoustical Society of America 141, no. 5 (May 2017): 3964. http://dx.doi.org/10.1121/1.4989023.

38

Zhou, Pei-Yuan, and Keith C. C. Chan. "Fuzzy Feature Extraction for Multichannel EEG Classification." IEEE Transactions on Cognitive and Developmental Systems 10, no. 2 (June 2018): 267–79. http://dx.doi.org/10.1109/tcds.2016.2632130.

39

Andayani, Relly, and Syarifudin Madenda. "Concrete Slump Classification using GLCM Feature Extraction." IOP Conference Series: Materials Science and Engineering 131 (May 2016): 012011. http://dx.doi.org/10.1088/1757-899x/131/1/012011.

40

Benediktsson, J. A., J. R. Sveinsson, and K. Amason. "Classification and feature extraction of AVIRIS data." IEEE Transactions on Geoscience and Remote Sensing 33, no. 5 (1995): 1194–205. http://dx.doi.org/10.1109/36.469483.

41

Pyone, Htwe Htwe, Hnin Yu Yu Win, and Thin Thin Swe. "Sound Classification using Image Feature Extraction Technique." International Journal of Scientific and Research Publications (IJSRP) 9, no. 7 (July 18, 2019): p9168. http://dx.doi.org/10.29322/ijsrp.9.07.2019.p9168.

42

Ahlstrom, Christer, Peter Hult, Peter Rask, Jan-Erik Karlsson, Eva Nylander, Ulf Dahlström, and Per Ask. "Feature Extraction for Systolic Heart Murmur Classification." Annals of Biomedical Engineering 34, no. 11 (October 4, 2006): 1666–77. http://dx.doi.org/10.1007/s10439-006-9187-4.

43

Sun, Shiliang, and Changshui Zhang. "Adaptive feature extraction for EEG signal classification." Medical & Biological Engineering & Computing 44, no. 10 (September 12, 2006): 931–35. http://dx.doi.org/10.1007/s11517-006-0107-4.

44

Beltrán, N. H., M. A. Duarte-Mermoud, M. A. Bustos, S. A. Salah, E. A. Loyola, A. I. Peña-Neira, and J. W. Jalocha. "Feature extraction and classification of Chilean wines." Journal of Food Engineering 75, no. 1 (July 2006): 1–10. http://dx.doi.org/10.1016/j.jfoodeng.2005.03.045.

45

Tu, Wenting, and Shiliang Sun. "Semi-supervised feature extraction for EEG classification." Pattern Analysis and Applications 16, no. 2 (September 25, 2012): 213–22. http://dx.doi.org/10.1007/s10044-012-0298-2.

46

Liu, Bing, Anzhu Yu, Xiong Tan, and Ruirui Wang. "Slow feature extraction for hyperspectral image classification." Remote Sensing Letters 12, no. 5 (March 10, 2021): 429–38. http://dx.doi.org/10.1080/2150704x.2021.1895448.

47

Abbas, Heba Kh, Nada A. Fatah, Haidar J. Mohamad, and Ali A. Alzuky. "Brain Tumor Classification Using Texture Feature Extraction." Journal of Physics: Conference Series 1892, no. 1 (April 1, 2021): 012012. http://dx.doi.org/10.1088/1742-6596/1892/1/012012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Tu, Bing, Nanying Li, Leyuan Fang, Danbing He, and Pedram Ghamisi. "Hyperspectral Image Classification with Multi-Scale Feature Extraction." Remote Sensing 11, no. 5 (March 5, 2019): 534. http://dx.doi.org/10.3390/rs11050534.

Full text
Abstract:
Spectral features alone cannot effectively capture the differences among ground objects or distinguish their boundaries in hyperspectral image (HSI) classification. Multi-scale feature extraction can address this problem and improve the accuracy of HSI classification. The Gaussian pyramid effectively decomposes an HSI into multi-scale structures, extracting features at different scales by stepwise filtering and downsampling. This paper therefore proposes a Gaussian pyramid based multi-scale feature extraction (MSFE) classification method for HSI. First, the HSI is decomposed into a Gaussian pyramid to extract multi-scale features. Second, probability maps are constructed in each layer of the pyramid, and edge-preserving filtering (EPF) algorithms are employed to further refine the details. Finally, the classification map is obtained by majority voting. Compared with other spectral-spatial classification methods, the proposed method not only extracts features at different scales but also better preserves detailed structures and the edge regions of the image. Experiments performed on three real hyperspectral datasets show that the proposed method achieves competitive classification accuracy.
APA, Harvard, Vancouver, ISO, and other styles
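The pipeline summarized in the abstract above (Gaussian-pyramid decomposition by stepwise filtering and downsampling, then fusion of per-scale classification maps by majority voting) can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation; function names, the bandwidth `sigma`, and the number of levels are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(cube, levels=3, sigma=1.0):
    """Decompose a hyperspectral cube (H, W, B) into a Gaussian pyramid
    by repeated spatial smoothing and 2x downsampling (illustrative)."""
    pyramid = [cube]
    for _ in range(levels - 1):
        # smooth spatially only (sigma 0 along the band axis), then subsample
        smoothed = gaussian_filter(pyramid[-1], sigma=(sigma, sigma, 0))
        pyramid.append(smoothed[::2, ::2, :])
    return pyramid

def majority_vote(label_maps):
    """Fuse per-scale classification maps (each already upsampled back to
    the same H x W grid) by pixel-wise majority voting."""
    stacked = np.stack(label_maps, axis=0)          # (scales, H, W)
    n_classes = int(stacked.max()) + 1
    votes = np.stack([(stacked == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)                     # (H, W) fused label map
```

In practice each pyramid level would be classified independently (the paper uses probability maps refined by edge-preserving filtering) before the voting step.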
49

Zhang, Jingwen. "Music Feature Extraction and Classification Algorithm Based on Deep Learning." Scientific Programming 2021 (May 25, 2021): 1–9. http://dx.doi.org/10.1155/2021/1651560.

Full text
Abstract:
With the rapid development of information and communication technology, digital music has grown explosively. Music feature extraction and classification, which determine how quickly and accurately users can retrieve the music they want from a huge music repository, are an important part of music information retrieval and have become a research hotspot in recent years. Traditional music classification approaches rely on a large number of hand-designed acoustic features, whose design requires knowledge and in-depth understanding of the music domain; features designed for one classification task are often neither universal nor comprehensive. Existing approaches thus have two shortcomings: manually extracted features cannot guarantee validity and accuracy, and traditional machine learning classifiers perform poorly on multiclass problems and cannot be trained on large-scale data. This paper therefore converts the audio signal of music into a spectrogram as a unified representation, avoiding manual feature selection. Based on the characteristics of the spectrogram, the work combines 1D convolution, a gating mechanism, residual connections, and an attention mechanism into a convolutional neural network model for music feature extraction and classification that can extract spectrogram characteristics more relevant to the music category. Finally, comparison and ablation experiments are designed; the results show that this approach outperforms traditional manual models and machine-learning-based approaches.
APA, Harvard, Vancouver, ISO, and other styles
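The building block named in the abstract above (1D convolution with a gating mechanism and a residual connection) can be illustrated with a small numpy sketch of a gated linear unit. This is an assumption-laden sketch for exposition only, not the paper's model: kernel sizes, weight shapes, and the absence of the attention mechanism are all simplifications.

```python
import numpy as np

def conv1d(x, w):
    """Valid-mode 1D convolution of x (T, C_in) with kernel w (K, C_in, C_out)."""
    K, _, C_out = w.shape
    T = x.shape[0] - K + 1
    out = np.zeros((T, C_out))
    for t in range(T):
        # contract over the kernel-tap and input-channel axes
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1]))
    return out

def gated_residual_block(x, w_a, w_b):
    """Gated linear unit with a residual connection:
    y = x + conv(x, w_a) * sigmoid(conv(x, w_b)), 'same' padding (odd K)."""
    K = w_a.shape[0]
    pad = K // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    a = conv1d(xp, w_a)
    g = 1.0 / (1.0 + np.exp(-conv1d(xp, w_b)))  # sigmoid gate in [0, 1]
    return x + a * g                             # residual connection
```

The gate `g` lets the network suppress spectrogram regions irrelevant to the music category, while the residual path keeps gradients flowing through deep stacks of such blocks.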
50

KWAK, NOJUN. "FEATURE EXTRACTION BASED ON DIRECT CALCULATION OF MUTUAL INFORMATION." International Journal of Pattern Recognition and Artificial Intelligence 21, no. 07 (November 2007): 1213–31. http://dx.doi.org/10.1142/s0218001407005892.

Full text
Abstract:
In many pattern recognition problems, it is desirable to reduce the number of input features by extracting the features most relevant to the problem. By focusing only on problem-relevant features, the dimensionality can be greatly reduced, resulting in better generalization performance with less computational complexity. This paper proposes a feature extraction method for classification problems. The proposed algorithm searches for a set of linear combinations of the original features whose mutual information with the output class is maximized. The mutual information between the extracted features and the output class is calculated using probability density estimation based on the Parzen window method, and a greedy algorithm using gradient descent determines the new features. The computational load is proportional to the square of the number of samples. Applied to several classification problems, the proposed method showed better or comparable performance than conventional feature extraction methods.
APA, Harvard, Vancouver, ISO, and other styles
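The objective described in the abstract above, the mutual information between a linear projection and the class label estimated with Parzen windows, can be sketched as follows. This is a hedged illustration under simplifying assumptions (a single 1D projection, a fixed Gaussian bandwidth `h`, entropies approximated on the training samples themselves), not the paper's exact estimator or its gradient-descent search.

```python
import numpy as np

def parzen_density(points, centers, h):
    """Parzen-window (Gaussian kernel) density estimate of 1D `points`
    from sample `centers` with bandwidth h."""
    d = points[:, None] - centers[None, :]
    k = np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return k.mean(axis=1)

def mutual_information(w, X, y, h=0.3):
    """Estimate I(w^T x; C) = H(f) - H(f | C) for a candidate projection w,
    where f = X @ w, using sample averages of -log p as entropy estimates."""
    f = X @ w
    H_f = -np.mean(np.log(parzen_density(f, f, h) + 1e-12))
    H_f_given_c = 0.0
    for c in np.unique(y):
        fc = f[y == c]
        prior = fc.size / f.size
        H_f_given_c += prior * -np.mean(np.log(parzen_density(fc, fc, h) + 1e-12))
    return H_f - H_f_given_c
```

A projection that separates the classes shrinks the class-conditional entropy relative to the marginal entropy, so this score is what the paper's greedy gradient search would drive upward; the double loop over samples is where the quadratic cost in the number of samples comes from.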