Academic literature on the topic 'Feature Extraction and Classification'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Feature Extraction and Classification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Feature Extraction and Classification"

1

Marnur, Akshata M. "Feature Extraction and Image classification." International Journal for Research in Applied Science and Engineering Technology 6, no. 6 (June 30, 2018): 637–49. http://dx.doi.org/10.22214/ijraset.2018.6099.

2

Rana, M., and S. Kharel. "FEATURE EXTRACTION FOR URBAN AND AGRICULTURAL DOMAINS USING ECOGNITION DEVELOPER." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W6 (July 26, 2019): 609–15. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w6-609-2019.

Abstract:
Feature extraction has always been a challenging task in geo-spatial studies, in urban as well as agricultural areas. Since the advent of eCognition Developer, segmentation techniques and classification algorithms that help automate feature extraction have been developed, which has been a boon for scientists and researchers in the field of geomatics. This research demonstrates the potential of eCognition Developer for extracting features in agricultural and urban areas using various classification techniques. Rule-based and SVM classification were used for feature extraction in urban areas, whereas Feature Space Optimization and K-Nearest Neighbor were used for classifying agricultural features. The results show that rule-based classification yields more accurate results for urban areas, whereas Feature Space Optimization together with object-based classification gave higher accuracy for agricultural areas.
3

Suhaidi, Mustazzihim, Rabiah Abdul Kadir, and Sabrina Tiun. "A REVIEW OF FEATURE EXTRACTION METHODS ON MACHINE LEARNING." Journal of Information System and Technology Management 6, no. 22 (September 1, 2021): 51–59. http://dx.doi.org/10.35631/jistm.622005.

Abstract:
Extracting features from input data is vital for successful classification and machine learning tasks. Classification is the process of assigning an object to one of several predefined categories. Many different feature selection and feature extraction methods exist and are widely used. Feature extraction is a transformation of large input data into a low-dimensional feature vector, which serves as input to a classification or machine learning algorithm. The task of feature extraction has major challenges, which are discussed in this paper. The challenge is to learn and extract knowledge from text datasets so as to make correct decisions. The objective of this paper is to give an overview of methods used in feature extraction for various applications, with a dataset containing a collection of texts taken from social media.
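The transformation this abstract describes, a large text input reduced to a fixed-length feature vector for a downstream classifier, can be illustrated with a minimal bag-of-words sketch. The vocabulary, documents, and function name here are illustrative only and are not taken from the paper:

```python
from collections import Counter

def bow_features(texts, vocabulary):
    """Map each text to a fixed-length vector of word counts over a chosen vocabulary."""
    vectors = []
    for text in texts:
        counts = Counter(text.lower().split())
        vectors.append([counts.get(word, 0) for word in vocabulary])
    return vectors

docs = ["the cat sat", "the dog barked at the cat"]
vocab = ["cat", "dog", "the"]
print(bow_features(docs, vocab))  # [[1, 0, 1], [1, 1, 2]]
```

Real systems would add tokenisation, stop-word removal, and weighting (e.g. TF-IDF), but the core idea is the same: text of any length becomes a vector whose dimension is fixed by the vocabulary.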
4

Kusuma, Arya, De Rosal Ignatius Moses Setiadi, and M. Dalvin Marno Putra. "Tomato Maturity Classification using Naive Bayes Algorithm and Histogram Feature Extraction." Journal of Applied Intelligent System 3, no. 1 (August 27, 2018): 39–48. http://dx.doi.org/10.33633/jais.v3i1.1988.

Abstract:
Tomatoes have nutritional content that is very beneficial for human health and are one source of vitamins and minerals. Tomato classification plays an important role in many aspects of tomato distribution and sales. Classification can be done on images by extracting features and then classifying them with certain methods. This research proposes a classification technique using histogram feature extraction and a Naïve Bayes classifier. Histogram feature extraction is widely used and plays a role in the classification results. Naïve Bayes is proposed because it has high accuracy and high computational speed when applied to large databases, is robust to isolated noise points, and requires only a small amount of training data to estimate the parameters needed for classification. The proposed classification is divided into three classes, namely raw, mature, and rotten. Based on the results of the experiment using 75 training samples and 25 testing samples, 76% accuracy was obtained.
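As a rough illustration of this kind of pipeline, histogram features feeding a Naïve Bayes classifier, here is a minimal sketch on synthetic dark "raw" versus bright "mature" images. The bin count, image sizes, and pixel ranges are invented for the example, and the classifier is a generic Gaussian Naïve Bayes, not necessarily the authors' exact formulation:

```python
import numpy as np

def histogram_features(image, bins=8):
    """Concatenated per-channel intensity histograms, each normalised to sum to 1."""
    feats = []
    for c in range(image.shape[-1]):
        h, _ = np.histogram(image[..., c], bins=bins, range=(0, 256))
        feats.append(h / h.sum())
    return np.concatenate(feats)

class GaussianNB:
    """Plain Gaussian Naive Bayes: per-class feature means, variances, and priors."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self

    def predict(self, X):
        # log P(c) + sum_d log N(x_d | mu_cd, var_cd), maximised over classes c
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                     + (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return self.classes[np.argmax(np.log(self.prior) + ll, axis=1)]

# Synthetic stand-ins: dark "raw" images vs bright "mature" ones.
rng = np.random.default_rng(0)
raw = [rng.integers(0, 100, size=(16, 16, 3)) for _ in range(10)]
mature = [rng.integers(150, 256, size=(16, 16, 3)) for _ in range(10)]
X = np.array([histogram_features(im) for im in raw + mature])
y = np.array([0] * 10 + [1] * 10)
model = GaussianNB().fit(X, y)
```

Because the two synthetic classes occupy disjoint intensity ranges, their histogram mass falls into different bins and the classifier separates them cleanly.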
5

Ge, Zixian, Guo Cao, Hao Shi, Youqiang Zhang, Xuesong Li, and Peng Fu. "Compound Multiscale Weak Dense Network with Hybrid Attention for Hyperspectral Image Classification." Remote Sensing 13, no. 16 (August 20, 2021): 3305. http://dx.doi.org/10.3390/rs13163305.

Abstract:
Recently, hyperspectral image (HSI) classification has become a popular research direction in remote sensing. The emergence of convolutional neural networks (CNNs) has greatly promoted the development of this field and demonstrated excellent classification performance. However, due to the particularity of HSIs, redundant information and limited samples pose huge challenges for extracting strong discriminative features. In addition, addressing how to fully mine the internal correlation of the data or features based on the existing model is also crucial in improving classification performance. To overcome the above limitations, this work presents a strong feature extraction neural network with an attention mechanism. Firstly, the original HSI is weighted by means of the hybrid spectral–spatial attention mechanism. Then, the data are input into a spectral feature extraction branch and a spatial feature extraction branch, composed of multiscale feature extraction modules and weak dense feature extraction modules, to extract high-level semantic features. These two features are compressed and fused using the global average pooling and concat approaches. Finally, the classification results are obtained by using two fully connected layers and one Softmax layer. A performance comparison shows the enhanced classification performance of the proposed model compared to the current state of the art on three public datasets.
6

Yu, Gang, Ying Zi Lin, and Sagar Kamarthi. "Wavelets-Based Feature Extraction for Texture Classification." Advanced Materials Research 97-101 (March 2010): 1273–76. http://dx.doi.org/10.4028/www.scientific.net/amr.97-101.1273.

Abstract:
Texture classification is a necessary task in a wide variety of application areas such as manufacturing, textiles, and medicine. In this paper, we propose a novel wavelet-based feature extraction method for robust, scale-invariant and rotation-invariant texture classification. The method divides the 2-D wavelet coefficient matrices into 2-D clusters and then computes features from the energies inherent in these clusters. The features that carry the information effective for classifying texture images are computed from the energy content of the clusters, and these feature vectors are input to a neural network for texture classification. The results show that the discrimination performance obtained with the proposed cluster-based feature extraction method is superior to that obtained with conventional feature extraction methods, and robust for rotation- and scale-invariant texture classification.
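The idea of deriving texture features from the energy of wavelet coefficients can be sketched with a single-level 2-D Haar transform. The paper's cluster scheme is more elaborate; whole-subband energies stand in here as the simplest instance, and all names and test images are illustrative:

```python
import numpy as np

def haar2d(image):
    """One level of the 2-D Haar transform: LL, LH, HL, HH subbands."""
    a = (image[0::2] + image[1::2]) / 2.0   # row averages
    d = (image[0::2] - image[1::2]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def energy_features(image):
    """Normalised energy of each subband, used as a 4-dimensional texture descriptor."""
    energies = np.array([np.sum(s ** 2) for s in haar2d(image.astype(float))])
    return energies / energies.sum()

flat = np.ones((8, 8))                 # uniform patch: all energy in LL
stripes = np.tile([0.0, 1.0], (8, 4))  # vertical stripes: energy shifts into LH
```

A flat patch concentrates all its energy in the low-frequency LL subband, while a striped texture pushes energy into the detail subbands, which is what makes such descriptors useful for discriminating textures.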
7

Yang, Xiao Li, Qiong He, and Fen Yang. "Feature Extraction for Classification of Proteomic Profile." Advanced Materials Research 756-759 (September 2013): 4576–80. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.4576.

Abstract:
This work studies feature extraction for the classification of proteomic profiles. We evaluated four methods: principal component analysis (PCA), independent component analysis (ICA), locally linear embedding (LLE) and weighted maximum margin criterion (WMMC). PCA, ICA and LLE extract features based on traditional low-dimensional mapping techniques; WMMC, by contrast, extracts features according to the classification goal. To study the classification performance of PCA, ICA, LLE and WMMC in detail, we used two well-known classification methods, support vector machine (SVM) and Fisher discriminant analysis (FDA), to classify the profiles. The results show that WMMC has relatively good performance in terms of prediction accuracy, sensitivity and specificity for diagnosis; it can correctly identify features with high discriminative ability from high-dimensional proteomic profiles. When the feature set size was reduced to less than 10, PCA, ICA and LLE lost a great deal of classification information, and the prediction accuracies were less than 90%. However, WMMC retained most of the classification information: its prediction accuracies, sensitivities and specificities were more than 95%. WMMC is thus well suited to proteomic profile classification. As for the classifier, FDA is sensitive to the choice of feature extraction.
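Of the four methods compared, PCA is the most common baseline. A minimal sketch of PCA feature extraction via eigendecomposition of the sample covariance follows; the dimensions and data are synthetic stand-ins, not proteomic profiles:

```python
import numpy as np

def pca_features(X, n_components):
    """Project data onto the leading eigenvectors of the sample covariance matrix."""
    Xc = X - X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))  # ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return Xc @ top

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
X[:, 0] *= 10.0          # most variance lives in one direction
Z = pca_features(X, 2)   # 6-D profiles reduced to 2-D features
```

Note that PCA picks directions of maximum variance regardless of class labels, which is exactly the limitation the abstract attributes to it relative to the label-aware WMMC.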
8

Fu, Yun, Shuicheng Yan, and Thomas S. Huang. "Classification and Feature Extraction by Simplexization." IEEE Transactions on Information Forensics and Security 3, no. 1 (2008): 91–100. http://dx.doi.org/10.1109/tifs.2007.916280.

9

Kuo, Bor-Chen, and D. A. Landgrebe. "Nonparametric weighted feature extraction for classification." IEEE Transactions on Geoscience and Remote Sensing 42, no. 5 (May 2004): 1096–105. http://dx.doi.org/10.1109/tgrs.2004.825578.

10

Zheng, Wenming. "Heteroscedastic Feature Extraction for Texture Classification." IEEE Signal Processing Letters 16, no. 9 (September 2009): 766–69. http://dx.doi.org/10.1109/lsp.2009.2023939.


Dissertations / Theses on the topic "Feature Extraction and Classification"

1

Liu, Raymond. "Feature extraction in classification." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/23634.

Abstract:
Feature extraction, or dimensionality reduction, is an essential part of many machine learning applications. The necessity for feature extraction stems from the curse of dimensionality and the high computational cost of manipulating high-dimensional data. In this thesis we focus on feature extraction for classification. There are several approaches, and we will focus on two: the increasingly popular information-theoretic approach, and the classical distance-based, or variance-based, approach. Current algorithms for information-theoretic feature extraction are usually iterative. In contrast, PCA and LDA are popular examples of feature extraction techniques that can be solved by eigendecomposition and do not require an iterative procedure. We study the behaviour of an example of an iterative algorithm that maximises Kapur's quadratic mutual information by gradient ascent, and propose a new estimate of mutual information that can be maximised by closed-form eigendecomposition. This new technique is more computationally efficient than iterative algorithms, and its behaviour is more reliable and predictable than gradient ascent. Using a general framework of eigendecomposition-based feature extraction, we show a connection between information-theoretic and distance-based feature extraction. Using the distance-based approach, we study the effects of high input dimensionality and over-fitting on feature extraction, and propose a family of eigendecomposition-based algorithms that can solve this problem. We investigate the relationship between class-discrimination and over-fitting, and show why the advantages of information-theoretic feature extraction become less relevant in high-dimensional spaces.
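The thesis contrasts iterative information-theoretic methods with closed-form techniques such as LDA. For the two-class case, Fisher's discriminant direction has a well-known closed form, sketched here on synthetic data; this is the textbook construction, not the thesis's proposed algorithm:

```python
import numpy as np

def fisher_direction(X0, X1):
    """Closed-form two-class Fisher discriminant: w proportional to Sw^{-1} (mu1 - mu0)."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(2)
X0 = rng.normal(0.0, 1.0, (200, 3))
X1 = rng.normal(0.0, 1.0, (200, 3))
X1[:, 0] += 5.0                  # classes separated along the first axis
w = fisher_direction(X0, X1)     # 1-D feature that preserves the separation
```

The appeal of such closed-form solutions, which the thesis generalises, is exactly what the abstract notes: no gradient ascent, no convergence concerns, one linear solve (or eigendecomposition in the multi-class case).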
2

Goodman, Steve. "Feature extraction and classification." Thesis, University of Sunderland, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301872.

3

Elliott, Rodney Bruce. "Feature extraction techniques for grasp classification." Thesis, University of Canterbury. Mechanical Engineering, 1998. http://hdl.handle.net/10092/3447.

Abstract:
This thesis examines the ability of four signal parameterisation techniques to provide discriminatory information between six different classes of signal. This was done with a view to assessing the suitability of the four techniques for inclusion in the real-time control scheme of a next-generation robotic prosthesis. Each class of signal correlates to a particular type of grasp that the robotic prosthesis is able to form. Discrimination between the six classes of signal was done on the basis of parameters extracted from four channels of electromyographic (EMG) data recorded from muscles in the forearm. Human skeletal muscle tissue produces EMG signals whenever it contracts. Therefore, provided that the EMG signals of the muscles controlling the movements of the hand vary sufficiently when forming the different grasp types, discrimination between the grasps is possible. While it is envisioned that the chosen command discrimination system will be used by mid-forearm amputees to control a robotic prosthesis, the viability of the different parameterisation techniques was tested on data gathered from able-bodied volunteers in order to establish an upper limit of performance. The muscles from which signals were recorded are: the extensor pollicis brevis and extensor pollicis longus pair (responsible for moving the thumb); the extensor communis digitorum (responsible for moving the middle and index fingers); and the extensor carpi ulnaris (responsible for moving the little finger). The four signal parameterisation techniques that were evaluated are: 1. Envelope Maxima. This method parameterises each EMG signal by the maximum value of a smoothed fitted signal envelope. A tenth-order polynomial is fitted to the rectified EMG signal peaks, and the maximum value of the polynomial is used to parameterise the signal. 2. Orthogonal Decomposition. This method uses a set of orthogonal functions to decompose the EMG signal into a finite set of orthogonal components. Each burst is then parameterised by the coefficients of the set of orthogonal functions. Two sets of orthogonal functions were tested: the Legendre polynomials, and the wavelet packets associated with the scaling functions of the Haar wavelet (referred to as the Haar wavelet for brevity). 3. Global Dynamical Model. This method uses a discretised set of nonlinear ordinary differential equations to model the dynamical processes that produced the recorded EMG signals. The coefficients of this model are then used to parameterise the EMG signal. 4. EMG Histogram. This method formulates a histogram detailing the frequency with which the EMG signal enters particular voltage bins, and uses these frequency measurements to parameterise the signal. Ten sets of EMG data were gathered and processed to extract the desired parameters. Each data set consisted of 600 grasps: 100 grasp records of four channels of EMG data for each of the six grasp classes. From this data a hit-rate statistic was formed for each feature extraction technique. The mean hit rates obtained from the four signal parameterisation techniques are summarised in Table 1.

Table 1: Hit Rate Summary.
Envelope Maxima: 75%
Legendre Polynomials: 77%
Haar Wavelets: 79%
Global Dynamical Model: 75%
EMG Histogram: 81%

The EMG histogram provided the best mean hit rate of all the signal parameterisation techniques, at 81%. However, like all of the signal parameterisations that were tested, there was considerable variance in hit rates between the ten sets of data. This has been attributed to the manner in which the electrodes used to record the EMG signals were positioned. By locating the muscles of interest more accurately, consistent hit rates of 95% are well within reach. The fact that the EMG histogram produces the best mean hit rates is surprising given its relative simplicity. However, this simplicity makes the EMG histogram feature ideal for inclusion in a real-time control scheme.
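The EMG histogram feature described above, counting how often the signal falls into fixed voltage bins, is simple enough to sketch directly. The bin count and voltage range below are illustrative; the thesis's exact settings may differ:

```python
import numpy as np

def emg_histogram(signal, n_bins=9, v_range=(-1.0, 1.0)):
    """Fraction of samples falling into each fixed voltage bin."""
    counts, _ = np.histogram(signal, bins=n_bins, range=v_range)
    return counts / counts.sum()

quiet = emg_histogram(np.zeros(100))  # a silent channel: all mass in the central bin
```

Stronger contractions push histogram mass from the central bin toward the outer bins, which is what lets such a cheap feature discriminate grasp types in real time.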
4

Chilo, José. "Feature extraction for low-frequency signal classification /." Stockholm : Fysik, Physics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4661.

5

Graf, Arnulf B. A. "Classification and feature extraction in man and machine." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=972533508.

6

Hamsici, Onur C. "Bayes Optimality in Classification, Feature Extraction and Shape Analysis." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218513562.

7

Nilsson, Mikael. "On feature extraction and classification in speech and image processing /." Karlskrona : Department of Signal Processing, School of Engineering, Blekinge Institute of Technology, 2007. http://www.bth.se/fou/forskinfo.nsf/allfirst2/fcbe16e84a9ba028c12573920048bce9?OpenDocument.

8

Coath, Martin. "A computational model of auditory feature extraction and sound classification." Thesis, University of Plymouth, 2005. http://hdl.handle.net/10026.1/1822.

Abstract:
This thesis introduces a computer model that incorporates responses similar to those found in the cochlea, in sub-cortical auditory processing, and in auditory cortex. The principal aim of this work is to show that this can form the basis for a biologically plausible mechanism of auditory stimulus classification. We show that this classification is robust to stimulus variation and time compression. In addition, the response of the system is shown to support multiple, concurrent, behaviourally relevant classifications of natural stimuli (speech). The model incorporates transient enhancement, an ensemble of spectro-temporal filters, and a simple measure analogous to the idea of visual salience to produce a quasi-static description of the stimulus suitable either for classification with an analogue artificial neural network or, using appropriate rate coding, for a classifier based on artificial spiking neurons. We also show that the spectro-temporal ensemble can be derived from a limited class of 'formative' stimuli, consistent with a developmental interpretation of ensemble formation. In addition, ensembles chosen on information-theoretic grounds consist of filters with relatively simple geometries, which is consistent with reports of responses in mammalian thalamus and auditory cortex. A powerful feature of this approach is that the ensemble response, from which salient auditory events are identified, amounts to a stimulus-ensemble-driven method of segmentation that respects the envelope of the stimulus and leads to a quasi-static representation of auditory events suitable for spike rate coding. We also present evidence that the encoded auditory events may form the basis of a representation of similarity, or second-order isomorphism, which implies a representational space that respects similarity relationships between stimuli, including novel stimuli.
9

Benn, David E. "Model-based feature extraction and classification for automatic face recognition." Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324811.

10

Zheng, Yue Chu. "Feature extraction for chart pattern classification in financial time series." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3950623.


Books on the topic "Feature Extraction and Classification"

1

Lee, Chulhee. Feature extraction and classification algorithms for high dimensional data. West Lafayette, Ind: School of Electrical Engineering, Purdue University, 1993.

2

Eyben, Florian. Real-time Speech and Music Classification by Large Audio Feature Space Extraction. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-27299-3.

3

Guyon, Isabelle, Masoud Nikravesh, Steve Gunn, and Lotfi A. Zadeh, eds. Feature Extraction. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/978-3-540-35488-8.

4

Chaki, Jyotismita, and Nilanjan Dey. Image Color Feature Extraction Techniques. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-5761-3.

5

Liu, Huan, and Hiroshi Motoda, eds. Feature Extraction, Construction and Selection. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-5725-8.

6

Aguado, Alberto S., ed. Feature Extraction and Image Processing. Oxford: Newnes, 2002.

7

Rand, Robert S. Texture analysis and cartographic feature extraction. Fort Belvoir, Va: U.S. Army Corps of Engineers, Engineer Topographic Laboratories, 1985.

8

Hu, Li, and Zhiguo Zhang, eds. EEG Signal Processing and Feature Extraction. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-9113-2.

9

Agarwal, Basant, and Namita Mittal. Prominent Feature Extraction for Sentiment Analysis. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-25343-5.

10

Taguchi, Y.-h. Unsupervised Feature Extraction Applied to Bioinformatics. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-22456-1.


Book chapters on the topic "Feature Extraction and Classification"

1

Cesare, Silvio, and Yang Xiang. "Feature Extraction." In Software Similarity and Classification, 57–61. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-2909-7_7.

2

Dougherty, Geoff. "Feature Extraction and Selection." In Pattern Recognition and Classification, 123–41. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-5323-9_7.

3

Abe, Shigeo. "Feature Selection and Extraction." In Support Vector Machines for Pattern Classification, 331–41. London: Springer London, 2010. http://dx.doi.org/10.1007/978-1-84996-098-4_7.

4

Ghosh, Anil Kumar, and Smarajit Bose. "Feature Extraction for Nonlinear Classification." In Lecture Notes in Computer Science, 170–75. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11590316_21.

5

Vallez, Noelia, Anibal Pedraza, Carlos Sánchez, Jesus Salido, Oscar Deniz, and Gloria Bueno. "Diatom Feature Extraction and Classification." In Modern Trends in Diatom Identification, 151–64. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-39212-3_9.

6

Verma, B., and S. Kulkarni. "Texture Feature Extraction and Classification." In Computer Analysis of Images and Patterns, 228–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44692-3_28.

7

Das, Rik. "Content-Based Feature Extraction: Color Averaging." In Content-Based Image Classification, 39–59. Boca Raton: Chapman and Hall/CRC, 2020. http://dx.doi.org/10.1201/9780429352928-3.

8

Das, Rik. "Content-Based Feature Extraction: Image Binarization." In Content-Based Image Classification, 61–91. Boca Raton: Chapman and Hall/CRC, 2020. http://dx.doi.org/10.1201/9780429352928-4.

9

Das, Rik. "Content-Based Feature Extraction: Image Transforms." In Content-Based Image Classification, 93–115. Boca Raton: Chapman and Hall/CRC, 2020. http://dx.doi.org/10.1201/9780429352928-5.

10

Das, Rik. "Content-Based Feature Extraction: Morphological Operators." In Content-Based Image Classification, 117–31. Boca Raton: Chapman and Hall/CRC, 2020. http://dx.doi.org/10.1201/9780429352928-6.


Conference papers on the topic "Feature Extraction and Classification"

1

Stuhlsatz, Andre, Jens Lippel, and Thomas Zielke. "Feature Extraction for Simple Classification." In 2010 20th International Conference on Pattern Recognition (ICPR). IEEE, 2010. http://dx.doi.org/10.1109/icpr.2010.377.

2

Gorski, M., and J. Zarzycki. "Feature extraction in vehicle classification." In 2012 International Conference on Signals and Electronic Systems (ICSES 2012). IEEE, 2012. http://dx.doi.org/10.1109/icses.2012.6382264.

3

Sazdic-Jotic, Boban M., Danilo R. Obradovic, Dimitrije M. Bujakovic, and Boban P. Bondzulic. "Feature Extraction for Drone Classification." In 2019 14th International Conference on Advanced Technologies, Systems and Services in Telecommunications (TELSIKS). IEEE, 2019. http://dx.doi.org/10.1109/telsiks46999.2019.9002087.

4

Xia, Deshen, and Hua Li. "Region feature extraction and classification." In Photonics for Industrial Applications, edited by David P. Casasent. SPIE, 1994. http://dx.doi.org/10.1117/12.188933.

5

Tsai, Min-Hsuan, Shen-Fu Tsai, and Thomas S. Huang. "Hierarchical image feature extraction and classification." In the international conference. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1873951.1874136.

6

Saidi, Rabie, Sabeur Aridhi, Engelbert Mephu Nguifo, and Mondher Maddouri. "Feature extraction in protein sequences classification." In the ACM Conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2382936.2383060.

7

Zhang, Wei, Xiangyang Xue, Zichen Sun, Yue-Fei Guo, Mingmin Chi, and Hong Lu. "Efficient Feature Extraction for Image Classification." In 2007 IEEE 11th International Conference on Computer Vision. IEEE, 2007. http://dx.doi.org/10.1109/iccv.2007.4409058.

8

Uddin, M. P., M. A. Mamun, and M. A. Hossain. "Feature extraction for hyperspectral image classification." In 2017 IEEE Region 10 Humanitarian Technology Conference (R10-HTC). IEEE, 2017. http://dx.doi.org/10.1109/r10-htc.2017.8288979.

9

Goodman, S. "Feature extraction algorithms for pattern classification." In 9th International Conference on Artificial Neural Networks: ICANN '99. IEE, 1999. http://dx.doi.org/10.1049/cp:19991199.

10

Stoyle, P. "Flash and persistent point scatterer feature extraction from radar images." In IET Seminar on High Resolution Imaging and Target Classification. IEE, 2006. http://dx.doi.org/10.1049/ic:20060076.


Reports on the topic "Feature Extraction and Classification"

1

Carin, Lawrence. ICA Feature Extraction and SVM Classification of FLIR Imagery. Fort Belvoir, VA: Defense Technical Information Center, September 2005. http://dx.doi.org/10.21236/ada441506.

2

Bahrampour, Soheil, Asok Ray, Soumalya Sarka, Thyagaraju Damarla, and Nasser M. Nasrabadi. Performance Comparison of Feature Extraction Algorithms for Target Detection and Classification. Fort Belvoir, VA: Defense Technical Information Center, January 2013. http://dx.doi.org/10.21236/ada580366.

3

Hurd, Harry L. Workstation Tools for Feature Extraction and Classification for Nonstationary and Transient Signals. Fort Belvoir, VA: Defense Technical Information Center, July 1992. http://dx.doi.org/10.21236/ada255389.

4

Pasion, Leonard. Feature Extraction and Classification of Magnetic and EMI Data, Camp Beale, CA. Fort Belvoir, VA: Defense Technical Information Center, May 2012. http://dx.doi.org/10.21236/ada569666.

5

Huynh, Quyen Q., Leon N. Cooper, Nathan Intrator, and Harel Shouval. Classification of Underwater Mammals using Feature Extraction Based on Time-Frequency Analysis and BCM Theory. Fort Belvoir, VA: Defense Technical Information Center, May 1996. http://dx.doi.org/10.21236/ada316962.

6

De Voir, Christopher. Wavelet Based Feature Extraction and Dimension Reduction for the Classification of Human Cardiac Electrogram Depolarization Waveforms. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.1739.

7

Richardson, J. Automatic feature extraction and classification from digital x-ray images. Final report, period ending 1 May 1995. Office of Scientific and Technical Information (OSTI), December 1995. http://dx.doi.org/10.2172/224901.

8

Chowdhury, Shwetadwip. Automated extraction of cellular features for a potentially robust classification scheme. Gaithersburg, MD: National Institute of Standards and Technology, 2010. http://dx.doi.org/10.6028/nist.ir.7738.

9

Billings, Stephen. Data Modeling, Feature Extraction, and Classification of Magnetic and EMI Data, ESTCP Discrimination Study, Camp Sibert, AL. Demonstration Report. Fort Belvoir, VA: Defense Technical Information Center, September 2008. http://dx.doi.org/10.21236/ada495600.

10

Anderson, Dana Z., Gan Zhou, and Germano Montemezzani. Temporal Feature Extraction in Photorefractive Resonators. Fort Belvoir, VA: Defense Technical Information Center, December 1994. http://dx.doi.org/10.21236/ada292908.
