Dissertations / Theses on the topic 'Feature Extraction and Classification'
Consult the top 50 dissertations / theses for your research on the topic 'Feature Extraction and Classification.'
Liu, Raymond. "Feature extraction in classification." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/23634.
Goodman, Steve. "Feature extraction and classification." Thesis, University of Sunderland, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301872.
Elliott, Rodney Bruce. "Feature extraction techniques for grasp classification." Thesis, University of Canterbury. Mechanical Engineering, 1998. http://hdl.handle.net/10092/3447.
Chilo, José. "Feature extraction for low-frequency signal classification." Stockholm : Fysik, Physics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4661.
Graf, Arnulf B. A. "Classification and feature extraction in man and machine." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=972533508.
Hamsici, Onur C. "Bayes Optimality in Classification, Feature Extraction and Shape Analysis." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218513562.
Nilsson, Mikael. "On feature extraction and classification in speech and image processing." Karlskrona : Department of Signal Processing, School of Engineering, Blekinge Institute of Technology, 2007. http://www.bth.se/fou/forskinfo.nsf/allfirst2/fcbe16e84a9ba028c12573920048bce9?OpenDocument.
Coath, Martin. "A computational model of auditory feature extraction and sound classification." Thesis, University of Plymouth, 2005. http://hdl.handle.net/10026.1/1822.
Benn, David E. "Model-based feature extraction and classification for automatic face recognition." Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324811.
Zheng, Yue Chu. "Feature extraction for chart pattern classification in financial time series." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3950623.
Bekiroglu, Yasemi. "Nonstationary feature extraction techniques for automatic classification of impact acoustic signals." Thesis, Högskolan Dalarna, Datateknik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:du-3592.
Full textMasip, Rodó David. "Face Classification Using Discriminative Features and Classifier Combination." Doctoral thesis, Universitat Autònoma de Barcelona, 2005. http://hdl.handle.net/10803/3051.
Full textPer altra banda, en la segon apart de la tesi explorem el rol de les característiques externes en el procés de classificació facial, i presentem un nou mètode per extreure un conjunt alineat de característiques a partir de la informació externa que poden ser combinades amb les tècniques clàssiques millorant els resultats globals de classificació.
As technology evolves, new applications dealing with face classification appear. In pattern recognition, faces are usually seen as points in a high-dimensional space defined by their pixel values. This approach must deal with several problems, such as the curse of dimensionality, the presence of partial occlusions, or local changes in illumination. Traditionally, only the internal features of face images have been used for classification purposes, and usually a feature extraction step is performed. Feature extraction techniques make it possible to reduce the influence of the problems mentioned, also reducing the noise inherent in natural images and learning invariant characteristics from face images. In the first part of this thesis some internal feature extraction methods are presented: Principal Component Analysis (PCA), Independent Component Analysis (ICA), Non-negative Matrix Factorization (NMF), and Fisher Linear Discriminant Analysis (FLD), all of them making some kind of assumption about the data to classify. The main contribution of our work is a non-parametric family of feature extraction techniques using the AdaBoost algorithm. Our method makes no assumptions about the data to classify, and incrementally builds the projection matrix taking into account the most difficult samples.
On the other hand, in the second part of this thesis we also explore the role of external features for face classification, and present a method for extracting an aligned feature set from external face information that can be combined with the classic internal features, improving the global performance of the face classification task.
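As a rough illustration of the internal-feature pipeline this abstract describes, the sketch below projects flattened face images onto a PCA subspace and classifies in that subspace. The data, dimensions and nearest-neighbour classifier are illustrative assumptions; the thesis's AdaBoost-based projection is not reproduced here.

```python
# Minimal sketch: PCA feature extraction followed by a simple classifier.
# Dataset, dimensions and classifier choice are illustrative assumptions,
# not the thesis's actual experimental setup.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 32 * 32))   # 400 face images flattened to pixel vectors
y = rng.integers(0, 2, size=400)      # binary label (e.g. a two-class face attribute)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Project onto a low-dimensional subspace, then classify in that subspace.
model = make_pipeline(PCA(n_components=50), KNeighborsClassifier(n_neighbors=3))
model.fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))
```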
Magnusson, Ludvig, and Johan Rovala. "AI Approaches for Classification and Attribute Extraction in Text." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-67882.
Shang, Changjing. "Principal features based texture classification using artificial neural networks." Thesis, Heriot-Watt University, 1995. http://hdl.handle.net/10399/1323.
Dilger, Samantha Kirsten Nowik. "Pushing the boundaries: feature extraction from the lung improves pulmonary nodule classification." Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/3071.
Sernheim, Mikael. "Experimental Study on Classifier Design and Text Feature Extraction for Short Text Classification." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-323214.
Fargeas, Aureline. "Classification, feature extraction and prediction of side effects in prostate cancer radiotherapy." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S022/document.
Full textProstate cancer is among the most common types of cancer worldwide. One of the standard treatments is external radiotherapy, which involves delivering ionizing radiation to a clinical target, in this instance the prostate and seminal vesicles. The goal of radiotherapy is to achieve a maximal local control while sparing neighboring organs (mainly the rectum and the bladder) to avoid normal tissue complications. Understanding the dose/toxicity relationships is a central question for improving treatment reliability at the inverse planning step. Normal tissue complication probability (NTCP) toxicity prediction models have been developed in order to predict toxicity events using dosimetric data. The main considered information are dose-volume histograms (DVH), which provide an overall representation of dose distribution based on the dose delivered per percentage of organ volume. Nevertheless, current dose-based models display limitations as they are not fully optimized; most of them do not include additional non-dosimetric information (patient, tumor and treatment characteristics). Furthermore, they do not provide any understanding of local relationships between dose and effect (dose-space/effect relationship) as they do not exploit the rich information from the 3D planning dose distributions. In the context of rectal bleeding prediction after prostate cancer external beam radiotherapy, the objectives of this thesis are: i) to extract relevant information from DVH and non-dosimetric variables, in order to improve existing NTCP models and ii) to analyze the spatial correlations between local dose and side effects allowing a characterization of 3D dose distribution at a sub-organ level. Thus, strategies aimed at exploiting the information from the radiotherapy planning (DVH and 3D planned dose distributions) were proposed. Firstly, based on independent component analysis, a new model for rectal bleeding prediction by combining dosimetric and non-dosimetric information in an original manner was proposed. Secondly, we have developed new approaches aimed at jointly taking advantage of the 3D planning dose distributions that may unravel the subtle correlation between local dose and side effects to classify and/or predict patients at risk of suffering from rectal bleeding, and identify regions which may be at the origin of this adverse event. More precisely, we proposed three stochastic methods based on principal component analysis, independent component analysis and discriminant nonnegative matrix factorization, and one deterministic method based on canonical polyadic decomposition of fourth order array containing planned dose. The obtained results show that our new approaches exhibit in general better performances than state-of-the-art predictive methods
Paraskevas, Ioannis. "Phase as a feature extraction tool for audio classification and signal localisation." Thesis, University of Surrey, 2005. http://epubs.surrey.ac.uk/843856/.
Brown, Dane. "Investigating combinations of feature extraction and classification for improved image-based multimodal biometric systems at the feature level." Thesis, Rhodes University, 2018. http://hdl.handle.net/10962/63470.
Ren, Bobby (Bobby B.). "Calibration, feature extraction and classification of water contaminants using a differential mobility spectrometer." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/53163.
Full textIncludes bibliographical references (p. 87-89).
High-Field Asymmetric Waveform Ion Mobility Spectrometry (FAIMS) is a chemical sensor that separates ions in the gaseous phase based on their mobility in high electric fields. A threefold approach was developed for both chemical-type classification and concentration classification of water contaminants from FAIMS signals. The three steps in this approach are calibration, feature extraction, and classification. Calibration was carried out to remove baseline fluctuation and other variations in FAIMS data sets. Four feature extraction algorithms were used to extract subsets of the signal that had high separation potential between two classes of signals. Finally, support vector machines were used for binary classification. The success of classification was measured both by using separability metrics to evaluate the separability of the extracted features, and by the percentage of correct classification (Pcc) in each task.
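A minimal sketch of this three-step pipeline (baseline removal, feature scoring, SVM) on synthetic spectra could look like the following; the particular calibration and scoring choices are assumptions, not the thesis's algorithms.

```python
# Illustrative calibration / feature-selection / SVM pipeline on fake "spectra".
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n, d = 200, 500
baseline = np.linspace(0.0, 1.0, d)            # slow drift common to all scans
X = rng.normal(size=(n, d)) + baseline
y = rng.integers(0, 2, size=n)
X[y == 1, 100:110] += 1.5                      # contaminant peak for class 1

X_cal = X - np.median(X, axis=0)               # "calibration": remove the common baseline

# Keep the most class-separating channels, then do binary SVM classification.
clf = make_pipeline(SelectKBest(f_classif, k=20), SVC(kernel="rbf", C=1.0))
clf.fit(X_cal[:150], y[:150])
print("held-out accuracy:", clf.score(X_cal[150:], y[150:]))
```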
Hapuarachchi, Pasan. "Feature selection and artifact removal in sleep stage classification." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2879.
Full textHowever, if some of these artifacts are removed prior to analysis, their job will be become easier. Furthermore, one of the biggest motivations, of our team's research is the construction of a portable device that can analyze the sleep data as they are being collected. For this task, the sleep data must be analyzed completely automatically in order to make the classifications.
The research presented in this thesis concerns itself with the denoising and the feature selection aspects of the team's goals. Since humans are able to process artifacts and ignore them prior to classification, an automated system should have the same capabilities, or close to them. As such, the denoising step is performed to condition the data prior to any other stage of the sleep stage classification process. As mentioned before, the denoising step, by itself, is useful to human EEG technicians as well.
The denoising step in this research mainly looks at EOG artifacts and artifacts isolated to a single EEG channel, such as electrode pop artifacts. The first two algorithms use wavelets exclusively (BWDA and WDA), while the third algorithm is a mixture of wavelets and Independent Component Analysis (IDA). With the BWDA algorithm, determining consistent thresholds proved to be a difficult task. With the WDA algorithm, the performance was better, since the selection of the thresholds was more straightforward and since there was more control over defining the duration of the artifacts. The IDA algorithm performed worse than the WDA algorithm. This could have been due to the small number of measurement channels or to the automated sub-classifier used to select the denoised EEG signal from the set of ICA demixed signals.
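A minimal sketch of the wavelet-thresholding idea behind a WDA-style denoising step is shown below, using PyWavelets; the wavelet, decomposition level and threshold rule are assumptions.

```python
# Wavelet soft-thresholding of a noisy 1D "EEG" trace (illustrative parameters).
import numpy as np
import pywt

rng = np.random.default_rng(2)
t = np.linspace(0, 4, 1024)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)
eeg[300:340] += 5.0                      # artifact burst (e.g. electrode pop)

coeffs = pywt.wavedec(eeg, "db4", level=5)
# Universal threshold estimated from the finest detail coefficients.
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thr = sigma * np.sqrt(2 * np.log(eeg.size))
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: eeg.size]
print("energy in artifact window after denoising:", float(np.sum(denoised[300:340] ** 2)))
```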
The feature selection stage is extremely important as it selects the most pertinent features to make a particular classification. Without such a step, the classifier will have to process useless data, which might result in a poorer classification. Furthermore, unnecessary features will take up valuable computer cycles as well. In a portable device, due to battery consumption, wasting computer cycles is not an option. The research presented in this thesis shows the importance of a systematic feature selection step in EEG classification. The feature selection step produced excellent results with a maximum use of just 5 features. During automated classification, this is extremely important as the automated classifier will only have to calculate 5 features for each given epoch.
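A short sketch of such a systematic feature-selection step, keeping only 5 features in front of a classifier, might look as follows (the scoring function and synthetic data are assumptions):

```python
# Keep only the 5 most informative features before classification.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=40, n_informative=5,
                           random_state=0)
pipe = make_pipeline(SelectKBest(mutual_info_classif, k=5), SVC())
print("CV accuracy with 5 selected features:",
      cross_val_score(pipe, X, y, cv=5).mean())
```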
Furuhashi, Takeshi, Tomohiro Yoshikawa, Kanta Tachibana, and Minh Tuan Pham. "Feature Extraction Based on Space Folding Model and Application to Machine Learning." 日本知能情報ファジィ学会, 2010. http://hdl.handle.net/2237/20689.
SCIS & ISIS 2010, Joint 5th International Conference on Soft Computing and Intelligent Systems and 11th International Symposium on Advanced Intelligent Systems. December 8-12, 2010, Okayama Convention Center, Okayama, Japan.
Zilberman, Eric R. "Autonomous time-frequency cropping and feature-extraction algorithms for classification of LPI radar modulations." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Jun%5FZilberman.pdf.
Full textSchnur, Steven R. "Identification and classification of OFDM based signals using preamble correlation and cyclostationary feature extraction." Thesis, Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Sep/09Sep%5FSchnur.pdf.
Thesis Advisor(s): Tummala, Murali; McEachen, John. September 2009. Subject terms: IEEE 802.11, IEEE 802.16, OFDM, Cyclostationary Feature Extraction, FFT Accumulation Method.
Smith, R. S. "Angular feature extraction and ensemble classification method for 2D, 2.5D and 3D face recognition." Thesis, University of Surrey, 2008. http://epubs.surrey.ac.uk/843069/.
Full textLozano, Vega Gildardo. "Image-based detection and classification of allergenic pollen." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS031/document.
The correct classification of airborne pollen is relevant for the medical treatment of allergies, and the regular manual process is costly and time consuming. Automatic processing would increase considerably the potential of pollen counting. Modern computer vision techniques enable the detection of discriminant pollen characteristics. In this thesis, a set of relevant image-based features for the recognition of top allergenic pollen taxa is proposed and analyzed. The foundation of our proposal is the evaluation of groups of features that can properly describe pollen in terms of shape, texture, size and apertures. The features are extracted from typical brightfield microscope images, which enables easy reproducibility of the method. A process of feature selection is applied to each group for the determination of relevance. Regarding apertures, a flexible method for detection, localization and counting of apertures of different pollen taxa with varying appearances is proposed. Aperture description is based on primitive images following the Bag-of-Words strategy. A confidence map is built from the classification confidence of sampled regions. From this map, aperture features are extracted, which include the count of apertures. The method is designed to be extended modularly to new aperture types, employing the same algorithm to build individual classifiers. The feature groups are tested individually and jointly on the most allergenic pollen taxa in Germany. They were shown to overcome the intra-class variance and inter-class similarity in an SVM classification scheme. The global joint test led to an accuracy of 98.2%, comparable to state-of-the-art procedures.
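A much-simplified Bag-of-Words sketch in the spirit of the aperture description above: local patch descriptors are clustered into visual words, each image is encoded as a word histogram, and an SVM classifies the histograms. The random descriptors and all sizes are stand-ins for illustration.

```python
# Toy Bag-of-Words pipeline: visual vocabulary -> word histograms -> SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_images, patches_per_image, patch_dim, n_words = 60, 50, 16, 32

# Fake local descriptors; in practice these would come from sampled regions
# of brightfield microscope images.
descriptors = [rng.normal(size=(patches_per_image, patch_dim)) for _ in range(n_images)]
labels = rng.integers(0, 2, size=n_images)

vocab = KMeans(n_clusters=n_words, n_init=10, random_state=0)
vocab.fit(np.vstack(descriptors))

def bow_histogram(desc):
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(d) for d in descriptors])
clf = SVC(kernel="rbf").fit(X[:45], labels[:45])
print("held-out accuracy:", clf.score(X[45:], labels[45:]))
```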
Malkhare, Rohan V. "Scavenger: A Junk Mail Classification Program." Scholar Commons, 2003. https://scholarcommons.usf.edu/etd/1145.
Saidi, Rabie. "Motif extraction from complex data : case of protein classification." Thesis, Clermont-Ferrand 2, 2012. http://www.theses.fr/2012CLF22272/document.
Full textThe classification of biological data is one of the significant challenges inbioinformatics, as well for protein as for nucleic data. The presence of these data in hugemasses, their ambiguity and especially the high costs of the in vitro analysis in terms oftime and resources, make the use of data mining rather a necessity than a rational choice.However, the data mining techniques, which often process data under the relational format,are confronted with the inappropriate format of the biological data. Hence, an inevitablestep of pre-processing must be established.This thesis deals with the protein data preprocessing as a preparation step before theirclassification. We present motif extraction as a reliable way to address that task. The extractedmotifs are used as descriptors to encode proteins into feature vectors. This enablesthe use of known data mining classifiers which require this format. However, designing asuitable feature space, for a set of proteins, is not a trivial task.We deal with two kinds of protein data i:e:, sequences and tri-dimensional structures. In thefirst axis i:e:, protein sequences, we propose a novel encoding method that uses amino-acidsubstitution matrices to define similarity between motifs during the extraction step. Wedemonstrate the efficiency of such approach by comparing it with several encoding methods,using some classifiers. We also propose new metrics to study the robustness of some ofthese methods when perturbing the input data. These metrics allow to measure the abilityof the method to reveal any change occurring in the input data and also its ability to targetthe interesting motifs. The second axis is dedicated to 3D protein structures which are recentlyseen as graphs of amino acids. We make a brief survey on the most used graph-basedrepresentations and we propose a naïve method to help with the protein graph making. Weshow that some existing and widespread methods present remarkable weaknesses and do notreally reflect the real protein conformation. Besides, we are interested in discovering recurrentsub-structures in proteins which can give important functional and structural insights.We propose a novel algorithm to find spatial motifs from proteins. The extracted motifsmatch a well-defined shape which is proposed based on a biological basis. We compare withsequential motifs and spatial motifs of recent related works. For all our contributions, theoutcomes of the experiments confirm the efficiency of our proposed methods to representboth protein sequences and protein 3D structures in classification tasks.Software programs developed during this research work are available on my home page http://fc.isima.fr/~saidi
De Voir, Christopher S. "Wavelet Based Feature Extraction and Dimension Reduction for the Classification of Human Cardiac Electrogram Depolarization Waveforms." PDXScholar, 2005. https://pdxscholar.library.pdx.edu/open_access_etds/1740.
Stromann, Oliver. "Feature Extraction and Feature Selection for Object-based Land Cover Classification : Optimisation of Support Vector Machines in a Cloud Computing Environment." Thesis, KTH, Geoinformatik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-238727.
Full textKartläggning av jordens yta och dess snabba förändringar med fjärranalyserad data är ett viktigt verktyg för att förstå effekterna av en alltmer urban världsbefolkning har på miljön. Den imponerande mängden jordobservationsdata som är fritt och öppet tillgänglig idag utnyttjas dock endast marginellt i klassifikationer. Att hantera ett set av många variabler är inte lätt i standardprogram för bildklassificering. Detta leder ofta till manuellt val av få, antagligen lovande variabler. I det här arbetet använde jag Google Earth Engines och Google Cloud Platforms beräkningsstyrkan för att skapa ett överdimensionerat set av variabler i vilket jag undersöker variablernas betydelse och analyserar påverkan av dimensionsreducering. Jag använde stödvektormaskiner (SVM) för objektbaserad klassificering av segmenterade satellitbilder – en vanlig metod inom fjärranalys. Ett stort antal variabler utvärderas för att hitta de viktigaste och mest relevanta för att diskriminera klasserna och vilka därigenom mest bidrar till klassifikationens exakthet. Genom detta slipper man det känsliga kunskapsbaserade men ibland godtyckliga urvalet av variabler.Två typer av dimensionsreduceringsmetoder tillämpades. Å ena sidan är det extraktionsmetoder, Linjär diskriminantanalys (LDA) och oberoende komponentanalys (ICA), som omvandlar de ursprungliga variablers rum till ett projicerat rum med färre dimensioner. Å andra sidan är det filterbaserade selektionsmetoder, chi-två-test, ömsesidig information och Fisher-kriterium, som rangordnar och filtrerar variablerna enligt deras förmåga att diskriminera klasserna. Jag utvärderade dessa metoder mot standard SVM när det gäller exakthet och beräkningsmässiga prestanda.I en fallstudie av en marktäckeskarta över Stockholm, baserat på Sentinel-1 och Sentinel-2-bilder, demonstrerade jag integrationen av Google Earth Engine och Google Cloud Platform för en optimerad övervakad marktäckesklassifikation. Jag använde dimensionsreduceringsmetoder som tillhandahålls i open source scikit-learn-biblioteket och visade hur de kan förbättra klassificeringsexaktheten och minska databelastningen. Samtidigt gav detta projekt en indikation på hur utnyttjandet av stora jordobservationsdata kan nås i en molntjänstmiljö.Resultaten visar att dimensionsreducering är effektiv och nödvändig. Men resultaten stärker också behovet av ett jämförbart riktmärke för objektbaserad klassificering av marktäcket för att fullständigt och självständigt bedöma kvaliteten på de härledda produkterna. Som ett första steg för att möta detta behov och för att uppmuntra till ytterligare forskning publicerade jag dataseten och ger tillgång till källkoderna i Google Earth Engine och Python-skript som jag utvecklade i denna avhandling.
Koc, Bengi. "Detection And Classification Of QRS Complexes From The ECG Recordings." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12610328/index.pdf.
…'s method that utilizes the morphological features of the ECG signal (Method III) and a neural network based QRS detection method (Method IV). Overall sensitivity and positive predictivity values above 99% are achieved with each method, which are compatible with the results reported in the literature. Method III has the best overall performance among the others, with a sensitivity of 99.93% and a positive predictivity of 100.00%. Based on the detected QRS complexes, some features were extracted and classification of some beat types was performed. In order to classify the detected beats, three methods were taken from the literature and implemented in this thesis: a Kth nearest neighbor rule based method (Method I), a neural network based method (Method II) and a rule based method (Method III). Overall results of Method I and Method II have sensitivity values above 92.96%. These findings are also compatible with those reported in the related literature. The classification made by the rule based approach, Method III, did not coincide well with the annotations provided in the MIT-BIH database. The best results were achieved by Method II, with an overall sensitivity value of 95.24%.
Eklund, Martin. "Comparing Feature Extraction Methods and Effects of Pre-Processing Methods for Multi-Label Classification of Textual Data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231438.
Full textDetta arbete ämnar att undersöka vilken effekt olika metoder för att extrahera särdrag ur textdata har när dessa används för att multi-tagga textdatan. Två metoder baserat på Bag of Words undersöks, närmare bestämt Count Vector-metoden samt TF-IDF-metoden. Även en metod som använder sig av word embessings undersöks, som kallas för GloVe-metoden. Multi-taggning av data kan vara användbart när datan, exempelvis musikaliska stycken eller nyhetsartiklar, kan tillhöra flera klasser eller områden. Även användandet av flera olika metoder för att förbehandla datan undersöks, såsom användandet utav N-gram, eliminering av icke-intressanta ord, samt transformering av ord med olika böjningsformer till gemensam stamform. Två olika klassificerare, en SVM samt en ANN, används för multi-taggningen genom använding utav en metod kallad Binary Relevance. Resultaten visar att valet av metod för extraktion av särdrag har en betydelsefull roll för den resulterande multi-taggningen, men att det inte finns en metod som ger bäst resultat genom alla tester. Istället indikerar resultaten att extraktionsmetoden baserad på GloVe presterar bäst när det gäller 'recall'-mätvärden, medan Bag of Words-metoderna presterar bäst gällade 'precision'-mätvärden.
Al-Qatawneh, Sokyna M. S. "3D Facial Feature Extraction and Recognition. An investigation of 3D face recognition: correction and normalisation of the facial data, extraction of facial features and classification using machine learning techniques." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4876.
Full textGao, Jiangning. "3D face recognition using multicomponent feature extraction from the nasal region and its environs." Thesis, University of Bath, 2016. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.707585.
Paradzinets, Aliaksandr V. "Variable resolution transform-based music feature extraction and their applications for music information retrieval." Ecully, Ecole centrale de Lyon, 2007. http://www.theses.fr/2007ECDL0047.
Full textAs a major product for entertainment, there is a huge amount of digital musical content produced, broadcasted, distributed and exchanged. There is a rising demand for content-based music search services. Similarity-based music navigation is becoming crucial for enabling easy access to the evergrowing amount of digital music available to professionals and amateurs alike. This work presents new musical content descriptors and similarity measures which allow automatic musical content organizing (search by similarity, automatic playlist generating) and labeling (automatic genre classification). The work considers the problem of content descriptor building from the musical point of view in complement of low-level spectral similarity measures. Several aspects of music analysis are considered such as music signal analysis where a novel variable resolution transform is presented and described. Higher level processing touches upon the musical knowledge extraction. The thesis presents algorithms of beat detection and multiple fundamental frequency estimation which are based on the variable resolution transform. The information issued from these algorithms is then used for building musical descriptors, represented in form of histograms (novel 2D beat histogram which enables a direct tempo estimation, note succession and note profile histograms etc. ). Two major music information retrieval applications, namely music genre classification and music retrieval by similarity, which use aforementioned musical features are described and evaluated in this thesis
Salmon, Brian Paxton. "Improved hyper-temporal feature extraction methods for land cover change detection in satellite time series." Thesis, University of Pretoria, 2012. http://hdl.handle.net/2263/28199.
Full textThesis (PhD(Eng))--University of Pretoria, 2012.
Electrical, Electronic and Computer Engineering
Gashayija, Jean Marie. "Image classification, storage and retrieval system for a 3 u cubesat." Thesis, Cape Peninsula University of Technology, 2014. http://hdl.handle.net/20.500.11838/1189.
Full textSmall satellites, such as CubeSats are mainly utilized for space and earth imaging missions. Imaging CubeSats are equipped with high resolution cameras for the capturing of digital images, as well as mass storage devices for storing the images. The captured images are transmitted to the ground station and subsequently stored in a database. The main problem with stored images in a large image database, identified by researchers and developers within the last number of years, is the retrieval of precise, clear images and overcoming the semantic gap. The semantic gap relates to the lack of correlation between the semantic categories the user requires and the low level features that a content-based image retrieval system offers. Clear images are needed to be usable for applications such as mapping, disaster monitoring and town planning. The main objective of this thesis is the design and development of an image classification, storage and retrieval system for a CubeSat. This system enables efficient classification, storing and retrieval of images that are received on a daily basis from an in-orbit CubeSat. In order to propose such a system, a specific research methodology was chosen and adopted. This entails extensive literature reviews on image classification techniques and image feature extraction techniques, to extract content embedded within an image, and include studies on image database systems, data mining techniques and image retrieval techniques. The literature study led to a requirement analysis followed by the analyses of software development models in order to design the system. The proposed design entails classifying images using content embedded in the image and also extracting image metadata such as date and time. Specific features extraction techniques are needed to extract required content and metadata. In order to achieve extraction of information embedded in the image, colour feature (colour histogram), shape feature (Mathematical Morphology) and texture feature (GLCM) techniques were used. Other major contributions of this project include a graphical user interface which enables users to search for similar images against those stored in the database. An automatic image extractor algorithm was also designed to classify images according to date and time, and colour, texture and shape features extractor techniques were proposed. These ensured that when a user wishes to query the database, the shape objects, colour quantities and contrast contained in an image are extracted and compared to those stored in the database. Implementation and test results concluded that the designed system is able to categorize images automatically and at the same time provide efficient and accurate results. The features extracted for each image depend on colour, shape and texture methods. Optimal values were also incorporated in order to reduce retrieval times. The mathematical morphological technique was used to compute shape objects using erosion and dilation operators, and the co-occurrence matrix was used to compute the texture feature of the image.
Avan, Selcuk Kazim. "Feature Set Evaluation For A Generic Missile Detection System." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12608130/index.pdf.
Full textPattern Recognition&rsquo
problem of an MDS a hard task. Problem can be defined in two main parts such as &lsquo
Feature Set Evaluation&rsquo
(FSE) and &lsquo
Classifier&rsquo
designs. The main goal of feature set evaluation is to employ a dimensionality reduction process for the input data set, while not disturbing the classification performance in the result. In this thesis study, FSE approaches are investigated for the pattern recognition problem of a generic MDS. First, synthetic data generation is carried out in software environment by employing generic models and assumptions in order to reflect the nature of a realistic problem environment. Then, data sets are evaluated in order to draw a baseline for further feature set evaluation approaches. Further, a theoretical background including the concepts of Class Separability, Feature Selection and Feature Extraction is given. Several widely used methods are assessed in terms of convenience for the problem by giving necessary justifications depending on the data set characteristics. Upon this background, software implementations are performed regarding several feature set evaluation techniques. Simulations are carried out in order to process dimensionality reduction. For the evaluation of the resulting data sets in terms of classification performance, software implementation of a classifier is realized. Resulting classification performances of the applied approaches are compared and evaluated.
Wang, Xuechuan. "Feature Extraction and Dimensionality Reduction in Pattern Recognition and Their Application in Speech Recognition." Griffith University. School of Microelectronic Engineering, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030619.162803.
Doo, Seung Ho. "Analysis, Modeling & Exploitation of Variability in Radar Images." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1461256996.
Khodjet-Kesba, Mahmoud. "Automatic target classification based on radar backscattered ultra wide band signals." Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22506/document.
Full textThe objective of this thesis is the Automatic Target Classification (ATC) based on radar backscattered Ultra WideBand (UWB) signals. The classification of the targets is realized by making comparison between the deduced target properties and the different target features which are already recorded in a database. First, the study of scattering theory allows us to understand the physical meaning of the extracted features and describe them mathematically. Second, feature extraction methods are applied in order to extract signatures of the targets. A good choice of features is important to distinguish different targets. Different methods of feature extraction are compared including wavelet transform and high resolution techniques such as: Prony’s method, Root-Multiple SIgnal Classification (Root-MUSIC), Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) and Matrix Pencil Method (MPM). Third, an efficient method of supervised classification is necessary to classify unknown targets by using the extracted features. Different methods of classification are compared: Mahalanobis Distance Classifier (MDC), Naïve Bayes (NB), k-Nearest Neighbors (k-NN) and Support Vector Machine (SVM). A useful classifier design technique should have a high rate of accuracy in the presence of noisy data coming from different aspect angles. The different algorithms are demonstrated using simulated backscattered data from canonical objects and complex target geometries modeled by perfectly conducting thin wires. A method of ATC based on the use of Matrix Pencil Method in Frequency Domain (MPMFD) for feature extraction and MDC for classification is proposed. Simulation results illustrate that features extracted with MPMFD present a plausible solution to automatic target classification. In addition, we prove that the proposed method has better ability to tolerate noise effects in radar target classification. Finally, the different algorithms are validated on experimental data and real targets
Ersoy, Mehmet Okan. "Application Of A Natural-resonance Based Feature Extraction Technique To Small-scale Aircraft Modeled By Conducting Wires For Electromagnetic Target Classification." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/3/12605522/index.pdf.
Full textKachouri, Rostom. "Classification multi-modèles des images dans les bases Hétérogènes." Phd thesis, Université d'Evry-Val d'Essonne, 2010. http://tel.archives-ouvertes.fr/tel-00526649.
Chiou, Tzone-Kaie, and 邱宗楷. "Using Fuzzy Feature Extraction Fingerprint Classification." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/92760623300716087147.
Full text元智大學
資訊工程學系
89
Fingerprint classification is a useful task for large-database fingerprint recognition systems. Accurate classification can speed up the process of fingerprint recognition. The fingerprint classification method proposed in this paper is based on human thinking and uses fuzzy theory. The key point of human thinking when classifying fingerprints is to find the fingerprint ridges, singular points (cores or deltas), ridge directions, and wrinkles or scars as global features. Firstly, in order to determine the fingerprint ridge direction, we transform the fingerprint image into a 50x50 direction pattern. Then we use a set of pre-defined fuzzy masks to find the singular points. Finally we use the relationships between the singular points to classify the fingerprint. The experimental results show that our method exhibits the best performance, with very low sensitivity and good classification accuracy.
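A hedged sketch of the first step described above, estimating a coarse 50x50 ridge-direction pattern from image gradients over blocks; the gradient-based estimator is a standard stand-in, and the fuzzy singular-point masks themselves are not reproduced.

```python
# Block-wise orientation field of a fingerprint-like image (illustrative).
import numpy as np

rng = np.random.default_rng(9)
img = rng.random((200, 200))            # stand-in for a grey-level fingerprint image
block = 4                               # 200 / 4 = 50 -> a 50x50 direction pattern

gy, gx = np.gradient(img)
orientation = np.zeros((img.shape[0] // block, img.shape[1] // block))
for i in range(orientation.shape[0]):
    for j in range(orientation.shape[1]):
        sy = slice(i * block, (i + 1) * block)
        sx = slice(j * block, (j + 1) * block)
        gxx = np.sum(gx[sy, sx] ** 2)
        gyy = np.sum(gy[sy, sx] ** 2)
        gxy = np.sum(gx[sy, sx] * gy[sy, sx])
        # Dominant ridge direction of the block (perpendicular to the gradient).
        orientation[i, j] = 0.5 * np.arctan2(2 * gxy, gxx - gyy) + np.pi / 2

print("direction pattern shape:", orientation.shape)   # (50, 50)
```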
Huang, Jiun-Jin, and 黃俊錦. "Effects of Feature Extraction on Classification Accuracy." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/86200872473956722918.
Full text國立臺灣科技大學
管理技術研究所
86
Classification is an important area in pattern recognition. Feature extraction for classification is equivalent to retaining informative features or eliminating redundant features. However, due to the nonlinearity of the decision boundary, which occurs in most cases, there exist no absolutely but only approximately redundant features. Eliminating approximately redundant features results in a decrease in the classification accuracy. Even for two classes with multivariate normal distributions, classification accuracy is difficult to analyze since the classification function involves quadratic terms. One approach to alleviating this difficulty is to simultaneously diagonalize the covariance matrices of the two classes, which can be achieved by applying orthonormal and whitening transformations to the measurement space. Once the covariance matrices are simultaneously diagonalized, the quadratic classification function is simplified and becomes much easier to analyze, and the classification accuracy can be studied in terms of the eigenvalues of the covariance matrices of the two classes. Thus, the decrease in the classification accuracy incurred from eliminating approximately redundant features can be quantified. We empirically study the classification accuracy by varying the distribution parameters.
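A small numerical sketch of the simultaneous diagonalisation described above: a whitening transform for the first covariance matrix followed by the eigenvectors of the transformed second matrix diagonalises both. The example matrices are arbitrary.

```python
# Simultaneous diagonalisation of two covariance matrices via whitening + eigendecomposition.
import numpy as np

Sigma1 = np.array([[2.0, 0.5], [0.5, 1.0]])
Sigma2 = np.array([[1.0, -0.3], [-0.3, 1.5]])

# Whitening transform for Sigma1: W @ Sigma1 @ W.T = I.
vals1, vecs1 = np.linalg.eigh(Sigma1)
W = np.diag(vals1 ** -0.5) @ vecs1.T

# Orthonormal transform that diagonalises the whitened Sigma2.
vals2, vecs2 = np.linalg.eigh(W @ Sigma2 @ W.T)
A = vecs2.T @ W   # combined transformation

print(np.round(A @ Sigma1 @ A.T, 6))   # identity matrix
print(np.round(A @ Sigma2 @ A.T, 6))   # diagonal matrix of eigenvalues
```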
Najdi, Shirin. "Feature Extraction and Selection in Automatic Sleep Stage Classification." Doctoral thesis, 2018. http://hdl.handle.net/10362/66271.
Lin, Chia-Hsing, and 林家興. "Discriminative Feature Extraction for Robust Audio Event Classification." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/4v582f.
Full text國立臺北科技大學
電腦與通訊研究所
98
Traditionally, audio event classification has relied heavily on MFCC (Mel-Frequency Cepstral Coefficient) features. However, MFCCs were originally designed for automatic speech recognition, and it is not clear whether they are still the best features for audio event classification. Besides, MFCCs are usually not very robust in noisy environments. Therefore, in this paper, several new feature extraction methods are proposed in the hope of achieving better performance and robustness than MFCCs in noisy conditions. The proposed feature extraction methods are mainly based on the concept of matched filters in the spectro-temporal domain. Several methods to design the set of matched filters are proposed, including handmade Gabor filters and three data-driven filters using PCA (Principal Component Analysis), LDA-based eigen-space analysis (Linear Discriminant Analysis) and MCE (Minimum Classification Error) training. The robustness of the proposed methods is evaluated on the RWCP (Real World Computing Partnership) database with artificially added noise. There are 105 different audio events in RWCP. The experimental settings are similar to the Aurora 2 multi-condition training task. Experimental results show that the lowest average error rate of 3.17% was achieved by the MCE method, which is superior to conventional MFCCs (4.13%). We thus confirm the superiority and robustness of the proposed audio feature extraction approaches.
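A hedged sketch of spectro-temporal filtering in the spirit of the matched-filter idea above: a small bank of 2D Gabor filters is convolved with a synthetic spectrogram and pooled into features. Filter parameters and the spectrogram are illustrative assumptions, not the trained filters of the thesis.

```python
# 2D Gabor filter bank applied to a (synthetic) spectrogram, pooled into features.
import numpy as np
from scipy.signal import convolve2d

def gabor_2d(size=15, freq=0.25, theta=0.0, sigma=3.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    envelope = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

rng = np.random.default_rng(6)
spectrogram = rng.normal(size=(40, 200))          # (frequency bands, time frames)

bank = [gabor_2d(theta=t) for t in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
features = np.array([
    np.abs(convolve2d(spectrogram, g, mode="valid")).mean() for g in bank
])
print("pooled spectro-temporal features:", np.round(features, 3))
```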
Liu, Yu-Hsin, and 劉羽欣. "Feature extraction and classification of product advertising review." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/q67wnd.
Full text國立臺北科技大學
資訊工程系研究所
101
The Web has become an important place for marketing. Many vendors offer bloggers their products or payment and ask them to write reviews of their product-usage experience in order to promote the products. However, it is hard to identify the truthfulness of these reviews. Using conventional content-based text classification methods, it is difficult to distinguish between real and fake reviews. In this paper, we propose a feature extraction method and classification model for advertising reviews. Based on features such as the ratio of positive opinion terms, the number of pictures, the ratio of praiseful words, and the publish date, we train an SVM classifier for advertising review identification. In our experiment, we collected 2150 reviews in the cosmetics domain. For classifying advertising reviews in the cosmetics domain against other articles, our method achieves an F-measure of 94%. This result is comparable to the conventional approach of document classification using TF-IDF, and our method is more efficient in training. For classifying advertising and ordinary non-advertising reviews in the cosmetics domain, our method also achieves good classification accuracy. This shows the feasibility of practical use in advertising review classification.
Huang, Bin. "Compression ECG signals with feature extraction, classification, and browsability." 2004. http://hdl.handle.net/1993/16253.
Full textLi, Ting-Yi, and 李庭誼. "Hybrid Feature Extraction for Object-based Hyperspectral Image Classification." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/46541455753533757818.
Full text國立臺灣大學
土木工程學研究所
99
The purpose of feature extraction is to reduce the dimensionality of hyperspectral images in order to solve the classification problems caused by limited training samples. In this study, a hybrid feature extraction method which integrates spectral and spatial features simultaneously is proposed. Firstly, the spectral-feature images are calculated along the spectral dimension of the hyperspectral images using wavelet decomposition, because wavelets have been proven effective in extracting spectral features. Secondly, ten different kinds of spatial features, which are calculated along the two spatial dimensions of the hyperspectral images, are computed on the wavelet spectral-feature images. Then a feature selection method based on the optimization of class separability is performed on the extracted spectral-spatial features to obtain hybrid features suitable for classification applications. In this study, object-based image analysis (OBIA) is used for hyperspectral image classification. The experiment results show that the overall accuracy for the classification of a real hyperspectral data set using our proposed approach could reach approximately 94%. Moreover, it is worth mentioning that the hybrid features and OBIA classification could significantly raise the overall accuracy for hyperspectral images with poor separability between classes, once the spectral features are extracted. The experiment results also show that the overall accuracy goes up by 20% when using our proposed approach on hyperspectral images with poor class separability.
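A minimal sketch of the spectral-feature step described above: each pixel's spectrum is decomposed with a 1D wavelet transform, the approximation coefficients are kept as reduced spectral features, and a simple separability-based selection follows. Band count, wavelet and classifier are assumptions.

```python
# Wavelet decomposition along the spectral dimension, then feature selection + SVM.
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n_pixels, n_bands = 300, 128
spectra = rng.normal(size=(n_pixels, n_bands))
labels = rng.integers(0, 3, size=n_pixels)
spectra[labels == 1, 40:60] += 0.8          # class-dependent absorption feature

# Keep the level-3 approximation coefficients as reduced spectral features.
features = np.array([pywt.wavedec(s, "db2", level=3)[0] for s in spectra])

clf = make_pipeline(SelectKBest(f_classif, k=10), SVC())
clf.fit(features[:200], labels[:200])
print("held-out accuracy:", clf.score(features[200:], labels[200:]))
```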