Academic literature on the topic 'Ensemble Based Classification'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Ensemble Based Classification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Ensemble Based Classification"

1

Gui, Wenli, Liping Jing, Liu Yang, and Jian Yu. "Unsupervised Cross-Language Classification with Stratified Sampling-Based Cluster Ensemble." International Journal of Machine Learning and Computing 5, no. 3 (June 2015): 165–71. http://dx.doi.org/10.7763/ijmlc.2015.v5.502.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jurek, Anna, Yaxin Bi, Shengli Wu, and Chris Nugent. "A survey of commonly used ensemble-based classification techniques." Knowledge Engineering Review 29, no. 5 (May 3, 2013): 551–81. http://dx.doi.org/10.1017/s0269888913000155.

Full text
Abstract:
The combination of multiple classifiers, commonly referred to as a classifier ensemble, has previously demonstrated the ability to improve classification accuracy in many application domains. As a result, this area has attracted a significant amount of research in recent years. The aim of this paper has therefore been to provide a state-of-the-art review of the most well-known ensemble techniques, with the main focus on bagging, boosting and stacking, and to trace the recent attempts which have been made to improve their performance. Within this paper, we present and compare an updated view on the different modifications of these techniques, which have specifically aimed to address some of the drawbacks of these methods, namely the low-diversity problem in bagging or the over-fitting problem in boosting. In addition, we provide a review of different ensemble selection methods based on both static and dynamic approaches. We present some new directions which have been adopted in the area of classifier ensembles from a range of recently published studies. In order to provide a deeper insight into the ensembles themselves, a range of existing theoretical studies have been reviewed in the paper.
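As a rough illustration of the three ensemble families surveyed above, the sketch below builds bagging, boosting and stacking ensembles with scikit-learn and compares them by cross-validation; the synthetic dataset and all hyperparameters are illustrative assumptions, not settings taken from the paper.

```python
# Illustrative sketch of the three ensemble families discussed in the survey
# (bagging, boosting, stacking), using scikit-learn on a toy dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, AdaBoostClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

ensembles = {
    # Bagging: resample the training set, train one tree per replicate.
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                                 random_state=0),
    # Boosting: reweight hard examples, fit weak learners sequentially.
    "boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
    # Stacking: a meta-learner combines the base learners' predictions.
    "stacking": StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier(max_depth=3)),
                    ("svm", SVC(probability=True))],
        final_estimator=LogisticRegression()),
}

for name, model in ensembles.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```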
APA, Harvard, Vancouver, ISO, and other styles
3

Kilimci, Zeynep H., and Selim Akyokus. "Deep Learning- and Word Embedding-Based Heterogeneous Classifier Ensembles for Text Classification." Complexity 2018 (October 9, 2018): 1–10. http://dx.doi.org/10.1155/2018/7130146.

Full text
Abstract:
The use of ensemble learning, deep learning, and effective document representation methods is currently among the most common trends for improving the overall accuracy of a text classification/categorization system. Ensemble learning is an approach to raise the overall accuracy of a classification system by utilizing multiple classifiers. Deep learning-based methods provide better results in many applications when compared with other conventional machine learning algorithms. Word embeddings enable the representation of words learned from a corpus as vectors, mapping words with similar meaning to similar representations. In this study, we use different document representations with the benefit of word embeddings and an ensemble of base classifiers for text classification. The ensemble of base classifiers includes traditional machine learning algorithms such as naïve Bayes, support vector machine, and random forest, as well as a deep learning-based convolutional network classifier. We analysed the classification accuracy of different document representations by employing an ensemble of classifiers on eight different datasets. Experimental results demonstrate that the usage of heterogeneous ensembles together with deep learning methods and word embeddings enhances the classification performance of texts.
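The following minimal sketch, written with scikit-learn, shows the general shape of such a heterogeneous ensemble (naïve Bayes, SVM and random forest combined by majority voting); TF-IDF stands in for the word-embedding-based document representations used in the study, and the tiny document set is a placeholder.

```python
# Minimal sketch of a heterogeneous classifier ensemble for text. TF-IDF is
# used here instead of word-embedding representations; data is a placeholder.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = ["good film", "bad plot", "great acting", "terrible movie",
        "wonderful story", "awful direction"]
labels = [1, 0, 1, 0, 1, 0]

ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[
            ("nb", MultinomialNB()),
            ("svm", LinearSVC()),
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ],
        voting="hard",  # majority vote across the heterogeneous members
    ),
)
ensemble.fit(docs, labels)
print(ensemble.predict(["great story", "bad acting"]))
```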
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Bo, Yu Kai Yao, Xiao Ping Wang, and Xiao Yun Chen. "PB-SVM Ensemble: A SVM Ensemble Algorithm Based on SVM." Applied Mechanics and Materials 701-702 (December 2014): 58–62. http://dx.doi.org/10.4028/www.scientific.net/amm.701-702.58.

Full text
Abstract:
As one of the most popular and effective classification algorithms, the Support Vector Machine (SVM) has attracted much attention in recent years. Classifier ensembles are a research direction in machine learning and statistics; they often give higher classification accuracy than a single classifier. This paper proposes a new ensemble algorithm based on SVM. The proposed classification algorithm, PB-SVM Ensemble, consists of several SVM classifiers produced by PCAenSVM and fifty classifiers trained using Bagging, whose results are combined to make the final decision on the testing set using majority voting. The performance of PB-SVM Ensemble is evaluated on six datasets drawn from the UCI repository, Statlog, and well-known prior research. The experimental results are compared with LibSVM, PCAenSVM, and Bagging. PB-SVM Ensemble outperforms the other three algorithms in classification accuracy while maintaining a higher confidence in its accuracy than Bagging.
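A hedged sketch of the bagging component described above is given below: fifty SVMs trained on bootstrap replicates and combined by majority voting at prediction time. The PCAenSVM members of the paper are not reproduced, and the dataset and kernel settings are assumptions for illustration only.

```python
# Sketch of the bagged-SVM component: many SVMs trained on bootstrap
# replicates and combined by majority voting. Only the Bagging part is shown.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # a UCI dataset, for illustration
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base_svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
bagged_svms = BaggingClassifier(base_svm, n_estimators=50, random_state=0)

bagged_svms.fit(X_tr, y_tr)                  # majority voting at predict time
print("test accuracy:", bagged_svms.score(X_te, y_te))
```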
APA, Harvard, Vancouver, ISO, and other styles
5

Alsawalqah, Hamad, Neveen Hijazi, Mohammed Eshtay, Hossam Faris, Ahmed Al Radaideh, Ibrahim Aljarah, and Yazan Alshamaileh. "Software Defect Prediction Using Heterogeneous Ensemble Classification Based on Segmented Patterns." Applied Sciences 10, no. 5 (March 3, 2020): 1745. http://dx.doi.org/10.3390/app10051745.

Full text
Abstract:
Software defect prediction is a promising approach aiming to improve software quality and testing efficiency by providing timely identification of defect-prone software modules before the actual testing process begins. These prediction results help software developers to effectively allocate their limited resources to the modules that are more prone to defects. In this paper, a hybrid heterogeneous ensemble approach is proposed for the purpose of software defect prediction. Heterogeneous ensembles consist of a set of classifiers built with different base learning methods, each of which has its own strengths and weaknesses. The main idea of the proposed approach is to develop expert and robust heterogeneous classification models. Two versions of the proposed approach are developed and evaluated experimentally: the first is based on simple classifiers, and the second on ensemble ones. For evaluation, 21 publicly available benchmark datasets are selected to conduct the experiments and benchmark the proposed approach. The evaluation results show the superiority of the ensemble version over other well-regarded basic and ensemble classifiers.
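The sketch below illustrates one plausible reading of the ensemble-based variant: heterogeneous ensemble members (random forest, extra trees, gradient boosting) combined through a stacking meta-learner. The synthetic, imbalanced dataset and all settings are placeholders, not the 21 defect-prediction benchmarks used in the paper.

```python
# Hedged sketch of a heterogeneous ensemble whose members are themselves
# ensembles of different families, combined by a meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.85, 0.15],
                           random_state=0)  # defect data is typically imbalanced

members = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("et", ExtraTreesClassifier(n_estimators=200, random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
]
model = StackingClassifier(estimators=members,
                           final_estimator=LogisticRegression(max_iter=1000))
print("CV AUC:", cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```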
APA, Harvard, Vancouver, ISO, and other styles
6

Ko, Albert Hung-Ren, Robert Sabourin, and Alceu de Souza Britto. "Compound Diversity Functions for Ensemble Selection." International Journal of Pattern Recognition and Artificial Intelligence 23, no. 04 (June 2009): 659–86. http://dx.doi.org/10.1142/s021800140900734x.

Full text
Abstract:
An effective way to improve a classification method's performance is to create ensembles of classifiers. Two elements are believed to be important in constructing an ensemble: (a) the performance of each individual classifier; and (b) diversity among the classifiers. Nevertheless, most works based on diversity suggest that there exists only weak correlation between classifier performance and ensemble accuracy. We propose compound diversity functions which combine the diversities with the performance of each individual classifier, and show that there is a strong correlation between the proposed functions and ensemble accuracy. Calculation of the correlations with different ensemble creation methods, different problems and different classification algorithms on 0.624 million ensembles suggests that most compound diversity functions are better than traditional diversity measures. The population-based Genetic Algorithm was used to search for the best ensembles on a handwritten numerals recognition problem and to evaluate 42.24 million ensembles. The statistical results indicate that compound diversity functions perform better than traditional diversity measures, and are helpful in selecting the best ensembles.
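The sketch below illustrates the general idea of a compound diversity function: a pairwise diversity measure (here, plain disagreement) is combined with the members' individual accuracies into a single score. The particular combination shown (mean accuracy times mean pairwise disagreement) is only an assumed example, not one of the functions proposed in the paper.

```python
# Illustrative compound score: member accuracy combined with pairwise diversity.
import numpy as np
from itertools import combinations

def disagreement(pred_a, pred_b):
    """Fraction of samples on which two members predict different labels."""
    return np.mean(pred_a != pred_b)

def compound_score(member_preds, y_true):
    preds = np.asarray(member_preds)            # shape: (n_members, n_samples)
    accs = (preds == y_true).mean(axis=1)       # individual accuracies
    pair_div = np.mean([disagreement(preds[i], preds[j])
                        for i, j in combinations(range(len(preds)), 2)])
    return accs.mean() * pair_div               # one possible compound form

y_true = np.array([0, 1, 1, 0, 1, 0])
members = [np.array([0, 1, 1, 0, 0, 0]),        # hypothetical member outputs
           np.array([0, 1, 0, 0, 1, 1]),
           np.array([1, 1, 1, 0, 1, 0])]
print("compound diversity score:", compound_score(members, y_true))
```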
APA, Harvard, Vancouver, ISO, and other styles
7

Alizadeh Moghaddam, S. H., M. Mokhtarzade, and S. A. Alizadeh Moghaddam. "A New Multiple Classifier System Based on a PSO Algorithm for the Classification of Hyperspectral Images." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W18 (October 18, 2019): 71–75. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w18-71-2019.

Full text
Abstract:
Multiple classifier systems (MCSs) have shown great performance for the classification of hyperspectral images. The requirements for a successful MCS are 1) diversity between ensembles and 2) good classification accuracy of each ensemble. In this paper, we develop a new MCS method based on a particle swarm optimization (PSO) algorithm. First, in each ensemble of the proposed method, called PSO-MCS, PSO identifies a subset of the spectral bands with a high J2 value, which is a measure of class separability. Then, an SVM classifier is used to classify the input image, applying the selected features in each ensemble. Finally, the classification results of all the ensembles are integrated using a majority voting strategy. Benefiting from the PSO algorithm, PSO-MCS selects appropriate features. In addition, because different features are selected in different runs of PSO, diversity between the ensembles is provided. Experimental results on an AVIRIS Indian Pines image show the superiority of the proposed method over its competitor, the random feature selection method.
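A simplified sketch of this MCS structure is given below: each member classifies with an SVM trained on its own spectral-band subset, and the members are fused by majority voting. Random band subsets stand in for the PSO/J2-driven selection of the paper, and the data shapes are illustrative.

```python
# Each member uses its own band subset; members are combined by majority vote.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pixels, n_bands, n_members, bands_per_member = 500, 100, 7, 20

X = rng.normal(size=(n_pixels, n_bands))          # placeholder spectra
y = rng.integers(0, 4, size=n_pixels)             # placeholder class labels

members = []
for _ in range(n_members):
    bands = rng.choice(n_bands, size=bands_per_member, replace=False)
    clf = SVC(kernel="rbf").fit(X[:, bands], y)   # one SVM per band subset
    members.append((bands, clf))

def predict(X_new):
    votes = np.stack([clf.predict(X_new[:, bands]) for bands, clf in members])
    # majority vote over the members, per pixel
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

print(predict(X[:5]))
```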
APA, Harvard, Vancouver, ISO, and other styles
8

Hu, Ruihan, Songbin Zhou, Yisen Liu, and Zhiri Tang. "Margin-Based Pareto Ensemble Pruning: An Ensemble Pruning Algorithm That Learns to Search Optimized Ensembles." Computational Intelligence and Neuroscience 2019 (June 3, 2019): 1–12. http://dx.doi.org/10.1155/2019/7560872.

Full text
Abstract:
The ensemble pruning system is an effective machine learning framework that combines several learners as experts to classify a test set. Generally, ensemble pruning systems aim to define a region of competence based on the validation set to select the most competent ensembles from the ensemble pool with respect to the test set. However, the size of the ensemble pool is usually fixed, and the performance of an ensemble pool heavily depends on the definition of the region of competence. In this paper, a dynamic pruning framework called margin-based Pareto ensemble pruning is proposed for ensemble pruning systems. The framework explores the optimized ensemble pool size during the overproduction stage and fine-tunes the experts during the pruning stage. The Pareto optimization algorithm is used to explore the size of the overproduction ensemble pool that can result in better performance. Considering the information entropy of the learners in the indecision region, the marginal criterion for each learner in the ensemble pool is calculated using margin criterion pruning, which prunes the experts with respect to the test set. The effectiveness of the proposed method for classification tasks is assessed using several datasets. The results show that margin-based Pareto ensemble pruning can achieve smaller ensemble sizes and better classification performance on most datasets when compared with state-of-the-art models.
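The sketch below shows a much-simplified, margin-driven pruning step: members are scored by their accuracy on low-margin ("indecision region") samples and only the top-scoring ones are kept. The Pareto optimization of the pool size described in the paper is not reproduced, and the margin definition used here is an assumption for binary labels.

```python
# Simplified margin-driven pruning of an ensemble pool (binary labels).
import numpy as np

def ensemble_margin(votes, y):
    """votes: (n_members, n_samples) predicted binary labels; margin in [-1, 1]."""
    correct = (votes == y).mean(axis=0)            # fraction of members voting correctly
    return 2 * correct - 1

def prune(votes, y, keep=3, margin_threshold=0.3):
    margins = ensemble_margin(votes, y)
    hard = np.abs(margins) < margin_threshold      # indecision-region samples
    if not hard.any():                             # fall back to all samples
        hard[:] = True
    scores = (votes[:, hard] == y[hard]).mean(axis=1)  # member accuracy there
    return np.argsort(scores)[::-1][:keep]         # indices of the kept members

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200)
votes = np.where(rng.random((10, 200)) < 0.7, y, 1 - y)   # 10 noisy members
print("kept members:", prune(votes, y))
```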
APA, Harvard, Vancouver, ISO, and other styles
9

Onan, Aytug. "Hybrid supervised clustering based ensemble scheme for text classification." Kybernetes 46, no. 2 (February 6, 2017): 330–48. http://dx.doi.org/10.1108/k-10-2016-0300.

Full text
Abstract:
Purpose: The immense quantity of available unstructured text documents serves as one of the largest sources of information. Text classification can be an essential task for many purposes in information retrieval, such as document organization, text filtering and sentiment analysis. Ensemble learning has been extensively studied to construct efficient text classification schemes with higher predictive performance and generalization ability. The purpose of this paper is to provide diversity among the classification algorithms of the ensemble, which is a key issue in ensemble design. Design/methodology/approach: An ensemble scheme based on hybrid supervised clustering is presented for text classification. In the presented scheme, supervised hybrid clustering, which is based on the cuckoo search algorithm and k-means, is introduced to partition the data samples of each class into clusters so that training subsets with higher diversity can be provided. Each classifier is trained on the diversified training subsets, and the predictions of individual classifiers are combined by the majority voting rule. The predictive performance of the proposed classifier ensemble is compared to conventional classification algorithms (such as naïve Bayes, logistic regression, support vector machines and the C4.5 algorithm) and ensemble learning methods (such as AdaBoost, bagging and random subspace) using 11 text benchmarks. Findings: The experimental results indicate that the presented classifier ensemble outperforms the conventional classification algorithms and ensemble learning methods for text classification. Originality/value: The presented ensemble scheme is the first to use supervised clustering to obtain a diverse ensemble for text classification.
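A rough sketch of the diversification idea is shown below: the samples of each class are clustered, diversified training subsets are drawn from the clusters, one classifier is trained per subset, and predictions are combined by majority voting. Plain k-means and Gaussian naïve Bayes stand in for the paper's cuckoo-search/k-means hybrid and its classifier pool.

```python
# Per-class clustering is used to build diverse training subsets for members.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
n_clusters, n_members = 3, 5
rng = np.random.default_rng(0)

# Cluster each class separately so subsets respect the class structure.
cluster_ids = np.empty_like(y)
for c in np.unique(y):
    idx = np.where(y == c)[0]
    cluster_ids[idx] = KMeans(n_clusters=n_clusters, n_init=10,
                              random_state=0).fit_predict(X[idx])

members = []
for _ in range(n_members):
    subset = []
    for c in np.unique(y):
        for k in range(n_clusters):
            pool = np.where((y == c) & (cluster_ids == k))[0]
            subset.extend(rng.choice(pool, size=max(1, len(pool) // 2),
                                     replace=False))
    subset = np.array(subset)
    members.append(GaussianNB().fit(X[subset], y[subset]))

votes = np.stack([m.predict(X) for m in members])
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("training-set accuracy of the ensemble:", (majority == y).mean())
```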
APA, Harvard, Vancouver, ISO, and other styles
10

Ku Abd. Rahim, Ku, I. Elamvazuthi, Lila Izhar, and Genci Capi. "Classification of Human Daily Activities Using Ensemble Methods Based on Smartphone Inertial Sensors." Sensors 18, no. 12 (November 26, 2018): 4132. http://dx.doi.org/10.3390/s18124132.

Full text
Abstract:
Increasing interest in analyzing human gait using various wearable sensors, known as Human Activity Recognition (HAR), can be found in recent research. Sensors such as accelerometers and gyroscopes are widely used in HAR. Recently, high interest has been shown in the use of wearable sensors in numerous applications such as rehabilitation, computer games, animation, filmmaking, and biomechanics. In this paper, the classification of human daily activities using ensemble methods based on data acquired from smartphone inertial sensors, involving about 30 subjects and six different activities, is discussed. The six daily activities are walking, walking upstairs, walking downstairs, sitting, standing and lying. The approach involves three stages of activity recognition: data signal processing (filtering and segmentation), feature extraction and classification. The five ensemble classifiers utilized are Bagging, AdaBoost, Rotation Forest, Ensembles of Nested Dichotomies (END) and Random Subspace. These ensemble classifiers employed Support Vector Machine (SVM) and Random Forest (RF) as their base learners. The data classification is evaluated with the holdout and 10-fold cross-validation evaluation methods. The performance for each human daily activity was measured in terms of precision, recall, F-measure, and the receiver operating characteristic (ROC) curve. In addition, performance is also measured by comparing the overall classification accuracy of the different ensemble classifiers and base learners. It was observed that, overall, SVM produced a better accuracy rate of 99.22% compared to RF with 97.91%, based on the Random Subspace ensemble classifier.
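The sketch below mirrors the three-stage pipeline (segmentation, feature extraction, ensemble classification) on synthetic accelerometer-like windows; a random-subspace ensemble of SVMs stands in for the five ensemble methods compared in the paper, and all signal shapes and features are illustrative assumptions.

```python
# Windowed signals -> simple statistical features -> random-subspace SVM ensemble.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_windows, window_len = 300, 128                    # 128-sample segments
signals = rng.normal(size=(n_windows, window_len))  # placeholder acceleration magnitude
labels = rng.integers(0, 6, size=n_windows)         # six daily activities

def extract_features(w):
    return np.array([w.mean(), w.std(), w.min(), w.max(),
                     np.abs(np.diff(w)).mean(), np.percentile(w, 75)])

X = np.array([extract_features(w) for w in signals])

# Random subspace: each SVM sees a random subset of the features.
ensemble = BaggingClassifier(SVC(kernel="rbf"), n_estimators=25,
                             max_features=0.5, bootstrap=False,
                             random_state=0)
print("10-fold CV accuracy:", cross_val_score(ensemble, X, labels, cv=10).mean())
```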
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Ensemble Based Classification"

1

Wandekoken, E. D. "Support Vector Machine Ensemble Based on Feature and Hyperparameter Variation." Universidade Federal do Espírito Santo, 2011. http://repositorio.ufes.br/handle/10/4234.

Full text
Abstract:
Support vector machine (SVM) classifiers are currently considered one of the most powerful techniques for solving two-class classification problems. To increase the performance achieved by individual SVM classifiers, a well-established approach is to use an SVM ensemble, i.e., a set of SVM classifiers that are simultaneously individually accurate and collectively divergent in their decisions. This work proposes an approach for creating SVM ensembles based on a three-stage process. First, complementary runs of a genetic-algorithm-based search (GEFS) are used to globally explore the feature space and define a set of feature subsets. Next, for each of these feature subsets, an SVM with optimized parameters is built. Finally, a local search is employed to select an optimized subset of these SVMs, thereby forming the SVM ensemble that is ultimately produced. The experiments were carried out in the context of fault detection in industrial machines, using 2000 examples of vibration signals from motor pumps installed on oil platforms. The experiments show that the proposed method for creating SVM ensembles outperformed other well-established classification approaches.
APA, Harvard, Vancouver, ISO, and other styles
2

Al-Enezi, Jamal. "Artificial immune systems based committee machine for classification application." Thesis, Brunel University, 2012. http://bura.brunel.ac.uk/handle/2438/6826.

Full text
Abstract:
A new adaptive learning Artificial Immune System (AIS) based committee machine is developed in this thesis. The new proposed approach efficiently tackles the general problem of clustering high-dimensional data. In addition, it helps in deriving useful decisions and results related to other application domains such as classification and prediction. The Artificial Immune System (AIS) is a branch of the computational intelligence field inspired by the biological immune system, and it has gained increasing interest among researchers in the development of immune-based models and techniques to solve diverse complex computational or engineering problems. This work presents some applications of AIS techniques to health problems, and a thorough survey of existing AIS models and algorithms. The main focus of this research is devoted to building an ensemble model integrating different AIS techniques (i.e. Artificial Immune Networks, Clonal Selection, and Negative Selection) for classification applications in order to achieve better classification results. A new AIS-based ensemble architecture with adaptive learning features is proposed by integrating different learning and adaptation techniques to overcome individual limitations and to achieve synergetic effects through the combination of these techniques. Various techniques related to the design and enhancement of the new adaptive learning architecture are studied, including a neuro-fuzzy based detector and an optimizer using the particle swarm optimization method to achieve enhanced classification performance. An evaluation study was conducted to show the performance of the new proposed adaptive learning ensemble and to compare it to alternative combining techniques. Several experiments are presented using different medical datasets for the classification problem, and the findings and outcomes are discussed. The new adaptive learning architecture improves the accuracy of the ensemble. Moreover, there is an improvement over the existing aggregation techniques. The outcomes, assumptions and limitations of the proposed methods, with their implications for further research in this area, draw this research to its conclusion.
APA, Harvard, Vancouver, ISO, and other styles
3

Börthas, Lovisa, and Jessica Krange Sjölander. "Machine Learning Based Prediction and Classification for Uplift Modeling." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-266379.

Full text
Abstract:
The desire to model the true gain from targeting an individual for marketing purposes has led to the common use of uplift modeling. Uplift modeling requires the existence of a treatment group as well as a control group, and the objective hence becomes estimating the difference between the success probabilities in the two groups. Efficient methods for estimating the probabilities in uplift models are statistical machine learning methods. In this project, the uplift modeling approaches Subtraction of Two Models, Modeling Uplift Directly and the Class Variable Transformation are investigated. The statistical machine learning methods applied are Random Forests and Neural Networks, along with the standard method Logistic Regression. The data are collected from a well-established retail company, and the purpose of the project is thus to investigate which uplift modeling approach and statistical machine learning method yield the best performance given the data used in this project. The variable selection step was shown to be a crucial component in the modeling process, as was the amount of control data in each data set. For the uplift modeling to be successful, the method of choice should be either Modeling Uplift Directly using Random Forests, or the Class Variable Transformation using Logistic Regression. Neural network-based approaches are sensitive to uneven class distributions and are hence not able to obtain stable models given the data used in this project. Furthermore, the Subtraction of Two Models did not perform well, due to the fact that each model tended to focus too much on modeling the class in both data sets separately instead of modeling the difference between the class probabilities. The conclusion is hence to use an approach that models the uplift directly, and also to use a large amount of control data in each data set.
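As a concrete illustration of the Class Variable Transformation mentioned above, the sketch below relabels each sample as z = 1 when treatment and response agree, fits an ordinary logistic regression on z, and reads the uplift as 2*P(z=1|x) - 1; this identity assumes a roughly 50/50 treatment split, and the synthetic data are purely illustrative.

```python
# Class Variable Transformation for uplift modeling on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 5))
treated = rng.integers(0, 2, size=n)               # 50/50 treatment assignment
# Response probability: baseline effect of x0, plus a treatment effect via x1.
p = 1 / (1 + np.exp(-(0.5 * X[:, 0] + 0.8 * X[:, 1] * treated - 1.0)))
response = rng.random(n) < p

z = (treated == response.astype(int)).astype(int)  # the transformed class
model = LogisticRegression(max_iter=1000).fit(X, z)
uplift = 2 * model.predict_proba(X)[:, 1] - 1      # estimated individual uplift
print("mean estimated uplift:", uplift.mean().round(3))
```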
APA, Harvard, Vancouver, ISO, and other styles
4

Feng, Wei. "Investigation of training data issues in ensemble classification based on margin concept : application to land cover mapping." Thesis, Bordeaux 3, 2017. http://www.theses.fr/2017BOR30016/document.

Full text
Abstract:
Classification has been widely studied in machine learning. Ensemble methods, which build a classification model by integrating multiple component learners, achieve higher performances than a single classifier. The classification accuracy of an ensemble is directly influenced by the quality of the training data used. However, real-world data often suffers from class noise and class imbalance problems. Ensemble margin is a key concept in ensemble learning. It has been applied to both the theoretical analysis and the design of machine learning algorithms. Several studies have shown that the generalization performance of an ensemble classifier is related to the distribution of its margins on the training examples. This work focuses on exploiting the margin concept to improve the quality of the training set and therefore to increase the classification accuracy of noise sensitive classifiers, and to design effective ensemble classifiers that can handle imbalanced datasets. A novel ensemble margin definition is proposed. It is an unsupervised version of a popular ensemble margin. Indeed, it does not involve the class labels. Mislabeled training data is a challenge to face in order to build a robust classifier whether it is an ensemble or not. To handle the mislabeling problem, we propose an ensemble margin-based class noise identification and elimination method based on an existing margin-based class noise ordering. This method can achieve a high mislabeled instance detection rate while keeping the false detection rate as low as possible. It relies on the margin values of misclassified data, considering four different ensemble margins, including the novel proposed margin. This method is extended to tackle the class noise correction which is a more challenging issue. The instances with low margins are more important than safe samples, which have high margins, for building a reliable classifier. A novel bagging algorithm based on a data importance evaluation function relying again on the ensemble margin is proposed to deal with the class imbalance problem. In our algorithm, the emphasis is placed on the lowest margin samples. This method is evaluated using again four different ensemble margins in addressing the imbalance problem especially on multi-class imbalanced data. In remote sensing, where training data are typically ground-based, mislabeled training data is inevitable. Imbalanced training data is another problem frequently encountered in remote sensing. Both proposed ensemble methods involving the best margin definition for handling these two major training data issues are applied to the mapping of land covers
APA, Harvard, Vancouver, ISO, and other styles
5

Alshahrani, Saeed Sultan. "Detection, classification and control of power quality disturbances based on complementary ensemble empirical mode decomposition and artificial neural networks." Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/15872.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Xin. "Gaze based weakly supervised localization for image classification : application to visual recognition in a food dataset." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066577/document.

Full text
Abstract:
In this dissertation, we discuss how to use the human gaze data to improve the performance of the weak supervised learning model in image classification. The background of this topic is in the era of rapidly growing information technology. As a consequence, the data to analyze is also growing dramatically. Since the amount of data that can be annotated by the human cannot keep up with the amount of data itself, current well-developed supervised learning approaches may confront bottlenecks in the future. In this context, the use of weak annotations for high-performance learning methods is worthy of study. Specifically, we try to solve the problem from two aspects: One is to propose a more time-saving annotation, human eye-tracking gaze, as an alternative annotation with respect to the traditional time-consuming annotation, e.g. bounding box. The other is to integrate gaze annotation into a weakly supervised learning scheme for image classification. This scheme benefits from the gaze annotation for inferring the regions containing the target object. A useful property of our model is that it only exploits gaze for training, while the test phase is gaze free. This property further reduces the demand of annotations. The two isolated aspects are connected together in our models, which further achieve competitive experimental results
APA, Harvard, Vancouver, ISO, and other styles
7

Xia, Junshi. "Multiple classifier systems for the classification of hyperspectral data." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT047/document.

Full text
Abstract:
In this thesis, we propose several new techniques for the classification of hyperspectral remote sensing images based on multiple classifier systems (MCS). Our proposed framework introduces significant innovations with regard to previous approaches in the same field, many of which are based mainly on an individual algorithm. First, we propose to use Rotation Forests with several linear feature extraction techniques and compare them with traditional ensemble approaches, such as Bagging, Boosting, Random Subspace and Random Forest. Second, the integration of support vector machines (SVM) with the rotation subspace framework for context classification is investigated. SVM and rotation subspace are two powerful tools for high-dimensional data classification; therefore, combining them can further improve classification performance. Third, we extend the work on Rotation Forests by incorporating a local feature extraction technique and spatial contextual information with a Markov Random Field (MRF) to design robust spatial-spectral methods. Finally, we present a new general framework, the random subspace ensemble, to train a series of effective classifiers, including decision trees and the extreme learning machine (ELM), with extended multi-attribute profiles (EMAPs) for classifying hyperspectral data. Six random subspace ensemble methods, namely Random Subspace with decision trees (RSDT), Random Forest (RF), Rotation Forest (RoF), Rotation Random Forest (RoRF), Random Subspace with ELM (RSELM) and Rotation Subspace with ELM (RoELM), are constructed from the multiple base learners. The effectiveness of the proposed techniques is illustrated by comparison with state-of-the-art methods using real hyperspectral data sets in different contexts.
APA, Harvard, Vancouver, ISO, and other styles
8

Al-Mter, Yusur. "Automatic Prediction of Human Age based on Heart Rate Variability Analysis using Feature-Based Methods." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166139.

Full text
Abstract:
Heart rate variability (HRV) is the time variation between adjacent heartbeats. This variation is regulated by the autonomic nervous system (ANS) and its two branches, the sympathetic and parasympathetic nervous systems. HRV is considered an essential clinical tool for estimating the imbalance between the two branches, and hence an indicator of age and cardiac-related events. This thesis focuses on ECG recordings during nocturnal rest to estimate the influence of HRV in predicting the age decade of healthy individuals. Time and frequency domains, as well as non-linear methods, are explored to extract the HRV features. Three feature-based methods (support vector machine (SVM), random forest, and extreme gradient boosting (XGBoost)) were employed, and the overall test accuracy achieved in capturing the actual class was relatively low (lower than 30%). The SVM classifier had the lowest performance, while random forests and XGBoost performed slightly better. Although the difference is negligible, the random forest had the highest test accuracy, approximately 29%, using a subset of ten optimal HRV features. Furthermore, to validate the findings, the original dataset was shuffled and used as a test set, and the performance was compared to other related research outputs.
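The sketch below illustrates the feature-based approach: standard time-domain HRV features (SDNN, RMSSD, pNN50, mean heart rate) are computed from RR-interval series and fed to a random forest; the synthetic RR series and the age-decade labels are stand-ins, not the thesis data.

```python
# Time-domain HRV features from RR intervals, classified by a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def hrv_features(rr_ms):
    diff = np.diff(rr_ms)
    return np.array([
        rr_ms.mean(),                          # mean RR interval (ms)
        rr_ms.std(ddof=1),                     # SDNN
        np.sqrt(np.mean(diff ** 2)),           # RMSSD
        np.mean(np.abs(diff) > 50) * 100,      # pNN50 (%)
        60000.0 / rr_ms.mean(),                # mean heart rate (bpm)
    ])

# Synthetic subjects: older "subjects" get slightly lower variability.
X, y = [], []
for subject in range(200):
    decade = rng.integers(0, 6)                # age-decade label, 0..5
    rr = rng.normal(900, 60 - 5 * decade, size=300)
    X.append(hrv_features(rr))
    y.append(decade)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```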
APA, Harvard, Vancouver, ISO, and other styles
9

Thames, John Lane. "Advancing cyber security with a semantic path merger packet classification algorithm." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45872.

Full text
Abstract:
This dissertation investigates and introduces novel algorithms, theories, and supporting frameworks to significantly improve the growing problem of Internet security. A distributed firewall and active response architecture is introduced that enables any device within a cyber environment to participate in the active discovery and response of cyber attacks. A theory of semantic association systems is developed for the general problem of knowledge discovery in data. The theory of semantic association systems forms the basis of a novel semantic path merger packet classification algorithm. The theoretical aspects of the semantic path merger packet classification algorithm are investigated, and the algorithm's hardware-based implementation is evaluated along with comparative analysis versus content addressable memory. Experimental results show that the hardware implementation of the semantic path merger algorithm significantly outperforms content addressable memory in terms of energy consumption and operational timing.
APA, Harvard, Vancouver, ISO, and other styles
10

Ekelund, Måns. "Uncertainty Estimation for Deep Learning-based LPI Radar Classification : A Comparative Study of Bayesian Neural Networks and Deep Ensembles." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301653.

Full text
Abstract:
Deep Neural Networks (DNNs) have shown promising results in classifying known low-probability-of-intercept (LPI) radar signals in noisy environments. However, regular DNNs produce low-quality confidence and uncertainty estimates, making them unreliable, which inhibits deployment in real-world settings. Hence, the need for robust uncertainty estimation methods has grown, and two categories have emerged: Bayesian approximation and ensemble learning. As autonomous LPI radar classification is deployed in safety-critical environments, this study compares Bayesian Neural Networks (BNNs) and Deep Ensembles (DEs) as uncertainty estimation methods. We synthetically generate a training and test data set, as well as a shifted data set where subtle changes are made to the signal parameters. The methods are evaluated on predictive performance, relevant confidence and uncertainty estimation metrics, and method-related metrics such as model size, training time, and inference time. Our results show that our DE achieves slightly higher predictive performance than the BNN on both in-distribution and shifted data, with an accuracy of 74% and 32%, respectively. Further, we show that both methods exhibit more cautiousness in their predictions compared to a regular DNN for in-distribution data, while the confidence quality significantly degrades on shifted data. Uncertainty in predictions is evaluated as predictive entropy, and we show that both methods exhibit higher uncertainty on shifted data. We also show that the signal-to-noise ratio affects uncertainty, compared to a regular DNN. However, neither of the methods exhibits uncertainty when making predictions on unseen signal modulation patterns, which is not a desirable behavior. Further, we conclude that the amount of available resources could influence the choice of method, since DEs are resource-heavy, requiring more memory than a regular DNN or BNN. On the other hand, the BNN requires a far longer training time.
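A minimal sketch of the deep-ensemble side of this comparison is given below: several independently initialized networks are trained, their predicted probabilities are averaged, and the predictive entropy of the averaged distribution is reported as the uncertainty. Small scikit-learn MLPs on synthetic data stand in for the LPI-radar classifiers of the thesis.

```python
# Deep-ensemble style uncertainty: average member probabilities, report entropy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=30, n_classes=4,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = [MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                          random_state=seed).fit(X_tr, y_tr)
            for seed in range(5)]                       # 5 ensemble members

probs = np.mean([m.predict_proba(X_te) for m in ensemble], axis=0)
pred = probs.argmax(axis=1)
entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)  # predictive entropy

print("accuracy:", (pred == y_te).mean().round(3))
print("mean predictive entropy:", entropy.mean().round(3))
```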
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Ensemble Based Classification"

1

Zirnbauer, Martin R. Symmetry classes. Edited by Gernot Akemann, Jinho Baik, and Philippe Di Francesco. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198744191.013.3.

Full text
Abstract:
This article examines the notion of ‘symmetry class’, which expresses the relevance of symmetries as an organizational principle. In his 1962 paper The threefold way: algebraic structure of symmetry groups and ensembles in quantum mechanics, Dyson introduced the prime classification of random matrix ensembles based on a quantum mechanical setting with symmetries. He described three types of independent irreducible ensembles: complex Hermitian, real symmetric, and quaternion self-dual. This article first reviews Dyson’s threefold way from a modern perspective before considering a minimal extension of his setting to incorporate the physics of chiral Dirac fermions and disordered superconductors. In this minimally extended setting, Hilbert space is replaced by Fock space equipped with the anti-unitary operation of particle-hole conjugation, and symmetry classes are in one-to-one correspondence with the large families of Riemannian symmetric spaces.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Ensemble Based Classification"

1

Herrera, Francisco, Francisco Charte, Antonio J. Rivera, and María J. del Jesus. "Ensemble-Based Classifiers." In Multilabel Classification, 101–13. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41111-8_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Raimundo, Marcos M., and Fernando J. Von Zuben. "Many-Objective Ensemble-Based Multilabel Classification." In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 365–73. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75193-1_44.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bock, K. W. De, K. Coussement, and D. Cielen. "An Overview of Multiple Classifier Systems Based on Generalized Additive Models." In Ensemble Classification Methods with Applications in R, 175–86. Chichester, UK: John Wiley & Sons, Ltd, 2018. http://dx.doi.org/10.1002/9781119421566.ch11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Schaefer, Gerald, Bartosz Krawczyk, M. Emre Celebi, Hitoshi Iyatomi, and Aboul Ella Hassanien. "Melanoma Classification Based on Ensemble Classification of Dermoscopy Image Features." In Communications in Computer and Information Science, 291–98. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13461-1_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Deuse, Jochen, Mario Wiegand, and Kirsten Weisner. "Continuous Process Monitoring Through Ensemble-Based Anomaly Detection." In Studies in Classification, Data Analysis, and Knowledge Organization, 289–301. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25147-5_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sultana, Naznin, and Mohammad Mohaiminul Islam. "Meta Classifier-Based Ensemble Learning For Sentiment Classification." In Proceedings of International Joint Conference on Computational Intelligence, 73–84. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-7564-4_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Guo, Hui, Shu-guang Huang, Min Zhang, Zu-lie Pan, Fan Shi, Cheng Huang, and Beibei Li. "Classification of Malware Variant Based on Ensemble Learning." In Machine Learning for Cyber Security, 125–39. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62223-7_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Petković, Matej, Sašo Džeroski, and Dragi Kocev. "Ensemble-Based Feature Ranking for Semi-supervised Classification." In Discovery Science, 290–305. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-33778-0_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Anisetty, Manikanta Durga Srinivas, Gagan K Shetty, Srinidhi Hiriyannaiah, Siddesh Gaddadevara Matt, K. G. Srinivasa, and Anita Kanavalli. "Content-Based Music Classification Using Ensemble of Classifiers." In Intelligent Human Computer Interaction, 285–92. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-04021-5_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Yiyang, Lei Su, Jun Chen, and Liwei Yuan. "Semi-supervised Question Classification Based on Ensemble Learning." In Advances in Swarm and Computational Intelligence, 341–48. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20472-7_37.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Ensemble Based Classification"

1

Zhiwen Yu, Xing Wang, and Hau-San Wong. "Ensemble based 3D human motion classification." In 2008 IEEE International Joint Conference on Neural Networks (IJCNN 2008 - Hong Kong). IEEE, 2008. http://dx.doi.org/10.1109/ijcnn.2008.4633839.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Huang, Jonathan, Hong Lu, Paulo Lopez Meyer, Hector Cordourier, and Juan Del Hoyo Ontiveros. "Acoustic Scene Classification Using Deep Learning-based Ensemble Averaging." In 4th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE 2019). New York University, 2019. http://dx.doi.org/10.33682/8rd2-g787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Krishna Veni, C. V., and T. Sobha Rani. "Ensemble based classification using small training sets: A novel approach." In 2014 IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL). IEEE, 2014. http://dx.doi.org/10.1109/ciel.2014.7015738.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Xiao, Qi, and Zhengdao Wang. "Ensemble classification based on Random linear base classifiers." In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017. http://dx.doi.org/10.1109/icassp.2017.7952648.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Odinokikh, Nikita, and Vladimir Berikov. "Cluster Ensemble Kernel for Kernel-based Classification." In 2019 International Multi-Conference on Engineering, Computer and Information Sciences (SIBIRCON). IEEE, 2019. http://dx.doi.org/10.1109/sibircon48586.2019.8958184.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jia, Keliang, Kang Chen, Xiaozhong Fan, and Yu Zhang. "Chinese Question Classification Based on Ensemble Learning." In Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007). IEEE, 2007. http://dx.doi.org/10.1109/snpd.2007.183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jin, Yuxin, Ze Yang, Ying He, Xianyu Bao, and Gongqing Wu. "Ensemble Classification Method Based on Truth Discovery." In 2019 IEEE International Conference on Big Knowledge (ICBK). IEEE, 2019. http://dx.doi.org/10.1109/icbk.2019.00024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Silva, Vitor F., Roberto M. Barbosa, Pedro M. Vieira, and Carlos S. Lima. "Ensemble learning based classification for BCI applications." In 2017 IEEE 5th Portuguese Meeting on Bioengineering (ENBENG). IEEE, 2017. http://dx.doi.org/10.1109/enbeng.2017.7889483.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ben Ayed, Abdelkarim, Marwa Benhammouda, Mohamed Ben Halima, and Adel M. Alimi. "Random forest ensemble classification based fuzzy logic." In Ninth International Conference on Machine Vision, edited by Antanas Verikas, Petia Radeva, Dmitry P. Nikolaev, Wei Zhang, and Jianhong Zhou. SPIE, 2017. http://dx.doi.org/10.1117/12.2268564.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Deeksha, Deeksha, Rajesh Bhatia, Shikhar Bhardwaj, Manish Kumar, Kashish Bhatia, and Shabeg Singh Gill. "Stacking Ensemble-based Automatic Web Page Classification." In 2021 Fourth International Conference on Computational Intelligence and Communication Technologies (CCICT). IEEE, 2021. http://dx.doi.org/10.1109/ccict53244.2021.00042.

Full text
APA, Harvard, Vancouver, ISO, and other styles