Academic literature on the topic 'Ensemble learning'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Ensemble learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Dissertations / Theses on the topic "Ensemble learning"

1

Abbasian, Houman. "Inner Ensembles: Using Ensemble Methods in Learning Step." Thesis, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31127.

Abstract:
A pivotal moment in machine learning research was the creation of an important new research area, known as Ensemble Learning. In this work, we argue that ensembles are a very general concept, and though they have been widely used, they can be applied in more situations than they have been to date. Rather than using them only to combine the output of an algorithm, we can apply them to decisions made inside the algorithm itself, during the learning step. We call this approach Inner Ensembles. The motivation to develop Inner Ensembles was the opportunity to produce models with advantages similar to those of regular ensembles, such as accuracy and stability, plus additional advantages such as comprehensibility, simplicity, rapid classification and a small memory footprint. The main contribution of this work is to demonstrate how broadly this idea can be applied, and to highlight its potential impact on all types of algorithms. To support our claim, we first provide a general guideline for applying Inner Ensembles to different algorithms. Then, using this framework, we apply them to two categories of learning methods: supervised and unsupervised. For the former we chose Bayesian networks, and for the latter K-Means clustering. Our results show that 1) the overall performance of Inner Ensembles is significantly better than that of the original methods, and 2) Inner Ensembles provide similar performance improvements as regular ensembles.
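The inner-ensemble idea is concrete enough to sketch. Below is a minimal, hypothetical Python illustration of what an inner ensemble might look like for K-Means: instead of combining the outputs of several finished models, the centroid update inside each iteration is itself averaged over bootstrap resamples. This is our own toy reading of the abstract, not code from the thesis.

```python
import numpy as np

def inner_ensemble_kmeans(X, k, n_members=5, n_iter=50, seed=0):
    """Toy K-Means where the centroid update (a decision made inside the
    algorithm) is an ensemble average over bootstrap resamples."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Standard assignment step: nearest centroid per point.
        labels = ((X[:, None] - centroids[None]) ** 2).sum(-1).argmin(1)
        new_centroids = centroids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size == 0:
                continue
            # Inner ensemble: average the mean-estimate over bootstrap
            # resamples instead of taking one plain mean of the cluster.
            estimates = [X[rng.choice(members, size=members.size)].mean(0)
                         for _ in range(n_members)]
            new_centroids[j] = np.mean(estimates, axis=0)
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels
```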
2

Henley, Jennie. "The learning ensemble : musical learning through participation." Thesis, Birmingham City University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.527426.

Abstract:
This thesis is an examination of the learning processes employed by adults who learn to play an instrument within an ensemble. The aims of the research were threefold. Firstly, to discover how a person learns in a group and what the role of the socio-cultural environment is in learning. Secondly, to investigate the role that identity plays in learning and whether the students regard themselves as musicians. Finally, to explore the role of the performance in the musical learning process. The research has been carried out using case-study research and a four-year autoethnographic study. The theoretical framework is provided by literature from the fields of cultural psychology, music psychology and adult learning. Activity Theory has been used as the main analytical tool. The discussion firstly considers the learning process in order to construct an activity system of musical learning within an ensemble. Then, using this activity system, the motivational factors inherent in the learning ensemble and the role of identity in generating motivation are considered. Through analysing motivation and identity in relation to the activity system, I have demonstrated how the activity system can be developed into a three-dimensional system by incorporating identity as a constituent, thus stabilising the activity system. A three-dimensional system then allows for multiple activities to be analysed through the construction of activity constellations. The result of this study is a model of participative learning. Participative learning takes into consideration the purpose of learning and the socio-cultural environment so that musical learning is embedded in social music making. This then provides music education with a new model for learning a musical instrument.
3

Shoemaker, Larry. "Ensemble Learning With Imbalanced Data." Scholar Commons, 2010. http://scholarcommons.usf.edu/etd/3589.

Abstract:
We describe an ensemble approach to learning salient spatial regions from arbitrarily partitioned simulation data. Ensemble approaches for anomaly detection are also explored. The partitioning comes from the distributed processing requirements of large-scale simulations. The volume of the data is such that classifiers can train only on data local to a given partition. Since the data partition reflects the needs of the simulation, the class statistics can vary from partition to partition. Some classes will likely be missing from some or even most partitions. We combine a fast ensemble learning algorithm with scaled probabilistic majority voting in order to learn an accurate classifier from such data. Since some simulations are difficult to model without a considerable number of false positive errors, and since we are essentially building a search engine for simulation data, we order predicted regions to increase the likelihood that most of the top-ranked predictions are correct (salient). Results from simulation runs of a canister being torn and from a casing being dropped show that regions of interest are successfully identified in spite of the class imbalance in the individual training sets. Lift curve analysis shows that the use of data-driven ordering methods provides a statistically significant improvement over the use of the default, natural time step ordering. Significant time is saved for the end user by allowing an improved focus on areas of interest without the need to conventionally search all of the data. We have also found that using random forests, weighted and distance-based outlier ensemble methods for supervised learning of anomaly detection provide significant accuracy improvements when compared to existing methods on the same dataset. Further, distance-based outlier and local outlier factor ensemble methods for unsupervised learning of anomaly detection also compare favorably to existing methods.
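As a rough illustration of the voting scheme named in the abstract, here is one plausible Python sketch of probabilistic majority voting with a per-class scaling step. The scaling by training-set class priors and the parameter names are our assumptions; the thesis's exact scheme may differ.

```python
import numpy as np

def scaled_probabilistic_vote(member_probs, class_priors):
    """member_probs: list of (n_samples, n_classes) probability arrays,
    one per ensemble member trained on its own data partition.
    class_priors: (n_classes,) class frequencies used for rescaling, so
    that classes rare or missing in many partitions are not drowned out."""
    soft_votes = np.sum(member_probs, axis=0)       # accumulate soft votes
    scaled = soft_votes / np.asarray(class_priors)  # boost rare classes
    return scaled.argmax(axis=1)                    # predicted class per sample

# Tiny usage example: the rare second class wins both samples once scaled.
probs_a = np.array([[0.9, 0.1], [0.2, 0.8]])
probs_b = np.array([[0.6, 0.4], [0.4, 0.6]])
print(scaled_probabilistic_vote([probs_a, probs_b], class_priors=[0.95, 0.05]))
```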
4

Rooney, Niall. "Ensemble meta-learning for regression." Thesis, University of Ulster, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445060.

5

Raharjo, Agus Budi. "Reliability in ensemble learning and learning from crowds." Electronic Thesis or Diss., Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0606.

Abstract:
The combination of several human expert labels is generally used to make reliable decisions. However, using humans or learning systems to improve the overall decision is a crucial problem, since different human experts or machine learners do not necessarily perform equally well. Hence, a great effort is made to deal with this performance problem in the presence of several actors, i.e., humans or classifiers. In this thesis, we present the combination of reliable classifiers in ensemble learning and in learning from crowds. The first contribution is a method, based on weighted voting, which allows selecting a reliable combination of classifications. Our algorithm, RelMV, transforms confidence scores obtained during the training phase into reliable scores. Using these scores, it determines a set of reliable candidates through both a static and a dynamic selection process. For cases where it is hard to find expert labels to serve as ground truth, we propose an approach based on Bayesian inference and expectation-maximization (EM) as our second contribution. The aim is to evaluate the reliability degree of each annotator and to aggregate the appropriate labels carefully. We optimize the computation time of the algorithm in order to scale to the large amounts of data collected from crowds. The obtained outcomes show better accuracy, stability, and computation time compared to previous methods. We also conduct an experiment on the melanoma diagnosis problem using a real-world medical dataset consisting of a set of skin lesion images annotated by multiple dermatologists.
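To make the ingredients of the second contribution concrete, here is a compact EM sketch in Python for estimating annotator reliability from binary crowd labels, in the Dawid-Skene spirit. The one-accuracy-parameter-per-annotator model and the uniform label prior are simplifications of ours, not the thesis's exact formulation.

```python
import numpy as np

def em_crowd_labels(votes, n_iter=50):
    """votes: (n_items, n_annotators) array of binary labels in {0, 1}.
    Returns P(true label = 1) per item and an accuracy per annotator."""
    p = votes.mean(axis=1)                      # init posterior: majority vote
    for _ in range(n_iter):
        # M-step: expected fraction of items each annotator labelled correctly.
        acc = (p[:, None] * votes + (1 - p)[:, None] * (1 - votes)).mean(axis=0)
        acc = np.clip(acc, 1e-3, 1 - 1e-3)      # keep log terms finite
        # E-step: posterior of the true label under a uniform label prior.
        log_odds = (votes * (np.log(acc) - np.log(1 - acc))
                    + (1 - votes) * (np.log(1 - acc) - np.log(acc))).sum(axis=1)
        p = 1.0 / (1.0 + np.exp(-log_odds))
    return p, acc
```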
6

Sinsel, Erik W. "Ensemble learning for ranking interesting attributes." Morgantown, W. Va. : [West Virginia University Libraries], 2005. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4400.

Abstract:
Thesis (M.S.)--West Virginia University, 2005. Title from document title page. Document formatted into pages; contains viii, 81 p. : ill. Includes abstract. Includes bibliographical references (p. 72-74).
7

Wang, Shuo. "Ensemble diversity for class imbalance learning." Thesis, University of Birmingham, 2011. http://etheses.bham.ac.uk//id/eprint/1793/.

Abstract:
This thesis studies the diversity issue of classification ensembles for class imbalance learning problems. Class imbalance learning refers to learning from imbalanced data sets, in which some classes of examples (minority) are highly under-represented compared to other classes (majority). The very skewed class distribution degrades the learning ability of many traditional machine learning methods, especially in the recognition of examples from the minority classes, which are often deemed to be more important and interesting. Although quite a few ensemble learning approaches have been proposed to handle the problem, no in-depth research exists to explain why and when they can be helpful. Our objectives are to understand how ensemble diversity affects the classification performance for a class imbalance problem according to single-class and overall performance measures, and to make the best use of diversity to improve the performance. As the first stage, we study the relationship between ensemble diversity and generalization performance for class imbalance problems. We investigate mathematical links between single-class performance and ensemble diversity. It is found that how the single-class measures change along with diversity falls into six different situations. These findings are then verified in class imbalance scenarios through empirical studies. The impact of diversity on overall performance is also investigated empirically. Strong correlations between diversity and the performance measures are found. Diversity shows a positive impact on the recognition of the minority class and benefits the overall performance of ensembles in class imbalance learning. Our results help to understand if and why ensemble diversity can help to deal with class imbalance problems. Encouraged by the positive role of diversity in class imbalance learning, we then focus on a specific ensemble learning technique, the negative correlation learning (NCL) algorithm, which considers diversity explicitly when creating ensembles and has achieved great empirical success. We propose a new learning algorithm based on the idea of NCL, named AdaBoost.NC, for classification problems. An "ambiguity" term decomposed from the 0-1 error function is introduced into the training framework of AdaBoost. It demonstrates superiority in both effectiveness and efficiency. Its good generalization performance is explained by theoretical and empirical evidence. It can be viewed as the first NCL algorithm specializing in classification problems. Most existing ensemble methods for class imbalance problems suffer from overfitting and over-generalization. To improve this situation, we address the class imbalance issue by making use of ensemble diversity. We investigate the generalization ability of NCL algorithms, including AdaBoost.NC, to tackle two-class imbalance problems. We find that NCL methods integrated with random oversampling are effective in recognizing minority class examples without losing overall performance, especially the AdaBoost.NC tree ensemble. This is achieved by providing smoother and less overfitting classification boundaries for the minority class. The results here show the usefulness of diversity and open up a novel way to deal with class imbalance problems. Since two-class imbalance is not the only scenario in real-world applications, multi-class imbalance problems deserve equal attention. To understand what problems multi-class settings can cause and how they affect the classification performance, we study the multi-class difficulty by analyzing the multi-minority and multi-majority cases respectively. Both lead to a significant performance reduction; the multi-majority case appears to be more harmful. The results reveal possible issues that a class imbalance learning technique could have when dealing with multi-class tasks. Following this analysis and the promising results of AdaBoost.NC on two-class imbalance problems, we apply AdaBoost.NC to a set of multi-class imbalance domains with the aim of solving them effectively and directly. Our method shows good generalization in minority classes and balances the performance across different classes well without using any class decomposition schemes. Finally, we conclude this thesis with how the study has contributed to class imbalance learning and ensemble learning, and propose several possible directions for future research that may improve and extend this work.
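For readers who want an operational handle on the "diversity" this abstract studies, here is a tiny Python sketch of one standard measure: average pairwise disagreement among member predictions. The thesis analyses several diversity measures; this particular one is just a common example, not necessarily the one driving its results.

```python
import numpy as np
from itertools import combinations

def pairwise_disagreement(preds):
    """preds: (n_members, n_samples) array of hard class predictions.
    Returns the mean fraction of samples on which pairs of members differ:
    0 means identical members; higher values mean a more diverse ensemble."""
    preds = np.asarray(preds)
    pairs = combinations(range(len(preds)), 2)
    return float(np.mean([(preds[i] != preds[j]).mean() for i, j in pairs]))
```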
8

Soares, Rodrigo Gabriel Ferreira. "Cluster-based semi-supervised ensemble learning." Thesis, University of Birmingham, 2014. http://etheses.bham.ac.uk//id/eprint/4818/.

Abstract:
Semi-supervised classification consists of acquiring knowledge from both labelled and unlabelled data to classify test instances. The cluster assumption represents one of the potential relationships between true classes and data distribution that semi-supervised algorithms assume in order to use unlabelled data. Ensemble algorithms have been widely and successfully employed in both supervised and semi-supervised contexts. In this thesis, we focus on the cluster assumption to study ensemble learning based on a new cluster regularisation technique for multi-class semi-supervised classification. Firstly, we introduce a multi-class cluster-based classifier, the Cluster-based Regularisation (ClusterReg) algorithm. ClusterReg employs a new regularisation mechanism based on posterior probabilities generated by a clustering algorithm in order to avoid generating decision boundaries that traverse high-density regions. Such a method possesses robustness to overlapping classes and to scarce labelled instances in uncertain and low-density regions, when data follows the cluster assumption. Secondly, we propose a robust multi-class boosting technique, Cluster-based Boosting (CBoost), which implements the proposed cluster regularisation for ensemble learning and uses ClusterReg as base learner. CBoost is able to overcome possible incorrect pseudo-labels and produces better generalisation than existing classifiers. Finally, since there are often datasets with a large number of unlabelled instances, we propose the Efficient Cluster-based Boosting (ECB) for large multi-class datasets. ECB extends CBoost and has lower time and memory complexities than state-of-the-art algorithms. Such a method employs a sampling procedure to reduce the training set of base learners, an efficient clustering algorithm, and an approximation technique for nearest neighbours to avoid the computation of the pairwise distance matrix. Hence, ECB enables semi-supervised classification for large-scale datasets.
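The cluster-regularisation intuition can be sketched. Below is a small, hypothetical Python illustration that blends a classifier's class posteriors with class posteriors induced by a clustering of labelled and unlabelled data together, discouraging boundaries through dense regions. The blending weight `alpha`, the smoothed per-cluster frequencies, and all names here are our assumptions, not ClusterReg's actual regulariser; NumPy arrays are expected throughout.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_blended_posteriors(clf_proba, X_all, y_labelled, labelled_idx,
                               n_clusters, alpha=0.5, seed=0):
    """clf_proba: (n, k) classifier posteriors for all n points in X_all.
    y_labelled: integer labels in [0, k) for the points at labelled_idx.
    Returns posteriors blended with smoothed per-cluster class frequencies."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X_all)
    n, k = clf_proba.shape
    cluster_post = np.full((n_clusters, k), 1.0 / k)   # uniform fallback
    for c in range(n_clusters):
        in_c = km.labels_[labelled_idx] == c
        if in_c.any():
            counts = np.bincount(y_labelled[in_c], minlength=k) + 1.0  # Laplace
            cluster_post[c] = counts / counts.sum()
    # Pull each point's posterior toward its cluster's label distribution.
    return (1 - alpha) * clf_proba + alpha * cluster_post[km.labels_]
```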
9

Miskin, James William. "Ensemble learning for independent component analysis." Thesis, University of Cambridge, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.621116.

10

Lind, Simon. "Distributed Ensemble Learning With Apache Spark." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-274323.
