Academic literature on the topic 'Ensemble learning methods'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Ensemble learning methods.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Ensemble learning methods"

1. Qutub, Aseel, Asmaa Al-Mehmadi, Munirah Al-Hssan, Ruyan Aljohani, and Hanan S. Alghamdi. "Prediction of Employee Attrition Using Machine Learning and Ensemble Methods." International Journal of Machine Learning and Computing 11, no. 2 (March 2021): 110–14. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1022.

Abstract:
Employees are the most valuable resources of any organization. The cost of professional training, the loyalty developed over the years, and the sensitivity of some organizational positions all make it essential to identify who might leave the organization. Many reasons can lead to employee attrition. In this paper, several machine learning models are developed to automatically and accurately predict employee attrition. The IBM attrition dataset is used in this work to train and evaluate machine learning models, namely Decision Tree, Random Forest Regressor, Logistic Regressor, AdaBoost, and Gradient Boosting Classifier models. The ultimate goal is to detect attrition accurately so as to help any company improve retention strategies for crucial employees and boost their satisfaction.
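For readers who want to reproduce this kind of comparison, the sketch below sets up the five model families named in the abstract with scikit-learn and scores them by cross-validated F1. It is a minimal illustration rather than the authors' pipeline: the file name ibm_attrition.csv and the "Attrition" label column are assumptions.

```python
# Minimal sketch (not the paper's exact pipeline): compare the five model
# families from the abstract on a tabular attrition dataset.
import pandas as pd
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("ibm_attrition.csv")               # hypothetical file path
y = (df["Attrition"] == "Yes").astype(int)          # assumed label column
X = pd.get_dummies(df.drop(columns=["Attrition"]))  # one-hot encode categoricals

models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "adaboost": AdaBoostClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```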
2. Abdillah, Abid Famasya, Cornelius Bagus Purnama Putra, Apriantoni Apriantoni, Safitri Juanita, and Diana Purwitasari. "Ensemble-based Methods for Multi-label Classification on Biomedical Question-Answer Data." Journal of Information Systems Engineering and Business Intelligence 8, no. 1 (April 26, 2022): 42–50. http://dx.doi.org/10.20473/jisebi.8.1.42-50.

Abstract:
Background: Question-answer (QA) is a popular method to seek health-related information and biomedical data. Such questions can refer to more than one medical entity (multi-label), so determining the correct tags is not easy. The question classification (QC) mechanism in a QA system can narrow down the answers we are seeking. Objective: This study develops a multi-label classification using the heterogeneous ensembles method to improve accuracy in biomedical data with long text dimensions. Methods: We used the ensemble method with heterogeneous deep learning and machine learning for multi-label extended text classification. There are 15 single models consisting of three deep learning algorithms (CNN, LSTM, and BERT) and four machine learning algorithms (SVM, kNN, Decision Tree, and Naïve Bayes) with various text representations (TF-IDF, Word2Vec, and FastText). We used the bagging approach with a hard voting mechanism for decision-making. Results: The results show that deep learning is more powerful than machine learning as a single multi-label biomedical data classification method. Moreover, we found that three was the best number of base learners when combining the ensembles. Heterogeneous ensembles with three learners resulted in an F1-score of 82.3%, better than the best single model, CNN, with an F1-score of 80%. Conclusion: Multi-label classification of biomedical QA using ensemble models is better than using single models. The results show that heterogeneous ensembles are more potent than homogeneous ensembles on biomedical QA data with long text dimensions. Keywords: Biomedical Question Classification, Ensemble Method, Heterogeneous Ensembles, Multi-Label Classification, Question Answering
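The hard-voting mechanism described here is straightforward to prototype. The sketch below is a minimal stand-in, not the paper's system: three classical learners on synthetic multi-label data replace the paper's deep models and text features, and a tag is assigned when at least two of the three learners predict it.

```python
# Minimal sketch of heterogeneous hard voting for multi-label classification.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

# Synthetic stand-in for TF-IDF features and multi-label tags.
X, Y = make_multilabel_classification(n_samples=500, n_classes=5, random_state=0)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

learners = [OneVsRestClassifier(LinearSVC()),
            OneVsRestClassifier(MultinomialNB()),
            OneVsRestClassifier(KNeighborsClassifier())]
preds = [m.fit(X_train, Y_train).predict(X_test) for m in learners]

# Per-label hard vote: keep a tag when a majority of the three learners agrees.
Y_pred = (np.sum(preds, axis=0) >= 2).astype(int)
print("micro-F1:", f1_score(Y_test, Y_pred, average="micro"))
```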
3. Zhang, Boyu, Ji Xiang, and Xin Wang. "Network representation learning with ensemble methods." Neurocomputing 380 (March 2020): 141–49. http://dx.doi.org/10.1016/j.neucom.2019.10.098.

4. Tolmidis, Avraam Th., and Loukas Petrou. "Ensemble Methods for Cooperative Robotic Learning." International Journal of Intelligent Systems 32, no. 5 (October 26, 2016): 502–25. http://dx.doi.org/10.1002/int.21858.

5. Evangelista, Edmund De Leon, and Benedict Descargar Sy. "An approach for improved students’ performance prediction using homogeneous and heterogeneous ensemble methods." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 5 (October 1, 2022): 5226–35. http://dx.doi.org/10.11591/ijece.v12i5.pp5226-5235.

Abstract:
Web-based learning technologies of educational institutions store a massive amount of interaction data which can be helpful for predicting students’ performance with the aid of machine learning algorithms. Accordingly, various researchers have focused on studying ensemble learning methods, which are known to improve the predictive accuracy of traditional classification algorithms. This study proposed an approach for enhancing the performance prediction of different single classification algorithms by using them as base classifiers of homogeneous ensembles (bagging and boosting) and heterogeneous ensembles (voting and stacking). The model utilized various single classifiers such as multilayer perceptron or neural networks (NN), random forest (RF), naïve Bayes (NB), J48, JRip, OneR, logistic regression (LR), k-nearest neighbor (KNN), and support vector machine (SVM) to determine the base classifiers of the ensembles. In addition, the study made use of the University of California Irvine (UCI) open-access student dataset to predict students’ performance. The comparative analysis of the models’ accuracy showed that the best-performing single classifier’s accuracy increased further, from 93.10% to 93.68%, when used as a base classifier of a voting ensemble method. Moreover, results in this study showed that the voting heterogeneous ensemble performed slightly better than the bagging and boosting homogeneous ensemble methods.
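As a concrete reference point, the sketch below builds one ensemble of each of the four families the study compares (bagging, boosting, voting, stacking) with scikit-learn. The synthetic data and the reduced base-classifier lineup are illustrative assumptions; the UCI student dataset is not reproduced here.

```python
# Minimal sketch contrasting homogeneous ensembles (bagging, boosting)
# with heterogeneous ensembles (voting, stacking).
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              StackingClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
base = [("lr", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
        ("knn", KNeighborsClassifier())]

ensembles = {
    "bagging (homogeneous)": BaggingClassifier(
        DecisionTreeClassifier(), n_estimators=50, random_state=0),
    "boosting (homogeneous)": AdaBoostClassifier(n_estimators=50, random_state=0),
    "voting (heterogeneous)": VotingClassifier(estimators=base, voting="hard"),
    "stacking (heterogeneous)": StackingClassifier(
        estimators=base, final_estimator=LogisticRegression(max_iter=1000)),
}
for name, clf in ensembles.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))
```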
6. Turan, Selin Ceren, and Mehmet Ali Cengiz. "Ensemble Learning Algorithms." Journal of Science and Arts 22, no. 2 (June 30, 2022): 459–70. http://dx.doi.org/10.46939/j.sci.arts-22.2-a18.

Abstract:
Artificial intelligence is an increasingly widespread approach in all areas of life that enables machines to imitate human behavior. Machine learning is a subset of artificial intelligence techniques that uses statistical methods to enable machines to improve with experience. As a result of the advancement of technology and developments in the world of science, the interest in and need for machine learning are increasing day by day. Human beings use machine learning techniques in their daily lives without realizing it. This study addresses ensemble learning algorithms, one family of machine learning techniques. The methods used are the Bagging and AdaBoost ensemble learning algorithms. The main purpose of this study is to find the best-performing classifier with the Classification and Regression Trees (CART) base classifier on three different datasets taken from the UCI machine learning repository, and then to obtain ensemble learning algorithms that make this performance better and more consistent. For this purpose, the performance measures of the single base classifier and the ensemble learning algorithms were compared.
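The study's core comparison (a single CART tree versus Bagging and AdaBoost built on CART) can be sketched as follows; the Breast Cancer Wisconsin data, itself a UCI dataset, stands in for the study's three datasets, and the hyperparameters are illustrative.

```python
# Minimal sketch: single CART vs. Bagging(CART) vs. AdaBoost(CART stumps).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
cart = DecisionTreeClassifier(random_state=0)

for name, clf in [
    ("single CART", cart),
    ("Bagging(CART)", BaggingClassifier(cart, n_estimators=100, random_state=0)),
    ("AdaBoost(CART stumps)", AdaBoostClassifier(
        DecisionTreeClassifier(max_depth=1, random_state=0),
        n_estimators=100, random_state=0)),
]:
    # 10-fold cross-validated accuracy for each classifier.
    print(name, cross_val_score(clf, X, y, cv=10).mean().round(3))
```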
7. Alhazmi, Omar H., and Mohammed Zubair Khan. "Software Effort Prediction Using Ensemble Learning Methods." Journal of Software Engineering and Applications 13, no. 7 (2020): 143–60. http://dx.doi.org/10.4236/jsea.2020.137010.

8. Liu, Fayao, Ruizhi Qiao, Chunhua Shen, and Lei Luo. "Designing ensemble learning algorithms using kernel methods." International Journal of Machine Intelligence and Sensory Signal Processing 2, no. 1 (2017): 1. http://dx.doi.org/10.1504/ijmissp.2017.088165.

9. Luo, Lei, Fayao Liu, Ruizhi Qiao, and Chunhua Shen. "Designing ensemble learning algorithms using kernel methods." International Journal of Machine Intelligence and Sensory Signal Processing 2, no. 1 (2017): 1. http://dx.doi.org/10.1504/ijmissp.2017.10009116.

10. Hasegawa, Hironobu, Toshiyuki Naito, Mikiharu Arimura, and Tohru Tamura. "Modal Choice Analysis Using Ensemble Learning Methods." Journal of Japan Society of Civil Engineers, Ser. D3 (Infrastructure Planning and Management) 68, no. 5 (2012): I_773–I_780. http://dx.doi.org/10.2208/jscejipm.68.i_773.


Dissertations / Theses on the topic "Ensemble learning methods"

1. Abbasian, Houman. "Inner Ensembles: Using Ensemble Methods in Learning Step." Thèse, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31127.

Abstract:
A pivotal moment in machine learning research was the creation of an important new research area known as Ensemble Learning. In this work, we argue that ensembles are a very general concept, and though they have been widely used, they can be applied in more situations than they have been to date. Rather than using them only to combine the output of an algorithm, we can apply them to decisions made inside the algorithm itself, during the learning step. We call this approach Inner Ensembles. The motivation for developing Inner Ensembles was the opportunity to produce models with similar advantages to regular ensembles, such as accuracy and stability, plus additional advantages such as comprehensibility, simplicity, rapid classification, and a small memory footprint. The main contribution of this work is to demonstrate how broadly this idea can be applied, and to highlight its potential impact on all types of algorithms. To support our claim, we first provide a general guideline for applying Inner Ensembles to different algorithms. Then, using this framework, we apply them to two categories of learning methods: supervised and unsupervised. For the former we chose Bayesian networks, and for the latter K-Means clustering. Our results show that 1) the overall performance of Inner Ensembles is significantly better than that of the original methods, and 2) Inner Ensembles provide similar performance improvements as regular ensembles.
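The thesis's algorithms are not reproduced here, but a heavily simplified toy sketch can convey the flavor of the idea for K-Means: rather than one centroid update per iteration, an "inner ensemble" of updates is computed on bootstrap resamples of each cluster and averaged. Function name, resample counts, and all other choices below are illustrative assumptions.

```python
# Toy illustration of the Inner Ensembles idea for K-Means (not the thesis's
# exact algorithm): average centroid updates over bootstrap resamples.
import numpy as np

def inner_ensemble_kmeans(X, k, n_members=5, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid.
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members) == 0:
                continue
            # Inner ensemble: centroid estimates from bootstrap resamples,
            # averaged into a single update.
            boots = [members[rng.integers(0, len(members), len(members))].mean(0)
                     for _ in range(n_members)]
            centroids[j] = np.mean(boots, axis=0)
    return centroids, labels

X = np.random.default_rng(1).normal(size=(300, 2))
centroids, labels = inner_ensemble_kmeans(X, k=3)
```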
2. Velka, Elina. "Loss Given Default Estimation with Machine Learning Ensemble Methods." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279846.

Abstract:
This thesis evaluates the performance of three machine learning methods in predicting Loss Given Default (LGD). LGD can be seen as the opposite of the recovery rate, i.e. the fraction of an outstanding loan that the loan issuer would not be able to recover if the customer defaulted. The methods investigated are decision trees, random forest, and boosted methods. All of the methods investigated performed well in predicting the cases where the loan is not recovered, LGD = 1 (100%), or the loan is totally recovered, LGD = 0 (0%). When the performance of the models was evaluated on a dataset where the observations with LGD = 1 were removed, a significant decrease in performance was observed. The random forest model built on an unbalanced training dataset showed better performance on the test dataset that included values of LGD = 1, and the random forest model built on a balanced training dataset performed better on the test set where the observations with LGD = 1 were removed. The boosted models evaluated in this study showed less accurate predictions than the other methods used. Overall, the random forest models showed slightly better results than the decision tree models, although the computational time (the cost) was considerably longer when running the random forest models. Therefore, decision tree models are suggested for prediction of Loss Given Default.
3. Conesa Gago, Agustin. "Methods to combine predictions from ensemble learning in multivariate forecasting." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-103600.

Abstract:
Making predictions is nowadays of high importance for any company, whether small or large: by analyzing the available data, new market opportunities can be found and risks and costs reduced, among other benefits. Machine learning algorithms for time series can be used to predict future values of interest. However, choosing the appropriate algorithm and tuning its metaparameters require a great level of expertise. This creates an adoption barrier for small and medium enterprises which cannot afford to hire a machine learning expert for their IT team. For these reasons, this project studies different possibilities for making good predictions based on machine learning algorithms without requiring great theoretical knowledge from the users. Moreover, a software package that implements the prediction process has been developed. The software is an ensemble method that first predicts a value taking into account different algorithms at the same time, and then combines their results, also considering the previous performance of each algorithm, to obtain a final prediction of the value. Moreover, the solution proposed and implemented in this project can also predict according to a concrete objective (e.g., optimize the prediction, or do not exceed the real value), because not every prediction problem is subject to the same constraints. We have experimented with and validated the implementation on three different cases. In all of them, a better performance was obtained in comparison with each of the algorithms involved, reaching improvements of 45 to 95%.
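A minimal sketch of the combination step described here (not the thesis's package): each algorithm's forecast is weighted by the inverse of its recent mean absolute error, so historically better models dominate the final prediction. The helper name and toy numbers are assumptions.

```python
# Minimal performance-weighted forecast combination.
import numpy as np

def combine_forecasts(past_preds, past_actuals, new_preds, eps=1e-9):
    """past_preds: (t, m) past predictions of m models; past_actuals: (t,);
    new_preds: (m,) current predictions. Returns the weighted combination."""
    errors = np.mean(np.abs(past_preds - past_actuals[:, None]), axis=0)
    weights = 1.0 / (errors + eps)   # better past performance -> larger weight
    weights /= weights.sum()
    return float(weights @ new_preds)

# Toy usage: model 0 has been five times more accurate, so it dominates.
past_preds = np.array([[1.4, 2.0], [2.6, 3.0], [3.4, 4.0]])
past_actuals = np.array([1.5, 2.5, 3.5])
print(combine_forecasts(past_preds, past_actuals, np.array([4.0, 5.0])))  # ~4.17
```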
4. Kanneganti, Alekhya. "Using Ensemble Machine Learning Methods in Estimating Software Development Effort." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20691.

Abstract:
Background: Software development effort estimation is a process that focuses on estimating the effort required to develop a software project with a minimal budget. Estimating effort includes interpretation of the required manpower, resources, time, and schedule. Project managers are responsible for estimating the required effort. A model that can predict software development effort efficiently comes in handy and acts as a decision support system for project managers, enhancing the precision of effort estimates. Therefore, the context of this study is to increase the efficiency of estimating software development effort. Objective: The main objective of this thesis is to identify an effective ensemble method, and to build and implement it for estimating software development effort. Apart from this, parameter tuning is also implemented to improve the performance of the model. Finally, we compare the results of the developed model with those of existing models. Method: In this thesis, we adopted two research methods. Initially, a literature review was conducted to gain knowledge of the existing studies, machine learning techniques, datasets, and ensemble methods previously used in estimating software development effort. Then a controlled experiment was conducted in order to build an ensemble model and to evaluate its performance, determining whether the developed model performs better than the existing models. Results: After conducting the literature review and collecting evidence, we decided to build and implement a stacked generalization ensemble method in this thesis, with the help of individual machine learning techniques such as Support Vector Regressor (SVR), K-Nearest Neighbors Regressor (KNN), Decision Tree Regressor (DTR), Linear Regressor (LR), Multi-Layer Perceptron Regressor (MLP), Random Forest Regressor (RFR), Gradient Boosting Regressor (GBR), AdaBoost Regressor (ABR), and XGBoost Regressor (XGB). Likewise, we decided to implement Randomized Parameter Optimization and the SelectKBest function for feature selection. Datasets such as COCOMO81, MAXWELL, ALBRECHT, and DESHARNAIS were used. Results of the experiment show that the developed ensemble model performs best for three out of four datasets. Conclusion: After evaluating and analyzing the results obtained, we can conclude that the developed model works well with datasets that have continuous, numeric values. We can also conclude that the developed ensemble model outperforms other existing models when implemented with the COCOMO81, MAXWELL, and ALBRECHT datasets.
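A compressed sketch of the stacked generalization setup described above, using a few of the listed base regressors. Synthetic data stands in for COCOMO81 and the other effort datasets, which are not bundled with scikit-learn; the hyperparameter search and SelectKBest steps mentioned in the abstract are omitted for brevity.

```python
# Minimal stacked generalization sketch for effort-style regression.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=8, noise=10.0, random_state=0)
stack = StackingRegressor(
    estimators=[("svr", SVR()),
                ("knn", KNeighborsRegressor()),
                ("rf", RandomForestRegressor(random_state=0))],
    final_estimator=LinearRegression(),  # meta-learner on level-1 predictions
)
print("stacked R^2:", cross_val_score(stack, X, y, cv=5).mean().round(3))
```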
5. Bustos, Ricardo Gacitua. "OntoLancs: An evaluation framework for ontology learning by ensemble methods." Thesis, Lancaster University, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.533089.

6. Elahi, Haroon. "A Boosted-Window Ensemble." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5658.

Abstract:
Context: The problem of obtaining predictions from stream data involves training on the labeled instances and suggesting class values for unseen stream instances. The nature of data-stream environments makes this task complicated. The large number of instances, the possibility of changes in the data distribution, the presence of noise, and drifting concepts are just some of the factors that add complexity to the problem. Various supervised-learning algorithms have been designed by putting together efficient data-sampling, ensemble-learning, and incremental-learning methods. The performance of the algorithm depends on the chosen methods. This leaves an opportunity to design new supervised-learning algorithms by using different combinations of constituent methods. Objectives: This thesis proposes a fast and accurate supervised-learning algorithm for performing predictions on data streams. This algorithm, called the Boosted-Window Ensemble (BWE), is designed using the mixture-of-experts technique. BWE uses a sliding window, online boosting, and incremental learning for data-sampling, ensemble-learning, and maintaining a consistent state with the current stream data, respectively. In this regard, a sliding window method is introduced that uses partial updates for sliding the window over the data stream, called the Partially-Updating Sliding Window (PUSW). The investigation compares two variants of the sliding window and three different ensemble-learning methods in order to choose the superior methods. Methods: The thesis uses an experimentation approach for evaluating the Boosted-Window Ensemble (BWE). CPU time and prediction accuracy are used as performance indicators, where CPU time is the execution time in seconds. The benchmark algorithms include Accuracy-Updated Ensemble1 (AUE1), Accuracy-Updated Ensemble2 (AUE2), and Accuracy-Weighted Ensemble (AWE). The experiments use nine synthetic and five real-world datasets for generating performance estimates. The asymptotic Friedman test and the Wilcoxon signed-rank test are used for hypothesis testing, and the Wilcoxon-Nemenyi-McDonald-Thompson test for post-hoc analysis. Results: The hypothesis testing suggests that: 1) for both the synthetic and real-world datasets, the Boosted-Window Ensemble (BWE) has significantly lower CPU-time values than two of the benchmark algorithms (Accuracy-Updated Ensemble1 (AUE1) and Accuracy-Weighted Ensemble (AWE)); 2) BWE returns similar prediction accuracy as AUE1 and AWE for the synthetic datasets; 3) BWE returns similar prediction accuracy as the three benchmark algorithms for the real-world datasets. Conclusions: The experimental results demonstrate that the proposed algorithm can be as accurate as the state-of-the-art benchmark algorithms while obtaining predictions from stream data. The results further show that the use of the Partially-Updating Sliding Window resulted in lower CPU time for BWE compared with the chunk-based sliding window method used in AUE1, AUE2, and AWE.
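The BWE algorithm itself is not reproduced here; the toy sketch below only illustrates one plausible reading of a partially-updating sliding window: the window advances by a fraction of its length at a time, evicting the oldest instances, and signals when the ensemble should be refreshed. The class name, the fraction, and the retrain signal are all assumptions.

```python
# Toy sketch of a partially-updating sliding window over a data stream.
from collections import deque

class PartialSlidingWindow:
    def __init__(self, size=1000, update_fraction=0.25):
        self.window = deque(maxlen=size)           # maxlen evicts the oldest
        self.step = max(1, int(size * update_fraction))
        self.buffer = []

    def add(self, instance):
        """Buffer incoming instances; slide the window once per partial step."""
        self.buffer.append(instance)
        if len(self.buffer) >= self.step:
            self.window.extend(self.buffer)
            self.buffer.clear()
            return True                            # signal: retrain the ensemble
        return False

window = PartialSlidingWindow(size=8, update_fraction=0.5)
for i in range(20):
    if window.add(i):
        print("retrain on", list(window.window))
```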
7. King, Michael Allen. "Ensemble Learning Techniques for Structured and Unstructured Data." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/51667.

Abstract:
This research provides an integrated approach to applying innovative ensemble learning techniques that has the potential to increase the overall accuracy of classification models. Actual structured and unstructured data sets from industry are utilized during the research process, analysis, and subsequent model evaluations. The first research section addresses the consumer demand forecasting and daily capacity management requirements of a nationally recognized alpine ski resort in the state of Utah, in the United States of America. A basic econometric model is developed, and three classic predictive models are evaluated for effectiveness. These predictive models were subsequently used as input for four ensemble modeling techniques, which are shown to be effective. The second research section discusses the opportunities and challenges faced by a leading firm providing sponsored search marketing services. The goal of sponsored search marketing campaigns is to create advertising campaigns that better attract and motivate a target market to purchase. This research develops a method for classifying profitable campaigns and maximizing overall campaign portfolio profits. Four traditional classifiers are utilized, along with four ensemble learning techniques, to build classifier models that identify profitable pay-per-click campaigns. A MetaCost ensemble configuration, having the ability to integrate unequal classification costs, produced the highest campaign portfolio profit. The third research section addresses the management challenges of online consumer reviews encountered by service industries and shows how these textual reviews can be used for service improvements. A service improvement framework is introduced that integrates traditional text mining techniques and second-order feature derivation with ensemble learning techniques. The concept of GLOW and SMOKE words is introduced and shown to be an objective text-analytic source of service defects or service accolades.
8. Nguyen, Thanh Tien. "Ensemble Learning Techniques and Applications in Pattern Classification." Thesis, Griffith University, 2017. http://hdl.handle.net/10072/366342.

Abstract:
It is widely known that the best classifier for a given problem is often problem dependent, and there is no one classification algorithm that is the best for all classification tasks. A natural question that arises is: can we combine multiple classification algorithms to achieve higher classification accuracy than a single one? That is the idea behind a class of methods called ensemble methods. An ensemble method is defined as the combination of several classifiers with the aim of achieving a lower classification error rate than a single classifier. Ensemble methods have been applied to various applications ranging from computer-aided medical diagnosis, computer vision, and software engineering to information retrieval. In this study, we focus on heterogeneous ensemble methods, in which a fixed set of diverse learning algorithms is trained on the same training set to generate the different classifiers, and the class prediction is then made based on the output of these classifiers (called Level-1 data or meta-data). Research on heterogeneous ensemble methods is mainly focused on two aspects: (i) proposing efficient methods for combining classifiers on meta-data to achieve high accuracy, and (ii) optimizing the ensemble by performing feature and classifier selection. Although various approaches related to heterogeneous ensemble methods have been proposed, some research gaps still exist. First, in ensemble learning, the meta-data of an observation reflects the agreement and disagreement between the different base classifiers.
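The "Level-1 data" mentioned here is typically produced from out-of-fold predictions. A minimal sketch, under the assumption of a standard stacking workflow rather than the thesis's specific variants:

```python
# Minimal sketch: build level-1 meta-data from out-of-fold class
# probabilities of a fixed set of diverse base learners.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
base_learners = [DecisionTreeClassifier(random_state=0), GaussianNB(),
                 LogisticRegression(max_iter=1000)]

# Each column block of meta_X holds one learner's out-of-fold probabilities.
meta_X = np.hstack([cross_val_predict(clf, X, y, cv=5, method="predict_proba")
                    for clf in base_learners])
meta_clf = LogisticRegression(max_iter=1000).fit(meta_X, y)
```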
9. Shi, Zhe. "Semi-supervised Ensemble Learning Methods for Enhanced Prognostics and Health Management." University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1522420632837268.

10. Slawek, Janusz. "Inferring Gene Regulatory Networks from Expression Data using Ensemble Methods." VCU Scholars Compass, 2014. http://scholarscompass.vcu.edu/etd/3396.

Abstract:
High-throughput technologies for measuring gene expression have made inference of genome-wide Gene Regulatory Networks an active field of research. Reverse-engineering of systems of transcriptional regulations has become an important challenge in molecular and computational biology. Because such systems model dependencies between genes, they are important in understanding cell behavior, and can potentially turn observed expression data into new biological knowledge and practical applications. In this dissertation we introduce a set of algorithms which infer networks of transcriptional regulations from a variety of expression profiles with superior accuracy compared to the state-of-the-art techniques. The proposed methods make use of ensembles of trees, which have become popular in many scientific fields, including genetics and bioinformatics, although originally they were motivated from the perspective of classification, regression, and feature selection theory. In this study we exploit their relative variable importance measure as an indication of the presence or absence of a regulatory interaction between genes. We further analyze their predictions on a set of universally recognized benchmark expression data sets, and achieve favorable results in comparison with the state-of-the-art algorithms.
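In the spirit of this family of methods (GENIE3-style tree-based network inference, not necessarily the dissertation's exact algorithms), the sketch below scores putative regulator-target links by the variable importances of per-gene random forests; the toy expression matrix is random.

```python
# Minimal sketch: tree-ensemble variable importance as regulatory-link weights.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
expr = rng.normal(size=(100, 10))   # 100 samples x 10 genes (toy data)

n_genes = expr.shape[1]
link_weights = np.zeros((n_genes, n_genes))   # rows: regulators, cols: targets
for target in range(n_genes):
    inputs = np.delete(np.arange(n_genes), target)
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    rf.fit(expr[:, inputs], expr[:, target])  # predict target from other genes
    link_weights[inputs, target] = rf.feature_importances_
```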

Books on the topic "Ensemble learning methods"

1. Zhang, Cha. Ensemble Machine Learning: Methods and Applications. Boston, MA: Springer US, 2012.

2. Rokach, Lior. Pattern Classification Using Ensemble Methods. Singapore: World Scientific, 2010.

3. Zhou, Zhi-Hua. Ensemble Methods: Foundations and Algorithms. Boca Raton, FL: Taylor & Francis, 2012.

4. Baruque, Bruno, and Emilio Corchado. Fusion Methods for Unsupervised Learning Ensembles. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-16205-3.

5. Baruque, Bruno. Fusion Methods for Unsupervised Learning Ensembles. Berlin: Springer, 2010.

6. Rokach, Lior. Ensemble Learning: Pattern Classification Using Ensemble Methods. World Scientific Publishing Co Pte Ltd, 2019.

7. Kunapuli, Gautam. Ensemble Methods for Machine Learning. Manning Publications Co. LLC, 2022.

8. Zhou, Zhi-Hua. Ensemble Methods: Foundations and Algorithms. Taylor & Francis Group, 2012.


Book chapters on the topic "Ensemble learning methods"

1. Bisong, Ekaba. "Ensemble Methods." In Building Machine Learning and Deep Learning Models on Google Cloud Platform, 269–86. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4470-8_23.

2. Pajankar, Ashwin, and Aditya Joshi. "Ensemble Learning Methods." In Hands-on Machine Learning with Python, 167–84. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-7921-2_10.

3. Ferreira, Artur J., and Mário A. T. Figueiredo. "Boosting Algorithms: A Review of Methods, Theory, and Applications." In Ensemble Machine Learning, 35–85. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4419-9326-7_2.

4. Geetha, T. V., and S. Sendhilkumar. "Performance Evaluation and Ensemble Methods." In Machine Learning, 191–210. Boca Raton: Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003290100-8.

5. Brazdil, Pavel, Jan N. van Rijn, Carlos Soares, and Joaquin Vanschoren. "Metalearning in Ensemble Methods." In Metalearning, 189–200. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-67024-5_10.

Abstract:
This chapter discusses some approaches that exploit metalearning methods in ensemble learning. It starts by presenting a set of issues, such as the ensemble method used, which affect the process of ensemble learning and the resulting ensemble. In this chapter we discuss various lines of research that were followed. Some approaches seek an ensemble-based solution for the whole dataset, others for individual instances. Regarding the first group, we focus on metalearning in the construction, pruning, and integration phases. Modeling the interdependence of models plays an important part in this process. In the second group, the dynamic selection of models is carried out for each instance. A separate section is dedicated to hierarchical ensembles and some methods used in their design. As this area involves potentially very large configuration spaces, recourse to advanced methods, including metalearning, is advantageous. It can be exploited to define the competence regions of different models and the dependencies between them.
6. Liu, Xu-Ying, and Zhi-Hua Zhou. "Ensemble Methods for Class Imbalance Learning." In Imbalanced Learning, 61–82. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118646106.ch4.

7. Dietterich, Thomas G. "Ensemble Methods in Machine Learning." In Multiple Classifier Systems, 1–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-45014-9_1.

8. Džeroski, Sašo, Panče Panov, and Bernard Ženko. "Machine Learning, Ensemble Methods in." In Computational Complexity, 1781–89. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-1800-9_114.

9. Džeroski, Sašo, Panče Panov, and Bernard Ženko. "Machine Learning, Ensemble Methods in." In Encyclopedia of Complexity and Systems Science, 5317–25. New York, NY: Springer New York, 2009. http://dx.doi.org/10.1007/978-0-387-30440-3_315.

10. Rokach, Lior. "Ensemble Methods in Supervised Learning." In Data Mining and Knowledge Discovery Handbook, 959–79. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-09823-4_50.


Conference papers on the topic "Ensemble learning methods"

1. Cheung, Catherine, and Zouhair Hamaimou. "Ensemble Integration Methods for Load Estimation." In Vertical Flight Society 78th Annual Forum & Technology Display. The Vertical Flight Society, 2022. http://dx.doi.org/10.4050/f-0078-2022-17553.

Abstract:
Helicopter component load estimation can be achieved through a variety of machine learning techniques and algorithms. To increase confidence in the load estimation process, ensemble methods are employed, combining multiple individual load estimators to increase predictive stability across flights and add robustness to noisy data. In this work, several load estimation methods are applied across a variety of machine learning algorithms to build a large library of individual load estimation models for main rotor yoke loads from 28 flight state and control system parameters. This paper explores several ensemble integration methods, including simple averaging, weighted averaging using rank sum, and forward selection. From the 426 individual models, the top 25 models were selected based on four ranking metrics: root mean squared error (RMSE), correlation coefficient, and the interquartile ranges of these two metrics. All ensembles achieved improved performance on these four metrics compared to the best individual model, with the forward selection ensemble obtaining the lowest RMSE, the highest correlation, and visually the closest load signal prediction of all models.
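The forward-selection integration described here is easy to prototype: greedily add (with replacement) whichever model most reduces the validation RMSE of the running-average prediction. The sketch below is an illustrative reconstruction, not the paper's code; names and toy numbers are assumptions.

```python
# Minimal forward-selection ensemble integration for regression.
import numpy as np

def forward_select(preds, y_val, n_members=5):
    """preds: {name: validation predictions}. Greedily grow an ensemble
    (selection with replacement) minimizing RMSE of the average prediction."""
    chosen, running = [], np.zeros_like(y_val, dtype=float)
    for _ in range(n_members):
        scores = {name: np.sqrt(np.mean(((running + p) / (len(chosen) + 1)
                                         - y_val) ** 2))
                  for name, p in preds.items()}
        best = min(scores, key=scores.get)   # model that most reduces RMSE
        chosen.append(best)
        running += preds[best]
    return chosen, running / len(chosen)

# Toy usage with three hypothetical load estimators:
y_val = np.array([1.0, 2.0, 3.0, 4.0])
preds = {"svr": y_val + 0.3, "mlp": y_val - 0.4, "rf": y_val + 0.1}
print(forward_select(preds, y_val, n_members=3))
```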
2. Ameksa, Mohammed, Hajar Mousannif, Hassan Al Moatassime, and Zouhair Elamrani Abou Elassad. "Crash Prediction using Ensemble Methods." In International Conference on Big Data, Modelling and Machine Learning (BML'21). SCITEPRESS - Science and Technology Publications, 2021. http://dx.doi.org/10.5220/0010731200003101.

3. Wan, Shaohua, and Hua Yang. "Comparison among Methods of Ensemble Learning." In 2013 International Symposium on Biometrics and Security Technologies (ISBAST). IEEE, 2013. http://dx.doi.org/10.1109/isbast.2013.50.

4. Liu, Ling. "Ensemble Learning Methods for Dirty Data." In CIKM '22: The 31st ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3511808.3558584.

5. Elsabawy, Nourhan, Alaa Elnakeeb, Mohammed Baher, and Nora Elrashidy. "Ensemble Machine Learning for Breast Cancer Detection." In 2023 Intelligent Methods, Systems, and Applications (IMSA). IEEE, 2023. http://dx.doi.org/10.1109/imsa58542.2023.10217633.

6. Fan, Yue, Mark A. Kon, and Charles DeLisi. "Ensemble Machine Methods for DNA Binding." In 2008 Seventh International Conference on Machine Learning and Applications. IEEE, 2008. http://dx.doi.org/10.1109/icmla.2008.114.

7. Shih, Po-Yuan, Chia-Ping Chen, and Chung-Hsien Wu. "Speech emotion recognition with ensemble learning methods." In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017. http://dx.doi.org/10.1109/icassp.2017.7952658.

8. Jose, Joyal P., T. Ananthan, and N. Krishna Prakash. "Ensemble Learning Methods for Machine Fault Diagnosis." In 2022 Third International Conference on Intelligent Computing Instrumentation and Control Technologies (ICICICT). IEEE, 2022. http://dx.doi.org/10.1109/icicict54557.2022.9917966.

9. Avelar, Gustavo de P., Guilherme O. Campos, and Wagner Meira Jr. "Characterizing and understanding ensemble-based anomaly-detection." In Symposium on Knowledge Discovery, Mining and Learning. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/kdmile.2021.17473.

Abstract:
Anomaly Detection (AD) has grown in importance in recent years as a result of the increasing digitalization of services and data storage, and abnormal behavior detection has become a key task. However, discovering abnormal data mixed into the huge amount of data available is a daunting problem, and the efficacy of current methods depends on a wide range of assumptions. One effective strategy for detecting anomalies is to combine multiple models, called "ensembles", but the factors that determine their performance are often hard to pin down, making their calibration and improvement a challenging task. In this paper we address these problems by employing a four-step method for characterizing and understanding the ensemble-based anomaly-detection task. We start by characterizing several datasets and analyzing the factors that make their anomalies hard to detect. We then evaluate to what extent existing algorithms are able to detect anomalies in the same datasets. On the basis of both analyses, we propose a stacking-based ensemble that outperformed a state-of-the-art baseline, Isolation Forest. Finally, we examine the benefits and drawbacks of our proposal.
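As a minimal illustration of ensemble-based anomaly detection (score averaging rather than the paper's stacking design), the sketch below rank-normalizes the scores of three unsupervised detectors, including the Isolation Forest baseline, and averages them; the injected outliers are toy data.

```python
# Minimal anomaly-detection ensemble: rank-averaged detector scores.
import numpy as np
from scipy.stats import rankdata
from sklearn.datasets import make_blobs
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

X, _ = make_blobs(n_samples=300, centers=1, random_state=0)
X = np.vstack([X, [[8, 8], [9, -7]]])   # inject two obvious outliers

scores = []   # higher score = more anomalous, for every detector
scores.append(-IsolationForest(random_state=0).fit(X).score_samples(X))
scores.append(-LocalOutlierFactor().fit(X).negative_outlier_factor_)
scores.append(-OneClassSVM(nu=0.05).fit(X).score_samples(X))

# Rank-average so detectors with different score scales contribute equally.
ensemble_score = np.mean([rankdata(s) for s in scores], axis=0)
print("most anomalous indices:", np.argsort(ensemble_score)[-2:])
```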
10. Teti, Emily S., Rollin Lakis, and Vlad Henzl. "Unsupervised methods and ensemble learning to classify vibration sensor data." In Applications of Machine Learning 2023, edited by Barath Narayanan Narayanan, Michael E. Zelinski, Tarek M. Taha, and Jonathan Howe. SPIE, 2023. http://dx.doi.org/10.1117/12.2677375.


Reports on the topic "Ensemble learning methods"

1. Hart, Carl R., D. Keith Wilson, Chris L. Pettit, and Edward T. Nykaza. Machine-Learning of Long-Range Sound Propagation Through Simulated Atmospheric Turbulence. U.S. Army Engineer Research and Development Center, July 2021. http://dx.doi.org/10.21079/11681/41182.

Abstract:
Conventional numerical methods can capture the inherent variability of long-range outdoor sound propagation. However, their computational memory and time requirements are high. In contrast, machine-learning models provide very fast predictions by learning from experimental observations or surrogate data. Yet it is unknown what type of surrogate data is most suitable for machine learning. This study used a Crank-Nicholson parabolic equation (CNPE) for generating the surrogate data. The CNPE input data were sampled by the Latin hypercube technique. Two separate datasets comprised 5000 samples of model input. The first dataset consisted of transmission loss (TL) fields for single realizations of turbulence. The second dataset consisted of average TL fields for 64 realizations of turbulence. Three machine-learning algorithms were applied to each dataset, namely ensemble decision trees, neural networks, and cluster-weighted models. Observational data come from a long-range (out to 8 km) sound propagation experiment. In comparison to the experimental observations, the regression predictions have a median absolute error of 5–7 dB. Surrogate data quality depends on an accurate characterization of refractive and scattering conditions. Predictions obtained through a single realization of turbulence agree better with the experimental observations.
2. Douglas, Thomas, and Caiyun Zhang. Machine learning analyses of remote sensing measurements establish strong relationships between vegetation and snow depth in the boreal forest of Interior Alaska. Engineer Research and Development Center (U.S.), July 2021. http://dx.doi.org/10.21079/11681/41222.

Abstract:
The seasonal snowpack plays a critical role in Arctic and boreal hydrologic and ecologic processes. Though snow depth can differ from one season to another, there are repeated relationships between ecotype and snowpack depth. Alterations to the seasonal snowpack, which plays a critical role in regulating wintertime soil thermal conditions, have major ramifications for near-surface permafrost. Therefore, relationships between vegetation and snowpack depth are critical for identifying how present and projected future changes in winter season processes or land cover will affect permafrost. Vegetation and snow cover areal extent can be assessed rapidly over large spatial scales with remote sensing methods; however, measuring snow depth remotely has proven difficult. This makes snow depth-vegetation relationships a potential means of assessing snowpack characteristics. In this study, we combined airborne hyperspectral and LiDAR data with machine learning methods to characterize relationships between ecotype and end-of-winter snowpack depth. Our results show hyperspectral measurements account for two thirds or more of the variance in the relationship between ecotype and snow depth. An ensemble analysis of model outputs using hyperspectral and LiDAR measurements yields the strongest relationships between ecotype and snow depth. Our results can be applied across the boreal biome to model the coupling effects between vegetation and snowpack depth.
3. Aguilar, G., H. Waqa-Sakiti, and L. Winder. Using Predicted Locations and an Ensemble Approach to Address Sparse Data Sets for Species Distribution Modelling: Long-horned Beetles (Cerambycidae) of the Fiji Islands. Unitec ePress, December 2016. http://dx.doi.org/10.34074/book.008.

Abstract:
In response to unique species in Fiji which are threatened or endangered, and in critical need of effective conservation measures to ensure their survival, author Glenn Aguilar has produced an eMedia publication and learning research tool called GIS For Conservation. The eMedia website hosts tutorial material, videos, and modelling results for conservation management and planning purposes. Users will learn spatial analytical skills, species distribution modelling, and other relevant GIS tools, as well as enhance their skills with ArcMap and the species distribution modelling tool Maxent. Accompanying the GIS For Conservation website is a peer-reviewed research report. The report details the case study and research methods that have informed the eMedia publication, focusing on the development of maps predicting the suitability of the Fiji Islands for long-horned beetles (Cerambycidae), which include endemic and endangered species such as the Giant Fijian Beetle Xixuthrus heros.