Academic literature on the topic 'Boosting and bagging'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Boosting and bagging.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Boosting and bagging"

1

Machova, Kristina, Miroslav Puszta, Frantisek Barcak, and Peter Bednar. "A comparison of the bagging and the boosting methods using the decision trees classifiers." Computer Science and Information Systems 3, no. 2 (2006): 57–72. http://dx.doi.org/10.2298/csis0602057m.

Full text
Abstract:
In this paper we present an improvement in the precision of classification algorithm results. Two approaches are known: bagging and boosting. This paper describes a set of experiments with the bagging and boosting methods, applied to classification algorithms that generate decision trees. Results of performance tests focused on the use of bagging and boosting in connection with binary decision trees are presented. The minimum number of decision trees that enables an improvement of the classification performed by the bagging and boosting methods was found. The tests were carried out using the Reuters-21578 collection of documents as well as documents from an Internet portal of the TV broadcasting company Markíza. A comparison of our results on testing the bagging and boosting algorithms is presented.
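The kind of experiment this abstract describes (growing bagged and boosted decision-tree ensembles and looking for the smallest ensemble size that helps) can be sketched with scikit-learn. The synthetic dataset and parameters below are illustrative assumptions, not the paper's setup:

```python
# Sketch: compare bagging and boosting of decision trees while growing
# the ensemble. Dataset and parameters are illustrative, not the paper's.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for n in (5, 25, 100):  # look for the smallest ensemble that helps
    # defaults use decision trees as base learners in both ensembles
    bag = BaggingClassifier(n_estimators=n, random_state=0).fit(X_tr, y_tr)
    boost = AdaBoostClassifier(n_estimators=n, random_state=0).fit(X_tr, y_tr)
    print(f"n={n:3d}  bagging={bag.score(X_te, y_te):.3f}  "
          f"boosting={boost.score(X_te, y_te):.3f}")
```

On a run like this, test accuracy typically rises with ensemble size and then flattens, which is how a "minimum useful number of trees" can be located empirically.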
APA, Harvard, Vancouver, ISO, and other styles
2

Taser, Pelin Yildirim. "Application of Bagging and Boosting Approaches Using Decision Tree-Based Algorithms in Diabetes Risk Prediction." Proceedings 74, no. 1 (March 4, 2021): 6. http://dx.doi.org/10.3390/proceedings2021074006.

Full text
Abstract:
Diabetes is a serious condition that leads to high blood sugar and the prediction of this disease at an early stage is of great importance for reducing the risk of some significant diabetes complications. In this study, bagging and boosting approaches using six different decision tree-based (DTB) classifiers were implemented on experimental data for diabetes prediction. This paper also compares applied individual implementation, bagging, and boosting of DTB classifiers in terms of accuracy rates. The results indicate that the bagging and boosting approaches outperform the individual DTB classifiers, and real Adaptive Boosting (AdaBoost) and bagging using Naive Bayes Tree (NBTree) present the best accuracy score of 98.65%.
3

Li, Xiao Bo. "Contrast Research of Two Kinds of Integrated Sorting Algorithms." Advanced Materials Research 433-440 (January 2012): 4025–31. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.4025.

Full text
Abstract:
Boosting and Bagging are two important voting-based ensemble algorithms. Boosting generates multiple classifiers serially by adjusting sample weights; Bagging generates multiple classifiers in parallel. The approaches differ in their loss functions and integration modes. By combining the Bagging and Boosting algorithms with the naïve Bayes (NB) algorithm, the Bagging NB and AdaBoost NB algorithms are constructed. Experimental comparison on UCI data sets shows that the Bagging NB algorithm is relatively stable and produces classification results superior to those of the NB algorithm, while the AdaBoost NB algorithm is strongly affected by singular values in the data distribution: its results, built on the NB algorithm, are relatively poor on part of the data sets, which can negatively influence the classifier.
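The "Bagging NB" construction in this abstract (bootstrap-resample the training set, fit a naive Bayes model on each replicate, combine by majority vote) can be sketched as follows; scikit-learn, the dataset, and the ensemble size are illustrative assumptions, not the paper's configuration:

```python
# Sketch of "Bagging NB": fit naive Bayes on bootstrap replicates and
# combine by majority vote. Dataset and sizes are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = []
for _ in range(25):                                 # 25 bootstrap replicates
    idx = rng.integers(0, len(X_tr), len(X_tr))     # sample with replacement
    models.append(GaussianNB().fit(X_tr[idx], y_tr[idx]))

votes = np.stack([m.predict(X_te) for m in models])   # (25, n_test) 0/1 labels
majority = (votes.mean(axis=0) > 0.5).astype(int)     # majority vote per example
accuracy = (majority == y_te).mean()
print(f"Bagging-NB test accuracy: {accuracy:.3f}")
```

Bootstrap averaging mainly reduces variance, which is consistent with the abstract's observation that Bagging NB is the more stable of the two combinations.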
4

Ćwiklińska-Jurkowska, Małgorzata M. "Performance of Resampling Methods Based on Decision Trees, Parametric and Nonparametric Bayesian Classifiers for Three Medical Datasets." Studies in Logic, Grammar and Rhetoric 35, no. 1 (December 1, 2013): 71–86. http://dx.doi.org/10.2478/slgr-2013-0045.

Full text
Abstract:
The figures visualizing single and combined classifiers coming from the decision trees group and Bayesian parametric and nonparametric discriminant functions show the importance of diversity in bagging or boosting combined models and confirm some theoretical outcomes suggested by other authors. For the three medical sets examined, decision trees, as well as linear and quadratic discriminant functions, are useful for bagging and boosting. Classifiers which do not show an increasing tendency for resubstitution errors in subsequent loops of the deterministic boosting procedure are not useful for fusion, e.g. the kernel discriminant function. For the success of resampling classifiers' fusion, a compromise between accuracy and diversity is needed. Diversity, important to the success of boosting and bagging, may be assessed by the concordance of base classifiers with the learning vector.
5

Martínez-Muñoz, Gonzalo, and Alberto Suárez. "Using boosting to prune bagging ensembles." Pattern Recognition Letters 28, no. 1 (January 2007): 156–65. http://dx.doi.org/10.1016/j.patrec.2006.06.018.

Full text
6

Anctil, F., and N. Lauzon. "Generalisation for neural networks through data sampling and training procedures, with applications to streamflow predictions." Hydrology and Earth System Sciences 8, no. 5 (October 31, 2004): 940–58. http://dx.doi.org/10.5194/hess-8-940-2004.

Full text
Abstract:
Abstract. Since the 1990s, neural networks have been applied to many studies in hydrology and water resources. Extensive reviews on neural network modelling have identified the major issues affecting modelling performance; one of the most important is generalisation, which refers to building models that can infer the behaviour of the system under study for conditions represented not only in the data employed for training and testing but also for those conditions not present in the data sets but inherent to the system. This work compares five generalisation approaches: stop training, Bayesian regularisation, stacking, bagging and boosting. All have been tested with neural networks in various scientific domains; stop training and stacking having been applied regularly in hydrology and water resources for some years, while Bayesian regularisation, bagging and boosting have been less common. The comparison is applied to streamflow modelling with multi-layer perceptron neural networks and the Levenberg-Marquardt algorithm as training procedure. Six catchments, with diverse hydrological behaviours, are employed as test cases to draw general conclusions and guidelines on the use of the generalisation techniques for practitioners in hydrology and water resources. All generalisation approaches provide improved performance compared with standard neural networks without generalisation. Stacking, bagging and boosting, which affect the construction of training sets, provide the best improvement from standard models, compared with stop-training and Bayesian regularisation, which regulate the training algorithm. Stacking performs better than the others although the benefit in performance is slight compared with bagging and boosting; furthermore, it is not consistent from one catchment to another. For a good combination of improvement and stability in modelling performance, the joint use of stop training or Bayesian regularisation with either bagging or boosting is recommended. 
Keywords: neural networks, generalisation, stacking, bagging, boosting, stop-training, Bayesian regularisation, streamflow modelling
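A minimal sketch of the bagging generalisation approach discussed above, applied to neural-network regression; scikit-learn's MLPRegressor and the synthetic data are stand-in assumptions, not the study's catchment records or its Levenberg-Marquardt training procedure:

```python
# Sketch: bagging applied to neural-network regression, in the spirit of
# the streamflow study above. Synthetic data stands in for catchment records.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 2))                  # stand-in inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 300)

nets = []
for seed in range(5):                        # 5 networks, each on a bootstrap set
    idx = rng.integers(0, len(X), len(X))    # resample training set with replacement
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=800,
                       random_state=seed).fit(X[idx], y[idx])
    nets.append(net)

X_new = rng.uniform(-3, 3, size=(50, 2))
ensemble_pred = np.mean([n.predict(X_new) for n in nets], axis=0)  # average the members
print("bagged-ensemble prediction shape:", ensemble_pred.shape)
```

Averaging the members' predictions is the regression analogue of majority voting, and is the mechanism by which bagging stabilises the individual networks.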
7

Sadorsky, Perry. "Predicting Gold and Silver Price Direction Using Tree-Based Classifiers." Journal of Risk and Financial Management 14, no. 5 (April 29, 2021): 198. http://dx.doi.org/10.3390/jrfm14050198.

Full text
Abstract:
Gold is often used by investors as a hedge against inflation or adverse economic times. Consequently, it is important for investors to have accurate forecasts of gold prices. This paper uses several machine learning tree-based classifiers (bagging, stochastic gradient boosting, random forests) to predict the price direction of gold and silver exchange traded funds. Decision tree bagging, stochastic gradient boosting, and random forests predictions of gold and silver price direction are much more accurate than those obtained from logit models. For a 20-day forecast horizon, tree bagging, stochastic gradient boosting, and random forests produce accuracy rates of between 85% and 90% while logit models produce accuracy rates of between 55% and 60%. Stochastic gradient boosting accuracy is a few percentage points less than that of random forests for forecast horizons over 10 days. For those looking to forecast the direction of gold and silver prices, tree bagging and random forests offer an attractive combination of accuracy and ease of estimation. For each of gold and silver, a portfolio based on the random forests price direction forecasts outperformed a buy and hold portfolio.
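The tree-based setup described above (random forests and stochastic gradient boosting on lagged returns) might be sketched like this. The simulated random-walk prices and the 5-lag feature set are illustrative assumptions, so accuracies here will hover near chance rather than at the paper's reported levels:

```python
# Sketch: tree-based classifiers for price-direction prediction.
# The "prices" are a simulated random walk, not real gold/silver ETF data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 1200)) + 100       # simulated price series
rets = np.diff(prices) / prices[:-1]                   # simple returns

# Features: the 5 previous returns; target: direction of the current return.
k = 5
X = np.stack([rets[i - k:i] for i in range(k, len(rets))])
y = (rets[k:] > 0).astype(int)

split = 800                                            # chronological split
rf = RandomForestClassifier(n_estimators=200, random_state=0)
sgb = GradientBoostingClassifier(subsample=0.5, random_state=0)  # subsample<1 = "stochastic"
rf.fit(X[:split], y[:split])
sgb.fit(X[:split], y[:split])
print("RF accuracy: ", rf.score(X[split:], y[split:]))
print("SGB accuracy:", sgb.score(X[split:], y[split:]))
```

The chronological train/test split (rather than random shuffling) mirrors standard practice for direction-forecasting evaluations.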
8

Akhand, M. A. H., Pintu Chandra Shill, and Kazuyuki Murase. "Hybrid Ensemble Construction with Selected Neural Networks." Journal of Advanced Computational Intelligence and Intelligent Informatics 15, no. 6 (August 20, 2011): 652–61. http://dx.doi.org/10.20965/jaciii.2011.p0652.

Full text
Abstract:
A Neural Network Ensemble (NNE) is convenient for improving classification task performance. Among the remarkable number of methods based on different techniques for constructing NNEs, Negative Correlation Learning (NCL), bagging, and boosting are the most popular. None of them, however, could show better performance for all problems. To improve performance combining the complementary strengths of the individual methods, we propose two different ways to construct hybrid ensembles combining NCL with bagging and boosting. One produces a pool of predefined numbers of networks using standard NCL and bagging (or boosting) and then uses a genetic algorithm to select an optimal network subset for an NNE from the pool. Results of experiments confirmed that our proposals show consistently better performance with concise ensembles than conventional methods when tested using a suite of 25 benchmark problems.
9

Arrahimi, Ahmad Rusadi, Muhammad Khairi Ihsan, Dwi Kartini, Mohammad Reza Faisal, and Fatma Indriani. "Teknik Bagging Dan Boosting Pada Algoritma CART Untuk Klasifikasi Masa Studi Mahasiswa." Jurnal Sains dan Informatika 5, no. 1 (July 14, 2019): 21–30. http://dx.doi.org/10.34128/jsi.v5i1.171.

Full text
Abstract:
Undergraduate student data in academic information systems increases every year, and the collected data can be processed with data mining to gain new knowledge. The authors mine undergraduate student data to classify whether the study period is completed on time or not. The data are analyzed using CART with the bagging technique and CART with the boosting technique. In the classification results on 49 testing records, the CART algorithm with the bagging technique classified 13 records (26.531%) as on time and 36 records (73.469%) as not on time; with the boosting technique, 16 records (32.653%) were classified as on time and 33 records (67.347%) as not on time. The classification accuracy for the study period is 79.592% for the CART algorithm alone, 81.633% for CART with bagging, and 87.755% for CART with boosting. In this study, the CART algorithm with the boosting technique has the best accuracy.
10

Islam, M. M., Xin Yao, S. M. Shahriar Nirjon, M. A. Islam, and K. Murase. "Bagging and Boosting Negatively Correlated Neural Networks." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 38, no. 3 (June 2008): 771–84. http://dx.doi.org/10.1109/tsmcb.2008.922055.

Full text
More sources

Dissertations / Theses on the topic "Boosting and bagging"

1

Nascimento, Diego Silveira Costa. "Configuração heterogênea de ensembles de classificadores : investigação em bagging, boosting e multiboosting." Universidade de Fortaleza, 2009. http://dspace.unifor.br/handle/tede/83562.

Full text
Abstract:
This work presents a study on the characterization and evaluation of six new heterogeneous committee machine algorithms aimed at solving pattern classification problems. These algorithms extend models already found in the literature that have been successfully applied in different fields of research. Following two approaches, one evolutionary and one constructive, different machine learning algorithms (inductors) can be used to induce the components of the ensemble, which are trained by standard Bagging, Boosting, or MultiBoosting on the resampled data, with the aim of increasing the diversity of the resulting composite model. As a means of automatically configuring the different types of components, we adopt a customized genetic algorithm for the first approach and greedy search for the second. To validate the proposal, an empirical study was conducted involving 10 different types of inductors and 18 classification problems taken from the UCI repository. The accuracy values obtained by the evolutionary and constructive heterogeneous ensembles are analyzed against those produced by homogeneous ensembles composed of the 10 types of inductors, and the majority of the results evidence a performance gain for both approaches. Keywords: Machine learning, Committee machines, Bagging, Wagging, Boosting, MultiBoosting, Genetic algorithm.
2

Rubesam, Alexandre. "Estimação não parametrica aplicada a problemas de classificação via Bagging e Boosting." [s.n.], 2004. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306510.

Full text
Abstract:
Advisor: Ronaldo Dias
Master's thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: Some of the most modern and well succeeded classification methods are bagging, boosting and SVM (Support Vector Machines). Bagging combines classifiers fitted to bootstrap samples of the training data; boosting sequentially applies a classification algorithm to reweighted versions of the training data, increasing in each step the weights of the observations that were misclassified in the previous step, and SVM is a method that transforms the data in a nonlinear way to a space of greater dimension than that of the original data, and searches for a separating hyperplane in this transformed space. In this work we have studied the methods described above. We propose two classification methods: one of them is based on a nonparametric regression method via H-splines (also proposed here) and boosting, and the other is a modification of a boosting algorithm, based on the MARS algorithm. The methods were applied to both simulated and real data
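The boosting mechanism summarised in this abstract, sequentially reweighting the training set toward previously misclassified observations, can be illustrated from scratch with one-dimensional decision stumps. This toy AdaBoost is an illustration of the general scheme, not the H-splines-based method proposed in the thesis:

```python
# Toy AdaBoost on 1-D threshold stumps, illustrating the reweighting
# scheme described in the abstract (not the thesis's H-splines method).
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = np.where(x > 0.5, 1, -1)            # labels in {-1, +1}
y[rng.random(200) < 0.1] *= -1          # 10% label noise

w = np.full(200, 1 / 200)               # uniform initial example weights
stumps = []                             # list of (threshold, sign, alpha)
for _ in range(20):
    # pick the stump (threshold, orientation) with the lowest weighted error
    best = min(((t, s) for t in np.linspace(0, 1, 51) for s in (1, -1)),
               key=lambda ts: w[np.where(x > ts[0], ts[1], -ts[1]) != y].sum())
    t, s = best
    pred = np.where(x > t, s, -s)
    err = w[pred != y].sum()
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))   # stump's vote weight
    w *= np.exp(-alpha * y * pred)      # upweight misclassified points
    w /= w.sum()
    stumps.append((t, s, alpha))

agg = sum(a * np.where(x > t, s, -s) for t, s, a in stumps)
train_acc = (np.sign(agg) == y).mean()
print(f"boosted training accuracy: {train_acc:.3f}")
```

The weight-update line is the whole idea: examples the current stump gets wrong receive larger weight, so the next stump concentrates on them, and the final classifier is the alpha-weighted vote of all stumps.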
3

Boshoff, Lusilda. "Boosting, bagging and bragging applied to nonparametric regression : an empirical approach / Lusilda Boshoff." Thesis, North-West University, 2009. http://hdl.handle.net/10394/4337.

Full text
4

Llerena, Nils Ever Murrugarra. "Ensembles na classificação relacional." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-18102011-095113/.

Full text
Abstract:
In many fields, besides information about the objects or entities that compose them, there is also information about the relationships between those objects; co-authorship networks and Web pages are two examples. It is therefore natural to look for classification techniques that take this information into account. Among these techniques is so-called graph-based classification, which seeks to classify examples taking into account the relationships between them. This work presents methods to improve the performance of graph-based classifiers by using ensemble strategies. An ensemble classifier considers a set of classifiers whose individual predictions are combined in some way; the combined classifier usually performs better than its individual members. Three techniques were developed: the first for data originally in propositional format and transformed to a graph-based relational format, and the second and third for data originally in graph format. The first technique, inspired by the boosting algorithm, originated the Adaptive Graph-Based K-Nearest Neighbor (A-KNN). The second, inspired by the bagging algorithm, led to three approaches of Graph-Based Bagging (BG). Finally, the third, inspired by the Cross-Validated Committees algorithm, led to the Graph-Based Cross-Validated Committees (CVCG). The experiments were performed on 38 data sets, 22 in propositional format and 16 in relational format, evaluated with 10-fold stratified cross-validation; statistical differences between classifiers were determined using the method proposed by Demsar (2006). The three techniques improved or at least maintained the performance of the base classifiers, showing that ensembles applied to graph-based classifiers yield good results.
5

Barrow, Devon K. "Active model combination : an evaluation and extension of bagging and boosting for time series forecasting." Thesis, Lancaster University, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.659174.

Full text
Abstract:
Since the seminal work by Bates and Granger (1969), the practice of combining two or more models, rather than selecting the single best, has consistently been shown to lead to improvements in accuracy. In forecasting, model combination aims to find an optimal weighting given a set of precalculated forecasts. In contrast, machine learning includes methods which simultaneously optimise individual models and the weights used to combine them. Bagging and boosting combine the results of complementary and diverse models generated by actively perturbing, reweighting and resampling training data. Despite large gains in predictive accuracy in classification, limited research assesses their efficacy on time series data. This thesis provides a critical review of the combination literature and is the first literature survey of boosting for time series forecasting. The lack of rigorous empirical evidence on the forecast accuracy of bagging and boosting is identified as a major gap. To address this, a rigorous evaluation of bagging and boosting, adhering to recommendations of the forecasting literature, is performed using robust error measures on a large set of real time series exhibiting a representative set of features and dataset properties. Additionally, there is a narrow focus on marginal extensions of boosting, and limited evidence of any gains in accuracy. A novel framework is proposed to explore the impact of varying boosting meta-parameters and to evaluate the empirical accuracy of the resulting 96 boosting variants. The choice of base model and combination size are found to have the largest impact on forecast accuracy. Findings show that boosting overfits to noisy data, yet no existing study investigates this crucial issue. New noise-robust boosting methods are therefore developed and evaluated for time series forecast models. They are found to significantly improve accuracy over current boosting approaches and bagging, while neural network model averaging is found to perform best.
6

Dang, Yue. "A Comparative Study of Bagging and Boosting of Supervised and Unsupervised Classifiers For Outliers Detection." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1502475855457354.

Full text
7

Bourel, Mathias. "Agrégation de modèles en apprentissage statistique pour l'estimation de la densité et la classification multiclasse." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4076/document.

Full text
Abstract:
Ensemble methods in statistical learning combine several base learners built from the same data set in order to obtain a more stable predictor with better performance. Such methods have been extensively studied in the supervised context for regression and classification. In this work we consider the extension of these approaches to density estimation. We suggest several new algorithms in the same spirit as bagging and boosting, and show the efficiency of combined density estimators by extensive simulations. We also give theoretical results for one of our algorithms (Random Averaged Shifted Histogram) by means of asymptotic convergence under mild conditions. A second part is devoted to extensions of boosting algorithms to the multiclass case. We propose a new algorithm (Adaboost.BG) accounting for the margin of the base classifiers and show its efficiency by simulations, comparing it to the most widely used methods in this context on several machine learning benchmark datasets. Partial theoretical results are given for our algorithm, such as the exponential decrease of the learning-set misclassification error to zero.
8

Siqueira, Vânia Rosatti de. "Um modelo de credit scoring para microcrédito: uma inovação no mercado brasileiro." Universidade Presbiteriana Mackenzie, 2011. http://tede.mackenzie.br/jspui/handle/tede/546.

Full text
Abstract:
The Grameen Bank's experiences with microcredit operations have been imitated in various countries, mainly those related to the two great innovations in this market: the credit agent's role and the solidary group mechanism. The massification of operations and the reduction of their costs are vital for achieving economies of scale, as well as a greater appetite for MFIs to expand their activity in the microcredit market. In this context, the next great innovation in the microcredit market will be the introduction of credit scoring models into such operations, which will speed up the process, reduce the risks, and consequently the costs. Historical information about microcredit operations was used to build a credit model, making it possible to identify key variables that help distinguish good borrowers from bad ones. The results show that when machine learning techniques (bagging and boosting) are added to the traditional methods of credit analysis (discriminant analysis and logistic regression), an improvement in the performance of credit scoring models for microcredit can be achieved.
9

Lopes, Neilson Soares. "Modelos de classificação de risco de crédito para financiamentos imobiliários: regressão logística, análise discriminante, árvores de decisão, bagging e boosting." Universidade Presbiteriana Mackenzie, 2011. http://tede.mackenzie.br/jspui/handle/tede/527.

Full text
Abstract:
Fundo Mackenzie de Pesquisa
This study applied the traditional parametric techniques of discriminant analysis and logistic regression to the credit analysis of real estate financing transactions in which borrowers may or may not hold a payroll loan. The hit rates of these methods were compared with those of non-parametric techniques based on classification trees, and with the meta-learning methods bagging and boosting, which combine classifiers to improve algorithm accuracy. In a context of a high housing deficit, especially in Brazil, the financing of real estate can still be greatly encouraged, and sustainable growth in mortgage lending brings social as well as economic benefits. A house is, for most individuals, the largest source of expenditure and the most valuable asset they will have during their lifetime. The study concluded that the computational technique of decision trees is the most effective for predicting bad payers (94.2% correct), followed by bagging (80.7%) and boosting (or arcing, 75.2%); logistic regression and discriminant analysis showed the worst results (74.6% and 70.7%, respectively). For good payers, the decision tree also showed the best predictive power (75.8%), followed by discriminant analysis (75.3%) and boosting (72.9%); bagging and logistic regression showed the worst results (72.1% and 71.7%, respectively). The logistic regression shows that, for a borrower with a payroll loan, the odds of being a bad payer are 2.19 times higher than if the borrower did not hold such a loan. The presence of payroll loans among the operations of mortgage borrowers is also relevant in the discriminant analysis.
Neste estudo foram aplicadas as técnicas paramétricas tradicionais de análise discriminante e regressão logística para análise de crédito de operações de financiamento imobiliário. Foi comparada a taxa de acertos destes métodos com as técnicas não-paramétricas baseadas em árvores de classificação, além dos métodos de meta-aprendizagem BAGGING e BOOSTING, que combinam classificadores para obter uma melhor precisão nos algoritmos.Em um contexto de alto déficit de moradias, em especial no caso brasileiro, o financiamento de imóveis ainda pode ser bastante fomentado. Os impactos de um crescimento sustentável no crédito imobiliário trazem benefícios não só econômicos como sociais. A moradia é, para grande parte dos indivíduos, a maior fonte de despesas e o ativo mais valioso que terão durante sua vida. Ao final do estudo, concluiu-se que as técnicas computacionais de árvores de decisão se mostram mais efetivas para a predição de maus pagadores (94,2% de acerto), seguida do BAGGING (80,7%) e do BOOSTING (ou ARCING, 75,2%). Para a predição de maus pagadores em financiamentos imobiliários, as técnicas de regressão logística e análise discriminante apresentaram os piores resultados (74,6% e 70,7%, respectivamente). Para os bons pagadores, a árvore de decisão também apresentou o melhor poder preditivo (75,8%), seguida da análise discriminante (75,3%) e do BOOSTING (72,9%). Para os bons pagadores de financiamentos imobiliários, BAGGING e regressão logística apresentaram os piores resultados (72,1% e 71,7%, respectivamente).A regressão logística mostra que, para um tomador com crédito consignado, a chance se ser um mau pagador é 2,19 maior do que se este tomador não tivesse tal modalidade de empréstimo. A presença de crédito consignado entre as operações dos tomadores de financiamento imobiliário também apresenta relevância na análise discriminante.
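The comparison described in this abstract — a single decision tree against bagging and boosting ensembles and a logistic regression baseline, ranked by hit rate — can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic data, not the thesis's actual dataset, models, or tuning.

```python
# Hedged sketch of the abstract's methodology: compare hit rates of a
# decision tree, bagging, boosting (AdaBoost), and logistic regression.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a credit dataset (label 1 = good payer, 0 = bad payer).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "bagging": BaggingClassifier(n_estimators=50, random_state=0),
    "boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}

# Hit rate = accuracy on the held-out set, as in the study's comparison.
hit_rates = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
             for name, m in models.items()}
for name, rate in sorted(hit_rates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rate:.3f}")
```

The study's per-class breakdown (separate hit rates for good and bad payers) would correspond to the per-class recall values from a confusion matrix rather than overall accuracy.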
APA, Harvard, Vancouver, ISO, and other styles
10

Shire, Norah J. "Boosting, Bagging, and Classification Analysis to Improve Noninvasive Liver Fibrosis Prediction in HCV/HIV Coinfected Subjects: An Analysis of the AIDS Clinical Trials Group (ACTG) 5178." Cincinnati, Ohio : University of Cincinnati, 2007. http://rave.ohiolink.edu/etdc/view.cgi?acc_num=ucin1172860066.

Full text
Abstract:
Thesis (Ph.D.)--University of Cincinnati, 2007.
Advisor: Charles Ralph Buncher. Title from electronic thesis title page (viewed April 23, 2009). Keywords: Coinfection; Boosting and bagging; Classification analysis; HIV; Viral hepatitis. Includes abstract. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Boosting and bagging"

1

Kumar, Alok, and Mayank Jain. Ensemble Learning for AI Developers: Learn Bagging, Stacking, and Boosting Methods with Use Cases. Apress, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Boosting and bagging"

1

Bühlmann, Peter. "Bagging, Boosting and Ensemble Methods." In Handbook of Computational Statistics, 985–1022. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21551-3_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Frochte, Jörg. "Ensemble Learning mittels Bagging und Boosting." In Maschinelles Lernen, 329–51. München: Carl Hanser Verlag GmbH & Co. KG, 2020. http://dx.doi.org/10.3139/9783446463554.010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Deshmukh, Jyoti, Mukul Jangid, Shreeshail Gupte, Siddhartha Ghosh, and Shubham Ingle. "Ensemble Method Combination: Bagging and Boosting." In Algorithms for Intelligent Systems, 399–409. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-3242-9_38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Baskin, Igor I., Gilles Marcou, Dragos Horvath, and Alexandre Varnek. "Bagging and Boosting of Classification Models." In Tutorials in Chemoinformatics, 241–47. Chichester, UK: John Wiley & Sons, Ltd, 2017. http://dx.doi.org/10.1002/9781119161110.ch15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Baskin, Igor I., Gilles Marcou, Dragos Horvath, and Alexandre Varnek. "Bagging and Boosting of Regression Models." In Tutorials in Chemoinformatics, 249–55. Chichester, UK: John Wiley & Sons, Ltd, 2017. http://dx.doi.org/10.1002/9781119161110.ch16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Yi, Will N. Browne, and Bing Xue. "Adapting Bagging and Boosting to Learning Classifier Systems." In Applications of Evolutionary Computation, 405–20. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77538-8_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lomte, Santosh S., and Sanket G. Torambekar. "Classifier Ensembling: Dataset Learning Using Bagging and Boosting." In Lecture Notes in Networks and Systems, 83–93. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-7150-9_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kotsianti, S. B., and D. Kanellopoulos. "Combining Bagging, Boosting and Dagging for Classification Problems." In Lecture Notes in Computer Science, 493–500. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-74827-4_62.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Tsymbal, Alexey, and Seppo Puuronen. "Bagging and Boosting with Dynamic Integration of Classifiers." In Principles of Data Mining and Knowledge Discovery, 116–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-45372-5_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Richter, Stefan. "Regressions- und Klassifikationsbäume; Bagging, Boosting und Random Forests." In Statistisches und maschinelles Lernen, 163–220. Berlin, Heidelberg: Springer Berlin Heidelberg, 2019. http://dx.doi.org/10.1007/978-3-662-59354-7_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Boosting and bagging"

1

Jain, Kavita, and Sushil Kulkarni. "Incorporating Bagging into Boosting." In 2012 12th International Conference on Hybrid Intelligent Systems (HIS). IEEE, 2012. http://dx.doi.org/10.1109/his.2012.6421375.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rosset, Saharon. "Robust boosting and its relation to bagging." In Proceeding of the eleventh ACM SIGKDD international conference. New York, New York, USA: ACM Press, 2005. http://dx.doi.org/10.1145/1081870.1081900.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Herouane, Omar, Lahcen Moumoun, and Taoufiq Gadi. "Using bagging and boosting algorithms for 3D object labeling." In 2016 7th International Conference on Information and Communication Systems (ICICS). IEEE, 2016. http://dx.doi.org/10.1109/iacs.2016.7476070.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Atir, Mohd, and Mark Haydoutov. "Tree-Based Bagging and Boosting Algorithms for Proactive Invoice Management." In 2019 International Conference on Advances in the Emerging Computing Technologies (AECT). IEEE, 2020. http://dx.doi.org/10.1109/aect47998.2020.9194200.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Spanakis, Gerasimos, Gerhard Weiss, and Anne Roefs. "Enhancing Classification of Ecological Momentary Assessment Data Using Bagging and Boosting." In 2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2016. http://dx.doi.org/10.1109/ictai.2016.0066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Singhal, Yash, Ayushi Jain, Shrey Batra, Yash Varshney, and Megha Rathi. "Review of Bagging and Boosting Classification Performance on Unbalanced Binary Classification." In 2018 IEEE 8th International Advance Computing Conference (IACC). IEEE, 2018. http://dx.doi.org/10.1109/iadcc.2018.8692138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Oza, Nikunj C., and Stuart Russell. "Experimental comparisons of online and batch versions of bagging and boosting." In the seventh ACM SIGKDD international conference. New York, New York, USA: ACM Press, 2001. http://dx.doi.org/10.1145/502512.502565.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Xie, Yuan-Cheng, and Jing-Yu Yang. "Using Boosting and Clustering to Prune Bagging and Detect Noisy Data." In 2009 Chinese Conference on Pattern Recognition (CCPR). IEEE, 2009. http://dx.doi.org/10.1109/ccpr.2009.5344126.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Xinsheng, Xinbo Gao, and Minghu Wang. "MCs detection approach using Bagging and Boosting based twin support vector machine." In 2009 IEEE International Conference on Systems, Man and Cybernetics - SMC. IEEE, 2009. http://dx.doi.org/10.1109/icsmc.2009.5346375.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Winata, Genta Indra, and Masayu Leylia Khodra. "Handling imbalanced dataset in multi-label text categorization using Bagging and Adaptive Boosting." In 2015 International Conference on Electrical Engineering and Informatics (ICEEI). IEEE, 2015. http://dx.doi.org/10.1109/iceei.2015.7352552.

Full text
APA, Harvard, Vancouver, ISO, and other styles