A selection of scholarly literature on the topic "Forêts de survie aléatoires"
Cite sources in APA, MLA, Chicago, Harvard, and other styles
Browse lists of up-to-date articles, books, dissertations, theses, and other scholarly sources on the topic "Forêts de survie aléatoires".
Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, when these are available in the metadata.
Journal articles on the topic "Forêts de survie aléatoires":
Willekens, Frans. "La microsimulation dans les projections de population." Articles 40, no. 2 (July 30, 2012): 267–97. http://dx.doi.org/10.7202/1011542ar.
Biard, L., R. Porcher, and M. Resch-Rigon. "Test de permutation pour effets aléatoires dans les modèles de survie." Revue d'Épidémiologie et de Santé Publique 62 (February 2014): S42–S43. http://dx.doi.org/10.1016/j.respe.2013.12.037.
LAYELMAM, Mohammed. "Production des cartes de probabilité de présence des criquets pèlerins sur le territoire marocain à partir des données de télédétection." Revue Française de Photogrammétrie et de Télédétection, no. 216 (April 19, 2018): 49–59. http://dx.doi.org/10.52638/rfpt.2018.324.
Matsaguim Nguimdo, Cédric Aurélien, and Emmanuel D. Tiomo. "FORET D'ARBRES ALEATOIRES ET CLASSIFICATION D'IMAGES SATELLITES : RELATION ENTRE LA PRECISION DU MODELE D'ENTRAINEMENT ET LA PRECISION GLOBALE DE LA CLASSIFICATION." Revue Française de Photogrammétrie et de Télédétection, no. 222 (November 26, 2020): 3–14. http://dx.doi.org/10.52638/rfpt.2020.477.
Haurez, Barbara, Yves Brostaux, Charles-Albert Petre, and Jean-Louis Doucet. "IS THE WESTERN LOWLAND GORILLA A GOOD GARDENER? EVIDENCE FOR DIRECTED DISPERSAL IN SOUTHEAST GABON." BOIS & FORETS DES TROPIQUES 324, no. 324 (March 17, 2015): 39. http://dx.doi.org/10.19182/bft2015.324.a31265.
Dekhili, M. "Parametres phenotypiques et genetiques de la reproduction de la Brebis Ouled-Djellal (Algérie)." Archivos de Zootecnia 63, no. 242 (April 9, 2013): 269–75. http://dx.doi.org/10.21071/az.v63i242.543.
Moore, Jean-David, Rock Ouimet, and Patrick Bolhen. "Effet du chaulage sur la survie et la reproduction de 3 espèces de vers de terre exotiques potentiellement envahissantes dans les érablières du Québec." Le Naturaliste canadien 139, no. 2 (May 25, 2015): 14–19. http://dx.doi.org/10.7202/1030817ar.
Beguet, Benoît, Nesrine Chehata, Samia Boukir, and Dominique Guyon. "Quantification et cartographie de la structure forestière à partir de la texture des images Pléiades." Revue Française de Photogrammétrie et de Télédétection, no. 208 (September 5, 2014): 83–88. http://dx.doi.org/10.52638/rfpt.2014.126.
Chehata, Nesrine, Karim Ghariani, Arnaud Le Bris, and Philippe Lagacherie. "Apport des images pléiades pour la délimitation des parcelles agricoles à grande échelle." Revue Française de Photogrammétrie et de Télédétection, no. 209 (January 29, 2015): 165–71. http://dx.doi.org/10.52638/rfpt.2015.220.
Le Bris, Arnaud, Cyril Wendl, Nesrine Chehata, Anne Puissant, and Tristan Postadjian. "Fusion tardive d'images SPOT-6/7 et de données multi-temporelles Sentinel-2 pour la détection de la tâche urbaine." Revue Française de Photogrammétrie et de Télédétection, no. 217-218 (September 21, 2018): 87–97. http://dx.doi.org/10.52638/rfpt.2018.415.
Dissertations and theses on the topic "Forêts de survie aléatoires":
Morvan, Ludivine. "Prédiction de la progression du myélome multiple par imagerie TEP : Adaptation des forêts de survie aléatoires et de réseaux de neurones convolutionnels." Thesis, Ecole centrale de Nantes, 2021. http://www.theses.fr/2021ECDN0045.
The aim of this work is to provide a model for survival prediction and biomarker identification in the context of multiple myeloma (MM) using PET (Positron Emission Tomography) imaging and clinical data. This PhD is divided into two parts: the first provides a model based on Random Survival Forests (RSF); the second adapts deep learning to survival analysis and to our data. The main contributions are the following: 1) a model based on RSF and PET images that predicts a risk group for multiple myeloma patients; 2) identification of biomarkers using this model; 3) demonstration of the value of PET radiomics; 4) an extension of the state of the art in adapting deep learning to small databases and small images; 5) a study of the cost functions used in survival analysis. In addition, we are, to our knowledge, the first to investigate the use of RSFs in the context of MM and PET images, to use self-supervised pre-training with PET images for a survival task, to adapt the triplet cost function to survival, and to fit a convolutional neural network to MM survival from PET lesions.
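Random survival forests grow their trees with a survival-specific split rule: by default, the log-rank statistic, which measures how differently the two candidate daughter nodes experience events over time. A minimal pure-Python sketch of that statistic (an illustration under our own simplifications; the variable names and tie handling are ours, not taken from the thesis):

```python
import math

def logrank_split_score(times, events, x, cut):
    """Standardized log-rank statistic for the candidate split x <= cut
    vs. x > cut. events[i] is 1 if the event was observed, 0 if censored."""
    data = list(zip(times, events, x))
    event_times = sorted({t for t, e, _ in data if e == 1})
    num = var = 0.0
    for t in event_times:
        at_risk = [(ti, ei, xi) for ti, ei, xi in data if ti >= t]
        Y = len(at_risk)                                   # total at risk at t
        Y1 = sum(1 for _, _, xi in at_risk if xi <= cut)   # at risk in left node
        d = sum(1 for ti, ei, _ in at_risk if ti == t and ei == 1)
        d1 = sum(1 for ti, ei, xi in at_risk if ti == t and ei == 1 and xi <= cut)
        if Y > 1:
            num += d1 - Y1 * d / Y                         # observed - expected
            var += (Y1 / Y) * (1 - Y1 / Y) * ((Y - d) / (Y - 1)) * d
    return abs(num) / math.sqrt(var) if var > 0 else 0.0
```

A split that separates short from long survivors scores higher than an uninformative one, so the tree grows toward prognostically homogeneous nodes.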
Duroux, Roxane. "Inférence pour les modèles statistiques mal spécifiés, application à une étude sur les facteurs pronostiques dans le cancer du sein." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066224/document.
The thesis focuses on inference for misspecified statistical models. Every result finds an application in a study of prognostic factors in breast cancer, thanks to data collected by the Institut Curie. We first consider non-proportional hazards models and make use of the marginal survival function of the failure time. This model allows a time-varying regression coefficient and therefore generalizes the proportional hazards model. Second, we study step regression models: we propose an inference method for the changepoint of a two-step regression model and an estimation method for a multiple-step regression model. We then study the influence of the subsampling rate on the performance of median forests and extend the results to random survival forests through an application. Finally, we present a new dose-finding method for phase I clinical trials in the case of partial ordering.
Etourneau, Thomas. "Les forêts Lyman alpha du relevé eBOSS : comprendre les fonctions de corrélation et les systématiques." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASP029.
This PhD thesis is part of the eBOSS and DESI projects. Among other tracers, these projects use the Lyman-α (Lyα) absorption to probe the matter distribution in the universe and to measure the baryon acoustic oscillation (BAO) scale. Measuring the ratio of the BAO scale to the sound horizon constrains the expansion of the universe and thus the ΛCDM model, the standard model of cosmology. This thesis presents the development of mock data sets used to check the BAO analyses carried out by the Lyα group within the eBOSS and DESI collaborations. The mocks rely on Gaussian random fields (GRFs) to generate a density field δ, from which quasar (QSO) positions are drawn. A line of sight is constructed from each quasar, the density field δ is interpolated along it, and the fluctuating Gunn-Peterson approximation (FGPA) converts the interpolated density into the optical depth τ and then into the Lyα absorption. A program developed by the DESI community adds a continuum to each Lyα forest in order to produce synthetic quasar spectra. The mocks presented in the manuscript provide a survey of quasars whose Lyα forests have the correct Lyα×Lyα auto-correlation and Lyα×QSO cross-correlation, as well as the correct QSO×QSO and HCD×HCD (High Column Density systems) auto-correlation functions. The study of these mocks shows that the BAO analysis run on the full Lyα eBOSS data set produces an unbiased measurement of the BAO parameters α∥ and α⊥. In addition, the analysis of the model used to fit the correlation functions shows that the shape of the Lyα×Lyα auto-correlation, which is linked to the bias bLyα and the redshift-space distortion (RSD) parameter βLyα, is understood at the 80% level. The systematics affecting the measurement of the Lyα parameters (bLyα and βLyα) come from two effects. The first originates from the distortion matrix, which does not capture all the distortions produced by the quasar continuum fitting procedure. The second is linked to the modelling of HCDs: the modelling of these strong absorbers is not perfect and affects the measurement of the Lyα parameters, especially the RSD parameter βLyα. The analysis of these mocks thus validates the control of systematics in the BAO analyses done with the Lyα. However, a better understanding of the measurement of the Lyα parameters is required before the Lyα, meaning the combination of the Lyα×Lyα auto-correlation and the Lyα×QSO cross-correlation, can be used for an RSD analysis.
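The FGPA step described in the abstract above is a simple pointwise transform of the density field. A toy numpy sketch of one mock line of sight (the smoothing kernel and the constants a and b are illustrative choices of ours, not the values used in the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a Gaussian random field along one line of sight:
# correlated noise obtained by smoothing white noise.
white = rng.standard_normal(2048)
kernel = np.exp(-0.5 * (np.arange(-25, 26) / 5.0) ** 2)
delta = np.convolve(white, kernel / kernel.sum(), mode="same")
delta /= delta.std()  # normalize to unit variance

# Fluctuating Gunn-Peterson approximation: the optical depth tau rises
# exponentially with the density contrast, and the transmitted flux
# of the Lya forest is exp(-tau).
a, b = 0.3, 1.6  # illustrative amplitude and slope
tau = a * np.exp(b * delta)
flux = np.exp(-tau)
```

Overdense pixels (large δ) absorb strongly and appear as deep troughs in the flux, reproducing qualitatively the appearance of a Lyα forest.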
Zirakiza, Brice. "Forêts Aléatoires PAC-Bayésiennes." Thesis, Université Laval, 2013. http://www.theses.ulaval.ca/2013/29815/29815.pdf.
In this master's thesis, we first present a state-of-the-art algorithm called Random Forests, introduced by Léo Breiman. This algorithm constructs a uniformly weighted majority vote of decision trees built with the CART algorithm without pruning. We then introduce an algorithm that we call SORF. The SORF algorithm is based on the PAC-Bayes approach which, in order to minimize the risk of the Bayes classifier, minimizes the risk of the Gibbs classifier with a regularizer; the risk of the Gibbs classifier is a convex upper bound on the risk of the Bayes classifier. To find an optimal distribution, the SORF algorithm reduces to a simple quadratic program minimizing the quadratic risk of the Gibbs classifier in order to seek a distribution Q over the base classifiers, which are the trees of the forest. Empirical results show that SORF is generally almost as accurate as Random Forests, and in some cases can even outperform them.
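The uniformly weighted majority vote at the core of Breiman's forest can be sketched in a few lines of Python; here decision stumps trained on bootstrap samples stand in for unpruned CART trees (a deliberately simplified toy of ours, not the SORF algorithm):

```python
import random
from collections import Counter

def train_stump(X, y):
    """Exhaustively pick the (feature, threshold, orientation) stump
    with the fewest training errors."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            for left, right in ((0, 1), (1, 0)):
                pred = [left if row[j] <= t else right for row in X]
                err = sum(p != yi for p, yi in zip(pred, y))
                if best is None or err < best[0]:
                    best = (err, j, t, left, right)
    return best[1:]

def train_forest(X, y, n_trees=25, seed=0):
    """Train each stump on a bootstrap sample of the data."""
    rng = random.Random(seed)
    n = len(X)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]
        forest.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def predict(forest, row):
    """Uniformly weighted majority vote over the trees."""
    votes = Counter(left if row[j] <= t else right for j, t, left, right in forest)
    return votes.most_common(1)[0][0]
```

SORF replaces the uniform weights with a distribution Q learned by a quadratic program; the voting mechanism itself is unchanged.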
Scornet, Erwan. "Apprentissage et forêts aléatoires." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066533/document.
This thesis is devoted to a nonparametric estimation method called random forests, introduced by Breiman in 2001. Extensively used in a variety of areas, random forests exhibit good empirical performance and can handle massive data sets. However, the mathematical forces driving the algorithm remain largely unknown. After reviewing the theoretical literature, we focus on the link between infinite forests (analyzed in theory) and finite forests (used in practice), aiming to narrow the gap between theory and practice. In particular, we propose a way to select the number of trees so that the errors of finite and infinite forests are similar. We also study quantile forests, a type of algorithm close in spirit to Breiman's forests. In this context, we prove the benefit of tree aggregation: while each tree of a quantile forest is not consistent, with a proper subsampling step the forest is. Next, we show the connection between forests and certain kernel estimates, which can be made explicit in some cases, and establish upper bounds on the rate of convergence of these kernel estimates. We then prove two theorems on the consistency of both pruned and unpruned Breiman forests, stressing the importance of subsampling for the consistency of the unpruned forests. Finally, we present the results of a DREAM challenge whose goal was to predict the toxicity of several compounds for several patients based on their genetic profiles.
Zirakiza, Brice. "Forêts Aléatoires PAC-Bayésiennes." Master's thesis, Université Laval, 2013. http://hdl.handle.net/20.500.11794/24036.
In this master's thesis, we first present a state-of-the-art algorithm called Random Forests, introduced by Léo Breiman. This algorithm constructs a uniformly weighted majority vote of decision trees built with the CART algorithm without pruning. We then introduce an algorithm that we call SORF. The SORF algorithm is based on the PAC-Bayes approach which, in order to minimize the risk of the Bayes classifier, minimizes the risk of the Gibbs classifier with a regularizer; the risk of the Gibbs classifier is a convex upper bound on the risk of the Bayes classifier. To find an optimal distribution, the SORF algorithm reduces to a simple quadratic program minimizing the quadratic risk of the Gibbs classifier in order to seek a distribution Q over the base classifiers, which are the trees of the forest. Empirical results show that SORF is generally almost as accurate as Random Forests, and in some cases can even outperform them.
Le, Faou Yohann. "Contributions à la modélisation des données de durée en présence de censure : application à l'étude des résiliations de contrats d'assurance santé." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS527.
In this thesis, we study duration models in the context of the analysis of contract termination times in health insurance. Identified as early as the 17th century in the original work of Graunt J. (1662) on mortality, the bias induced by the censoring of the duration data observed in this context must be corrected by the statistical models used. Through the problem of measuring the dependence between successive durations and the problem of predicting contract termination times in insurance, we study the theoretical and practical properties of different estimators that rely on a proper weighting of the observations (the so-called IPCW method) designed to compensate for this bias. The application of these methods to customer value estimation is also carefully discussed.
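The IPCW idea is to give each uncensored observation a weight equal to the inverse of the estimated probability of still being uncensored at its event time, so that the weighted sample compensates for the observations lost to censoring. A compact sketch ignoring ties (our own simplification, not one of the estimators studied in the thesis):

```python
def ipcw_weights(times, events):
    """Inverse-probability-of-censoring weights.
    events[i] is 1 if the event was observed, 0 if censored.
    Censored observations get weight 0; an observed event at time t
    gets 1 / G(t-), where G is the Kaplan-Meier estimate of the
    censoring survival function."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    G = 1.0              # estimated P(censoring time > t), just before t
    at_risk = n
    weights = [0.0] * n
    for i in order:
        if events[i] == 1:
            weights[i] = 1.0 / G
        else:
            G *= (at_risk - 1) / at_risk  # censoring acts as the "event" for G
        at_risk -= 1
    return weights
```

With no censoring every weight is 1 and a weighted estimator reduces to the usual one; the heavier weights on late events make up for the censored observations that never reach theirs.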
Genuer, Robin. "Forêts aléatoires : aspects théoriques, sélection de variables et applications." Phd thesis, Université Paris Sud - Paris XI, 2010. http://tel.archives-ouvertes.fr/tel-00550989.
Poterie, Audrey. "Arbres de décision et forêts aléatoires pour variables groupées." Thesis, Rennes, INSA, 2018. http://www.theses.fr/2018ISAR0011/document.
In many supervised learning problems, the inputs have a known and/or natural group structure. In this context, a prediction rule that takes the group structure into account can be more relevant than an approach based only on the individual variables, both for prediction accuracy and for interpretation. The goal of this thesis is to develop tree-based methods adapted to grouped variables. We propose two new tree-based approaches that use the group structure to build decision trees. The first builds binary decision trees for classification problems: a split of a node is defined by the choice of both a splitting group and a linear combination of the inputs belonging to that group. The second, which can be used for prediction problems in both regression and classification, builds a non-binary tree in which each split is itself a binary tree. Both approaches build a maximal tree which is then pruned; to this end, we propose two pruning strategies, one of which generalizes the minimal cost-complexity pruning algorithm. Since decision trees are known to be unstable, we also introduce a random forest method that handles groups of inputs. Beyond prediction, these new methods can also be used for group variable selection, thanks to the introduction of measures of group importance. This thesis is supplemented by an independent part on the unsupervised framework, in which we introduce a new clustering algorithm and, under classical regularity and sparsity assumptions, obtain the rate of convergence of its clustering risk.
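The first approach splits a node by picking a group and a linear combination of that group's variables. A toy numpy sketch of such a group-wise split, using the difference of class means as a crude stand-in for the linear combination the thesis actually optimizes (all names and choices here are ours):

```python
import numpy as np

def best_group_split(X, y, groups):
    """For each group of columns, project the group's variables onto the
    difference of the two class means, threshold at the midpoint, and
    keep the group with the smallest training error."""
    best = None
    for g, cols in enumerate(groups):
        Xg = X[:, cols]
        w = Xg[y == 1].mean(axis=0) - Xg[y == 0].mean(axis=0)
        norm = np.linalg.norm(w)
        if norm == 0.0:
            continue
        w = w / norm
        z = Xg @ w                                  # 1-D projection of the group
        thr = (z[y == 0].mean() + z[y == 1].mean()) / 2
        err = float(np.mean((z > thr).astype(int) != y))
        err = min(err, 1.0 - err)                   # allow either orientation
        if best is None or err < best[0]:
            best = (err, g, w, thr)
    return best  # (training error, group index, direction, threshold)
```

A node is then split on the sign of z - thr for the winning group, so an informative group is preferred over one that is pure noise.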
Ciss, Saïp. "Forêts uniformément aléatoires et détection des irrégularités aux cotisations sociales." Thesis, Paris 10, 2014. http://www.theses.fr/2014PA100063/document.
We present in this thesis an application of machine learning to irregularities in social contributions. In France, these are all the contributions due by employees and companies to the Sécurité sociale, the French social welfare system (replacement income in case of unemployment, health insurance, pensions, ...). Social contributions are paid by companies to the URSSAF network, which is in charge of recovering them. Our main goal was to build a model able to detect irregularities with a low false-positive rate. We first present the URSSAF, how irregularities can arise, how they can be handled, and what data are available. We then describe a new machine learning algorithm we have developed for this purpose, "random uniform forests" (and its R package "randomUniformForest"), a variant of Breiman's "Random Forests" (tm) that shares the same principles but works in a different way. We present the theoretical background of the model and provide several examples. We then use it to show, when irregularities are fraud, how the financial situation of firms can affect their propensity to fraud. In the last chapter, we provide a full evaluation of the social contribution declarations of all firms in Île-de-France for the year 2013, using the model to predict whether declarations present irregularities or not.
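A defining ingredient of random uniform forests is that cut points are drawn uniformly at random over the observed range of a feature rather than optimized. A toy Python reading of that idea with single-split trees (the actual algorithm in the randomUniformForest package is considerably more elaborate; everything here is our simplification):

```python
import random
from collections import Counter

def majority(labels, default):
    """Majority class of a (possibly empty) list of labels."""
    return Counter(labels).most_common(1)[0][0] if labels else default

def uniform_random_stump(X, y, rng):
    """One depth-1 tree: random feature, cut point drawn uniformly over
    its observed range, leaves labeled by majority class."""
    j = rng.randrange(len(X[0]))
    lo = min(row[j] for row in X)
    hi = max(row[j] for row in X)
    t = rng.uniform(lo, hi)              # the uniformly random cut point
    left = [yi for row, yi in zip(X, y) if row[j] <= t]
    right = [yi for row, yi in zip(X, y) if row[j] > t]
    overall = majority(y, 0)
    return j, t, majority(left, overall), majority(right, overall)

def forest_predict(stumps, row):
    """Uniformly weighted majority vote of the stumps."""
    votes = Counter(ll if row[j] <= t else rl for j, t, ll, rl in stumps)
    return votes.most_common(1)[0][0]
```

Individually weak, such randomized trees still yield a reliable vote once aggregated, which is the principle the thesis builds on.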