Dissertations / Theses on the topic 'Méthodes Bayésiennes'
Consult the top 50 dissertations / theses for your research on the topic 'Méthodes Bayésiennes.'
Bazot, Cécile. "Méthodes bayésiennes pour l'analyse génétique." Phd thesis, Toulouse, INPT, 2013. http://oatao.univ-toulouse.fr/10573/1/bazot.pdf.
Eches, Olivier. "Méthodes Bayésiennes pour le démélange d'images hyperspectrales." Thesis, Toulouse, INPT, 2010. http://www.theses.fr/2010INPT0067/document.
Hyperspectral imagery has been widely used in remote sensing for various civilian and military applications. A hyperspectral image is acquired when the same scene is observed at different wavelengths. Consequently, each pixel of such an image is represented as a vector of measurements (reflectances) called a spectrum. One major step in the analysis of hyperspectral data consists of identifying the macroscopic components (signatures) present in the sensed scene and the corresponding proportions (concentrations). The latest techniques developed for this analysis do not model these images properly. Indeed, these techniques usually assume the existence of pure pixels in the image, i.e. pixels containing a single pure material. However, a pixel is rarely composed of spectrally pure elements distinct from each other. Thus, such models can lead to weak estimation performance. The aim of this thesis is to propose new estimation algorithms with the help of a model that is better suited to the intrinsic properties of hyperspectral images. The unknown model parameters are then inferred within a Bayesian framework. The use of Markov chain Monte Carlo (MCMC) methods allows one to overcome the difficulties related to the computational complexity of these inference methods.
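As an aside for readers new to the topic, the linear mixing model behind such unmixing problems writes each pixel spectrum as a convex combination of endmember signatures plus noise. A minimal sketch of Bayesian abundance estimation by Metropolis-Hastings, not taken from the thesis (the endmember matrix, noise level and Dirichlet proposal below are invented for illustration):

```python
import numpy as np
from scipy.stats import dirichlet

rng = np.random.default_rng(0)

# Toy endmember matrix M (bands x endmembers) and a synthetic mixed pixel.
M = rng.uniform(0.0, 1.0, size=(50, 3))
a_true = np.array([0.6, 0.3, 0.1])
sigma = 0.01
y = M @ a_true + sigma * rng.normal(size=50)

def log_post(a):
    # Gaussian likelihood; uniform prior on the abundance simplex.
    r = y - M @ a
    return -0.5 * np.sum(r**2) / sigma**2

kappa = 200.0  # concentration of the Dirichlet random-walk proposal
a = np.ones(3) / 3
lp = log_post(a)
samples = []
for it in range(5000):
    alpha_fwd = kappa * a + 1.0
    prop = dirichlet.rvs(alpha_fwd, random_state=rng)[0]
    alpha_bwd = kappa * prop + 1.0
    lp_prop = log_post(prop)
    log_ratio = (lp_prop - lp
                 + dirichlet.logpdf(a, alpha_bwd)
                 - dirichlet.logpdf(prop, alpha_fwd))  # Hastings correction
    if np.log(rng.uniform()) < log_ratio:
        a, lp = prop, lp_prop
    samples.append(a)
print("posterior mean abundances:", np.mean(samples[1000:], axis=0))
```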
Launay, Tristan. "Méthodes bayésiennes pour la prévision de consommation d'électricité." Phd thesis, Université de Nantes, 2012. http://tel.archives-ouvertes.fr/tel-00766237.
Grazian, Clara. "Contributions aux méthodes bayésiennes approchées pour modèles complexes." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLED001.
Recently, the great complexity of modern applications, for instance in genetics, computer science, finance, climatic science, etc., has led to the proposal of new models which may realistically describe reality. In these cases, classical MCMC methods fail to approximate the posterior distribution, because they are too slow to investigate the full parameter space. New algorithms have been proposed to handle these situations, where the likelihood function is unavailable. We investigate many features of complex models: how to eliminate the nuisance parameters from the analysis and make inference on key quantities of interest, in both Bayesian and non-Bayesian settings, and how to build a reference prior.
Launay, Tristan. "Méthodes Bayésiennes pour la prévision de consommation d’électricité." Nantes, 2012. http://www.theses.fr/2012NANT2074.
In this manuscript, we develop Bayesian statistical tools to forecast the French electricity load. We first prove the asymptotic normality of the posterior distribution (Bernstein-von Mises theorem) for the piecewise linear regression model used to describe the heating effect, and the consistency of the Bayes estimator. We then build a hierarchical informative prior to help improve the quality of the predictions for a high-dimensional model with a short dataset. We show, with two examples involving the non-metered EDF customers, that the proposed method allows a more robust estimation of the model with regard to the lack of data. Finally, we study a new nonlinear dynamic model to predict the electricity load online. We develop a particle filter algorithm to estimate the model and compare the predictions obtained with operational forecasts from EDF.
Usureau, Emmanuel. "Application des méthodes bayésiennes pour l'optimisation des coûts de développement des produits nouveaux." Angers, 2001. http://www.theses.fr/2001ANGE0017.
Salomond, Jean-Bernard. "Propriétés fréquentistes des méthodes Bayésiennes semi-paramétriques et non paramétriques." Thesis, Paris 9, 2014. http://www.theses.fr/2014PA090034/document.
Research on Bayesian nonparametric methods has received growing interest for the past twenty years, especially since the development of powerful simulation algorithms which make the implementation of complex Bayesian methods possible. From that point on, it is necessary to understand, from a theoretical point of view, the behaviour of Bayesian nonparametric methods. This thesis presents various contributions to the study of frequentist properties of Bayesian nonparametric procedures. Although studying these methods from an asymptotic angle may seem restrictive, it allows one to grasp the operation of the Bayesian machinery in extremely complex models. Furthermore, this approach is particularly useful for detecting the characteristics of the prior that strongly influence the inference. Many general results have been proposed in the literature in this setting; however, the more complex and realistic the models, the further they depart from the usual assumptions. Thus many models that are of great interest in practice are not covered by the general theory. While the study of a model that does not fall under the general theory has an interest of its own, it also allows for a better understanding of the behaviour of Bayesian nonparametric methods in a general setting.
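For reference, the frequentist property most often studied in this literature is the posterior contraction rate: a sequence of radii eps_n is a contraction rate at the true parameter theta_0, for a metric d, when the posterior mass outside a shrinking ball around theta_0 vanishes. In standard notation (this is the textbook definition, not a result specific to the thesis):

```latex
% Posterior mass outside a ball of radius M * eps_n around the truth
% vanishes in probability under the true distribution P_{theta_0}.
\Pi\bigl(\theta : d(\theta,\theta_0) > M\,\varepsilon_n \,\big|\, X_1,\dots,X_n\bigr)
\;\xrightarrow{\;P_{\theta_0}\;}\; 0 .
```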
Mariani, Vincenzo. "Méthodes bayésiennes et d'apprentissage supervisé pour la construction d'orbites planétaires." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ5059.
In this thesis we explore Bayesian and supervised learning methods applied to planetary orbitography. First, we introduce the general problem of planetary orbit construction and the mathematical toolkit used. We describe the Metropolis-Hastings (MH) algorithm and Gaussian process regression (GPR) used to obtain, via Markov chain Monte Carlo (MCMC), the posterior distribution of a parameter involved in planetary dynamics. Additionally, the boosted decision trees (BDT) technique is explained; it is used to rank a given set of parameters involved in the planetary orbitography fit. We present the results of using MCMC and GPR combined: the GPR provides an approximation of the chi^2 to be used in the MH algorithm. We show an improvement of the detection limit of the mass of the graviton m_g using solar system dynamics, starting from a posterior probability distribution for m_g. From the posterior obtained, we can provide an upper bound of m_g < 1.01 × 10^{-24} eVc^{-2} at 99.7% C.L., improving this limit by one order of magnitude. Additionally, we have shown, with a change of prior (from uniform to half-Laplace), that no significant information can be detected for masses smaller than this limit using the current observational datasets. These results have been published in Mariani et al. (2023). Using the same methodology, we also provide the latest constraint from planetary orbitography on the Brans-Dicke theory of gravity. In this context, we find that |1 - gamma| < 1.92 × 10^{-5} at the 66.7% C.L., while the previous best constraints on gamma, from ranging data of the Cassini spacecraft, led to |1 - gamma| < 4.4 × 10^{-5} at 66.7% C.L. Moreover, we report marginal evidence suggesting that the effect of a violation of the strong equivalence principle, if any, might be detected with the current accuracy of planetary orbitography. These results have been published in Mariani et al. (2024). We also used the BDT to rank by relative importance the 343 main-belt asteroid masses currently used in the planetary ephemerides fit. We showed how to use a decision tree and investigated one possibility for constructing a training set. These preliminary results are promising since they are mutually consistent as well as consistent with previous works. The validity of the ranking has been confirmed by checking its impact on planetary orbit construction using the full set of available observations, cumulatively removing the least important asteroids from the main-belt modeling and fitting the remaining parameters. The results presented validate the approach used. Finally, we propose new approaches to continue the investigations started in the current work, generalising and extending the techniques already presented.
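The surrogate idea named in this abstract (a GPR approximation of an expensive chi^2 driving a Metropolis-Hastings chain) fits in a few lines. The toy objective, pilot design and kernel below are invented for illustration, not drawn from the thesis:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

# Stand-in for an expensive chi^2 as a function of one scalar parameter.
def chi2(theta):
    return (theta - 0.3) ** 2 / 0.01

# Fit a GPR surrogate on a handful of pilot evaluations.
X_train = np.linspace(-1.0, 1.0, 15).reshape(-1, 1)
y_train = np.array([chi2(x[0]) for x in X_train])
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X_train, y_train)

def log_post(theta):
    # Surrogate chi^2 replaces the expensive model; flat prior on [-1, 1].
    if not -1.0 <= theta <= 1.0:
        return -np.inf
    return -0.5 * gpr.predict(np.array([[theta]]))[0]

theta = 0.0
lp = log_post(theta)
chain = []
for it in range(20000):
    prop = theta + 0.1 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
print("posterior mean:", np.mean(chain[5000:]))
```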
Ruggiero, Michèle. "Analyse semi-paramétrique des modèles de durées : l'apport des méthodes bayésiennes." Aix-Marseille 2, 1989. http://www.theses.fr/1989AIX24008.
We propose a semiparametric analysis of duration models. In this special class of regression models, the dependent variable is the time spent by a person in a particular state - the duration of an unemployment spell, for instance - and the explanatory variables are the personal characteristics of this person. The semiparametric analysis of these models consists in specifying the relation between the duration and the explanatory variables (the duration is supposed to be a specified function of the explanatory variables, depending on a finite number of unknown parameters) without specifying the data distribution. The parameters involved in this relation are then considered as parameters of interest, and the data distribution is a nuisance parameter. The thesis begins with a survey of non-Bayesian semiparametric estimation methods; these methods fail to discard the nuisance data distribution. We then suggest a Bayesian method, the principle of which is to place a prior distribution on the nuisance parameter, the data distribution. We then obtain semiparametric estimators for the parameters of interest by computing their posterior distribution, conditional on the data and integrated with respect to the nuisance parameter. The thesis ends with a simulation study checking the robustness of the proposed estimators.
Puengnim, Anchalee. "Classification de modulations linéaires et non-linéaires à l'aide de méthodes bayésiennes." Toulouse, INPT, 2008. http://ethesis.inp-toulouse.fr/archive/00000676/.
This thesis studies the classification of digital linear and nonlinear modulations using Bayesian methods. Modulation recognition consists of identifying, at the receiver, the type of modulation used by the transmitter. It is important in many communication scenarios, for example to secure transmissions by detecting unauthorized users, or to determine which transmitter is interfering with the others. The received signal is generally affected by a number of impairments. We propose several classification methods that can mitigate the effects of imperfections in the transmission channels. More specifically, we study three techniques to estimate the posterior probabilities of the received signals conditioned on each candidate modulation.
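A minimal sketch of posterior-probability modulation classification under an AWGN channel with a uniform prior over candidate modulations; the constellations, SNR and sample size below are invented for illustration, and none of this is taken from the thesis (which addresses channel impairments beyond white noise):

```python
import numpy as np

rng = np.random.default_rng(2)

# Candidate constellations (unit average power).
constellations = {
    "BPSK": np.array([1, -1], dtype=complex),
    "QPSK": np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2),
}

def log_lik(r, points, sigma2):
    # Each received symbol is a Gaussian mixture over constellation points.
    d2 = np.abs(r[:, None] - points[None, :]) ** 2
    per_symbol = np.log(np.mean(np.exp(-d2 / sigma2), axis=1) / (np.pi * sigma2))
    return per_symbol.sum()

# Simulate QPSK at 10 dB SNR and compute posterior probabilities.
sigma2 = 0.1
tx = rng.choice(constellations["QPSK"], size=200)
r = tx + np.sqrt(sigma2 / 2) * (rng.normal(size=200) + 1j * rng.normal(size=200))

logs = {m: log_lik(r, pts, sigma2) for m, pts in constellations.items()}
mx = max(logs.values())
post = {m: np.exp(v - mx) for m, v in logs.items()}
z = sum(post.values())
print({m: p / z for m, p in post.items()})  # uniform prior over modulations
```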
Sedki, Mohammed. "Échantillonnage préférentiel adaptatif et méthodes bayésiennes approchées appliquées à la génétique des populations." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2012. http://tel.archives-ouvertes.fr/tel-00769095.
Sedki, Mohammed Amechtoh. "Échantillonnage préférentiel adaptatif et méthodes bayésiennes approchées appliquées à la génétique des populations." Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20041/document.
This thesis consists of two parts which can be read independently. The first part concerns the Adaptive Multiple Importance Sampling (AMIS) algorithm presented in Cornuet et al. (2012), which provides a significant improvement in stability and effective sample size thanks to the introduction of a recycling procedure. These numerical properties are particularly suited to the Bayesian paradigm in population genetics, where the model involves a large number of parameters. However, the consistency of the AMIS estimator remains largely open. In this work, we provide a novel Adaptive Multiple Importance Sampling scheme, a slight modification of the Cornuet et al. (2012) proposal, that preserves the above-mentioned improvements. Finally, using limit theorems on triangular arrays of conditionally independent random variables, we give a consistency result for the final particle system returned by our new scheme. The second part of this thesis lies within the ABC paradigm. Approximate Bayesian Computation has been successfully used in population genetics models to bypass the calculation of the likelihood. These algorithms provide an accurate estimator by comparing the observed dataset to a sample of datasets simulated from the model. Although parallelization is easily achieved, computation times for ensuring a suitable approximation quality of the posterior distribution are still long. To alleviate this issue, we propose a sequential algorithm adapted from Del Moral et al. (2012) which runs twice as fast as traditional ABC algorithms. Its parameters are calibrated to minimize the number of simulations from the model.
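For orientation, the simplest member of this family is rejection ABC: draw from the prior, simulate, and keep the draws whose simulated summary lands close to the observed one. The sequential scheme above is a refinement of this idea; the sketch below (Gaussian toy model, invented prior and tolerance) only illustrates the rejection baseline:

```python
import numpy as np

rng = np.random.default_rng(3)

# Observed data: draws from a model with unknown mean theta.
theta_true = 2.0
x_obs = rng.normal(theta_true, 1.0, size=100)
s_obs = x_obs.mean()  # summary statistic

def simulate(theta):
    # The simulator plays the role of an intractable likelihood.
    return rng.normal(theta, 1.0, size=100).mean()

# Rejection ABC: keep prior draws whose simulated summary falls
# within a tolerance epsilon of the observed summary.
eps, accepted = 0.1, []
while len(accepted) < 500:
    theta = rng.uniform(-10.0, 10.0)  # prior draw
    if abs(simulate(theta) - s_obs) < eps:
        accepted.append(theta)
print("ABC posterior mean:", np.mean(accepted))
```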
Jay, Flora. "Méthodes bayésiennes en génétique des populations : relations entre structure génétique des populations et environnement." Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENS026/document.
We introduce a new method to study the relationships between population genetic structure and environment. This method is based on Bayesian hierarchical models which use both multi-locus genetic data and spatial, environmental, and/or cultural data. Our method provides the inference of population genetic structure, the evaluation of the relationships between the structure and non-genetic covariates, and the prediction of population genetic structure based on these covariates. We present two applications of our Bayesian method. First, we used human genetic data to evaluate the role of geography and languages in shaping Native American population structure. Second, we studied the population genetic structure of 20 Alpine plant species and forecasted intra-specific changes in response to global warming.
Jay, Flora. "Méthodes bayésiennes pour la génétique des populations : relations entre structure génétique des populations et environnement." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00648601.
Full textPetit-Graffard, Claude. "Les Méthodes bayésiennes dans les essais cliniques multicritères à visée pharmaco-économique : cas d'un essai sur la schizophrénie." Paris 11, 1999. http://www.theses.fr/1999PA11T003.
Guin, Ophélie. "Méthodes bayésiennes semi-paramétriques d'extraction et de sélection de variables dans le cadre de la dendroclimatologie." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00636704.
Hajj, Paméla El. "Méthodes d'aide à la décision thérapeutique dans les cas des maladies rares : intérêt des méthodes bayésiennes et application à la maladie de Horton." Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTS037/document.
In recent years, scientists have had difficulty studying rare diseases with conventional methods, because the sample size needed in such studies to reach conventional frequentist power is not suited to the number of available patients. After systematically searching the literature and characterizing the different methods used in the context of rare diseases, we observed that most of the proposed methods are deterministic and globally unsatisfactory, because it is difficult to correct for the insufficient statistical power. More attention has been paid to Bayesian models, in which a prior distribution combined with a current study makes it possible to draw decisions from the posterior distribution. Determining the prior distribution in a Bayesian model is challenging; we describe the process of determining the prior, including the possibility of incorporating information from historical controlled trials and/or data from other studies sufficiently close to the subject of interest. First, we describe a Bayesian model that aims to test the non-inferiority hypothesis, based on the hypothesis that methotrexate is more effective than corticosteroids alone. Our work also rests on the epsilon-contamination method, which is based on contaminating a prior that is not entirely satisfactory with a series of distributions drawn from information on other studies sharing similar conditions, treatments or even populations. Contamination is a way to incorporate the proximity of the information provided by these studies.
Nguyen, Chi Cong. "Amélioration des approches bayésiennes MCMC pour l'analyse régionale des crues." Nantes, 2012. http://archive.bu.univ-nantes.fr/pollux/show.action?id=178da900-8dd8-491b-8720-011608382b98.
This thesis presents further developments of an approach initially proposed by Gaume (2010), which aims to incorporate available information on extreme floods at ungauged sites into regional flood frequency analysis (RFFA). The performance and robustness of this approach are tested and compared to a reference approach proposed by Hosking & Wallis (1997). The comparisons are based on both simulations and case studies. The inference procedure is based on a GEV distribution associated with a specific likelihood formulation and a Bayesian MCMC algorithm for the estimation of the parameters. First, the inference results obtained without incorporating extreme floods are compared on simulations, with a focus on the effects of possible heterogeneities in the considered regions. Next, both approaches are applied to two regions. This application confirms the very positive impact of incorporating information on extreme floods in RFFA, which outperforms the results based on a conventional regional approach.
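The building block named here (a GEV likelihood sampled by Bayesian MCMC) fits in a short sketch. The synthetic annual maxima, flat priors and random-walk steps below are invented for illustration, and the sketch omits the regional pooling and historical-flood likelihood that are the point of the thesis:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)

# Synthetic annual-maximum sample (shape c in scipy's convention).
data = genextreme.rvs(c=-0.1, loc=100.0, scale=30.0, size=60, random_state=rng)

def log_post(p):
    loc, log_scale, c = p
    # Flat priors; likelihood from the GEV density.
    ll = genextreme.logpdf(data, c=c, loc=loc, scale=np.exp(log_scale)).sum()
    return ll if np.isfinite(ll) else -np.inf

p = np.array([np.median(data), np.log(data.std()), 0.0])
lp = log_post(p)
step = np.array([2.0, 0.05, 0.05])
chain = []
for it in range(20000):
    prop = p + step * rng.normal(size=3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        p, lp = prop, lp_prop
    chain.append(p.copy())
post = np.array(chain[5000:])
print("posterior means (loc, scale, shape):",
      post[:, 0].mean(), np.exp(post[:, 1]).mean(), post[:, 2].mean())
```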
Faure, Charly. "Approches bayésiennes appliquées à l’identification d’efforts vibratoires par la méthode de Résolution Inverse." Thesis, Le Mans, 2017. http://www.theses.fr/2017LEMA1002.
Increasingly accurate models are developed to predict the vibroacoustic behavior of structures and to propose adequate treatments. The vibration sources used as inputs to these models, however, remain largely unknown. In simulation, an error in the vibration sources produces a bias in the vibroacoustic predictions. A way to reduce this bias is to characterize the vibration sources experimentally, in operational conditions, before simulation; this is the subject of this PhD work. The proposed approach is based on an inverse method, the Force Analysis Technique (FAT), and allows the identification of vibration sources from displacement measurements. The noise sensitivity, common to most inverse methods, is handled in a probabilistic framework using Bayesian methods. This Bayesian framework allows: improvements in FAT robustness; automatic detection of sources; sparse identification of pointwise sources; identification of model parameters for homogenized structures; identification of unsteady sources; propagation of uncertainties to the force spectrum (with credibility intervals); and measurement quality assessment from an empirical signal-to-noise ratio. These last two points are obtained from a single scan of the structure, where more traditional statistical methods need multiple scans. Both numerical and experimental validations are proposed, with a controlled excitation and with an industrial source. Moreover, the procedure is largely unsupervised, so the user only has a small number of parameters to set; to a certain extent, the proposed approaches can be applied as black boxes.
Costache, Mihai. "Support vector machines et méthodes bayésiennes pour l'apprentissage sémantique fondé sur des catégories : recherche dans les bases de données d'imagerie satellitaire." Paris, ENST, 2008. http://www.theses.fr/2008ENST0026.
Nowadays large volumes of multimedia data are available, originating from different domains of human activity such as photography, television, remote sensing applications, etc. For all these data there is a clear need for tools and methods which allow an optimal data organisation, so that the content can be accessed in a fast and efficient manner. The number of operational EO satellites increases every year and generates an explosion of the acquired data volume. Nowadays, for instance, on average between 10 and 100 gigabytes of image data are recorded daily by the available optical and Synthetic Aperture Radar (SAR) sensors on board EO satellites. ESA's Environmental Satellite, Envisat, deployed in 2002, collects around 18 terabytes of multisensor data per year. This leads to a total of about 10 terabytes of annually recorded data, representing a huge volume to be processed and interpreted. In classical remote sensing applications, the generated data are treated manually by specialised experts for each domain of application. However, this type of analysis is costly and time consuming. Moreover, it allows the processing of only a small fraction of the available data, as user-based image interpretation proceeds at a far lower pace than that at which the recorded images are sent to the ground stations. In order to cope with these two major aspects of remote sensing, there is a clear need for highly efficient search tools for EO image archives and for search mechanisms able to identify and recognise structures within EO images; moreover, these systems should be fast and work with high precision. Such a system should automatically classify the available digital image collection, based on a previous training supervised by an expert. In this way, the image database is better organized and images of interest can be identified more easily than by manual expert image interpretation alone. The task is to infer knowledge from EO image archives by means of human-machine interaction, which enables the transfer of human expertise to the machine through knowledge inference algorithms that interpret the human decision and translate it into conceptual levels. In this way, the EO image information search and extraction process is automated, allowing a fast response adapted to human queries.
Foulley, Jean-Louis. "Méthodes d'évaluation des reproducteurs pour des caractères discrets à déterminisme polygénique en sélection animale." Paris 11, 1987. http://www.theses.fr/1987PA112160.
Linear and non-linear models for the analysis of categorical data in animal breeding are reviewed and discussed in the light of recent research in that area. Only non-linear methods based on the threshold liability concept introduced by Wright are described. Emphasis is on describing statistical techniques for estimating genetic merit and parameters of genetic and phenotypic variation. For the non-linear threshold model, it is shown how Bayesian methodology is particularly well suited for estimating location and dispersion parameters on the underlying scale under mixed sources of variation. The generality of this approach is illustrated through a discussion of several situations in which this procedure can be applied (ordered polychotomies, multiple binary responses, mixtures of binary and normal traits).
Foll, Matthieu. "Méthodes bayesiennes pour l'estimation de l'histoire démographique et de la pression de sélection à partir de la structure génétique des populations." Phd thesis, Grenoble 1, 2007. http://www.theses.fr/2007GRE10280.
Recent advances in the fields of computational biology and molecular biology techniques have led to the emerging discipline of population genomics, whose main objective is the study of the spatial structure of genetic diversity. This structure is determined both by neutral forces, like migration and drift, and by adaptive forces, like natural selection, and has important applications in many fields such as medical genetics or conservation biology. Here, we develop new statistical methods to evaluate the role of natural selection and environment in this spatial structure. All these methods are based on the Bayesian Dirichlet-multinomial model of genetic differentiation. First, we propose to include environmental variables in the estimation process, in order to identify the biotic and abiotic factors that determine the genetic structure. Then, we study the possibility of extending the Dirichlet-multinomial model to dominant markers, which have become very popular in the last few years but are affected by various ascertainment biases. Finally, we try to separate neutral from adaptive effects on the genetic structure, in order to identify regions of the genome influenced by natural selection. Three datasets have been analyzed to illustrate these new methods: human data, data on the argan tree in Morocco, and data on the periwinkle. We also developed three software packages implementing these various models.
Foll, Matthieu. "Méthodes bayesiennes pour l'estimation de l'histoire démographique et de la pression de sélection à partir de la structure génétique des populations." Phd thesis, Université Joseph Fourier (Grenoble), 2007. http://tel.archives-ouvertes.fr/tel-00216192.
Bataille, Frédéric. "Evaluation d'une méthode de restitution de contraste basée sur le guidage anatomique de la reconstruction des images cérébrales en tomographie par émission de positons." Paris 11, 2007. http://www.theses.fr/2007PA112044.
Positron emission tomography (PET) is a medical imaging modality providing in-vivo volumetric images of functional processes of the human body, used for the diagnosis and follow-up of neurodegenerative diseases. PET efficiency is however limited by its poor spatial resolution, which decreases local image contrast and leads to an under-estimation in the small cerebral structures involved in the degenerative mechanism of those diseases. This degradation, the so-called partial volume effect, is usually corrected in a post-reconstruction processing framework through the use of anatomical information, whose spatial resolution allows a better discrimination between functional tissues. However, this kind of method has the major drawback of being very sensitive to residual mismatches in the anatomical information processing. In this thesis we developed an alternative methodology to compensate for the degradation by incorporating in the reconstruction process both a model of the system impulse response and an anatomically-based image prior constraint. This methodology was validated by comparison with a post-reconstruction correction strategy, using data from an anthropomorphic phantom acquisition; we then evaluated its robustness to residual mismatches through a realistic Monte Carlo simulation of a cerebral exam. The proposed algorithm was finally applied to reconstruct clinical data.
Petit, Sébastien. "Improved Gaussian process modeling : Application to Bayesian optimization." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG063.
This manuscript focuses on Bayesian modeling of unknown functions with Gaussian processes. This task arises notably in industrial design, with numerical simulators whose computation time can reach several hours. Our work focuses on the problem of model selection and validation and goes in two directions. The first part studies empirically the current practices for stationary Gaussian process modeling. Several issues in Gaussian process parameter selection are tackled. A study of parameter selection criteria is the core of this part. It concludes that the choice of a family of models is more important than that of the selection criterion. More specifically, the study shows that the regularity parameter of the Matérn covariance function is more important than the choice of a likelihood or cross-validation criterion. Moreover, the analysis of the numerical results shows that this parameter can be selected satisfactorily by the criteria, which leads to a practical recommendation. Then, particular attention is given to the numerical optimization of the likelihood criterion. Observing important inconsistencies between the different libraries available for Gaussian process modeling, as in Erickson et al. (2018), we propose elementary numerical recipes making it possible to obtain significant gains both in terms of likelihood and model accuracy. Finally, the analytical formulas for computing cross-validation criteria are revisited from a new angle and enriched with similar formulas for the gradients. This last contribution aligns the computational cost of a class of cross-validation criteria with that of the likelihood. The second part presents a goal-oriented methodology, designed to improve the accuracy of the model in an (output) range of interest. This approach consists in relaxing the interpolation constraints on a relaxation range disjoint from the range of interest. We also propose an approach for automatically selecting the relaxation range. This new method can implicitly manage potentially complex regions of interest in the input space with few parameters; outside them, it learns non-parametrically a transformation improving the predictions on the range of interest. Numerical simulations show the benefits of the approach for Bayesian optimization, where one is interested in low values in the minimization framework. Moreover, the theoretical convergence of the method is established under some assumptions.
Dang, Hong-Phuong. "Approches bayésiennes non paramétriques et apprentissage de dictionnaire pour les problèmes inverses en traitement d'image." Thesis, Ecole centrale de Lille, 2016. http://www.theses.fr/2016ECLI0019/document.
Dictionary learning for sparse representation has been widely advocated for solving inverse problems. Optimization methods and parametric approaches to dictionary learning have been particularly explored. These methods meet some limitations, particularly related to the choice of parameters. In general, the dictionary size is fixed in advance, and the sparsity or noise level may also be needed. In this thesis, we show how to perform dictionary and parameter learning jointly, with an emphasis on image processing. We propose and study the Indian Buffet Process for Dictionary Learning (IBP-DL) method, using a Bayesian nonparametric approach. A primer on Bayesian nonparametrics is first presented: Dirichlet and Beta processes and their respective derivatives, the Chinese restaurant and Indian buffet processes, are described. The proposed model for dictionary learning relies on an Indian buffet prior, which permits learning a dictionary of adaptive size. The Monte Carlo inference method is detailed. Noise and sparsity levels are also inferred, so that in practice no parameter tuning is required. Numerical experiments illustrate the performance of the approach in different settings: image denoising, inpainting and compressed sensing. Results are compared with state-of-the-art methods. Matlab and C sources are available for the sake of reproducibility.
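The distinctive ingredient here is the Indian buffet prior, under which the number of dictionary atoms is not fixed in advance but grows with the data. A minimal simulation of that prior alone (not the full IBP-DL sampler; alpha and the data size below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_ibp(n_customers, alpha):
    """Draw a binary feature matrix from the Indian Buffet Process prior.

    The number of columns (here: dictionary atoms) is unbounded; it
    grows as each new 'customer' tries Poisson(alpha/i) new 'dishes'.
    """
    dishes_per_customer = []
    counts = []  # how many customers tried each dish so far
    for i in range(1, n_customers + 1):
        row = [rng.uniform() < m / i for m in counts]  # revisit old dishes
        n_new = rng.poisson(alpha / i)                 # try new dishes
        counts = [m + r for m, r in zip(counts, row)] + [1] * n_new
        dishes_per_customer.append(row + [True] * n_new)
    K = len(counts)
    Z = np.zeros((n_customers, K), dtype=int)
    for i, row in enumerate(dishes_per_customer):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(10, alpha=2.0)
print("number of active atoms:", Z.shape[1])
print(Z)
```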
Sayadi, Bessem. "Détection multi-utilisateurs par réseau de filtres de Kalman parallèles pour les systèmes AMRC." Paris 11, 2003. http://www.theses.fr/2003PA112224.
The research presented in this dissertation concerns multiuser detection, viewed as symbol-by-symbol Bayesian estimation based on a symbol-rate state-space representation of the DS-CDMA system. Classical work on the Kalman filtering approach is based on the assumption of Gaussian signals. This is not valid in our context, since the state noise has a non-Gaussian character (it is related to the users' transmitted symbols). By approximating the a posteriori pdf by a weighted sum of Gaussians (WSG), where the parameters of each Gaussian term are adjusted using one Kalman filter, we show that the resulting multiuser detector is structured as a Network of Kalman Filters (NKF). The proposed detector improves on the performance of classical structures such as DFE and MMSE, and is also near-far resistant. Its complexity is exponential, since it decodes the users jointly, so we propose two simplified structures. The first is based on a network of LMS filters. The second combines a hybrid SIC/PIC structure and a reduced Network of Kalman Filters, based on a reduced state-space representation of the DS-CDMA system. The proposed structure involves two steps. The first, called the forward step, decodes the users serially using a SIC structure. The second, called the backward step, is based on a hybrid SIC/PIC structure in order to produce better estimates of the transmitted symbols. In addition, we study channel estimation in a multiuser context, and propose a hybrid structure for joint estimation and detection. This enables us to evaluate the impact of channel estimation errors on the performance of the NKF detector. The second part of this dissertation is devoted to the study of multiuser detection in an impulsive environment. After showing the deterioration of the performance of structures optimized under the Gaussian hypothesis, we extend the formalism of Sorenson et al. to the case of impulsive noise. We show that the resulting structure is a convex combination of two NKFs working in parallel, weighted by the probability of occurrence of the impulsive noise (extended NKF structure). Furthermore, we propose a second structure based on Huber's function, called M-NKF-MMSE, whose performance depends on the value of the chosen threshold. Finally, in order to reduce the complexity of the proposed extended NKF structure, we incorporate a Likelihood Ratio Test (LRT) for impulse localization: we show that impulse localization in the received signal can be cast as a binary hypothesis test. The resulting structure is called NKF/Bayes.
Bahamyirou, Asma. "Sur les intervalles de confiance bayésiens pour des espaces de paramètres contraints et le taux de fausses découvertes." Mémoire, Université de Sherbrooke, 2015. http://hdl.handle.net/11143/6986.
Bernhardt, Stéphanie. "Performances et méthodes pour l'échantillonnage comprimé : Robustesse à la méconnaissance du dictionnaire et optimisation du noyau d'échantillonnage." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS443/document.
In this thesis, we are interested in two different low-rate sampling schemes that challenge Shannon's theory: the sampling of finite rate of innovation signals and compressed sensing. Recently it has been shown that, using an appropriate sampling kernel, finite rate of innovation signals can be perfectly sampled even though they are non-bandlimited. In the presence of noise, reconstruction is achieved by a model-based estimation procedure. In this thesis, we consider the estimation of the amplitudes and delays of a finite stream of Dirac pulses using an arbitrary kernel, and the estimation of a finite stream of arbitrary pulses using the Sum of Sincs (SoS) kernel. In both scenarios, we derive the Bayesian Cramér-Rao Bound (BCRB) for the parameters of interest. The SoS kernel is interesting since it is fully configurable by a vector of weights. In the first scenario, based on convex optimization tools, we propose a new kernel minimizing the BCRB on the delays, while in the second scenario we propose a family of kernels which maximizes the Bayesian Fisher information, i.e., the total amount of information about each of the parameters in the measurements. The advantage of the proposed family is that it can be user-adjusted to favor either of the estimated parameters. Compressed sensing is a promising emerging domain which outperforms the classical limit of Shannon sampling theory if the measurement vector can be approximated as a linear combination of a few basis vectors extracted from a redundant dictionary matrix. Unfortunately, in realistic scenarios, the knowledge of this basis, or equivalently of the entire dictionary, is often uncertain, i.e. corrupted by a basis mismatch (BM) error. The related estimation problem is based on matching continuous parameters of interest to a discretized parameter set over a regular grid. Generally, the parameters of interest do not lie on this grid, and there exists an estimation error even at high signal-to-noise ratio (SNR). This is the off-grid (OG) problem. The consequence of the BM and OG mismatch problems is that the estimation accuracy, in terms of Bayesian mean square error (BMSE), of popular sparse-based estimators collapses even if the support is perfectly estimated, and even in the high SNR regime. This saturation effect considerably limits the effective viability of these estimation schemes. In this thesis, the BCRB is derived for the CS model with unstructured BM and OG. We show that even though both problems share a very close formalism, they lead to different performances. In the biased dictionary-based estimation context, we propose and study analytical expressions of the BMSE on the estimation of the grid error at high SNR. We also show that this class of estimators is efficient and thus reaches the BCRB at high SNR. The proposed results are illustrated in the context of line spectra analysis for several popular sparse estimators. We also study the Expected Cramér-Rao Bound (ECRB) on the estimation of the amplitude for a small OG error, and show that it follows well the behavior of practical estimators in a wide SNR range. In the context of BM and OG errors, we propose two new estimation schemes, called Bias-Correction Estimator (BiCE) and Off-Grid Error Correction (OGEC) respectively, and study their statistical properties in terms of theoretical bias and variance.
Both estimators are essentially based on an oblique projection of the measurement vector and act as a post-processing estimation layer for any sparse-based estimator and mitigate considerably the BM (OG respectively) degradation. The proposed estimators are generic since they can be associated to any sparse-based estimator, fast, and have good statistical properties. To illustrate our results and propositions, they are applied in the challenging context of the compressive sampling of finite rate of innovation signals
Yousfi, Elqasyr Khadija. "MODÉLISATION ET ANALYSE STATISTIQUE DES PLANS D'EXPÉRIENCE SÉQUENTIELS." Phd thesis, Université de Rouen, 2008. http://tel.archives-ouvertes.fr/tel-00377114.
Marrelec, Guillaume. "Méthodes bayésiennes pour l'analyse de la réponse hémodynamique et de la connectivité fonctionnelle en IRM fonctionnelle : apport à l'étude de la plasticité dans la chirurgie des gliomes de bas grade intracérébraux." Paris 11, 2003. http://www.theses.fr/2003PA112260.
BOLD functional MRI (fMRI) is a recent imaging technique that can be used to dynamically and non-invasively study brain hemodynamic evolutions induced by neuronal activity. Use of fMRI could in particular allow for a better understanding of the plasticity phenomena that occur in the pathology of low-grade gliomas. To this end, the development of new mathematical models is necessary. We first briefly introduce functional neuroimaging and the methodological framework of our work. We then develop our research on two complementary models, whose common goal is the study of brain plasticity. The first model considers the brain as a black box characterized by its response function, the so-called hemodynamic response. We proposed a robust Bayesian method to infer this response, through the introduction of basic yet relevant a priori information about the underlying physiological process. This method was then generalized to account for most event-related fMRI acquisitions. A second model considers the interactions between regions involved in a given task. We developed a novel model, relying on the theory of independence graphs, that enables the quantification of interactions within this network, and proposed a Bayesian procedure to estimate these quantities. We finally show that both approaches can be considered as two special cases of a more general model whose further development would allow for a better understanding of brain functional processes as measured by fMRI. Both methods were applied to clinical data to investigate the brain plasticity observed among patients with low-grade brain gliomas. Most results obtained agree with the literature; some cast a new light on the functional reorganization that occurs among patients.
Saley, Issa. "Modélisation des données d'attractivité hospitalière par les modèles d'utilité." Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTS044/document.
Understanding how patients choose hospitals is of utmost importance for both hospital administrators and healthcare decision makers: the former for forecasting patient inflow, the latter for regulation. In this thesis, we present different methods of modelling patient admission data in order to forecast patient inflow and compare hospital attractiveness. The first two methods use count data models with possible spatial dependency; they are illustrated on patient admission data in Languedoc-Roussillon. The third method uses discrete choice models (RUMs). Because of some limitations of these models relative to our goal, we introduce a new approach in which the assumption of utility maximization is relaxed in favor of a utility threshold; that is to say, an agent (patient) may choose an alternative (hospital) if he believes that he can obtain a certain level of satisfaction in doing so, according to some aspects. The approach is illustrated on 2009 asthma admission data in Hérault.
Jay, Emmanuelle. "Détection en Environnement non Gaussien." Phd thesis, Université de Cergy Pontoise, 2002. http://tel.archives-ouvertes.fr/tel-00174276.
Full textAvec l'évolution technologique des systèmes radar, la nature réelle du fouillis s'est révélée ne plus être Gaussienne. Bien que l'optimalité du filtre adapté soit mise en défaut dans pareils cas, des techniques TFAC (Taux de Fausses Alarmes Constant) ont été proposées pour ce détecteur, dans le but d'adapter la valeur du seuil de détection aux multiples variations locales du fouillis. Malgré leur diversité, ces techniques se sont avérées n'être ni robustes ni optimales dans ces situations.
A partir de la modélisation du fouillis par des processus complexes non-Gaussiens, tels les SIRP (Spherically Invariant Random Process), des structures optimales de détection cohérente ont pu être déterminées. Ces modèles englobent de nombreuses lois non-Gaussiennes, comme la K-distribution ou la loi de Weibull, et sont reconnus dans la littérature pour modéliser de manière pertinente de nombreuses situations expérimentales. Dans le but d'identifier la loi de leur composante caractéristique qu'est la texture, sans a priori statistique sur le modèle, nous proposons, dans cette thèse, d'aborder le problème par une approche bayésienne.
Deux nouvelles méthodes d'estimation de la loi de la texture en découlent : la première est une méthode paramétrique, basée sur une approximation de Padé de la fonction génératrice de moments, et la seconde résulte d'une estimation Monte Carlo. Ces estimations sont réalisées sur des données de fouillis de référence et donnent lieu à deux nouvelles stratégies de détection optimales, respectivement nommées PEOD (Padé Estimated Optimum Detector) et BORD (Bayesian Optimum Radar Detector). L'expression asymptotique du BORD (convergence en loi), appelée le "BORD Asymptotique", est établie ainsi que sa loi. Ce dernier résultat permet d'accéder aux performances théoriques optimales du BORD Asymptotique qui s'appliquent également au BORD dans le cas où la matrice de corrélation des données est non singulière.
Les performances de détection du BORD et du BORD Asymptotique sont évaluées sur des données expérimentales de fouillis de sol. Les résultats obtenus valident aussi bien la pertinence du modèle SIRP pour le fouillis que l'optimalité et la capacité d'adaptation du BORD à tout type d'environnement.
Picard, Guillaume. "Traitement statistique des distorsions non-linéaires pour la restauration des enregistrements sonores." Phd thesis, Télécom ParisTech, 2006. http://pastel.archives-ouvertes.fr/pastel-00002315.
Beeman, Jai Chowdhry. "Le rôle des gaz à effet de serre dans les variations climatiques passées : une approche basée sur des chronologies précises des forages polaires profonds." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAU023/document.
Deep polar ice cores contain records of both past climate and trapped air that reflects past atmospheric compositions, notably of greenhouse gases. This record allows us to investigate the role of greenhouse gases in climate variations over eight glacial-interglacial cycles. The ice core record, like all paleoclimate records, contains uncertainties associated both with the relationships between proxies and climate variables, and with the chronologies of the records contained in the ice and trapped air bubbles. In this thesis, we develop a framework, based on Bayesian inverse modeling and the evaluation of complex probability densities, to accurately treat uncertainty in the ice core paleoclimate record. Using this framework, we develop two studies, the first about Antarctic temperature and CO2 during the last deglaciation, and the second developing a Bayesian synchronization method for ice cores. In the first study, we use inverse modeling to identify the probabilities of piecewise linear fits to CO2 and a stack of Antarctic temperature records from five ice cores, along with the individual temperature records from each core, over the last deglacial warming, known as Termination 1. Using the nodes, or change points, in the piecewise linear fits accepted during the stochastic sampling of the posterior probability density, we discuss the timings of millennial-scale changes in trend in the series, and calculate the phasings between coherent changes. We find that the phasing between Antarctic temperature and CO2 likely varied, though the response times remain within a range of ~500 years from synchrony, both between events during the deglaciation and across the individual ice core records. This result indicates both regional-scale complexity and modulations or variations in the mechanisms linking Antarctic temperature and CO2 across the deglaciation. In the second study, we develop a Bayesian method to synchronize ice cores using corresponding time series in the IceChrono inverse chronological model. Tests show that this method is able to accurately synchronize CH4 series, and is capable of including external chronological observations and prior information about the glaciological characteristics at the coring site. The method is continuous and objective, bringing a new degree of accuracy and precision to the use of synchronization in ice core chronologies.
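The change-point idea used in the first study can be illustrated with a one-node toy problem: sample the node position of a continuous piecewise-linear fit by Metropolis-Hastings. Everything below (the data, the priors, and the shortcut of profiling the slopes instead of integrating them out) is invented for illustration and is far simpler than the multi-node, multi-record inference of the thesis:

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic series with one change in slope at tau = 6.0.
t = np.linspace(0.0, 10.0, 200)
y = np.where(t < 6.0, 0.5 * t, 3.0 + 2.0 * (t - 6.0)) + 0.3 * rng.normal(size=200)

def log_post(tau):
    # Continuous piecewise-linear model with a node at tau; flat prior
    # on (1, 9); slopes profiled out by least squares for brevity.
    if not 1.0 < tau < 9.0:
        return -np.inf
    X = np.column_stack([np.ones_like(t), t, np.maximum(t - tau, 0.0)])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    rss = np.sum((y - X @ beta) ** 2)
    return -0.5 * len(t) * np.log(rss)

tau = 5.0
lp = log_post(tau)
chain = []
for it in range(10000):
    prop = tau + 0.3 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        tau, lp = prop, lp_prop
    chain.append(tau)
print("posterior mean change point:", np.mean(chain[2000:]))
```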
Jouffroy, Emma. "Développement de modèles non supervisés pour l'obtention de représentations latentes interprétables d'images." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0050.
The Laser Megajoule (LMJ) is a large research device that simulates pressure and temperature conditions similar to those found in stars. During experiments, diagnostics are guided into an experimental chamber for precise positioning. To minimize the risks associated with human error in such an experimental context, the automation of an anti-collision system is envisaged. This involves the design of machine learning tools offering reliable decision levels based on the interpretation of images from cameras positioned in the chamber. Our research focuses on probabilistic generative neural methods, in particular variational auto-encoders (VAEs). This class of models was chosen because it potentially gives access to a latent space directly linked to the properties of the objects making up the observed scene. The major challenge is to study the design of deep network models that effectively give access to such a fully informative and interpretable representation, with a view to system reliability. The probabilistic formalism intrinsic to VAEs allows us, if such a representation can be recovered, to analyse the uncertainties of the encoded information.
Lassoued, Khaoula. "Localisation de robots mobiles en coopération mutuelle par observation d'état distribuée." Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2289/document.
In this work, we study cooperative localization issues for mobile robotic systems that interact with each other without using relative measurements (e.g. bearings and relative distances). The considered localization technologies are based on beacons or satellites that provide radio-navigation measurements. Such systems often lead to offsets between real and observed positions. These systematic offsets (i.e., biases) are often due to inaccurate beacon positions, or to differences between the real electromagnetic wave propagation and the observation models. The impact of these biases on robot localization should not be neglected. Cooperation and data exchange (estimates of biases, estimates of positions and proprioceptive measurements) significantly reduce systematic errors. However, cooperative localization based on sharing estimates is subject to data incest problems (i.e., reuse of identical information in the fusion process) that often lead to over-convergence. When position information is used in a safety-critical context (e.g. close navigation of autonomous robots), one should check the consistency of the localization estimates. In this context, we aim at characterizing reliable confidence domains that contain the robot positions with high reliability. Hence, set-membership methods are considered as efficient solutions. This kind of approach enables the information to be merged adequately even when it is reused several times. It also provides reliable domains. Moreover, the use of non-linear models does not require any linearization. The modeling of a cooperative system of n_r robots with biased beacon measurements is first presented. Then, we perform an observability study. Two cases regarding the localization technology are considered; observability conditions are identified and demonstrated. We then propose a set-membership method for cooperative localization. Cooperation is performed by sharing estimated positions, estimated biases and proprioceptive measurements. Sharing bias estimates reduces the estimation error and the uncertainty of the robot positions. The feasibility of the algorithm is validated through simulation with several robots, when the observations are beacon distance measurements. The cooperation provides better performance compared to a non-cooperative method. Afterwards, the cooperative algorithm based on the set-membership method is tested using real data with two experimental vehicles. Finally, we compare the performance of the interval method with a sequential Bayesian approach based on covariance intersection. Experimental results indicate that the interval approach provides more accurate positions of the vehicles, with smaller confidence domains that remain reliable. The comparison is performed in terms of accuracy and uncertainty.
Elvira, Clément. "Modèles bayésiens pour l’identification de représentations antiparcimonieuses et l’analyse en composantes principales bayésienne non paramétrique." Thesis, Ecole centrale de Lille, 2017. http://www.theses.fr/2017ECLI0016/document.
This thesis proposes Bayesian parametric and nonparametric models for signal representation. The first model infers a higher-dimensional representation of a signal for the sake of robustness, by enforcing the information to be spread uniformly. These so-called anti-sparse representations are obtained by solving a linear inverse problem with an infinite-norm penalty. We propose a Bayesian formulation of anti-sparse coding involving a new probability distribution, referred to as the democratic prior. A Gibbs sampler and two proximal samplers are proposed to approximate Bayesian estimators. The algorithm is called BAC-1. Simulations on synthetic data illustrate the performance of the proposed samplers, and the results are compared with state-of-the-art methods. The second model identifies a lower-dimensional representation of a signal for modelling and model selection. Principal component analysis is very popular for dimension reduction. The selection of the number of significant components is essential, but often based on practical heuristics depending on the application. Few works have proposed a probabilistic approach to infer the number of significant components. We propose a Bayesian nonparametric principal component analysis called BNP-PCA. The proposed model involves an Indian buffet process to promote a parsimonious use of principal components, which are assigned a prior distribution defined on the manifold of orthonormal bases. Inference is done using MCMC methods. The estimators of the latent dimension are studied theoretically and empirically. The relevance of the approach is assessed on two applications.
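The deterministic counterpart of the anti-sparse formulation above is the penalized problem min_x ||y - Ax||_2^2 + lambda*||x||_inf. A sketch of it in epigraph form with a generic solver (random toy data; lambda and the sizes are arbitrary; the thesis itself explores the full posterior via Gibbs/proximal sampling rather than this point estimate):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# Overcomplete representation: more atoms (p) than dimensions (n);
# the l_inf penalty spreads energy uniformly over the coefficients.
n, p, lam = 20, 40, 1.0
A = rng.normal(size=(n, p))
y = rng.normal(size=n)

def objective(z):
    x, t = z[:-1], z[-1]
    r = y - A @ x
    return r @ r + lam * t

# Epigraph form: minimize ||y - Ax||^2 + lam*t  s.t.  -t <= x_i <= t.
cons = ([{"type": "ineq", "fun": lambda z, i=i: z[-1] - z[i]} for i in range(p)]
        + [{"type": "ineq", "fun": lambda z, i=i: z[-1] + z[i]} for i in range(p)])
res = minimize(objective, np.zeros(p + 1), constraints=cons, method="SLSQP",
               options={"maxiter": 500})
x_hat = res.x[:-1]
print("max |x_i|:", np.abs(x_hat).max())
print("share of coefficients within 1% of the max:",
      np.mean(np.abs(x_hat) > 0.99 * np.abs(x_hat).max()))
```

A characteristic signature of the infinite-norm penalty is that many coefficients saturate at the same maximal magnitude, which is what the last line checks.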
Albright, Anna Lea. "The trade-wind boundary layer and climate sensitivity." Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS207.
The response of trade-wind clouds to warming remains uncertain, raising the specter of a large climate sensitivity. Decreases in cloud fraction are thought to relate to the interplay among convective mixing, turbulence, radiation, and the large-scale environment. The EUREC4A (Elucidating the role of cloud-circulation coupling in climate) field campaign made extensive measurements that allow for deeper physical understanding and the first process-based constraint on the trade cumulus feedback. I first use EUREC4A observations to improve understanding of the characteristic vertical structure of the trade-wind boundary layer and the processes that produce this structure. This improved physical understanding is then applied to the evaluation of trade cumulus feedbacks. The ideas developed support new conceptual models of the structure of the trade-wind boundary layer and a more active role of clouds in maintaining this structure, and show little evidence for a strong trade cumulus feedback under warming.
Denis, Marie. "Méthodes de modélisation bayésienne et applications en recherche clinique." Thesis, Montpellier 1, 2010. http://www.theses.fr/2010MON1T001.
For some years, a surge of interest in Bayesian modelling methods has been observed in diverse domains such as the environment and medicine. Studies in clinical research rely on mathematical modelling and statistical inference. The purpose of this thesis is to study the possible applications of such modelling in the framework of clinical research. Indeed, the judgments and knowledge of experts (doctors, biologists) are numerous and important. It thus seems natural to take all this a priori knowledge into account in the statistical model. After a background on the fundamentals of Bayesian statistics, preliminary work in the framework of decision theory is presented, as well as a state of the art of approximation methods. A reversible-jump MCMC method was implemented in the context of models well known in clinical research: the Cox model and the logistic model. A model selection approach is proposed as an alternative to the classical criteria in the framework of spline regression. Finally, various applications of Bayesian nonparametric methods are developed; algorithms are adapted and implemented in order to apply such methods. This thesis advances Bayesian methods in clinical research in various ways, through several data sets.
Roncen, Rémi. "Modélisation et identification par inférence bayésienne de matériaux poreux acoustiques en aéronautique." Thesis, Toulouse, ISAE, 2018. http://www.theses.fr/2018ESAE0023/document.
Full text
The present work focuses on porous materials in aeronautics, with particular attention to the uncertainty attached to the identifications performed. Porous materials are added inside the cavities of acoustic liners, materials made of perforated plates and cavities that behave as Helmholtz resonators and are widely used in industry. The aim is to widen the frequency range of the absorption spectrum, while improving the behaviour of liners under grazing flow and high sound intensity. This general topic is addressed along two different leads. Porous materials were first considered in order to identify the intrinsic properties of their micro-geometry, which are required by the semi-phenomenological equivalent-fluid models used later on. To achieve this, a statistical Bayesian inference tool is used to extract the information about these properties contained in reflected or transmitted signals, in three distinct frequency regimes. Furthermore, a modelling extension for rigid porous media is introduced, adding two new intrinsic parameters related to the pore micro-structure and linked to the visco-inertial behaviour of the intra-pore fluid at low frequencies. Then, the liner impedance, a global property representing the acoustic behaviour of materials, is identified through a Bayesian inference process. Data from a NASA benchmark are used to validate the developed tool when the liner is subject to a shear grazing flow. An extension of these results to ONERA's B2A aeroacoustic bench is also performed, with measurements of the velocity profiles above the liner obtained with a Laser Doppler Velocimetry technique. This identification technique is then applied to liner materials filled with porous media, to highlight the possible influence of such media on the acoustic response of the liner under a shear grazing flow. Additional measurements are performed without flow, at normal incidence, in a classical impedance tube. Different combinations of perforated plates and porous materials are tested at different sound pressure levels, to evaluate the influence of the porous media on the non-linear behaviour of liners at high sound pressure levels.
Casarin, Roberto. "Méthodes de simulation pour l'estimation bayésienne des modèles à variables latentes." Paris 9, 2007. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2007PA090056.
Full text
Latent variable models are now very common in econometrics and statistics. This thesis mainly focuses on the use of latent variables in mixture modelling, time series analysis and continuous-time models, within a Bayesian inference framework based on simulation methods. In the third chapter we propose alpha-stable mixtures in order to account for skewness, heavy tails and multimodality in financial modelling. Chapter four proposes a Markov-switching stochastic-volatility model with a heavy-tailed observation process; we follow a Bayesian approach and use particle filters to filter the state and estimate the parameters. Chapter five deals with parameter estimation and the extraction of the latent structure in the volatilities of the US business cycle and stock market valuations, for which we propose a new regularised SMC procedure for Bayesian inference. In chapter six we employ a Bayesian inference procedure, based on Population Monte Carlo, to estimate the parameters in the drift and diffusion terms of a stochastic differential equation (SDE) from discretely observed data.
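As a reference point for the filtering step, here is a minimal bootstrap particle filter for a basic stochastic-volatility model with fixed parameters. The Markov-switching, heavy-tailed and regularised variants studied in the thesis are not reproduced, and all parameter values are invented.

# Sketch: bootstrap particle filter for a basic stochastic-volatility model
#   h_t = mu + phi * (h_{t-1} - mu) + sigma * eta_t,   y_t ~ N(0, exp(h_t)).
import numpy as np

rng = np.random.default_rng(2)
mu, phi, sigma = -1.0, 0.95, 0.3         # fixed parameters (illustrative)
T, N = 200, 1000                         # time steps, particles

# simulate synthetic data from the model
h = np.full(T, mu)
for t in range(1, T):
    h[t] = mu + phi * (h[t-1] - mu) + sigma * rng.standard_normal()
y = np.exp(h / 2) * rng.standard_normal(T)

particles = mu + sigma * rng.standard_normal(N)
filtered = np.empty(T)
for t in range(T):
    # propagate through the state equation (bootstrap proposal)
    particles = mu + phi * (particles - mu) + sigma * rng.standard_normal(N)
    # weight by the observation likelihood y_t ~ N(0, exp(h_t))
    logw = -0.5 * (particles + y[t] ** 2 * np.exp(-particles))
    w = np.exp(logw - logw.max()); w /= w.sum()
    filtered[t] = np.sum(w * particles)            # filtered mean of h_t
    particles = particles[rng.choice(N, size=N, p=w)]   # multinomial resampling
print("last filtered log-variance:", filtered[-1])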
Chopin, Nicolas. "Applications des méthodes de Monte Carlo séquentielles à la statistique bayésienne." Paris 6, 2003. http://www.theses.fr/2003PA066057.
Full text
Hallouli, Khalid. "Reconnaissance de caractères par méthodes markoviennes et réseaux bayésiens." Phd thesis, Télécom ParisTech, 2004. http://pastel.archives-ouvertes.fr/pastel-00000740.
Full text
Brosse, Nicolas. "Around the Langevin Monte Carlo algorithm : extensions and applications." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX014/document.
Full text
This thesis focuses on the problem of sampling in high dimension and is based on the unadjusted Langevin algorithm (ULA). In a first part, we suggest two extensions of ULA and provide precise convergence guarantees for these algorithms. ULA is not feasible when the target distribution is compactly supported; thanks to a Moreau-Yosida regularization, it is nevertheless possible to sample from a probability distribution close enough to the distribution of interest. ULA diverges when the tails of the target distribution are too thin; by appropriately taming the gradient, this difficulty can be overcome. In a second part, we give two applications of ULA. We provide an algorithm to estimate normalizing constants of log-concave densities, based on a sequence of distributions with increasing variance. By comparing ULA with the Langevin diffusion, we develop a new control variates methodology based on the asymptotic variance of the Langevin diffusion. In a third part, we analyze Stochastic Gradient Langevin Dynamics (SGLD), which differs from ULA only in the stochastic estimation of the gradient. We show that SGLD, applied with usual parameters, may produce samples very far from the target distribution. However, with an appropriate variance reduction technique, its computational cost can be much lower than that of ULA for the same accuracy.
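The ULA recursion itself is short enough to state in full: X_{k+1} = X_k - gamma * grad U(X_k) + sqrt(2 * gamma) * Z_k with Z_k standard Gaussian. Below is a minimal sketch targeting a standard Gaussian, with an illustrative step size; the Moreau-Yosida and tamed variants, as well as SGLD, amount to replacing the gradient term in the update line.

# Sketch: unadjusted Langevin algorithm (ULA) targeting pi(x) ∝ exp(-U(x)),
# here with U(x) = ||x||^2 / 2 (standard Gaussian target) for illustration.
import numpy as np

rng = np.random.default_rng(3)
d, gamma, n_iter = 10, 0.05, 20000

def grad_U(x):
    return x             # gradient of U(x) = ||x||^2 / 2

x = np.zeros(d)
samples = np.empty((n_iter, d))
for k in range(n_iter):
    # the tamed / Moreau-Yosida variants modify grad_U in this single line
    x = x - gamma * grad_U(x) + np.sqrt(2 * gamma) * rng.standard_normal(d)
    samples[k] = x
# pooled empirical variance over the second half of the chain; it should be
# close to 1, up to the O(gamma) discretisation bias that ULA incurs
print("empirical variance:", samples[n_iter // 2:].var())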
Trinh, Quoc Anh. "Méthodes neuronales dans l'analyse de survie." Evry, Institut national des télécommunications, 2007. http://www.theses.fr/2007TELE0004.
Full text
This thesis proposes a generalization of conventional survival models in which the linear predictor is replaced by a nonlinear multi-layer perceptron of the covariates. This neural-network modelling predicts survival times while taking into account time effects and interactions between variables. The neural network models are validated by cross-validation or by a Bayesian selection criterion based on the model's posterior probability. The prediction is refined by bootstrap aggregating (bagging) and Bayesian model averaging to increase precision. Moreover, censoring, the particularity of survival analysis, calls for a survival model that can take into account all available knowledge about the data, so as to obtain a better prediction. The Bayesian approach is therefore adopted because it allows a better generalization of neural networks by avoiding overfitting. In addition, hierarchical models in the Bayesian learning of neural networks are perfectly suited to selecting relevant variables, which gives a better account of time effects and of the interactions between variables.
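The bagging step mentioned above is simple to illustrate: train the same learner on bootstrap resamples and average the predictions. The sketch below uses a small scikit-learn MLP regressor on invented synthetic data rather than the survival network of the thesis, purely to show the aggregation pattern.

# Sketch: bootstrap aggregating (bagging) of neural-network predictions.
# The base learner and data are illustrative, not the survival model itself.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
X = rng.random((200, 3))
y = X[:, 0] + np.sin(3 * X[:, 1]) + 0.1 * rng.standard_normal(200)

models = []
for b in range(10):                          # 10 bootstrap replicates
    idx = rng.integers(0, len(X), len(X))    # sample rows with replacement
    m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=b)
    models.append(m.fit(X[idx], y[idx]))

X_new = rng.random((5, 3))
pred = np.mean([m.predict(X_new) for m in models], axis=0)   # aggregate
print(pred)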
Chen, Yuting. "Inférence bayésienne dans les modèles de croissance de plantes pour la prévision et la caractérisation des incertitudes." Thesis, Châtenay-Malabry, Ecole centrale de Paris, 2014. http://www.theses.fr/2014ECAP0040/document.
Full text
Plant growth models aim to describe plant development and functional processes in interaction with the environment. They offer promising perspectives for many applications, such as yield prediction for decision support or virtual experimentation in the context of breeding. This PhD focuses on solutions to enhance the predictive capacity of plant growth models, with an emphasis on advanced statistical methods. Our contributions can be summarized in four parts. Firstly, from a model design perspective, the Log-Normal Allocation and Senescence (LNAS) crop model is proposed. It describes only the essential ecophysiological processes of the biomass budget, in a probabilistic framework, so as to avoid identification problems and to accentuate uncertainty assessment in model prediction. Secondly, thorough research is conducted on model parameterization. In a Bayesian framework, both sequential Monte Carlo (SMC) methods and Markov chain Monte Carlo (MCMC) based methods are investigated to address parameterization issues in the context of plant growth models, which are frequently characterized by nonlinear dynamics, scarce data and a large number of parameters. In particular, when the prior distribution is non-informative, an iterative version of the SMC and MCMC methods is introduced, with the objective of putting more emphasis on the observation data while preserving the robustness of Bayesian methods; it can be regarded as a stochastic variant of an EM-type algorithm. Thirdly, a three-step data assimilation approach is proposed to address model prediction issues. The most influential parameters are first identified by global sensitivity analysis and chosen by model selection. Subsequently, model calibration is performed with special attention paid to uncertainty assessment. The posterior distribution obtained from this estimation step is then used as prior information for the prediction step, in which an SMC-based online estimation method, such as convolution particle filtering (CPF), is employed to perform data assimilation; both state and parameter estimates are updated with the purpose of improving prediction accuracy and reducing the associated uncertainty. Finally, from an application point of view, the proposed methodology is implemented and evaluated with two crop models, the LNAS model for sugar beet and the STICS model for winter wheat. Some indications are also given on the experimental design needed to optimize the quality of predictions. The applications to real case scenarios show encouraging predictive performance and open the way to potential tools for yield prediction in agriculture.
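The first of the three steps, global sensitivity analysis, can be illustrated in a few lines. The sketch below estimates crude first-order Sobol indices by conditional-mean binning on an invented toy model; the LNAS and STICS models and the estimators actually used in the thesis are not reproduced.

# Sketch: crude first-order Sobol indices, S_i = Var(E[Y|X_i]) / Var(Y),
# estimated by binning each input, on a toy model with three uniform inputs.
import numpy as np

rng = np.random.default_rng(4)
n, n_bins = 100000, 50
X = rng.random((n, 3))                                   # three uniform inputs
Y = 5 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.1 * X[:, 2]    # toy model output

var_Y = Y.var()
for i in range(3):
    bins = np.floor(X[:, i] * n_bins).astype(int)        # bin input i
    cond_means = np.array([Y[bins == b].mean() for b in range(n_bins)])
    S_i = cond_means.var() / var_Y                       # Var of conditional means
    print(f"first-order Sobol index S_{i} ~ {S_i:.3f}")

Inputs with a large index dominate the output variance and would be the ones retained for calibration in step two.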
Revillon, Guillaume. "Uncertainty in radar emitter classification and clustering." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS098/document.
Full text
In Electronic Warfare, the identification of radar signals is a key asset for decision making in military tactical situations. By providing information about the presence of threats, classification and clustering of radar signals play a significant role in ensuring that countermeasures against enemies are well chosen, and they enable the detection of unknown radar signals so that databases can be updated. Most of the time, Electronic Support Measures systems receive mixtures of signals from different radar emitters in the electromagnetic environment. Hence a radar signal, described by a pulse-to-pulse modulation pattern, is often only partially observed, due to missing measurements and measurement errors. The identification process relies on statistical analysis of basic measurable parameters of a radar signal, which constitute both quantitative and qualitative data. Many general and practical approaches based on data fusion and machine learning have been developed; they traditionally proceed by feature extraction, dimensionality reduction and classification or clustering. However, these algorithms cannot handle missing data, and imputation methods are required to complete the data before they can be used. Hence, the main objective of this work is to define a classification/clustering framework that handles both outliers and missing values for any type of data. Here, an approach based on mixture models is developed, since mixture models provide a mathematically grounded, flexible and meaningful framework for the wide variety of classification and clustering requirements. The proposed approach focuses on the introduction of latent variables that make it possible to handle the model's sensitivity to outliers and to allow a less restrictive modelling of missing data. A Bayesian treatment is adopted for model learning, supervised classification and clustering, and inference is carried out through a variational Bayesian approximation, since the joint posterior distribution of the latent variables and parameters is intractable. Numerical experiments on synthetic and real data show that the proposed method provides more accurate results than standard algorithms.
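For comparison with the bespoke model of the thesis, scikit-learn ships a standard variational Bayesian Gaussian mixture that prunes unneeded components through a (truncated) Dirichlet-process prior. The sketch below runs it on complete synthetic data; unlike the model of the thesis, it handles neither missing values nor mixed data types.

# Sketch: off-the-shelf variational Bayesian Gaussian mixture clustering.
# Start with deliberately too many components; the variational posterior
# drives the weights of the superfluous ones towards zero.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(5)
X = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(300, 2)),
    rng.normal(loc=[4, 4], scale=0.7, size=(300, 2)),
])

bgm = BayesianGaussianMixture(
    n_components=10,                       # deliberately too many
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)
labels = bgm.predict(X)
print("effective components:", np.sum(bgm.weights_ > 0.01))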
Spill, Yannick. "Développement de méthodes d'échantillonnage et traitement bayésien de données continues : nouvelle méthode d'échange de répliques et modélisation de données SAXS." Paris 7, 2013. http://www.theses.fr/2013PA077237.
Full text
The determination of the structures of proteins and other macromolecular complexes is becoming more and more difficult. The simplest cases have already been solved, and today's research in structural bioinformatics focuses on ever more challenging targets. To successfully determine the structures of these complexes, it has become necessary to combine several kinds of experiments and to relax the quality standards during acquisition. In other words, structure determination makes increasing use of sparse, noisy and inconsistent data, and it is therefore becoming essential to quantify the accuracy of a determined structure. This quantification is naturally achieved by statistical inference. In this thesis, I develop a new sampling algorithm, convective replica exchange, designed to find probable structures more robustly. I also propose a proper statistical treatment for continuous data, such as small-angle X-ray scattering (SAXS) data.
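Convective replica exchange is specific to the thesis, but the classical replica-exchange (parallel tempering) swap rule it builds on is standard: neighbouring temperatures exchange configurations with probability min(1, exp((beta_i - beta_j)(U(x_i) - U(x_j)))). Below is a minimal sketch on a toy one-dimensional double-well energy; temperatures, step sizes and the energy itself are all invented for illustration.

# Sketch: classical replica exchange (parallel tempering) on a 1-D double-well
# energy U(x) = (x^2 - 1)^2. Only the standard swap rule is shown; the
# convective scheduling of swaps introduced in the thesis is omitted.
import numpy as np

rng = np.random.default_rng(6)
betas = np.array([8.0, 4.0, 2.0, 1.0])   # inverse temperatures, cold to hot
x = rng.standard_normal(len(betas))      # one replica per temperature

def U(x):
    return (x ** 2 - 1) ** 2

for sweep in range(10000):
    # local Metropolis move within each replica at its own temperature
    prop = x + 0.5 * rng.standard_normal(len(betas))
    accept = np.log(rng.random(len(betas))) < betas * (U(x) - U(prop))
    x = np.where(accept, prop, x)
    # attempt one swap between a random pair of neighbouring temperatures
    i = rng.integers(len(betas) - 1)
    log_alpha = (betas[i] - betas[i + 1]) * (U(x[i]) - U(x[i + 1]))
    if np.log(rng.random()) < log_alpha:
        x[i], x[i + 1] = x[i + 1], x[i]
print("cold replica (should sit near a well at +/- 1):", x[0])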
Pons, Isabelle. "Méthodes de segmentation bayésienne appliquées aux images SAR : théorie et mise en oeuvre." Nice, 1994. http://www.theses.fr/1994NICE4714.
Full text