Dissertations / Theses on the topic 'Statistical estimation problem'

Consult the top 50 dissertations / theses for your research on the topic 'Statistical estimation problem.'

Where available in the metadata, you can also download the full text of each publication as a PDF and read its abstract online.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Jinbo. "Semiparametric efficient and inefficient estimation for the auxiliary outcome problem with the conditional mean model." Thesis, University of Washington, 2002. http://hdl.handle.net/1773/9531.

2

Källberg, David. "Nonparametric Statistical Inference for Entropy-type Functionals." Doctoral thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-79976.

Abstract:
In this thesis, we study statistical inference for entropy, divergence, and related functionals of one or two probability distributions. Asymptotic properties of particular nonparametric estimators of such functionals are investigated. We consider estimation from both independent and dependent observations. The thesis consists of an introductory survey of the subject and some related theory and four papers (A-D).

In Paper A, we consider a general class of entropy-type functionals which includes, for example, integer order Rényi entropy and certain Bregman divergences. We propose U-statistic estimators of these functionals based on the coincident or epsilon-close vector observations in the corresponding independent and identically distributed samples. We prove some asymptotic properties of the estimators such as consistency and asymptotic normality. Applications of the obtained results related to entropy maximizing distributions, stochastic databases, and image matching are discussed.

In Paper B, we provide some important generalizations of the results for continuous distributions in Paper A. The consistency of the estimators is obtained under weaker density assumptions. Moreover, we introduce a class of functionals of quadratic order, including both entropy and divergence, and prove normal limit results for the corresponding estimators which are valid even for densities of low smoothness. The asymptotic properties of a divergence-based two-sample test are also derived.

In Paper C, we consider estimation of the quadratic Rényi entropy and some related functionals for the marginal distribution of a stationary m-dependent sequence. We investigate asymptotic properties of the U-statistic estimators for these functionals introduced in Papers A and B when they are based on a sample from such a sequence. We prove consistency, asymptotic normality, and Poisson convergence under mild assumptions for the stationary m-dependent sequence. Applications of the results to time-series databases and entropy-based testing for dependent samples are discussed.

In Paper D, we further develop the approach for estimation of quadratic functionals with m-dependent observations introduced in Paper C. We consider quadratic functionals for one or two distributions. The consistency and rate of convergence of the corresponding U-statistic estimators are obtained under weak conditions on the stationary m-dependent sequences. Additionally, we propose estimators based on incomplete U-statistics and show their consistency properties under more general assumptions.
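As a concrete illustration of the epsilon-close U-statistic idea, the following Python sketch estimates the quadratic Rényi entropy of an i.i.d. sample. The sup-norm closeness criterion and the (2*eps)^d normalization are assumptions chosen for the illustration, not the papers' exact construction.

    import numpy as np
    from itertools import combinations

    def renyi2_entropy_estimate(x, eps):
        # U-statistic for q = P(|X - X'| <= eps) in sup norm: the fraction
        # of epsilon-close pairs in the sample.
        x = np.atleast_2d(x)
        n, d = x.shape
        close = sum(np.max(np.abs(xi - xj)) <= eps
                    for xi, xj in combinations(x, 2))
        q_hat = close / (n * (n - 1) / 2)
        # q_hat / (2*eps)^d estimates the integral of f^2, so the quadratic
        # Renyi entropy h2 = -log(int f^2) is estimated by:
        return -np.log(q_hat / (2 * eps) ** d)

    rng = np.random.default_rng(0)
    # For N(0,1) the true value is log(2*sqrt(pi)) ~ 1.27.
    print(renyi2_entropy_estimate(rng.normal(size=(500, 1)), eps=0.1))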
3

Mukherjee, Rajarshi. "Statistical Inference for High Dimensional Problems." Thesis, Harvard University, 2014. http://dissertations.umi.com/gsas.harvard:11516.

Abstract:
In this dissertation, we study minimax hypothesis testing in high-dimensional regression against sparse alternatives, and minimax estimation of the average treatment effect in a semiparametric regression model with a possibly large number of covariates.
4

Herrick, David Richard Mark. "Wavelet methods for curve and surface estimation." Thesis, University of Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310601.

5

Zhang, Bingwen. "Change-points Estimation in Statistical Inference and Machine Learning Problems." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/344.

Abstract:
"Statistical inference plays an increasingly important role in science, finance and industry. Despite the extensive research and wide application of statistical inference, most of the efforts focus on uniform models. This thesis considers the statistical inference in models with abrupt changes instead. The task is to estimate change-points where the underlying models change. We first study low dimensional linear regression problems for which the underlying model undergoes multiple changes. Our goal is to estimate the number and locations of change-points that segment available data into different regions, and further produce sparse and interpretable models for each region. To address challenges of the existing approaches and to produce interpretable models, we propose a sparse group Lasso (SGL) based approach for linear regression problems with change-points. Then we extend our method to high dimensional nonhomogeneous linear regression models. Under certain assumptions and using a properly chosen regularization parameter, we show several desirable properties of the method. We further extend our studies to generalized linear models (GLM) and prove similar results. In practice, change-points inference usually involves high dimensional data, hence it is prone to tackle for distributed learning with feature partitioning data, which implies each machine in the cluster stores a part of the features. One bottleneck for distributed learning is communication. For this implementation concern, we design communication efficient algorithm for feature partitioning data sets to speed up not only change-points inference but also other classes of machine learning problem including Lasso, support vector machine (SVM) and logistic regression."
6

Savino, Mary Edith. "Statistical learning methods for nonlinear geochemical problems." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASM032.

Abstract:
In this thesis, we propose two function estimation methods and a variable selection method for multivariate nonparametric regression, motivated by numerical simulations of geochemical systems for a deep geological disposal facility for highly radioactive waste.

In Chapter 2, we present an active learning procedure using Gaussian processes to approximate unknown functions of several input variables. The method computes the global uncertainty of the function estimate at each iteration and thus judiciously selects the observation points at which the function should next be evaluated. This considerably reduces the number of observations needed to obtain a satisfactory estimate of the underlying function, limiting calls to geochemical reaction-equation solvers and reducing computation times.

In Chapter 3, we propose a non-sequential function estimation method, GLOBER, which approximates the function to estimate by a linear combination of B-splines. Since the knots of the B-splines can be seen as changes in the derivatives of the function, they are selected using the generalized lasso.

In Chapter 4, we introduce a novel variable selection method for multivariate nonparametric regression, ABSORBER, which identifies the variables the unknown function really depends on, thereby simplifying the geochemical systems studied. In this approach, we assume that the function can be approximated by a linear combination of B-splines and their pairwise interaction terms, and the coefficients of the linear combination are estimated by the usual least squares criterion penalized by the l2-norms of the partial derivatives with respect to each variable.

The proposed approaches were evaluated and validated through numerical experiments and applied to geochemical systems of varying complexity; comparisons showed that they outperform state-of-the-art methods. In Chapter 5, the function estimation and variable selection methods were applied in the context of the European project EURAD and compared to methods devised by other scientific teams involved in the project. This application highlighted the performance of our methods, particularly when only the relevant variables selected by ABSORBER are considered. The methods are implemented in the R packages glober and absorber, which are available on CRAN (the Comprehensive R Archive Network).
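The active learning loop of Chapter 2 can be sketched in a few lines of Python. The RBF kernel, the greedy maximum-variance rule, and the fixed budget below are illustrative assumptions, with a toy function standing in for the geochemical solver.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def active_gp_fit(f, candidates, n_calls=15):
        # Query f where the GP posterior standard deviation is largest,
        # refit, repeat: expensive solver calls are spent where the
        # current estimate is most uncertain.
        X = candidates[:1]
        y = [f(candidates[0])]
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
        for _ in range(n_calls - 1):
            gp.fit(np.asarray(X), np.asarray(y))
            _, std = gp.predict(candidates, return_std=True)
            x_next = candidates[np.argmax(std)]   # most uncertain point
            X = np.vstack([X, x_next])
            y.append(f(x_next))
        return gp.fit(np.asarray(X), np.asarray(y))

    f = lambda x: np.sin(3 * x[0])   # stand-in for a geochemical solver call
    grid = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
    gp = active_gp_fit(f, grid)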
7

Depersin, Jules. "Statistical and Computational Complexities of Robust and High-Dimensional Estimation Problems." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAG009.

Abstract:
Statistical learning theory aims at providing a better understanding of the statistical properties of learning algorithms. These properties are often derived assuming the underlying data are obtained by sampling independent and identically distributed Gaussian (or subgaussian) random variables, so they can be drastically affected by the presence of gross errors (also called "outliers") in the data, or by heavy-tailed data. We are interested in procedures that retain good properties even when part of the data is corrupted or heavy-tailed, procedures we call robust, which we often obtain in this thesis using the Median-of-Means heuristic.

We are especially interested in procedures that are robust in high-dimensional set-ups, and we study (i) how dimensionality affects the statistical properties of robust procedures, and (ii) how dimensionality affects the computational complexity of the associated algorithms. On the statistical side (i), we find that for a large range of problems the statistical complexity of the problem and its robustness can in a sense be decoupled, leading to bounds where the dimension-dependent term is added to the corruption-dependent term, rather than multiplied by it. We propose ways of measuring the statistical complexities of some problems in this corrupted framework, using for instance the VC-dimension, and we also provide lower bounds for some of these problems. On the computational side (ii), we show that in two special cases, robust mean estimation with respect to the Euclidean norm and robust regression, one can relax the associated optimization problems, which become exponentially hard with the dimension, to obtain tractable algorithms whose complexity is polynomial in the dimension.
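The Median-of-Means heuristic used throughout the thesis is easy to state and implement; here is a minimal univariate sketch (the block count and the shuffling are arbitrary choices).

    import numpy as np

    def median_of_means(x, n_blocks=10, seed=0):
        # Split the sample into blocks, average within each block, and
        # return the median of the block means: a few corrupted or
        # heavy-tailed observations can no longer move the estimate far.
        x = np.random.default_rng(seed).permutation(np.asarray(x))
        blocks = np.array_split(x, n_blocks)
        return np.median([b.mean() for b in blocks])

    sample = np.random.default_rng(1).standard_t(df=2, size=1000)  # heavy tails
    print(median_of_means(sample), sample.mean())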
8

Newman, Mark A. "Some problems in the estimation and testing of percentiles." Thesis, University of Newcastle Upon Tyne, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239260.

9

Fertis, Apostolos. "A robust optimization approach to statistical estimation problems." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53270.

Abstract:
There have long been intuitive connections between robustness and regularization in statistical estimation, for example in lasso and support vector machines. In the first part of the thesis, we formalize these connections using robust optimization. Specifically: (a) We show that in classical regression, regularized estimators like lasso can be derived by applying robust optimization to the classical least squares problem. We discover the explicit connection between the size and structure of the uncertainty set used in the robust estimator, and the coefficient and kind of norm used in regularization. We compare the out-of-sample performance of the nominal and robust estimators on computer-generated and real data. (b) We prove that the support vector machines estimator is also a robust estimator of some nominal classification estimator (a fact also observed independently and simultaneously by Xu, Caramanis, and Mannor [52]). We generalize the support vector machines estimator by considering several sizes and structures for the uncertainty sets, and prove that the respective max-min optimization problems can be expressed as regularization problems.

In the second part of the thesis, we turn our attention to constructing robust maximum likelihood estimators. Specifically: (a) We define robust estimators for the logistic regression model, taking into consideration uncertainty in the independent variables, in the response variable, and in both. We consider several structures for the uncertainty sets, prove that in all cases they lead to convex optimization problems, and provide efficient algorithms to compute the estimates. We report on the out-of-sample performance of the robust as well as the nominal estimators on both computer-generated and real data sets, and conclude that the robust estimators achieve a higher success rate. (b) We develop a robust maximum likelihood estimator for the multivariate normal distribution by considering uncertainty sets for the data used to produce it. We develop an efficient first-order gradient descent method to compute the estimate and compare the efficiency of the robust estimate to the respective nominal one on computer-generated data.
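A representative identity of the kind formalized in the first part, stated here for a column-wise uncertainty set as in the related literature (the thesis treats several set sizes and structures), is that robustifying least squares against bounded perturbations of the design matrix columns reproduces an l1-regularized problem:

    \min_{\beta}\ \max_{\Delta:\ \|\Delta_j\|_2 \le \rho\ \forall j}\ \|y - (X + \Delta)\beta\|_2
      \;=\; \min_{\beta}\ \|y - X\beta\|_2 + \rho \|\beta\|_1 ,

where the Delta_j are the columns of the perturbation matrix; the regularization weight rho is exactly the radius of the uncertainty set.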
10

Rau, Christian. "Curve estimation and signal discrimination in spatial problems." Thesis, Australian National University, 2003. http://thesis.anu.edu.au/public/adt-ANU20031215.163519/index.html.

11

Arendt, Christopher D. "Adaptive Pareto Set Estimation for Stochastic Mixed Variable Design Problems." Ft. Belvoir : Defense Technical Information Center, 2009. http://handle.dtic.mil/100.2/ADA499860.

12

Boysen, Leif. "Jump estimation for noisy blurred step functions." Doctoral thesis, [S.l.] : [s.n.], 2006. http://webdoc.sub.gwdg.de/diss/2006/boysen.

13

Wei, Ran. "On Estimation Problems in Network Sampling." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1471846863.

14

Schülke, Christophe. "Statistical physics of linear and bilinear inference problems." Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC058.

Abstract:
The recent development of compressed sensing has led to spectacular advances in the understanding of sparse linear estimation problems, as well as in algorithms to solve them, and has triggered a new wave of developments in the related fields of generalized linear and bilinear inference. These problems have in common that they combine a linear mixing step with a nonlinear, probabilistic sensing step, producing indirect measurements of a signal of interest; such settings arise, for example, in medical imaging and astronomy. The aim of this thesis is to propose efficient algorithms for this class of problems and to carry out their theoretical analysis. To this end, it uses belief propagation, with which high-dimensional distributions can be sampled efficiently, making a Bayesian approach to inference tractable. The resulting algorithms undergo phase transitions that can be analyzed using the replica method, initially developed in the statistical physics of disordered systems. The analysis reveals phases in which inference is easy, hard, or impossible, corresponding to different energy landscapes of the problem. The main contributions of this thesis fall into three categories. First, the application of known algorithms to concrete problems: community detection, superposition codes, and an innovative imaging system. Second, a new, efficient message-passing algorithm for blind sensor calibration, which could be used in signal processing for a large class of measurement systems. Third, a theoretical analysis of achievable performance in matrix compressed sensing and of instabilities in Bayesian bilinear inference algorithms.
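Schematically, the generalized linear estimation problems studied here take the form below (a generic statement, not the thesis's exact notation; the bilinear variants add further unknowns such as calibration parameters):

    z = F x, \qquad y_\mu \sim P_{\mathrm{out}}(y_\mu \mid z_\mu), \qquad x_i \sim p_X \ \text{i.i.d.},

that is, a linear mixing step F followed by a componentwise probabilistic sensing step P_out. Blind sensor calibration replaces P_out by a channel with unknown per-sensor parameters, to be inferred jointly with x, which is what makes the problem bilinear.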
15

McKeone, James P. "Statistical methods for electromyography data and associated problems." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/79631/1/James_McKeone_Thesis.pdf.

Abstract:
This thesis proposes three novel models which extend the statistical methodology for motor unit number estimation, a clinical neurology technique. Motor unit number estimation is important in the treatment of degenerative muscular diseases and, potentially, spinal injury. Additionally, a recent and untested statistic to enable statistical model choice is found to be a practical alternative for larger datasets. The existing methods for dose finding in dual-agent clinical trials are found to be suitable only for designs of modest dimensions. The model choice case-study is the first of its kind containing interesting results using so-called unit information prior distributions.
16

Collier, Olivier. "Méthodes statistiques pour la mise en correspondance de descripteurs." PhD thesis, Université Paris-Est, 2013. http://tel.archives-ouvertes.fr/tel-00904686.

Abstract:
Many applications, notably in computer vision and in medicine, aim to identify similarities between several images or signals, in order to detect objects, track them, or match different views. In all cases, the algorithmic procedures that process the images rely on a selection of keypoints which they then try to match pairwise; for each point they compute a descriptor that characterizes it and discriminates it from the others. Among all possible procedures, the most widely used today is SIFT, which selects the keypoints, computes descriptors, and proposes a global matching criterion. In the first part, we try to improve this algorithm by changing the original descriptor, which requires finding the argument of the maximum of a histogram: its computation is statistically unstable. We must then also change the criterion for matching two descriptors. This results in a nonparametric testing problem in which both the null and the alternative hypotheses are composite, and even nonparametric. We use the generalized likelihood ratio test to exhibit consistent testing procedures, and we propose a minimax study of the problem. In the second part, we are interested in the optimality of a global matching procedure. We state a statistical model in which descriptors appear in a certain order in a first image and in another order in a second image; matching then amounts to estimating a permutation. We give an optimality criterion for the estimators in the minimax sense. In particular, we use the likelihood to find several consistent estimators, which are even optimal under certain conditions. Finally, we address practical aspects by showing that our estimators are computable in reasonable time, which allowed us to illustrate the hierarchy of our estimators by simulations.
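Computationally, estimating the permutation in the second part is an assignment problem. A hedged Python sketch under a Gaussian-noise matching criterion (the thesis's estimators and optimality theory are more refined than this) is:

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def match_descriptors(d1, d2):
        # Minimize the total squared distance over all pairings: a
        # likelihood-type criterion under Gaussian descriptor noise.
        cost = cdist(d1, d2, metric="sqeuclidean")
        rows, cols = linear_sum_assignment(cost)
        return cols   # d1[i] is matched to d2[cols[i]]

    rng = np.random.default_rng(0)
    d1 = rng.normal(size=(6, 8))
    perm = rng.permutation(6)
    d2 = d1[perm] + 0.05 * rng.normal(size=(6, 8))
    # The recovered matching should be the inverse permutation.
    print(np.array_equal(match_descriptors(d1, d2), np.argsort(perm)))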
17

De, la Rey Tanja. "Two statistical problems related to credit scoring / Tanja de la Rey." Thesis, North-West University, 2007. http://hdl.handle.net/10394/3689.

Abstract:
This thesis focuses on two statistical problems related to credit scoring. In credit scoring of individuals, two classes are distinguished, namely low- and high-risk individuals (the so-called "good" and "bad" risk classes). Firstly, we suggest a measure which may be used to study the nature of a classifier for distinguishing between the two risk classes. Secondly, we derive a new method, DOUW (detecting outliers using weights), which may be used to fit logistic regression models robustly and to detect outliers.

In the first problem, the focus is on a measure which may be used to study the nature of a classifier. This measure transforms a random variable so that it has the same distribution as another random variable. Assuming a linear form of this measure, three methods for estimating the parameters (slope and intercept) and for constructing confidence bands are developed and compared by means of a Monte Carlo study. The application of these estimators is illustrated on a number of datasets, and we also construct statistical hypothesis tests for the linearity assumption.

In the second problem, the focus is on providing a robust logistic regression fit and identifying outliers. It is well known that maximum likelihood estimators of logistic regression parameters are adversely affected by outliers. We propose a robust approach, DOUW, that also serves as an outlier detection procedure. The approach is based on associating high and low weights with the observations as a result of the likelihood maximization; it turns out that the outliers are those observations to which low weights are assigned. The procedure depends on two tuning constants, and a simulation study shows the effects of these constants on the performance of the proposed methodology. The results are presented in terms of four benchmark datasets as well as a large new dataset from retail marketing campaign analysis.

In the last chapter we apply the techniques developed in this thesis to a practical credit scoring dataset. We show that the DOUW method improves classifier performance and that the measure developed to study the nature of a classifier is useful in a credit scoring context, where it may be used to assess whether the distributions of the good and the bad risk individuals come from the same translation-scale family.
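In the spirit of DOUW, though not the authors' exact algorithm (whose weights emerge from the likelihood maximization itself), a simple reweighting loop shows how low weights can flag outliers in a logistic fit; the weight rule and the tuning constant c below are assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def reweighted_logistic(X, y, n_iter=10, c=0.1):
        # Alternate a weighted logistic fit with a weight update that
        # downweights points whose observed label gets low probability;
        # persistently low weights flag suspected outliers. y must be 0/1.
        w = np.ones(len(y))
        clf = LogisticRegression()
        for _ in range(n_iter):
            clf.fit(X, y, sample_weight=w)
            p = clf.predict_proba(X)[np.arange(len(y)), y]
            w = p / (p + c)
        return clf, w

    X = np.random.default_rng(0).normal(size=(200, 3))
    y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(int)
    y[:5] = 1 - y[:5]                # plant a few label outliers
    clf, w = reweighted_logistic(X, y)
    print(np.argsort(w)[:5])         # the planted outliers tend to get low weights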
18

Barbier, Jean. "Statistical physics and approximate message-passing algorithms for sparse linear estimation problems in signal processing and coding theory." Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCC130.

Abstract:
This thesis applies methods from the statistical physics of disordered systems and from inference to signal processing and coding theory, more precisely to sparse linear estimation problems. The main tools are graphical models and the approximate message-passing algorithm, together with the cavity method (referred to as the state evolution analysis in the signal processing context) for its theoretical analysis. We also use the replica method of the statistical physics of disordered systems, which associates to the studied problems a cost function referred to as the potential or free entropy in physics. It allows one to predict the different phases of typical complexity of the problem as a function of external parameters such as the noise level or the number of measurements available about the signal: inference can be typically easy, hard, or impossible. The hard phase corresponds to a regime where the sought solution coexists with another, unwanted solution of the message-passing equations; in this phase the unwanted solution is a metastable state, not the true equilibrium. This phenomenon can be likened to supercooled water, blocked in the liquid state below the temperature at which it should be solid at equilibrium. Guided by this understanding of why the algorithm gets blocked, we use a method that overcomes the metastability by mimicking the strategy adopted by nature itself for supercooled water: nucleation and spatial coupling. In supercooled water, a weak localized perturbation suffices to create a crystal nucleus that rapidly propagates through the whole medium thanks to the physical couplings between nearby atoms. The same process helps the algorithm find the signal: a nucleus containing local information about the signal is introduced and then spreads as a "reconstruction wave" similar to the crystal in the water.

After an introduction to statistical inference and sparse linear estimation, the necessary tools are introduced, followed by applications of these notions, divided into two parts. The signal processing part focuses on the compressed sensing problem, where one seeks to infer a sparse signal from a small number of possibly noisy linear projections. We study in detail the influence of structured operators in place of the purely random matrices originally used in compressed sensing. These allow a substantial gain in computational complexity and in memory allocation, necessary conditions for the algorithmic treatment of very large signals. We show that the combined use of such operators with spatial coupling yields a highly optimized reconstruction algorithm able to reach near-optimal performance. We also study the behavior of the algorithm when reconstructing approximately sparse signals, a fundamental question for applying compressed sensing to real physical signals, with a direct application to the reconstruction of images measured by fluorescence microscopy; the reconstruction of "natural" images is considered as well.

In coding theory, we study message-passing decoding performance for two distinct continuous-channel models. In a first scheme, the signal to infer is the noise itself, which can then be subtracted from the received signal. The second, sparse superposition codes for the additive white Gaussian noise channel, is the first example of an error-correction scheme directly interpretable as a structured compressed sensing problem. Here we apply most of the techniques developed in this thesis to obtain a very promising decoder that operates at transmission rates very close to the fundamental channel limit.
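The baseline noisy compressed sensing problem underlying both parts can be stated compactly (a generic formulation; sparse superposition codes impose additional section structure on x):

    y = F x + \xi, \qquad x \in \mathbb{R}^N \ \text{sparse}, \quad
    F \in \mathbb{R}^{M \times N},\ M < N, \quad \xi \sim \mathcal{N}(0, \sigma^2 I),

and the spatially coupled variants replace F by a banded block matrix whose first block is sampled at a higher rate, seeding the "reconstruction wave" described above.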
19

Oyeniran, Oluyemi. "Estimating the Proportion of True Null Hypotheses in Multiple Testing Problems." Bowling Green State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1466358483.

20

Alghamdi, Amani Saeed. "Study of Generalized Lomax Distribution and Change Point Problem." Bowling Green State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1526387579759835.

21

Salomond, Jean-Bernard. "Propriétés fréquentistes des méthodes Bayésiennes semi-paramétriques et non paramétriques." Thesis, Paris 9, 2014. http://www.theses.fr/2014PA090034/document.

Abstract:
Research on Bayesian nonparametric methods has received growing interest over the past twenty years, especially since the development of powerful simulation algorithms that make the implementation of complex Bayesian methods possible. It is therefore necessary to understand, from a theoretical point of view, the behaviour of these methods. This thesis presents various contributions to the study of frequentist properties of Bayesian nonparametric procedures. Although studying these methods from an asymptotic angle may seem restrictive, it makes it possible to grasp the operation of the Bayesian machinery in extremely complex models; in particular, this approach is useful for detecting the characteristics of the prior that strongly influence the inference. Many general results have been obtained in this setting, but as models become more complex and more realistic, they drift away from the usual assumptions and are no longer covered by the existing theory. Beyond the intrinsic interest of studying a specific model that does not satisfy the classical assumptions, such studies also lead to a better understanding of the mechanisms that govern Bayesian nonparametric methods in a general setting.
22

Kappus, Julia Johanna. "Nonparametric adaptive estimation for discretely observed Lévy processes." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2012. http://dx.doi.org/10.18452/16613.

Abstract:
This thesis deals with nonparametric estimation methods for discretely observed Lévy processes. A Lévy process X having finite variation on compact sets and finite second moments is observed at low frequency. The jump dynamics is fully described by the finite signed measure μ(dx) = x ν(dx), and the goal is to estimate, nonparametrically, some linear functional of μ. In the first part, kernel estimators are constructed and upper bounds on the corresponding risk are provided. From this, rates of convergence are derived under regularity assumptions on the Lévy measure; for particular cases, minimax lower bounds are proved, showing the rates of convergence to be minimax optimal.

The focus lies on the data-driven choice of the smoothing parameter, which is considered in the second part. Since nonparametric estimation methods for Lévy processes have strong structural similarities with nonparametric density deconvolution with unknown error density, both fields are discussed in parallel and the concepts are developed in generality, for Lévy processes as well as for density deconvolution. The bandwidth is chosen using techniques of model selection via penalization. The principle of model selection via penalization usually relies on the fact that the fluctuation of certain stochastic quantities can be controlled by penalizing with a deterministic term; in the setting investigated here, by contrast, the variance is unknown and the penalty term is itself a stochastic quantity. The main concern of this thesis is to develop strategies for dealing with the stochastic penalty term. The most important step in this direction is a modified estimator of the unknown characteristic function in the denominator, which allows the pointwise control of this object to be made uniform on the real line. The main technical tools in the arguments are concentration inequalities of Talagrand type for empirical processes.
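For orientation, the classical deconvolution kernel estimator that both settings build on has the form below, written here for density deconvolution with error characteristic function phi_epsilon; in the thesis the denominator is unknown and is replaced by the modified estimator mentioned above:

    \hat f_h(x) = \frac{1}{2\pi} \int e^{-\mathrm{i}ux}\, \mathcal{F}K(hu)\,
                  \frac{\hat\varphi_Y(u)}{\varphi_\varepsilon(u)}\, \mathrm{d}u ,

where phi_Y-hat is the empirical characteristic function of the observations and FK is the Fourier transform of the kernel K; the bandwidth h is the smoothing parameter chosen by penalized model selection.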
23

Adkins, Laura Jean. "A Generalization of the EM Algorithm for Maximum Likelihood Estimation in Mallows' Model Using Partially Ranked Data and Asymptotic Relative Efficiencies for Some Ranking Tests of The K-Sample Problem /." The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487933245538208.

24

King, Caleb B. "Bridging the Gap: Selected Problems in Model Specification, Estimation, and Optimal Design from Reliability and Lifetime Data Analysis." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/73165.

Abstract:
Understanding the lifetime behavior of their products is crucial to the success of any company in the manufacturing and engineering industries, and statistical methods for lifetime data are a key component in achieving this understanding. Sometimes a statistical procedure must be updated to be adequate for modeling specific data, as discussed in Chapter 2. In other cases the methods used in industrial standards are themselves inadequate, which is distressing because more appropriate statistical methods are available but remain unused; the research in Chapter 4 deals with such a situation. The research in Chapter 3 combines both scenarios and shows how statisticians and engineers from industry can join together to produce beautiful results.

After introducing basic concepts and notation in Chapter 1, Chapter 2 focuses on lifetime prediction for a product consisting of multiple components. During the production period, some components may be upgraded or replaced, resulting in a new "generation" of component. Incorporating this information into a competing risks model can greatly improve the accuracy of lifetime prediction; a generalized competing risks model is proposed and simulation is used to assess its performance.

In Chapter 3, optimal and compromise test plans are proposed for constant amplitude fatigue testing. These test plans are based on a nonlinear physical model from the fatigue literature that better captures the nonlinear behavior of fatigue life and accounts for effects of the testing environment. Sensitivity to the design parameters and modeling assumptions is investigated, and planning strategies are suggested.

Chapter 4 considers the analysis of accelerated destructive degradation test (ADDT) data for the purpose of estimating a thermal index. The current industry standards use a two-step procedure involving least squares regression in each step, whereas the methodology preferred in the statistical literature is maximum likelihood. A comparison of the procedures is performed, with two published datasets as motivating examples, and the maximum likelihood procedure is presented as the more viable alternative owing to its ability to quantify uncertainty and its modeling flexibility.
25

Söderlund, Robert. "Finite element methods for multiscale/multiphysics problems." Doctoral thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-42713.

Abstract:
In this thesis we focus on multiscale and multiphysics problems. We derive a posteriori error estimates for a one-way coupled multiphysics problem, using the dual weighted residual method. Such estimates can be used to drive local mesh refinement in adaptive algorithms, in order to efficiently obtain good accuracy in a desired goal quantity, which we demonstrate numerically. Furthermore, we prove existence and uniqueness of finite element solutions for a two-way coupled multiphysics problem, and we address the possibility of deriving dual weighted a posteriori error estimates for two-way coupled problems. For a two-way coupled linear problem, we show numerically that unless the coupling of the equations is too strong, the propagation of errors between the solvers goes to zero. We also apply a variational multiscale method to an elliptic and a hyperbolic problem that exhibit multiscale features. The method is based on numerical solutions of decoupled local fine-scale problems on patches. For the elliptic problem we derive an a posteriori error estimate and use an adaptive algorithm to automatically tune the resolution and patch size of the local problems. For the hyperbolic problem we demonstrate the importance of how the patches of the local problems are constructed, by numerically comparing the results obtained for symmetric and directed patches.
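Dual weighted residual estimates of the kind mentioned above take, schematically, the standard form (a sketch; the thesis derives the precise residuals and weights for the coupled problem):

    J(u) - J(u_h) \approx \sum_{K \in \mathcal{T}_h} \rho_K(u_h)\, \omega_K(z),

where J is the goal functional, rho_K is the local residual of the computed solution u_h on element K, and the weights omega_K come from a numerically approximated dual solution z; the elements with the largest products rho_K * omega_K are the ones refined by the adaptive algorithm.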
26

Marnitz, Philipp. "Statistical multiresolution estimators in linear inverse problems - foundations and algorithmic aspects." Doctoral thesis, Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2010. http://d-nb.info/1043515356/34.

27

Havet, Antoine. "Estimation de la loi du milieu d'une marche aléatoire en milieu aléatoire." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX033/document.

Abstract:
Introduced in the 1960s, the model of the random walk in an i.i.d. environment on the integers (RWRE) has only recently attracted interest in the statistical community, where various works have focused on estimating the environment distribution from a single trajectory of the RWRE. This thesis extends the advances made in those works and offers new approaches to the problem.

First, we consider the estimation problem from a frequentist point of view. When the RWRE is transient to the right or recurrent, we build the first nonparametric estimator of the density of the environment distribution and obtain an upper bound on the associated risk in the sup norm. Then we consider the estimation problem from a Bayesian perspective: when the RWRE is transient to the right, we prove the posterior consistency of the Bayesian estimator of the environment distribution.

The main mathematical difficulty of the thesis was to develop the tools needed for the proof of Bayesian consistency. To this end, we prove a quantitative version of a McDiarmid-type concentration inequality for Markov chains. We also study the return time to 0 of a branching process with immigration in a random environment (BPIRE) and show the existence of a finite exponential moment, uniformly over a class of BPIRE. Since the BPIRE is a Markov chain, this result makes explicit the dependence of the constants in the concentration inequality on the characteristics of the process.
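For reference, the classical i.i.d. form of McDiarmid's inequality that the thesis extends to Markov chains reads: if f satisfies the bounded-differences property with constants c_1, ..., c_n, then

    \mathbb{P}\bigl( |f(X_1,\dots,X_n) - \mathbb{E} f| \ge t \bigr)
      \le 2 \exp\!\Bigl( - \frac{2 t^2}{\sum_{i=1}^n c_i^2} \Bigr).

In the Markov-chain version proved here, the constants additionally depend on mixing-type properties of the chain, which is where the uniform exponential-moment bound for the branching process enters.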
28

Blagouchine, Iaroslav. "Modélisation et analyse de la parole : Contrôle d’un robot parlant via un modèle interne optimal basé sur les réseaux de neurones artificiels. Outils statistiques en analyse de la parole." Thesis, Aix-Marseille 2, 2010. http://www.theses.fr/2010AIX26666.

Abstract:
This thesis deals with speech modeling and processing, both considered under the common heading of quality: the first aspect is the development of an internal model for the control of speech production, the second the development of tools for its analysis. An optimal internal model with constraints is proposed and discussed for the control of a biomechanical speech robot based on the equilibrium point hypothesis (EPH, lambda-model). The robot's internal space is assumed to be composed of the motor commands lambda of the equilibrium point hypothesis. The main idea is that the robot's movements, and in particular its speech production, are carried out in such a way that the length of the path traveled in the internal space is minimized under acoustic and mechanical constraints. The mathematical aspect of the problem leads to the generalized geodesic problem, a problem of variational calculus whose exact analytical solution is quite complicated. Using some empirical findings, an approximate solution for the proposed optimal internal model is developed and implemented. It gives interesting and promising results and shows that the proposed internal model captures some of the reality of speech production; in particular, similarities are found between the robot's speech and real speech.

Next, with the aim of analyzing and characterizing speech signals, several methods of statistical speech signal processing are developed. They are based on higher-order statistics (namely on normalized central moments and on the fourth-order cumulant), as well as on the discrete normalized entropy. In this framework, we also design an unbiased and efficient estimator of the fourth-order cumulant, in both batch and adaptive versions.
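As one concrete instance of the batch setting, the classical unbiased estimator of the fourth cumulant (Fisher's k-statistic k4) can be computed as follows; this is the textbook formula, shown for illustration rather than as the thesis's exact estimator.

    import numpy as np

    def k4(x):
        # Unbiased (k-statistic) estimate of the fourth cumulant from
        # the central sample moments m2 and m4.
        x = np.asarray(x, dtype=float)
        n = x.size
        m2 = np.mean((x - x.mean()) ** 2)
        m4 = np.mean((x - x.mean()) ** 4)
        return (n**2 * ((n + 1) * m4 - 3 * (n - 1) * m2**2)
                / ((n - 1) * (n - 2) * (n - 3)))

    print(k4(np.random.default_rng(0).normal(size=10_000)))  # near 0 for Gaussian data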
APA, Harvard, Vancouver, ISO, and other styles
30

Gonçalves, Paulo Henrique Rodrigues. "Uma abordagem da distribuição normal através da resolução de uma situação problema com a utilização do software Geogebra." Universidade Federal de Goiás, 2014. http://repositorio.bc.ufg.br/tede/handle/tede/4760.

Full text
Abstract:
The Normal Distribution, also known as the Gaussian bell curve, is a mathematical model used to describe a set of values, and one of its main uses is to provide an estimate of the mean of that set. The Normal Distribution is essential for statistical inference; yet it is rarely studied in high school, being more commonly found in undergraduate curricula, specifically in disciplines related to probability and statistics. The PCNs (National Curriculum Parameters) highlight the relevance of teaching techniques of inference about a population through the evidence provided by a sample. This study proposes an approach to the Normal Distribution through the resolution of a problem situation, with the aid of the GeoGebra software, to be applied to a third-year high school class. In solving this problem, the students are motivated to estimate the mean weight of the oranges in a forty-kilogram batch, to use this information to estimate the number of oranges contained in the batch, and to determine the number of one-litre juice boxes that could be produced from it. The methodology used was bibliographic research and contextualization. The work is purely theoretical; its final product is a proposal for teaching the Normal Function articulated with the GeoGebra software.
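As a rough numerical illustration of the problem situation (every figure below is invented for the sketch, not taken from the dissertation), the estimation chain can be reproduced in a few lines of Python:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sample of orange weights in grams (invented numbers).
sample = rng.normal(loc=150.0, scale=15.0, size=30)

mean_weight = sample.mean()              # estimated mean weight of one orange (g)
total_mass_g = 40_000.0                  # the 40 kg batch from the problem situation
n_oranges = total_mass_g / mean_weight   # estimated number of oranges in the batch

# If each orange yields an assumed 50 ml of juice (another invented figure):
juice_litres = n_oranges * 0.050
boxes = int(juice_litres // 1)           # whole one-litre juice boxes

print(f"mean weight ~ {mean_weight:.1f} g, ~{n_oranges:.0f} oranges, ~{boxes} boxes")
```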
APA, Harvard, Vancouver, ISO, and other styles
31

Algers, Björn. "Stereo Camera Calibration Accuracy in Real-time Car Angles Estimation for Vision Driver Assistance and Autonomous Driving." Thesis, Umeå universitet, Institutionen för fysik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-149443.

Full text
Abstract:
The automotive safety company Veoneer produces high-end driver visual assistance systems, but knowledge about the absolute accuracy of their dynamic calibration algorithms, which estimate the vehicle's orientation, is limited. In this thesis, a novel measurement system is proposed for gathering reference data on a vehicle's orientation while it is in motion, specifically the pitch and roll angles of the vehicle. The focus has been on estimating how the uncertainty of the measurement system is affected by errors introduced during its construction, and on evaluating its potential as a viable tool for gathering reference data for algorithm performance evaluation. The system consisted of three laser distance sensors mounted on the body of the vehicle, and a range of data acquisition sequences with different perturbations was performed by driving along a stretch of road in Linköping with weights loaded in the vehicle. The reference data were compared to camera system data, where the bias of the calculated angles was estimated along with the dynamic behaviour of the camera system algorithms. The experimental results showed that the accuracy of the system exceeded 0.1 degrees for both pitch and roll, but no conclusions about the bias of the algorithms could be drawn, as there were systematic errors present in the measurements.
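The geometry behind such a measurement system can be sketched as follows: three ground-distance readings at known mounting positions define a plane, whose normal yields pitch and roll. The mounting coordinates and sign conventions below are assumptions made for illustration, not the thesis's actual configuration.

```python
import numpy as np

def pitch_roll_from_lasers(p1, p2, p3):
    """Estimate pitch and roll (radians) from three laser ground points.

    p1, p2, p3: 3D points (x forward, y left, z up) where the lasers
    hit the ground, expressed in the vehicle body frame.
    """
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)      # normal of the ground plane
    n = n / np.linalg.norm(n)
    if n[2] < 0:                        # make the normal point "up"
        n = -n
    pitch = np.arctan2(n[0], n[2])      # rotation about the lateral axis
    roll = np.arctan2(n[1], n[2])       # rotation about the longitudinal axis
    return pitch, roll

# Example: sensors at the front, rear-left and rear-right of the body,
# measuring slightly different ground clearances (invented values).
front  = np.array([ 2.0,  0.0, -0.50])
rear_l = np.array([-2.0,  0.8, -0.52])
rear_r = np.array([-2.0, -0.8, -0.54])
print(np.degrees(pitch_roll_from_lasers(front, rear_l, rear_r)))
```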
APA, Harvard, Vancouver, ISO, and other styles
32

Antelo, Junior Ernesto Willams Molina. "Estimação conjunta de atraso de tempo subamostral e eco de referência para sinais de ultrassom." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2616.

Full text
Abstract:
In non-destructive testing (NDT) with ultrasound, the signal obtained from a real data acquisition system may be contaminated by noise, and the echoes may have sub-sample time delays. In some cases, these aspects may compromise the information obtained from a signal by an acquisition system. To deal with these situations, time delay estimation (TDE) techniques and signal reconstruction techniques can be used to perform approximations and to obtain more information about the data set. TDE techniques serve a number of purposes in defectoscopy, for example, the accurate location of defects in parts, the monitoring of corrosion rates, and the measurement of the thickness of a given material. Data reconstruction methods have a wide range of applications, such as NDT, medical imaging, and telecommunications. In general, most time delay estimation techniques require a high-precision signal model; otherwise, the quality of the estimated location may be reduced. In this work, an alternating scheme is proposed that jointly estimates a reference echo and the time delays of several echoes from noisy measurements. In addition, by reinterpreting the techniques from a probabilistic perspective, their functionality is extended through the joint application of a maximum likelihood estimator (MLE) and a maximum a posteriori (MAP) estimator. Finally, simulation results are presented to demonstrate the superiority of the proposed method over conventional methods.
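The alternating MLE/MAP scheme itself is not reproduced here; as a point of reference, a standard baseline for sub-sample time delay estimation is cross-correlation followed by parabolic interpolation of the correlation peak, sketched below on invented signals.

```python
import numpy as np

def subsample_delay(x, y):
    """Delay of y relative to x, in (fractional) samples, via
    cross-correlation with a parabolic fit around the peak."""
    c = np.correlate(y, x, mode="full")
    k = int(np.argmax(c))
    # Parabolic interpolation of the peak using its two neighbours.
    if 0 < k < len(c) - 1:
        denom = c[k - 1] - 2 * c[k] + c[k + 1]
        frac = 0.5 * (c[k - 1] - c[k + 1]) / denom if denom != 0 else 0.0
    else:
        frac = 0.0
    return (k - (len(x) - 1)) + frac

# Toy example: a Gaussian echo delayed by 10.3 samples plus noise.
t = np.arange(256)
echo = lambda d: np.exp(-0.5 * ((t - 60 - d) / 4.0) ** 2)
rng = np.random.default_rng(1)
x = echo(0.0) + 0.01 * rng.normal(size=t.size)
y = echo(10.3) + 0.01 * rng.normal(size=t.size)
print(subsample_delay(x, y))  # close to 10.3
```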
APA, Harvard, Vancouver, ISO, and other styles
33

Vaiter, Samuel. "Régularisations de Faible Complexité pour les Problèmes Inverses." Phd thesis, Université Paris Dauphine - Paris IX, 2014. http://tel.archives-ouvertes.fr/tel-01026398.

Full text
Abstract:
This thesis is devoted to recovery guarantees and to the sensitivity analysis of variational regularization for noisy linear inverse problems. The setting is a convex optimization problem combining a data-fidelity term and a regularization term promoting solutions living in a so-called low-complexity space. Our approach, based on the notion of partly smooth functions, enables the study of a wide variety of regularizers, such as analysis-type or structured sparsity, anti-sparsity, and low-rank structure. We first analyze robustness to noise, both in terms of the distance between the solutions and the original object and in terms of the stability of the promoted model space. We then study the stability of these optimization problems under perturbations of the observations. From random observations, we construct an unbiased estimator of the risk in order to obtain a parameter selection scheme.
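A canonical instance of such low-complexity regularization is l1 (sparsity) with a quadratic data-fidelity term; a minimal proximal gradient (ISTA) solver, given purely as an illustration and not as the thesis's algorithm, looks as follows.

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5 * ||A x - y||_2^2 + lam * ||x||_1
    by proximal gradient descent (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)               # gradient of the smooth term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Noisy linear inverse problem with a sparse ground truth (toy data).
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 200)) / np.sqrt(50)
x0 = np.zeros(200); x0[[3, 77, 150]] = [1.5, -2.0, 1.0]
y = A @ x0 + 0.01 * rng.normal(size=50)
x_hat = ista(A, y, lam=0.05)
print(np.nonzero(np.round(x_hat, 2))[0])    # approximately the support of x0
```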
APA, Harvard, Vancouver, ISO, and other styles
34

Zhang, Tianyu. "Problème inverse statistique multi-échelle pour l'identification des champs aléatoires de propriétés élastiques." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC2068.

Full text
Abstract:
Within the framework of linear elasticity theory, the numerical modeling and simulation of the mechanical behavior of heterogeneous materials with complex random microstructure give rise to many scientific challenges at different scales. Although at the macroscale such materials are usually modeled as homogeneous and deterministic elastic media, they are not only heterogeneous and random at the microscale, but they often also cannot be properly described by the local morphological and mechanical properties of their constituents. Consequently, a mesoscale is introduced between the macroscale and the microscale, at which the mechanical properties of such a random linear elastic medium are represented by a prior non-Gaussian stochastic model parameterized by a small or moderate number of unknown hyperparameters. In order to identify these hyperparameters, an innovative methodology has recently been proposed that solves a multiscale statistical inverse problem using only partial and limited experimental data at both the macroscale and the mesoscale. It has been formulated as a multi-objective optimization problem, which consists in minimizing a (vector-valued) multi-objective cost function defined by three numerical indicators corresponding to (scalar-valued) single-objective cost functions for quantifying and minimizing distances between multiscale experimental data, measured simultaneously at both the macroscale and the mesoscale on a single specimen subjected to a static test, and the numerical solutions of the deterministic and stochastic computational models used for simulating the multiscale experimental test configuration under uncertainties. This research work aims at contributing to the improvement of the multiscale statistical inverse identification method in terms of computational efficiency, accuracy, and robustness by introducing (i) an additional mesoscopic numerical indicator quantifying the distance between the spatial correlation length(s) of the measured experimental fields and those of the computed numerical fields at the mesoscale, so that each hyperparameter of the prior stochastic model has its own dedicated single-objective cost function, thus allowing the time-consuming global optimization algorithm (genetic algorithm) to be replaced with a more efficient algorithm, such as a fixed-point iterative algorithm, for solving the underlying multi-objective optimization problem at a lower computational cost, and (ii) an ad hoc stochastic representation of the hyperparameters involved in the prior stochastic model of the random elasticity field at the mesoscale, modeling them as random variables whose probability distributions can be constructed by using the maximum entropy principle under a set of constraints defined by the available objective information, and whose own hyperparameters can be determined by using the maximum likelihood estimation method with the available data, in order to enhance both the robustness and the accuracy of the statistical inverse identification of the prior stochastic model. In parallel, we also propose to solve the multi-objective optimization problem by using machine learning based on artificial neural networks. Finally, the improved methodology is first validated on a fictitious virtual material within the framework of 2D plane stress and 3D linear elasticity theory, and then illustrated on a real heterogeneous biological material (beef cortical bone) in 2D plane stress linear elasticity.
APA, Harvard, Vancouver, ISO, and other styles
35

Xun, Xiaolei. "Statistical Inference in Inverse Problems." Thesis, 2012. http://hdl.handle.net/1969.1/ETD-TAMU-2012-05-10874.

Full text
Abstract:
Inverse problems have gained popularity in statistical research recently. This dissertation consists of two statistical inverse problems: a Bayesian approach to the detection of small low-emission sources on a large random background, and parameter estimation methods for partial differential equation (PDE) models. The source detection problem arises, for instance, in some homeland security applications. We address the problem of detecting the presence and location of a small low-emission source inside an object when the background noise dominates. The goal is to reach signal-to-noise ratio levels on the order of 10^-3. We develop a Bayesian approach to this problem in two dimensions. The method allows inference not only about the existence of the source, but also about its location. We derive Bayes factors for model selection and estimation of location based on Markov chain Monte Carlo simulation. A simulation study shows that with a sufficiently high total emission level, our method can effectively locate the source. Differential equation (DE) models are widely used to model dynamic processes in many fields. The forward problem of solving the equations for given parameters that define the DEs has been extensively studied in the past. However, the inverse problem of estimating parameters based on observed state variables is treated relatively sparsely in the statistical literature, and this is especially the case for PDE models. We propose two joint modeling schemes to solve for constant parameters in PDEs: a parameter cascading method and a Bayesian treatment. In both methods, the unknown functions are expressed via basis function expansions. For the parameter cascading method, we develop the algorithm to estimate the parameters and derive a sandwich estimator of the covariance matrix. For the Bayesian method, we develop the joint model for the data and the PDE, and describe how the Markov chain Monte Carlo technique is employed to make posterior inference. A straightforward two-stage method is to first fit the data and then to estimate the parameters by the least squares principle. The three approaches are illustrated using simulated examples and compared via simulation studies. Simulation results show that the proposed methods outperform the two-stage method.
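The two-stage idea mentioned at the end of the abstract (first fit the data, then estimate parameters by least squares) can be illustrated on a toy ODE rather than a PDE; this simplified sketch, including the model and all numbers, is an assumption for exposition, not the dissertation's code.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Toy model: dx/dt = -theta * x, with true theta = 0.7 (invented).
theta_true = 0.7
t = np.linspace(0.0, 4.0, 60)
rng = np.random.default_rng(3)
x_obs = np.exp(-theta_true * t) + 0.01 * rng.normal(size=t.size)

# Stage 1: smooth the data with a spline and differentiate the fit.
spline = UnivariateSpline(t, x_obs, k=4, s=0.005)
x_fit = spline(t)
dx_fit = spline.derivative()(t)

# Stage 2: least squares for theta in dx/dt = -theta * x.
theta_hat = -np.sum(dx_fit * x_fit) / np.sum(x_fit**2)
print(theta_hat)  # close to 0.7
```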
APA, Harvard, Vancouver, ISO, and other styles
36

Singh, Aarti. "Nonparametric set estimation problems in statistical inference and learning." 2008. http://www.library.wisc.edu/databases/connect/dissertations.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Cowling, Ann Margaret. "Some problems in kernel curve estimation." Phd thesis, 1995. http://hdl.handle.net/1885/138794.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Shang-Heng Tsai and 蔡尚亨. "Small-Sample Statistical Condition Number Estimations for Numerical Algebra Problems." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/8y79ny.

Full text
Abstract:
Master's thesis, National Cheng Kung University, Department of Mathematics (Master's and Doctoral Program in Applied Mathematics), academic year 104. In this thesis, we study small-sample statistical condition number estimation (SSCE) to estimate the condition numbers of various numerical algebra problems. SSCE estimates the condition number of a numerical problem in a probabilistic sense; its main task is estimating the derivative of the target problem. In this thesis, we apply SSCE to linear systems, eigenvalue problems, and state feedback pole assignment problems, estimating the condition number of each.
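The core of small-sample statistical condition estimation is to probe the derivative of the solution map in a few random unit-norm directions; a crude sketch for the linear-system case x(A, b) = A^{-1}b, with a simplified normalization that is an assumption here rather than the thesis's scaling, is:

```python
import numpy as np

def sce_linear_solve(A, b, n_dirs=3, rng=None):
    """Small-sample statistical estimate of the sensitivity of x = A^{-1} b.

    Probes the Frechet derivative dx = A^{-1} (f - E x) in a few random
    unit-norm perturbation directions (E, f) and averages the norms.
    (The scaling convention here is simplified for illustration.)
    """
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    x = np.linalg.solve(A, b)
    norms = []
    for _ in range(n_dirs):
        E = rng.normal(size=(n, n))
        f = rng.normal(size=n)
        scale = np.sqrt(np.sum(E**2) + np.sum(f**2))   # unit-norm direction
        E, f = E / scale, f / scale
        dx = np.linalg.solve(A, f - E @ x)             # directional derivative
        norms.append(np.linalg.norm(dx))
    return np.mean(norms) / max(np.linalg.norm(x), 1e-300)

rng = np.random.default_rng(7)
A = rng.normal(size=(20, 20)); b = rng.normal(size=20)
print(sce_linear_solve(A, b, rng=rng), np.linalg.cond(A))
```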
APA, Harvard, Vancouver, ISO, and other styles
39

Munda, Sonu. "Study on the Problem of Estimation of Parameters of Generalized Exponential Distribution." Thesis, 2015. http://ethesis.nitrkl.ac.in/7277/1/Study_Munda_2015.pdf.

Full text
Abstract:
Mudholkar, Srivastava, and Freimer (1995) proposed the three-parameter exponentiated Weibull distribution. The two-parameter exponentiated exponential, or generalized exponential (GE), distribution is a particular member of the exponentiated Weibull family. We study the problem of estimating the unknown parameters of the GE distribution and describe some estimation techniques that are very useful for this purpose. We consider the methods of maximum likelihood, moments, percentiles, least squares, and weighted least squares estimation.
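For the GE distribution, with cdf F(x; a, lam) = (1 - exp(-lam*x))^a, the maximum likelihood estimates have no closed form; a short numerical sketch (an illustration of the method, not the thesis's code) is:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x):
    """Negative log-likelihood of the generalized exponential (GE)
    density f(x; a, lam) = a * lam * exp(-lam*x) * (1 - exp(-lam*x))**(a-1)."""
    a, lam = params
    if a <= 0 or lam <= 0:
        return np.inf
    u = -np.expm1(-lam * x)   # 1 - exp(-lam*x), computed stably
    return -np.sum(np.log(a) + np.log(lam) - lam * x + (a - 1) * np.log(u))

rng = np.random.default_rng(5)
# GE variates via the inverse cdf: x = -log(1 - U**(1/a)) / lam.
a_true, lam_true = 2.0, 1.5
u = rng.uniform(size=500)
x = -np.log(1.0 - u ** (1.0 / a_true)) / lam_true

res = minimize(neg_loglik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
print(res.x)  # close to (2.0, 1.5)
```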
APA, Harvard, Vancouver, ISO, and other styles
40

Richardson, Alice. "Some problems in estimation in mixed linear models." Phd thesis, 1995. http://hdl.handle.net/1885/133644.

Full text
Abstract:
This thesis is concerned with the properties of classical estimators of the parameters in mixed linear models, the development of robust estimators, and the properties and uses of these estimators. The first chapter contains a review of estimation in mixed linear models, and a description of four data sets that are used to illustrate the methods discussed. In the second chapter, some results about the asymptotic distribution of the restricted maximum likelihood (REML) estimator of variance components are stated and proven. Some asymptotic results are also stated and proven for the associated weighted least squares estimator of fixed effects. Central limit theorems are obtained using elementary arguments with only mild conditions on the covariates in the fixed part of the model and without having to assume that the data are either normally or spherically symmetrically distributed. It is also shown that the REML and maximum likelihood estimators of variance components are asymptotically equivalent. Robust estimators are proposed in the third chapter. These estimators are M-estimators constructed by applying weight functions to the log-likelihood, the restricted log-likelihood or the associated estimating equations. These functions reduce the influence of outlying observations on the parameter estimates. Other suggestions for robust estimators are also discussed, including Fellner's method. It is shown that Fellner's method is a direct robustification of the REML estimating equations, as well as being a robust version of Harville's algorithm, which in turn is equivalent to the expectation-maximisation (EM) algorithm of Dempster, Laird and Rubin. The robust estimators are then modified in the fourth chapter to define bounded influence estimators, also known as generalised M or GM estimators in the linear regression model. Outlying values of both the dependent variable and continuous independent variables are downweighted, creating estimators which are analogous to the GM estimators of Mallows and Schweppe. Some general results on the asymptotic properties of bounded influence estimators (of which maximum likelihood, REML and the robust methods of Chapter 3 are all special cases) are stated and proven. The method of proof is similar to that employed for the classical estimators in Chapter 2. Chapter 5 is concerned with the practical problem of selecting covariates in mixed linear models. In particular, a change of deviance statistic is proposed which provides an alternative to likelihood ratio test methodology and which can be applied in situations where the components of variance are estimated by REML. The deviance is specified by the procedure used to estimate the fixed effects, and the estimated covariance matrix is held fixed across different models for the fixed effects. The distribution of the change of deviance is derived, and a robustification of the change of deviance is given. Finally, in Chapter 6 a simulation study is undertaken to investigate the asymptotic properties of the proposed estimators in samples of moderate size. The empirical influence function of some of the estimators is studied, as is the distribution of the change of deviance statistic. Issues surrounding bounded influence estimation when there are outliers in the independent variables are also discussed.
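As a hedged illustration of REML estimation in a simple random-intercept model (using the statsmodels library, an anachronistic convenience relative to the thesis), one might write:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated random-intercept data: y = 1 + 2*x + u_group + noise.
rng = np.random.default_rng(11)
groups = np.repeat(np.arange(30), 10)
u = rng.normal(scale=0.8, size=30)[groups]   # group random effects
x = rng.normal(size=300)
y = 1.0 + 2.0 * x + u + rng.normal(scale=0.5, size=300)
df = pd.DataFrame({"y": y, "x": x, "g": groups})

# REML is the default criterion in statsmodels' MixedLM.
model = smf.mixedlm("y ~ x", df, groups=df["g"])
result = model.fit(reml=True)
print(result.summary())
```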
APA, Harvard, Vancouver, ISO, and other styles
41

Rath, Satarupa. "Estimation of Parameters of Some Distribution Functions and its Application to Optimization Problem." Thesis, 2012. http://ethesis.nitrkl.ac.in/3667/1/PTHESIS.pdf.

Full text
Abstract:
The thesis addresses the study of some basic results used in statistics and the estimation of parameters, presenting parameter estimation for some well-known discrete distribution functions. Chapter 1 contains an introduction to estimation theory and the motivation. Chapter 2 contains some basic results, definitions, methods of estimation, and the chance-constrained approach to linear programming problems. In Chapter 3, we estimate the parameters of well-known discrete distribution functions by different methods: the method of moments, maximum likelihood estimation, and Bayes estimation. Further, in Chapter 4, a chance-constrained method is discussed as an application of the beta distribution function.
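For a concrete discrete example of the estimation routes listed above, take Poisson data: the method of moments and the MLE coincide at the sample mean, and a conjugate Gamma prior gives a closed-form Bayes estimate. The distribution and prior below are illustrative choices, not necessarily the thesis's.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.poisson(lam=3.2, size=100)   # data from Poisson(3.2)
n = x.size

# Method of moments and MLE both give the sample mean for Poisson.
lam_mom = lam_mle = x.mean()

# Bayes with a conjugate Gamma(a0, b0) prior (shape/rate):
# the posterior is Gamma(a0 + sum(x), b0 + n), and the Bayes estimate
# under squared-error loss is the posterior mean.
a0, b0 = 2.0, 1.0
lam_bayes = (a0 + x.sum()) / (b0 + n)

print(lam_mom, lam_bayes)
```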
APA, Harvard, Vancouver, ISO, and other styles
42

Choi, Chik Wan Edwin. "Some problems in curve and surface estimation." Phd thesis, 1998. http://hdl.handle.net/1885/144268.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Lin, Lie-fen. "Uses of Bayesian posterior modes in solving complex estimation problems in statistics." Thesis, 1992. http://hdl.handle.net/1957/36872.

Full text
Abstract:
In Bayesian analysis, means are commonly used to summarize Bayesian posterior distributions. Problems with a large number of parameters often require numerical integration over many dimensions to obtain means. In this dissertation, posterior modes with respect to appropriate measures are used to summarize Bayesian posterior distributions, using the Newton-Raphson method to locate the modes. Further inference about the modes relies on the normal approximation, using asymptotic multivariate normal distributions to approximate posterior distributions. These techniques are applied to two statistical estimation problems. First, Bayesian sequential dose selection procedures are developed for bioassay problems using Ramsey's prior [28]. Two adaptive designs for Bayesian sequential dose selection and estimation of the potency curve are given. The relative efficiency is used to compare the adaptive methods with other non-Bayesian methods (Spearman-Karber, up-and-down, and Robbins-Monro) for estimating the ED50. Second, posterior distributions of the order of an autoregressive (AR) model are determined following Robb's method (1980). Wolfer's sunspot data are used as an example to compare the estimation results with those of the FPE, AIC, BIC, and CIC methods. Both Robb's method and the normal approximation yield full posterior results for the estimation of the order.
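The recipe in the abstract (Newton-Raphson for the posterior mode, then a normal approximation from the curvature at the mode) can be sketched on a one-parameter example; the Beta-Bernoulli model below is chosen purely for illustration.

```python
import numpy as np

# Beta(a, b) prior, y successes in m Bernoulli trials (toy numbers).
a, b, y, m = 2.0, 2.0, 14, 40

def grad(t):   # first derivative of the log posterior
    return (y + a - 1) / t - (m - y + b - 1) / (1 - t)

def hess(t):   # second derivative of the log posterior
    return -(y + a - 1) / t**2 - (m - y + b - 1) / (1 - t)**2

# Newton-Raphson iteration to the posterior mode.
t = 0.5
for _ in range(20):
    t = t - grad(t) / hess(t)

# Laplace (normal) approximation: N(mode, -1 / hess(mode)).
mode, sd = t, np.sqrt(-1.0 / hess(t))
print(mode, sd)   # the mode equals (y+a-1)/(m+a+b-2) analytically
```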
APA, Harvard, Vancouver, ISO, and other styles
44

Matić, Rada. "Estimation Problems Related to Random Matrix Ensembles." Doctoral thesis, 2006. http://hdl.handle.net/11858/00-1735-0000-0006-B406-B.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Perry, Marcus B. Pignatiello Joseph J. "Robust change detection and change point estimation for poisson count processes." 2004. http://etd.lib.fsu.edu/theses/available/etd-06032004-164842.

Full text
Abstract:
Thesis (Ph. D.)--Florida State University, 2004. Advisor: Dr. Joseph Pignatiello, Jr., Florida State University, College of Engineering, Dept. of Industrial Engineering. Title and description from dissertation home page (viewed Sept. 23, 2004). Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
46

BALLERINI, VERONICA. "Fisher's noncentral hypergeometric distribution and population size estimation problems." Doctoral thesis, 2021. http://hdl.handle.net/11573/1563206.

Full text
Abstract:
Fisher's noncentral hypergeometric distribution (FNCH) describes a biased urn experiment with independent draws of differently coloured balls, where each colour is associated with a different weight (Fisher (1935), Fog (2008a)). FNCH potentially suits many official statistics problems. However, the distribution has been underemployed in the statistical literature, mainly because of the computational burden imposed by its probability mass function: as the number of draws and the number of categories in the population increase, any method that involves evaluating the likelihood becomes practically infeasible. In the first part of this work, we present a methodology to estimate the posterior distribution of the population size, exploiting both the possibility of including extra-experimental information and the computational efficiency of MCMC and ABC methods. The second part devotes particular attention to overcoverage, i.e., the possibility that one or more data sources erroneously include some out-of-scope units. After a critical review of the most recent literature, we present an alternative model of the latent erroneous counts in a capture-recapture framework, simultaneously addressing overcoverage and undercoverage problems. We show the utility of FNCH in this context, both in the posterior sampling process and in the elicitation of prior distributions. We rely on the PCI assumption of Zhang (2019) to include non-negligible prior information. Finally, we address model selection, which is not trivial in the framework of log-linear models when there are few (or even zero) degrees of freedom.
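SciPy ships an implementation of this distribution as scipy.stats.nchypergeom_fisher (available in recent versions); the urn parameters below are illustrative only.

```python
from scipy.stats import nchypergeom_fisher

# Biased urn: M balls in total, n of them red, N drawn, with an odds
# ratio `odds` favouring red balls (illustrative numbers).
M, n, N, odds = 50, 15, 20, 2.5
dist = nchypergeom_fisher(M, n, N, odds)

print(dist.pmf(7))                        # P(exactly 7 red balls drawn)
print(dist.mean())                        # mean number of red balls drawn
print(dist.rvs(size=5, random_state=0))   # a few simulated draws
```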
APA, Harvard, Vancouver, ISO, and other styles
47

Frick, Sophie. "Change point estimation in noisy Hammerstein integral equations." Doctoral thesis, 2010. http://hdl.handle.net/11858/00-1735-0000-0006-B6A4-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Narayanaprasad, Karthik Periyapattana. "Sequential Controlled Sensing to Detect an Anomalous Process." Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5514.

Full text
Abstract:
In this thesis, we study the problem of identifying an anomalous arm in a multi-armed bandit as quickly as possible, subject to an upper bound on the error probability. Also known as odd arm identification, this problem falls within the class of optimal stopping problems in decision theory and can be embedded within the framework of active sequential hypothesis testing. Prior works on odd arm identification dealt with independent and identically distributed observations from each arm. We provide the first known extension to the case of Markov observations from each arm. Our analysis and results are in the asymptotic regime of vanishing error probability. We associate with each arm an ergodic discrete-time Markov process that evolves on a common, finite state space. The notion of anomaly is that the transition probability matrix (TPM) of the Markov process of one of the arms (the odd arm) is some P1, and that of each non-odd arm is a different P2, P2 ≠ P1. A learning agent whose goal it is to find the odd arm samples the arms sequentially, one at any given time, and observes the state of the Markov process of the sampled arm. The Markov processes of the unobserved arms may either remain frozen at their last observed states until their next sampling instant (rested arms) or continue to undergo state evolution (restless arms). The TPMs P1 and P2 may be known to the learning agent beforehand or unknown. We analyse the following cases: (a) rested arms with TPMs unknown, (b) restless arms with TPMs known, and (c) restless arms with TPMs unknown. For each of these cases, we first present an asymptotic lower bound on the growth rate of the expected time required to find the odd arm, and subsequently devise a sequential arm selection policy which we show achieves the lower bound and is therefore asymptotically optimal. A key ingredient in our analysis of the setting of rested arms is the observation that for the Markov process of each arm, the long-term fraction of entries into a state is equal to the long-term fraction of exits from the state (global balance). When the arms are restless, it is necessary for the learning agent to keep track of the time since each arm was last sampled (the arm's delay) and the state of each arm when it was last sampled (the arm's last observed state). We show that the arm delays and the last observed states form a controlled Markov process which is ergodic under any stationary arm selection policy that picks each arm with a strictly positive probability. Our approach of considering the delays and the last observed states of all the arms jointly offers a global perspective of the arms and serves as a 'lift' from the local perspective of dealing with the delay and the last observed state of each arm separately, one that is suggested by the prior works. Lastly, when the TPMs are unknown and have to be estimated along the way, it is important to ensure that the estimates converge almost surely to their true values asymptotically, i.e., that the system is identifiable. We show that identifiability follows from the ergodic theorem in the rested case, and provide sufficient conditions for it in the restless case.
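A drastically simplified caricature of odd arm identification, with iid Bernoulli arms in place of Markov processes and known success probabilities in place of TPMs (a toy constructed here for illustration, not the thesis's policy), conveys the stopping logic:

```python
import numpy as np

def find_odd_arm(K, p_odd, p_other, delta=1e-3, rng=None):
    """Toy odd-arm identification with iid Bernoulli arms and known
    parameters: sample arms round-robin, track the posterior over
    which arm is odd, and stop when one hypothesis exceeds 1 - delta."""
    rng = rng or np.random.default_rng()
    odd = rng.integers(K)                 # hidden odd arm
    log_post = np.zeros(K)                # uniform prior over hypotheses
    t = 0
    while True:
        arm = t % K                       # round-robin sampling (not optimal)
        p_true = p_odd if arm == odd else p_other
        obs = rng.uniform() < p_true
        for h in range(K):                # likelihood update per hypothesis
            p_h = p_odd if arm == h else p_other
            log_post[h] += np.log(p_h if obs else 1 - p_h)
        t += 1
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        if post.max() > 1 - delta:
            return int(post.argmax()), int(odd), t

guess, truth, samples = find_odd_arm(K=5, p_odd=0.7, p_other=0.4,
                                     rng=np.random.default_rng(9))
print(guess, truth, samples)
```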
APA, Harvard, Vancouver, ISO, and other styles
49

Sawlan, Zaid A. "Statistical Analysis and Bayesian Methods for Fatigue Life Prediction and Inverse Problems in Linear Time Dependent PDEs with Uncertainties." Diss., 2018. http://hdl.handle.net/10754/629731.

Full text
Abstract:
This work employs statistical and Bayesian techniques to analyze mathematical forward models with several sources of uncertainty. The forward models usually arise from phenomenological and physical phenomena and are expressed through regression-based models or partial differential equations (PDEs) associated with uncertain parameters and input data. One of the critical challenges in real-world applications is to quantify the uncertainties of the unknown parameters using observations. For this purpose, methods based on the likelihood function and Bayesian techniques constitute the two main statistical inferential approaches considered here. Two problems are studied in this thesis. The first is the prediction of the fatigue life of metallic specimens. The second concerns inverse problems in linear PDEs. Both problems require the inference of unknown parameters given certain measurements. We first estimate the parameters by means of the maximum likelihood approach. Next, we seek a more comprehensive Bayesian inference using analytical asymptotic approximations or computational techniques. In fatigue life prediction, there are several plausible probabilistic stress-lifetime (S-N) models. These models are calibrated against uniaxial fatigue experiments. To generate accurate fatigue life predictions, competing S-N models are ranked according to several classical information-based measures. A different set of predictive information criteria is then used to compare the candidate Bayesian models. Moreover, we propose a spatial stochastic model to generalize S-N models to fatigue crack initiation in general geometries. The model is based on a spatial Poisson process with an intensity function that combines the S-N curves with an averaged effective stress that is computed from the solution of the linear elasticity equations.
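A minimal sketch of calibrating one probabilistic S-N model, a Basquin-type curve with lognormal life, which is one illustrative choice among the "several plausible" models the abstract mentions (the data and the model choice are assumptions, not the thesis's):

```python
import numpy as np

# Invented uniaxial fatigue data: stress amplitude S (MPa), cycles to failure N.
S = np.array([300, 280, 260, 240, 220, 200.0])
N = np.array([4.1e4, 7.9e4, 1.6e5, 3.4e5, 8.8e5, 2.1e6])

# Basquin-type lognormal model: log N = b0 + b1 * log S + eps,
# eps ~ Normal(0, sigma^2).  The MLE of (b0, b1) is ordinary least
# squares on the log scale; the MLE of sigma^2 is the mean squared residual.
X = np.column_stack([np.ones_like(S), np.log(S)])
y = np.log(N)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = np.mean((y - X @ beta) ** 2)

print(beta, np.sqrt(sigma2))
# Median life prediction at S = 250 MPa:
print(np.exp(beta[0] + beta[1] * np.log(250.0)))
```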
APA, Harvard, Vancouver, ISO, and other styles
50

O'Brien, Sophie. "Estimating Prevalence from Complex Surveys." 2014. https://scholarworks.umass.edu/masters_theses_2/105.

Full text
Abstract:
Massachusetts passed legislation in the fall of 2012 to allow the construction of three casinos and a slot parlor in the state. The prevalence of problem gambling in the state, and in the areas where the casinos will be constructed, is of particular interest; the goal is to evaluate the change in prevalence after construction of the casinos using a multi-mode, address-based sample survey. The objective of this thesis is to evaluate and describe ways of using statistical inference to estimate prevalence rates in finite populations. Four methods were considered in an attempt to evaluate the prevalence of problem gambling in the context of the gambling study: the simple mean, the post-stratified mean, the best linear unbiased predictor (BLUP), and the empirical best linear unbiased predictor (EBLUP). These methods were evaluated unconditionally and conditionally, controlling for gender, using mean square error (MSE) as a measure of accuracy, in three examples. In conditional analyses of a population with N=1,000 and a crude problem gambling rate of 1.5, with samples of n=200, the simple mean and the post-stratified mean performed better in certain situations, as measured by their low MSE values. When there are fewer females than expected in a sample, the post-stratified mean produces a lower mean MSE over the 10,000 simulations; when there are more females than expected, the simple mean produces a lower mean MSE. Conditional analysis provided more appropriate results than unconditional analysis.
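The post-stratified mean in the comparison weights the stratum sample means by known population shares; a minimal sketch with invented counts:

```python
import numpy as np

# Known population stratum sizes (e.g., females/males in N = 1,000).
N_h = np.array([520, 480])

# Sample responses by stratum (invented 0/1 problem-gambling indicators).
rng = np.random.default_rng(4)
samples = [rng.binomial(1, 0.02, size=90),    # females observed in the sample
           rng.binomial(1, 0.01, size=110)]   # males observed in the sample

simple_mean = np.concatenate(samples).mean()
post_stratified = np.sum(N_h * np.array([s.mean() for s in samples])) / N_h.sum()

print(simple_mean, post_stratified)
```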
APA, Harvard, Vancouver, ISO, and other styles
