Dissertations / Theses on the topic 'Entropy estimator'

Consult the top 50 dissertations / theses for your research on the topic 'Entropy estimator.'


1

Xie, Li Information Technology & Electrical Engineering Australian Defence Force Academy UNSW. "Finite horizon robust state estimation for uncertain finite-alphabet hidden Markov models." Awarded by: University of New South Wales - Australian Defence Force Academy. School of Information Technology and Electrical Engineering, 2004. http://handle.unsw.edu.au/1959.4/38664.

Full text
Abstract:
In this thesis, we consider a robust state estimation problem for discrete-time, homogeneous, first-order, finite-state finite-alphabet hidden Markov models (HMMs). Based on Kolmogorov's Theorem on the existence of a process, we first present the Kolmogorov model for the HMMs under consideration. A new change of measure is introduced. The statistical properties of the Kolmogorov representation of an HMM are discussed on the canonical probability space. A special Kolmogorov measure is constructed. Meanwhile, the ergodicity of two expanded Markov chains is investigated. In order to describe the uncertainty of HMMs, we study probability distance problems based on the Kolmogorov model of HMMs. Using a change of measure technique, the relative entropy and the relative entropy rate as probability distances between HMMs are given in terms of the HMM parameters. Also, we obtain a new expression for a probability distance considered in the existing literature such that we can use an information state method to calculate it. Furthermore, we introduce regular conditional relative entropy as an a posteriori probability distance to measure the discrepancy between HMMs when a realized observation sequence is given. A representation of the regular conditional relative entropy is derived based on the Radon-Nikodym derivative. Then a recursion for the regular conditional relative entropy is obtained using an information state method. Meanwhile, the well-known duality relationship between free energy and relative entropy is extended to the case of regular conditional relative entropy given a sub-σ-algebra. Finally, regular conditional relative entropy constraints are defined based on the study of the probability distance problem. Using a Lagrange multiplier technique and the duality relationship for regular conditional relative entropy, a finite horizon robust state estimator for HMMs with regular conditional relative entropy constraints is derived.
A complete characterization of the solution to the robust state estimation problem is also presented.
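For readers unfamiliar with the probability distance used throughout this abstract, the discrete relative entropy D(p || q) can be sketched as follows (a generic helper for finite distributions, not the thesis's HMM-specific recursion):

```python
import math

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) in nats for finite distributions."""
    if abs(sum(p) - 1.0) > 1e-9 or abs(sum(q) - 1.0) > 1e-9:
        raise ValueError("p and q must each sum to 1")
    d = 0.0
    for pi, qi in zip(p, q):
        if pi > 0.0:
            if qi == 0.0:
                return math.inf  # q must dominate p for a finite divergence
            d += pi * math.log(pi / qi)
    return d
```

D(p || q) is zero exactly when p equals q, which is what makes it usable as a (non-symmetric) distance between models.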
APA, Harvard, Vancouver, ISO, and other styles
2

Ferfache, Anouar Abdeldjaoued. "Les M-estimateurs semiparamétriques et leurs applications pour les problèmes de ruptures." Thesis, Compiègne, 2021. http://www.theses.fr/2021COMP2643.

Full text
Abstract:
In this dissertation we are concerned with semiparametric models. These models have success and impact in mathematical statistics due to their excellent scientific utility and intriguing theoretical complexity. In the first part of the thesis, we consider the problem of the estimation of a parameter θ, in Banach spaces, maximizing some criterion function which depends on an unknown nuisance parameter h, possibly infinite-dimensional. We show that the m out of n bootstrap, in a general setting, is weakly consistent under conditions similar to those required for weak convergence of non-smooth M-estimators. In this framework, delicate mathematical derivations are required to cope with estimators of the nuisance parameters inside non-smooth criterion functions. We then investigate an exchangeable weighted bootstrap for function-valued estimators defined as a zero point of a function-valued random criterion function. The main ingredient is the use of a differential identity that applies when the random criterion function is linear in terms of the empirical measure. A large number of bootstrap resampling schemes emerge as special cases of our setting. Examples of applications from the literature are given to illustrate the generality and the usefulness of our results. The second part of the thesis is devoted to statistical models with multiple change-points.
The main purpose of this part is to investigate the asymptotic properties of semiparametric M-estimators with non-smooth criterion functions of the parameters of multiple change-points model for a general class of models in which the form of the distribution can change from segment to segment and in which, possibly, there are parameters that are common to all segments. Consistency of the semiparametric M-estimators of the change-points is established and the rate of convergence is determined. The asymptotic normality of the semiparametric M-estimators of the parameters of the within-segment distributions is established under quite general conditions. We finally extend our study to the censored data framework. We investigate the performance of our methodologies for small samples through simulation studies
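The m out of n bootstrap studied in the first part draws samples of size m < n with replacement and recomputes the statistic on each; a generic sketch (the `statistic` argument and parameter names here are illustrative, not from the thesis):

```python
import random

def m_out_of_n_bootstrap(data, statistic, m, n_boot=1000, seed=0):
    """Return n_boot replicates of `statistic` computed on samples of size m
    drawn with replacement from `data` (with m < len(data))."""
    rng = random.Random(seed)
    return [statistic([rng.choice(data) for _ in range(m)])
            for _ in range(n_boot)]

# Example: bootstrap distribution of the sample mean with m = 20, n = 100.
data = [float(i) for i in range(100)]
reps = m_out_of_n_bootstrap(data, lambda s: sum(s) / len(s), m=20, n_boot=200)
```

Choosing m smaller than n is what restores consistency in non-regular problems where the classical n out of n bootstrap fails.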
3

Butkuvienė, Rita. "Baigtinės populiacijos parametrų įvertinių tikslumo tyrimas modeliuojant." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2013~D_20130617_111757-43302.

Full text
Abstract:
The successive sampling design, belonging to the class of order sampling designs with fixed order distribution shape, is studied. Analytical expressions for the design probability and the element inclusion probability are obtained. Entropy is used to compare the successive, Pareto and sequential Poisson sampling designs, which belong to the same class. A two-phase sampling design for stratification with first-phase order sampling is also studied. The total of the study variable values, defined on a finite population, is estimated by a quasi-Horvitz-Thompson estimator. Whether the second-phase stratification reduces the variance of the estimator is investigated by simulation. The study is carried out for estimates of the number of employed, the number of unemployed and the unemployment rate using Lithuanian Labour Force Survey data.
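The quasi-Horvitz-Thompson estimator mentioned in this abstract builds on the plain Horvitz-Thompson form, which weights each sampled value by the inverse of its inclusion probability; a minimal sketch:

```python
def horvitz_thompson_total(y_sample, incl_probs):
    """Horvitz-Thompson estimator of a finite-population total: each sampled
    value y_i is weighted by 1 / pi_i, its inclusion probability."""
    if len(y_sample) != len(incl_probs):
        raise ValueError("one inclusion probability per sampled value")
    return sum(y / p for y, p in zip(y_sample, incl_probs))
```

With all inclusion probabilities equal to 1 (a census) the estimator reduces to the plain sum of the values.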
4

Paavola, M. (Marko). "An efficient entropy estimation approach." Doctoral thesis, Oulun yliopisto, 2011. http://urn.fi/urn:isbn:9789514295935.

Full text
Abstract:
Advances in miniaturisation have led to the development of new wireless measurement technologies such as wireless sensor networks (WSNs). A WSN consists of low cost nodes, which are battery-operated devices, capable of sensing the environment, transmitting and receiving, and computing. While a WSN has several advantages, including cost-effectiveness and easy installation, the nodes suffer from small memory, low computing power, small bandwidth and limited energy supply. In order to cope with restrictions on resources, data processing methods should be as efficient as possible. As a result, high-quality approximations are preferred instead of accurate answers. The aim of this thesis was to propose an efficient entropy approximation method for resource-constrained environments. Specifically, the algorithm should use a small, constant amount of memory, and have certain accuracy and low computational demand. The performance of the proposed algorithm was evaluated experimentally with three case studies. The first study focused on the online monitoring of WSN communications performance in an industrial environment. The monitoring approach was based on the observation that entropy could be applied to assess the impact of interferences on time-delay variation of periodic tasks. The main purpose of the additional two cases, depth of anaesthesia (DOA) monitoring and benchmarking with simulated data sets, was to provide additional evidence on the general applicability of the proposed method. Moreover, in the case of DOA monitoring, an efficient entropy approximation could assist in the development of handheld devices or processing large amounts of online data from different channels simultaneously. The initial results from the communication and DOA monitoring applications as well as from simulations were encouraging. Therefore, based on the case studies, the proposed method was able to meet the stated requirements. 
Since entropy is a widely used quantity, the method is also expected to have a variety of applications in measurement systems with similar requirements.
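A constant-memory entropy approximation of the kind this thesis calls for can be illustrated with a toy fixed-bin histogram over a sliding window (an assumption-laden sketch with made-up parameters, not the thesis's proposed algorithm):

```python
import math
from collections import deque

def sliding_window_entropy(stream, window=64, bins=8):
    """Yield an entropy estimate (bits) of the last `window` values in [0, 1),
    using a fixed-bin histogram updated incrementally. Memory use is bounded
    by the window length and bin count, regardless of stream length."""
    counts = [0] * bins
    recent = deque()
    for x in stream:
        b = min(int(x * bins), bins - 1)  # map value to its histogram bin
        recent.append(b)
        counts[b] += 1
        if len(recent) > window:
            counts[recent.popleft()] -= 1  # evict the oldest value's bin
        if len(recent) == window:
            yield -sum((c / window) * math.log2(c / window)
                       for c in counts if c)
```

A constant stream yields entropy 0; a stream spread evenly over all 8 bins yields the maximum of 3 bits.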
5

Nilsson, Mattias. "Entropy and Speech." Doctoral thesis, Stockholm : Sound and Image Processing Laboratory, School of Electrical Engineering, Royal Institute of Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3990.

Full text
6

Grafström, Anton. "On unequal probability sampling designs." Doctoral thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-33701.

Full text
Abstract:
The main objective in sampling is to select a sample from a population in order to estimate some unknown population parameter, usually a total or a mean of some interesting variable. When the units in the population do not have the same probability of being included in a sample, it is called unequal probability sampling. The inclusion probabilities are usually chosen to be proportional to some auxiliary variable that is known for all units in the population. When unequal probability sampling is applicable, it generally gives much better estimates than sampling with equal probabilities. This thesis consists of six papers that treat unequal probability sampling from a finite population of units. A random sample is selected according to some specified random mechanism called the sampling design. For unequal probability sampling there exist many different sampling designs. The choice of sampling design is important since it determines the properties of the estimator that is used. The main focus of this thesis is on evaluating and comparing different designs. Often it is preferable to select samples of a fixed size and hence the focus is on such designs. It is also important that a design has a simple and efficient implementation in order to be used in practice by statisticians. Some effort has been made to improve the implementation of some designs. In Paper II, two new implementations are presented for the Sampford design. In general a sampling design should also have a high level of randomization. A measure of the level of randomization is entropy. In Paper IV, eight designs are compared with respect to their entropy. A design called adjusted conditional Poisson has maximum entropy, but it is shown that several other designs are very close in terms of entropy. A specific situation called real time sampling is treated in Paper III, where a new design called correlated Poisson sampling is evaluated. In real time sampling the units pass the sampler one by one. 
Since each unit only passes once, the sampler must directly decide for each unit whether or not it should be sampled. The correlated Poisson design is shown to have much better properties than traditional methods such as Poisson sampling and systematic sampling.
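Entropy as a measure of a design's level of randomization can be made concrete for Poisson sampling, whose independent inclusions give a closed-form entropy (an illustrative sketch; the fixed-size designs compared in the thesis, such as adjusted conditional Poisson, are more involved):

```python
import math
import random

def poisson_sample(incl_probs, seed=0):
    """Poisson sampling: include each unit independently with its own
    inclusion probability. The realized sample size is random."""
    rng = random.Random(seed)
    return [i for i, p in enumerate(incl_probs) if rng.random() < p]

def poisson_design_entropy(incl_probs):
    """Entropy (nats) of the Poisson design: because inclusions are
    independent, the design entropy is the sum of Bernoulli entropies."""
    h = 0.0
    for p in incl_probs:
        if 0.0 < p < 1.0:
            h += -(p * math.log(p) + (1 - p) * math.log(1 - p))
    return h
```

The random sample size of Poisson sampling is precisely why fixed-size, high-entropy alternatives like conditional Poisson sampling are of practical interest.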
7

Sake, Lekhya Sai. "ESTIMATION ON GIBBS ENTROPY FOR AN ENSEMBLE." CSUSB ScholarWorks, 2015. https://scholarworks.lib.csusb.edu/etd/264.

Full text
Abstract:
In today's world of growing technology, any small improvement in the present scenario can create a revolution, and one of the major developments in the computer science field is parallel computing. A single parallel execution is not sufficient to reveal its non-deterministic features, since the same execution with the same data at a different time can follow a different path. Measuring how non-deterministic a parallel execution can be therefore requires an ensemble of executions. This project implements a program to estimate the Gibbs entropy for an ensemble of parallel executions. The goal is to develop tools for studying the non-deterministic behaviour of parallel code based on execution entropy, and to use these tools for current and future research.
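For observed path frequencies, the Gibbs entropy of an ensemble reduces to the familiar -sum(p_i * ln p_i); a minimal sketch that treats each execution's path as a hashable label (an illustration, not the project's implementation):

```python
import math
from collections import Counter

def gibbs_entropy(execution_paths):
    """Estimate the Gibbs/Shannon entropy (nats) of an ensemble, where p_i
    is the observed frequency of each distinct execution path."""
    counts = Counter(execution_paths)
    n = len(execution_paths)
    return -sum((c / n) * math.log(c / n) for c in counts.values())
```

An ensemble that always follows one path has entropy 0; maximally non-deterministic ensembles approach ln(number of distinct paths).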
8

Fischer, Richard. "Modélisation de la dépendance pour des statistiques d'ordre et estimation non-paramétrique." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1039/document.

Full text
Abstract:
In this thesis we consider the modelling of the joint distribution of order statistics, i.e. random vectors with almost surely ordered components. The first part is dedicated to the probabilistic modelling of order statistics of maximal entropy with marginal constraints. Given the marginal constraints, the characterization of the joint distribution can be given by the associated copula. Chapter 2 presents an auxiliary result giving the maximum entropy copula with a fixed diagonal section. We give a necessary and sufficient condition for its existence, and derive an explicit formula for its density and entropy. Chapter 3 provides the solution of the maximum entropy problem for order statistics with marginal constraints by identifying the copula of the maximum entropy distribution. We give explicit formulas for the copula and the joint density. An application to modelling physical parameters is given in Chapter 4. In the second part of the thesis, we consider the problem of nonparametric estimation of maximum entropy densities of order statistics in Kullback-Leibler distance. Chapter 5 presents an aggregation method for probability density and spectral density estimation, based on the convex combination of the logarithms of these functions, and gives non-asymptotic bounds on the aggregation rate. In Chapter 6, we propose an adaptive estimation method based on a log-additive exponential model to estimate maximum entropy densities of order statistics which achieves the known minimax convergence rates. The method is applied to estimating flaw dimensions in Chapter 7.
9

Höns, Robin. "Estimation of distribution algorithms and minimum relative entropy." [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=980407877.

Full text
10

Thomaz, Carlos Eduardo. "Maximum entropy covariance estimate for statistical pattern recognition." Thesis, Imperial College London, 2004. http://hdl.handle.net/10044/1/8755.

Full text
11

Tahmasbi, Mohammad Saeed. "VLSI implementation of heart sounds maximum entropy spectral estimation /." Title page, contents and summary only, 1994. http://web4.library.adelaide.edu.au/theses/09ENS/09enst128.pdf.

Full text
12

Juhlin, Sanna. "An Entropy Estimate of Written Language and Twitter Language : A Comparison between English and Swedish." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-64952.

Full text
Abstract:
The purpose of this study is to estimate and compare the entropy and redundancy of written English and Swedish. We also investigate and compare the entropy and redundancy of Twitter language. This is done by extracting n consecutive characters, called n-grams, and calculating their frequencies. No precise values can be obtained, since entropy is defined for text length tending towards infinity while the amount of available text is finite. However, we do obtain results for n = 1, ..., 6, and they show that written Swedish has higher entropy than written English and that the redundancy is lower for Swedish. When comparing Twitter with the standard languages, we find that for Twitter the entropy is higher and the redundancy is lower.
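The n-gram procedure described in this abstract can be sketched as follows (overlapping n-grams, entropy in bits per character; a simplified illustration, not the study's exact estimator):

```python
import math
from collections import Counter

def ngram_entropy_per_char(text, n):
    """Shannon entropy estimate in bits per character: the entropy of the
    empirical n-gram distribution divided by n, using overlapping n-grams."""
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    total = len(grams)
    counts = Counter(grams)
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / n
```

For a finite corpus the estimate for growing n drifts below the true entropy rate, which is why studies like this one report values for a small range such as n = 1, ..., 6.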
13

Källberg, David. "Nonparametric Statistical Inference for Entropy-type Functionals." Doctoral thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-79976.

Full text
Abstract:
In this thesis, we study statistical inference for entropy, divergence, and related functionals of one or two probability distributions. Asymptotic properties of particular nonparametric estimators of such functionals are investigated. We consider estimation from both independent and dependent observations. The thesis consists of an introductory survey of the subject and some related theory and four papers (A-D). In Paper A, we consider a general class of entropy-type functionals which includes, for example, integer order Rényi entropy and certain Bregman divergences. We propose U-statistic estimators of these functionals based on the coincident or epsilon-close vector observations in the corresponding independent and identically distributed samples. We prove some asymptotic properties of the estimators such as consistency and asymptotic normality. Applications of the obtained results related to entropy maximizing distributions, stochastic databases, and image matching are discussed. In Paper B, we provide some important generalizations of the results for continuous distributions in Paper A. The consistency of the estimators is obtained under weaker density assumptions. Moreover, we introduce a class of functionals of quadratic order, including both entropy and divergence, and prove normal limit results for the corresponding estimators which are valid even for densities of low smoothness. The asymptotic properties of a divergence-based two-sample test are also derived. In Paper C, we consider estimation of the quadratic Rényi entropy and some related functionals for the marginal distribution of a stationary m-dependent sequence. We investigate asymptotic properties of the U-statistic estimators for these functionals introduced in Papers A and B when they are based on a sample from such a sequence. We prove consistency, asymptotic normality, and Poisson convergence under mild assumptions for the stationary m-dependent sequence. 
Applications of the results to time-series databases and entropy-based testing for dependent samples are discussed. In Paper D, we further develop the approach for estimation of quadratic functionals with m-dependent observations introduced in Paper C. We consider quadratic functionals for one or two distributions. The consistency and rate of convergence of the corresponding U-statistic estimators are obtained under weak conditions on the stationary m-dependent sequences. Additionally, we propose estimators based on incomplete U-statistics and show their consistency properties under more general assumptions.
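The quadratic Rényi entropy targeted by these U-statistic estimators can be illustrated with a toy 1-D version based on counting epsilon-close pairs (a simplified sketch of the estimator class, with made-up sample and bandwidth, not the papers' exact construction):

```python
import math
import random

def quadratic_renyi_entropy(sample, eps):
    """Toy U-statistic estimate of H2 = -log(integral of f^2) for a 1-D
    sample: integral of f^2 is estimated by the fraction of eps-close pairs
    divided by the window length 2 * eps."""
    n = len(sample)
    close = sum(1 for i in range(n) for j in range(i + 1, n)
                if abs(sample[i] - sample[j]) < eps)
    q_hat = close / (n * (n - 1) / 2) / (2 * eps)
    return -math.log(q_hat)

rng = random.Random(1)
sample = [rng.random() for _ in range(500)]   # uniform on [0, 1]: true H2 = 0
est = quadratic_renyi_entropy(sample, eps=0.05)
```

The brute-force pair count is O(n^2); the papers' epsilon-close-pair constructions are what make such statistics tractable and analysable.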
14

Jain, Akash. "Estimation of Melting Points of Organic Compounds." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1303%5F1%5Fm.pdf&type=application/pdf.

Full text
15

Kim, Sangil. "Ensemble Filtering Methods for Nonlinear Dynamics." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1101%5F1%5Fm.pdf&type=application/pdf.

Full text
16

Aragon, Nathan D. "Flip estimators, cross-entropy, and half-stationary bounding processes for Monte Carlo simulations." Connect to online resource, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3315822.

Full text
17

Berrett, Thomas Benjamin. "Modern k-nearest neighbour methods in entropy estimation, independence testing and classification." Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/267832.

Full text
Abstract:
Nearest neighbour methods are a classical approach in nonparametric statistics. The k-nearest neighbour classifier can be traced back to the seminal work of Fix and Hodges (1951) and they also enjoy popularity in many other problems including density estimation and regression. In this thesis we study their use in three different situations, providing new theoretical results on the performance of commonly-used nearest neighbour methods and proposing new procedures that are shown to outperform these existing methods in certain settings. The first problem we discuss is that of entropy estimation. Many statistical procedures, including goodness-of-fit tests and methods for independent component analysis, rely critically on the estimation of the entropy of a distribution. In this chapter, we seek entropy estimators that are efficient and achieve the local asymptotic minimax lower bound with respect to squared error loss. To this end, we study weighted averages of the estimators originally proposed by Kozachenko and Leonenko (1987), based on the k-nearest neighbour distances of a sample. A careful choice of weights enables us to obtain an efficient estimator in arbitrary dimensions, given sufficient smoothness, while the original unweighted estimator is typically only efficient in up to three dimensions. A related topic of study is the estimation of the mutual information between two random vectors, and its application to testing for independence. We propose tests for the two different situations of the marginal distributions being known or unknown and analyse their performance. Finally, we study the classical k-nearest neighbour classifier of Fix and Hodges (1951) and provide a new asymptotic expansion for its excess risk. We also show that, in certain situations, a new modification of the classifier that allows k to vary with the location of the test point can provide improvements. 
This has applications to the field of semi-supervised learning, where, in addition to labelled training data, we also have access to a large sample of unlabelled data.
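A minimal 1-D version of the Kozachenko-Leonenko k-nearest-neighbour estimator discussed above might look as follows (brute-force distances, natural logs; the weighting scheme and higher dimensions of the thesis are omitted):

```python
import math
import random

def kl_entropy(sample_1d, k=3):
    """Kozachenko-Leonenko k-NN entropy estimate (nats) for a 1-D sample:
    H ~ psi(N) - psi(k) + log(2) + (1/N) * sum_i log(eps_i),
    where eps_i is the distance from X_i to its k-th nearest neighbour.
    For integer arguments, psi(N) - psi(k) = sum_{j=k}^{N-1} 1/j."""
    n = len(sample_1d)
    log_eps_sum = 0.0
    for i, x in enumerate(sample_1d):
        dists = sorted(abs(x - y) for j, y in enumerate(sample_1d) if j != i)
        log_eps_sum += math.log(dists[k - 1])  # k-th nearest-neighbour distance
    psi_diff = sum(1.0 / j for j in range(k, n))
    return psi_diff + math.log(2.0) + log_eps_sum / n

rng = random.Random(0)
est = kl_entropy([rng.random() for _ in range(400)], k=3)  # true entropy is 0
```

The log(2) term is the log-volume of the 1-D unit ball; replacing it with log of the d-dimensional ball volume gives the general-dimension estimator whose weighted averages the thesis studies.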
18

Macedo, Pedro Filipe Pessoa. "Contributions to the theory of maximum entropy estimation for ill-posed models." Doctoral thesis, Universidade de Aveiro, 2013. http://hdl.handle.net/10773/11048.

Full text
Abstract:
Doctorate in Mathematics

Statistical techniques are essential in most areas of science, linear regression being one of the most widely used. It is well known that, under fairly general conditions, linear regression is a powerful statistical tool. Unfortunately, some of these conditions are usually not satisfied in practice and the regression models become ill-posed, which means that the application of traditional estimation methods may lead to non-unique or highly unstable solutions. This work is mainly focused on the maximum entropy estimation of ill-posed models, in particular the estimation of regression models with small sample sizes affected by collinearity and outliers. The research is developed in three directions, namely the estimation of technical efficiency with state-contingent production frontiers, the estimation of the ridge parameter in ridge regression, and some developments in maximum entropy estimation. In the estimation of technical efficiency with state-contingent production frontiers, this work reveals that the maximum entropy estimators outperform the maximum likelihood estimator in most of the cases analyzed, namely in models with few observations in some states of nature and models with a large number of states of nature, which usually represent models affected by collinearity. The maximum entropy estimators are expected to make an important contribution to the increase of empirical work with state-contingent production frontiers. The main challenge in ridge regression is the selection of the ridge parameter. There is a huge number of methods to estimate the ridge parameter and no single method emerges in the literature as the best overall. In this work, a new method to select the ridge parameter in ridge regression is presented. The simulation study reveals that, in the case of regression models with small sample sizes affected by collinearity, the new estimator is probably one of the best ridge parameter estimators available in the literature on ridge regression. Founded on the Shannon entropy, the ordinary least squares estimator and some concepts from quantum electrodynamics, the maximum entropy Leuven estimator overcomes the main weakness of the generalized maximum entropy estimator, avoiding exogenous information that is usually not available. Based on the maximum entropy Leuven estimator, information theory and robust regression, new developments on the theory of maximum entropy estimation are provided in this work. The simulation studies and the empirical applications reveal that the new estimators are a good choice in the estimation of linear regression models with small sample sizes affected by collinearity and outliers. Finally, a contribution to the increase of computational resources on the maximum entropy estimation is also accomplished in this work.
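The shrinkage behaviour that makes the choice of ridge parameter so consequential can be seen in a minimal single-predictor sketch. This is illustrative only: the function name is ours, and the thesis's own selection rule, which combines the ridge trace with maximum entropy estimation, is not reproduced here.

```python
# Illustrative ridge regression for one centred predictor:
# beta_hat(k) = sum(x*y) / (sum(x^2) + k).
# Choosing k (the problem the thesis addresses) is left open; we only
# show how the estimator shrinks along the ridge trace as k grows.

def ridge_coefficient(x, y, k):
    """Ridge estimate for a single centred predictor and penalty k >= 0."""
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + k)

# A tiny "ridge trace": k = 0 gives ordinary least squares, and the
# coefficient shrinks monotonically towards 0 as k increases.
x = [-2.0, -1.0, 0.0, 1.0, 2.0]
y = [-4.1, -1.9, 0.2, 2.1, 3.9]
trace = [ridge_coefficient(x, y, k) for k in (0.0, 1.0, 10.0, 100.0)]
```

Plotting such a trace against k and looking for the point where the coefficients stabilise is the classical ridge-trace heuristic that the thesis combines with maximum entropy estimation.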
APA, Harvard, Vancouver, ISO, and other styles
19

Omer, Mohamoud. "Estimation of regularity and synchronism in parallel biomedical time series." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2017. https://www.cris.uns.ac.rs/record.jsf?recordId=101879&source=NDLTD&language=en.

Full text
Abstract:
Objectives: Self-monitoring in health applications has already been recognized as a part of the mobile crowdsensing concept, where subjects, equipped with adequate sensors, share and extract information for personal or common benefit. Limited data transmission resources force a local analysis at wearable devices, but this is incompatible with analytical tools that require stationary and artifact-free data. The key objective of this thesis is to explain a computationally efficient binarized cross-approximate entropy, (X)BinEn, for blind cardiovascular signal processing in environments where energy and processor resources are limited.

Methods: The proposed method is a descendant of cross-approximate entropy ((X)ApEn). It operates over binary differentially encoded data series, split into m-sized binary vectors. Hamming distance is used as a distance measure, while a search for similarities is performed over the vector sets, instead of over the individual vectors. The procedure is tested in laboratory rats exposed to shaker and restraint stress and compared to the existing (X)ApEn results.

Results: The number of processor operations is reduced. (X)BinEn captures entropy changes similarly to (X)ApEn. The coding coarseness has an adverse effect of reduced sensitivity, but it attenuates parameter inconsistency and binary bias. A special case of (X)BinEn is equivalent to Shannon entropy. A binary conditional m=1 entropy is embedded into the procedure and can serve as a complementary dynamic measure.

Conclusion: (X)BinEn can be applied to a single time series as auto-entropy or, more generally, to a pair of time series as cross-entropy. It is intended for mobile, battery-operated, self-attached sensing devices with limited power and processor resources.
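A minimal sketch of the binarisation idea described in the abstract, in the special case the abstract notes is equivalent to Shannon entropy: auto-entropy of a single series with exact word matching. The function name is ours, and the thesis's Hamming-distance tolerance and cross-series variant are not reproduced.

```python
import math

def binarized_entropy(series, m=2):
    """Shannon entropy (bits) of m-bit words built from the binary
    differential coding of a series: 1 for an increase, 0 otherwise.
    Exact word matching corresponds to a Hamming tolerance of zero,
    the special case noted in the abstract to reduce to Shannon entropy."""
    bits = [1 if b > a else 0 for a, b in zip(series, series[1:])]
    words = [tuple(bits[i:i + m]) for i in range(len(bits) - m + 1)]
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    n = len(words)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A monotone series produces a single repeated word (entropy 0), while an alternating series splits its mass over two words (entropy 1 bit), illustrating how the coarse binary coding still tracks signal regularity.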
20

Zhou, Yuyang. "Performance improvement for stochastic systems using state estimation." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/performance-improvement-for-stochastic-systems-using-state-estimation(ab663282-47dc-450a-9e00-135aacb33e25).html.

Full text
Abstract:
Recent developments in the field of practical control have heightened the need for performance enhancement. The designed controller should not only guarantee that the variables follow their set-point values, but also ought to address system performance measures such as quality and efficiency. Hence, given that unavoidable noises are widespread in industrial processes, the randomness of the tracking errors can be considered a critical performance measure to improve further. In addition, because some controllers for industrial processes cannot be changed once their parameters are designed, it is crucial to design a control algorithm that minimises the randomness of the tracking error without changing the existing closed-loop control. In order to achieve the above objectives, a class of novel algorithms is proposed in this thesis for different types of systems with unmeasurable states. Without changing the existing closed-loop proportional-integral (PI) controller, a compensative controller is added to reduce the randomness of the tracking error. This means that the PI controller can always guarantee the basic tracking property, while the designed compensative signal can be removed at any time without affecting normal operation. Instead of using only the output information, as the PI controller does, the compensative controller is designed to minimise the randomness of the tracking error using estimated state information. Since most system states are unmeasurable, proper filters are employed to estimate them. Based on stochastic system control theory, the criteria used to characterise system randomness differ across system types, so a brief review of the basic concepts of stochastic system control is included in this thesis. More specifically, overshoot minimisation is used for linear deterministic systems, minimum variance control for linear Gaussian stochastic systems, and minimum entropy control for non-linear and non-Gaussian stochastic systems. Furthermore, the stability of each system is analysed in the mean-square sense. Simulation results are given to illustrate the effectiveness of the presented control methods. Finally, the work of this thesis is summarised and future work addressing the limitations of the proposed algorithms is listed.
21

Moses, Lawrenzo D. "Error Estimates for Entropy Solutions to Scalar Conservation Laws with Continuous Flux Functions." University of Akron / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=akron1353991101.

Full text
22

Gallón, Gómez Santiago Alejandro. "Template estimation for samples of curves and functional calibration estimation via the method of maximum entropy on the mean." Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2000/.

Full text
Abstract:
One of the main difficulties in functional data analysis is the extraction of a meaningful common pattern that summarizes the information conveyed by all functions in the sample. The problem of finding a meaningful template function that represents this pattern is considered in Chapter 2, assuming that the functional data lie on an intrinsically low-dimensional smooth manifold with an unknown underlying geometric structure embedded in a high-dimensional space. Under this setting, an approximation of the geodesic distance is developed based on a robust version of the Isomap algorithm. This approximation is used to compute the corresponding empirical Fréchet median function, which provides a robust intrinsic estimator of the template. Chapter 3 investigates the asymptotic properties of the quantile normalization method of Bolstad et al. (2003), which is one of the most popular methods to align density curves in microarray data analysis. The properties are proved by considering the method as a particular case of the structural mean curve alignment procedure of Dupuy, Loubes and Maza (2011). However, the method fails in some cases of mixtures, and a new methodology to cope with this issue is proposed via the algorithm developed in Chapter 2. Finally, the problem of calibration estimation for the finite population mean of a survey variable under a functional data framework is studied in Chapter 4. The functional calibration sampling weights of the estimator are obtained by matching the calibration estimation problem with the maximum entropy on the mean (MEM) principle. In particular, the calibration estimation is viewed as an infinite-dimensional linear inverse problem following the structure of the MEM approach. A precise theoretical setting is given, and the estimation of functional calibration weights assuming, as prior measures, the centered Gaussian and compound Poisson random measures is carried out.
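The quantile normalization method of Bolstad et al. (2003) studied in Chapter 3 is simple enough to sketch directly: sort each column, average across columns at each rank to build a reference distribution, then map every value back to the reference value of its rank. This is a minimal sketch with naive tie handling; the function name is ours.

```python
def quantile_normalize(columns):
    """Quantile normalization (Bolstad et al., 2003): the mean of each
    rank across columns forms a reference distribution, and every value
    is replaced by the reference value at its within-column rank. All
    columns end up with identical empirical distributions. Ties are
    broken by position (naive handling, for illustration only)."""
    sorted_cols = [sorted(c) for c in columns]
    n = len(columns[0])
    reference = [sum(sc[i] for sc in sorted_cols) / len(columns)
                 for i in range(n)]
    result = []
    for col in columns:
        ranks = sorted(range(n), key=lambda i: col[i])
        out = [0.0] * n
        for rank, idx in enumerate(ranks):
            out[idx] = reference[rank]
        result.append(out)
    return result

# Two small "arrays": after normalization both share the same values,
# while each column keeps its original ordering of observations.
cols = [[3.0, 1.0, 2.0], [6.0, 4.0, 5.0]]
result = quantile_normalize(cols)
```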
23

Chandrasekaran, Venkat. "Modeling and estimation in Gaussian graphical models : maximum-entropy methods and walk-sum analysis." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40521.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (leaves 81-86).

Graphical models provide a powerful formalism for statistical signal processing. Due to their sophisticated modeling capabilities, they have found applications in a variety of fields such as computer vision, image processing, and distributed sensor networks. In this thesis we study two central signal processing problems involving Gaussian graphical models, namely modeling and estimation. The modeling problem involves learning a sparse graphical model approximation to a specified distribution. The estimation problem in turn exploits this graph structure to solve high-dimensional estimation problems very efficiently. We propose a new approach for learning a thin graphical model approximation to a specified multivariate probability distribution (e.g., the empirical distribution from sample data). The selection of sparse graph structure arises naturally in our approach through the solution of a convex optimization problem, which differentiates our procedure from standard combinatorial methods. In our approach, we seek the maximum entropy relaxation (MER) within an exponential family, which maximizes entropy subject to constraints that marginal distributions on small subsets of variables are close to the prescribed marginals in relative entropy. We also present a primal-dual interior point method that is scalable and tractable provided the level of relaxation is sufficient to obtain a thin graph. A crucial element of this algorithm is that we exploit sparsity of the Fisher information matrix in models defined on chordal graphs. The merits of this approach are investigated by recovering the graphical structure of some simple graphical models from sample data. Next, we present a general class of algorithms for estimation in Gaussian graphical models with arbitrary structure. These algorithms involve a sequence of inference problems on tractable subgraphs over subsets of variables. This framework includes parallel iterations such as Embedded Trees, serial iterations such as block Gauss-Seidel, and hybrid versions of these iterations. We also discuss a method that uses local memory at each node to overcome temporary communication failures that may arise in distributed sensor network applications. We analyze these algorithms based on the recently developed walk-sum interpretation of Gaussian inference. We describe the walks "computed" by the algorithms using walk-sum diagrams, and show that for non-stationary iterations based on a very large and flexible set of sequences of subgraphs, convergence is achieved in walk-summable models. Consequently, we are free to choose spanning trees and subsets of variables adaptively at each iteration. This leads to efficient methods for optimizing the next iteration step to achieve maximum reduction in error. Simulation results demonstrate that these non-stationary algorithms provide a significant speedup in convergence over traditional one-tree and two-tree iterations.
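In the Gaussian case, the estimation problem described above reduces to solving J x = h, where J is the information (inverse covariance) matrix and x the marginal mean vector. A scalar (not block) Gauss-Seidel sketch of this kind of iteration, assuming a walk-summable model (here guaranteed by diagonal dominance), might look like:

```python
def gauss_seidel(J, h, sweeps=50):
    """Solve J x = h by Gauss-Seidel sweeps. In a Gaussian graphical
    model, J is the information matrix and the solution x is the vector
    of marginal means. Convergence holds for walk-summable models
    (e.g. diagonally dominant J); the thesis's Embedded Trees and
    adaptive subgraph iterations generalize this single-node update."""
    n = len(h)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = sum(J[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (h[i] - s) / J[i][i]
    return x

# A diagonally dominant (hence walk-summable) chain-structured model.
J = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 4.0]]
h = [1.0, 2.0, 3.0]
mean = gauss_seidel(J, h)
```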
24

Gärtner, Joel. "Analysis of Entropy Usage in Random Number Generators." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214567.

Full text
Abstract:
Cryptographically secure random number generators usually require an outside seed to be initialized. Other solutions instead use a continuous entropy stream to ensure that the internal state of the generator always remains unpredictable. This thesis analyses four such generators with entropy inputs. Furthermore, different ways to estimate entropy are presented, and a new method useful for the generator analysis is developed. The developed entropy estimator performs well in tests and is used to analyse entropy gathered from the different generators. All the analysed generators exhibit some seemingly unintentional behaviour, but most should still be safe for use.
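The estimator developed in the thesis is not reproduced here; as a baseline for what "estimating entropy from samples" involves, a plug-in Shannon entropy estimate with the standard Miller-Madow bias correction can be sketched as follows (function name is ours):

```python
import math
from collections import Counter

def shannon_entropy_estimate(samples, bias_correct=True):
    """Plug-in Shannon entropy estimate in bits, optionally with the
    Miller-Madow correction (K - 1) / (2 N ln 2), where K is the number
    of distinct observed symbols and N the sample size. A common
    baseline when analysing entropy sources; NOT the estimator
    developed in the thesis."""
    n = len(samples)
    counts = Counter(samples)
    h = -sum(c / n * math.log2(c / n) for c in counts.values())
    if bias_correct:
        h += (len(counts) - 1) / (2 * n * math.log(2))
    return h
```

The plug-in estimate is biased downward for small samples, which is exactly why dedicated estimators (such as the one this thesis develops) matter when judging whether an entropy source is safe to use.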
25

Badowski, Tomasz [Verfasser]. "Adaptive importance sampling via minimization of estimators of cross-entropy, mean square, and inefficiency constant / Tomasz Badowski." Berlin : Freie Universität Berlin, 2016. http://d-nb.info/1111558868/34.

Full text
26

Visaya, Maria Vivien V. "A lower estimate of the topological entropy from a one-dimensional reconstruction of time series." 京都大学 (Kyoto University), 2006. http://hdl.handle.net/2433/136722.

Full text
27

Hamon, Abdellatif. "Estimation d'une densité de probabilité multidimensionnelle par dualité." Rouen, 2000. http://www.theses.fr/2000ROUES055.

Full text
Abstract:
In this thesis we study the estimation of the density function of a multidimensional probability distribution from a number of moments of that distribution. We regard these moments as information about the unknown distribution. We introduce a new Kullback-type information function in order to use the maximum entropy method to construct an estimator that converges uniformly to the unknown distribution as the number of moments increases. We first use Fenchel-Young duality techniques to prove the uniform convergence of the maximum entropy estimator to the unknown density when the two density functions are uniformly bounded. We then make explicit the rate of convergence of the maximum entropy estimator when the moment functions are algebraic or trigonometric functions defined on a compact subset of R^n. We construct a family of splines with adjustable regularity and prove that when the moment functions come from this family, the uniform convergence of the maximum entropy estimator is guaranteed. In the last part of this thesis, we propose an algorithm for reconstructing an unknown distribution from a number of its moments.
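The maximum entropy construction from moments can be illustrated on a finite alphabet, where the solution is known to take the exponential-family form p_i proportional to exp(lambda * x_i). The sketch below solves Jaynes's classic die example for a single mean constraint by bisection on the multiplier; it is illustrative only and does not reproduce the thesis's duality machinery or its multidimensional setting.

```python
import math

def maxent_given_mean(values, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy pmf on `values` subject to a prescribed mean:
    p_i proportional to exp(lam * x_i), with the multiplier lam found
    by bisection (the constraint mean is increasing in lam)."""
    def mean_for(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# Jaynes's die: faces 1..6 constrained to have mean 4.5 instead of 3.5.
p = maxent_given_mean([1, 2, 3, 4, 5, 6], 4.5)
```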
28

De, bortoli Valentin. "Statistiques non locales dans les images : modélisation, estimation et échantillonnage." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASN020.

Full text
Abstract:
In this thesis we study two non-local statistics in images from a probabilistic point of view: spatial redundancy and convolutional neural network features. More precisely, we are interested in the estimation and detection of spatial redundancy in natural images. We also aim at sampling images with neural network constraints. We start by giving a definition of spatial redundancy in natural images. This definition relies on two concepts: a Gestalt analysis of the notion of similarity in images, and a hypothesis testing framework (the a contrario method). We propose an algorithm to identify this redundancy in natural images. Using this methodology we can detect similar patches in images and, with this information, we propose new algorithms for diverse image processing tasks (denoising, periodicity analysis). The rest of this thesis deals with sampling images with non-local constraints. The image models we consider are obtained via the maximum entropy principle. The target distribution is then obtained by minimizing an energy functional. We use tools from stochastic optimization to tackle this problem. More precisely, we propose and analyze a new algorithm: the SOUL (Stochastic Optimization with Unadjusted Langevin) algorithm. In this methodology, the gradient is estimated using Markov chain Monte Carlo methods; in the case of the SOUL algorithm we use an unadjusted Langevin algorithm. The efficiency of the SOUL algorithm is related to the ergodic properties of the underlying Markov chains. Therefore we are interested in the convergence properties of a certain class of functional autoregressive models. We characterize precisely the dependency of the convergence rates of these models with respect to their parameters (dimension, smoothness, convexity). Finally, we apply the SOUL algorithm to the problem of exemplar-based texture synthesis with a maximum entropy approach. We draw links between our model and other entropy maximization procedures (macrocanonical models, microcanonical models). Using convolutional neural network constraints we obtain state-of-the-art visual results.
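The unadjusted Langevin step used inside SOUL can be sketched on its own. The toy version below samples from a standard Gaussian target; the stochastic-optimization outer loop of SOUL is not reproduced, and the function name and parameters are ours.

```python
import math
import random

def ula_samples(grad_u, x0, step, n_steps, seed=0):
    """Unadjusted Langevin algorithm (ULA):
    x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * xi_k,
    with xi_k ~ N(0, 1). Inside SOUL such chains provide Monte Carlo
    gradient estimates; here we only sample from exp(-U). The fixed
    discretization step introduces a small bias ("unadjusted")."""
    rng = random.Random(seed)
    x = x0
    out = []
    for _ in range(n_steps):
        x = x - step * grad_u(x) + math.sqrt(2.0 * step) * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

# Target: standard Gaussian, U(x) = x^2 / 2, so grad U(x) = x.
samples = ula_samples(lambda x: x, 0.0, 0.05, 20000)
```

For this Gaussian target the chain is the AR(1) recursion x_{k+1} = (1 - step) x_k + noise, whose geometric ergodicity is exactly the kind of property the thesis quantifies for functional autoregressive models.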
29

David, Afshin. "Modeling and estimation using maximum entropy and minimum mean squared criteria based on partial and noisy observations." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0016/NQ57033.pdf.

Full text
30

Rietsch, Théo. "Théorie des valeurs extrêmes et applications en environnement." Phd thesis, Université de Strasbourg, 2013. http://tel.archives-ouvertes.fr/tel-00876217.

Full text
Abstract:
In this thesis we make several contributions, both theoretical and applied, to extreme value theory. The first two chapters of this thesis address crucial questions in climatology. The first question is whether a change in the behaviour of temperature extremes can be detected between the beginning of the twentieth century and today. To this end we propose to use the Kullback-Leibler divergence, which we adapt to the context of extremes. Theoretical results and simulations validate the proposed approach, whose performance is then illustrated on a real data set. The second question combines neural networks with extreme value theory in order to determine where to add (resp. remove) stations in a network so as to gain (resp. lose) the most (resp. the least) information about the behaviour of extremes. An algorithm from machine learning theory, Query By Committee, is developed and then applied to a real data set. The advantages, drawbacks and limits of this approach are thereby highlighted. The last chapter of the thesis deals with a more theoretical subject, namely the robust estimation of the tail parameter of a Weibull-type distribution in the presence of random covariates. We propose a robust estimator using a criterion that minimizes the divergence between two densities and study its asymptotic properties. Simulations illustrate the finite-sample performance of the estimator. This thesis offers numerous perspectives, a non-exhaustive list of which is given in the conclusion.
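For reference, the discrete Kullback-Leibler divergence underlying the first chapter's detection statistic can be sketched as below; the thesis's adaptation to the context of extremes is not reproduced, and the function name is ours.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in nats between two discrete
    distributions on the same support, assuming q_i > 0 wherever p_i > 0.
    D is nonnegative, zero iff p == q, and asymmetric in its arguments."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Comparing a skewed distribution against the uniform one, both ways,
# shows the asymmetry that makes D a divergence rather than a distance.
d_forward = kl_divergence([0.9, 0.1], [0.5, 0.5])
d_backward = kl_divergence([0.5, 0.5], [0.9, 0.1])
```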
31

Ekström, Magnus. "Maximum spacing methods and limit theorems for statistics based on spacings." Doctoral thesis, Umeå universitet, Matematisk statistik, 1997. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-85176.

Full text
Abstract:
The maximum spacing (MSP) method, introduced by Cheng and Amin (1983) and independently by Ranneby (1984), is a general estimation method for continuous univariate distributions. The MSP method, which is closely related to the maximum likelihood (ML) method, can be derived from an approximation based on simple spacings of the Kullback-Leibler information. It is known to give consistent and asymptotically efficient estimates under general conditions and works also in situations where the ML method fails, e.g. for the three-parameter Weibull model. In this thesis it is proved under general conditions that MSP estimates of parameters in the Euclidean metric are strongly consistent. The ideas behind the MSP method are extended and a class of estimation methods is introduced. These methods, called generalized MSP methods, are derived from approximations based on sum-functions of mth order spacings of certain information measures, i.e. the φ-divergences introduced by Csiszár (1963). It is shown under general conditions that generalized MSP methods give consistent estimates. In particular, it is proved that generalized MSP methods give L1-consistent estimates in any family of distributions with unimodal densities, without any further conditions on the distributions. Other properties such as distributional robustness are also discussed. Several limit theorems for sum-functions of mth order spacings are given, for m fixed as well as for the case when m is allowed to increase to infinity with the sample size. These results provide a strongly consistent nonparametric estimator of entropy, as well as a characterization of the uniform distribution. Further, it is shown that Cressie's (1976) goodness-of-fit test is strongly consistent against all continuous alternatives.
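The MSP idea can be sketched for a one-parameter exponential model: apply the fitted CDF to the ordered sample and pick the rate maximising the sum of log-spacings. Since the spacings always sum to one, this sum is largest when the spacings are as uniform as possible, which happens at the true parameter. Grid search and the function name are illustrative simplifications, not the thesis's general treatment.

```python
import math

def msp_exponential_rate(sample, grid):
    """Maximum spacing estimate of an exponential rate: choose the rate
    maximising sum(log D_i), where D_i = F(x_(i)) - F(x_(i-1)) are the
    spacings of the fitted CDF over the ordered sample, with the
    conventions F(x_(0)) = 0 and F(x_(n+1)) = 1."""
    xs = sorted(sample)
    def score(rate):
        u = [0.0] + [1.0 - math.exp(-rate * x) for x in xs] + [1.0]
        # Guard against a zero spacing (ties / saturated CDF values).
        return sum(math.log(max(b - a, 1e-300)) for a, b in zip(u, u[1:]))
    return max(grid, key=score)

# "Sample": the exact deciles of an Exp(rate=2) distribution, so the
# spacings are uniform at rate 2 and MSP recovers it from the grid.
sample = [-math.log(1.0 - i / 10.0) / 2.0 for i in range(1, 10)]
rate_hat = msp_exponential_rate(sample, [0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
```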
32

Caputo, Jean-Guy. "Dimension et entropie des attracteurs associés à des écoulements réels : estimation et analyse de la méthode." Grenoble 1, 1986. http://www.theses.fr/1986GRE10057.

Full text
Abstract:
We are interested in the characterization of the chaotic regimes through which a flow reaches turbulence. We show that a chaotic regime of Rayleigh-Bénard convection is described by an attractor whose dimension and entropy we determine. With a view to characterizing attractors of higher dimension, we determine the conditions for obtaining correct results on specific examples.
33

Heywood, Ben. "Investigations into the use of quantified Bayesian maximum entropy methods to generate improved distribution maps and biomass estimates from fisheries acoustic survey data /." St Andrews, 2008. http://hdl.handle.net/10023/512.

Full text
34

Zhu, Jie. "Entropic measures of connectivity with an application to intracerebral epileptic signals." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S006/document.

Full text
Abstract:
The work presented in this thesis deals with brain connectivity, including structural connectivity, functional connectivity and effective connectivity. These three types of connectivity are obviously linked, and their joint analysis can give us a better understanding of how brain structures and functions constrain each other. Our research particularly focuses on effective connectivity, which defines connectivity graphs with information on causal links that may be direct or indirect, unidirectional or bidirectional. The main purpose of our work is to identify interactions between different brain areas from intracerebral recordings during the generation and propagation of seizure onsets, a major issue in the pre-surgical phase of epilepsy surgery treatment. Exploring effective connectivity generally follows two kinds of approaches: model-based techniques and data-driven ones.
In this work, we address the question of improving the estimation of information-theoretic quantities, mainly mutual information and transfer entropy, based on k-nearest-neighbors techniques. The approaches we developed are first evaluated and compared with existing estimators on simulated signals, including white noise processes, linear and nonlinear vector autoregressive processes, as well as realistic physiology-based models. Some of them are then applied to intracerebral electroencephalographic signals recorded from an epileptic patient and compared with the well-known directed transfer function. The experimental results show that the proposed techniques improve the estimation of information-theoretic quantities on simulated signals, while the analysis is more difficult in real situations. Globally, the different estimators appear coherent and in accordance with the ground truth given by the clinical experts, with the directed transfer function exhibiting interesting performance.
APA, Harvard, Vancouver, ISO, and other styles
35

Enqvist, Per. "Spectral Estimation by Geometric, Topological and Optimization Methods." Doctoral thesis, Stockholm, 2001. http://media.lib.kth.se:8080/kthdisseng.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Mourragui, Mustapha. "Comportement hydrodynamique des processus de sauts, de naissances et de morts." Rouen, 1993. http://www.theses.fr/1993ROUES002.

Full text
Abstract:
In this thesis, we study the hydrodynamic limit of jump, birth and death processes describing the evolution of indistinguishable particles on the torus. In these systems, particles jump independently of one another, and are created and destroyed according to nonlinear rates. Using several methods, namely the superexponential estimate method of C. Kipnis, S. Olla and S. R. S. Varadhan, the entropy production method of M. Z. Guo, G. C. Papanicolaou and S. R. S. Varadhan, and the relative entropy method of H. T. Yau, we prove that, under certain conditions on the birth and death rates, our systems evolve in the limit according to nonlinear reaction-diffusion equations.
APA, Harvard, Vancouver, ISO, and other styles
37

Nembé, Jocelyn. "Estimation de la fonction d'intensité d'un processus ponctuel par complexité minimale." Phd thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00346118.

Full text
Abstract:
Consider a point process observed over a finite time interval, admitting a stochastic intensity that follows Aalen's model. The intensity function of the process is estimated from an independent and identically distributed sample of pairs, each consisting of a realization of the point process and of the associated predictable process, by minimizing a criterion representing the length of a variable code for the observed data. The minimum complexity estimator is the function minimizing this criterion over a family of candidate functions. A judicious choice of complexity functions thus defines universal codes for realizations of point processes. The intensity estimators obtained by minimizing this criterion are almost surely consistent in the entropy sense, and in the sense of the Hellinger distance for complexity functions satisfying Kraft's inequality. The study of convergence rates for the Hellinger distance shows that they are bounded above by the redundancy of the code. These rates are made explicit for families of trigonometric, polynomial and spline functions. In the particular case of Poisson processes and censored survival models, the same convergence rates are obtained for stronger distances. Other properties of the estimator are presented, notably exact recovery of the unknown function in certain cases, and asymptotic normality. Sequences of consistent exponential tests are also studied. The numerical behavior of the estimator is analyzed through simulations for censored survival models.
APA, Harvard, Vancouver, ISO, and other styles
38

Wilkie, Kathleen P. "Mutual Information Based Methods to Localize Image Registration." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/1121.

Full text
Abstract:
Modern medicine has become reliant on medical imaging. Multiple modalities, e.g. magnetic resonance imaging (MRI) and computed tomography (CT), are used to provide as much information about the patient as possible. The problem of geometrically aligning the resulting images is called image registration. Mutual information, an information-theoretic similarity measure, allows for automated intermodal image registration algorithms.

In applications such as cancer therapy, diagnosticians are more concerned with the alignment of images over a region of interest, such as a cancerous lesion, than over an entire image set. Attempts to register only the regions of interest, defined manually by diagnosticians, fail due to inaccurate mutual information estimation over the region of overlap of these small regions.

This thesis examines the region of union as an alternative to the region of overlap. We demonstrate that the region of union improves the accuracy and reliability of mutual information estimation over small regions.

We also present two new mutual information based similarity measures which allow for localized image registration by combining local and global image information. The new similarity measures are based on convex combinations of the information contained in the regions of interest and the information contained in the global images.

Preliminary results indicate that the proposed similarity measures are capable of localizing image registration. Experiments using medical images from computed tomography and positron emission tomography demonstrate the initial success of these measures.

Finally, in other applications, auto-detection of regions of interest may prove useful and would allow for fully automated localized image registration. We examine methods to automatically detect potential regions of interest based on local activity level and present some encouraging results.
APA, Harvard, Vancouver, ISO, and other styles
39

Rai, Ajit. "Estimation de la disponibilité par simulation, pour des systèmes incluant des contraintes logistiques." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S105/document.

Full text
Abstract:
RAM (Reliability, Availability and Maintainability) analysis forms an integral part of the estimation of Life Cycle Costs (LCC) of passenger rail systems.
These systems are highly reliable and involve complex logistics. Standard Monte Carlo simulations are rendered useless for efficient estimation of RAM metrics because of the issue of rare events: failures of these complex passenger rail systems can be rare events and thus require efficient simulation techniques. Importance Sampling (IS) is an advanced class of variance-reduction techniques that can overcome the limitations of standard simulation. IS techniques can accelerate simulations, meaning less variance in the estimation of RAM metrics within the same computational budget as a standard simulation. However, IS involves changing the probability laws (change of measure) that drive the mathematical models of the systems during simulation, and the optimal IS change of measure is usually unknown, even though theoretically a perfect one exists (the zero-variance IS change of measure). In this thesis, we focus on IS techniques and their application to the estimation of two RAM metrics: reliability (for static networks) and steady-state availability (for dynamic systems). The thesis focuses on finding and/or approximating the optimal IS change of measure to efficiently estimate RAM metrics in a rare-event context. The contribution of the thesis is broadly divided into two main axes: first, we propose an adaptation of the approximate zero-variance IS method to estimate the reliability of static networks and show its application to real passenger rail systems; second, we propose a multi-level Cross-Entropy optimization scheme that can be used during pre-simulation to obtain CE-optimized IS rates for the transitions of Markovian Stochastic Petri Nets (SPNs), and to use them in the main simulations to estimate the steady-state unavailability of highly reliable Markovian systems with complex logistics. Results from these methods show a huge variance reduction and gain compared to MC simulations.
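A minimal sketch of the importance sampling idea described in this abstract, on a toy rare-event problem: estimating P(X > 6) for X ~ Exp(1), whose true value is e^-6 ≈ 0.0025. The tilted sampling law (an exponential with mean 6) and the constants are illustrative choices, not the thesis's change of measure for Petri nets.

```python
import math
import random

def crude_mc(n, t=6.0):
    """Standard Monte Carlo estimate of p = P(X > t), X ~ Exp(1).
    Hits the rare region only ~n*e^{-t} times, hence high relative variance."""
    return sum(1.0 for _ in range(n) if random.expovariate(1.0) > t) / n

def importance_sampling(n, t=6.0):
    """IS estimate of the same probability: sample from the tilted law
    Exp(1/t) (mean t, so the rare region is hit often) and reweight each
    hit by the likelihood ratio f(x)/g(x)."""
    rate_g = 1.0 / t
    total = 0.0
    for _ in range(n):
        x = random.expovariate(rate_g)
        if x > t:
            # Likelihood ratio of target density e^{-x} over sampling density.
            total += math.exp(-x) / (rate_g * math.exp(-rate_g * x))
    return total / n
```

With the same sample budget, the IS estimator's relative error here is a few percent, while the crude estimator's hit count is so low that its relative error is an order of magnitude worse; this is the "acceleration" the abstract refers to.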
APA, Harvard, Vancouver, ISO, and other styles
40

Bettinger, Régis. "Inversion d'un système par krigeage : application à la synthèse de catalyseurs à haut débit." Nice, 2009. https://tel.archives-ouvertes.fr/tel-00460162.

Full text
Abstract:
This work deals with the modeling of the synthesis process for catalyst supports obtained by a chemical reaction involving silica and alumina. The process is characterized by 5 inputs and 2 outputs (specific surface and mesoporous volume of the support). Each pair of output values has a potential application, and the ultimate objective is to be able to find input values associated with the synthesis of a catalyst with any given output characteristics (surface, volume). The ranges of the two outputs are unknown. The number of runs available is too small to build a satisfactory model over the whole input domain. We thus combine design of experiments and kriging modeling in a way that ensures both a limited dispersion of the input factors and a good exploration of the reachable output domain. The runs are designed sequentially, using the information provided by former runs through their associated kriging model. This sequential construction seems more efficient than designing a non-sequential experiment containing the total number of available runs. Several criteria are proposed for sequential design that favor a high dispersion of the corresponding outputs and take the uncertainties associated with the kriging model into account. The two most appealing, one based on minimax distance and the other on entropy, are tested on simulated data in order to check the dispersion of the outputs.
Basic properties of Gaussian processes, regression/interpolation by kriging and links with other methods such as splines and SVMs are reviewed, together with standard methods for designing experiments, with the objective of combining rigor and clarity.
APA, Harvard, Vancouver, ISO, and other styles
41

Siththara, Gedara Jagath Senarathne. "Experimental design for dependent data." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/201237/1/Jagath%20Senarathne_Siththara%20Gedara_Thesis.pdf.

Full text
Abstract:
This PhD focused on developing new methods to design experiments where dependent data are observed. Of primary consideration was Bayesian design, i.e. designs found based on undertaking a Bayesian analysis of the data. The generic design algorithms and the loss functions proposed in this study cater to a wide range of applications, including designing clinical trials and geostatistical experiments. These tools enable informed decisions to be made efficiently through maximizing the information gained from experiments while reducing costs.
APA, Harvard, Vancouver, ISO, and other styles
42

Gamboa, Fabrice. "Méthode du maximum d'entropie sur la moyenne et applications." Paris 11, 1989. http://www.theses.fr/1989PA112346.

Full text
Abstract:
The maximum entropy method gives an explicit solution to the problem of reconstructing a probability measure when only the averages of certain random variables are known.
We use this method to reconstruct a function constrained to lie in a convex set C (a nonlinear constraint), using a finite number of its generalized moments (a linear constraint). A sequence of entropy maximization problems is considered: the n-th problem consists in reconstructing a probability distribution on Cn, the projection of C onto Rⁿ, whose mean satisfies a constraint approximating the initial linear constraint (generalized moments). Letting n tend to infinity then yields a solution to the initial problem as the limit of the sequence of means of the maximum entropy distributions on the Cn. We call this technique the maximum entropy method on the mean (M.E.M.), because the linear constraints bear only on the mean of the distribution to be reconstructed. We mainly study the case where C is a band of continuous functions. We obtain a family of reconstructions, each element of which depends only on the sequence of reference measures used in the sequence of entropy problems. We show that the M.E.M. method is equivalent to the maximization of a concave criterion. We then use the M.E.M. method to construct a numerically computable criterion for solving the generalized moments problem on a bounded band of continuous functions. In the last chapter we discuss statistical applications of the method.
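The finite-dimensional entropy maximization step in this abstract can be illustrated, under assumptions, on the classical finite-support case: the maximum entropy distribution subject to a mean constraint has exponential-family form p_i ∝ exp(λ·x_i), with λ chosen so the mean matches. The bisection solver below is a hypothetical sketch of that one-constraint case, not the thesis's construction on bands of continuous functions.

```python
import math

def maxent_mean(support, target_mean, tol=1e-10):
    """Maximum entropy distribution on a finite support subject to a
    prescribed mean: p_i proportional to exp(lam * x_i), with lam found
    by bisection (the mean is strictly increasing in lam)."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]
```

On the textbook example of a die constrained to have mean 4.5, the solution is a geometric progression over the faces, the signature of the exponential-family form.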
APA, Harvard, Vancouver, ISO, and other styles
43

Cosma, Ioana Ada. "Dimension reduction of streaming data via random projections." Thesis, University of Oxford, 2009. http://ora.ox.ac.uk/objects/uuid:09eafd84-8cb3-4e54-8daf-18db7832bcfc.

Full text
Abstract:
A data stream is a transiently observed sequence of data elements that arrive unordered, with repetitions, and at a very high rate of transmission. Examples include Internet traffic data, networks of banking and credit transactions, and radar-derived meteorological data. Computer science and engineering communities have developed randomised, probabilistic algorithms to estimate statistics of interest over streaming data on the fly, with small computational complexity and storage requirements, by constructing low dimensional representations of the stream known as data sketches. This thesis combines techniques of statistical inference with algorithmic approaches, such as hashing and random projections, to derive efficient estimators for cardinality, l_{alpha} distance and quasi-distance, and entropy over streaming data. I demonstrate an unexpected connection between two approaches to cardinality estimation that involve indirect record keeping: the first using pseudo-random variates and storing selected order statistics, and the second using random projections. I show that l_{alpha} distances and quasi-distances between data streams, and entropy, can be recovered from random projections that exploit properties of alpha-stable distributions with full statistical efficiency. This is achieved by the method of L-estimation in a single-pass algorithm with modest computational requirements. The proposed estimators have good small-sample performance, improved by the methods of trimming and winsorising; in other words, the value of these summary statistics can be approximated with high accuracy from data sketches of low dimension. Finally, I consider the problem of convergence assessment of Markov Chain Monte Carlo methods for simulating from complex, high dimensional, discrete distributions.
I argue that online, fast, and efficient computation of summary statistics such as cardinality, entropy, and l_{alpha} distances may be a useful qualitative tool for detecting lack of convergence, and illustrate this with simulations of the posterior distribution of a decomposable Gaussian graphical model via the Metropolis-Hastings algorithm.
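As a hedged sketch of the order-statistics approach to cardinality estimation mentioned in this abstract: a k-minimum-values sketch hashes each element to a pseudo-uniform point in (0, 1], keeps only the k smallest distinct hash values, and estimates the distinct count from the k-th minimum. The hash choice (MD5) and k are illustrative assumptions, not the estimators of the thesis.

```python
import hashlib
import heapq

def kmv_cardinality(stream, k=512):
    """k-minimum-values sketch: estimate the number of distinct elements
    in a stream as (k - 1) / U_(k), where U_(k) is the k-th smallest of
    the elements' pseudo-uniform hash values."""
    heap = []      # negated hash values: a max-heap over the k smallest u's
    seen = set()   # the u values currently kept, for duplicate detection
    for item in stream:
        h = int.from_bytes(hashlib.md5(str(item).encode()).digest()[:8], "big")
        u = (h + 1) / 2**64        # pseudo-uniform in (0, 1]
        if u in seen:
            continue
        if len(heap) < k:
            heapq.heappush(heap, -u)
            seen.add(u)
        elif u < -heap[0]:
            # Evict the current k-th minimum and keep the smaller value.
            seen.discard(-heapq.heappushpop(heap, -u))
            seen.add(u)
    if len(heap) < k:
        return float(len(heap))    # fewer distinct items than k: exact count
    return (k - 1) / (-heap[0])
```

The sketch uses O(k) memory regardless of stream length; the relative error is roughly 1/sqrt(k), which is the streaming trade-off the abstract describes.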
APA, Harvard, Vancouver, ISO, and other styles
44

Karlsson, Johan. "Inverse Problems in Analytic Interpolation for Robust Control and Spectral Estimation." Doctoral thesis, Stockholm : Matematik, Mathematics, Kungliga Tekniska högskolan, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-9248.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Wagner, John Joseph. "An adaptive atmospheric prediction algorithm to improve density forecasting for aerocapture guidance processes." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53046.

Full text
Abstract:
Many modern entry guidance systems depend on predictions of atmospheric parameters, notably atmospheric density, in order to guide the entry vehicle to some desired final state. However, in highly dynamic atmospheric environments such as the Martian atmosphere, the density may vary by as much as 200% from predicted pre-entry trends. This high level of atmospheric density uncertainty can cause significant complications for entry guidance processes and may in extreme scenarios cause complete failure of the entry. In the face of this uncertainty, mission designers are compelled to apply large trajectory and design safety margins which typically drive the system design towards less efficient solutions with smaller delivered payloads. The margins necessary to combat the high levels of atmospheric uncertainty may even preclude scientifically interesting destinations or architecturally useful mission modes such as aerocapture. Aerocapture is a method for inserting a spacecraft into an orbit about a planetary body with an atmosphere without the need for significant propulsive maneuvers. This can reduce the required propellant and propulsion hardware for a given mission which lowers mission costs and increases the available payload fraction. However, large density dispersions have a particularly acute effect on aerocapture trajectories due to the interaction of the high required speeds and relatively low densities encountered at aerocapture altitudes. Therefore, while the potential system level benefits of aerocapture are great, so too are the risks associated with this mission mode in highly uncertain atmospheric environments such as Mars. Contemporary entry guidance systems utilize static atmospheric density models for trajectory prediction and control. These static models are unable to alter the fundamental nature of the underlying state equations which are used to predict atmospheric density. 
This limits both the fidelity and adaptive freedom of these models and forces the guidance system to retroactively correct for the density prediction errors after those errors have already impacted the trajectory. A new class of dynamic density estimator called a Plastic Ensemble Neural System (PENS) is introduced which is able to generate high fidelity, adaptable density forecast models by altering the underlying atmospheric state equations to better agree with observed atmospheric trends. A new construct called an ensemble echo is also introduced which creates an associative learning architecture, permitting PENS to evolve with increasing atmospheric exposure. The PENS estimator is applied to a numerical guidance system and the performance of the composite system is investigated with over 144,000 guided trajectory simulations. The results demonstrate that the PENS algorithm achieves significant reductions in both the required post-aerocapture performance, and the aerocapture failure rates relative to historical density estimators.
APA, Harvard, Vancouver, ISO, and other styles
46

Voicu, Iulian. "Analyse, caractérisation et classification de signaux foetaux." Phd thesis, Université François Rabelais - Tours, 2011. http://tel.archives-ouvertes.fr/tel-00907317.

Full text
Abstract:
This thesis lies in the biomedical field, at the interface between instrumentation and signal processing. The aim of this work is to obtain, by combining different sources of information, a monitoring of fetal activity (heart rate and fetal movements) in order to assess the fetus's well-being or distress at the various stages of pregnancy. Currently, the parameters characterizing fetal distress, derived from the heart rate and fetal movements, are evaluated by the physician and combined in the Manning score. Two major drawbacks exist: a) evaluating the score takes too long, since it lasts one hour; b) there are inter- and intra-operator variations leading to different interpretations of the patient's medical assessment. To overcome these drawbacks, we assess fetal well-being objectively, through the computation of a score. To this end, we developed a multi-transducer ultrasound technology (12 transducers) that collects some sixty (pairs of) Doppler signals from the heart and from the lower and upper limbs. In this thesis, our first contribution is the development of new heart rate detection algorithms (single-channel and multi-channel). Our second contribution concerns the implementation of two categories of parameters based on the heart rate: a) the class of "traditional" parameters used by obstetricians and also evaluated in the Manning test (baseline, accelerations, decelerations); b) the class of parameters used by cardiologists to characterize the complexity of a time series (approximate entropy, sample entropy, multiscale entropy, recurrence plots, etc.).
Our third contribution consists of substantial modifications to the algorithms computing fetal movements and the derived parameters, such as the number of movements, the percentage of time the fetus spends moving, and the duration of movements. Our fourth contribution concerns the joint analysis of heart rate and fetal movements. This analysis leads to the identification of different behavioral states. The development or non-development of these states is an indicator of the fetus's neurological evolution. We propose to evaluate the movement parameters for each behavioral state. Finally, our last contribution concerns the implementation of different scores and the resulting classifications. Direct follow-ups to this work concern the integration of the most relevant scores or parameters into a home monitoring device or a clinical monitoring device.
APA, Harvard, Vancouver, ISO, and other styles
47

Almeida, Tiago Paggi de. "Decomposição de sinais eletromiográficos de superfície misturados linearmente utilizando análise de componentes independentes." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261976.

Full text
Abstract:
Electromyography is a clinical practice that provides information regarding the physiological condition of the neuromuscular system, including the analysis of its contractile functional unit, the motor unit. The electromyographic signal is an electrical signal resulting from the ionic transients of motor unit action potentials, captured by invasive or non-invasive electrodes. Invasive electrodes can detect the action potentials of even a single motor unit, although the procedure is time-consuming and uncomfortable. Surface electrodes enable detecting action potentials non-invasively, although the detected signal is a mixture of action potentials from several motor units within the detection area of the electrode, resulting in a complex interference pattern that is difficult to interpret. Blind Source Separation techniques, such as Independent Component Analysis, have proven effective for decomposing surface electromyographic signals into the constituent motor unit action potentials.
The objective of this project was to develop a system to capture surface myoelectric signals and to analyze the feasibility of decomposing linearly mixed intramuscular myoelectric signals using Independent Component Analysis. The system includes an electrode matrix with up to seven channels, a preprocessing module, software for controlling the capture of surface myoelectric signals, and the FastICA algorithm in MATLAB for the decomposition of the myoelectric signals. The results show that the system was able to capture surface myoelectric signals and was capable of reliably decomposing intramuscular myoelectric signals that had previously been linearly mixed.
APA, Harvard, Vancouver, ISO, and other styles
48

Mokkadem, Abdelkader. "Critères de mélange pour des processus stationnaires : estimation sous des hypothèses de mélange : entropie des processus linéaires." Paris 11, 1987. http://www.theses.fr/1987PA112267.

Full text
Abstract:
This thesis has three parts. The first is devoted to the ergodicity and mixing properties of certain nonlinear or polynomial autoregressive random processes. We establish criteria for geometric ergodicity and geometric absolute regularity of such processes. The results apply to ARMA and bilinear processes. The techniques used come from Markov chain theory and from real algebraic and differential geometry. In the second part we study kernel estimators under strong mixing hypotheses; we bound the p-mean risks and the uniform risk for the estimator of the density and of other functionals. We also propose estimators of the entropy and information of random variables and bound their risks. 
In the third part we study the entropy of linear processes. We establish an inequality between the entropy of a process and that of its linearly filtered version; equality is obtained in several cases. We close this part with applications, in particular to the maximum entropy principle
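The kernel-based entropy estimator discussed in the second part can be sketched as a leave-one-out plug-in estimate (a minimal illustration, not Mokkadem's exact construction; the Gaussian kernel, Silverman bandwidth rule, and test sample are all assumptions):

```python
import numpy as np

def kernel_entropy(x, h=None):
    """Leave-one-out plug-in entropy estimate -(1/n) * sum_i log f_hat(X_i),
    where f_hat is a Gaussian kernel density estimator."""
    x = np.asarray(x, dtype=float)
    n = x.size
    if h is None:
        # Silverman's rule-of-thumb bandwidth (an assumption, not from the thesis)
        h = 1.06 * x.std() * n ** (-1 / 5)
    d = (x[:, None] - x[None, :]) / h          # pairwise scaled differences
    k = np.exp(-0.5 * d ** 2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(k, 0.0)                   # leave-one-out: drop the i == j term
    f_hat = k.sum(axis=1) / ((n - 1) * h)
    return -np.mean(np.log(f_hat))

# Sanity check on a standard normal sample, whose true entropy is
# 0.5 * log(2 * pi * e) ≈ 1.4189
rng = np.random.default_rng(0)
est = kernel_entropy(rng.standard_normal(2000))
print(est)
```

Leaving out the diagonal term avoids the downward bias that the resubstitution plug-in estimate incurs from evaluating each point's own kernel.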
APA, Harvard, Vancouver, ISO, and other styles
49

Mokkadem, Abdelkader. "Critères de mélange pour des processus stationnaires estimation sous des hypothèses de mélange, entropie des processus linéaires /." Grenoble 2 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb37608121j.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Chakik, Fadi El. "Maximum d'entropie et réseaux de neurones pour la classification." Grenoble INPG, 1998. http://www.theses.fr/1998INPG0091.

Full text
Abstract:
This thesis addresses classification, focusing in particular on methods based on the maximum entropy principle (maxent). These approaches have been used in the Leibniz laboratory, for example, to teach behaviours to an autonomous robot. The goal of this work was to compare the maxent approach with approaches based on neural networks. A theoretical analysis of classification shows that there is an equivalence between maxent and Hebbian learning in neural networks: learning the weights of the latter is equivalent to learning the mean values of certain maxent observables. Including new observables makes it possible to learn to learn, with more effective learning rules in the neural network setting. Maxent was applied to two particular problems: the classification of Breiman's waveforms (a standard benchmark in machine learning) and texture recognition in SPOT images. These applications showed that maxent achieves performance comparable to, and sometimes better than, neural methods. The robustness of the maxent code developed during this thesis is being studied in the TIMA laboratory. It is planned to upload it to an American satellite (MPTB project) to evaluate it in the presence of ionizing radiation, with a view to performing on-board image processing.
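A maxent classifier of this kind is equivalent to multinomial logistic regression fitted by maximizing the log-likelihood; at the optimum, the model's expected feature counts match the empirical ones (the maxent constraints). A minimal sketch on made-up 2-D data (the feature map, step size, and dataset are assumptions, not the thesis's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up two-class data: two well-separated Gaussian blobs
X = np.r_[rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))]
y = np.r_[np.zeros(100, int), np.ones(100, int)]

Phi = np.c_[np.ones(len(X)), X]     # observables f(x): bias term + raw coordinates
W = np.zeros((2, Phi.shape[1]))     # one weight vector per class

# Gradient ascent on the log-likelihood; the gradient is the gap between
# empirical and model-expected feature counts
for _ in range(500):
    logits = Phi @ W.T
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)           # softmax class probabilities
    Y = np.eye(2)[y]                            # one-hot empirical distribution
    W += 0.1 * (Y - P).T @ Phi / len(X)

acc = (P.argmax(axis=1) == y).mean()
print(acc)
```

The weight update is exactly the Hebbian-style rule the abstract alludes to: weights move in proportion to the correlation between features and the (empirical minus predicted) class indicator.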
APA, Harvard, Vancouver, ISO, and other styles