Dissertations / Theses on the topic 'Entropy (statistics)'

Below are the top 50 dissertations and theses on the topic 'Entropy (statistics)'.


1

Asaad-Sultan, Asaad M. Abu. "Entropic vector optimization and simulated entropy : theory and applications." Thesis, University of Liverpool, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.293838.

2

Monteiro, André Bosque. "Entropy statistics and aeronautical evolution." Instituto Tecnológico de Aeronáutica, 2006. http://www.bd.bibl.ita.br/tde_busca/arquivo.php?codArquivo=2169.

Abstract:
Employing the probabilistic entropy methodology, one objective of this work is to develop a robust and simple tool for the empirical analysis of airplane configuration and technology evolution. Two specific analyses were performed to validate the procedure: one covers the evolution of civil air transport in the jet age (1950-2006), the other the evolution of fighter aircraft from 1914 to 2009. An extensive study was carried out to select the variables used to describe each aircraft. After the creation of the aircraft databank, the tool developed in the present work takes the variables as input to evaluate two important technological indexes: convergence and diffusion. Studies analyzing the combination of the diffusion and convergence indexes, as well as the critical transitions of the airplanes, were conducted in this work. The methodology developed here can serve as a decision-making tool during the conceptual design phase of an airplane.
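The entropy computation behind convergence and diffusion indexes of this kind can be sketched as Shannon entropy over the frequencies of a design choice in an aircraft databank. A minimal sketch, assuming illustrative toy counts (the function name and example data are mine, not the thesis's actual variables):

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (bits) of a frequency table; 0*log(0) is taken as 0."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# Hypothetical frequencies of one design variable (say, engine placement)
# across two generations of a fleet: early designs are diverse, later
# designs converge on one dominant choice, so the entropy drops.
early = [5, 5, 5, 5]   # four configurations, equally popular
late = [17, 1, 1, 1]   # one configuration dominates

h_early = shannon_entropy(early)
h_late = shannon_entropy(late)
```

Falling entropy across generations signals convergence on a dominant design; rising entropy signals diffusion of new variants.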
3

Carvalho, André Izecson de. "A design method based in entropy statistics." Instituto Tecnológico de Aeronáutica, 2008. http://www.bd.bibl.ita.br/tde_busca/arquivo.php?codArquivo=1169.

Abstract:
Since the beginning of aviation history, each new aircraft has been created to be more economical, faster, lighter, better than those that came before it. Understanding the technological evolution of aviation is extremely useful when designing a new aircraft. Saviotti (1984) and, later, Frenken (1997) proposed a method for analysing the technological evolution of aircraft. This method, based on information-theory concepts developed by Shannon (1948), especially the concept of statistical entropy, proved quite effective. The method, however, is essentially an analysis tool, not a design tool. From an aircraft databank, it can determine to what extent each aircraft was influenced by its predecessors (its "convergence") and, in turn, influenced its successors (its "diffusion"). In this work, a design-support tool based on the statistical entropy method is proposed. Given the specifications of the aircraft to be designed and a databank with information on many aircraft, the entropy of the system is minimized, which leads to an aircraft with a high convergence index, that is, one that has absorbed as much as possible of the technology of existing aircraft. The entropy minimization is carried out by a genetic algorithm, selected for its robustness in handling large amounts of information, minimizing several independent variables simultaneously even in the absence of a physical model of the system. Several analyses were performed to evaluate the effectiveness of the statistical entropy method. In particular, a design created by the method was compared with three other designs with the same specifications, produced by distinct teams of engineers using conventional methods. In addition, the range of specifications within which the method is effective, and its limits, were evaluated. To assess the quality of the method's results more thoroughly, the resulting aircraft were subjected to a performance analysis to evaluate whether they were internally consistent.
4

Weis, Stephan Wilhelm. "Exponential families with incompatible statistics and their entropy distance." kostenfrei, 2009. http://d-nb.info/1000055337/34.

5

Taghavianfar, Mohsen. "An Investigation on Network Entropy-Gossiping Protocol and Anti-entropy Evaluation." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2684.

Abstract:
This thesis is concerned with studying the behavior of a gossiping protocol in the specific sense meant by Ericsson; in the following pages I introduce a Markov process which models the spread of information in such systems. The results are verified by means of a discrete-event simulation.
Gossiping protocols are inherently random in behavior. Nonetheless, they are not structureless, and their asymptotic behavior when implemented at large scale is the focus of this thesis.
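A minimal push-gossip round model in the spirit of the protocols studied here; the complete-graph topology, fan-out of one, and node count are illustrative assumptions, not Ericsson's actual system:

```python
import random

def push_gossip(n, seed=0, max_rounds=1000):
    """Simulate push gossip on a complete graph of n nodes.

    Each round, every informed node pushes the rumour to one node chosen
    uniformly at random. Returns (informed set, rounds used).
    """
    rng = random.Random(seed)
    informed = {0}  # node 0 starts with the information
    for r in range(1, max_rounds + 1):
        for _ in range(len(informed)):
            informed.add(rng.randrange(n))
        if len(informed) == n:
            return informed, r
    return informed, max_rounds

informed, rounds = push_gossip(20)
```

On a complete graph, push gossip informs all n nodes in O(log n) rounds with high probability, which is the kind of asymptotic behavior such a Markov model captures.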
6

Peccarelli, Adric M. "A Comparison of Variance and Renyi's Entropy with Application to Machine Learning." Thesis, Northern Illinois University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10603911.

Abstract:

This research explores parametric and nonparametric similarities and disagreements between variance and the information-theoretic measure of entropy, specifically Renyi's entropy. A history of the two uncertainty measures and their known relationships are examined. Then, twenty discrete and continuous parametric families are tabulated with their respective variance and Renyi entropy functions, in order to understand the behavior of these two measures of uncertainty. Finally, an algorithm for variable selection using Renyi's quadratic entropy and its kernel estimation is explored and compared to other popular selection methods using real data.
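Both uncertainty measures are direct to compute for a discrete distribution, and a toy comparison shows where they disagree. A sketch with my own illustrative choices (base-2 logarithm, example supports):

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha (alpha > 0, alpha != 1), in bits."""
    return math.log2(sum(pi ** alpha for pi in p)) / (1 - alpha)

def variance(values, p):
    """Variance of a discrete random variable with support `values`."""
    mean = sum(v * pi for v, pi in zip(values, p))
    return sum(pi * (v - mean) ** 2 for v, pi in zip(values, p))

uniform = [0.25, 0.25, 0.25, 0.25]
h2_uniform = renyi_entropy(uniform, 2)  # quadratic Renyi entropy

# Same probabilities on two different supports: identical entropy,
# but the variances differ by a factor of 100.
var_narrow = variance([0, 1, 2, 3], uniform)
var_wide = variance([0, 10, 20, 30], uniform)
```

The two uniform distributions carry the same entropy, yet very different variances: entropy depends only on the probabilities, while variance also depends on the support values, which is one source of the disagreements the thesis tabulates.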

7

Källberg, David. "Nonparametric Statistical Inference for Entropy-type Functionals." Doctoral thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-79976.

Abstract:
In this thesis, we study statistical inference for entropy, divergence, and related functionals of one or two probability distributions. Asymptotic properties of particular nonparametric estimators of such functionals are investigated. We consider estimation from both independent and dependent observations. The thesis consists of an introductory survey of the subject and some related theory and four papers (A-D). In Paper A, we consider a general class of entropy-type functionals which includes, for example, integer order Rényi entropy and certain Bregman divergences. We propose U-statistic estimators of these functionals based on the coincident or epsilon-close vector observations in the corresponding independent and identically distributed samples. We prove some asymptotic properties of the estimators such as consistency and asymptotic normality. Applications of the obtained results related to entropy maximizing distributions, stochastic databases, and image matching are discussed. In Paper B, we provide some important generalizations of the results for continuous distributions in Paper A. The consistency of the estimators is obtained under weaker density assumptions. Moreover, we introduce a class of functionals of quadratic order, including both entropy and divergence, and prove normal limit results for the corresponding estimators which are valid even for densities of low smoothness. The asymptotic properties of a divergence-based two-sample test are also derived. In Paper C, we consider estimation of the quadratic Rényi entropy and some related functionals for the marginal distribution of a stationary m-dependent sequence. We investigate asymptotic properties of the U-statistic estimators for these functionals introduced in Papers A and B when they are based on a sample from such a sequence. We prove consistency, asymptotic normality, and Poisson convergence under mild assumptions for the stationary m-dependent sequence. 
Applications of the results to time-series databases and entropy-based testing for dependent samples are discussed. In Paper D, we further develop the approach for estimation of quadratic functionals with m-dependent observations introduced in Paper C. We consider quadratic functionals for one or two distributions. The consistency and rate of convergence of the corresponding U-statistic estimators are obtained under weak conditions on the stationary m-dependent sequences. Additionally, we propose estimators based on incomplete U-statistics and show their consistency properties under more general assumptions.
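A minimal sketch of the epsilon-close-pair idea behind these U-statistic estimators, for the quadratic Rényi entropy H2 = -log(integral of f^2) of a 1D sample; the data, epsilon, and the plain unweighted form are illustrative, and the papers' exact constructions are not reproduced:

```python
import math
from itertools import combinations

def quadratic_renyi_ustat(x, eps):
    """U-statistic estimate of H2 = -log(integral of f^2) for 1D data.

    The fraction of epsilon-close pairs, divided by the volume 2*eps of
    the 1D epsilon-ball, estimates integral(f^2); take -log of that.
    """
    n = len(x)
    close = sum(1 for a, b in combinations(x, 2) if abs(a - b) <= eps)
    q_hat = close / math.comb(n, 2) / (2 * eps)
    return -math.log(q_hat)

grid = [i / 10 for i in range(10)]     # ten points spread over [0, 0.9]
h2 = quadratic_renyi_ustat(grid, 0.15)
```

On this grid, exactly the 9 consecutive pairs are within 0.15 of each other, so the estimate is -log((9/45)/0.3) = log(1.5).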
8

Padayachee, Jayanethie. "The application of Bayesian statistics and maximum entropy to Ion beam analysis techniques." Master's thesis, University of Cape Town, 1997. http://hdl.handle.net/11427/16143.

Abstract:
Bibliography: pages 128-129.
The elimination of some blurring property, such as the detector response function, from spectra has received a considerable amount of attention. The problem is usually complicated by the presence of noise in the data, and in general there exists an infinite set of possible solutions which are consistent with the data within the bounds imposed by the noise. Such a problem is known, generally, as an ill-defined inverse problem. Many techniques have been developed in an attempt to solve inverse problems, for example the problem of deconvolution, but these techniques employ ad hoc modifications to solve different problems. Bayesian statistics has been proved to be the only consistent method for solving inverse problems of the type where the information is expressed in terms of probability distributions. This dissertation presents results of applying the Bayesian formalism, together with the concepts of maximum information entropy and multiresolution pixons, to various inverse problems in ion beam analysis. The results of this method of deconvoluting Rutherford Backscattering Spectrometry (RBS) and Proton Induced X-ray Emission (PIXE) spectra are compared to the results from other deconvolution techniques, namely Fourier transforms, Jansson's method and maximum entropy (MaxEnt) without pixons. All the deconvolution techniques show an improvement in the resolution of the RBS spectra, but only the MaxEnt techniques show a significant improvement in the resolution of the PIXE spectra. The MaxEnt methods also produce physically acceptable results. The MaxEnt formalism was applied to the extraction of depth profiles from RBS and PIXE spectra and yielded good results. The technique was also used to deconvolute the beam profile from one-dimensional nuclear microprobe scans.
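The thesis compares Fourier transforms, Jansson's method, and MaxEnt; as a related illustration of iterative, nonnegativity-preserving deconvolution on a toy spectrum, here is a Richardson-Lucy sketch (a different scheme than the thesis's, used only to show the idea of unfolding a detector response):

```python
def convolve(signal, kernel):
    """Discrete convolution with a centred, odd-length kernel (zero-padded)."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += k * signal[idx]
        out.append(acc)
    return out

def richardson_lucy(observed, kernel, iterations=50):
    """Iterative deconvolution; keeps estimates nonnegative, as MaxEnt does."""
    estimate = [1.0] * len(observed)
    mirrored = kernel[::-1]
    for _ in range(iterations):
        blurred = convolve(estimate, kernel)
        ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blurred)]
        correction = convolve(ratio, mirrored)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

# A single sharp peak blurred by a detector-like response
truth = [0.0] * 11
truth[5] = 1.0
kernel = [0.25, 0.5, 0.25]
observed = convolve(truth, kernel)
restored = richardson_lucy(observed, kernel)
```

The iteration progressively sharpens the blurred peak back toward its true position while never producing negative intensities, the same qualitative behaviour sought from the MaxEnt reconstructions.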
9

Bright, Trevor James. "Non-fourier heat equations in solids analyzed from phonon statistics." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29710.

Abstract:
Thesis (M. S.)--Mechanical Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Zhang, Zhuomin; Committee Member: Kumar, Satish; Committee Member: Peterson, G. P. Part of the SMARTech Electronic Thesis and Dissertation Collection.
10

Snyder, Selena Tyr. "Time Series Modeling of Clinical Electroencephalogram Data - An Information Theory Approach." Ohio University Honors Tutorial College / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ouhonors1524830090342372.

11

Scafetta, Nicola. "An entropic approach to the analysis of time series." Thesis, University of North Texas, 2001. https://digital.library.unt.edu/ark:/67531/metadc3033/.

Abstract:
This thesis concerns the statistical analysis of time series. With compelling arguments we show that the Diffusion Entropy Analysis (DEA) is the only method in the literature of the Science of Complexity that correctly determines the scaling hidden within a time series reflecting a complex process. The time series is thought of as a source of fluctuations, and the DEA is based on the Shannon entropy of the diffusion process generated by these fluctuations. All traditional methods of scaling analysis, instead, are based on the variance of this diffusion process. The variance methods detect the real scaling only if the Gaussian assumption holds true. We call H the scaling exponent detected by the variance methods and d the real scaling exponent. If the time series is characterized by Fractional Brownian Motion, we have H = d and the scaling can be safely determined, in this case, by using the variance methods. If, on the contrary, the time series is characterized, for example, by Lévy statistics, then H ≠ d and the variance methods cannot be used to detect the true scaling. The Lévy walk yields the relation d = 1/(3-2H). In the case of Lévy flights, the variance diverges and the exponent H cannot be determined, whereas the scaling d exists and can be established by using the DEA. Therefore, only the joint use of two different scaling analysis methods, the variance scaling analysis and the DEA, can assess the real nature, Gauss or Lévy or something else, of a time series. Moreover, the DEA determines the information content, in the form of Shannon entropy or of any other convenient entropic indicator, at each time step of the process, which, given a sufficiently large number of data, is expected to become diffusion with scaling. This makes it possible to study the regime of transition from dynamics to thermodynamics, non-stationary regimes, and the saturation regime as well. First of all, the efficiency of the DEA is proved with theoretical arguments and with numerical work on artificial sequences.
Then we apply the DEA to three different sets of real data, Genome sequences, hard x-ray solar flare waiting times and sequences of sociological interest. In all these cases the DEA makes new properties, overlooked by the standard method of analysis, emerge.
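The core DEA recipe (turn the series into diffusion trajectories, histogram them, and track the Shannon entropy S(l), which should grow like A + d*ln(l)) can be sketched as follows; the window scheme, bin width, and the Gaussian test signal are my illustrative choices:

```python
import math
import random

def diffusion_entropy(series, window, bin_width=1.0):
    """Shannon entropy (nats) of the diffusion variable x(l) = windowed sums."""
    sums = [sum(series[t:t + window]) for t in range(len(series) - window + 1)]
    counts = {}
    for s in sums:
        key = round(s / bin_width)
        counts[key] = counts.get(key, 0) + 1
    n = len(sums)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

rng = random.Random(42)
noise = [rng.gauss(0, 1) for _ in range(20000)]  # uncorrelated Gaussian steps

s_small = diffusion_entropy(noise, 10)
s_large = diffusion_entropy(noise, 100)
# For ordinary diffusion S(l) ~ A + 0.5*ln(l), so the slope estimates d = 0.5
delta_est = (s_large - s_small) / (math.log(100) - math.log(10))
```

For uncorrelated Gaussian noise the fitted slope should sit near the ordinary-diffusion value d = 0.5; anomalous scaling in real data shows up as a slope away from 0.5.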
12

Fischer, Richard. "Modélisation de la dépendance pour des statistiques d'ordre et estimation non-paramétrique." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1039/document.

Abstract:
In this thesis we consider the modelling of the joint distribution of order statistics, i.e. random vectors with almost surely ordered components. The first part is dedicated to the probabilistic modelling of order statistics of maximal entropy with marginal constraints. Given the marginal constraints, the characterization of the joint distribution can be given by the associated copula. Chapter 2 presents an auxiliary result giving the maximum entropy copula with a fixed diagonal section. We give a necessary and sufficient condition for its existence, and derive an explicit formula for its density and entropy. Chapter 3 provides the solution for the maximum entropy problem for order statistics with marginal constraints by identifying the copula of the maximum entropy distribution. We give explicit formulas for the copula and the joint density. An application for modelling physical parameters is given in Chapter 4. In the second part of the thesis, we consider the problem of nonparametric estimation of maximum entropy densities of order statistics in Kullback-Leibler distance. Chapter 5 presents an aggregation method for probability density and spectral density estimation, based on the convex combination of the logarithms of these functions, and gives non-asymptotic bounds on the aggregation rate. In Chapter 6, we propose an adaptive estimation method based on a log-additive exponential model to estimate maximum entropy densities of order statistics which achieves the known minimax convergence rates. The method is applied to estimating flaw dimensions in Chapter 7.
13

Berrett, Thomas Benjamin. "Modern k-nearest neighbour methods in entropy estimation, independence testing and classification." Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/267832.

Abstract:
Nearest neighbour methods are a classical approach in nonparametric statistics. The k-nearest neighbour classifier can be traced back to the seminal work of Fix and Hodges (1951) and they also enjoy popularity in many other problems including density estimation and regression. In this thesis we study their use in three different situations, providing new theoretical results on the performance of commonly-used nearest neighbour methods and proposing new procedures that are shown to outperform these existing methods in certain settings. The first problem we discuss is that of entropy estimation. Many statistical procedures, including goodness-of-fit tests and methods for independent component analysis, rely critically on the estimation of the entropy of a distribution. In this chapter, we seek entropy estimators that are efficient and achieve the local asymptotic minimax lower bound with respect to squared error loss. To this end, we study weighted averages of the estimators originally proposed by Kozachenko and Leonenko (1987), based on the k-nearest neighbour distances of a sample. A careful choice of weights enables us to obtain an efficient estimator in arbitrary dimensions, given sufficient smoothness, while the original unweighted estimator is typically only efficient in up to three dimensions. A related topic of study is the estimation of the mutual information between two random vectors, and its application to testing for independence. We propose tests for the two different situations of the marginal distributions being known or unknown and analyse their performance. Finally, we study the classical k-nearest neighbour classifier of Fix and Hodges (1951) and provide a new asymptotic expansion for its excess risk. We also show that, in certain situations, a new modification of the classifier that allows k to vary with the location of the test point can provide improvements. 
This has applications to the field of semi-supervised learning, where, in addition to labelled training data, we also have access to a large sample of unlabelled data.
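The Kozachenko-Leonenko estimator that the weighted versions build on has a short closed form. A 1D, brute-force O(n^2) sketch, assuming illustrative choices of k and sample (the weighted construction of the thesis is not reproduced):

```python
import math
import random

def digamma_int(m):
    """Digamma at a positive integer: psi(m) = -gamma + sum_{j<m} 1/j."""
    return -0.5772156649015329 + sum(1.0 / j for j in range(1, m))

def kl_entropy_1d(sample, k=3):
    """Kozachenko-Leonenko k-nearest-neighbour entropy estimate (nats), 1D.

    H_hat = psi(n) - psi(k) + log(V_1) + (1/n) * sum_i log(rho_{k,i}),
    where rho_{k,i} is the distance to the k-th nearest neighbour and
    V_1 = 2 is the volume of the unit ball in one dimension.
    """
    n = len(sample)
    log_dists = []
    for i, xi in enumerate(sample):
        dists = sorted(abs(xi - xj) for j, xj in enumerate(sample) if j != i)
        log_dists.append(math.log(dists[k - 1]))
    return digamma_int(n) - digamma_int(k) + math.log(2) + sum(log_dists) / n

rng = random.Random(1)
x = [rng.random() for _ in range(300)]    # Uniform(0,1): true entropy is 0
h = kl_entropy_1d(x)
h_scaled = kl_entropy_1d([2 * xi for xi in x])
```

A handy sanity check: scaling the data by a constant c shifts the estimate by exactly log(c), because every nearest-neighbour distance scales by c while the digamma terms cancel.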
14

Zhang, Di. "INFORMATION THEORETIC CRITERIA FOR IMAGE QUALITY ASSESSMENT BASED ON NATURAL SCENE STATISTICS." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2842.

Abstract:
Measurement of visual quality is crucial for various image and video processing applications.

The goal of objective image quality assessment is to introduce a computational quality metric that can predict image or video quality. Many methods have been proposed in the past decades. Traditionally, measurements convert the spatial data into some other feature domain, such as the Fourier domain, and compute a similarity, such as the mean-square or Minkowski distance, between the test data and the reference or perfect data; however, only limited success has been achieved. None of the complicated metrics shows any great advantage over other existing metrics.

The common idea shared among many proposed objective quality metrics is that human visual error sensitivities vary across spatial and temporal frequency and directional channels. In this thesis, image quality assessment is approached through a novel framework that computes the information lost in each channel, rather than the similarities used in previous methods. Based on natural scene statistics and several image models, an information-theoretic framework is designed to compute the perceptual information contained in images and evaluate image quality in the form of entropy.

The thesis is organized as follows. Chapter I gives a general introduction to previous work in this research area and a brief description of the human visual system. In Chapter II, statistical models for natural scenes are reviewed. Chapter III proposes the core ideas for computing the perceptual information contained in images. In Chapter IV, information-theoretic criteria for image quality assessment are defined. Chapter V presents the simulation results in detail. In the last chapter, future directions and improvements of this research are discussed.
15

Ekström, Magnus. "Maximum spacing methods and limit theorems for statistics based on spacings." Doctoral thesis, Umeå universitet, Matematisk statistik, 1997. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-85176.

Abstract:
The maximum spacing (MSP) method, introduced by Cheng and Amin (1983) and independently by Ranneby (1984), is a general estimation method for continuous univariate distributions. The MSP method, which is closely related to the maximum likelihood (ML) method, can be derived from an approximation, based on simple spacings, of the Kullback-Leibler information. It is known to give consistent and asymptotically efficient estimates under general conditions and works also in situations where the ML method fails, e.g. for the three-parameter Weibull model. In this thesis it is proved under general conditions that MSP estimates of parameters in the Euclidean metric are strongly consistent. The ideas behind the MSP method are extended and a class of estimation methods is introduced. These methods, called generalized MSP methods, are derived from approximations based on sum-functions of mth-order spacings of certain information measures, i.e. the φ-divergences introduced by Csiszár (1963). It is shown under general conditions that generalized MSP methods give consistent estimates. In particular, it is proved that generalized MSP methods give L1-consistent estimates in any family of distributions with unimodal densities, without any further conditions on the distributions. Other properties, such as distributional robustness, are also discussed. Several limit theorems for sum-functions of mth-order spacings are given, for m fixed as well as for the case when m is allowed to increase to infinity with the sample size. These results provide a strongly consistent nonparametric estimator of entropy, as well as a characterization of the uniform distribution. Further, it is shown that Cressie's (1976) goodness-of-fit test is strongly consistent against all continuous alternatives.
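The maximum spacing idea, choosing the parameter that makes the transformed spacings F(x(i)) - F(x(i-1)) as uniform as possible, fits in a few lines. A sketch for an exponential model via grid search; the model, data, and grid are illustrative assumptions, not the thesis's examples:

```python
import math

def msp_objective(lam, data):
    """Mean log-spacing of the fitted exponential CDF over the sorted data."""
    cdf = [1.0 - math.exp(-lam * x) for x in sorted(data)]
    points = [0.0] + cdf + [1.0]       # spacings include both boundary gaps
    spacings = [b - a for a, b in zip(points, points[1:])]
    return sum(math.log(s) for s in spacings) / len(spacings)

def msp_exponential(data, grid=None):
    """Maximum spacing estimate of an exponential rate by grid search."""
    if grid is None:
        grid = [i / 1000 for i in range(1, 5001)]   # rates 0.001 .. 5.0
    return max(grid, key=lambda lam: msp_objective(lam, data))

lam_hat = msp_exponential([1.0, 2.0, 3.0])
```

Maximizing the product (equivalently, the mean log) of the spacings is the MSP criterion; for this tiny sample the estimate lands near the ML value 1/mean = 0.5.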
16

Conley, Thomas A. "Effective Programmatic Analysis of Network Flow Data for Security and Visualization using Higher-order Statistics and Domain Specific Embedded Languages." Ohio University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1336482912.

17

Taconeli, Cesar Augusto. "Árvores de classificação multivariadas fundamentadas em coeficientes de dissimilaridade e entropia." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-15102008-082243/.

Abstract:
The statistical analysis of large datasets requires the use of flexible methodologies that can provide insight and understanding even in the presence of difficulties such as large numbers of variables having varying levels of association between themselves, and missing data. The construction of classification and regression trees allows for modeling of a categorical or numerical response variable as a function of a set of covariates, while bypassing many of the cited difficulties. Multivariate trees extend classification and regression techniques to allow for joint analysis of two or more response variables. In recent studies, application of multivariate classification and regression techniques has been most common in situations involving numerical response variables. In this work we propose alternatives for constructing multivariate classification trees for multiple categorized response variables. These alternatives are based on dissimilarity and entropy measures. A simulation study was used to examine the effect of variable correlations and entropies on the performance of the proposed methodology (results are better for high correlations and entropies). Analysis of data on alcohol consumption and smoking among inhabitants of Botucatu (SP) complements the study by showing that factors such as education level, daily occupation, and the possibility of sharing problems with friends influence alcohol consumption and smoking.
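The entropy measure that drives such classification-tree splits can be illustrated with a single information-gain computation; the binary class counts and the candidate splits below are made up for the example:

```python
import math

def entropy(counts):
    """Shannon entropy (bits) of class counts; pure nodes give 0."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def information_gain(parent, children):
    """Entropy reduction achieved by splitting `parent` into `children`."""
    n = sum(parent)
    weighted = sum(sum(ch) / n * entropy(ch) for ch in children)
    return entropy(parent) - weighted

# A perfectly separating split removes all uncertainty: gain = 1 bit
gain_perfect = information_gain([5, 5], [[5, 0], [0, 5]])
# A split that preserves the class mix gains nothing
gain_useless = information_gain([5, 5], [[3, 3], [2, 2]])
```

Tree-growing algorithms evaluate every candidate split this way and keep the one with the largest gain; the multivariate extensions discussed above replace the single-response entropy with entropy or dissimilarity measures over several responses at once.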
18

Eliason, Ryan Lee. "Application of Convex Methods to Identification of Fuzzy Subpopulations." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2242.

Abstract:
In large observational studies, data are often highly multivariate, with many discrete and continuous variables measured on each observational unit. One often derives subpopulations to facilitate analysis. Traditional approaches suggest modeling such subpopulations with a compilation of interaction effects. However, when many interaction effects define each subpopulation, it becomes easier to model membership in a subpopulation rather than numerous interactions. In many cases, subjects are not complete members of a subpopulation but rather partial members of multiple subpopulations. Grade of Membership scores preserve the integrity of this partial membership. By generalizing an analytical chemistry concept related to chromatography-mass spectrometry, we obtain a method that can identify latent subpopulations and corresponding Grade of Membership scores for each observational unit.
19

Pachas, Erik W. "Probabilistic Methods In Information Theory." CSUSB ScholarWorks, 2016. https://scholarworks.lib.csusb.edu/etd/407.

Abstract:
Given a probability space, we analyze the uncertainty, that is, the amount of information of a finite system, by studying the entropy of the system. We also extend the concept of entropy to a dynamical system by introducing a measure preserving transformation on a probability space. After showing some theorems and applications of entropy theory, we study the concept of ergodicity, which helps us to further analyze the information of the system.
20

Davalos, Trevino Antonio. "Sur les Propriétés Statistiques de l'Entropie de Permutation Multi-échelle et ses Raffinements; applications sur les Signaux Électromyographiques de Surface." Thesis, Orléans, 2020. http://www.theses.fr/2020ORLE3102.

Abstract:
L'entropie de permutation (EP) et l'entropie de permutation multi-échelle (EPM) sont largement utilisées dans l'analyse des séries temporelles à la recherche de régularités, notamment dans le contexte du signal biomédical. Les chercheurs doivent trouver des interprétations optimales, qui peuvent être compromises en ne prenant pas en compte les propriétés de l'algorithme EPM, notamment ses propriétés statistiques. C'est pourquoi, dans le présent travail, nous développons la théorie statistique qui sous-tend l'EPM, notamment en ce qui concerne la caractérisation de ses deux premiers moments dans le contexte multi-échelle. Nous explorons ensuite les versions composites de l'EPM, afin de comprendre les propriétés sous-jacentes à l'amélioration de leurs performances. Nous avons également testé les valeurs attendues de l'EPM pour des processus stochastiques gaussiens largement utilisés, ce qui permet d'obtenir un repère d'entropie lorsque l'on utilise ces modèles pour simuler des signaux réels. Enfin, nous appliquons les méthodes EPM classique et composite à des données électromyographiques de surface (sEMG), afin de différencier les différentes dynamiques d'activité musculaire dans les contractions isométriques. À la suite de notre projet, nous avons constaté que l'EPM est une statistique biaisée, qui décroît par rapport au facteur multi-échelle indépendamment de la distribution de probabilité des signaux. Nous avons constaté que la variance de la statistique de l'EPM est fortement dépendante de la valeur de l'EPM elle-même, et presque égale à sa borne inférieure de Cramér-Rao, ce qui signifie que c'est un estimateur efficace. Nous avons constaté que les versions composites, bien qu'elles constituent une amélioration, mesurent également des informations redondantes, ce qui modifie l'estimation de l'EPM.
En réponse, nous avons fourni un nouvel algorithme comme alternative à la multi-échelle à gros grains, ce qui améliore encore les estimations. Appliquée à des modèles gaussiens corrélés généraux, l'EPM s'est révélée entièrement caractérisée par les paramètres du modèle. Ainsi, nous avons développé une formulation générale de l'EPM attendue pour les faibles dimensions de plongement. Appliquées à des signaux sEMG réels, toutes les méthodes ont permis de distinguer les états de fatigue et de non-fatigue, en particulier pour les dimensions de plongement élevées. En outre, la méthode EPM que nous proposons améliore la différence entre les états d'activité. Par conséquent, nous fournissons au lecteur non seulement un développement de la théorie actuelle de l'EPM, mais aussi les implications de ces résultats, à la fois dans le contexte de la modélisation et dans l'application de ces techniques dans le domaine biomédical
Permutation Entropy (PE) and Multiscale Permutation Entropy (MPE) are extensively used in the analysis of time series in search of regularities, particularly in the context of biomedical signals. Researchers need to find optimal interpretations, which can be compromised by not taking into account the properties of the MPE algorithm, particularly its statistical properties. Therefore, in the present work we expand on the statistical theory behind MPE, particularly the characterization of its first two moments in the context of multiscaling. We then explore the composite versions of MPE, in order to understand the properties underlying their improved performance. We also tested the expected MPE values for widely used Gaussian stochastic processes, which yields an entropy benchmark when these models are used to simulate real signals. Finally, we apply both the classical and composite MPE methods to surface electromyographic (sEMG) data, in order to differentiate muscle activity dynamics in isometric contractions. As a result of our project, we found MPE to be a biased statistic, which decreases with the multiscaling factor regardless of the signal's probability distribution. We found the variance of the MPE statistic to be highly dependent on the value of MPE itself, and almost equal to its Cramér-Rao lower bound, which means it is an efficient estimator. We found that the composite versions, albeit an improvement, also measure redundant information, which modifies the MPE estimation. In response, we provide a new algorithm as an alternative to coarse-grained multiscaling, which further improves the estimations. When applied to general correlated Gaussian models, we found MPE to be completely characterized by the model parameters. Thus, we developed a general formulation for the expected MPE for low embedding dimensions.
When applied to real sEMG signals, all methods were able to distinguish between fatigue and non-fatigue states, particularly for high embedding dimensions. Moreover, we found our proposed MPE method to enhance the difference between activity states. Therefore, we provide the reader not only with a development of the current MPE theory, but also with the implications of these findings, both in the context of modelling and in the application of these techniques in the biomedical field
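For orientation, the two building blocks this abstract relies on, ordinal-pattern (permutation) entropy and coarse-grained multiscaling, can be sketched in a few lines. This is a generic textbook sketch, not the thesis's implementation; the function names are ours and ties in the ordinal patterns are broken by position:

```python
import math

def coarse_grain(x, tau):
    """Average consecutive non-overlapping windows of length tau
    (the usual multiscale preprocessing step)."""
    return [sum(x[i:i + tau]) / tau for i in range(0, len(x) - tau + 1, tau)]

def permutation_entropy(x, m):
    """Normalised Shannon entropy of ordinal-pattern frequencies
    for embedding dimension m (0 = fully predictable, 1 = maximal)."""
    counts = {}
    for i in range(len(x) - m + 1):
        # rank the m values in the window -> one ordinal pattern
        pattern = tuple(sorted(range(m), key=lambda k: x[i + k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum(c / total * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(m))
```

For a monotone series every window yields the same ordinal pattern, so the normalised entropy is 0; for white noise it approaches 1.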
APA, Harvard, Vancouver, ISO, and other styles
21

Vieira, Joice Melo 1980. "Transição para a vida adulta em São Paulo : cenários e tendências socio-demográficas." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/280644.

Full text
Abstract:
Advisor: Maria Coleta F. A. de Oliveira
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas
Resumo: A transição para a vida adulta é um momento crítico do curso de vida dos sujeitos. É uma fase que se caracteriza por importantes mudanças de status, que assinalam de diferentes formas a passagem da condição de dependente à condição de independente. Do ponto de vista sócio-demográfico, as mudanças de status mais importantes são: a conversão do indivíduo de estudante em trabalhador, de membro dependente de um domicílio em chefe de domicílio, de solteiro em pessoa em união, de filho (a) em pai ou mãe. O principal objetivo é descrever e analisar o processo de transição para a vida adulta no Estado de São Paulo em dois momentos de alargamento da coorte jovem, 1970 e 2000. Tanto em um quanto em outro se observam as chamadas ondas jovens, apesar da diferença nos cenários sócio-demográficos no país e no Estado de São Paulo. Busca-se uma apreensão integrada da passagem para a vida adulta, refletindo sobre como fatores de ordem econômica e institucional podem influenciar motivações e decisões acerca de eventos da trajetória de vida dos indivíduos. As fontes de dados utilizadas foram os censos demográficos de 1970 e 2000. Do ponto de vista metodológico, a principal inovação consiste na aplicação da análise de entropia de coortes sintéticas. Com ela é possível mensurar a (des) padronização do curso da vida, além de descrever o ritmo da passagem para a vida adulta ao longo das idades consideradas jovens. Com vistas a explorar algumas dimensões da vida juvenil no Estado de São Paulo, são utilizadas as informações da Pesquisa de Condições de Vida de 2006. A principal contribuição deste estudo consiste em explorar os diferenciais encontrados no tempo de transição para a vida adulta. Determinadas características sóciodemográficas dos sujeitos - como o sexo, a situação de domicílio, a cor/raça e a camada de renda - estão claramente associadas ao tempo de juventude e ao ritmo das transições. 
A duração da transição, se mais curta ou mais longa, muda de acordo com essas variáveis sócio-demográficas. Uma questão pertinente é em que medida esta diversidade no processo de transição para a vida adulta é produto de desigualdades sociais e, concomitantemente, reprodutora dessas mesmas desigualdades.
Abstract: The transition to adulthood is a critical moment in individuals' life courses. This stage is characterized by important status changes, which may take different routes in the passage from a dependent towards an independent condition. From a sociodemographic perspective, the most important status changes are from student to worker, from dependent member to head of a household, from single to married, and from the condition of a child to that of a mother or a father. These are the dimensions selected for the analysis of the process of transition to adulthood in the State of São Paulo, focusing on two points in time, 1970 and 2000. These were moments in which young waves occurred, due to demographic factors affecting age structures, especially past fertility changes. Young cohorts increased in volume at both moments, facing different sociodemographic contexts at the time. The aim of this analysis is to develop an integrated approach to demographic factors as well as economic and institutional ones, so that the timing of the transitions can be described and individuals' motivations and decisions can be understood. The main data sources used are the demographic censuses of 1970 and 2000. In addition, data from the Survey of Life Conditions (PCV) of 2006 were also used. From the methodological point of view, the principal innovation of this study is the application of the analysis of entropy of synthetic cohorts to the Brazilian census data. Entropy measures give an image of the process of standardization / de-standardization of life courses, and make it possible to describe the pace of the transition to adulthood across time. The contribution of this doctoral thesis is to reveal differences in the timing of the transitions and to explore their meaning in the context where they occur. The variables taken for this endeavor are sex, household position, color/race, and income level.
A shorter or longer transition depends on these sociodemographic characteristics. The results show that differentials in the pace of transition to adulthood are both a result of and an influential factor on social inequality.
Doctorate
Doctor of Demography
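The entropy-of-synthetic-cohorts idea the thesis applies can be illustrated generically: at each age one measures the Shannon entropy of the distribution of young people across life-course states (student, worker, married, parent, and so on). A minimal sketch of that measure, not the thesis's exact formulation:

```python
import math

def status_entropy(shares):
    """Shannon entropy (in bits) of population shares across life-course
    states at a given age: 0 when everyone occupies the same state,
    log2(k) when k states are equally common (maximal de-standardization)."""
    return -sum(p * math.log2(p) for p in shares if p > 0)
```

Tracking this quantity across ages gives the pace and (de)standardization profile of the transition for each sociodemographic group.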
APA, Harvard, Vancouver, ISO, and other styles
22

Lundquist, Anders. "Contributions to the theory of unequal probability sampling." Doctoral thesis, Umeå : Department of Mathematics and Mathematical Statistics, Umeå University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-22459.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Cohn, Trevor A. "Scaling conditional random fields for natural language processing /." Connect to thesis, 2007. http://eprints.unimelb.edu.au/archive/00002874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Posani, Lorenzo. "Caratterizzazione della misura di entropia di singolo nodo nell'ambito della teoria statistica dei network." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/6067/.

Full text
Abstract:
This work addresses the definition and characterization of a single-node entropy measure within the statistical theory of networks, in order to obtain node-level information for analysis and classification purposes. Some properties of these observables were introduced and studied, since the Network Entropy previously defined and used in the same context provides global information at the level of the whole network. The results of the analyses carried out with this definition were compared with a second definition of single-node entropy taken from the literature, applying both measures to the same problem of characterizing two classes of nodes within a network
APA, Harvard, Vancouver, ISO, and other styles
25

Barkino, Iliam. "Summary Statistic Selection with Reinforcement Learning." Thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-390838.

Full text
Abstract:
Multi-armed bandit (MAB) algorithms could be used to select a subset of the k most informative summary statistics from a pool of m possible summary statistics, by reformulating the subset selection problem as a MAB problem. This is suggested by experiments that tested five MAB algorithms (Direct, Halving, SAR, OCBA-m, and Racing) on the reformulated problem and compared the results to two established subset selection algorithms (Minimizing Entropy and Approximate Sufficiency). The MAB algorithms yielded errors on par with the established methods, but in only a fraction of the time. Establishing MAB algorithms as a new standard for summary-statistic subset selection could therefore save numerous scientists substantial amounts of time when selecting summary statistics for approximate Bayesian computation.
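One of the five tested MAB algorithms, Halving, is easy to sketch. The arm samplers and pull budget below are illustrative assumptions, not the study's setup:

```python
def halving(arms, pulls_per_round=20):
    """Sequential-halving sketch: estimate each surviving arm's mean reward
    from a fixed number of pulls, keep the better half, repeat until one
    arm is left. `arms` maps names to zero-argument reward samplers."""
    survivors = list(arms)
    while len(survivors) > 1:
        means = {a: sum(arms[a]() for _ in range(pulls_per_round)) / pulls_per_round
                 for a in survivors}
        survivors.sort(key=lambda a: means[a], reverse=True)
        survivors = survivors[: (len(survivors) + 1) // 2]
    return survivors[0]
```

In the summary-statistic application, each "arm" would score a candidate statistic, e.g. by an informativeness criterion evaluated on simulated data.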
APA, Harvard, Vancouver, ISO, and other styles
26

Vigneaux, Juan Pablo. "Topology of statistical systems : a cohomological approach to information theory." Thesis, Sorbonne Paris Cité, 2019. http://www.theses.fr/2019USPCC070.

Full text
Abstract:
Cette thèse étend dans plusieurs directions l'étude cohomologique de la théorie de l'information initiée par Baudot et Bennequin. On introduit une notion d'espace statistique basée sur les topos, puis on étudie plusieurs invariants cohomologiques. Les fonctions d'information et quelques objets associés apparaissent comme des classes de cohomologie distinguées ; les équations de cocycle correspondantes codent les propriétés récursives de ces fonctions. L'information a donc une signification topologique et la topologie sert de cadre unificateur. La première partie traite des fondements géométriques de la théorie. Les structures d'information sont présentées sous forme de catégories qui codent les relations de raffinement entre différents observables statistiques. On étudie les produits et coproduits des structures d'information, ainsi que leur représentation par des fonctions mesurables ou des opérateurs hermitiens. Chaque structure d'information donne lieu à un site annelé ; la cohomologie de l'information est introduite avec les outils homologiques développés par Artin, Grothendieck, Verdier et leurs collaborateurs. La deuxième partie étudie la cohomologie des variables aléatoires discrètes. Les fonctions d'information (l'entropie de Shannon, l'alpha-entropie de Tsallis, la divergence de Kullback-Leibler) apparaissent sous la forme de 1-cocycles pour certains modules de coefficients probabilistes (fonctions de lois de probabilité). Dans le cas combinatoire (fonctions des histogrammes), le seul 0-cocycle est la fonction exponentielle, et les 1-cocycles sont des coefficients multinomiaux généralisés (Fontené-Ward). Il existe une relation asymptotique entre les cocycles combinatoires et probabilistes. La troisième partie étudie en détail les coefficients q-multinomiaux, en montrant que leur taux de croissance est lié à la 2-entropie de Tsallis (entropie quadratique).
Lorsque q est une puissance première, ces coefficients q-multinomiaux comptent les drapeaux d'espaces vectoriels finis de longueur et de dimensions prescrites. On obtient une explication combinatoire de la non-additivité de l'entropie quadratique et une justification fréquentiste du principe de maximisation d'entropie quadratique. On introduit un processus stochastique à temps discret associé à la distribution de probabilité q-binomial qui génère des espaces vectoriels finis (drapeaux de longueur 2). La concentration de la mesure sur certains sous-espaces typiques permet d'étendre la théorie de Shannon à ce cadre.La quatrième partie traite de la généralisation de la cohomologie de l'information aux variables aléatoires continues. On étudie les propriétés de fonctorialité du conditionnement (vu comme désintégration) et sa compatibilité avec la marginalisation. Les calculs cohomologiques sont limités aux variables réelles gaussiennes. Lorsque les coordonnées sont fixées, les 1-cocycles sont l’entropie différentielle ainsi que les moments généralisés. Les catégories grassmanniennes permettent de traiter les calculs canoniquement et retrouver comme seuls classes de cohomologie de degré 1 l'entropie et la dimension. Ceci constitue une nouvelle caractérisation algébrique de l'entropie différentielle
This thesis extends in several directions the cohomological study of information theory pioneered by Baudot and Bennequin. We introduce a topos-theoretical notion of statistical space and then study several cohomological invariants. Information functions and related objects appear as distinguished cohomology classes; the corresponding cocycle equations encode recursive properties of these functions. Information thus has a topological meaning, and topology serves as a unifying framework. Part I discusses the geometrical foundations of the theory. Information structures are introduced as categories that encode the relations of refinement between different statistical observables. We study products and coproducts of information structures, as well as their representation by measurable functions or Hermitian operators. Every information structure gives rise to a ringed site; we discuss in detail the definition of information cohomology using the homological tools developed by Artin, Grothendieck, Verdier and their collaborators. Part II studies the cohomology of discrete random variables. Information functions (Shannon entropy, Tsallis alpha-entropy, Kullback-Leibler divergence) appear as 1-cocycles for appropriate modules of probabilistic coefficients (functions of probability laws). In the combinatorial case (functions of histograms), the only 0-cocycle is the exponential function, and the 1-cocycles are generalized multinomial coefficients (Fontené-Ward). There is an asymptotic relation between the combinatorial and probabilistic cocycles. Part III studies in detail the q-multinomial coefficients, showing that their growth rate is connected to the Tsallis 2-entropy (quadratic entropy). When q is a prime power, these q-multinomial coefficients count flags of finite vector spaces with prescribed length and dimensions.
We obtain a combinatorial explanation for the nonadditivity of the quadratic entropy and a frequentist justification for the maximum entropy principle with Tsallis statistics. We introduce a discrete-time stochastic process associated to the q-binomial probability distribution that generates finite vector spaces (flags of length 2). The concentration of measure on certain typical subspaces allows us to extend Shannon's theory to this setting. Part IV discusses the generalization of information cohomology to continuous random variables. We study the functoriality properties of conditioning (seen as disintegration) and its compatibility with marginalization. The cohomological computations are restricted to the real-valued, Gaussian case. When coordinates are fixed, the 1-cocycles are the differential entropy as well as generalized moments. When computations are done in a coordinate-free manner, with the so-called Grassmannian categories, we recover as the only degree-one cohomology classes the entropy and the dimension. This constitutes a novel algebraic characterization of differential entropy
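The classical (q to 1) shadow of the Part III result, that multinomial growth rates encode an entropy, can be checked numerically; `tsallis_2` and `log_multinomial` are our own illustrative helpers, not notation from the thesis:

```python
import math

def tsallis_2(p):
    """Tsallis quadratic entropy S_2(p) = 1 - sum_i p_i^2."""
    return 1.0 - sum(pi * pi for pi in p)

def log_multinomial(n, ks):
    """log of the multinomial coefficient n! / (k_1! ... k_m!),
    computed stably via log-gamma."""
    return math.lgamma(n + 1) - sum(math.lgamma(k + 1) for k in ks)
```

Here log C(n; n/2, n/2) / n approaches log 2, the Shannon entropy of the uniform binary law; the thesis's q-deformed coefficients play the same role with Shannon entropy replaced by the Tsallis 2-entropy.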
APA, Harvard, Vancouver, ISO, and other styles
27

Cosma, Ioana Ada. "Dimension reduction of streaming data via random projections." Thesis, University of Oxford, 2009. http://ora.ox.ac.uk/objects/uuid:09eafd84-8cb3-4e54-8daf-18db7832bcfc.

Full text
Abstract:
A data stream is a transiently observed sequence of data elements that arrive unordered, with repetitions, and at a very high rate of transmission. Examples include Internet traffic data, networks of banking and credit transactions, and radar-derived meteorological data. Computer science and engineering communities have developed randomised, probabilistic algorithms to estimate statistics of interest over streaming data on the fly, with small computational complexity and storage requirements, by constructing low-dimensional representations of the stream known as data sketches. This thesis combines techniques of statistical inference with algorithmic approaches, such as hashing and random projections, to derive efficient estimators for cardinality, l_{alpha} distance and quasi-distance, and entropy over streaming data. I demonstrate an unexpected connection between two approaches to cardinality estimation that involve indirect record keeping: the first using pseudo-random variates and storing selected order statistics, and the second using random projections. I show that l_{alpha} distances and quasi-distances between data streams, and entropy, can be recovered from random projections that exploit properties of alpha-stable distributions with full statistical efficiency. This is achieved by the method of L-estimation in a single-pass algorithm with modest computational requirements. The proposed estimators have good small-sample performance, improved by the methods of trimming and winsorising; in other words, the value of these summary statistics can be approximated with high accuracy from data sketches of low dimension. Finally, I consider the problem of convergence assessment of Markov Chain Monte Carlo methods for simulating from complex, high-dimensional, discrete distributions.
I argue that online, fast, and efficient computation of summary statistics such as cardinality, entropy, and l_{alpha} distances may be a useful qualitative tool for detecting lack of convergence, and illustrate this with simulations of the posterior distribution of a decomposable Gaussian graphical model via the Metropolis-Hastings algorithm.
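The random-projection technique underlying these estimators can be sketched in its simplest special case, alpha = 2, where the stable law is Gaussian and the sketch preserves l_2 distances. The names, the seed-sharing trick, and the plain moment estimator below are our own simplifications; the thesis's L-estimation approach is more refined:

```python
import math
import random

def gaussian_sketch(x, k=1000, seed=0):
    """Project x onto k random Gaussian directions. Sketches of two vectors
    are comparable only if they share the same seed and dimension, so the
    same random directions are used for both."""
    rng = random.Random(seed)
    return [sum(rng.gauss(0, 1) * xi for xi in x) for _ in range(k)]

def l2_from_sketches(sx, sy):
    """Estimate the Euclidean distance between the original vectors:
    each sketch coordinate difference is N(0, d^2), so the mean square
    recovers d^2."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sx, sy)) / len(sx))
```

The estimate concentrates around the true distance at rate O(1/sqrt(k)), while the sketch dimension k is independent of the stream length.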
APA, Harvard, Vancouver, ISO, and other styles
28

Lu, Zijun. "Theoretical and Numerical Analysis of Phase Changes in Soft Condensed Matter." Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case15620007885239.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Torku, Thomas K. "Takens Theorem with Singular Spectrum Analysis Applied to Noisy Time Series." Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etd/3013.

Full text
Abstract:
The evolution of big data has led to financial time series becoming increasingly complex, noisy, non-stationary and nonlinear. Takens theorem can be used to analyze and forecast nonlinear time series, but even small amounts of noise can hopelessly corrupt a Takens approach. In contrast, Singular Spectrum Analysis is an excellent tool for both forecasting and noise reduction. Fortunately, it is possible to combine the Takens approach with Singular Spectrum Analysis (SSA), and in fact, estimation of key parameters in Takens theorem is performed with Singular Spectrum Analysis. In this thesis, we combine the denoising abilities of SSA with the Takens theorem approach to make the manifold reconstruction outcomes of Takens theorem less sensitive to noise. In particular, in the course of performing SSA on a noisy time series, we branch off into a Takens theorem approach. We apply this approach to a variety of noisy time series.
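The delay-coordinate reconstruction at the heart of the Takens approach is compact enough to state in code. This is a generic sketch; in the thesis the embedding dimension m and delay tau are estimated via SSA:

```python
def delay_embed(x, m, tau):
    """Takens delay embedding: each reconstructed state is the tuple
    (x_t, x_{t+tau}, ..., x_{t+(m-1)*tau})."""
    span = (m - 1) * tau
    return [tuple(x[t + j * tau] for j in range(m))
            for t in range(len(x) - span)]
```

Forecasting then proceeds on the reconstructed states, e.g. by nearest-neighbour prediction in the embedded space.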
APA, Harvard, Vancouver, ISO, and other styles
30

Matějíček, Jaroslav. "Generátory náhodných čísel pro kryptografii." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236519.

Full text
Abstract:
The content of this thesis is the design and statistical testing of two different hardware random number generators. It also includes an overview of sources of entropy, algorithms used to correct deviations from the normal distribution, and a description of the statistical tests.
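A classic example of the bias-correction algorithms the abstract mentions is the von Neumann corrector, sketched here as a generic illustration (not necessarily the method used in the thesis):

```python
def von_neumann_corrector(bits):
    """Von Neumann corrector: map bit pairs 01 -> 0 and 10 -> 1,
    discarding 00 and 11. For independent but biased bits the two kept
    pairs are equally likely, so the output is unbiased, at the cost
    of discarding at least half of the input."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]
```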
APA, Harvard, Vancouver, ISO, and other styles
31

Thomaz, Carlos Eduardo. "Maximum entropy covariance estimate for statistical pattern recognition." Thesis, Imperial College London, 2004. http://hdl.handle.net/10044/1/8755.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Mihelich, Martin. "Vers une compréhension du principe de maximisation de production d'entropie." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLS038/document.

Full text
Abstract:
Dans cette thèse nous essayons de comprendre pourquoi le Principe de Maximisation de Production d'Entropie (MEP) donne de très bons résultats dans de nombreux domaines de la physique hors équilibre et notamment en climatologie. Pour ce faire, nous étudions ce principe sur des systèmes jouets de la physique statistique qui reproduisent les comportements des modèles climatiques. Nous avons notamment travaillé sur l'Asymmetric Simple Exclusion Process (ASEP) et le Zero Range Process (ZRP). Ceci nous a permis tout d'abord de relier MEP à un autre principe qui est le principe de maximisation d'entropie de Kolmogorov-Sinai (MKS). De plus, l'application de MEP à ces systèmes jouets donne des résultats physiquement cohérents. Nous avons ensuite voulu étendre le lien entre MEP et MKS à des systèmes plus compliqués, avant de montrer que, pour les chaînes de Markov, maximiser l'entropie de KS revenait à minimiser le temps que le système prend pour atteindre son état stationnaire (mixing time). Enfin, nous avons appliqué MEP à la convection atmosphérique
In this thesis we try to understand why the maximum entropy production principle (MEP) gives remarkably good results in a wide range of fields of non-equilibrium physics, and notably in climatology. We therefore study this principle on classical toy models of statistical physics which mimic the behaviour of climate models. In particular we worked on the Asymmetric Simple Exclusion Process (ASEP) and on the Zero Range Process (ZRP). This enabled us first to connect MEP to another principle, the maximum Kolmogorov-Sinai entropy principle (MKS). Moreover, the application of MEP to these systems gives results that are physically coherent. We then wanted to extend this link between MEP and MKS to more complicated systems, before showing that, for Markov chains, maximising the KS entropy is the same as minimising the time the system takes to reach its stationary state (the mixing time). Finally, we applied MEP to atmospheric convection
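The Kolmogorov-Sinai entropy that the MKS principle maximises has a closed form for Markov chains, which is part of what makes toy models like ASEP and ZRP tractable; a minimal sketch (notation ours):

```python
import math

def ks_entropy(P, pi):
    """Kolmogorov-Sinai entropy rate of a stationary Markov chain:
    h_KS = -sum_i pi_i * sum_j P[i][j] * log P[i][j],
    where P is the transition matrix and pi its stationary distribution."""
    return -sum(pi[i] * sum(p * math.log(p) for p in row if p > 0)
                for i, row in enumerate(P))
```

For a two-state chain the uniform transition matrix attains the maximum log 2; biasing the transitions lowers h_KS.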
APA, Harvard, Vancouver, ISO, and other styles
33

Granado, Elvalicia A. "Comparing Three Effect Sizes for Latent Class Analysis." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc822835/.

Full text
Abstract:
Traditional latent class analysis (LCA) considers entropy R2 as the only measure of effect size. However, entropy may not always be reliable, a low boundary is not agreed upon, and good separation is limited to values of greater than .80. As applications of LCA grow in popularity, it is imperative to use additional sources to quantify LCA classification accuracy. Greater classification accuracy helps to ensure that the profile of the latent classes reflect the profile of the true underlying subgroups. This Monte Carlo study compared the quantification of classification accuracy and confidence intervals of three effect sizes, entropy R2, I-index, and Cohen’s d. Study conditions included total sample size, number of dichotomous indicators, latent class membership probabilities (γ), conditional item-response probabilities (ρ), variance ratio, sample size ratio, and distribution types for a 2-class model. Overall, entropy R2 and I-index showed the best accuracy and standard error, along with the smallest confidence interval widths. Results showed that I-index only performed well for a few cases.
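Of the three compared effect sizes, Cohen's d has the simplest closed form; a standard textbook sketch (not the study's code):

```python
import math

def cohens_d(x, y):
    """Cohen's d: standardized mean difference between two samples,
    using the pooled sample standard deviation."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd
```

In the LCA setting, x and y would be the indicator responses (or posterior-based scores) of the two latent classes, giving a separation measure to set against entropy R2.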
APA, Harvard, Vancouver, ISO, and other styles
34

Basile, Chiara <1982&gt. "Entropy and Semantics: Textual Information Extraction through Statistical Methods." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2010. http://amsdottorato.unibo.it/3224/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Raquepas, Renaud. "Outils et résultats dans l'étude de la production d'entropie." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM052.

Full text
Abstract:
Cette thèse consiste principalement en une collection d'articles portant sur l'étude du comportement asymptotique à temps longs de la production d'entropie et sur des problèmes de mélanges qui y sont associés. Après avoir introduit les notions centrales, on présente, dans l'ordre~: une étude de la limite du bruit disparaissant de principes de grandes déviations locaux pour des fonctionnelles de production d'entropie dans le cadre des diffusions non dégénérées; un résultat de mélange exponentiel pour des équations différentielles stochastiques avec bruit brownien dégénéré; un résultat de mélange exponentiel pour des équations différentielles stochastiques avec bruit poissonien dégénéré; une étude des comportements à temps longs d'ensembles de marcheurs fermioniques interagissant avec un environnement structuré; une étude des courants et de la production d'entropie dans ce même cadre
This thesis consists mainly of a collection of papers on the study of the large-time asymptotics of entropy production and associated mixing problems. After having introduced the central notions, we present, in order: a study of the vanishing-noise limit for the large deviations of entropy production functionals in nondegenerate diffusions; an exponential mixing result for stochastic differential equations driven by degenerate Brownian noise; an exponential mixing result for stochastic differential equations driven by degenerate Poissonian noise; a study of the large-time behaviour of ensembles of fermionic walkers interacting with a structured environment; a study of currents and entropy production in a similar framework
APA, Harvard, Vancouver, ISO, and other styles
36

Failla, Roberto. "Random growth of interfaces: Statistical analysis of single columns and detection of critical events." Thesis, University of North Texas, 2004. https://digital.library.unt.edu/ark:/67531/metadc4550/.

Full text
Abstract:
The dynamics of growth and formation of surfaces and interfaces is becoming very important for the understanding of the origin and behavior of a wide range of natural and industrial dynamical processes. The first part of the dissertation focuses on the field of random growth of surfaces and interfaces, which finds application in physics, geology, biology, economics, and engineering, among others. This part studies the random growth of surfaces from the perspective of a single column, namely, the fluctuation of the column height around its mean value, which is depicted as being subordinated to a standard fluctuation-dissipation process with friction g. It is argued that the main properties of Kardar-Parisi-Zhang theory are derived by identifying the distribution of return times to y(0) = 0, which is a truncated inverse power law, with the distribution of subordination times. The agreement of the theoretical prediction with the numerical treatment of the model of ballistic deposition is remarkably good, in spite of the finite-size effects affecting this model. The second part deals with the efficiency of diffusion entropy analysis (DEA) when applied to the study of stromatolites. Here it is shown that this tool can be used with confidence for the detection of complexity. The connection between the two studies is established by the use of the DEA itself. In fact, in both analyses, that is, the random growth of interfaces and the study of stromatolites, the method of diffusion entropy is able to detect the real scaling of the system; namely, the scaling of the process is determined by genuinely random events, also called critical events.
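Diffusion entropy analysis can be sketched in miniature: treat window sums of the series as "diffusion" trajectories, histogram them, and track the Shannon entropy S(t) as the window length t grows; the scaling exponent delta is then the slope of S against ln t. The binning choices below are our own, not those of the dissertation:

```python
import math

def diffusion_entropy(x, window):
    """Shannon entropy of the histogram of window sums of x.
    In DEA one computes this for many window lengths t and fits
    S(t) = A + delta * ln(t) to read off the scaling delta."""
    sums = [sum(x[i:i + window]) for i in range(len(x) - window + 1)]
    lo, hi = min(sums), max(sums)
    nbins = max(1, int(math.sqrt(len(sums))))
    width = (hi - lo) / nbins or 1.0  # guard against a degenerate histogram
    counts = [0] * nbins
    for s in sums:
        counts[min(nbins - 1, int((s - lo) / width))] += 1
    n = len(sums)
    return -sum(c / n * math.log(c / n) for c in counts if c)
```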
APA, Harvard, Vancouver, ISO, and other styles
37

Mikolov, Tomáš. "Statistické jazykové modely založené na neuronových sítích." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-261268.

Full text
Abstract:
Statistical language models are an important part of many successful applications, such as automatic speech recognition and machine translation (a well-known example being Google Translate). Traditional techniques for estimating these models are based on so-called N-grams. Despite the known shortcomings of these techniques and the enormous effort of research groups across many fields (speech recognition, machine translation, neuroscience, artificial intelligence, natural language processing, data compression, psychology, etc.), N-grams have essentially remained the most successful technique. The goal of this thesis is to present several architectures of language models based on neural networks. Although these models are computationally more demanding than N-gram models, the techniques developed in this work make their efficient use in real applications possible. The achieved reduction of speech recognition error rates over the best N-gram models reaches 20%. A model based on a recurrent neural network achieves the best published results on a well-known data set (the Penn Treebank).
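The N-gram baseline the thesis improves upon can be sketched in a few lines; the add-alpha smoothing and all names below are illustrative choices, not the thesis's setup:

```python
import math
from collections import Counter

def bigram_logprob(train, test, vocab_size, alpha=1.0):
    """Log-probability of a test token sequence under an add-alpha
    smoothed bigram model estimated from the training sequence."""
    bigrams = Counter(zip(train, train[1:]))
    unigrams = Counter(train[:-1])  # counts of bigram left contexts
    lp = 0.0
    for a, b in zip(test, test[1:]):
        lp += math.log((bigrams[(a, b)] + alpha) /
                       (unigrams[a] + alpha * vocab_size))
    return lp
```

Perplexity, the standard comparison metric, is exp(-lp / number_of_predicted_tokens); neural models win by assigning higher probability (lower perplexity) to held-out text.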
APA, Harvard, Vancouver, ISO, and other styles
38

Amitai, Shahar. "Statistical mechanics, entropy and macroscopic properties of granular and porous materials." Thesis, Imperial College London, 2017. http://hdl.handle.net/10044/1/54766.

Full text
Abstract:
Granular materials are an intriguing phase of matter. They can support stresses like a solid, but can also flow down a slope like a liquid. They compress under tapping, but dilate under shearing. Granular materials have fascinated the research community for centuries, but are still not fully understood. A granular statistical mechanical formalism was introduced a quarter of a century ago. However, it is still very much a theory in evolution. In this thesis, we present a few developments of the theory, which make it more rigorous and testable. We adjust the original formalism by replacing the volume function with a more suitable connectivity function. We identify the structural degrees of freedom as the edges of a spanning tree of the contact network graph. We extend the formalism to include constraints on these degrees of freedom and correlations between them. We combine this formalism with the better-established stress ensemble, and then derive an equipartition principle and an equation of state, relating the macroscopic volume and boundary stress to the analogue of the temperature, the contactivity. This makes the theory testable by macroscopic measurements. We then address two application-orientated problems, involving the porous media made by consolidated granular materials. First, we present a scheme to design porous fuel cell electrodes such that the three-phase boundary (TPB) is maximised. These electrodes are made of sintered bi-disperse powders, and the longer their TPB the more efficient the fuel cell. Using a systematic analysis for a commonly used set of given constraints, we find optimal design parameters that yield a TPB three times longer than that of the conventional design under the same constraints. Then, we focus on transport in the pore space of such materials. We study the diffusion of finite-size particles in porous media, and what makes it anomalous.
Having pinned down the causes of sub-diffusion, we develop a continuous-time random-walk-based model that correctly predicts the anomaly parameter.
APA, Harvard, Vancouver, ISO, and other styles
39

Durão, Lisan Marcos Marques 1991. "Entropia estatística de sistemas abertos." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276945.

Full text
Abstract:
Advisor: Amir Ordacgi Caldeira
Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin
Abstract: The main goal of our project is the characterization of open quantum systems by means of a statistical entropy. Entropy is a fundamental physical quantity and is usually interpreted as the lack of knowledge about the state of the system, which means it is an informational metric. The statistical mechanical program for non-isolated quantum systems consists in creating a Hamiltonian for the "universe" within the so-called system-plus-reservoir approach, from which one can evaluate the reduced density operator of the system of interest through the partial trace of the full density operator with respect to the reservoir coordinates. Notice that in so doing we are tacitly assuming that the equilibrium state of the whole universe can be described by a Gibbsian distribution. Alternatively, one can evaluate the Von Neumann entropy for the whole universe, from which the above-mentioned full density operator can be obtained via the maximum entropy principle, and then, by the same partial tracing procedure, obtain the desired reduced density operator. Now we can ask ourselves what happens if one insists on obtaining a density operator and the thermodynamical properties of the system of interest directly from a maximum entropy principle. Such a task can require the use of other forms of entropy, not necessarily extensive. Starting from the system-plus-reservoir approach, we study the entropy and mean values of a Brownian particle coupled to a harmonic reservoir. Using this as the starting point for the maximization of non-extensive "parameter-depending" entropies, we aim at finding a correspondence between the constants arising from our model and the adjustable parameters of some well-known generalized entropies which may turn out to be more appropriate to our needs.
Master's
Physics
Master in Physics
APA, Harvard, Vancouver, ISO, and other styles
40

Van, Der Merwe Ruan Henry. "Triplet entropy loss: improving the generalisation of short speech language identification systems." Master's thesis, Faculty of Science, 2021. http://hdl.handle.net/11427/33953.

Full text
Abstract:
Spoken language identification systems form an integral part of many speech recognition tools today. Over the years many techniques have been used to identify the language spoken, given just the audio input, but in recent years the trend has been to use end-to-end deep learning systems. Most of these techniques involve converting the audio signal into a spectrogram which can be fed into a Convolutional Neural Network, which can then predict the spoken language. This technique performs very well when the data fed to the model originates from the same domain as the training examples, but as soon as the input comes from a different domain these systems tend to perform poorly. An example would be a system trained on WhatsApp recordings but put into production in an environment where it receives recordings from a phone line. The research presented investigates several methods to improve the generalisation of language identification systems to new speakers and to new domains. These methods involve spectral augmentation, where spectrograms are masked in frequency or time bands during training, and CNN architectures pre-trained on the ImageNet dataset. The research also introduces the novel Triplet Entropy Loss training method, which involves training a network simultaneously using Cross Entropy and Triplet loss. Several tests were run with three different CNN architectures to investigate what effect all three of these methods have on the generalisation of an LID system. The tests were done in a South African context on six languages, namely Afrikaans, English, Sepedi, Setswana, Xhosa and Zulu. The two domains tested were data from the NCHLT speech corpus, used as the training domain, and the Lwazi speech corpus, used as the unseen domain. It was found that all three methods improved the generalisation of the models, though not significantly.
Even though the models trained using Triplet Entropy Loss showed a better understanding of the languages and higher accuracies, it appears as though the models still memorise word patterns present in the spectrograms rather than learning the finer nuances of a language. The research shows that Triplet Entropy Loss has great potential and should be investigated further, not only in language identification tasks but in any classification task.
APA, Harvard, Vancouver, ISO, and other styles
41

Guo, Weiyu. "Implementing the principle of maximum entropy in option pricing /." free to MU campus, to others for purchase, 1999. http://wwwlib.umi.com/cr/mo/fullcit?p9946259.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

LeBlanc, Raymond. "The maximum entropy principle as a basis for statistical models in epidemiology /." Thesis, McGill University, 1990. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=74600.

Full text
Abstract:
We propose an approach for the construction of statistical models based on the maximum entropy principle, in conjunction with a constructive method relying on a precise description of the individual contribution of each possible unit of observation of the population. This procedure is applied to the analysis of 2 x 2 tables, ubiquitous in biostatistics. This approach provides a new perspective on and understanding of the fundamental nature of logistic regression, of Cox's proportional hazards model and of the noncentral hypergeometric model. Applying this method to the analysis of the odds ratio produces new distributions for this random variable and gives new means of estimating the odds ratio by confidence intervals. We present basic properties of these distributions and compare results with other methods.
Finally, this constructive approach that proceeds from the lower level of the individual contribution of the experimental units to the global level of the population is applied to sample size determination for comparative studies when, in the compared groups, there is attrition due to noncompliance to the specific regimen. This attrition reduces the apparent treatment effect in the analysis. This presentation constitutes a foundation for a more general and elegant solution to the problem.
APA, Harvard, Vancouver, ISO, and other styles
43

Chan, Oscar. "Prosodic features for a maximum entropy language model." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0244.

Full text
Abstract:
A statistical language model attempts to characterise the patterns present in a natural language as a probability distribution defined over word sequences. Typically, they are trained using word co-occurrence statistics from a large sample of text. In some language modelling applications, such as automatic speech recognition (ASR), the availability of acoustic data provides an additional source of knowledge. This contains, amongst other things, the melodic and rhythmic aspects of speech referred to as prosody. Although prosody has been found to be an important factor in human speech recognition, its use in ASR has been limited. The goal of this research is to investigate how prosodic information can be employed to improve the language modelling component of a continuous speech recognition system. Because prosodic features are largely suprasegmental, operating over units larger than the phonetic segment, the language model is an appropriate place to incorporate such information. The prosodic features and standard language model features are combined under the maximum entropy framework, which provides an elegant solution to modelling information obtained from multiple, differing knowledge sources. We derive features for the model based on perceptually transcribed Tones and Break Indices (ToBI) labels, and analyse their contribution to the word recognition task. While ToBI has a solid foundation in linguistic theory, the need for human transcribers conflicts with the statistical model's requirement for a large quantity of training data. We therefore also examine the applicability of features which can be automatically extracted from the speech signal. We develop representations of an utterance's prosodic context using fundamental frequency, energy and duration features, which can be directly incorporated into the model without the need for manual labelling. 
Dimensionality reduction techniques are also explored with the aim of reducing the computational costs associated with training a maximum entropy model. Experiments on a prosodically transcribed corpus show that small but statistically significant reductions to perplexity and word error rates can be obtained by using both manually transcribed and automatically extracted features.
APA, Harvard, Vancouver, ISO, and other styles
44

Rietsch, Théo. "Théorie des valeurs extrêmes et applications en environnement." Phd thesis, Université de Strasbourg, 2013. http://tel.archives-ouvertes.fr/tel-00876217.

Full text
Abstract:
In this thesis we make several contributions, both theoretical and applied, to extreme value theory. The first two chapters of the thesis address crucial questions in climatology. The first question is whether a change in the behaviour of temperature extremes can be detected between the beginning of the century and today. To this end we propose to use the Kullback-Leibler divergence, which we adapt to the context of extremes. Theoretical results and simulations validate the proposed approach, whose performance is then illustrated on a real data set. The second question combines neural networks with extreme value theory to determine where to add (resp. remove) stations in a network so as to gain (resp. lose) the most (resp. least) information about the behaviour of extremes. An algorithm, Query By Committee, drawn from machine learning theory, is developed and then applied to a real data set. The advantages, drawbacks and limits of this approach are thereby brought out. The last chapter of the thesis deals with a more theoretical subject, namely the robust estimation of the tail parameter of a Weibull-type distribution in the presence of random covariates. We propose a robust estimator using a criterion that minimises the divergence between two densities, and study its asymptotic properties. Simulations illustrate the finite-sample performance of the estimator. This thesis opens up many perspectives, a non-exhaustive list of which is given in the conclusion.
APA, Harvard, Vancouver, ISO, and other styles
45

Atas, Yasar Yilmaz. "Quelques aspects du chaos quantique dans les systèmes de N-corps en interaction : chaînes de spins quantiques et matrices aléatoires." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112221/document.

Full text
Abstract:
My thesis is devoted to the study of some aspects of many-body quantum interacting systems, with a particular focus on quantum spin chains. I have studied several aspects of quantum spin chains, from both numerical and analytical perspectives. I especially addressed questions related to the structure of eigenfunctions, the level densities and the spectral properties of spin chain Hamiltonians. In this thesis, I first present the basic numerical techniques used for the computation of eigenvalues and eigenvectors of spin chain Hamiltonians. Level densities of quantum models are important and simple quantities that allow one to characterize the spectral properties of systems with a large number of degrees of freedom. It is well known that the level densities of most integrable models tend to a Gaussian in the thermodynamic limit. However, it appears that in certain limits of the coupling of the spin chain to the magnetic field, and for a finite number of spins on the chain, one observes peaks in the level density. I will show that knowledge of the first two moments of the Hamiltonian in the degenerate subspace associated with each peak gives a good approximation to the level density. Next, I study the statistical properties of the eigenvalues of spin chain Hamiltonians. One of the main achievements in the study of the spectral statistics of complex quantum systems concerns the universal behaviour of fluctuation measures such as the distribution of the spacing between two consecutive eigenvalues. These fluctuations are very well described by the theory of random matrices, but the comparison with the theoretical prediction generally requires a transformation of the spectrum of the Hamiltonian called the unfolding procedure. For many-body quantum systems, the size of the Hilbert space generally grows exponentially with the number of particles, leading to a lack of data for a proper statistical study.
These constraints have led to the introduction of a new measure, free of the unfolding procedure, based on the ratio of consecutive level spacings rather than the spacings themselves. This measure is independent of the local level density. Following the Wigner surmise for the computation of the level spacing distribution, I obtained approximations for the distribution of the ratio of consecutive level spacings by analyzing random 3x3 matrices for the three canonical ensembles. The predictions are compared with numerical results, showing excellent agreement. Finally, I investigate the eigenfunction statistics of some canonical spin-chain Hamiltonians. Eigenfunctions, together with the energy spectrum, are the fundamental objects of quantum systems: their structure is quite complicated and not well understood. Due to the exponential growth of the size of the Hilbert space, the study of eigenfunctions is a very difficult task from both analytical and numerical points of view. I demonstrate that the ground-state eigenfunctions of all the canonical spin chain models studied are multifractal, by numerically computing the Rényi entropy and extrapolating it to obtain the multifractal dimensions.
APA, Harvard, Vancouver, ISO, and other styles
46

Zander, Claudia. "Information measures, entanglement and quantum evolution." Diss., University of Pretoria, 2007. http://upetd.up.ac.za/thesis/available/etd-04212008-090506.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Wang, Chao. "Exploiting non-redundant local patterns and probabilistic models for analyzing structured and semi-structured data." Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1199284713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Saboe, Michael S. "A software technology transition entropy based engineering model." Monterey, Calif. : Naval Postgraduate School, 2002. http://edocs.nps.edu/npspubs/scholarly/dissert/2002/Mar/02Mar%5FSaboe%5FPhD.pdf.

Full text
Abstract:
Dissertation (Ph.D. in Software Engineering)--Naval Postgraduate School, March 2002.
Dissertation supervisor: Luqi. "March 2002." Description based on title screen as viewed on July 15, 2010. Author(s) subject terms: Software Engineering, Technology Transfer, Information Theory, Communication Theory, Statistical Mechanics, Dynamical Systems, Control Theory, Learning Curves, Entropy, Information Temperature, Temperature of Software (o Saboe), Technology Transfer Dynamics, Research Management, Diffusion of Innovation, Project Management, Physics of Software. Includes bibliographical references (p. 457-489). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
49

Moruzzi, Lara. "Dal formalismo Hamiltoniano alla meccanica statistica." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18212/.

Full text
Abstract:
The aim of this thesis is to describe the principles of statistical mechanics starting from results of Hamiltonian mechanics. The main task is to formulate reasonable definitions of the macroscopic variables of thermodynamics (such as entropy and temperature) starting from the microscopic ones (such as the Hamiltonian flow). Particular attention is given to the study of isolated systems (for which the microcanonical statistics is defined) and of systems interacting with a reservoir (for which the canonical statistics is defined). Some simple examples are then analysed.
APA, Harvard, Vancouver, ISO, and other styles
50

Simões, Lucas Silva. "Emergent Collective Properties in Societies of Neural Networks." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-17092018-154433/.

Full text
Abstract:
This project deals with the study of the social learning dynamics of agents in a society. For that we employ techniques from statistical mechanics, machine learning and probability theory. Agents interact in pairs by exchanging for/against opinions about issues, using an algorithm constrained by the available information. Making use of a maximum entropy analysis, one can describe the interacting pair as a dynamics along the gradient of the logarithm of the evidence. This permits introducing energy-like quantities and approximate global Hamiltonians. We test different hypotheses, keeping in mind the limitations and advantages of each one. Knowledge of the expected value of the Hamiltonian is relevant information about the state of the society, inducing a canonical distribution by maximum entropy. The results are interpreted with the usual tools of statistical mechanics and thermodynamics. Some of the questions we discuss are: the existence of phase transitions separating ordered and disordered phases depending on the society parameters; how the issue being discussed by the agents influences the outcomes of the discussion, and how this reflects on the overall organization of the group; and the possible different interactions between opposing parties, and to what extent disagreement affects the cohesiveness of the society.
APA, Harvard, Vancouver, ISO, and other styles
