Dissertations / Theses on the topic 'Hidden Variables'

Consult the top 30 dissertations / theses for your research on the topic 'Hidden Variables.'

1

Ramachandran, Sowmya. "Theory refinement of Bayesian networks with hidden variables /." Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

2

Johansson, Lars-Göran. "Understanding quantum mechanics : a realist interpretation without hidden variables." Doctoral thesis, Stockholms universitet, Filosofiska institutionen, 1992. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-81416.

3

Hallett, Joseph J. "Hidden Type Variables and Conditional Extension for More Expressive Generic Programs." Boston University Computer Science Department, 2007. https://hdl.handle.net/2144/1689.

Abstract:
Generic object-oriented programming languages combine parametric polymorphism and nominal subtype polymorphism, thereby providing better data abstraction, greater code reuse, and fewer run-time errors. However, most generic object-oriented languages provide a straightforward combination of the two kinds of polymorphism, which prevents the expression of advanced type relationships. Furthermore, most generic object-oriented languages have a type-erasure semantics: instantiations of type parameters are not available at run time, and thus may not be used by type-dependent operations. This dissertation shows that two features, which allow the expression of many advanced type relationships, can be added to a generic object-oriented programming language without type erasure: 1. type variables that are not parameters of the class that declares them, and 2. extension that is dependent on the satisfiability of one or more constraints. We refer to the first feature as hidden type variables and the second feature as conditional extension. Hidden type variables allow: covariance and contravariance without variance annotations or special type arguments such as wildcards; a single type to extend, and inherit methods from, infinitely many instantiations of another type; a limited capacity to augment the set of superclasses of a class after that class is defined; and the omission of redundant type arguments. Conditional extension allows the properties of a collection type to be dependent on the properties of its element type. This dissertation describes the semantics and implementation of hidden type variables and conditional extension. A sound type system is presented. In addition, a sound and terminating type checking algorithm is presented. Although designed for the Fortress programming language, hidden type variables and conditional extension can be incorporated into other generic object-oriented languages. Many of the same problems would arise, and solutions analogous to those we present would apply.
4

Paneru, Dilip. "Experimental Tests of Multiplicative Bell Inequalities." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/41621.

Abstract:
This thesis is a synthesis of theoretical and experimental work performed in the area of quantum foundations, particularly on quantum correlations and experimental tests of multiplicative Bell inequalities. First, we present a comprehensive theoretical treatment of the foundations of quantum mechanics, focusing on the puzzling concepts of quantum entanglement and hidden variable theories. Specifically, we give a broad overview of different classes of hidden variable theories, such as local, crypto-nonlocal, contextual and non-local theories, along with several Bell-like inequalities for these theories, providing proofs based on quantum mechanics for the falsification of some of them. Second, we present a body of experimental and theoretical work on a new class of Bell inequalities, the multiplicative Bell inequalities. We experimentally report Bell parameters close to the Tsirelson (quantum) limit, up to a large number of measurement devices $(n)$, and compare the results with a particular deterministic strategy. We also obtain classical bounds for some $n$, and report the experimental violation of these classical limits. We theoretically derive new, richer bounds on the CHSH inequality (named after John Clauser, Michael Horne, Abner Shimony and Richard Holt) and on the multiplicative Bell parameter for $n=2$, based on the principle of "relativistic independence", and experimentally observe the distribution of Bell parameters as predicted by these bounds.
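The CHSH test discussed in this abstract is easy to illustrate numerically. The sketch below is a generic illustration, not code from the thesis; the cosine correlation is one standard convention for an ideally entangled pair. Evaluating the CHSH combination at the optimal measurement angles recovers the Tsirelson bound 2·sqrt(2):

```python
import math

def chsh(E, a, a2, b, b2):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Quantum correlation for an ideal entangled pair, in one common convention.
quantum = lambda x, y: math.cos(x - y)

# Optimal angles: a=0, a'=pi/2, b=pi/4, b'=3*pi/4.
S = chsh(quantum, 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(round(S, 3))  # 2.828 -- the Tsirelson bound 2*sqrt(2)
# Any local hidden variable theory must satisfy |S| <= 2.
```

The gap between 2 and 2·sqrt(2) is exactly the window in which the experiments described above operate.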
5

Pope, Damian. "Contrasting quantum mechanics to local hidden variables theories in quantum optics and quantum information science /." [St. Lucia, Qld.], 2002. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe16765.pdf.

6

Touron, Augustin. "Modélisation multivariée de variables météorologiques." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS264/document.

Abstract:
Renewable energy production and electricity consumption both depend heavily on the weather: temperature, precipitation, wind, solar radiation... Thus, impact studies on the supply/demand equilibrium may require a weather generator, that is, a model capable of quickly simulating long, realistic time series of weather variables at a daily time step. One possible approach uses hidden Markov models: we assume that the evolution of the weather variables is governed by a latent variable that can be interpreted as a weather type. Using this approach, we propose a model able to simulate temperature, wind speed and precipitation simultaneously, accounting for the non-stationarities specific to weather variables. We also study some theoretical properties of cyclo-stationary hidden Markov models: we give simple conditions ensuring their identifiability and the strong consistency of the maximum likelihood estimator. We further establish this property of the MLE for hidden Markov models that include long-term polynomial trends.
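The weather-type idea behind such generators can be sketched with a toy two-state hidden Markov model. The states, transition probabilities, and Gaussian emissions below are invented for illustration and are not the thesis's fitted model:

```python
import random

# Illustrative two-state HMM weather generator: a latent "weather type"
# follows a Markov chain; daily temperature is Gaussian given the state.
TRANSITION = {0: [0.8, 0.2], 1: [0.3, 0.7]}   # hypothetical probabilities
EMISSION = {0: (20.0, 2.0), 1: (10.0, 3.0)}   # hypothetical (mean, std) per state

def simulate(n_days, seed=0):
    rng = random.Random(seed)
    state, series = 0, []
    for _ in range(n_days):
        # Draw the next weather type, then emit a temperature given the type.
        state = 0 if rng.random() < TRANSITION[state][0] else 1
        mean, std = EMISSION[state]
        series.append(rng.gauss(mean, std))
    return series

temps = simulate(365)
```

A real generator would add seasonality (the cyclo-stationarity studied in the thesis) and joint emissions for wind and precipitation, but the latent-state mechanism is the same.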
7

Arribas, Gil Ana. "Estimation dans des modèles à variables cachées : alignement des séquences biologiques et modèles d'évolution." Paris 11, 2007. http://www.theses.fr/2007PA112054.

Abstract:
This thesis is devoted to parameter estimation in models for biological sequence alignment. These models are constructed by considering an evolution process on the sequences. In the case of two sequences evolving under the classical evolution process, the alignment model is called a pair hidden Markov model (pair-HMM). Observations in a pair-HMM are formed by the pair of sequences to be aligned, and the hidden alignment is a Markov chain. From a theoretical point of view, we provide a rigorous framework for these models and study the consistency of the maximum likelihood and Bayesian estimators. From the point of view of applications, we are interested in the detection of conserved motifs in the sequences. To do this we present an evolution process that allows different evolutionary behaviour at different places along the sequence; the alignment under this process still fits the pair-HMM. We propose estimation algorithms for alignments and evolution parameters suited to the complexity of the model. Finally, we turn to multiple alignment (more than two sequences). The classical evolution process then yields a more complex hidden variable model for the alignment, in which the phylogenetic relationships between the sequences must be taken into account. We provide a theoretical framework for this model and study, as in the pairwise case, the consistency of the estimators.
8

Harms, Heather. "Hidden Variable." Master's thesis, University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5289.

Abstract:
Hidden Variable is a novel that blends linear storytelling with the novel-in-stories form. It poses questions about the nature of identity as well as the feasibility of personal power, particularly with respect to disorders of the mind. Darla Pierson, the novel's protagonist, is a woman in crisis. She is steeped in self-loathing brought on by the knowledge that she has, in effect, become her dead father—a genius with an epic libido, habitually using and discarding people. Her father has another habit that Darla doesn't share: being struck by lightning. After the second strike kills him, Darla makes a conscious attempt to recreate herself. But soon the new Darla sinks into depression, and her act begins to crumble, damaging her and those around her. Throughout the years, she and her family members experience periodic clashes with nature, never fully realizing that sometimes the most powerful, most devastating opponent comes not from without, but within.
M.F.A.
Masters
English
Arts and Humanities
Creative Writing
9

Dubarry, Cyrille. "Méthodes de lissage et d'estimation dans des modèles à variables latentes par des méthodes de Monte-Carlo séquentielles." Phd thesis, Institut National des Télécommunications, 2012. http://tel.archives-ouvertes.fr/tel-00762243.

Abstract:
Hidden Markov models, and more generally Feynman-Kac models, are now very widely used. They make it possible to model a great diversity of time series (in finance, biology, signal processing, ...). The growing complexity of these models has led to the development of approximations via various Monte Carlo methods, including Markov Chain Monte Carlo (MCMC) and Sequential Monte Carlo (SMC). SMC methods applied to particle filtering and smoothing are the subject of this thesis. They consist in approximating the distribution of interest by means of a population of particles defined sequentially. Various algorithms have already been developed and studied in the literature. We refine some of these results in the case of Forward Filtering Backward Smoothing and Forward Filtering Backward Simulation, through exponential deviation inequalities and non-asymptotic bounds on the mean error. We also propose a new smoothing algorithm that improves a particle population by MCMC iterations and allows the variance of the estimator to be estimated without any additional simulation. Part of the work presented in this thesis also concerns the possibilities of parallelizing the computation of particle estimators, and we propose different interactions between several particle populations. Finally, we illustrate the use of hidden Markov chains in the modelling of financial data by developing an algorithm that uses Expectation-Maximization to calibrate the parameters of the multiscale exponential Ornstein-Uhlenbeck model.
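The particle (SMC) machinery this thesis builds on can be illustrated with a minimal bootstrap filter on a toy linear-Gaussian state-space model. This is a generic sketch under assumed dynamics x_t = 0.9 x_{t-1} + noise, y_t = x_t + noise, not the thesis's refined smoothing algorithms:

```python
import math
import random

def bootstrap_filter(observations, n_particles=500, seed=0):
    """Minimal bootstrap particle filter for a toy linear-Gaussian model.

    Returns the filtered mean of the hidden state at each time step.
    """
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # Propagate particles through the (assumed) state transition.
        particles = [0.9 * x + rng.gauss(0.0, 1.0) for x in particles]
        # Weight each particle by the Gaussian observation likelihood.
        weights = [math.exp(-0.5 * (y - x) ** 2) for x in particles]
        total = sum(weights)
        means.append(sum(w * x for w, x in zip(weights, particles)) / total)
        # Multinomial resampling: duplicate likely particles, drop unlikely ones.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means

obs = [0.5, 1.0, 1.2, 0.8]
print(bootstrap_filter(obs))
```

The smoothing algorithms studied in the thesis (FFBSm, FFBSi) reuse exactly these weighted populations, running backward passes over them.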
10

Besson, Virgile. "L’interprétation causale de la mécanique quantique : biographie d’un programme de recherche minoritaire (1951–1964)." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1014/document.

Abstract:
The Causal Interpretation of Quantum Mechanics was first described by historians as a consequence of the growing influence of Marxism among physicists in Western countries. Indeed, during the 1950s, the core of the group of physicists involved in the causal program around Jean-Pierre Vigier and Louis de Broglie at the Institut Henri Poincaré was mainly made up of members or sympathizers of the PCF, the French Communist Party. Their work was strongly influenced by Soviet criticism of the mainstream interpretation of quantum mechanics, the so-called Copenhagen interpretation. Among other things, Vigier criticized the pragmatism that prevailed in postwar physics and thought that the lack of philosophical reflection was in large part responsible for the crisis in fundamental physics, such as the problem of renormalization. The group also brought the question of the theory's interpretation into the PCF itself, sparking a controversy within the party over the relationship between Marxism and science. The theory was also part of a broader research program linked to contemporary questions in physics. This point is often forgotten, which leads to the erroneous conclusion that the motivation of the IHP group was purely ideological and, therefore, that its activity lay outside science. As early as 1957, in collaboration with Japanese physicists, the group proposed a theory of elementary particles and a classification scheme, at a time when a consensus theory was still lacking.
11

SANTOS, GEAN R. dos. "Algoritmo de colônia de formigas e redes neurais artificiais aplicados na monitoração e detecção de falhas em centrais nucleares." reponame:Repositório Institucional do IPEN, 2016. http://repositorio.ipen.br:8080/xmlui/handle/123456789/26798.

Abstract:
A recurring challenge in production processes is the development of monitoring and diagnostic systems. Such systems help detect unexpected changes and interruptions, preventing losses and mitigating risks. Artificial Neural Networks (ANNs) have been widely used in the creation of monitoring systems. Usually, the ANNs used to solve this kind of problem are created taking into account only parameters such as the number of inputs, outputs and neurons in the hidden layers. The resulting networks therefore generally have a configuration with full connectivity between the neurons of one layer and those of the next, with no improvement to their topology. This work uses the Ant Colony Optimization (ACO) algorithm to create optimized neural networks. The ACO search algorithm uses the error back-propagation technique to optimize the topology of the neural network, suggesting the best connections between neurons. The resulting ANN was applied to monitor variables of the IEA-R1 research reactor at IPEN. The results show that the developed algorithm is able to improve the performance of the model that estimates the values of the reactor variables. In tests with different numbers of neurons in the hidden layer, using the mean squared error, the mean absolute error and the correlation coefficient for comparison, the performance of the optimized ANN was equal to or better than that of the traditional one.
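The pheromone-update mechanism at the heart of ant colony optimization can be sketched on a toy bit-selection problem. This is illustrative only; the fitness function and parameters below are invented stand-ins, not the dissertation's network-topology search:

```python
import random

def aco_onemax(n_bits=12, n_ants=20, n_iters=40, rho=0.3, seed=1):
    """Toy ant colony optimization over binary choices.

    Each bit stands for a candidate "connection" that may be kept or dropped;
    the fitness here is simply the number of bits set (a stand-in for, e.g.,
    validation accuracy of a pruned network).
    """
    rng = random.Random(seed)
    pheromone = [0.5] * n_bits          # selection probability per component
    best, best_fit = None, -1
    for _ in range(n_iters):
        for _ in range(n_ants):
            # Each ant samples a solution guided by the pheromone trail.
            sol = [1 if rng.random() < p else 0 for p in pheromone]
            fit = sum(sol)
            if fit > best_fit:
                best, best_fit = sol, fit
        # Evaporate, then reinforce the components of the best solution.
        pheromone = [(1 - rho) * p + rho * b for p, b in zip(pheromone, best)]
    return best, best_fit

best, fit = aco_onemax()
print(fit)
```

In the dissertation's setting, the sampled bits would correspond to inter-neuron connections and the fitness to the trained network's estimation error.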
Dissertação (Mestrado em Tecnologia Nuclear)
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
12

Budroni, Costantino [Verfasser]. "Temporal quantum correlations and hidden variable models / Costantino Budroni." Siegen : Universitätsbibliothek der Universität Siegen, 2014. http://d-nb.info/106415042X/34.

13

Koo, Terry 1981. "Parse reranking with WordNet using a hidden variable model." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28431.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 79-80).
We present a new parse reranking algorithm that extends work in (Michael Collins and Terry Koo 2004) by incorporating WordNet (Miller et al. 1993) word senses. Instead of attempting explicit word sense disambiguation, we retain word sense ambiguity in a hidden variable model. We define a probability distribution over candidate parses and word sense assignments with a feature-based log-linear model, and we employ belief propagation to obtain an efficient implementation. Our main results are a relative improvement of approximately 0.97% over the baseline parser in development testing, which translated into an approximately 0.5% improvement in final testing. We also performed experiments in which our reranker was appended to the (Michael Collins and Terry Koo 2004) boosting reranker. The cascaded system achieved a development set improvement of approximately 0.15% over the boosting reranker by itself, but this gain did not carry over into final testing.
by Terry Koo.
M.Eng.
14

Grave, Edouard. "A Markovian approach to distributional semantics." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2014. http://tel.archives-ouvertes.fr/tel-00940575.

Abstract:
This thesis, which is organized in two independent parts, presents work on distributional semantics and on variable selection. In the first part, we introduce a new method for learning good word representations using large quantities of unlabeled sentences. The method is based on a probabilistic model of sentence, using a hidden Markov model and a syntactic dependency tree. The latent variables, which correspond to the nodes of the dependency tree, aim at capturing the meanings of the words. We develop an efficient algorithm to perform inference and learning in those models, based on online EM and approximate message passing. We then evaluate our models on intrinsic tasks such as predicting human similarity judgements or word categorization, and on two extrinsic tasks: named entity recognition and supersense tagging. In the second part, we introduce, in the context of linear models, a new penalty function to perform variable selection in the case of highly correlated predictors. This penalty, called the trace Lasso, uses the trace norm of the selected predictors, which is a convex surrogate of their rank, as the criterion of model complexity. The trace Lasso interpolates between the $\ell_1$-norm and $\ell_2$-norm. In particular, it is equal to the $\ell_1$-norm if all predictors are orthogonal and to the $\ell_2$-norm if all predictors are equal. We propose two algorithms to compute the solution of least-squares regression regularized by the trace Lasso, and perform experiments on synthetic datasets to illustrate the behavior of the trace Lasso.
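The interpolation property of the trace Lasso described here can be checked directly: the penalty is the nuclear norm of X·Diag(w), which reduces to the $\ell_1$-norm of w when the predictors are orthonormal and to the $\ell_2$-norm when they are identical. A small numerical check, assuming NumPy is available:

```python
import numpy as np

def trace_lasso(X, w):
    """Trace Lasso penalty: nuclear (trace) norm of X @ diag(w)."""
    return np.linalg.norm(X @ np.diag(w), ord="nuc")

rng = np.random.default_rng(0)
w = rng.normal(size=4)

# Orthonormal predictors: the penalty equals the l1-norm of w.
Q, _ = np.linalg.qr(rng.normal(size=(6, 4)))
print(np.isclose(trace_lasso(Q, w), np.abs(w).sum()))        # True

# Identical unit-norm predictors: the penalty equals the l2-norm of w.
x = rng.normal(size=(6, 1))
x /= np.linalg.norm(x)
X_eq = np.tile(x, (1, 4))
print(np.isclose(trace_lasso(X_eq, w), np.linalg.norm(w)))   # True
```

Between these extremes, the penalty adapts to the correlation structure of the predictors, which is the motivation given in the abstract.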
15

Siami, Navid. "An investigation of no-go theorems in hidden variable models of quantum mechanics." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/57364.

Abstract:
Realism is defined in the EPR paper as follows: "In a complete theory there is an element corresponding to each element of reality." Bell showed that there is a forbidden triangle (Realism, Quantum Statistics, and Locality): we are only allowed to pick two out of the three. In this thesis, we investigate further inequalities and no-go theorems of this kind. We also discuss possible hidden variable models that are tailored to be consistent with quantum mechanics and with the specific no-go theorems. In the special case of the Leggett inequality, the proposed hidden variable is novel in that it resides in the measurement device rather than in the wave function.
Science, Faculty of
Physics and Astronomy, Department of
Graduate
16

Channarond, Antoine. "Recherche de structure dans un graphe aléatoire : modèles à espace latent." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112338/document.

Abstract:
This thesis addresses the clustering of the nodes of a graph, in the framework of random models with latent variables. To each node i is allocated an unobserved (latent) variable Zi, and the probability of nodes i and j being connected depends conditionally on Zi and Zj. Unlike in the Erdos-Renyi model, connections are not independent and identically distributed; the latent variables rule the connection distribution of the nodes. These models are thus heterogeneous, and their structure is fully described by the latent variables and their distribution. Hence we aim at inferring them from the graph, which is the only observed data. In both original works of this thesis, we propose consistent inference methods with a computational cost at most linear in the number of nodes or edges, so that large graphs can be processed in a reasonable time. Both are based on a study of the distribution of the degrees, normalized in a way suited to the model. The first work deals with the Stochastic Blockmodel. We show the consistency of an unsupervised classification algorithm using concentration inequalities. We deduce from it a parametric estimation method, a model selection method for the number of latent classes, and a clustering test (testing whether there is one cluster or more), all of which are proved to be consistent. In the second work, the latent variables are positions in the space ℝd, having a density f. The connection probability depends on the distance between the node positions. The clusters are defined as the connected components of some fixed level set of f, and the goal is to estimate their number from the observed graph only. We estimate the density at the latent positions of the nodes from their degrees, which allows us to establish a correspondence between the clusters and the connected components of certain subgraphs of the observed graph, obtained by removing low-degree nodes. In particular, we derive an estimator of the number of clusters and show its consistency in a certain sense.
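The low-degree-pruning idea in the second part can be sketched as follows. This is a toy illustration of degree thresholding plus component counting, not the thesis's estimator or its consistency conditions:

```python
from collections import defaultdict

def count_clusters(edges, min_degree):
    """Connected components of the subgraph induced by high-degree nodes.

    Nodes whose degree falls below min_degree are discarded (low density
    around their latent position), then components are counted.
    """
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    keep = {n for n, d in degree.items() if d >= min_degree}
    adj = defaultdict(set)
    for u, v in edges:
        if u in keep and v in keep:
            adj[u].add(v)
            adj[v].add(u)
    seen, components = set(), 0
    for node in keep:                 # depth-first component counting
        if node in seen:
            continue
        components += 1
        stack = [node]
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(adj[n] - seen)
    return components

# Two 4-cliques joined through a single low-degree bridge node (node 0).
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4),
         (5, 6), (5, 7), (5, 8), (6, 7), (6, 8), (7, 8),
         (4, 0), (0, 5)]
print(count_clusters(edges, min_degree=3))  # 2: the bridge is pruned away
```

Without the threshold (min_degree=1) the same graph is a single component; pruning the sparse bridge reveals the two dense clusters, which is the mechanism exploited in the thesis.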
17

Sastry, Avinash. "N-gram modeling of tabla sequences using Variable-Length Hidden Markov Models for improvisation and composition." Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42792.

Abstract:
This work presents a novel approach for the design of a predictive model of music that can be used to analyze and generate musical material that is highly context dependent. The system is based on an approach known as n-gram modeling, often used in language processing and speech recognition algorithms, implemented initially upon a framework of Variable-Length Markov Models (VLMMs) and then extended to Variable-Length Hidden Markov Models (VLHMMs). The system brings together various principles like escape probabilities, smoothing schemes and uses multiple representations of the data stream to construct a multiple viewpoints system that enables it to draw complex relationships between the different input n-grams, and use this information to provide a stronger prediction scheme. It is implemented as a MAX/MSP external in C++ and is intended to be a predictive framework that can be used to create generative music systems and educational and compositional tools for music. A formal quantitative evaluation scheme based on entropy of the predictions is used to evaluate the model in sequence prediction tasks on a database of tabla compositions. The results show good model performance for both the VLMM and the VLHMM while highlighting the expensive computational cost of higher-order VLHMMs.
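The variable-length n-gram prediction described here can be sketched with a toy model that backs off from the longest seen context to shorter ones. This is a simplified stand-in for the escape-probability and smoothing schemes the thesis combines, and the tabla-like token string is invented:

```python
from collections import defaultdict

class VLMM:
    """Toy variable-length Markov model with longest-match backoff.

    Counts all contexts up to max_order during training, and at prediction
    time backs off to shorter contexts when a longer one was never seen.
    """
    def __init__(self, max_order=2):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, seq):
        for i, sym in enumerate(seq):
            for k in range(self.max_order + 1):
                if i - k < 0:
                    break
                ctx = tuple(seq[i - k:i])
                self.counts[ctx][sym] += 1

    def predict(self, history):
        # Back off from the longest matching context down to the empty one.
        for k in range(min(self.max_order, len(history)), -1, -1):
            ctx = tuple(history[len(history) - k:])
            if self.counts[ctx]:
                nxt = self.counts[ctx]
                return max(nxt, key=nxt.get)
        return None

model = VLMM(max_order=2)
model.train(list("dha-ge-na dha-ge-na "))
print(model.predict(list("dha-ge-n")))  # 'a' -- the phrase continues
```

The full system replaces the hard backoff with escape probabilities, adds hidden states (VLHMM), and combines several such viewpoints over different representations of the stream.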
18

Perl, Robert. "Die Erwartungstheorie der Zinsstruktur : variable Zeitprämien, Regimeunsicherheit und Markov-Switching-Modelle ; eine empirischen Analyse für den deutschen Rentenmarkt /." Frankfurt am Main [u.a.] : Lang, 2003. http://www.gbv.de/dms/zbw/362547947.pdf.

19

Rech, Gianluigi. "Modelling and forecasting economic time series with single hidden-layer feedforward autoregressive artificial neural networks." Doctoral thesis, Handelshögskolan i Stockholm, Ekonomisk Statistik (ES), 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:hhs:diva-591.

Abstract:
This dissertation consists of 3 essays In the first essay, A Simple Variable Selection Technique for Nonlinear Models, written in cooperation with Timo Teräsvirta and Rolf Tschernig, I propose a variable selection method based on a polynomial expansion of the unknown regression function and an appropriate model selection criterion. The hypothesis of linearity is tested by a Lagrange multiplier test based on this polynomial expansion. If rejected, a kth order general polynomial is used as a base for estimating all submodels by ordinary least squares. The combination of regressors leading to the lowest value of the model selection criterion is selected.  The second essay, Modelling and Forecasting Economic Time Series with Single Hidden-layer Feedforward Autoregressive Artificial Neural Networks, proposes an unified framework for artificial neural network modelling. Linearity is tested and the selection of regressors performed by the methodology developed in essay I. The number of hidden units is detected by a procedure based on a sequence of Lagrange multiplier (LM) tests. Serial correlation of errors and parameter constancy are checked by LM tests as well. A Monte-Carlo study, the two classical series of the lynx and the sunspots, and an application on the monthly S&P 500 index return series are used to demonstrate the performance of the overall procedure. In the third essay, Forecasting with Artificial Neural Network Models (in cooperation with Marcelo Medeiros), the methodology developed in essay II, the most popular methods for artificial neural network estimation, and the linear autoregressive model are compared by forecasting performance on 30 time series from different subject areas. Early stopping, pruning, information criterion pruning, cross-validation pruning, weight decay, and Bayesian regularization are considered. 
The findings are that 1) the linear models very often outperform the neural network ones and 2) the modelling approach to neural networks developed in this thesis stands up well in comparison with the other neural network modelling methods considered here.

Diss. Stockholm: Handelshögskolan, 2002. Spikblad missing.

APA, Harvard, Vancouver, ISO, and other styles
20

Ridall, Peter Gareth. "Bayesian Latent Variable Models for Biostatistical Applications." Queensland University of Technology, 2004. http://eprints.qut.edu.au/16164/.

Full text
Abstract:
In this thesis we develop several kinds of latent variable models in order to address three types of bio-statistical problem. The three problems are the treatment effect of carcinogens on tumour development, spatial interactions between plant species, and motor unit number estimation (MUNE). The three types of data looked at are: highly heterogeneous longitudinal count data, quadrat counts of species on a rectangular lattice and, lastly, electrophysiological data consisting of measurements of compound muscle action potential (CMAP) area and amplitude. Chapter 1 sets out the structure and the development of ideas presented in this thesis from the point of view of model structure, model selection, and efficiency of estimation. Chapter 2 is an introduction to the relevant literature that has influenced the development of this thesis. In Chapter 3 we use the EM algorithm for an application of an autoregressive hidden Markov model to describe longitudinal counts. The data are collected from experiments to test the effect of carcinogens on tumour growth in mice. Here we develop forward and backward recursions for calculating the likelihood and for estimation. Chapter 4 is the analysis of a similar kind of data using a more sophisticated model, incorporating random effects, but estimation this time is conducted from the Bayesian perspective. Bayesian model selection is also explored. In Chapter 5 we move to the two-dimensional lattice and construct a model for describing the spatial interaction of tree types. We also compare the merits of directed and undirected graphical models for describing the hidden lattice. Chapter 6 is the application of a Bayesian hierarchical model to MUNE, where the latent variable this time is multivariate Gaussian and dependent on a covariate, the stimulus. Model selection is carried out using the Bayes Information Criterion (BIC). 
In Chapter 7 we approach the same problem by using the reversible jump methodology (Green, 1995), where this time we use a dual Gaussian-Binary representation of the latent data. We conclude in Chapter 8 with suggestions for the direction of new work. In this thesis, all of the estimation carried out on real data has only been performed once we have been satisfied that estimation is able to retrieve the parameters from simulated data. Keywords: Amyotrophic lateral sclerosis (ALS), carcinogens, hidden Markov models (HMM), latent variable models, longitudinal data analysis, motor neurone disease (MND), partially ordered Markov models (POMMs), the pseudo-autologistic model, reversible jump, spatial interactions.
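The forward recursion mentioned in the Chapter 3 summary above is the standard one-pass computation of an HMM likelihood. Below is a minimal generic sketch in NumPy; it assumes a plain discrete-emission HMM, not the autoregressive count model actually developed in the thesis:

```python
import numpy as np

def _logsumexp(a, axis=None):
    """Numerically stable log(sum(exp(a)))."""
    m = np.max(a, axis=axis, keepdims=True)
    s = np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True)) + m
    return s if axis is None else np.squeeze(s, axis=axis)

def hmm_log_likelihood(obs, pi, A, B):
    """Forward recursion: log p(x_1..x_T) under a discrete-emission HMM.

    pi : (K,) initial state probabilities
    A  : (K, K) transition matrix, A[i, j] = p(z_t = j | z_{t-1} = i)
    B  : (K, M) emission matrix, B[k, m] = p(x_t = m | z_t = k)
    obs: sequence of observation symbols in {0, ..., M-1}
    """
    log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)
    # alpha_1(k) = pi(k) * B[k, x_1]
    log_alpha = log_pi + log_B[:, obs[0]]
    # alpha_t(j) = B[j, x_t] * sum_i alpha_{t-1}(i) * A[i, j]
    for x in obs[1:]:
        log_alpha = log_B[:, x] + _logsumexp(log_alpha[:, None] + log_A, axis=0)
    return _logsumexp(log_alpha).item()
```

The backward recursion is the mirror image of this pass; together they yield the posterior state probabilities that the EM algorithm needs.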
APA, Harvard, Vancouver, ISO, and other styles
21

Alat, Gokcen. "A Variable Structure - Autonomous - Interacting Multiple Model Ground Target Tracking Algorithm In Dense Clutter." Phd thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615512/index.pdf.

Full text
Abstract:
Tracking of a single ground target using GMTI radar detections is considered. A Variable Structure Autonomous Interacting Multiple Model (VS-A-IMM) structure is developed to address the challenges of ground target tracking while maintaining an acceptable level of computational complexity. The following approach is used in this thesis: use simple tracker structures; incorporate a priori information such as topographic constraints and road maps as much as possible; use enhanced gating techniques to minimize the effect of clutter; develop methods against stop-move and hide motion of the target; tackle on-road/off-road transitions and junction crossings; establish measures against non-detections caused by the environment. The tracker structure is derived using a composite state estimation set-up that incorporates multiple models and MAP and MMSE estimation. The root mean square position and velocity error performance of the VS-A-IMM algorithm is compared with the baseline IMM and the VS-IMM methods found in the literature. It is observed that the newly developed VS-A-IMM algorithm performs better than the baseline methods in realistic conditions such as on-road/off-road transitions, tunnels, stops, junction crossings, and non-detections.
APA, Harvard, Vancouver, ISO, and other styles
22

Avila, Manuel. "Optimisation de modèles markoviens pour la reconnaissance de l'écrit." Rouen, 1996. http://www.theses.fr/1996ROUES034.

Full text
Abstract:
This thesis deals with the optimization of Markov models dedicated to the recognition of handwritten text, in the particular case of a reduced-vocabulary application: reading the literal amounts on cheques. The first chapter briefly describes the techniques used for handwriting recognition. We also present the word descriptions we used. The second chapter presents hidden Markov models. In particular, we present the different levels at which the handwriting-reading problem can be represented under Markovian modelling: the sentence, word, and letter levels. Finally, we present the algorithms commonly used to exploit Markov models, the Viterbi and Baum-Welch algorithms, with variants that we adapted to our needs. In the third chapter, we address the problem of optimizing the word descriptions. We give three methods for representing words. We then present a method for finding the optimal order of a Markov process, based on minimizing information criteria of the Akaike type (AIC, BIC, etc.). Finally, we compare the results of the three alphabets for orders 1 to 3. This allows us to validate the choice of word description and the order of the corresponding Markov model. We reuse these results in Chapter 4. In that chapter, three approaches to word recognition are proposed: the first is a holistic approach which, by definition, does not attempt to identify individual letters; the second is an analytical approach based on a fully explicit model; the third is a pseudo-analytical approach, intermediate between the two previous ones, which models the word analytically using holistic letter models. Finally, the results of these three methods are merged in Chapter 5. 
This chapter deals with the identification of the literal amounts on cheques. The strategy developed consists of three parts: validating the word segmentation, identifying the words, and reconstructing the sentence. Each part has its own suitably adapted Markovian model.
APA, Harvard, Vancouver, ISO, and other styles
23

Mattrand, Cécile. "Approche probabiliste de la tolérance aux dommages." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2011. http://tel.archives-ouvertes.fr/tel-00738947.

Full text
Abstract:
Because of the severity of accidents related to fatigue crack propagation, ensuring the integrity of structures subjected to this loading mode is an essential concern of the aeronautical industry. The work presented in this thesis addresses the safety of damage-tolerant aeronautical structures from a probabilistic point of view. Formulating and applying a reliability-based approach leading to reliable design and maintenance processes for aeronautical structures in an industrial context, however, requires overcoming a significant number of scientific obstacles. In this work the effort is concentrated on three areas. First, a methodology was developed to capture and faithfully reproduce the randomness of fatigue loading from load sequences observed on monitored in-service structures, which constitutes a real scientific advance. A second line of research concerned the selection of a mechanical model capable of predicting crack growth under variable-amplitude loading at moderate computational cost. The work thus relied on the PREFFAS model, for which extensions were also proposed in order to remove the restrictive assumption of load periodicity. Finally, the probabilistic analyses, products of the coupling between the mechanical model and the previously established stochastic models, led among other things to the conclusion that the loading is a parameter that significantly affects the scatter of the crack-propagation phenomenon. The last objective of this work therefore concerned the formulation and solution of the damage-tolerance reliability problem from the stochastic load models retained, which constitutes a real scientific challenge. 
A specific method for solving the reliability problem was implemented to meet the stated objectives and applied to structures considered representative of real problems.
APA, Harvard, Vancouver, ISO, and other styles
24

Vourdas, Apostolos. "Subsystems of a finite quantum system and Bell-like inequalities." 2014. http://hdl.handle.net/10454/10806.

Full text
Abstract:
The set of subsystems Sigma(m) of a finite quantum system Sigma(n) with variables in Z(n), together with logical connectives, is a Heyting algebra. The probabilities tau(m|rho(n)) = Tr[B(m) rho(n)] (where B(m) is the projector to Sigma(m)) are compatible with associativity of the join in the Heyting algebra only if the variables belong to the same chain. Consequently, contextuality in the present formalism has the chains as contexts. Various Bell-like inequalities are discussed. They are violated, and this proves that quantum mechanics is a contextual theory.
APA, Harvard, Vancouver, ISO, and other styles
25

Dwivedi, Saurav. "Probabilistic Interpretation of Quantum Mechanics with Schrödinger Quantization Rule." Habilitation à diriger des recherches, 2011. http://tel.archives-ouvertes.fr/tel-00573846.

Full text
Abstract:
Quantum theory is a probabilistic theory in which certain variables are hidden or inaccessible, resulting in a lack of representation of the systems under study. However, I deduce a probabilistic representation of a system by introducing a probability of existence w, and quantize it by exploiting Schrödinger's quantization rule. The formalism enriches probabilistic quantum theory and enables the representation of systems in a probabilistic manner.
APA, Harvard, Vancouver, ISO, and other styles
26

Vourdas, Apostolos. "The complete Heyting algebra of subsystems and contextuality." Thesis, 2013. http://hdl.handle.net/10454/9747.

Full text
Abstract:
The finite set of subsystems of a finite quantum system with variables in Z(n) is studied as a Heyting algebra. The physical meaning of the logical connectives is discussed. It is shown that disjunction of subsystems is a more general concept than superposition. Consequently, the quantum probabilities related to commuting projectors in the subsystems are incompatible with associativity of the join in the Heyting algebra, unless the variables belong to the same chain. This leads to contextuality, which in the present formalism has the chains in the Heyting algebra as contexts. Logical Bell inequalities, which contain "Heyting factors," are discussed. The formalism is also applied to the infinite set of all finite quantum systems, which is appropriately enlarged in order to become a complete Heyting algebra.
APA, Harvard, Vancouver, ISO, and other styles
27

Lemyre, Gabriel. "Modèles de Markov à variables latentes : matrice de transition non-homogène et reformulation hiérarchique." Thesis, 2021. http://hdl.handle.net/1866/25476.

Full text
Abstract:
This master's thesis is concerned with hidden Markov models, a family of models in which a latent Markov chain governs the behaviour of an observable stochastic process through which a noisy version of the hidden chain shows through. These bivariate stochastic processes, which can be seen as a natural generalization of mixture models, have among other things demonstrated their ability to capture the changing dynamics of many time series and, more specifically in finance, to reproduce most of the stylized facts of financial returns. We focus in particular on discrete-time Markov chains with finite state spaces, with the objective of studying the contribution of their hierarchical reformulations and of the relaxation of the homogeneity assumption on the transition matrix to goodness of fit, forecast quality, and the reproduction of the stylized facts. To this end we present two hierarchical structures, the first allowing a new interpretation of the relationships between the states of the chain, and the second additionally allowing a more parsimonious parameterization of the transition matrix. We also consider three non-homogeneous extensions, two of which depend on observable variables and one of which depends on another latent variable. For these models we analyse goodness of fit and forecast quality on the series of log returns of the S&P 500 and of the Canada-United States exchange rate (CADUSD). We further illustrate the models' ability to reproduce the stylized facts, and present an interpretation of the estimated parameters for the hierarchical and non-homogeneous models. The results obtained seem in general to confirm the potential contribution of hierarchical structures and non-homogeneous models. 
In particular, these results seem to suggest that incorporating non-homogeneous dynamics into the hierarchical models makes it possible to reproduce the stylized facts more faithfully (even the slow decay of the autocorrelation of absolute and squared centred returns) and to improve the forecasts obtained, while retaining the ability to interpret the estimated parameters.
This master's thesis is centered on Hidden Markov Models, a family of models in which an unobserved Markov chain dictates the behaviour of an observable stochastic process through which a noisy version of the latent chain is observed. These bivariate stochastic processes, which can be seen as a natural generalization of mixture models, have shown their ability to capture the varying dynamics of many time series and, more specifically in finance, to reproduce the stylized facts of financial returns. In particular, we are interested in discrete-time Markov chains with finite state spaces, with the objective of studying the contribution of their hierarchical formulations and the relaxation of the homogeneity hypothesis for the transition matrix to the quality of the fit and predictions, as well as the capacity to reproduce the stylized facts. We therefore present two hierarchical structures, the first allowing for new interpretations of the relationships between states of the chain, and the second allowing for a more parsimonious parameterization of the transition matrix. We also present three non-homogeneous models, two of which have transition probabilities dependent on observed explanatory variables, and the third in which the probabilities depend on another latent variable. We first analyze the goodness of fit and the predictive power of our models on the series of log returns of the S&P 500 and the exchange rate between the Canadian and American currencies (CADUSD). We also illustrate their capacity to reproduce the stylized facts, and present interpretations of the estimated parameters for the hierarchical and non-homogeneous models. In general, our results seem to confirm the contribution of hierarchical and non-homogeneous models to these measures of performance. 
In particular, these results seem to suggest that the incorporation of non-homogeneous dynamics to a hierarchical structure may allow for a more faithful reproduction of the stylized facts—even the slow decay of the autocorrelation functions of squared and absolute returns—and better predictive power, while still allowing for the interpretation of the estimated parameters.
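As a generic illustration of a covariate-driven (non-homogeneous) transition matrix of the kind discussed above, here is a minimal two-state sketch with a logistic link; the parameterization is assumed for illustration and is not the one estimated in the thesis:

```python
import numpy as np

def transition_matrix(z_t, beta):
    """Time-varying 2x2 transition matrix driven by a scalar covariate z_t.

    beta: (2, 2) array; row k holds (intercept, slope) for the probability
    of staying in state k. Off-diagonal entries are the complements.
    """
    stay = 1.0 / (1.0 + np.exp(-(beta[:, 0] + beta[:, 1] * z_t)))  # logistic link
    return np.array([[stay[0], 1 - stay[0]],
                     [1 - stay[1], stay[1]]])
```

Each row still sums to one by construction, and the homogeneous model is recovered when the slope coefficients are zero.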
APA, Harvard, Vancouver, ISO, and other styles
28

Meila, Marina, Michael I. Jordan, and Quaid Morris. "Estimating Dependency Structure as a Hidden Variable." 1997. http://hdl.handle.net/1721.1/7245.

Full text
Abstract:
This paper introduces a probability model, the mixture of trees, that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms that use EM and the Minimum Spanning Tree algorithm to find the ML and MAP mixture of trees for a variety of priors, including the Dirichlet and the MDL priors.
APA, Harvard, Vancouver, ISO, and other styles
29

Meila, Marina, Michael I. Jordan, and Quaid Morris. "Estimating Dependency Structure as a Hidden Variable." 1998. http://hdl.handle.net/1721.1/7257.

Full text
Abstract:
This paper introduces a probability model, the mixture of trees, that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms that use EM and the Minimum Spanning Tree algorithm to find the ML and MAP mixture of trees for a variety of priors, including the Dirichlet and the MDL priors. We also show that the single tree classifier acts like an implicit feature selector, thus making the classification performance insensitive to irrelevant attributes. Experimental results demonstrate the excellent performance of the new model both in density estimation and in classification.
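The EM-plus-spanning-tree machinery summarized above rests on the Chow-Liu observation: for discrete data, the maximum-likelihood tree is the maximum-weight spanning tree when edges are weighted by empirical mutual information. A minimal sketch of that inner step (plain NumPy with a Prim-style tree construction; a generic illustration, not the authors' code):

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information between two discrete columns."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    for a, b in zip(x, y):
        joint[a, b] += 1
    joint /= len(x)
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / np.outer(px, py)[nz])))

def chow_liu_edges(data):
    """Maximum-weight spanning tree over variables, weights = mutual information.

    data: (n_samples, n_vars) array of discrete values.
    Returns a list of (i, j) tree edges (Prim's algorithm).
    """
    n_vars = data.shape[1]
    mi = np.zeros((n_vars, n_vars))
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            mi[i, j] = mi[j, i] = mutual_information(data[:, i], data[:, j])
    in_tree, edges = {0}, []
    while len(in_tree) < n_vars:
        # Greedily attach the outside variable with the strongest link to the tree.
        i, j = max(((i, j) for i in in_tree for j in range(n_vars) if j not in in_tree),
                   key=lambda e: mi[e])
        edges.append((i, j))
        in_tree.add(j)
    return edges
```

In a mixture of trees, this step would be run once per tree component on posterior-weighted counts inside each EM iteration.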
APA, Harvard, Vancouver, ISO, and other styles
30

Vervoort, Louis. "Does Chance hide Necessity? : a reevaluation of the debate ‘determinism - indeterminism’ in the light of quantum mechanics and probability theory." Thèse, 2013. http://hdl.handle.net/1866/10221.

Full text
Abstract:
In this thesis the old philosophical question 'does every event have a cause?' will be examined in the light of quantum mechanics and probability theory. In physics as in the philosophy of science, the orthodox position maintains that the physical world is indeterministic. At the fundamental level of physical reality, the quantum level, events would occur without causes, by chance, by 'irreducible' hazard. The most precise physical theorem leading to this conclusion is Bell's theorem. Here the premises of this theorem will be re-examined. It will be recalled that solutions to the theorem other than indeterminism are conceivable, some of which are known but neglected, such as 'superdeterminism'. But it will be argued that other solutions compatible with determinism exist, notably by studying physical model systems. One of the general conclusions of this thesis is that the interpretation of Bell's theorem and of quantum mechanics depends crucially on the philosophical premises from which one starts. For instance, within the vision of a Spinoza, the quantum world may well be understood as deterministic. But it is argued that even a determinism much less radical than Spinoza's is not eliminated by the physical experiments. If this is true, the 'determinism – indeterminism' debate is not decided in the laboratory: it remains philosophical and open, contrary to what is often thought. In the second part of this thesis a model for the interpretation of probability will be proposed. A conceptual study of the notion of probability indicates that the hypothesis of determinism helps in understanding what a 'probabilistic system' is. It seems that determinism can answer certain questions for which indeterminism has no answers. 
For this reason we will conclude that Laplace's conjecture, namely that probability theory presupposes an underlying deterministic reality, retains all its legitimacy. In this thesis the methods of both philosophy and physics will be used. The two fields appear to be solidly connected here, and to offer a vast potential for cross-fertilization, in both directions.
In this thesis the ancient philosophical question whether 'everything has a cause' will be examined in the light of quantum mechanics and probability theory. In the physics and philosophy of science communities the orthodox position states that the physical world is indeterministic. On the deepest level of physical reality – the quantum level – things or events would have no causes but happen by chance, by irreducible hazard. Arguably the clearest and most convincing theorem that led to this conclusion is Bell's theorem. Here the premises of this theorem will be re-evaluated, notably by investigating physical model systems. It will be recalled that other solutions to the theorem than indeterminism exist, some of which are known but neglected, such as 'superdeterminism'. But it will be argued that also other solutions compatible with determinism exist. One general conclusion will be that the interpretation of Bell's theorem and quantum mechanics hinges on the philosophical premises from which one starts. For instance, within a worldview à la Spinoza the quantum world may well be seen as deterministic. But it is argued that also much 'softer' determinism than Spinoza's is not excluded by the existing experiments. If that is true, the 'determinism – indeterminism' debate is not decided in the laboratory: it remains philosophical and open-ended – contrary to what is often believed. In the second part of the thesis a model for the interpretation of probability will be proposed. A conceptual study of the notion of probability indicates that the hypothesis of determinism is instrumental for understanding what 'probabilistic systems' are. It seems that determinism answers certain questions that cannot be answered by indeterminism. Therefore we believe there is room for the conjecture that probability theory cannot do without a deterministic reality underneath probability – as Laplace claimed. Throughout the thesis the methods of philosophy and physics will be used. 
Both fields appear to be solidly intertwined here, and to offer a large potential for cross-fertilization – in both directions.
APA, Harvard, Vancouver, ISO, and other styles