Dissertations / Theses on the topic 'Continuous time Markov chain'
Consult the top 50 dissertations and theses for research on the topic 'Continuous time Markov chain.'
Rao, V. A. P. "Markov chain Monte Carlo for continuous-time discrete-state systems." Thesis, University College London (University of London), 2012. http://discovery.ucl.ac.uk/1349490/.
Alharbi, Randa. "Bayesian inference for continuous time Markov chains." Thesis, University of Glasgow, 2019. http://theses.gla.ac.uk/40972/.
Witte, Hugh Douglas. "Markov chain Monte Carlo and data augmentation methods for continuous-time stochastic volatility models." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/283976.
Dai Pra, Paolo, Pierre-Yves Louis, and Ida Minelli. "Monotonicity and complete monotonicity for continuous-time Markov chains." Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2006/766/.
We study the notions of monotonicity and complete monotonicity for Markov processes (that is, continuous-time Markov chains) taking values in a partially ordered set. These two notions are not equivalent, as is also the case in discrete time. However, we show that there are partially ordered sets for which monotonicity and complete monotonicity coincide in continuous time but not in discrete time.
Keller, Peter, Sylvie Roelly, and Angelo Valleriani. "On time duality for quasi-birth-and-death processes." Universität Potsdam, 2012. http://opus.kobv.de/ubp/volltexte/2012/5697/.
Ayana, Haimanot, and Sarah Al-Swej. "A review of two financial market models: the Black-Scholes-Merton and the Continuous-time Markov chain models." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55417.
Lo, Chia Chun. "Application of continuous time Markov chain models : option pricing, term structure of interest rates and stochastic filtering." Thesis, University of Essex, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.496255.
Sokolović, Sonja [Verfasser]. "Multigrid methods for highdimensional, tensor structured continuous time Markov chains / Sonja Sokolović." Wuppertal : Universitätsbibliothek Wuppertal, 2017. http://d-nb.info/1135623945/34.
Levin, Pavel. "Computing Most Probable Sequences of State Transitions in Continuous-time Markov Systems." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/22918.
Popp, Anton [Verfasser], and N. [Akademischer Betreuer] Bäuerle. "Risk-Sensitive Stopping Problems for Continuous-Time Markov Chains / Anton Popp. Betreuer: N. Bäuerle." Karlsruhe : KIT-Bibliothek, 2016. http://d-nb.info/1110969678/34.
Prezioso, Valentina. "Interest rate derivatives pricing when the short rate is a continuous time finite state Markov process." Doctoral thesis, Università degli studi di Padova, 2010. http://hdl.handle.net/11577/3421547.
Full textLo scopo di questo lavoro è prezzare i titoli derivati sui tassi di interesse assumendo che il tasso spot è una catena di Markov a tempo continuo con spazio degli stati finito. Il nostro modello si inspira all'articolo di Filipovic'-Zabczyk: noi estendiamo la loro struttura a tempo discreto con una con tempi aleatori, considerando in questo modo i salti aleatori che realisticamente avvengono nel mercato, e usiamo una tecnica basata su un operatore contraente. Riusciamo a prezzare con lo stesso approccio zero-coupon bond, cap e swaption; presentiamo inoltre dei risultati numerici per il prezzaggio di questi prodotti. Infine estendiamo il modello unifattoriale con uno multifattoriale.
Linzner, Dominik [Verfasser], Heinz [Akademischer Betreuer] Köppl, and Manfred [Akademischer Betreuer] Opper. "Scalable Inference in Graph-coupled Continuous-time Markov Chains / Dominik Linzner ; Heinz Köppl, Manfred Opper." Darmstadt : Universitäts- und Landesbibliothek, 2021. http://d-nb.info/1225040817/34.
Schuster, Johann [Verfasser]. "Towards faster numerical solution of Continuous Time Markov Chains stored by symbolic data structures / Johann Schuster." München : Verlag Dr. Hut, 2012. http://d-nb.info/1020299347/34.
Full textSchneider, Olaf. "Krylov subspace methods and their generalizations for solving singular linear operator equations with applications to continuous time Markov chains." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2009. http://nbn-resolving.de/urn:nbn:de:bsz:105-1148840.
Conforti, Giovanni [Verfasser], and Sylvie [Akademischer Betreuer] Roelly. "Reciprocal classes of continuous time Markov Chains / Giovanni Conforti ; Betreuer: Sylvie Roelly ; Universita degli Studi di Padova." Potsdam : Universität Potsdam, 2015. http://d-nb.info/1218399740/34.
Bondesson, Carl. "Modelling of Safety Concepts for Autonomous Vehicles using Semi-Markov Models." Thesis, Uppsala universitet, Signaler och System, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-353060.
Charlou, Christophe. "Caractérisation et modélisation de l'écoulement de boues résiduaires dans un sécheur à palettes." Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2014. http://www.theses.fr/2014EMAC0004/document.
Drying is an unavoidable operation prior to sludge valorization in incineration, pyrolysis or gasification. The flexibility to adapt the solid content of the dried sludge to the demand is a major requirement of any drying system. This objective is difficult to reach for paddle dryers, so modeling the process is essential. Unfortunately, sludge rheological behavior is complex and computational fluid dynamics is out of reach for the time being. The concept of residence time distribution (RTD) is used here to investigate the sludge flow pattern in a paddle dryer. A reliable and reproducible protocol was established and implemented on a lab-scale continuous dryer. Pulse injections of titanium oxide and of metal salts, with X-ray fluorescence spectroscopy as the detection method, were used to characterize the RTD of the anhydrous solid and the wet sludge, respectively. Premixing the pasty sludge, for instance to disperse the tracer powder, changes the structure of the material. This was highlighted through measurements of particle size distributions and characterization of rheological properties. However, drying experiments performed in batch showed that premixing influences neither the drying kinetics nor the sticky phase. The RTD curves of the anhydrous solid are superimposed on those of the moist sludge. Consequently, a simpler protocol, based on pulse injection of sodium chloride and offline conductivity measurements, was established. Easier to implement in industry and cheaper, this method proves to be as reliable as the first one. The influence of storage duration prior to drying was assessed: the mean residence time doubles when the storage duration increases from 24 h to 48 h. Finally, a model based on the theory of Markov chains was developed to represent the RTD. The flow of anhydrous solids is described by a chain of n perfectly mixed cells, n corresponding to the number of paddles.
The transition probabilities between the cells are governed by two parameters: the ratio of internal recirculation, R, and the solids hold-up, MS. R is determined from Van der Laan's relation and MS is identified by fitting the model to the experimental RTD. The model describes the flow pattern with good accuracy. The computed hold-up is lower than the experimental one: part of the sludge sticks to the walls of the dryer, acting as dead volumes in the process.
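The cell-chain RTD model described in this abstract lends itself to a short illustration. The sketch below is a hypothetical minimal implementation: the number of cells, the transfer probability and the recirculation ratio R are illustrative values, not taken from the thesis, and the hold-up enters only implicitly through the time scale.

```python
import numpy as np

def rtd_cell_chain(n=8, p_fwd=0.2, R=0.5, steps=2000):
    """Discrete-time Markov-chain RTD model: n perfectly mixed cells in
    series with internal recirculation, plus an absorbing outlet state.
    From cell i, material moves forward with probability p_fwd, backward
    with probability R*p_fwd, and stays put otherwise. Returns the
    exit-age distribution E(t) of a pulse injected in the first cell."""
    p_back = R * p_fwd
    m = n + 1                        # last state = outlet (absorbing)
    P = np.zeros((m, m))
    for i in range(n):
        P[i, i + 1] = p_fwd          # forward transfer (next cell or outlet)
        if i > 0:
            P[i, i - 1] = p_back     # internal recirculation
        P[i, i] = 1.0 - P[i].sum()   # material held up in the cell
    P[n, n] = 1.0                    # outlet absorbs
    state = np.zeros(m); state[0] = 1.0   # pulse injection in cell 0
    E = np.empty(steps)
    for t in range(steps):
        new = state @ P
        E[t] = new[n] - state[n]     # exit flux during this step
        state = new
    return E

E = rtd_cell_chain()
mean_rt = (np.arange(len(E)) * E).sum() / E.sum()  # mean residence time (in steps)
```

Fitting such a model to a measured RTD would then amount to adjusting R (or, in the thesis, obtaining it from Van der Laan's relation) and the hold-up so that the simulated curve matches the experimental one.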
Schuster, Johann [Verfasser], Markus [Akademischer Betreuer] Siegle, and Holger [Akademischer Betreuer] Hermanns. "Towards faster numerical solution of Continuous Time Markov Chains stored by symbolic data structures / Johann Schuster. Universität der Bundeswehr München, Fakultät für Informatik. Gutachter: Holger Hermanns. Betreuer: Markus Siegle." Neubiberg : Universitätsbibliothek der Universität der Bundeswehr, 2012. http://d-nb.info/102057920X/34.
Chaouch, Chakib. "Cyber-physical systems in the framework of audio song recognition and reliability engineering." Doctoral thesis, Chakib Chaouch, 2021. http://hdl.handle.net/11570/3210939.
Tribastone, Mirco. "Scalable analysis of stochastic process algebra models." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4629.
Liechty, John Calder. "MCMC methods and continuous-time, hidden Markov models." Thesis, University of Cambridge, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.625002.
Rýzner, Zdeněk. "Využití teorie hromadné obsluhy při návrhu a optimalizaci paketových sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219285.
Robacker, Thomas C. "Comparison of Two Parameter Estimation Techniques for Stochastic Models." Digital Commons @ East Tennessee State University, 2015. https://dc.etsu.edu/etd/2567.
Dittmer, Evelyn [Verfasser]. "Hidden Markov Models with time-continuous output behavior / Evelyn Dittmer." Berlin : Freie Universität Berlin, 2009. http://d-nb.info/1023498081/34.
Chotard, Alexandre. "Markov chain Analysis of Evolution Strategies." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112230/document.
In this dissertation an analysis of Evolution Strategies (ESs) using the theory of Markov chains is conducted. Proofs of divergence or convergence of these algorithms are obtained, and tools to achieve such proofs are developed. ESs are so-called "black-box" stochastic optimization algorithms, i.e. information on the function to be optimized is limited to the values it associates to points; in particular, gradients are unavailable. Proofs of convergence or divergence of these algorithms can be obtained through the analysis of the Markov chains underlying them. The proofs of log-linear convergence and of divergence obtained in this thesis, in the context of a linear function with or without constraint, are essential components for proofs of convergence of ESs on wide classes of functions. This dissertation first gives an introduction to Markov chain theory, then a state of the art on ESs and on black-box continuous optimization, and presents already established links between ESs and Markov chains. The contributions of this thesis are then presented:
- General mathematical tools applicable to a wide range of problems are developed. These tools make it easy to prove specific properties (irreducibility, aperiodicity, and the fact that compact sets are small sets) for the Markov chains studied; obtaining these properties without them is an ad hoc, tedious and technical process that can be very difficult.
- Different ESs are then analyzed on different problems. We study a (1,λ)-ES using cumulative step-size adaptation on a linear function and prove the log-linear divergence of the step-size; we also study the variation of the logarithm of the step-size, from which we establish a necessary condition for the stability of the algorithm with respect to the dimension of the search space.
- We then study an ES with constant step-size and with cumulative step-size adaptation on a linear function with a linear constraint, using resampling to handle unfeasible solutions. We prove that with constant step-size the algorithm diverges, while with cumulative step-size adaptation, depending on parameters of the problem and of the ES, the algorithm converges or diverges log-linearly. We then investigate how the convergence or divergence rate of the algorithm depends on the parameters of the problem and of the ES.
- Finally we study an ES with a sampling distribution that can be non-Gaussian and with constant step-size on a linear function with a linear constraint. We give sufficient conditions on the sampling distribution for the algorithm to diverge. We also show that different covariance matrices for the sampling distribution correspond to a change of norm of the search space, which implies that adapting the covariance matrix of the sampling distribution may allow an ES with cumulative step-size adaptation to successfully diverge on a linear function with any linear constraint.
Finally, these results are summed up and discussed, and perspectives for future work are explored.
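The log-linear divergence of the step-size for a (1,λ)-ES with cumulative step-size adaptation on a linear function, mentioned in this abstract, can be illustrated numerically. The sketch below uses conventional CSA parameter settings that are assumptions of this illustration, not values taken from the thesis:

```python
import numpy as np

def one_comma_lambda_csa(dim=10, lam=10, sigma0=1.0, iters=300, seed=1):
    """Minimal (1,lambda)-ES with cumulative step-size adaptation (CSA),
    minimizing the linear function f(x) = x[0]. On a linear function the
    step-size is expected to grow geometrically, i.e. log(sigma) grows
    roughly linearly with the iteration count."""
    rng = np.random.default_rng(seed)
    c = 4.0 / (dim + 4.0)                 # cumulation constant (conventional)
    d = 1.0 + np.sqrt(1.0 / dim)          # damping (conventional)
    chi_n = np.sqrt(dim) * (1 - 1/(4*dim) + 1/(21*dim**2))  # approx E||N(0,I)||
    x = np.zeros(dim); sigma = sigma0; path = np.zeros(dim)
    log_sigma = [np.log(sigma)]
    for _ in range(iters):
        Z = rng.standard_normal((lam, dim))
        f = (x + sigma * Z)[:, 0]         # linear objective: first coordinate
        z = Z[np.argmin(f)]               # step of the best offspring
        x = x + sigma * z
        path = (1 - c) * path + np.sqrt(c * (2 - c)) * z
        sigma *= np.exp((c / d) * (np.linalg.norm(path) / chi_n - 1))
        log_sigma.append(np.log(sigma))
    return np.array(log_sigma)

ls = one_comma_lambda_csa()
```

Plotting `ls` against the iteration index would show the approximately linear growth of log(sigma), i.e. the log-linear divergence of the step-size.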
Wan, Lijie. "CONTINUOUS TIME MULTI-STATE MODELS FOR INTERVAL CENSORED DATA." UKnowledge, 2016. http://uknowledge.uky.edu/statistics_etds/19.
Unwala, Ishaq Hasanali. "Pipelined processor modeling with finite homogeneous discrete-time Markov chain /." Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.
Veitch, John D. "Applications of Markov Chain Monte Carlo methods to continuous gravitational wave data analysis." Thesis, Connect to e-thesis to view abstract. Move to record for print version, 2007. http://theses.gla.ac.uk/35/.
Ph.D. thesis submitted to Information and Mathematical Sciences Faculty, Department of Mathematics, University of Glasgow, 2007. Includes bibliographical references. Print version also available.
Shaikh, A. D. "Modelling data and voice traffic over IP networks using continuous-time Markov models." Thesis, Aston University, 2009. http://publications.aston.ac.uk/15385/.
Despain, Lynnae. "A Mathematical Model of Amoeboid Cell Motion as a Continuous-Time Markov Process." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5671.
Wang, Yinglu. "A Markov Chain Based Method for Time Series Data Modeling and Prediction." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1592395278430805.
Helbert, Zachary T. "Modeling Enrollment at a Regional University using a Discrete-Time Markov Chain." Digital Commons @ East Tennessee State University, 2015. https://dc.etsu.edu/honors/281.
Mamudu, Lohuwa. "Modeling Student Enrollment at ETSU Using a Discrete-Time Markov Chain Model." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3310.
Frühwirth-Schnatter, Sylvia, Stefan Pittner, Andrea Weber, and Rudolf Winter-Ebmer. "Analysing plant closure effects using time-varying mixture-of-experts Markov chain clustering." Institute of Mathematical Statistics, 2018. http://dx.doi.org/10.1214/17-AOAS1132.
Yildirak, Sahap Kasirga. "The Identification Of A Bivariate Markov Chain Market Model." PhD thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/1257898/index.pdf.
Gupta, Amrita. "Unsupervised learning of disease subtypes from continuous time Hidden Markov Models of disease progression." Thesis, Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54364.
Rodrigues, Caio César Graciani. "Control and filtering for continuous-time Markov jump linear systems with partial mode information." Laboratório Nacional de Computação Científica, 2017. https://tede.lncc.br/handle/tede/267.
Full textApproved for entry into archive by Maria Cristina (library@lncc.br) on 2017-08-10T18:45:57Z (GMT) No. of bitstreams: 1 tese_caio_cesar_graciani_rodrigues.pdf: 1550607 bytes, checksum: 740cf1e87f2a897b734accc7abd6ec11 (MD5)
Made available in DSpace on 2017-08-10T18:46:07Z (GMT). No. of bitstreams: 1 tese_caio_cesar_graciani_rodrigues.pdf: 1550607 bytes, checksum: 740cf1e87f2a897b734accc7abd6ec11 (MD5) Previous issue date: 2017-04-10
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (Capes)
Over the past few decades, the study of systems subjected to abrupt changes in their structures has consolidated as a significant area of research, due, in part, to the increasing importance of dealing with the occurrence of random failures in complex systems. In this context, the Markov jump linear system (MJLS) comes up as an approach of central interest, as a means of representing these dynamics. Among the numerous works that seek to establish design methods for control and filtering for this class of systems, the scarcity of literature on partial-observation scenarios is noticeable. This thesis features contributions to H∞ control and filtering for continuous-time MJLS with partial mode information. In order to overcome the challenge posed by the lack of information about the current state of the Markov chain, we use a detector-based formulation. In this formulation, we assume the existence of a detector, available at all times, which provides partial information about the operating mode of the jump process. A favorable feature of this strategy is that it allows us to recover (without being limited to) some recent results for partial-information scenarios in which an explicit solution is available, such as the cases of complete information, mode-independent design, and cluster observations. Our results comprise a new bounded real lemma followed by the design of controllers and filters driven only by the information given by the detector. Both the H∞ analysis and the design methods presented are established through the solution of linear matrix inequalities. In addition, numerical simulations are presented encompassing the H∞ performance for particular structures of the detector process. From an application point of view, we highlight some examples related to the linearized dynamics of an unmanned aerial vehicle.
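As an illustration of the system class studied in this thesis, a continuous-time Markov jump linear system can be simulated by sampling the exponential holding times of the mode chain and integrating the switched linear dynamics. The two-mode example below is a toy sketch: the generator Q and the mode matrices A are invented for illustration and are not taken from the thesis.

```python
import numpy as np

def simulate_mjls(T=10.0, dt=1e-3, seed=0):
    """Illustrative continuous-time Markov jump linear system:
    dx/dt = A[mode] x, where the mode follows a two-state CTMC with
    generator Q. Mode holding times are sampled as exponentials
    (Gillespie-style), and the linear flow is integrated with forward
    Euler between jumps. Both mode matrices here are stable."""
    Q = np.array([[-1.0,  1.0],
                  [ 2.0, -2.0]])                 # CTMC generator
    A = [np.array([[-0.5, 1.0], [0.0, -0.5]]),   # mode-0 dynamics
         np.array([[-2.0, 0.0], [1.0, -1.0]])]   # mode-1 dynamics
    rng = np.random.default_rng(seed)
    x = np.array([1.0, 1.0]); mode = 0; t = 0.0
    t_next_jump = rng.exponential(1.0 / -Q[mode, mode])
    while t < T:
        if t >= t_next_jump:                     # switch to the other mode
            mode = 1 - mode
            t_next_jump = t + rng.exponential(1.0 / -Q[mode, mode])
        x = x + dt * (A[mode] @ x)               # Euler step of the linear flow
        t += dt
    return x

x_final = simulate_mjls()
```

Since both toy modes are stable, the state decays regardless of the switching realization; the control and filtering problems of the thesis concern guaranteeing such behavior, with an H∞ bound, when only a detector's partial mode information is available.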
Figueiredo, Danilo Zucolli. "Discrete-time jump linear systems with Markov chain in a general state space." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-18012017-115659/.
This thesis deals with discrete-time Markov jump linear systems (MJLS) with Markov chain in a general Borel space S. Several control problems are addressed for this class of dynamical systems, including stochastic stability (SS), linear quadratic (LQ) optimal control synthesis, filter design, and a separation principle. Necessary and sufficient conditions for SS are obtained. It is shown that SS is equivalent to the spectral radius of an operator being less than 1, or to the existence of a solution to a Lyapunov equation. The finite- and infinite-horizon optimal control problems are addressed based on the concept of SS. The solution to the finite-horizon (respectively, infinite-horizon) LQ optimal control problem is obtained from the associated S-coupled Riccati difference (respectively, algebraic) equations for control. By S-coupled it is meant that the equations are coupled via an integral over the stochastic kernel, with transition density taken with respect to a σ-finite measure on the Borel space S. The design of Markovian linear filters is analyzed, and a solution to the finite-horizon (respectively, infinite-horizon) filtering problem is obtained based on the associated S-coupled Riccati difference (respectively, algebraic) equations for filtering. Conditions for the existence and uniqueness of a stabilizing positive semi-definite solution to the S-coupled algebraic Riccati equations associated with the control and filtering problems are also obtained. Finally, a separation principle for discrete-time MJLS with Markov chain in a general state space is established. It is shown that the optimal controller for an optimal control problem with partial information separates the partial-information control problem into two problems: one associated with a filtering problem and the other with an optimal control problem with complete information.
It is hoped that the results obtained in this thesis will motivate future research on discrete-time MJLS with Markov chain in a general state space.
Manrique, Garcia Aurora. "Econometric analysis of limited dependent time series." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389797.
O'Ruanaidh, Joseph J. K. "Numerical Bayesian methods applied to signal processing." Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339465.
Horký, Miroslav. "Modely hromadné obsluhy." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-232033.
Spade, David Allen. "Investigating Convergence of Markov Chain Monte Carlo Methods for Bayesian Phylogenetic Inference." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1372173121.
Yang, Linji. "Phase transitions in spin systems: uniqueness, reconstruction and mixing time." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47593.
Villa, Simone. "Continuous Time Bayesian Networks for Reasoning and Decision Making in Finance." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/69953.
Full textThe analysis of the huge amount of financial data, made available by electronic markets, calls for new models and techniques to effectively extract knowledge to be exploited in an informed decision-making process. The aim of this thesis is to introduce probabilistic graphical models that can be used to reason and to perform actions in such a context. In the first part of this thesis, we present a framework which exploits Bayesian networks to perform portfolio analysis and optimization in a holistic way. It leverages on the compact and efficient representation of high dimensional probability distributions offered by Bayesian networks and their ability to perform evidential reasoning in order to optimize the portfolio according to different economic scenarios. In many cases, we would like to reason about the market change, i.e. we would like to express queries as probability distributions over time. Continuous time Bayesian networks can be used to address this issue. In the second part of the thesis, we show how it is possible to use this model to tackle real financial problems and we describe two notable extensions. The first one concerns classification, where we introduce an algorithm for learning these classifiers from Big Data, and we describe their straightforward application to the foreign exchange prediction problem in the high frequency domain. The second one is related to non-stationary domains, where we explicitly model the presence of statistical dependencies in multivariate time-series while allowing them to change over time. In the third part of the thesis, we describe the use of continuous time Bayesian networks within the Markov decision process framework, which provides a model for sequential decision-making under uncertainty. We introduce a method to control continuous time dynamic systems, based on this framework, that relies on additive and context-specific features to scale up to large state spaces. 
Finally, we show the performance of our method in a simplified, but meaningful, trading domain.
Atamna, Asma. "Analysis of Randomized Adaptive Algorithms for Black-Box Continuous Constrained Optimization." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS010/document.
We investigate various aspects of adaptive randomized (or stochastic) algorithms for both constrained and unconstrained black-box continuous optimization. The first part of this thesis focuses on step-size adaptation in unconstrained optimization. We first present a methodology for efficiently assessing a step-size adaptation mechanism, which consists in testing a given algorithm on a minimal set of functions, each reflecting a particular difficulty that an efficient step-size adaptation algorithm should overcome. We then benchmark two step-size adaptation mechanisms on the well-known BBOB noiseless testbed and compare their performance to that of the state-of-the-art evolution strategy (ES), CMA-ES, with cumulative step-size adaptation. In the second part of this thesis, we investigate linear convergence of a (1+1)-ES and of a general step-size adaptive randomized algorithm on a linearly constrained optimization problem, where an adaptive augmented Lagrangian approach is used to handle the constraints. To that end, we extend the Markov chain approach used to analyze randomized algorithms for unconstrained optimization to the constrained case. We prove that when the augmented Lagrangian associated to the problem, centered at the optimum and the corresponding Lagrange multipliers, is positive homogeneous of degree 2, then for algorithms enjoying some invariance properties there exists an underlying homogeneous Markov chain whose stability (typically positivity and Harris recurrence) leads to linear convergence to both the optimum and the corresponding Lagrange multipliers. We deduce linear convergence under the aforementioned stability assumptions by applying a law of large numbers for Markov chains. We also present a general framework for designing an augmented-Lagrangian-based adaptive randomized algorithm for constrained optimization from an adaptive randomized algorithm for unconstrained optimization.
Stettler, John. "The Discrete Threshold Regression Model." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1440369876.
Ma, Xinyuan. "Research on dynamic correlation based on stochastic time-varying beta and stochastic volatility." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/120023/1/Xinyuan_Ma_Thesis.pdf.
Singh, Jasdeep. "Schedulability Analysis of Probabilistic Real-Time Systems." Thesis, Toulouse, ISAE, 2020. http://www.theses.fr/2020ESAE0010.
This thesis is a study of probabilistic approaches for modelling and analyzing real-time systems. The objective is to understand and reduce the pessimism inherent in system analysis. Real-time systems must produce results under real-world timing constraints. The execution of the tasks within the system is based on their worst-case execution time; in practice, there can be many possible execution times below the worst case. We use the probabilistic worst-case execution time, a worst-case probability distribution that upper-bounds all those possible execution times. We use a continuous-time Markov chain model to obtain the probability of missing a real-world timing constraint. We also study mixed-criticality (MC) systems, because MC systems likewise cope with pessimism with safety in mind. MC systems consist of tasks with different importance, or criticalities. The system operates under different criticality modes, in which the execution of the tasks of the same or higher criticality is ensured. We first approach MC systems using a discrete-time Markov chain to obtain the probability of the system entering higher criticalities. We observe certain limitations of our approaches and proceed to model probabilistic MC systems using graph models. We question the existing approaches in the literature and provide our own. We obtain schedules for MC systems that are optimized for resource usage. We also take a first step towards modelling dependences among tasks that arise from their scheduling.
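The first step described in this abstract, obtaining the probability of missing a timing constraint from a continuous-time Markov chain, can be illustrated on a toy absorbing CTMC: the deadline-miss probability is one minus the transient probability of having reached the completion state by the deadline. The model below (two execution phases with exponential rates and a fixed deadline) is hypothetical, not from the thesis; the transient analysis uses uniformization.

```python
import numpy as np

def deadline_miss_prob(Q, start, done, deadline, terms=200):
    """Transient analysis of an absorbing CTMC by uniformization:
    returns P(miss) = 1 - [exp(Q*deadline)]_{start,done}, i.e. the
    probability that the absorbing 'done' state has NOT been reached
    by the deadline."""
    Lam = max(-Q[i, i] for i in range(len(Q)))   # uniformization rate
    P = np.eye(len(Q)) + Q / Lam                 # uniformized DTMC
    v = np.zeros(len(Q)); v[start] = 1.0
    prob_done = 0.0
    poisson = np.exp(-Lam * deadline)            # Poisson weight for k = 0
    for k in range(terms):
        prob_done += poisson * v[done]           # weight * P(done after k jumps)
        v = v @ P
        poisson *= Lam * deadline / (k + 1)      # next Poisson weight
    return 1.0 - prob_done

# Hypothetical task model: two execution phases 0 -> 1 -> done (state 2),
# each completing at rate 2.0; state 2 is absorbing.
Q = np.array([[-2.0,  2.0, 0.0],
              [ 0.0, -2.0, 2.0],
              [ 0.0,  0.0, 0.0]])
p_miss = deadline_miss_prob(Q, start=0, done=2, deadline=3.0)
```

For this two-phase example the completion time is Erlang-distributed, so the result can be checked against the closed form P(miss) = e^{-2D}(1 + 2D) at deadline D.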
Wang, Chiying. "Contributions to Collective Dynamical Clustering-Modeling of Discrete Time Series." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-dissertations/198.
Petersson, Mikael. "Perturbed discrete time stochastic models." Doctoral thesis, Stockholms universitet, Matematiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-128979.
At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 4: Manuscript. Paper 5: Manuscript. Paper 6: Manuscript.