
Dissertations / Theses on the topic 'Markov processes'


Consult the top 50 dissertations / theses for your research on the topic 'Markov processes.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Desharnais, Josée. "Labelled Markov processes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0031/NQ64546.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Balan, Raluca M. "Set-Markov processes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ66119.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Eltannir, Akram A. "Markov interactive processes." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/30745.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Haugomat, Tristan. "Localisation en espace de la propriété de Feller avec application aux processus de type Lévy." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S046/document.

Full text
Abstract:
In this thesis, we give a localisation in space of the theory of Feller processes. A first objective is to obtain simple and precise results on the convergence of Markov processes. A second objective is to study the link between the notions of Feller property, martingale problem and Skorokhod topology. We first give a localised version of the Skorokhod topology and study the associated notions of compactness and tightness. We relate the localised and non-localised Skorokhod topologies by means of the notion of change of t…
APA, Harvard, Vancouver, ISO, and other styles
5

莊競誠 and King-sing Chong. "Explorations in Markov processes." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31235682.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

James, Huw William. "Transient Markov decision processes." Thesis, University of Bristol, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.430192.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ku, Ho Ming. "Interacting Markov branching processes." Thesis, University of Liverpool, 2014. http://livrepository.liverpool.ac.uk/2002759/.

Full text
Abstract:
In engineering, biology and physics, many systems consist of particles or members that give birth and die over time. These systems can be modelled by continuous-time Markov chains and Markov processes. Applications of Markov processes have been investigated by many scientists, for example Jagers [1975]. In ordinary Markov branching processes, each particle or member is assumed to be identical and independent. However, in some cases, any two members of the species may interact/collide to give new birth. To cover these cases, we need some more general processes. We may use collisi…
APA, Harvard, Vancouver, ISO, and other styles
8

Chong, King-sing. "Explorations in Markov processes /." Hong Kong : University of Hong Kong, 1997. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18736105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pötzelberger, Klaus. "On the Approximation of finite Markov-exchangeable processes by mixtures of Markov Processes." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1991. http://epub.wu.ac.at/526/1/document.pdf.

Full text
Abstract:
We give an upper bound for the norm distance of (0,1)-valued Markov-exchangeable random variables to mixtures of distributions of Markov processes. A Markov-exchangeable random variable has a distribution that depends only on the starting value and the number of transitions 0-0, 0-1, 1-0 and 1-1. We show that if, for increasing length of variables, the norm distance to mixtures of Markov processes goes to 0, the rate of this convergence may be arbitrarily slow. (author's abstract) Series: Forschungsberichte / Institut für Statistik
APA, Harvard, Vancouver, ISO, and other styles
10

Ferns, Norman Francis. "Metrics for Markov decision processes." Thesis, McGill University, 2003. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=80263.

Full text
Abstract:
We present a class of metrics, defined on the state space of a finite Markov decision process (MDP), each of which is sound with respect to stochastic bisimulation, a notion of MDP state equivalence derived from the theory of concurrent processes. Such metrics are based on similar metrics developed in the context of labelled Markov processes, and like those, are suitable for state space aggregation. Furthermore, we restrict our attention to a subset of this class that is appropriate for certain reinforcement learning (RL) tasks, specifically, infinite horizon tasks with an expected tota
APA, Harvard, Vancouver, ISO, and other styles
11

Chaput, Philippe. "Approximating Markov processes by averaging." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66654.

Full text
Abstract:
We recast the theory of labelled Markov processes in a new setting, in a way "dual" to the usual point of view. Instead of considering state transitions as a collection of subprobability distributions on the state space, we view them as transformers of real-valued functions. By generalizing the operation of conditional expectation, we build a category consisting of labelled Markov processes viewed as a collection of operators; the arrows of this category behave as projections on a smaller state space. We define a notion of equivalence for such processes, called bisimulation,
APA, Harvard, Vancouver, ISO, and other styles
12

Baxter, Martin William. "Discounted functionals of Markov processes." Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Furloni, Walter. "Controle em horizonte finito com restriçoes de sistemas lineares discretos com saltos markovianos." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259271.

Full text
Abstract:
Advisor: João Bosco Ribeiro do Val. Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação. Abstract: The objective of this work is to propose and solve the constrained finite-horizon control problem for discrete-time linear systems with Markovian jumps (SLDSM) in the presence of noise. The constraints on the state and control vectors are not…
APA, Harvard, Vancouver, ISO, and other styles
14

Pinheiro, Maicon Aparecido. "Processos pontuais no modelo de Guiol-Machado-Schinazi de sobrevivência de espécies." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-01062016-191528/.

Full text
Abstract:
Recently, Guiol, Machado and Schinazi proposed a stochastic model for the evolution of species. In this model, the intensities of births of new species and of occurrences of extinctions are invariant over time. Moreover, at the moment a new species is born, it is labelled with a random number drawn from an absolutely continuous distribution. Every time an extinction occurs, only one species dies - the one with the smallest attached number. When the intensity at which new species appear is greater than that at which extinctions occur, there is a critical value f_c such that…
APA, Harvard, Vancouver, ISO, and other styles
15

Pra, Paolo Dai, Pierre-Yves Louis, and Ida G. Minelli. "Complete monotone coupling for Markov processes." Universität Potsdam, 2008. http://opus.kobv.de/ubp/volltexte/2008/1828/.

Full text
Abstract:
We formalize and analyze the notions of monotonicity and complete monotonicity for Markov chains in continuous time, taking values in a finite partially ordered set. Similarly to what happens in discrete time, the two notions are not equivalent. However, we show that there are partially ordered sets for which monotonicity and complete monotonicity coincide in continuous time but not in discrete time.
APA, Harvard, Vancouver, ISO, and other styles
16

De, Stavola Bianca Lucia. "Multistate Markov processes with incomplete information." Thesis, Imperial College London, 1985. http://hdl.handle.net/10044/1/37672.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Carpio, Kristine Joy Espiritu, and kjecarpio@lycos com. "Long-Range Dependence of Markov Processes." The Australian National University. School of Mathematical Sciences, 2006. http://thesis.anu.edu.au./public/adt-ANU20061024.131933.

Full text
Abstract:
Long-range dependence in discrete- and continuous-time Markov chains over a countable state space is defined via embedded renewal processes brought about by visits to a fixed state. In the discrete-time chain, solidarity properties are obtained and long-range dependence of functionals is examined. For continuous-time chains, long-range dependence is defined via the number of visits in a given time interval. Long-range dependence of Markov chains over a non-countable state space is also studied through positive Harris chains. Embedded renewal processes in these chains exist via…
APA, Harvard, Vancouver, ISO, and other styles
18

Castro, Rivadeneira Pablo Samuel. "Bayesian exploration in Markov decision processes." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=18479.

Full text
Abstract:
Markov Decision Processes are a mathematical framework widely used for stochastic optimization and control problems. Reinforcement Learning is a branch of Artificial Intelligence that deals with stochastic environments where the dynamics of the system are unknown. A major issue for learning algorithms is the need to balance the amount of exploration of new experiences with the exploitation of existing knowledge. We present three methods for dealing with this exploration-exploitation tradeoff for Markov Decision Processes. The approach taken is Bayesian, in that we use and maintain a model e
APA, Harvard, Vancouver, ISO, and other styles
19

Propp, Michael Benjamin. "The thermodynamic properties of Markov processes." Thesis, Massachusetts Institute of Technology, 1985. http://hdl.handle.net/1721.1/17193.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1985. Microfiche copy available in Archives and Engineering. Includes glossary. Bibliography: leaves 87-91. By Michael Benjamin Propp. Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
20

Korpas, Agata K. "Occupation Times of Continuous Markov Processes." Bowling Green State University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1151347146.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Chu, Shanyun. "Some contributions to Markov decision processes." Thesis, University of Liverpool, 2015. http://livrepository.liverpool.ac.uk/2038000/.

Full text
Abstract:
In a nutshell, this thesis studies discrete-time Markov decision processes (MDPs) on Borel Spaces, with possibly unbounded costs, and both expected (discounted) total cost and long-run expected average cost criteria. In Chapter 2, we systematically investigate a constrained absorbing MDP with expected total cost criterion and possibly unbounded (from both above and below) cost functions. We apply the convex analytic approach to derive the optimality and duality results, along with the existence of an optimal finite mixing policy. We also provide mild conditions under which a general constraine
APA, Harvard, Vancouver, ISO, and other styles
22

Carpio, Kristine Joy Espiritu. "Long-range dependence of Markov processes /." View thesis entry in Australian Digital Theses Program, 2006. http://thesis.anu.edu.au/public/adt-ANU20061024.131933/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

LIKA, ADA. "MARKOV PROCESSES IN FINANCE AND INSURANCE." Doctoral thesis, Università degli Studi di Cagliari, 2017. http://hdl.handle.net/11584/249618.

Full text
Abstract:
In this thesis we tried to take one more step in the application of Markov processes in the actuarial and financial fields. Two main problems have been dealt with. The first one regarded the application of a Markov process to describe the salary lines of participants in an Italian pension scheme of the first pillar. A semi-Markov process with backward recurrence time was proposed. A statistical test was applied in order to determine whether the null hypothesis of a geometric distribution of the waiting times of the process should be accepted or not. The test showed that the null hypo…
APA, Harvard, Vancouver, ISO, and other styles
24

Durrell, Fernando. "Constrained portfolio selection with Markov and non-Markov processes and insiders." Doctoral thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/4379.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Wright, James M. "Stable processes with opposing drifts /." Thesis, Connect to this title online; UW restricted, 1996. http://hdl.handle.net/1773/5807.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Werner, Ivan. "Contractive Markov systems." Thesis, University of St Andrews, 2004. http://hdl.handle.net/10023/15173.

Full text
Abstract:
We introduce a theory of contractive Markov systems (CMS) which provides a unifying framework in so-called "fractal" geometry. It extends the known theory of iterated function systems (IFS) with place-dependent probabilities [1][8] in a way that also covers graph-directed constructions of "fractal" sets [18]. Such systems naturally extend finite Markov chains and inherit some of their properties. In Chapter 1, we consider iterations of a Markov system and show that they preserve its essential structure. In Chapter 2, we show that the Markov operator defined by such a system has a uniq…
APA, Harvard, Vancouver, ISO, and other styles
27

Elsayad, Amr Lotfy. "Numerical solution of Markov Chains." CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2056.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Bartholme, Carine. "Self-similarity and exponential functionals of Lévy processes." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209256.

Full text
Abstract:
This thesis covers two main research themes, which are presented in two parts and preceded by a common prolegomenon. In the latter we introduce the essential concepts and also exploit the link between the two parts. In the first part, the main object of interest is the so-called exponential functional of Lévy processes. The law of this random variable plays a primordial role in many diverse domains, both theoretical and applied. Doney derives a factorisation of the arc-sine law in terms of su…
APA, Harvard, Vancouver, ISO, and other styles
29

Dendievel, Sarah. "Skip-free markov processes: analysis of regular perturbations." Doctoral thesis, Universite Libre de Bruxelles, 2015. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209050.

Full text
Abstract:
A Markov process is defined by its transition matrix. A skip-free Markov process is a stochastic system defined by a level that can only change by one unit, either upwards or downwards. A regular perturbation is a modification of one or more parameters that is small enough not to change the model qualitatively. This thesis focuses on a category of methods, called matrix analytic methods, that has gained much interest because of its good computational properties for the analysis of a large family of stochastic processes. Those methods are used in this work in order i) to analyze the eff…
APA, Harvard, Vancouver, ISO, and other styles
30

Manstavicius, Martynas. "The p-variation of strong Markov processes /." Thesis, Connect to Dissertations & Theses @ Tufts University, 2003.

Find full text
Abstract:
Thesis (Ph.D.)--Tufts University, 2003. Advisers: Richard M. Dudley; Marjorie G. Hahn. Submitted to the Dept. of Mathematics. Includes bibliographical references (leaves 109-113). Access restricted to members of the Tufts University community. Also available via the World Wide Web.
APA, Harvard, Vancouver, ISO, and other styles
31

葉錦元 and Kam-yuen William Yip. "Simulation and inference of aggregated Markov processes." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31977546.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Yip, Kam-yuen William. "Simulation and inference of aggregated Markov processes." [Hong Kong : University of Hong Kong], 1994. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13787391.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Patrascu, Relu-Eugen. "Linear approximations from factored Markov Decision Processes." Waterloo, Ont. : University of Waterloo, 2004. http://etd.uwaterloo.ca/etd/rpatrasc2004.pdf.

Full text
Abstract:
Thesis (Ph.D.)--University of Waterloo, 2004. "A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Doctor of Philosophy in Computer Science". Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
34

Patrascu, Relu-Eugen. "Linear Approximations For Factored Markov Decision Processes." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/1171.

Full text
Abstract:
A Markov Decision Process (MDP) is a model employed to describe problems in which a decision must be made at each one of several stages, while receiving feedback from the environment. This type of model has been extensively studied in the operations research community and fundamental algorithms have been developed to solve associated problems. However, these algorithms are quite inefficient for very large problems, leading to a need for alternatives; since MDP problems are provably hard on compressed representations, one becomes content even with algorithms which may perform well at leas
APA, Harvard, Vancouver, ISO, and other styles
35

Cheng, Hsien-Te. "Algorithms for partially observable Markov decision processes." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/29073.

Full text
Abstract:
The thesis develops methods to solve discrete-time finite-state partially observable Markov decision processes. For the infinite horizon problem, only the discounted reward case is considered. Several new algorithms for the finite horizon and infinite horizon problems are developed. For the finite horizon problem, two new algorithms are developed. The first is called the relaxed region algorithm. For each support in the value function, this algorithm determines a region not smaller than its support region and modifies it implicitly in later steps until the exact support region is fo…
APA, Harvard, Vancouver, ISO, and other styles
36

Mundt, André Philipp. "Dynamic risk management with Markov decision processes." Karlsruhe Univ.-Verl. Karlsruhe, 2007. http://d-nb.info/987216511/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Mundt, André Philipp. "Dynamic risk management with Markov decision processes." Karlsruhe, Baden : Universitätsverl. Karlsruhe, 2008. http://www.uvka.de/univerlag/volltexte/2008/294/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Saeedi, Ardavan. "Nonparametric Bayesian models for Markov jump processes." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42963.

Full text
Abstract:
Markov jump processes (MJPs) have been used as models in various fields such as disease progression, phylogenetic trees, and communication networks. The main motivation behind this thesis is the application of MJPs to data modeled as having complex latent structure. In this thesis we propose a nonparametric prior, the gamma-exponential process (GEP), over MJPs. Nonparametric Bayesian models have recently attracted much attention in the statistics community, due to their flexibility, adaptability, and usefulness in analyzing complex real world datasets. The GEP is a prior over infinite rate mat
APA, Harvard, Vancouver, ISO, and other styles
39

Huang, Wenzong. "Spatial queueing systems and reversible markov processes." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/24871.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Paduraru, Cosmin. "Off-policy evaluation in Markov decision processes." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=117008.

Full text
Abstract:
This dissertation is set in the context of a widely used framework for formalizing autonomous decision-making, namely Markov decision processes (MDPs). One of the key problems that arise in MDPs is that of evaluating a decision-making strategy, typically called a policy. It is often the case that data collected under the policy one wishes to evaluate is difficult or even impossible to obtain. In this case, data collected under some other policy needs to be used, a setting known as off-policy evaluation. The main goal of this dissertation is to offer new insights into the properties of methods
APA, Harvard, Vancouver, ISO, and other styles
41

Dai, Peng. "FASTER DYNAMIC PROGRAMMING FOR MARKOV DECISION PROCESSES." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/428.

Full text
Abstract:
Markov decision processes (MDPs) are a general framework used by Artificial Intelligence (AI) researchers to model decision theoretic planning problems. Solving real world MDPs has been a major and challenging research topic in the AI literature. This paper discusses two main groups of approaches in solving MDPs. The first group of approaches combines the strategies of heuristic search and dynamic programming to expedite the convergence process. The second makes use of graphical structures in MDPs to decrease the effort of classic dynamic programming algorithms. Two new algorithms proposed by
APA, Harvard, Vancouver, ISO, and other styles
42

Black, Mary. "Applying Markov decision processes in asset management." Thesis, University of Salford, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.400817.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Nieto-Barajas, Luis E. "Bayesian nonparametric survival analysis via Markov processes." Thesis, University of Bath, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Marbach, Peter 1966. "Simulation-based optimization of Markov decision processes." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9660.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (p. 127-129). Markov decision processes have been a popular paradigm for sequential decision making under uncertainty. Dynamic programming provides a framework for studying such problems, as well as for devising algorithms to compute an optimal control policy. Dynamic programming methods rely on a suitably defined value function that has to be computed for every state in the state space. However, many interesting problems involve very lar…
APA, Harvard, Vancouver, ISO, and other styles
45

Winder, Lee F. (Lee Francis) 1973. "Hazard avoidance alerting with Markov decision processes." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28860.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2004. Includes bibliographical references (p. 123-125). This thesis describes an approach to designing hazard avoidance alerting systems based on a Markov decision process (MDP) model of the alerting process, and shows its benefits over standard design methods. One benefit of the MDP method is that it accounts for future decision opportunities when choosing whether or not to a… (cont.) (incident rate and unnecessary alert rate), the MDP-based logic can meet or exceed that of alternate logics.
APA, Harvard, Vancouver, ISO, and other styles
46

Vera, Ruiz Victor. "Recoding of Markov Processes in Phylogenetic Models." Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/13433.

Full text
Abstract:
Under a Markov model of evolution, lumping the state space (S) into fewer groups has been historically used to focus on specific types of substitutions or to reduce compositional heterogeneity and saturation. However, working with reduced state spaces (S’) may yield misleading results unless the Markovian property is kept. A Markov process X(t) is lumpable if the reduced process X’(t) of S’ is Markovian. The aim of this Thesis is to develop a test able to detect if a given X(t) is lumpable with respect to a given S’. This test should allow flexibility to any possible non-trivial S’ and should
APA, Harvard, Vancouver, ISO, and other styles
47

Yu, Huizhen Ph D. Massachusetts Institute of Technology. "Approximate solution methods for partially observable Markov and semi-Markov decision processes." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35299.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 165-169). We consider approximation methods for discrete-time infinite-horizon partially observable Markov and semi-Markov decision processes (POMDP and POSMDP). One of the main contributions of this thesis is a lower cost approximation method for finite-space POMDPs with the average cos…
APA, Harvard, Vancouver, ISO, and other styles
48

Ciolek, Gabriela. "Bootstrap and uniform bounds for Harris Markov chains." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT024/document.

Full text
Abstract:
This thesis focuses on certain extensions of empirical process theory when the data are Markovian. More specifically, we concentrate on several developments of bootstrap theory, robustness and statistical learning in a positive recurrent Harris Markov framework. Our approach relies on the regeneration method, which rests on the decomposition of a trajectory of the regenerative atomic Markov chain into blocks of independent and identically distributed (i.i.d.) observations. The regeneration blocks correspond to…
APA, Harvard, Vancouver, ISO, and other styles
49

Hahn, Léo. "Interacting run-and-tumble particles as piecewise deterministic Markov processes : invariant distribution and convergence." Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0084.

Full text
Abstract:
This thesis studies the long-time behaviour of run-and-tumble particles (RTPs), a model for bacteria in non-equilibrium statistical physics, using piecewise deterministic Markov processes (PDMPs). The motivation is to improve the particle-level understanding of active phenomena, in particular motility-induced phase separation (MIPS). The invariant measure for two RTPs with jamming on a 1D torus is determined for general tumble and jamming mechanisms, revealing two non-equilibrium universality classes. Moreover, the dependence of the…
APA, Harvard, Vancouver, ISO, and other styles
50

Wortman, M. A. "Vacation queues with Markov schedules." Diss., Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/54468.

Full text
Abstract:
Vacation systems represent an important class of queueing models having application in both computer communication systems and integrated manufacturing systems. By specifying an appropriate server scheduling discipline, vacation systems are easily particularized to model many practical situations where the server's effort is divided between primary and secondary customers. A general stochastic framework that subsumes a wide variety of server scheduling disciplines for the M/GI/1/L vacation system is developed. Here, a class of server scheduling disciplines, called Markov schedules, is introdu
APA, Harvard, Vancouver, ISO, and other styles