Academic literature on the topic 'Branching Markov chains'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Branching Markov chains.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Branching Markov chains"

1

Müller, Sebastian. "Recurrence for branching Markov chains." Electronic Communications in Probability 13 (2008): 576–605. http://dx.doi.org/10.1214/ecp.v13-1424.

2

Baier, Christel, Joost-Pieter Katoen, Holger Hermanns, and Verena Wolf. "Comparative branching-time semantics for Markov chains." Information and Computation 200, no. 2 (2005): 149–214. http://dx.doi.org/10.1016/j.ic.2005.03.001.

3

Schinazi, Rinaldo. "On multiple phase transitions for branching Markov chains." Journal of Statistical Physics 71, no. 3-4 (1993): 507–11. http://dx.doi.org/10.1007/bf01058434.

4

Athreya, Krishna B., and Hye-Jeong Kang. "Some limit theorems for positive recurrent branching Markov chains: I." Advances in Applied Probability 30, no. 3 (1998): 693–710. http://dx.doi.org/10.1239/aap/1035228124.

Abstract:
In this paper we consider a Galton-Watson process whose particles move according to a Markov chain with discrete state space. The Markov chain is assumed to be positive recurrent. We prove a law of large numbers for the empirical position distribution and also discuss the large deviation aspects of this convergence.
5

Athreya, Krishna B., and Hye-Jeong Kang. "Some limit theorems for positive recurrent branching Markov chains: I." Advances in Applied Probability 30, no. 3 (1998): 693–710. http://dx.doi.org/10.1017/s0001867800008557.

Abstract:
In this paper we consider a Galton-Watson process whose particles move according to a Markov chain with discrete state space. The Markov chain is assumed to be positive recurrent. We prove a law of large numbers for the empirical position distribution and also discuss the large deviation aspects of this convergence.
6

Liu, Yuanyuan, Hanjun Zhang, and Yiqiang Zhao. "Computable strongly ergodic rates of convergence for continuous-time Markov chains." ANZIAM Journal 49, no. 4 (2008): 463–78. http://dx.doi.org/10.1017/s1446181108000114.

Abstract:
In this paper, we investigate computable lower bounds for the best strongly ergodic rate of convergence of the transient probability distribution to the stationary distribution for stochastically monotone continuous-time Markov chains and reversible continuous-time Markov chains, using a drift function and the expectation of the first hitting time on some state. We apply these results to birth–death processes, branching processes and population processes.
7

Bacci, Giorgio, Giovanni Bacci, Kim G. Larsen, and Radu Mardare. "Converging from branching to linear metrics on Markov chains." Mathematical Structures in Computer Science 29, no. 1 (2017): 3–37. http://dx.doi.org/10.1017/s0960129517000160.

Abstract:
We study two well-known linear-time metrics on Markov chains (MCs), namely, the strong and stutter trace distances. Our interest in these metrics is motivated by their relation to the probabilistic linear temporal logic (LTL) model-checking problem: we prove that they correspond to the maximal differences in the probability of satisfying the same LTL and LTL−X (LTL without the next operator) formulas, respectively. The threshold problem for these distances (whether their value exceeds a given threshold) is NP-hard and not known to be decidable. Nevertheless, we provide an approximation schema where…
8

Huang, Ying, and Arthur F. Veinott. "Markov Branching Decision Chains with Interest-Rate-Dependent Rewards." Probability in the Engineering and Informational Sciences 9, no. 1 (1995): 99–121. http://dx.doi.org/10.1017/s0269964800003715.

Abstract:
Finite-state-and-action Markov branching decision chains are studied with bounded endogenous expected population sizes and interest-rate-dependent one-period rewards that are analytic in the interest rate at zero. The existence of a stationary strong-maximum-present-value policy is established. Miller and Veinott's [1969] strong policy-improvement method is generalized to find in finite time a stationary n-present-value optimal policy and, when the one-period rewards are rational in the interest rate, a stationary strong-maximum-present-value policy. This extends previous studies of Blackwell…
9

Hu, Dihe. "Infinitely dimensional control Markov branching chains in random environments." Science in China Series A 49, no. 1 (2006): 27–53. http://dx.doi.org/10.1007/s11425-005-0024-2.

10

Cox, J. T. "On the ergodic theory of critical branching Markov chains." Stochastic Processes and their Applications 50, no. 1 (1994): 1–20. http://dx.doi.org/10.1016/0304-4149(94)90144-9.

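Several of the articles above (notably the Athreya–Kang papers) study Galton-Watson processes whose particles move according to a positive recurrent Markov chain on a discrete state space, and prove a law of large numbers for the empirical position distribution. As a rough illustration of that model, here is a minimal discrete-time simulation sketch; the transition matrix, offspring law, and function names below are invented for illustration, not taken from any of the cited works:

```python
import random
from collections import Counter

def simulate_branching_markov_chain(P, offspring, start, generations, rng):
    """Galton-Watson process whose particles carry a position on a finite
    state space: each generation, every particle first jumps according to
    the transition matrix P, then is replaced by a random number of
    children at its new position."""
    population = [start]
    for _ in range(generations):
        next_gen = []
        for state in population:
            new_state = rng.choices(range(len(P)), weights=P[state])[0]
            next_gen.extend([new_state] * offspring(rng))
        population = next_gen
        if not population:  # the line went extinct
            break
    return population

rng = random.Random(0)
# Illustrative two-state chain; its stationary distribution is (2/3, 1/3).
P = [[0.8, 0.2], [0.4, 0.6]]
offspring = lambda r: r.choice([1, 1, 2])  # supercritical: mean 4/3 children

final = simulate_branching_markov_chain(P, offspring, start=0, generations=12, rng=rng)
if final:
    freq0 = Counter(final)[0] / len(final)
    print(f"particles alive: {len(final)}, empirical frequency of state 0: {freq0:.3f}")
```

In the positive recurrent setting the Athreya–Kang results say that, on survival, the empirical frequency of each state should approach the chain's stationary probability as the number of generations grows.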

Dissertations / Theses on the topic "Branching Markov chains"

1

Nordvall Lagerås, Andreas. "Markov Chains, Renewal, Branching and Coalescent Processes: Four Topics in Probability Theory." Doctoral thesis, Stockholm University, Department of Mathematics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-6637.

Abstract:
This thesis consists of four papers.

In paper 1, we prove central limit theorems for Markov chains under (local) contraction conditions. As a corollary we obtain a central limit theorem for Markov chains associated with iterated function systems with contractive maps and place-dependent Dini-continuous probabilities.

In paper 2, properties of inverse subordinators are investigated, in particular similarities with renewal processes. The main tool is a theorem on processes that are both renewal and Cox processes.

In paper 3, distributional properties of supercritical and esp…
2

Nordvall Lagerås, Andreas. "Markov chains, renewal, branching and coalescent processes: four topics in probability theory." Stockholm: Department of Mathematics, Stockholm University, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-6637.

3

Adam, Etienne. "Persistance et vitesse d'extinction pour des modèles de populations stochastiques multitypes en temps discret." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX019/document.

Abstract:
This thesis is devoted to the mathematical study of stochastic models of structured population dynamics. In the first chapter, we introduce a discrete-time stochastic model that takes into account the various possible interactions between individuals: competition, migration, mutation, or predation. We first prove a "law of large numbers"-type result: if the initial population tends to infinity, then, over a finite time interval, the stochastic process converges in probability to a deterministic…
4

Pham, Thi Da Cam. "Théorèmes limite pour un processus de Galton-Watson multi-type en environnement aléatoire indépendant." Thesis, Tours, 2018. http://www.theses.fr/2018TOUR4005/document.

Abstract:
The theory of multi-type branching processes in an i.i.d. environment is considerably less developed than the univariate case, and the fundamental questions have not been fully resolved to date. The answers require a deep understanding of the behaviour of products of i.i.d. matrices with non-negative entries. Under fairly general assumptions, and when the probability generating functions of the reproduction laws are "linear fractional", we show that the survival probability at time n of the multi-type branching process in random environment…
5

Weibel, Julien. "Graphons de probabilités, limites de graphes pondérés aléatoires et chaînes de Markov branchantes cachées." Electronic Thesis or Diss., Orléans, 2024. http://www.theses.fr/2024ORLE1031.

Abstract:
Graphs are mathematical objects used to model all kinds of networks, such as electrical networks, communication networks, and social networks. Formally, a graph consists of a set of vertices and a set of edges connecting pairs of vertices. The vertices represent, for example, individuals, while the edges represent the interactions between those individuals. In a weighted graph, each edge carries a weight or a decoration that can model a distance, an interaction intensity, or a resistance. Modelling networks…
6

Razetti, Agustina. "Modélisation et caractérisation de la croissance des axones à partir de données in vivo." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4016/document.

Abstract:
How the brain and its connections are built during development remains an open question in the scientific community. Fruitful efforts have been made to elucidate the mechanisms of axonal growth, such as axon guidance and guidance molecules. However, recent evidence suggests that other actors are involved in neuronal growth in vivo. In particular, axons develop in mechanically constrained environments. Thus, to fully understand this dynamic process, one must take into account collective mechanisms and…
7

Ye, Yinna. "PROBABILITÉ DE SURVIE D'UN PROCESSUS DE BRANCHEMENT DANS UN ENVIRONNEMENT ALÉATOIRE MARKOVIEN." Phd thesis, Université François Rabelais - Tours, 2011. http://tel.archives-ouvertes.fr/tel-00605751.

Abstract:
The purpose of this thesis is to study the survival probability of a branching process in a Markovian random environment and to extend to this setting the results known for independent and identically distributed random environments. The core of the study relies on local limit theorems for a centred random walk (Sn)n≥0 on R with Markovian increments and for (mn)n≥0, where mn = min(0, S1, …, Sn). To handle the case of a Markovian random environment, we first develop local limit theorems for a real-valued semi-Markov chain…
8

Olivier, Adelaïde. "Analyse statistique des modèles de croissance-fragmentation." Thesis, Paris 9, 2015. http://www.theses.fr/2015PA090047/document.

Abstract:
This theoretical study is designed in close connection with a field of application: modelling the growth of a population of cells that divide according to an unknown division rate, a function of a so-called structuring variable, with the age and the size of the cells being the two paradigmatic examples studied. The relevant mathematical field lies at the interface of statistics of stochastic processes, non-parametric estimation, and the analysis of partial differential equations. The three objectives of this work are the following: to reconstruct the division rate (as a function of age…

Book chapters on the topic "Branching Markov chains"

1

Krell, Nathalie. "Self-Similar Branching Markov Chains." In Lecture Notes in Mathematics. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01763-6_10.

2

Dynkin, E. B. "Branching Exit Markov Systems and their Applications to Partial Differential Equations." In Markov Processes and Controlled Markov Chains. Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_1.

3

Qin, Guangping, and Jinzhao Wu. "Branching Time Equivalences for Interactive Markov Chains." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30233-9_12.

4

Baier, Christel, Holger Hermanns, Joost-Pieter Katoen, and Verena Wolf. "Comparative Branching-Time Semantics for Markov Chains." In CONCUR 2003 - Concurrency Theory. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45187-7_32.

5

Bacci, Giorgio, Giovanni Bacci, Kim G. Larsen, and Radu Mardare. "Converging from Branching to Linear Metrics on Markov Chains." In Theoretical Aspects of Computing - ICTAC 2015. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25150-9_21.

6

Arora, Shiraj, and M. V. Panduranga Rao. "Model Checking Branching Time Properties for Incomplete Markov Chains." In Model Checking Software. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30923-7_2.

7

Hahn, Ernst Moritz, Mateo Perez, Sven Schewe, Fabio Somenzi, Ashutosh Trivedi, and Dominik Wojtczak. "Model-Free Reinforcement Learning for Branching Markov Decision Processes." In Computer Aided Verification. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81688-9_30.

Abstract:
We study reinforcement learning for the optimal control of Branching Markov Decision Processes (BMDPs), a natural extension of (multitype) Branching Markov Chains (BMCs). The state of a (discrete-time) BMC is a collection of entities of various types that, while spawning other entities, generate a payoff. In comparison with BMCs, where the evolution of each entity of the same type follows the same probabilistic pattern, BMDPs allow an external controller to pick from a range of options. This permits us to study the best/worst behaviour of the system. We generalise model-free reinforcement…
8

Grimmett, Geoffrey R., and David R. Stirzaker. "Markov chains." In Probability and Random Processes. Oxford University PressOxford, 2001. http://dx.doi.org/10.1093/oso/9780198572237.003.0006.

Abstract:
A Markov chain is a random process with the property that, conditional on its present value, the future is independent of the past. The Chapman–Kolmogorov equations are derived, and used to explore the persistence and transience of states. Stationary distributions are studied at length, and the ergodic theorem for irreducible chains is proved using coupling. The reversibility of Markov chains is discussed. After a section devoted to branching processes, the theory of Poisson processes and birth–death processes is considered in depth, and the theory of continuous-time chains is sketched…
9

Grimmett, Geoffrey R., and David R. Stirzaker. "Renewals." In Probability and Random Processes. Oxford University PressOxford, 2001. http://dx.doi.org/10.1093/oso/9780198572237.003.0010.

Abstract:
A renewal process is a recurrent-event process with independent, identically distributed interevent times. The asymptotic behaviour of a renewal process is described by the renewal theorem and the elementary renewal theorem, and the key renewal theorem is often useful. The waiting-time paradox leads to a discussion of excess and current lifetimes, and their asymptotic distributions are found. Other renewal-type processes are studied, including alternating and delayed renewal processes, and the use of renewal theory is illustrated in applications to Markov chains and age-dependent branching processes…
10

Grenander, Ulf, and Michael I. Miller. "Probabilistic Directed Acyclic Graphs and Their Entropies." In Pattern Theory. Oxford University Press, 2006. http://dx.doi.org/10.1093/oso/9780198505709.003.0004.

Abstract:
Probabilistic structures on the representations allow for expressing the variation of natural patterns. In this chapter the structure imposed through probabilistic directed graphs is studied. The essential probabilistic structure enforced through the directedness of the graphs is that sites are conditionally independent of their non-descendants given their parents. The entropies and combinatorics of these processes are examined as well. Focus is given to the classical Markov chain and the branching process examples to illustrate the fundamentals of variability descriptions through probability and entropy…
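The Grimmett–Stirzaker chapter above surveys the core Markov chain toolkit that the rest of this list builds on: the Chapman–Kolmogorov equations, stationary distributions, and reversibility. A minimal numerical sketch of those three notions follows; the 3×3 birth–death transition matrix is an illustrative example of my choosing, not taken from the book:

```python
import numpy as np

# Illustrative transition matrix of a birth-death chain on {0, 1, 2}.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# Stationary distribution: the left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()  # normalise (this also fixes the eigenvector's sign)

# Chapman-Kolmogorov: P^(m+n) = P^m P^n.
assert np.allclose(np.linalg.matrix_power(P, 5),
                   np.linalg.matrix_power(P, 2) @ np.linalg.matrix_power(P, 3))

# Detailed balance pi_i P_ij = pi_j P_ji: birth-death chains are reversible.
flow = pi[:, None] * P
assert np.allclose(flow, flow.T)

print("stationary distribution:", np.round(pi, 3))  # (0.25, 0.5, 0.25)
```

For this matrix, detailed balance gives pi = (1/4, 1/2, 1/4) by hand, which the eigenvector computation reproduces.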

Conference papers on the topic "Branching Markov chains"

1

Xia, Ning, Aishuang Li, Guizhi Zhu, Xiaoguo Niu, Chunsheng Hou, and Yangying Gan. "Study of Branching Responses of One Year Old Branches of Apple Trees to Heading Using Hidden Semi-Markov Chains." In 2009 Third International Symposium on Plant Growth Modeling, Simulation, Visualization and Applications (PMA). IEEE, 2009. http://dx.doi.org/10.1109/pma.2009.10.

2

Chen, Yan Hua, Qian Zhang, Bao Guo Li, and Bao Gui Zhang. "Characterizing Wheat Root Branching Using a Markov Chain Approach." In 2006 International Symposium on Plant Growth Modeling, Simulation, Visualization and Applications (PMA). IEEE, 2006. http://dx.doi.org/10.1109/pma.2006.31.
