Academic literature on the topic 'Convergence de processus de Markov'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Convergence de processus de Markov.'


Journal articles on the topic "Convergence de processus de Markov"

1

Abakuks, A., S. N. Ethier, and T. G. Kurtz. "Markov Processes: Characterization and Convergence." Biometrics 43, no. 2 (1987): 484. http://dx.doi.org/10.2307/2531839.

2

Perkins, Edwin, S. N. Ethier, and T. G. Kurtz. "Markov Processes, Characterization and Convergence." Journal of the Royal Statistical Society. Series A (Statistics in Society) 151, no. 2 (1988): 367. http://dx.doi.org/10.2307/2982773.

3

Amdaoud, Mounir, and Nadine Levratto. "Territoires d’industrie : hétérogénéité et convergence." Revue d'économie industrielle 181-182, no. 1 (2024): 199–229. https://doi.org/10.3917/rei.181.0199.

Abstract:
The aim of this article is to explore the nature of employment dynamics in the French "territoires d'industrie" (industrial territories) over the period 2014 to 2021. Two complementary methods of investigation are deployed. The first relies on a Markov chain model to analyse the relative growth of the industrial territories and to measure movements in the distribution of employment rates. The second, more recent, tests the club-convergence hypothesis; it complements the first by specifying and identifying each of the territories undergoing mobility within the dist…
4

Swishchuk, Anatoliy, and M. Shafiqul Islam. "Diffusion Approximations of the Geometric Markov Renewal Processes and Option Price Formulas." International Journal of Stochastic Analysis 2010 (December 19, 2010): 1–21. http://dx.doi.org/10.1155/2010/347105.

Abstract:
We consider the geometric Markov renewal processes as a model for a security market and study this processes in a diffusion approximation scheme. Weak convergence analysis and rates of convergence of ergodic geometric Markov renewal processes in diffusion scheme are presented. We present European call option pricing formulas in the case of ergodic, double-averaged, and merged diffusion geometric Markov renewal processes.
5

Aldous, David J. "Book Review: Markov processes: Characterization and convergence." Bulletin of the American Mathematical Society 16, no. 2 (1987): 315–19. http://dx.doi.org/10.1090/s0273-0979-1987-15533-9.

6

Deng, Chang-Song, René L. Schilling, and Yan-Hong Song. "Subgeometric rates of convergence for Markov processes under subordination." Advances in Applied Probability 49, no. 1 (2017): 162–81. http://dx.doi.org/10.1017/apr.2016.83.

Abstract:
Abstract We are interested in the rate of convergence of a subordinate Markov process to its invariant measure. Given a subordinator and the corresponding Bernstein function (Laplace exponent), we characterize the convergence rate of the subordinate Markov process; the key ingredients are the rate of convergence of the original process and the (inverse of the) Bernstein function. At a technical level, the crucial point is to bound three types of moment (subexponential, algebraic, and logarithmic) for subordinators as time t tends to ∞. We also discuss some concrete models and we show that subo
7

Macci, Claudio. "Continuous-time Markov additive processes: Composition of large deviations principles and comparison between exponential rates of convergence." Journal of Applied Probability 38, no. 4 (2001): 917–31. http://dx.doi.org/10.1239/jap/1011994182.

Abstract:
We consider a continuous-time Markov additive process (Jt,St) with (Jt) an irreducible Markov chain on E = {1,…,s}; it is known that (St/t) satisfies the large deviations principle as t → ∞. In this paper we present a variational formula H for the rate function κ∗ and, in some sense, we have a composition of two large deviations principles. Moreover, under suitable hypotheses, we can consider two other continuous-time Markov additive processes derived from (Jt,St): the averaged parameters model (Jt,St(A)) and the fluid model (Jt,St(F)). Then some results of convergence are presented and the va
8

Franz, Uwe, Volkmar Liebscher, and Stefan Zeiser. "Piecewise-Deterministic Markov Processes as Limits of Markov Jump Processes." Advances in Applied Probability 44, no. 3 (2012): 729–48. http://dx.doi.org/10.1239/aap/1346955262.

Abstract:
A classical result about Markov jump processes states that a certain class of dynamical systems given by ordinary differential equations are obtained as the limit of a sequence of scaled Markov jump processes. This approach fails if the scaling cannot be carried out equally across all entities. In the present paper we present a convergence theorem for such an unequal scaling. In contrast to an equal scaling the limit process is not purely deterministic but still possesses randomness. We show that these processes constitute a rich subclass of piecewise-deterministic processes. Such processes ap
9

Franz, Uwe, Volkmar Liebscher, and Stefan Zeiser. "Piecewise-Deterministic Markov Processes as Limits of Markov Jump Processes." Advances in Applied Probability 44, no. 03 (2012): 729–48. http://dx.doi.org/10.1017/s0001867800005851.

Abstract:
A classical result about Markov jump processes states that a certain class of dynamical systems given by ordinary differential equations are obtained as the limit of a sequence of scaled Markov jump processes. This approach fails if the scaling cannot be carried out equally across all entities. In the present paper we present a convergence theorem for such an unequal scaling. In contrast to an equal scaling the limit process is not purely deterministic but still possesses randomness. We show that these processes constitute a rich subclass of piecewise-deterministic processes. Such processes ap
10

Lam, Hoang-Chuong. "Weak Convergence for Markov Processes on Discrete State Spaces." Markov Processes and Related Fields 30, no. 4 (2024): 587–98. https://doi.org/10.61102/1024-2953-mprf.2024.30.4.004.

Abstract:
This study investigates the weak convergence of Markov processes on discrete state spaces under the assumption that the transition intensities converge to a constant. Additionally, the research determines the limits of higher-order moments of the Markov process, which are utilized to prove the existence of limit theorems.
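The kind of convergence this entry describes can be illustrated numerically. The sketch below is a generic illustration, not code from the paper: the transition matrix `P` and the starting law are made-up examples. It iterates the law of a two-state Markov chain and tracks the total-variation distance to the invariant measure, which decays geometrically.

```python
# Generic illustration (not code from the cited paper): the law of a
# finite-state Markov chain converges to its invariant measure, with the
# total-variation distance decaying geometrically.  The transition matrix
# P and the starting law below are made up.

P = [[0.9, 0.1],
     [0.2, 0.8]]
pi = [2/3, 1/3]          # invariant measure: solves pi P = pi

def step(mu):
    """One transition of the chain: mu' = mu P."""
    return [sum(mu[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

mu = [1.0, 0.0]          # start deterministically in state 0
tv = []
for _ in range(50):
    mu = step(mu)
    # total-variation distance to the invariant measure
    tv.append(0.5 * sum(abs(m - p) for m, p in zip(mu, pi)))

# For this chain the distance shrinks by a factor 0.7 (the second
# eigenvalue of P) at every step.
print(tv[:3])
```

For a two-state chain the decay factor is exactly the second eigenvalue of `P`; for larger chains the same experiment shows the asymptotic rate governed by the second-largest eigenvalue modulus.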

Dissertations / Theses on the topic "Convergence de processus de Markov"

1

Lachaud, Béatrice. "Détection de la convergence de processus de Markov." PhD thesis, Université René Descartes - Paris V, 2005. http://tel.archives-ouvertes.fr/tel-00010473.

Abstract:
Our work concerns the cutoff phenomenon for n-samples of Markov processes, with the aim of applying it to detecting the convergence of parallelized algorithms. In a first part, the sampled process is an Ornstein-Uhlenbeck process. We exhibit the cutoff phenomenon for the n-sample, then relate it to the convergence in law of the hitting time of a fixed level by the mean process. In a second part, we treat the general case where the sampled process converges at exponential rate to its stationary law. …
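The cutoff phenomenon for n-samples can be illustrated with a toy model. The sketch below is not the Ornstein-Uhlenbeck setting of the thesis: it uses n independent copies of a made-up two-state chain. Because the coordinates are i.i.d. Bernoulli at each time, the total-variation distance of the n-sample to equilibrium equals the distance between two binomial laws, which can be computed exactly; as n grows, the curve stays near 1 and then drops abruptly around a time growing like log(n).

```python
# Toy illustration of cutoff for an n-sample (not the Ornstein-Uhlenbeck
# model of the thesis).  Each of n independent coordinates is a made-up
# two-state chain started in state 0, with P(X_t = 1) = (1/3)(1 - 0.7**t)
# and invariant law (2/3, 1/3).  Since the coordinates are i.i.d.
# Bernoulli, the total-variation distance of the n-sample to equilibrium
# equals the distance between two binomial laws, computed exactly below.

from math import comb

def binom_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def tv_nsample(n, t):
    p_t = (1/3) * (1 - 0.7**t)   # law of one coordinate at time t
    q = 1/3                      # invariant law of one coordinate
    return 0.5 * sum(abs(binom_pmf(n, p_t, k) - binom_pmf(n, q, k))
                     for k in range(n + 1))

# As n grows, t -> tv_nsample(n, t) stays near 1 and then falls to near 0
# over a window that is short compared to the cutoff time.
for n in (10, 100, 1000):
    print(n, [round(tv_nsample(n, t), 3) for t in range(0, 40, 5)])
```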
2

Wang, Xinyu. "Sur la convergence sous-exponentielle de processus de Markov." PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00840858.

Abstract:
My doctoral thesis focuses mainly on the long-time behaviour of Markov processes, functional inequalities, and related techniques. More specifically, I present explicit subexponential convergence rates of Markov processes via two approaches: the Meyn-Tweedie method and (weak) hypocoercivity. The document is divided into three parts. In the first part, I present some important results and related background. First, an overview of my research field is given. Exponential (or sub…
3

Hahn, Léo. "Interacting run-and-tumble particles as piecewise deterministic Markov processes : invariant distribution and convergence." Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0084.

Abstract:
This thesis studies the long-time behaviour of run-and-tumble particles (RTPs), a model for bacteria in non-equilibrium statistical physics, using piecewise deterministic Markov processes (PDMPs). The motivation is to improve the particle-level understanding of active phenomena, in particular motility-induced phase separation (MIPS). The invariant measure for two RTPs with jamming on a 1D torus is determined for general tumble and jamming mechanisms, revealing two non-equilibrium universality classes. Moreover, the dependence of the…
4

Bouguet, Florian. "Étude quantitative de processus de Markov déterministes par morceaux issus de la modélisation." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S040/document.

Abstract:
The purpose of this thesis is to study a certain class of Markov processes, called piecewise deterministic, which have a great many applications in modelling. More precisely, we are interested in their long-time behaviour and in their speed of convergence to equilibrium when they admit a stationary probability measure. One of the main threads of this manuscript is the derivation of sharp quantitative bounds on this speed, obtained mainly by coupling methods. Links will regularly be drawn with other areas of mathematics…
5

Rochet, Sophie. "Convergence des algorithmes génétiques : modèles stochastiques et épistasie." Aix-Marseille 1, 1998. http://www.theses.fr/1998AIX11032.

Abstract:
Genetic algorithms are evolutionary algorithms introduced by John Holland in the 1970s. They are used to solve optimization problems. The work carried out in this thesis centres on the idea of convergence in these algorithms, from both the theoretical and the practical standpoint. First, two models are developed with the aim of arriving at a purely mathematical proof of a theorem stated by Holland, the schema theorem, concerning the structures preserved during one cycle of the genetic algorithm. As regards the action of selection, …
6

Bertoncini, Olivier. "Convergence abrupte et métastabilité." PhD thesis, Rouen, 2007. http://www.theses.fr/2007ROUES038.

Abstract:
The aim of this thesis is to connect two phenomena relating to the asymptotic behaviour of stochastic processes which had until now remained separate: abrupt convergence, or the cutoff phenomenon, on the one hand, and metastability on the other. In the case of cutoff, an abrupt convergence towards the equilibrium measure of the process takes place at a time that can be determined, whereas metastability is linked to great uncertainty about the time at which a certain equilibrium will be left. We propose a common framework for studying and comparing the two phenomena: that of chains…
7

Bertoncini, Olivier. "Convergence abrupte et métastabilité." PhD thesis, Université de Rouen, 2007. http://tel.archives-ouvertes.fr/tel-00218132.

Abstract:
The aim of this thesis is to connect two phenomena relating to the asymptotic behaviour of stochastic processes which had until now remained separate: abrupt convergence, or the cutoff phenomenon, on the one hand, and metastability on the other. In the case of cutoff, an abrupt convergence towards the equilibrium measure of the process takes place at a time that can be determined, whereas metastability is linked to great uncertainty about the time at which a certain equilibrium will be left. We propose a common framework for studying and comparing the two phenomena: that of chains…
8

Gissler, Armand. "Linear convergence of evolution strategies with covariance matrix adaptation." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX115.

Abstract:
As a state-of-the-art method among evolution strategies, CMA-ES is a derivative-free optimization algorithm with numerous applications, but whose convergence has remained an open problem for more than 20 years. The goal of this thesis is to provide theoretical guarantees for the convergence of CMA-ES. We prove that CMA-ES approaches the minimum of ellipsoid functions with a geometric error, and we verify the covariance-matrix conjecture for CMA-ES, namely that it estimates the inverse of the Hessian of a convex quadratic function. Our proof relies…
9

Tagorti, Manel. "Sur les abstractions et les projections des processus décisionnels de Markov de grande taille." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0005/document.

Abstract:
Markov decision processes (MDPs) are a mathematical formalism for artificial-intelligence domains such as planning, machine learning, and reinforcement learning. Solving an MDP makes it possible to identify the optimal strategy (policy) of an agent interacting with a stochastic environment. When the size of this system is very large, it becomes difficult to solve these processes by classical means. This thesis deals with the solution of large MDPs. It studies certain solution methods, such as abstractions and…
10

Tagorti, Manel. "Sur les abstractions et les projections des processus décisionnels de Markov de grande taille." Electronic Thesis or Diss., Université de Lorraine, 2015. http://www.theses.fr/2015LORR0005.

Abstract:
Markov decision processes (MDPs) are a mathematical formalism for artificial-intelligence domains such as planning, machine learning, and reinforcement learning. Solving an MDP makes it possible to identify the optimal strategy (policy) of an agent interacting with a stochastic environment. When the size of this system is very large, it becomes difficult to solve these processes by classical means. This thesis deals with the solution of large MDPs. It studies certain solution methods, such as abstractions and…

Books on the topic "Convergence de processus de Markov"

1

Kurtz, Thomas G., ed. Markov processes: Characterization and convergence. Wiley, 1986.

2

Roberts, Gareth O. Convergence of slice sampler Markov chains. University of Toronto, 1997.

3

Baxter, John Robert. Rates of convergence for everywhere-positive Markov chains. University of Toronto, Dept. of Statistics, 1994.

4

Roberts, Gareth O. Quantitative bounds for convergence rates of continuous time Markov processes. University of Toronto, Dept. of Statistics, 1996.

5

Roberts, Gareth O. On convergence rates of Gibbs samplers for uniform distributions. University of Toronto, 1997.

6

Cowles, Mary Kathryn. Possible biases induced by MCMC convergence diagnostics. University of Toronto, Dept. of Statistics, 1997.

7

Yuen, Wai Kong. Applications of Cheeger's constant to the convergence rate of Markov chains on Rn. University of Toronto, Dept. of Statistics, 1997.

8

Cowles, Mary Kathryn. A simulation approach to convergence rates for Markov chain Monte Carlo algorithms. University of Toronto, Dept. of Statistics, 1996.

9

Wirsching, Günther J. The dynamical system generated by the 3n + 1 function. Springer, 1998.

10

Petrone, Sonia. A note on convergence rates of Gibbs sampling for nonparametric mixtures. University of Toronto, Dept. of Statistics, 1998.


Book chapters on the topic "Convergence de processus de Markov"

1

Zhang, Hanjun, Qixiang Mei, Xiang Lin, and Zhenting Hou. "Convergence Property of Standard Transition Functions." In Markov Processes and Controlled Markov Chains. Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_4.

2

Altman, Eitan. "Convergence of discounted constrained MDPs." In Constrained Markov Decision Processes. Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-17.

3

Altman, Eitan. "Convergence as the horizon tends to infinity." In Constrained Markov Decision Processes. Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-18.

4

Kersting, G., and F. C. Klebaner. "Explosions in Markov Processes and Submartingale Convergence." In Athens Conference on Applied Probability and Time Series Analysis. Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-0749-8_9.

5

Cai, Yuzhi. "How Rates of Convergence for Gibbs Fields Depend on the Interaction and the Kind of Scanning Used." In Markov Processes and Controlled Markov Chains. Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_31.

6

Bernou, Armand. "On Subexponential Convergence to Equilibrium of Markov Processes." In Lecture Notes in Mathematics. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96409-2_5.

7

Feng, Jin, and Thomas Kurtz. "Large deviations for Markov processes and nonlinear semigroup convergence." In Mathematical Surveys and Monographs. American Mathematical Society, 2006. http://dx.doi.org/10.1090/surv/131/05.

8

Pop-Stojanovic, Z. R. "Convergence in Energy and the Sector Condition for Markov Processes." In Seminar on Stochastic Processes, 1984. Birkhäuser Boston, 1986. http://dx.doi.org/10.1007/978-1-4684-6745-1_10.

9

Chatterjee, Krishnendu, Mahdi JafariRaviz, Raimundo Saona, and Jakub Svoboda. "Value Iteration with Guessing for Markov Chains and Markov Decision Processes." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-90653-4_11.

Abstract:
Abstract Two standard models for probabilistic systems are Markov chains (MCs) and Markov decision processes (MDPs). Classic objectives for such probabilistic models for control and planning problems are reachability and stochastic shortest path. The widely studied algorithmic approach for these problems is the Value Iteration (VI) algorithm which iteratively applies local updates called Bellman updates. There are many practical approaches for VI in the literature but they all require exponentially many Bellman updates for MCs in the worst case. A preprocessing step is an algorithm that is dis
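The Value Iteration baseline this abstract refers to is easy to sketch. The four-state Markov chain below is a made-up example (not from the chapter); value iteration repeatedly applies the Bellman update until the reachability values stabilize.

```python
# Sketch of the Value Iteration baseline discussed in the abstract:
# repeated Bellman updates for a reachability objective on a Markov chain.
# The four-state chain below is a made-up example; states 2 (target) and
# 3 (sink) are absorbing.

P = {
    0: {1: 0.5, 3: 0.5},
    1: {0: 0.3, 2: 0.7},
    2: {2: 1.0},
    3: {3: 1.0},
}

V = {0: 0.0, 1: 0.0, 2: 1.0, 3: 0.0}   # V(s) ~ P(reach target from s)

for _ in range(200):
    # Bellman update: V(s) <- sum_{s'} P(s, s') V(s'), keeping the
    # absorbing states pinned at their known values.
    V = {s: (1.0 if s == 2 else 0.0 if s == 3 else
             sum(p * V[t] for t, p in P[s].items()))
         for s in P}

# The exact reachability probabilities solve x0 = 0.5*x1 and
# x1 = 0.3*x0 + 0.7, i.e. x1 = 0.7/0.85 and x0 = 0.35/0.85.
print(round(V[0], 6), round(V[1], 6))
```

Each sweep applies one Bellman update per state; the number of sweeps needed for a fixed accuracy can indeed grow quickly with the chain's structure, which is the cost the chapter's preprocessing aims to reduce.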
10

Negoro, Akira, and Masaaki Tsuchiya. "Convergence and uniqueness theorems for markov processes associated with Lévy operators." In Lecture Notes in Mathematics. Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/bfb0078492.


Conference papers on the topic "Convergence de processus de Markov"

1

Shi, Zhengbin. "Volatility Prediction Algorithm in Enterprise Financial Risk Management Based on Markov Chain Algorithm." In 2023 International Conference on Intelligent Computing, Communication & Convergence (ICI3C). IEEE, 2023. http://dx.doi.org/10.1109/ici3c60830.2023.00039.

2

Majeed, Sultan Javed, and Marcus Hutter. "On Q-learning Convergence for Non-Markov Decision Processes." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/353.

Abstract:
Temporal-difference (TD) learning is an attractive, computationally efficient framework for model- free reinforcement learning. Q-learning is one of the most widely used TD learning technique that enables an agent to learn the optimal action-value function, i.e. Q-value function. Contrary to its widespread use, Q-learning has only been proven to converge on Markov Decision Processes (MDPs) and Q-uniform abstractions of finite-state MDPs. On the other hand, most real-world problems are inherently non-Markovian: the full true state of the environment is not revealed by recent observations. In th
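For contrast with the non-Markovian setting studied in this paper, the classical convergent case is simple to sketch: tabular Q-learning on a small finite MDP. The model below is made up; its transitions and rewards are deterministic and every state-action pair is updated each sweep, so a learning rate of 1 makes each update exact.

```python
# Minimal tabular Q-learning sketch in the classical setting where
# convergence is proven, i.e. a finite MDP.  The model below is made up;
# transitions and rewards are deterministic and every state-action pair
# is swept each iteration, so a learning rate of 1 makes each update exact.

GAMMA = 0.9
MODEL = {                      # (state, action) -> (next_state, reward)
    (0, 'stay'): (0, 1.0), (0, 'go'): (1, 0.0),
    (1, 'stay'): (1, 2.0), (1, 'go'): (0, 0.0),
}

Q = {sa: 0.0 for sa in MODEL}

def best(s):
    """Greedy value max_a Q(s, a)."""
    return max(Q[(s, a)] for a in ('stay', 'go'))

for _ in range(300):           # sweeps over all state-action pairs
    for (s, a), (s2, r) in MODEL.items():
        # Q-learning update: Q(s,a) <- r + gamma * max_a' Q(s2, a')
        Q[(s, a)] = r + GAMMA * best(s2)

# Fixed point: Q*(1,'stay') = 20, Q*(0,'go') = 18, Q*(0,'stay') = 17.2,
# Q*(1,'go') = 16.2; the greedy policy moves to state 1 and stays there.
print({sa: round(v, 2) for sa, v in Q.items()})
```

The Bellman optimality operator is a γ-contraction here, so the sweeps converge geometrically; it is exactly this guarantee that breaks down when the state signal is not Markovian.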
3

Amiri, Mohsen, and Sindri Magnússon. "On the Convergence of TD-Learning on Markov Reward Processes with Hidden States." In 2024 European Control Conference (ECC). IEEE, 2024. http://dx.doi.org/10.23919/ecc64448.2024.10591108.

4

Takagi, Hideaki, Muneo Kitajima, Tetsuo Yamamoto, and Yongbing Zhang. "Search process evaluation for a hierarchical menu system by Markov chains." In ITCom 2001: International Symposium on the Convergence of IT and Communications, edited by Robert D. van der Mei and Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434312.

5

Hongbin Liang, Lin X. Cai, Hangguan Shan, Xuemin Shen, and Daiyuan Peng. "Adaptive resource allocation for media services based on semi-Markov decision process." In 2010 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2010. http://dx.doi.org/10.1109/ictc.2010.5674663.

6

Ding, Dongsheng, Kaiqing Zhang, Tamer Basar, and Mihailo R. Jovanovic. "Convergence and optimality of policy gradient primal-dual method for constrained Markov decision processes." In 2022 American Control Conference (ACC). IEEE, 2022. http://dx.doi.org/10.23919/acc53348.2022.9867805.

7

Tayeb, Shahab, Miresmaeil Mirnabibaboli, and Shahram Latifi. "Load Balancing in WSNs using a Novel Markov Decision Process Based Routing Algorithm." In 2016 6th International Conference on IT Convergence and Security (ICITCS). IEEE, 2016. http://dx.doi.org/10.1109/icitcs.2016.7740350.

8

Ferreira Salvador, Paulo J., and Rui J. M. T. Valadas. "Framework based on Markov modulated Poisson processes for modeling traffic with long-range dependence." In ITCom 2001: International Symposium on the Convergence of IT and Communications, edited by Robert D. van der Mei and Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434317.

9

Shi, Chongyang, Yuheng Bu, and Jie Fu. "Information-Theoretic Opacity-Enforcement in Markov Decision Processes." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/749.

Abstract:
The paper studies information-theoretic opacity, an information-flow privacy property, in a setting involving two agents: A planning agent who controls a stochastic system and an observer who partially observes the system states. The goal of the observer is to infer some secret, represented by a random variable, from its partial observations, while the goal of the planning agent is to make the secret maximally opaque to the observer while achieving a satisfactory total return. Modeling the stochastic system using a Markov decision process, two classes of opacity properties are considered---Las
10

Horák, Karel, Branislav Bošanský, and Krishnendu Chatterjee. "Goal-HSVI: Heuristic Search Value Iteration for Goal POMDPs." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/662.

Abstract:
Partially observable Markov decision processes (POMDPs) are the standard models for planning under uncertainty with both finite and infinite horizon. Besides the well-known discounted-sum objective, indefinite-horizon objective (aka Goal-POMDPs) is another classical objective for POMDPs. In this case, given a set of target states and a positive cost for each transition, the optimization objective is to minimize the expected total cost until a target state is reached. In the literature, RTDP-Bel or heuristic search value iteration (HSVI) have been used for solving Goal-POMDPs. Neither of these

Reports on the topic "Convergence de processus de Markov"

1

Athreya, Krishna B., Hani Doss, and Jayaram Sethuraman. A Proof of Convergence of the Markov Chain Simulation Method. Defense Technical Information Center, 1992. http://dx.doi.org/10.21236/ada255456.

2

Sethuraman, Jayaram. Easily Verifiable Conditions for the Convergence of the Markov Chain Monte Carlo Method. Defense Technical Information Center, 1995. http://dx.doi.org/10.21236/ada308874.

3

Athreya, Krishna B., Hani Doss, and Jayaram Sethuraman. Easy-to-Apply Results for Establishing Convergence of Markov Chains in Bayesian Analysis. Defense Technical Information Center, 1993. http://dx.doi.org/10.21236/ada264015.

4

Bledsoe, Keith C. Implement Method for Automated Testing of Markov Chain Convergence into INVERSE for ORNL12-RS-108J: Advanced Multi-Dimensional Forward and Inverse Modeling. Office of Scientific and Technical Information (OSTI), 2015. http://dx.doi.org/10.2172/1234327.

5

Šiljak, Dženita. The Effects of Institutions on the Transition of the Western Balkans. Külügyi és Külgazdasági Intézet, 2022. http://dx.doi.org/10.47683/kkielemzesek.ke-2022.19.

Abstract:
The Western Balkan countries have been lagging behind in their transition process, which started more than 30 years ago. While some justification can be made in the fact that the countries went through wars in the 1990s, the real problem is that they have not been able to create efficient institutions. Inefficient institutions hamper economic growth, as the countries do not attract foreign direct investment (FDI) to the extent they could and should. A lack of FDI affects three aspects of the transition process in the region: developing a functioning market economy, competitiveness, and converg
6

Chea, Phal, Seyhakunthy Hun, and Sopheak Song. Permeability in Cambodian Post-secondary Education and Training: A Growing Convergence. Cambodia Development Resource Institute, 2021. https://doi.org/10.64202/wp.130.202109.

Abstract:
The distinction between vocational training and academic education can be traced back to different institutional structures in medieval Europe. However, owing to an increasing need for higher-level skills to respond to market demand, countries have resolved to establish flexible pathways for students on both tracks or systems to move or transfer across to each other. Permeability in education and training refers to the possibility for learners to transfer between different types of education and between different levels of qualifications. In its recommendations, UNESCO highlights the important
7

Quevedo, Fernando, Paolo Giordano, and Mauricio Mesquita Moreira. El tratamiento de las asimetrías en los acuerdos de integración regional. Inter-American Development Bank, 2004. http://dx.doi.org/10.18235/0009450.

Abstract:
Despite the abundance of theoretical literature on the distributive aspects of preferential trade policies, in the recent applied literature the question of how to implement policies to address asymmetries within regional integration processes is conspicuous by its scarcity, especially if the analysis is restricted to the case of developing countries. As far as Latin America is concerned, this gap is probably explained both by the change of paradigm of regional integration in the 1990s and by a certain optimism that has accompanied the…
8

Briones, Roehlano, Ivory Myka Galang, Isabel Espineli, Aniceto Jr Orbeta, and Marife Ballesteros. Endline Study Report and Policy Study for the ConVERGE Project. Philippine Institute for Development Studies, 2023. http://dx.doi.org/10.62986/dp2023.13.

Abstract:
The Department of Agrarian Reform (DAR), in partnership with the International Fund for Agricultural Development, implemented the project Convergence on Value Chain Enhancement for Rural Growth and Empowerment (ConVERGE) with the goal of empowering Agrarian Reform Beneficiaries (ARBs) to drive rural economic growth across 10 provinces spanning 3 regions. DAR engaged the Philippine Institute for Development Studies to undertake baseline and endline studies, serving as a crucial assessment tool for the project's performance and providing insights to inform the comprehensive ARC Cluster Developme
9

Hori, Tsuneki, Sergio Lacambra Ayuso, Ana María Torres, et al. Índice de Gobernabilidad y Políticas Públicas en Gestión de Riesgo de Desastres (iGOPP): Informe Nacional de Perú. Inter-American Development Bank, 2015. http://dx.doi.org/10.18235/0010086.

Abstract:
Peru is one of the countries in the region that has recently undergone one of the most significant and far-reaching processes of modernization of its regulatory and institutional framework for disaster risk management (DRM), and where explicit public investment in this area has shown the greatest dynamism. As a consequence of a conceptual maturation in which disaster risk is understood as essentially a development problem, and of the resulting convergence of key actors at the highest political level, in 2011 Peru achieved the creation of the Sistema Nacional de Gesti…
10

Ocampo, José Antonio, Roberto Steiner Sampedro, Mauricio Villamizar Villegas, et al. Informe de la Junta Directiva al Congreso de la República - Marzo de 2023. Banco de la República, 2023. http://dx.doi.org/10.32468/inf-jun-dir-con-rep.3-2023.

Abstract:
Introduction: In 2023 the Banco de la República celebrates 100 years since its founding. This is an anniversary of great significance, which offers the opportunity to highlight the contribution the Bank has made to the country's development. Its record as guarantor of monetary stability has consolidated it as the independent state institution that inspires the greatest confidence among Colombians for its transparency, management capacity, and effective fulfilment of the central-banking and cultural functions entrusted to it by the Constitution and the law. On a date as important as this, the Junta Direct…