Scholarly literature on the topic "Structured continuous time Markov decision processes"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Browse thematic lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "Structured continuous time Markov decision processes."

Next to every source in the reference list you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the selected source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Structured continuous time Markov decision processes"

1

Shelton, C. R., and G. Ciardo. "Tutorial on Structured Continuous-Time Markov Processes." Journal of Artificial Intelligence Research 51 (December 23, 2014): 725–78. http://dx.doi.org/10.1613/jair.4415.

Full text
Abstract:
A continuous-time Markov process (CTMP) is a collection of variables indexed by a continuous quantity, time. It obeys the Markov property that the distribution over a future variable is independent of past variables given the state at the present time. We introduce continuous-time Markov process representations and algorithms for filtering, smoothing, expected sufficient statistics calculations, and model estimation, assuming no prior knowledge of continuous-time processes but some basic knowledge of probability and statistics. We begin by describing "flat" or unstructured Markov processes and…
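The "flat" continuous-time Markov process described in this tutorial can be illustrated with a minimal forward simulator: hold in each state for an exponential time governed by the exit rate, then jump proportionally to the off-diagonal rates. This is a sketch, not code from the paper; the two-state generator matrix `Q` below is a made-up example.

```python
import random

def simulate_ctmp(Q, state, t_end, seed=0):
    """Simulate a CTMP with generator matrix Q, where Q[i][j] is the
    jump rate from i to j (j != i) and each row sums to zero."""
    rng = random.Random(seed)
    t, path = 0.0, [(0.0, state)]
    while True:
        rate_out = -Q[state][state]        # total exit rate of current state
        if rate_out == 0.0:                # absorbing state: stay forever
            break
        t += rng.expovariate(rate_out)     # exponential holding time
        if t >= t_end:
            break
        # pick the next state with probability Q[state][j] / rate_out
        r, acc = rng.random() * rate_out, 0.0
        for j, q in enumerate(Q[state]):
            if j == state:
                continue
            acc += q
            if r < acc:
                state = j
                break
        path.append((t, state))
    return path

# Hypothetical on/off process: leaves state 0 at rate 1.0, state 1 at rate 0.5.
Q = [[-1.0, 1.0],
     [0.5, -0.5]]
print(simulate_ctmp(Q, 0, 10.0))
```

The returned path is the piecewise-constant trajectory as (jump time, new state) pairs, which is the raw material for the filtering and sufficient-statistics computations the tutorial develops.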
2

D'Amico, Guglielmo, Jacques Janssen, and Raimondo Manca. "Monounireducible Nonhomogeneous Continuous Time Semi-Markov Processes Applied to Rating Migration Models." Advances in Decision Sciences 2012 (October 16, 2012): 1–12. http://dx.doi.org/10.1155/2012/123635.

Full text
Abstract:
Monounireducible nonhomogeneous semi-Markov processes are defined and investigated. The monounireducible topological structure is a sufficient condition that guarantees the absorption of the semi-Markov process in a state of the process. This situation is of fundamental importance in the modelling of credit rating migrations because it permits the derivation of the distribution function of the time of default. An application in credit rating modelling is given in order to illustrate the results.
3

Beutler, Frederick J., and Keith W. Ross. "Uniformization for semi-Markov decision processes under stationary policies." Journal of Applied Probability 24, no. 3 (1987): 644–56. http://dx.doi.org/10.2307/3214096.

Full text
Abstract:
Uniformization permits the replacement of a semi-Markov decision process (SMDP) by a Markov chain exhibiting the same average rewards for simple (non-randomized) policies. It is shown that various anomalies may occur, especially for stationary (randomized) policies; uniformization introduces virtual jumps with concomitant action changes not present in the original process. Since these lead to discrepancies in the average rewards for stationary processes, uniformization can be accepted as valid only for simple policies. We generalize uniformization to yield consistent results for stationary policies…
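The basic uniformization construction that this abstract analyzes can be sketched in a few lines for an uncontrolled chain: pick a rate λ at least as large as every exit rate and form the discrete-time transition matrix P = I + Q/λ. This is a minimal illustration, not the paper's generalized construction; the generator `Q` below is a made-up example.

```python
def uniformize(Q, lam=None):
    """Uniformize generator Q into the DTMC transition matrix
    P = I + Q/lam, with lam >= the largest exit rate."""
    n = len(Q)
    if lam is None:
        lam = max(-Q[i][i] for i in range(n))   # uniformization rate
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    return P, lam

# Made-up two-state generator; each row of P is a probability distribution.
Q = [[-3.0, 3.0],
     [2.0, -2.0]]
P, lam = uniformize(Q)
print(P, lam)
```

With λ = 3, state 1 gets the self-loop probability P[1][1] = 1/3: these self-loops are exactly the "virtual jumps" the abstract warns about, harmless for simple policies but a source of inconsistency once randomized stationary policies change actions at virtual jump epochs.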
4

Beutler, Frederick J., and Keith W. Ross. "Uniformization for semi-Markov decision processes under stationary policies." Journal of Applied Probability 24, no. 3 (1987): 644–56. http://dx.doi.org/10.1017/s0021900200031375.

Full text
Abstract:
Uniformization permits the replacement of a semi-Markov decision process (SMDP) by a Markov chain exhibiting the same average rewards for simple (non-randomized) policies. It is shown that various anomalies may occur, especially for stationary (randomized) policies; uniformization introduces virtual jumps with concomitant action changes not present in the original process. Since these lead to discrepancies in the average rewards for stationary processes, uniformization can be accepted as valid only for simple policies. We generalize uniformization to yield consistent results for stationary policies…
5

Dibangoye, Jilles Steeve, Christopher Amato, Olivier Buffet, and François Charpillet. "Optimally Solving Dec-POMDPs as Continuous-State MDPs." Journal of Artificial Intelligence Research 55 (February 24, 2016): 443–97. http://dx.doi.org/10.1613/jair.4623.

Full text
Abstract:
Decentralized partially observable Markov decision processes (Dec-POMDPs) provide a general model for decision-making under uncertainty in decentralized settings, but are difficult to solve optimally (NEXP-Complete). As a new way of solving these problems, we introduce the idea of transforming a Dec-POMDP into a continuous-state deterministic MDP with a piecewise-linear and convex value function. This approach makes use of the fact that planning can be accomplished in a centralized offline manner, while execution can still be decentralized. This new Dec-POMDP formulation, which we call an occupancy…
6

Chornei, Ruslan. "Local Control in Gordon-Newell Networks." NaUKMA Research Papers. Computer Science 7 (May 12, 2025): 120–29. https://doi.org/10.18523/2617-3808.2024.7.120-129.

Full text
Abstract:
We examine continuous-time stochastic processes with a general compact state space, which is organized by a fundamental graph defining a neighborhood structure of states. These neighborhoods establish local interactions among the coordinates of the spatial process. At any given moment, the random state of the system, as described by the stochastic process, forms a random field with respect to the neighborhood graph. The process is assumed to have a semi-Markov temporal property, and its transition kernels exhibit a spatial Markov property relative to the basic graph. Additionally, a local control st…
7

Pazis, Jason, and Ronald Parr. "Sample Complexity and Performance Bounds for Non-Parametric Approximate Linear Programming." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (2013): 782–88. http://dx.doi.org/10.1609/aaai.v27i1.8696.

Full text
Abstract:
One of the most difficult tasks in value function approximation for Markov Decision Processes is finding an approximation architecture that is expressive enough to capture the important structure in the value function, while at the same time not overfitting the training samples. Recent results in non-parametric approximate linear programming (NP-ALP) have demonstrated that this can be done effectively using nothing more than a smoothness assumption on the value function. In this paper we extend these results to the case where samples come from real world transitions instead of the full Bellman…
8

Abid, Amira, Fathi Abid, and Bilel Kaffel. "CDS-based implied probability of default estimation." Journal of Risk Finance 21, no. 4 (2020): 399–422. http://dx.doi.org/10.1108/jrf-05-2019-0079.

Full text
Abstract:
Purpose: This study aims to shed more light on the relationship between probability of default, investment horizons and rating classes, to make decision-making processes more efficient. Design/methodology/approach: Based on credit default swap (CDS) spreads, a methodology is implemented to determine the implied default probability and the implied rating, and then to estimate the term structure of the market-implied default probability and the transition matrix of implied ratings. The term-structure estimation in discrete time is conducted with the Nelson and Siegel model, and in continuous time wi…
9

Mironov, Aleksey, Anna Mironova, and Vyacheslav Burlov. "Mathematical modeling of preemptive management by the stages complex of administrative production." Applied Mathematics and Control Sciences, no. 4 (December 12, 2022): 174–97. http://dx.doi.org/10.15593/2499-9873/2022.4.10.

Full text
Abstract:
In line with the fundamental reform of Russian administrative legislation, this article discusses the synthesis of a geoinformation system for the preventive management of the full cycle of proceedings in administrative-offense cases. Following the Anokhin–Sudakov theory of functional systems, the article addresses the tasks of forming a structural image and synthesizing a mathematical model to manage the stages of the administrative process, as well as substantiating a mathematical criterion and its structural and functional implementation to prevent violations of a reasonable time in product…
10

Puterman, Martin L., and F. A. Van der Duyn Schouten. "Markov Decision Processes With Continuous Time Parameter." Journal of the American Statistical Association 80, no. 390 (1985): 491. http://dx.doi.org/10.2307/2287942.

Full text
More sources

Theses on the topic "Structured continuous time Markov decision processes"

1

VILLA, SIMONE. "Continuous Time Bayesian Networks for Reasoning and Decision Making in Finance." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/69953.

Full text
Abstract:
The analysis of the enormous quantity of financial data made available by electronic markets requires the development of new models and techniques to extract knowledge effectively for use in informed decision-making. The aim of this thesis is to introduce probabilistic graphical models for reasoning and decision-making in that context. The first part of the thesis presents a framework that uses Bayesian networks to carry out portfolio analysis and optimization holistically. In particular, it exploits, on the one hand…
2

Saha, Subhamay. "Single and Multi-player Stochastic Dynamic Optimization." Thesis, 2013. https://etd.iisc.ac.in/handle/2005/3357.

Full text
Abstract:
In this thesis we investigate single- and multi-player stochastic dynamic optimization problems. We consider both discrete- and continuous-time processes. In the multi-player setup we investigate zero-sum games with both complete and partial information. We study partially observable stochastic games with the average-cost criterion, the state process being a discrete-time controlled Markov chain. The idea involved in studying this problem is to replace the original unobservable state variable with a suitable, completely observable state variable. We establish the existence of the value of the game…
3

Saha, Subhamay. "Single and Multi-player Stochastic Dynamic Optimization." Thesis, 2013. http://etd.iisc.ernet.in/2005/3357.

Full text
Abstract:
In this thesis we investigate single- and multi-player stochastic dynamic optimization problems. We consider both discrete- and continuous-time processes. In the multi-player setup we investigate zero-sum games with both complete and partial information. We study partially observable stochastic games with the average-cost criterion, the state process being a discrete-time controlled Markov chain. The idea involved in studying this problem is to replace the original unobservable state variable with a suitable, completely observable state variable. We establish the existence of the value of the game…

Books on the topic "Structured continuous time Markov decision processes"

1

Guo, Xianping, and Onésimo Hernández-Lerma. Continuous-Time Markov Decision Processes. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02547-1.

Full text
2

Piunovskiy, Alexey, and Yi Zhang. Continuous-Time Markov Decision Processes. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9.

Full text
3

Hernandez-Lerma, Onesimo, and Xianping Guo. Continuous-Time Markov Decision Processes: Theory and Applications. Springer, 2010.

Find full text
4

Hernández-Lerma, Onésimo, and Xianping Guo. Continuous-Time Markov Decision Processes: Theory and Applications. Springer, 2012.

Find full text
5

Zhang, Yi, Alexey Piunovskiy, and Albert Nikolaevich Shiryaev. Continuous-Time Markov Decision Processes: Borel Space Models and General Control Strategies. Springer International Publishing AG, 2021.

Find full text
6

Zhang, Yi, Alexey Piunovskiy, and Albert Nikolaevich Shiryaev. Continuous-Time Markov Decision Processes: Borel Space Models and General Control Strategies. Springer International Publishing AG, 2020.

Find full text
7

Hernandez-Lerma, Onesimo, and Xianping Guo. Continuous-Time Markov Decision Processes: Theory and Applications (Stochastic Modelling and Applied Probability Book 62). Springer, 2009.

Find full text

Book chapters on the topic "Structured continuous time Markov decision processes"

1

Neuhäußer, Martin R., Mariëlle Stoelinga, and Joost-Pieter Katoen. "Delayed Nondeterminism in Continuous-Time Markov Decision Processes." In Foundations of Software Science and Computational Structures. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-00596-1_26.

Full text
2

Melchiors, Philipp. "Continuous-Time Markov Decision Processes." In Lecture Notes in Economics and Mathematical Systems. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-04540-5_4.

Full text
3

Guo, Xianping, and Onésimo Hernández-Lerma. "Continuous-Time Markov Decision Processes." In Stochastic Modelling and Applied Probability. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02547-1_2.

Full text
4

Piunovskiy, Alexey, and Yi Zhang. "Selected Properties of Controlled Processes." In Continuous-Time Markov Decision Processes. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_2.

Full text
5

Piunovskiy, Alexey, and Yi Zhang. "Description of CTMDPs and Preliminaries." In Continuous-Time Markov Decision Processes. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_1.

Full text
6

Piunovskiy, Alexey, and Yi Zhang. "The Discounted Cost Model." In Continuous-Time Markov Decision Processes. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_3.

Full text
7

Piunovskiy, Alexey, and Yi Zhang. "Reduction to DTMDP: The Total Cost Model." In Continuous-Time Markov Decision Processes. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_4.

Full text
8

Piunovskiy, Alexey, and Yi Zhang. "The Average Cost Model." In Continuous-Time Markov Decision Processes. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_5.

Full text
9

Piunovskiy, Alexey, and Yi Zhang. "The Total Cost Model: General Case." In Continuous-Time Markov Decision Processes. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_6.

Full text
10

Piunovskiy, Alexey, and Yi Zhang. "Gradual-Impulsive Control Models." In Continuous-Time Markov Decision Processes. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54987-9_7.

Full text

Conference papers on the topic "Structured continuous time Markov decision processes"

1

Huang, Yunhan, Veeraruna Kavitha, and Quanyan Zhu. "Continuous-Time Markov Decision Processes with Controlled Observations." In 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2019. http://dx.doi.org/10.1109/allerton.2019.8919744.

Full text
2

Rincon, Luis F., Yina F. Muñoz Moscoso, Jose Campos Matos, and Stefan Leonardo Leiva Maldonado. "Stochastic degradation model analysis for prestressed concrete bridges." In IABSE Symposium, Prague 2022: Challenges for Existing and Oncoming Structures. International Association for Bridge and Structural Engineering (IABSE), 2022. http://dx.doi.org/10.2749/prague.2022.1092.

Full text
Abstract:
Bridges in road infrastructure represent a critical and strategic asset whose functionality is vital for the economic and social development of countries. Currently, approximately 50% of construction-industry expenditure in most developed countries is associated with repairs, maintenance, and rehabilitation of existing structures, and this share is expected to increase in the future. It is therefore necessary to monitor the behaviour of bridges and obtain indicators that represent the evolution of their state of service over time. Therefore, degradation m…
3

Neuhäußer, Martin R., and Lijun Zhang. "Time-Bounded Reachability Probabilities in Continuous-Time Markov Decision Processes." In 2010 Seventh International Conference on the Quantitative Evaluation of Systems (QEST). IEEE, 2010. http://dx.doi.org/10.1109/qest.2010.47.

Full text
4

Qiu, Qinru, and Massoud Pedram. "Dynamic power management based on continuous-time Markov decision processes." In the 36th ACM/IEEE conference. ACM Press, 1999. http://dx.doi.org/10.1145/309847.309997.

Full text
5

Feinberg, Eugene A., Manasa Mandava, and Albert N. Shiryaev. "Sufficiency of Markov policies for continuous-time Markov decision processes and solutions to Kolmogorov's forward equation for jump Markov processes." In 2013 IEEE 52nd Annual Conference on Decision and Control (CDC). IEEE, 2013. http://dx.doi.org/10.1109/cdc.2013.6760792.

Full text
6

Guo, Xianping. "Discounted Optimality for Continuous-Time Markov Decision Processes in Polish Spaces." In 2006 Chinese Control Conference. IEEE, 2006. http://dx.doi.org/10.1109/chicc.2006.280655.

Full text
7

Alasmari, Naif, and Radu Calinescu. "Synthesis of Pareto-optimal Policies for Continuous-Time Markov Decision Processes." In 2022 48th Euromicro Conference on Software Engineering and Advanced Applications (SEAA). IEEE, 2022. http://dx.doi.org/10.1109/seaa56994.2022.00071.

Full text
8

Lee, Donghwan, Han-Dong Lim, and Do Wan Kim. "Continuous-Time Distributed Dynamic Programming for Networked Multi-Agent Markov Decision Processes." In 2024 IEEE 18th International Conference on Control & Automation (ICCA). IEEE, 2024. http://dx.doi.org/10.1109/icca62789.2024.10591854.

Full text
9

Cao, Xi-Ren. "A new model of continuous-time Markov processes and impulse stochastic control." In 2009 Joint 48th IEEE Conference on Decision and Control (CDC) and 28th Chinese Control Conference (CCC). IEEE, 2009. http://dx.doi.org/10.1109/cdc.2009.5399775.

Full text
10

Tanaka, Takashi, Mikael Skoglund, and Valeri Ugrinovskii. "Optimal sensor design and zero-delay source coding for continuous-time vector Gauss-Markov processes." In 2017 IEEE 56th Annual Conference on Decision and Control (CDC). IEEE, 2017. http://dx.doi.org/10.1109/cdc.2017.8264246.

Full text