Journal articles on the topic 'Partially Observable Markov Decision Processes (POMDPs)'
Ni, Yaodong, and Zhi-Qiang Liu. "Bounded-Parameter Partially Observable Markov Decision Processes: Framework and Algorithm." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 21, no. 06 (2013): 821–63. http://dx.doi.org/10.1142/s0218488513500396.
Tennenholtz, Guy, Uri Shalit, and Shie Mannor. "Off-Policy Evaluation in Partially Observable Environments." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (2020): 10276–83. http://dx.doi.org/10.1609/aaai.v34i06.6590.
Carr, Steven, Nils Jansen, and Ufuk Topcu. "Task-Aware Verifiable RNN-Based Policies for Partially Observable Markov Decision Processes." Journal of Artificial Intelligence Research 72 (November 18, 2021): 819–47. http://dx.doi.org/10.1613/jair.1.12963.
Kim, Sung-Kyun, Oren Salzman, and Maxim Likhachev. "POMHDP: Search-Based Belief Space Planning Using Multiple Heuristics." Proceedings of the International Conference on Automated Planning and Scheduling 29 (2019): 734–44. http://dx.doi.org/10.1609/icaps.v29i1.3542.
Wang, Chenggang, and Roni Khardon. "Relational Partially Observable MDPs." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (2010): 1153–58. http://dx.doi.org/10.1609/aaai.v24i1.7742.
Hauskrecht, M. "Value-Function Approximations for Partially Observable Markov Decision Processes." Journal of Artificial Intelligence Research 13 (August 1, 2000): 33–94. http://dx.doi.org/10.1613/jair.678.
Victorio-Meza, Hermilo, Manuel Mejía-Lavalle, Alicia Martínez Rebollar, Andrés Blanco Ortega, Obdulia Pichardo Lagunas, and Grigori Sidorov. "Searching for Cerebrovascular Disease Optimal Treatment Recommendations Applying Partially Observable Markov Decision Processes." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 01 (2017): 1860015. http://dx.doi.org/10.1142/s0218001418600157.
Zhang, N. L., and W. Liu. "A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains." Journal of Artificial Intelligence Research 7 (November 1, 1997): 199–230. http://dx.doi.org/10.1613/jair.419.
Nababan, Maxtulus Junedy. "Perkembangan Perilaku Pembelajaran Peserta Didik dengan Menggunakan Partially Observable Markov Decision Processes" [The Development of Students' Learning Behavior Using Partially Observable Markov Decision Processes]. Edukasi Elita: Jurnal Inovasi Pendidikan 2, no. 1 (2024): 289–97. https://doi.org/10.62383/edukasi.v2i1.1034.
Omidshafiei, Shayegan, Ali-Akbar Agha-Mohammadi, Christopher Amato, Shih-Yuan Liu, Jonathan P. How, and John Vian. "Decentralized control of multi-robot partially observable Markov decision processes using belief space macro-actions." International Journal of Robotics Research 36, no. 2 (2017): 231–58. http://dx.doi.org/10.1177/0278364917692864.
Rozek, Brandon, Junkyu Lee, Harsha Kokel, Michael Katz, and Shirin Sohrabi. "Partially Observable Hierarchical Reinforcement Learning with AI Planning (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (2024): 23635–36. http://dx.doi.org/10.1609/aaai.v38i21.30504.
Theocharous, Georgios, and Sridhar Mahadevan. "Compressing POMDPs Using Locality Preserving Non-Negative Matrix Factorization." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (2010): 1147–52. http://dx.doi.org/10.1609/aaai.v24i1.7750.
Walraven, Erwin, and Matthijs T. J. Spaan. "Point-Based Value Iteration for Finite-Horizon POMDPs." Journal of Artificial Intelligence Research 65 (July 11, 2019): 307–41. http://dx.doi.org/10.1613/jair.1.11324.
Zhang, N. L., and W. Zhang. "Speeding Up the Convergence of Value Iteration in Partially Observable Markov Decision Processes." Journal of Artificial Intelligence Research 14 (February 1, 2001): 29–51. http://dx.doi.org/10.1613/jair.761.
Wang, Erli, Hanna Kurniawati, and Dirk Kroese. "An On-Line Planner for POMDPs with Large Discrete Action Space: A Quantile-Based Approach." Proceedings of the International Conference on Automated Planning and Scheduling 28 (June 15, 2018): 273–77. http://dx.doi.org/10.1609/icaps.v28i1.13906.
Zhang, Zongzhang, Michael Littman, and Xiaoping Chen. "Covering Number as a Complexity Measure for POMDP Planning and Learning." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2012): 1853–59. http://dx.doi.org/10.1609/aaai.v26i1.8360.
Ross, S., J. Pineau, S. Paquet, and B. Chaib-draa. "Online Planning Algorithms for POMDPs." Journal of Artificial Intelligence Research 32 (July 29, 2008): 663–704. http://dx.doi.org/10.1613/jair.2567.
Ko, Li Ling, David Hsu, Wee Sun Lee, and Sylvie Ong. "Structured Parameter Elicitation." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (2010): 1102–7. http://dx.doi.org/10.1609/aaai.v24i1.7744.
Xiang, Yang, and Frank Hanshar. "Multiagent Expedition with Graphical Models." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 19, no. 06 (2011): 939–76. http://dx.doi.org/10.1142/s0218488511007416.
Lin, Yong, Xingjia Lu, and Fillia Makedon. "Approximate Planning in POMDPs with Weighted Graph Models." International Journal on Artificial Intelligence Tools 24, no. 04 (2015): 1550014. http://dx.doi.org/10.1142/s0218213015500141.
Sanner, Scott, and Kristian Kersting. "Symbolic Dynamic Programming for First-order POMDPs." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (2010): 1140–46. http://dx.doi.org/10.1609/aaai.v24i1.7747.
Capitan, Jesus, Matthijs Spaan, Luis Merino, and Anibal Ollero. "Decentralized Multi-Robot Cooperation with Auctioned POMDPs." Proceedings of the International Conference on Automated Planning and Scheduling 24 (May 11, 2014): 515–18. http://dx.doi.org/10.1609/icaps.v24i1.13658.
Belly, Marius, Nathanaël Fijalkow, Hugo Gimbert, Florian Horn, Guillermo A. Pérez, and Pierre Vandenhove. "Revelations: A Decidable Class of POMDPs with Omega-Regular Objectives." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 25 (2025): 26454–62. https://doi.org/10.1609/aaai.v39i25.34845.
Aras, R., and A. Dutech. "An Investigation into Mathematical Programming for Finite Horizon Decentralized POMDPs." Journal of Artificial Intelligence Research 37 (March 26, 2010): 329–96. http://dx.doi.org/10.1613/jair.2915.
Wen, Xian, Haifeng Huo, and Jinhua Cui. "The optimal probability of the risk for finite horizon partially observable Markov decision processes." AIMS Mathematics 8, no. 12 (2023): 28435–49. http://dx.doi.org/10.3934/math.20231455.
Itoh, Hideaki, Hisao Fukumoto, Hiroshi Wakuya, and Tatsuya Furukawa. "Bottom-up learning of hierarchical models in a class of deterministic POMDP environments." International Journal of Applied Mathematics and Computer Science 25, no. 3 (2015): 597–615. http://dx.doi.org/10.1515/amcs-2015-0044.
Chatterjee, Krishnendu, Martin Chmelik, and Ufuk Topcu. "Sensor Synthesis for POMDPs with Reachability Objectives." Proceedings of the International Conference on Automated Planning and Scheduling 28 (June 15, 2018): 47–55. http://dx.doi.org/10.1609/icaps.v28i1.13875.
Dressel, Louis, and Mykel Kochenderfer. "Efficient Decision-Theoretic Target Localization." Proceedings of the International Conference on Automated Planning and Scheduling 27 (June 5, 2017): 70–78. http://dx.doi.org/10.1609/icaps.v27i1.13832.
Park, Jaeyoung, Kee-Eung Kim, and Yoon-Kyu Song. "A POMDP-Based Optimal Control of P300-Based Brain-Computer Interfaces." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (2011): 1559–62. http://dx.doi.org/10.1609/aaai.v25i1.7956.
Doshi, P., and P. J. Gmytrasiewicz. "Monte Carlo Sampling Methods for Approximating Interactive POMDPs." Journal of Artificial Intelligence Research 34 (March 24, 2009): 297–337. http://dx.doi.org/10.1613/jair.2630.
Shatkay, H., and L. P. Kaelbling. "Learning Geometrically-Constrained Hidden Markov Models for Robot Navigation: Bridging the Topological-Geometrical Gap." Journal of Artificial Intelligence Research 16 (March 1, 2002): 167–207. http://dx.doi.org/10.1613/jair.874.
Lim, Michael H., Tyler J. Becker, Mykel J. Kochenderfer, Claire J. Tomlin, and Zachary N. Sunberg. "Optimality Guarantees for Particle Belief Approximation of POMDPs." Journal of Artificial Intelligence Research 77 (August 27, 2023): 1591–636. http://dx.doi.org/10.1613/jair.1.14525.
Spaan, M. T. J., and N. Vlassis. "Perseus: Randomized Point-based Value Iteration for POMDPs." Journal of Artificial Intelligence Research 24 (August 1, 2005): 195–220. http://dx.doi.org/10.1613/jair.1659.
Kraemer, Landon, and Bikramjit Banerjee. "Informed Initial Policies for Learning in Dec-POMDPs." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2012): 2433–34. http://dx.doi.org/10.1609/aaai.v26i1.8426.
Banerjee, Bikramjit, Jeremy Lyle, Landon Kraemer, and Rajesh Yellamraju. "Sample Bounded Distributed Reinforcement Learning for Decentralized POMDPs." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2012): 1256–62. http://dx.doi.org/10.1609/aaai.v26i1.8260.
Wu, Bo, Yan Peng Feng, and Hong Yan Zheng. "Point-Based Monte Carlo Online Planning in POMDPs." Advanced Materials Research 846-847 (November 2013): 1388–91. http://dx.doi.org/10.4028/www.scientific.net/amr.846-847.1388.
Bernstein, D. S., C. Amato, E. A. Hansen, and S. Zilberstein. "Policy Iteration for Decentralized Control of Markov Decision Processes." Journal of Artificial Intelligence Research 34 (March 1, 2009): 89–132. http://dx.doi.org/10.1613/jair.2667.
Hahsler, Michael, and Anthony R. Cassandra. "pomdp: A Computational Infrastructure for Partially Observable Markov Decision Processes." R Journal 16, no. 2 (2025): 116–33. https://doi.org/10.32614/rj-2024-021.
Ajdarów, Michal, Šimon Brlej, and Petr Novotný. "Shielding in Resource-Constrained Goal POMDPs." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 14674–82. http://dx.doi.org/10.1609/aaai.v37i12.26715.
Simão, Thiago D., Marnix Suilen, and Nils Jansen. "Safe Policy Improvement for POMDPs via Finite-State Controllers." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 15109–17. http://dx.doi.org/10.1609/aaai.v37i12.26763.
Zhang, Zongzhang, David Hsu, Wee Sun Lee, Zhan Wei Lim, and Aijun Bai. "PLEASE: Palm Leaf Search for POMDPs with Large Observation Spaces." Proceedings of the International Conference on Automated Planning and Scheduling 25 (April 8, 2015): 249–57. http://dx.doi.org/10.1609/icaps.v25i1.13706.
Sonu, Ekhlas, Yingke Chen, and Prashant Doshi. "Individual Planning in Agent Populations: Exploiting Anonymity and Frame-Action Hypergraphs." Proceedings of the International Conference on Automated Planning and Scheduling 25 (April 8, 2015): 202–10. http://dx.doi.org/10.1609/icaps.v25i1.13712.
Bouton, Maxime, Jana Tumova, and Mykel J. Kochenderfer. "Point-Based Methods for Model Checking in Partially Observable Markov Decision Processes." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (2020): 10061–68. http://dx.doi.org/10.1609/aaai.v34i06.6563.
Lemmel, Julian, and Radu Grosu. "Real-Time Recurrent Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 17 (2025): 18189–97. https://doi.org/10.1609/aaai.v39i17.34001.
Petrik, Marek, and Shlomo Zilberstein. "Linear Dynamic Programs for Resource Management." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (2011): 1377–83. http://dx.doi.org/10.1609/aaai.v25i1.7794.
Dibangoye, Jilles Steeve, Christopher Amato, Olivier Buffet, and François Charpillet. "Optimally Solving Dec-POMDPs as Continuous-State MDPs." Journal of Artificial Intelligence Research 55 (February 24, 2016): 443–97. http://dx.doi.org/10.1613/jair.4623.
Ng, Brenda, Carol Meyers, Kofi Boakye, and John Nitao. "Towards Applying Interactive POMDPs to Real-World Adversary Modeling." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 2 (2010): 1814–20. http://dx.doi.org/10.1609/aaai.v24i2.18818.
Boots, Byron, and Geoffrey Gordon. "An Online Spectral Learning Algorithm for Partially Observable Nonlinear Dynamical Systems." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (2011): 293–300. http://dx.doi.org/10.1609/aaai.v25i1.7924.
Andriushchenko, Roman, Milan Češka, Filip Macák, Sebastian Junges, and Joost-Pieter Katoen. "An Oracle-Guided Approach to Constrained Policy Synthesis Under Uncertainty." Journal of Artificial Intelligence Research 82 (February 3, 2025): 433–69. https://doi.org/10.1613/jair.1.16593.
Banerjee, Bikramjit. "Pruning for Monte Carlo Distributed Reinforcement Learning in Decentralized POMDPs." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (2013): 88–94. http://dx.doi.org/10.1609/aaai.v27i1.8670.