Journal articles on the topic "Partially Observable Markov Decision Processes (POMDPs)"
Below are the top 50 journal articles on the research topic "Partially Observable Markov Decision Processes (POMDPs)".
Ni, Yaodong, and Zhi-Qiang Liu. "Bounded-Parameter Partially Observable Markov Decision Processes: Framework and Algorithm." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 21, no. 06 (2013): 821–63. http://dx.doi.org/10.1142/s0218488513500396.
Tennenholtz, Guy, Uri Shalit, and Shie Mannor. "Off-Policy Evaluation in Partially Observable Environments." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (2020): 10276–83. http://dx.doi.org/10.1609/aaai.v34i06.6590.
Carr, Steven, Nils Jansen, and Ufuk Topcu. "Task-Aware Verifiable RNN-Based Policies for Partially Observable Markov Decision Processes." Journal of Artificial Intelligence Research 72 (November 18, 2021): 819–47. http://dx.doi.org/10.1613/jair.1.12963.
Kim, Sung-Kyun, Oren Salzman, and Maxim Likhachev. "POMHDP: Search-Based Belief Space Planning Using Multiple Heuristics." Proceedings of the International Conference on Automated Planning and Scheduling 29 (2019): 734–44. http://dx.doi.org/10.1609/icaps.v29i1.3542.
Wang, Chenggang, and Roni Khardon. "Relational Partially Observable MDPs." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (2010): 1153–58. http://dx.doi.org/10.1609/aaai.v24i1.7742.
Hauskrecht, M. "Value-Function Approximations for Partially Observable Markov Decision Processes." Journal of Artificial Intelligence Research 13 (August 1, 2000): 33–94. http://dx.doi.org/10.1613/jair.678.
Victorio-Meza, Hermilo, Manuel Mejía-Lavalle, Alicia Martínez Rebollar, Andrés Blanco Ortega, Obdulia Pichardo Lagunas, and Grigori Sidorov. "Searching for Cerebrovascular Disease Optimal Treatment Recommendations Applying Partially Observable Markov Decision Processes." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 01 (2017): 1860015. http://dx.doi.org/10.1142/s0218001418600157.
Zhang, N. L., and W. Liu. "A Model Approximation Scheme for Planning in Partially Observable Stochastic Domains." Journal of Artificial Intelligence Research 7 (November 1, 1997): 199–230. http://dx.doi.org/10.1613/jair.419.
Maxtulus Junedy Nababan. "Perkembangan Perilaku Pembelajaran Peserta Didik dengan Menggunakan Partially Observable Markov Decision Processes." Edukasi Elita: Jurnal Inovasi Pendidikan 2, no. 1 (2024): 289–97. https://doi.org/10.62383/edukasi.v2i1.1034.
Omidshafiei, Shayegan, Ali-Akbar Agha-Mohammadi, Christopher Amato, Shih-Yuan Liu, Jonathan P. How, and John Vian. "Decentralized control of multi-robot partially observable Markov decision processes using belief space macro-actions." International Journal of Robotics Research 36, no. 2 (2017): 231–58. http://dx.doi.org/10.1177/0278364917692864.
Rozek, Brandon, Junkyu Lee, Harsha Kokel, Michael Katz, and Shirin Sohrabi. "Partially Observable Hierarchical Reinforcement Learning with AI Planning (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (2024): 23635–36. http://dx.doi.org/10.1609/aaai.v38i21.30504.
Theocharous, Georgios, and Sridhar Mahadevan. "Compressing POMDPs Using Locality Preserving Non-Negative Matrix Factorization." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (2010): 1147–52. http://dx.doi.org/10.1609/aaai.v24i1.7750.
Walraven, Erwin, and Matthijs T. J. Spaan. "Point-Based Value Iteration for Finite-Horizon POMDPs." Journal of Artificial Intelligence Research 65 (July 11, 2019): 307–41. http://dx.doi.org/10.1613/jair.1.11324.
Zhang, N. L., and W. Zhang. "Speeding Up the Convergence of Value Iteration in Partially Observable Markov Decision Processes." Journal of Artificial Intelligence Research 14 (February 1, 2001): 29–51. http://dx.doi.org/10.1613/jair.761.
Wang, Erli, Hanna Kurniawati, and Dirk Kroese. "An On-Line Planner for POMDPs with Large Discrete Action Space: A Quantile-Based Approach." Proceedings of the International Conference on Automated Planning and Scheduling 28 (June 15, 2018): 273–77. http://dx.doi.org/10.1609/icaps.v28i1.13906.
Zhang, Zongzhang, Michael Littman, and Xiaoping Chen. "Covering Number as a Complexity Measure for POMDP Planning and Learning." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2012): 1853–59. http://dx.doi.org/10.1609/aaai.v26i1.8360.
Ross, S., J. Pineau, S. Paquet, and B. Chaib-draa. "Online Planning Algorithms for POMDPs." Journal of Artificial Intelligence Research 32 (July 29, 2008): 663–704. http://dx.doi.org/10.1613/jair.2567.
Ko, Li Ling, David Hsu, Wee Sun Lee, and Sylvie Ong. "Structured Parameter Elicitation." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (2010): 1102–7. http://dx.doi.org/10.1609/aaai.v24i1.7744.
Xiang, Yang, and Frank Hanshar. "Multiagent Expedition with Graphical Models." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 19, no. 06 (2011): 939–76. http://dx.doi.org/10.1142/s0218488511007416.
Lin, Yong, Xingjia Lu, and Fillia Makedon. "Approximate Planning in POMDPs with Weighted Graph Models." International Journal on Artificial Intelligence Tools 24, no. 04 (2015): 1550014. http://dx.doi.org/10.1142/s0218213015500141.
Sanner, Scott, and Kristian Kersting. "Symbolic Dynamic Programming for First-order POMDPs." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (2010): 1140–46. http://dx.doi.org/10.1609/aaai.v24i1.7747.
Capitan, Jesus, Matthijs Spaan, Luis Merino, and Anibal Ollero. "Decentralized Multi-Robot Cooperation with Auctioned POMDPs." Proceedings of the International Conference on Automated Planning and Scheduling 24 (May 11, 2014): 515–18. http://dx.doi.org/10.1609/icaps.v24i1.13658.
Belly, Marius, Nathanaël Fijalkow, Hugo Gimbert, Florian Horn, Guillermo A. Pérez, and Pierre Vandenhove. "Revelations: A Decidable Class of POMDPs with Omega-Regular Objectives." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 25 (2025): 26454–62. https://doi.org/10.1609/aaai.v39i25.34845.
Aras, R., and A. Dutech. "An Investigation into Mathematical Programming for Finite Horizon Decentralized POMDPs." Journal of Artificial Intelligence Research 37 (March 26, 2010): 329–96. http://dx.doi.org/10.1613/jair.2915.
Wen, Xian, Haifeng Huo, and Jinhua Cui. "The optimal probability of the risk for finite horizon partially observable Markov decision processes." AIMS Mathematics 8, no. 12 (2023): 28435–49. http://dx.doi.org/10.3934/math.20231455.
Itoh, Hideaki, Hisao Fukumoto, Hiroshi Wakuya, and Tatsuya Furukawa. "Bottom-up learning of hierarchical models in a class of deterministic POMDP environments." International Journal of Applied Mathematics and Computer Science 25, no. 3 (2015): 597–615. http://dx.doi.org/10.1515/amcs-2015-0044.
Chatterjee, Krishnendu, Martin Chmelik, and Ufuk Topcu. "Sensor Synthesis for POMDPs with Reachability Objectives." Proceedings of the International Conference on Automated Planning and Scheduling 28 (June 15, 2018): 47–55. http://dx.doi.org/10.1609/icaps.v28i1.13875.
Dressel, Louis, and Mykel Kochenderfer. "Efficient Decision-Theoretic Target Localization." Proceedings of the International Conference on Automated Planning and Scheduling 27 (June 5, 2017): 70–78. http://dx.doi.org/10.1609/icaps.v27i1.13832.
Park, Jaeyoung, Kee-Eung Kim, and Yoon-Kyu Song. "A POMDP-Based Optimal Control of P300-Based Brain-Computer Interfaces." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (2011): 1559–62. http://dx.doi.org/10.1609/aaai.v25i1.7956.
Doshi, P., and P. J. Gmytrasiewicz. "Monte Carlo Sampling Methods for Approximating Interactive POMDPs." Journal of Artificial Intelligence Research 34 (March 24, 2009): 297–337. http://dx.doi.org/10.1613/jair.2630.
Shatkay, H., and L. P. Kaelbling. "Learning Geometrically-Constrained Hidden Markov Models for Robot Navigation: Bridging the Topological-Geometrical Gap." Journal of Artificial Intelligence Research 16 (March 1, 2002): 167–207. http://dx.doi.org/10.1613/jair.874.
Lim, Michael H., Tyler J. Becker, Mykel J. Kochenderfer, Claire J. Tomlin, and Zachary N. Sunberg. "Optimality Guarantees for Particle Belief Approximation of POMDPs." Journal of Artificial Intelligence Research 77 (August 27, 2023): 1591–636. http://dx.doi.org/10.1613/jair.1.14525.
Spaan, M. T. J., and N. Vlassis. "Perseus: Randomized Point-based Value Iteration for POMDPs." Journal of Artificial Intelligence Research 24 (August 1, 2005): 195–220. http://dx.doi.org/10.1613/jair.1659.
Kraemer, Landon, and Bikramjit Banerjee. "Informed Initial Policies for Learning in Dec-POMDPs." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2012): 2433–34. http://dx.doi.org/10.1609/aaai.v26i1.8426.
Banerjee, Bikramjit, Jeremy Lyle, Landon Kraemer, and Rajesh Yellamraju. "Sample Bounded Distributed Reinforcement Learning for Decentralized POMDPs." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2012): 1256–62. http://dx.doi.org/10.1609/aaai.v26i1.8260.
Wu, Bo, Yan Peng Feng, and Hong Yan Zheng. "Point-Based Monte Carlo Online Planning in POMDPs." Advanced Materials Research 846-847 (November 2013): 1388–91. http://dx.doi.org/10.4028/www.scientific.net/amr.846-847.1388.
Bernstein, D. S., C. Amato, E. A. Hansen, and S. Zilberstein. "Policy Iteration for Decentralized Control of Markov Decision Processes." Journal of Artificial Intelligence Research 34 (March 1, 2009): 89–132. http://dx.doi.org/10.1613/jair.2667.
Hahsler, Michael, and Anthony R. Cassandra. "Pomdp: A Computational Infrastructure for Partially Observable Markov Decision Processes." R Journal 16, no. 2 (2025): 116–33. https://doi.org/10.32614/rj-2024-021.
Ajdarów, Michal, Šimon Brlej, and Petr Novotný. "Shielding in Resource-Constrained Goal POMDPs." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 14674–82. http://dx.doi.org/10.1609/aaai.v37i12.26715.
Simão, Thiago D., Marnix Suilen, and Nils Jansen. "Safe Policy Improvement for POMDPs via Finite-State Controllers." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 15109–17. http://dx.doi.org/10.1609/aaai.v37i12.26763.
Zhang, Zongzhang, David Hsu, Wee Sun Lee, Zhan Wei Lim, and Aijun Bai. "PLEASE: Palm Leaf Search for POMDPs with Large Observation Spaces." Proceedings of the International Conference on Automated Planning and Scheduling 25 (April 8, 2015): 249–57. http://dx.doi.org/10.1609/icaps.v25i1.13706.
Sonu, Ekhlas, Yingke Chen, and Prashant Doshi. "Individual Planning in Agent Populations: Exploiting Anonymity and Frame-Action Hypergraphs." Proceedings of the International Conference on Automated Planning and Scheduling 25 (April 8, 2015): 202–10. http://dx.doi.org/10.1609/icaps.v25i1.13712.
Bouton, Maxime, Jana Tumova, and Mykel J. Kochenderfer. "Point-Based Methods for Model Checking in Partially Observable Markov Decision Processes." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (2020): 10061–68. http://dx.doi.org/10.1609/aaai.v34i06.6563.
Lemmel, Julian, and Radu Grosu. "Real-Time Recurrent Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 17 (2025): 18189–97. https://doi.org/10.1609/aaai.v39i17.34001.
Petrik, Marek, and Shlomo Zilberstein. "Linear Dynamic Programs for Resource Management." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (2011): 1377–83. http://dx.doi.org/10.1609/aaai.v25i1.7794.
Dibangoye, Jilles Steeve, Christopher Amato, Olivier Buffet, and François Charpillet. "Optimally Solving Dec-POMDPs as Continuous-State MDPs." Journal of Artificial Intelligence Research 55 (February 24, 2016): 443–97. http://dx.doi.org/10.1613/jair.4623.
Ng, Brenda, Carol Meyers, Kofi Boakye, and John Nitao. "Towards Applying Interactive POMDPs to Real-World Adversary Modeling." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 2 (2010): 1814–20. http://dx.doi.org/10.1609/aaai.v24i2.18818.
Boots, Byron, and Geoffrey Gordon. "An Online Spectral Learning Algorithm for Partially Observable Nonlinear Dynamical Systems." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (2011): 293–300. http://dx.doi.org/10.1609/aaai.v25i1.7924.
Andriushchenko, Roman, Milan Češka, Filip Macák, Sebastian Junges, and Joost-Pieter Katoen. "An Oracle-Guided Approach to Constrained Policy Synthesis Under Uncertainty." Journal of Artificial Intelligence Research 82 (February 3, 2025): 433–69. https://doi.org/10.1613/jair.1.16593.
Banerjee, Bikramjit. "Pruning for Monte Carlo Distributed Reinforcement Learning in Decentralized POMDPs." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (2013): 88–94. http://dx.doi.org/10.1609/aaai.v27i1.8670.