Academic literature on the topic 'Goal-conditioned reinforcement learning'

Below are lists of relevant articles, books, theses, book chapters, conference papers, and other scholarly sources on the topic 'Goal-conditioned reinforcement learning.'

Journal articles on the topic "Goal-conditioned reinforcement learning"

1. Yin, Xiangyu, Sihao Wu, Jiaxu Liu, et al. "Representation-Based Robustness in Goal-Conditioned Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (2024): 21761–69. http://dx.doi.org/10.1609/aaai.v38i19.30176.

Abstract:
While Goal-Conditioned Reinforcement Learning (GCRL) has gained attention, its algorithmic robustness against adversarial perturbations remains unexplored. The attacks and robust representation training methods that are designed for traditional RL become less effective when applied to GCRL. To address this challenge, we first propose the Semi-Contrastive Representation attack, a novel approach inspired by the adversarial contrastive attack. Unlike existing attacks in RL, it only necessitates information from the policy function and can be seamlessly implemented during deployment. …

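The policy-only attack setting described in the abstract above can be illustrated with a generic gradient-sign perturbation of the observation. This is a standard FGSM-style baseline, not the paper's Semi-Contrastive attack; the function name and the finite-difference gradient are our own assumptions:

```python
import numpy as np

def perturb_observation(obs, policy_logits_fn, eps=0.1):
    """Generic FGSM-style observation attack (illustrative, NOT the paper's
    Semi-Contrastive attack). It queries only the policy function, nudging
    the observation in the direction that most reduces the logit of the
    currently preferred action; finite differences stand in for autograd."""
    logits = policy_logits_fn(obs)
    a = int(np.argmax(logits))            # action the clean policy prefers
    grad = np.zeros_like(obs, dtype=float)
    h = 1e-5
    for i in range(obs.size):
        e = np.zeros_like(obs, dtype=float)
        e[i] = h
        # Central difference of the preferred action's logit w.r.t. obs[i].
        grad[i] = (policy_logits_fn(obs + e)[a] - policy_logits_fn(obs - e)[a]) / (2 * h)
    return obs - eps * np.sign(grad)      # push the favored action's logit down
```

Because it needs only forward evaluations of the policy, such a perturbation can indeed be applied at deployment time, which is the property the abstract highlights.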
2. Levine, Alexander, and Soheil Feizi. "Goal-Conditioned Q-learning as Knowledge Distillation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (2023): 8500–8509. http://dx.doi.org/10.1609/aaai.v37i7.26024.

Abstract:
Many applications of reinforcement learning can be formalized as goal-conditioned environments, where, in each episode, there is a "goal" that affects the rewards obtained during that episode but does not affect the dynamics. Various techniques have been proposed to improve performance in goal-conditioned environments, such as automatic curriculum generation and goal relabeling. In this work, we explore a connection between off-policy reinforcement learning in goal-conditioned settings and knowledge distillation. In particular, the current Q-value function and the target Q-value estimate are …

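The formalization in the abstract above — a goal that shapes rewards but never the dynamics — can be made concrete with a toy environment. The class below is our own minimal sketch, not code from the paper:

```python
import numpy as np

class GoalConditionedGrid:
    """Minimal 1-D gridworld illustrating the goal-conditioned setting:
    each episode samples a goal cell, the transition rule is identical for
    every goal, and only the reward depends on the goal."""

    def __init__(self, size=10, seed=0):
        self.size = size
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.pos = 0
        # A fresh goal per episode; it never enters the dynamics below.
        self.goal = int(self.rng.integers(1, self.size))
        return self.pos, self.goal

    def step(self, action):
        # Goal-independent dynamics: action 1 moves right, anything else left.
        delta = 1 if action == 1 else -1
        self.pos = int(np.clip(self.pos + delta, 0, self.size - 1))
        reward = 1.0 if self.pos == self.goal else 0.0  # only the reward sees the goal
        done = self.pos == self.goal
        return self.pos, reward, done
```

Usage: repeatedly stepping right from the start cell reaches any sampled goal, collecting exactly one unit of reward, which is the sparse-reward structure most GCRL papers in this list assume.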
3. YAMADA, Takaya, and Koich OGAWARA. "Goal-Conditioned Reinforcement Learning with Latent Representations using Contrastive Learning." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2021 (2021): 1P1-I15. http://dx.doi.org/10.1299/jsmermd.2021.1p1-i15.

4. Qian, Zhifeng, Mingyu You, Hongjun Zhou, and Bin He. "Weakly Supervised Disentangled Representation for Goal-Conditioned Reinforcement Learning." IEEE Robotics and Automation Letters 7, no. 2 (2022): 2202–9. http://dx.doi.org/10.1109/lra.2022.3141148.

5. TANIGUCHI, Asuto, Fumihiro SASAKI, and Ryota YAMASHINA. "Goal-Conditioned Reinforcement Learning with Extended Floyd-Warshall method." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2020 (2020): 2A1-L01. http://dx.doi.org/10.1299/jsmermd.2020.2a1-l01.

6. Li, Yao, YuHui Wang, and XiaoYang Tan. "Highly valued subgoal generation for efficient goal-conditioned reinforcement learning." Neural Networks 181 (January 2025): 106825. http://dx.doi.org/10.1016/j.neunet.2024.106825.

7. Elguea-Aguinaco, Íñigo, Antonio Serrano-Muñoz, Dimitrios Chrysostomou, Ibai Inziarte-Hidalgo, Simon Bøgh, and Nestor Arana-Arexolaleiba. "Goal-Conditioned Reinforcement Learning within a Human-Robot Disassembly Environment." Applied Sciences 12, no. 22 (2022): 11610. http://dx.doi.org/10.3390/app122211610.

Abstract:
The introduction of collaborative robots in industrial environments reinforces the need to provide these robots with better cognition to accomplish their tasks while fostering worker safety without entering into safety shutdowns that reduce workflow and production times. This paper presents a novel strategy that combines the execution of contact-rich tasks, namely disassembly, with real-time collision avoidance through machine learning for safe human-robot interaction. Specifically, a goal-conditioned reinforcement learning approach is proposed, in which the removal direction of a peg …

8. Liu, Bo, Yihao Feng, Qiang Liu, and Peter Stone. "Metric Residual Network for Sample Efficient Goal-Conditioned Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (2023): 8799–806. http://dx.doi.org/10.1609/aaai.v37i7.26058.

Abstract:
Goal-conditioned reinforcement learning (GCRL) has a wide range of potential real-world applications, including manipulation and navigation problems in robotics. Especially in such robotics tasks, sample efficiency is of the utmost importance for GCRL since, by default, the agent is only rewarded when it reaches its goal. While several methods have been proposed to improve the sample efficiency of GCRL, one relatively under-studied approach is the design of neural architectures to support sample efficiency. In this work, we introduce a novel neural architecture for GCRL …

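The sparse-reward default mentioned in the abstract above (the agent is rewarded only at the goal), together with the goal-relabeling idea cited elsewhere in this list, can be sketched in a few lines. The function names and the HER-style relabeling rule are our illustrative choices, not this paper's architecture:

```python
import numpy as np

def sparse_reward(achieved, goal, tol=1e-6):
    """Default GCRL reward: non-zero only when the achieved state is the goal."""
    return float(np.linalg.norm(np.asarray(achieved, float) - np.asarray(goal, float)) <= tol)

def relabel_with_hindsight(transitions):
    """Hindsight relabeling in the spirit of HER (an illustrative sketch):
    treat the episode's final achieved state as if it had been the goal all
    along, so even a completely failed episode yields a rewarded transition.

    `transitions` is a list of (state, next_state) pairs from one episode."""
    substitute_goal = transitions[-1][1]  # last achieved state becomes the goal
    return [(s, s_next, substitute_goal, sparse_reward(s_next, substitute_goal))
            for (s, s_next) in transitions]
```

This is one reason sample efficiency is central in GCRL: without such tricks (or better architectures, as this paper proposes), almost every transition carries zero learning signal.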
9. Ding, Hongyu, Yuanze Tang, Qing Wu, Bo Wang, Chunlin Chen, and Zhi Wang. "Magnetic Field-Based Reward Shaping for Goal-Conditioned Reinforcement Learning." IEEE/CAA Journal of Automatica Sinica 10, no. 12 (2023): 2233–47. http://dx.doi.org/10.1109/jas.2023.123477.

10. Faccio, Francesco, Vincent Herrmann, Aditya Ramesh, Louis Kirsch, and Jürgen Schmidhuber. "Goal-Conditioned Generators of Deep Policies." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (2023): 7503–11. http://dx.doi.org/10.1609/aaai.v37i6.25912.

Abstract:
Goal-conditioned Reinforcement Learning (RL) aims at learning optimal policies, given goals encoded in special command inputs. Here we study goal-conditioned neural nets (NNs) that learn to generate deep NN policies in the form of context-specific weight matrices, similar to Fast Weight Programmers and other methods from the 1990s. Using context commands of the form "generate a policy that achieves a desired expected return," our NN generators combine powerful exploration of parameter space with generalization across commands to iteratively find better and better policies. …

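The command-to-weights idea in the abstract above can be sketched as a tiny linear hypernetwork. All shapes and names below are our assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def generate_policy_weights(desired_return, gen_w, obs_dim=4, n_actions=2):
    """Illustrative command-conditioned policy generator: a linear generator
    maps a scalar return command to the flat weights of a small linear
    policy, which then acts via argmax(W @ obs)."""
    command = np.array([float(desired_return), 1.0])  # return command + bias feature
    flat = gen_w @ command                            # gen_w: (n_actions * obs_dim, 2)
    return flat.reshape(n_actions, obs_dim)           # policy weight matrix W
```

Different return commands yield different policy weight matrices from the same generator, which is the mechanism that lets such generators "explore parameter space" by varying the command.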

Dissertations / Theses on the topic "Goal-conditioned reinforcement learning"

1. Chenu, Alexandre. "Leveraging sequentiality in Robot Learning: Application of the Divide & Conquer paradigm to Neuro-Evolution and Deep Reinforcement Learning." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS342.

Abstract:
"To succeed, it is not enough to plan ahead; one must also know how to improvise." This quotation from Isaac Asimov, founding father of robotics and author of the Three Laws of Robotics, underlines the importance of being able to adapt and act in the present moment in order to succeed. Even though robots can today solve tasks of a complexity that was unimaginable only a few years ago, they still lack these adaptive capabilities, which prevents them from being deployed on a larger scale. To remedy this lack of adaptability, roboticists use …

Book chapters on the topic "Goal-conditioned reinforcement learning"

1. Steccanella, Lorenzo, and Anders Jonsson. "State Representation Learning for Goal-Conditioned Reinforcement Learning." In Machine Learning and Knowledge Discovery in Databases. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26412-2_6.

2. Amadou, Abdoul Aziz, Vivek Singh, Florin C. Ghesu, et al. "Goal-Conditioned Reinforcement Learning for Ultrasound Navigation Guidance." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72120-5_30.

3. Zou, Qiming, and Einoshin Suzuki. "Contrastive Goal Grouping for Policy Generalization in Goal-Conditioned Reinforcement Learning." In Neural Information Processing. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92185-9_20.

4. Vasilev, Denis V., Artem Latyshev, Petr Kuderov, Nutsu Shiman, and Aleksandr I. Panov. "Dynamical Distance Adaptation in Goal-Conditioned Model-Based Reinforcement Learning." In Advances in Neural Computation, Machine Learning, and Cognitive Research VIII. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-73691-9_17.

5. Vasilev, Denis V., Artem Latyshev, Petr Kuderov, Nutsu Shiman, and Aleksandr I. Panov. "Dynamical Distance Adaptation in Goal-Conditioned Model-Based Reinforcement Learning." In Studies in Computational Intelligence. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-80463-2_17.

6. Hashimoto, Takanori, Teijiro Isokawa, and Naotake Kamiura. "Virtual Command Allocation: Enhancing Hexapod Robot Locomotion Through Goal-Conditioned Reinforcement Learning." In Lecture Notes in Computer Science. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-6579-2_11.

7. Volovikova, Zoya, Alexey Skrynnik, Petr Kuderov, and Aleksandr I. Panov. "Instruction Following with Goal-Conditioned Reinforcement Learning in Virtual Environments." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240545.

Abstract:
In this study, we address the issue of enabling an artificial intelligence agent to execute complex language instructions within virtual environments. In our framework, we assume that these instructions involve intricate linguistic structures and multiple interdependent tasks that must be navigated successfully to achieve the desired outcomes. To effectively manage these complexities, we propose a hierarchical framework that combines the deep language comprehension of large language models with the adaptive action-execution capabilities of reinforcement learning agents: the language module …

Conference papers on the topic "Goal-conditioned reinforcement learning"

1. Rens, Gavin. "Proposing Hierarchical Goal-Conditioned Policy Planning in Multi-Goal Reinforcement Learning." In 17th International Conference on Agents and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2025. https://doi.org/10.5220/0013238900003890.

2. Liang, Tianhao, Tianyang Chen, Xianwei Chen, and Qinyuan Ren. "Learning Efficient Representations for Goal-conditioned Reinforcement Learning via Tabu Search." In 2024 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE International Conference on Robotics, Automation and Mechatronics (RAM). IEEE, 2024. http://dx.doi.org/10.1109/cis-ram61939.2024.10672981.

3. Ransiek, Joshua, Johannes Plaum, Jacob Langner, and Eric Sax. "GOOSE: Goal-Conditioned Reinforcement Learning for Safety-Critical Scenario Generation." In 2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2024. https://doi.org/10.1109/itsc58415.2024.10919498.

4. Kim, Taeyoung, Taemin Kang, Seungah Son, Kuk Won Ko, and Dongsoo Har. "Goal-Conditioned Reinforcement Learning Approach for Autonomous Parking in Complex Environments." In 2025 International Conference on Artificial Intelligence in Information and Communication (ICAIIC). IEEE, 2025. https://doi.org/10.1109/icaiic64266.2025.10920799.

5. Kim, Seongsu, and Jun Moon. "Offline Goal-Conditioned Model-Based Reinforcement Learning in Pixel-Based Environment." In 2024 15th International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2024. https://doi.org/10.1109/ictc62082.2024.10827048.

6. Cao, Chenyang, Zichen Yan, Renhao Lu, Junbo Tan, and Xueqian Wang. "Offline Goal-Conditioned Reinforcement Learning for Safety-Critical Tasks with Recovery Policy." In 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024. http://dx.doi.org/10.1109/icra57147.2024.10610856.

7. Liu, Minghuan, Menghui Zhu, and Weinan Zhang. "Goal-Conditioned Reinforcement Learning: Problems and Solutions." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/770.

Abstract:
Goal-conditioned reinforcement learning (GCRL), related to a set of complex RL problems, trains an agent to achieve different goals under particular scenarios. Compared to standard RL solutions that learn a policy depending solely on the states or observations, GCRL additionally requires the agent to make decisions according to different goals. In this survey, we provide a comprehensive overview of the challenges and algorithms for GCRL. First, we answer what basic problems are studied in this field. Then, we explain how goals are represented and present how existing solutions are …
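The survey's core distinction — a standard policy pi(a | s) versus a goal-conditioned policy pi(a | s, g) — reduces, in the simplest case, to feeding the goal to the policy alongside the state. The function below is a generic sketch of that conditioning, not any specific surveyed algorithm:

```python
import numpy as np

def goal_conditioned_action(state, goal, weights):
    """Minimal GCRL conditioning sketch: the policy receives the goal as an
    extra input by concatenating it with the state; the linear `weights`
    stand in for a trained network."""
    x = np.concatenate([np.asarray(state, float), np.asarray(goal, float)])
    logits = weights @ x            # one linear layer in place of a deep net
    return int(np.argmax(logits))   # greedy action choice
```

With suitable weights, the same state maps to different actions under different goals, which is exactly the extra requirement the abstract attributes to GCRL.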
8. Bortkiewicz, Michał, Jakub Łyskawa, Paweł Wawrzyński, et al. "Subgoal Reachability in Goal Conditioned Hierarchical Reinforcement Learning." In 16th International Conference on Agents and Artificial Intelligence. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012326200003636.

9. Yu, Zhe, Kailai Sun, Chenghao Li, Dianyu Zhong, Yiqin Yang, and Qianchuan Zhao. "A Goal-Conditioned Reinforcement Learning Algorithm with Environment Modeling." In 2023 42nd Chinese Control Conference (CCC). IEEE, 2023. http://dx.doi.org/10.23919/ccc58697.2023.10240963.

10. Zou, Qiming, and Einoshin Suzuki. "Sample-Efficient Goal-Conditioned Reinforcement Learning via Predictive Information Bottleneck for Goal Representation Learning." In 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023. http://dx.doi.org/10.1109/icra48891.2023.10161213.