Academic literature on the topic 'Reinforcement Learning in Databases'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Reinforcement Learning in Databases.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Reinforcement Learning in Databases"

1

Pakzad, Armie E., Raine Mattheus Manuel, Jerrick Spencer Uy, Xavier Francis Asuncion, Joshua Vincent Ligayo, and Lawrence Materum. "Reinforcement Learning-Based Television White Space Database." Baghdad Science Journal 18, no. 2(Suppl.) (2021): 0947. http://dx.doi.org/10.21123/bsj.2021.18.2(suppl.).0947.

Full text
Abstract:
Television white spaces (TVWSs) refer to the unused part of the spectrum under the very high frequency (VHF) and ultra-high frequency (UHF) bands. TVWS are frequencies under licenced primary users (PUs) that are not being used and are available for secondary users (SUs). There are several ways of implementing TVWS in communications, one of which is the use of TVWS database (TVWSDB). The primary purpose of TVWSDB is to protect PUs from interference with SUs. There are several geolocation databases available for this purpose. However, it is unclear if those databases have the prediction feature …
APA, Harvard, Vancouver, ISO, and other styles
2

Nzenwata, Uchenna Jeremiah, Goodness Oluwamayokun Opateye, Noze-Otote Aisosa, et al. "Autonomous Database Systems – A Systematic Review of Self-Healing and Self-Tuning Database Systems." Asian Journal of Research in Computer Science 18, no. 7 (2025): 77–87. https://doi.org/10.9734/ajrcos/2025/v18i7721.

Full text
Abstract:
Problem Statement: Autonomous database systems represent a significant change in the management of databases, utilizing Machine Learning (ML) and Artificial Intelligence (AI) in order to carry out self-healing and self-tuning with minimal human intervention. Objectives: This systematic review investigates the defining characteristics, AI/ML techniques, challenges and the future trends of self-healing and self-tuning autonomous databases. Methodology: The research questions were answered integrating findings from 35 current literatures between 2020 and 2025. These literatures were obtained from …
APA, Harvard, Vancouver, ISO, and other styles
3

Kumar, Ritesh. "AI-Augmented Database Indexing for High-Performance Query Optimization." International Scientific Journal of Engineering and Management 02, no. 11 (2023): 1–7. https://doi.org/10.55041/isjem01292.

Full text
Abstract:
Database indexing plays a crucial role in optimizing query performance, particularly in cloud-native and high-performance computing environments. Traditional indexing techniques often struggle to adapt dynamically to varying workloads, leading to suboptimal query execution times and increased computational overhead. This paper presents an AI-augmented approach to database indexing that leverages reinforcement learning-based adaptive indexing and machine learning-driven query optimization. By integrating AI models into indexing strategies, databases can dynamically adjust index structu…
APA, Harvard, Vancouver, ISO, and other styles
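The adaptive-indexing idea described in the abstract above can be sketched as a small reinforcement-learning loop. The following is a hedged illustration only, not the paper's method: index choice is treated as an epsilon-greedy multi-armed bandit, and the candidate index names and the simulated latency model are invented for the example.

```python
import random

# Hypothetical sketch of "reinforcement learning-based adaptive indexing":
# index choice as an epsilon-greedy multi-armed bandit. The candidate
# indexes and the simulated latency model are invented for illustration.
CANDIDATES = ["btree_on_id", "hash_on_id", "btree_on_ts"]

def simulated_latency(index_name):
    # Stand-in for timing a representative query under the chosen index.
    base = {"btree_on_id": 1.0, "hash_on_id": 0.6, "btree_on_ts": 1.4}
    return base[index_name] + random.uniform(-0.05, 0.05)

def train(steps=2000, epsilon=0.1, seed=0):
    random.seed(seed)
    q = {a: 0.0 for a in CANDIDATES}      # estimated reward per index
    n = {a: 0 for a in CANDIDATES}
    for _ in range(steps):
        if random.random() < epsilon:
            a = random.choice(CANDIDATES)  # explore
        else:
            a = max(q, key=q.get)          # exploit
        r = -simulated_latency(a)          # lower latency => higher reward
        n[a] += 1
        q[a] += (r - q[a]) / n[a]          # incremental mean update
    return q

q_values = train()
best_index = max(q_values, key=q_values.get)
```

In a real system the reward would come from measured query latency under the live workload rather than a fixed table.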
4

Bhattarai, Sushil, and Suman Thapaliya. "A Novel Approach to Self-tuning Database Systems Using Reinforcement Learning Techniques." NPRC Journal of Multidisciplinary Research 1, no. 7 (2024): 143–49. https://doi.org/10.3126/nprcjmr.v1i7.72480.

Full text
Abstract:
The rapid evolution of data-intensive applications has intensified the need for efficient and adaptive database systems. Traditional database tuning methods, relying on manual interventions and rule-based optimizations, often fall short in handling dynamic workloads and complex parameter interdependencies. This paper introduces a novel approach to self-tuning database systems using reinforcement learning (RL) techniques, enabling databases to autonomously optimize configurations such as indexing strategies, memory allocation, and query execution plans. The proposed framework significantly enha…
APA, Harvard, Vancouver, ISO, and other styles
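The general shape of RL-based knob tuning described in the abstract above can be illustrated with a minimal tabular Q-learning sketch. Everything here is an assumption for demonstration: a single knob with five discrete levels, and a U-shaped latency curve standing in for benchmarking the database after each adjustment; none of it is taken from the paper itself.

```python
import random

# Minimal tabular Q-learning sketch of self-tuning a single knob (say, a
# buffer-size level 0..4). The U-shaped latency curve is a made-up stand-in
# for benchmarking the database after each adjustment.
LEVELS = 5
ACTIONS = (-1, 0, 1)   # decrease, keep, or increase the knob level

def latency(level):
    return (level - 3) ** 2 + 1.0   # assumed (unknown) optimum at level 3

def step(level, action):
    nxt = min(LEVELS - 1, max(0, level + action))
    return nxt, -latency(nxt)       # reward = negative latency

def train(episodes=500, horizon=20, alpha=0.3, gamma=0.9, eps=0.2, seed=1):
    random.seed(seed)
    Q = [[0.0] * len(ACTIONS) for _ in range(LEVELS)]
    for _ in range(episodes):
        s = random.randrange(LEVELS)
        for _ in range(horizon):
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))          # explore
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
            s2, r = step(s, ACTIONS[a])
            # standard Q-learning update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train()

def greedy_walk(start, steps=10):
    # Follow the learned greedy policy; it should settle at the sweet spot.
    s = start
    for _ in range(steps):
        a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
        s, _ = step(s, ACTIONS[a])
    return s
```

Real tuners face a far larger, continuous configuration space, which is why papers in this list turn to deep RL rather than a table.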
5

Shi, Lei, Tian Li, Lin Wei, Yongcai Tao, Cuixia Li, and Yufei Gao. "FASTune: Towards Fast and Stable Database Tuning System with Reinforcement Learning." Electronics 12, no. 10 (2023): 2168. http://dx.doi.org/10.3390/electronics12102168.

Full text
Abstract:
Configuration tuning is vital to achieving high performance for a database management system (DBMS). Recently, automatic tuning methods using Reinforcement Learning (RL) have been explored to find better configurations compared with database administrators (DBAs) and heuristics. However, existing RL-based methods still have several limitations: (1) Excessive overhead due to reliance on cloned databases; (2) trial-and-error strategy may produce dangerous configurations that lead to database failure; (3) lack the ability to handle dynamic workload. To address the above challenges, a fast and sta…
APA, Harvard, Vancouver, ISO, and other styles
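One of the limitations named in the abstract above, dangerous trial-and-error configurations, suggests screening candidates before they are applied. The sketch below is our own illustration of that idea, not FASTune's mechanism: the knob names, memory budget, and rules are all invented.

```python
# Illustrative safety filter for RL-proposed knob settings: screen a
# candidate configuration against simple validity rules before applying it.
# The knob names, the 4 GB budget, and the rules are invented assumptions.
MEM_BUDGET_MB = 4096

def is_safe(config):
    if config["buffer_pool_mb"] + config["work_mem_mb"] > MEM_BUDGET_MB:
        return False                      # would overcommit memory
    if config["max_connections"] < 1:
        return False                      # nonsensical setting
    return True

def apply_if_safe(config, apply_fn):
    # Only forward configurations that pass the screen to the database.
    if not is_safe(config):
        return False
    apply_fn(config)
    return True

applied = []
ok = apply_if_safe({"buffer_pool_mb": 2048, "work_mem_mb": 512,
                    "max_connections": 100}, applied.append)
bad = apply_if_safe({"buffer_pool_mb": 4000, "work_mem_mb": 512,
                     "max_connections": 100}, applied.append)
```

A production system would add a learned latency predictor on top of such hard rules, but the gatekeeping pattern is the same.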
6

Blank, Sebastian, Florian Wilhelm, Hans-Peter Zorn, and Achim Rettinger. "Querying NoSQL with Deep Learning to Answer Natural Language Questions." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9416–21. http://dx.doi.org/10.1609/aaai.v33i01.33019416.

Full text
Abstract:
Almost all of today’s knowledge is stored in databases and thus can only be accessed with the help of domain specific query languages, strongly limiting the number of people which can access the data. In this work, we demonstrate an end-to-end trainable question answering (QA) system that allows a user to query an external NoSQL database by using natural language. A major challenge of such a system is the non-differentiability of database operations which we overcome by applying policy-based reinforcement learning. We evaluate our approach on Facebook’s bAbI Movie Dialog dataset and achieve a …
APA, Harvard, Vancouver, ISO, and other styles
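The key trick named in the abstract above, policy-based RL around a non-differentiable database operation, can be shown with a tiny REINFORCE sketch. The operator names and the reward model are hypothetical: the policy picks one of several query operators, and executing the query is a black box that only returns a reward, so no gradient flows through the database.

```python
import math, random

# Sketch of policy-based RL over a non-differentiable "database operation":
# the policy picks one of several hypothetical query operators; the query
# executor is a black box returning only a reward, so we use REINFORCE
# instead of backpropagating through the database. Names are invented.
OPS = ["filter_by_actor", "filter_by_genre", "filter_by_year"]
CORRECT = 0  # assume the gold operator for this question type is OPS[0]

def softmax(theta):
    m = max(theta)
    e = [math.exp(t - m) for t in theta]
    z = sum(e)
    return [x / z for x in e]

def execute(op_index):
    # Non-differentiable black box: reward only, no gradients.
    return 1.0 if op_index == CORRECT else 0.0

def reinforce(steps=3000, lr=0.1, seed=0):
    random.seed(seed)
    theta = [0.0] * len(OPS)
    for _ in range(steps):
        p = softmax(theta)
        a = random.choices(range(len(OPS)), weights=p)[0]
        r = execute(a)
        # grad of log pi(a) wrt theta_k is 1[k == a] - p_k
        for k in range(len(OPS)):
            theta[k] += lr * r * ((1.0 if k == a else 0.0) - p[k])
    return softmax(theta)

probs = reinforce()
```

With reward arriving only for the correct operator, the policy's probability mass concentrates on it despite the executor being opaque.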
7

Keshireddy, Srikanth Reddy. "Reinforcement Learning Based Optimization of Query Execution Plans in Distributed Databases." Research Briefs on Information and Communication Technology Evolution 11 (March 11, 2025): 42–61. https://doi.org/10.69978/rebicte.v11i.211.

Full text
Abstract:
Troublesome workloads, data heterogeneity, and shifting resource conditions make efficient query execution highly difficult to achieve in distributed database systems. Traditional optimizers will almost always rely on handcrafted methods or static cost models to achieve the desired results, resulting in adaptative failures along the way and serving at best subpar query execution plans (QEPs). This paper presents a new architecture meant to optimize QEPs by utilizing deep policy reinforcement learning (RL) for dynamically shifting execution strategy adaptations over distributed nodes. The propo…
APA, Harvard, Vancouver, ISO, and other styles
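Query-execution-plan optimization of the kind described above usually reduces to scoring candidate plans. As a hedged sketch, not the paper's architecture, the snippet below samples join orders, scores each by the classic sum-of-intermediate-result-sizes cost, and keeps running value estimates; the table cardinalities and the uniform selectivity are invented.

```python
import random

# Hedged sketch of RL-style join-order search: sample complete join orders,
# score each by the sum of estimated intermediate result sizes, and keep a
# running value estimate per order. Cardinalities and selectivity invented.
CARD = {"users": 1000, "orders": 5000, "items": 200}
SEL = 0.001  # assumed uniform join selectivity

def plan_cost(order):
    size, cost = CARD[order[0]], 0.0
    for t in order[1:]:
        size = size * CARD[t] * SEL   # estimated intermediate result size
        cost += size
    return cost

def learn_order(episodes=300, eps=0.3, seed=0):
    random.seed(seed)
    tables = list(CARD)
    q, n = {}, {}
    for _ in range(episodes):
        if random.random() < eps or not q:
            order = tuple(random.sample(tables, len(tables)))  # explore
        else:
            order = max(q, key=q.get)                          # exploit
        n[order] = n.get(order, 0) + 1
        r = -plan_cost(order)
        q[order] = q.get(order, 0.0) + (r - q.get(order, 0.0)) / n[order]
    return max(q, key=q.get)

best_order = learn_order()
```

Real systems (and the deep-RL papers in this list) instead learn per-step value functions over partial plans, since enumerating whole orders does not scale past a handful of tables.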
8

Sharma, Manas. "Machine Learning-Based Inferential Statistics for Query Optimization: A Novel Approach." European Journal of Computer Science and Information Technology 13, no. 18 (2025): 76–90. https://doi.org/10.37745/ejcsit.2013/vol13n187690.

Full text
Abstract:
The ML-based inferential statistics framework presents a novel solution for database query optimization that addresses critical challenges in statistics maintenance and cardinality estimation. By combining Bayesian learning and reinforcement learning modules, the framework enables continuous adaptation to changing data patterns while minimizing computational overhead. The solution offers improved query performance through better plan selection, reduced resource consumption, and enhanced accuracy in cardinality estimation. The framework's dynamic histogram redistribution mechanism ensures optim…
APA, Harvard, Vancouver, ISO, and other styles
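The "dynamic histogram redistribution" mentioned in the abstract above can be hedged into a concrete sketch: periodically rebuild equal-depth bucket boundaries from a fresh sample so every bucket covers roughly the same number of rows, then estimate range cardinalities from bucket counts. The helper names and the uniform-bucket assumption are ours, not the paper's.

```python
import bisect

# Hedged sketch of histogram-based cardinality estimation: equal-depth
# boundaries rebuilt from a sample, plus a range estimate that assumes
# every bucket holds the same number of rows.
def equal_depth_bounds(values, buckets):
    vals = sorted(values)
    n = len(vals)
    # upper boundary of each bucket except the last
    return [vals[(i * n) // buckets] for i in range(1, buckets)]

def estimate_range_cardinality(bounds, n_rows, lo, hi):
    buckets = len(bounds) + 1
    b_lo = bisect.bisect_left(bounds, lo)    # first bucket touched
    b_hi = bisect.bisect_right(bounds, hi)   # last bucket touched
    covered = b_hi - b_lo + 1
    return covered * n_rows / buckets        # uniform-depth assumption

bounds = equal_depth_bounds(range(1000), 10)
```

"Redistribution" in the paper's sense would mean triggering the rebuild when observed errors drift, rather than on a fixed schedule.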
9

Sassi, Najla, and Wassim Jaziri. "Efficient AI-Driven Query Optimization in Large-Scale Databases: A Reinforcement Learning and Graph-Based Approach." Mathematics 13, no. 11 (2025): 1700. https://doi.org/10.3390/math13111700.

Full text
Abstract:
As data-centric applications become increasingly complex, understanding effective query optimization in large-scale relational databases is crucial for managing this complexity. Yet, traditional cost-based and heuristic approaches simply do not scale, adapt, or remain accurate in highly dynamic multi-join queries. This research work proposes the reinforcement learning and graph-based hybrid query optimizer (GRQO), the first ever to apply reinforcement learning and graph theory for optimizing query execution plans, specifically in join order selection and cardinality estimation. By employing pr…
APA, Harvard, Vancouver, ISO, and other styles
10

Sun, Jun, Feng Ye, Nadia Nedjah, Ming Zhang, and Dong Xu. "Workload-Aware Performance Tuning for Multimodel Databases Based on Deep Reinforcement Learning." International Journal of Intelligent Systems 2023 (September 5, 2023): 1–17. http://dx.doi.org/10.1155/2023/8835111.

Full text
Abstract:
Currently, multimodel databases are widely used in modern applications, but the default configuration often fails to achieve the best performance. How to efficiently manage and tune the performance of multimodel databases is still a problem. Therefore, in this study, we present a configuration parameter tuning tool MMDTune+ for ArangoDB. First, the selection of configuration parameters is based on the random forest algorithm for feature selection. Second, a workload-aware mechanism is based on k-means++ and the Pearson correlation coefficient to detect workload changes and match the empirical …
APA, Harvard, Vancouver, ISO, and other styles
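The workload-awareness step in the abstract above, detecting workload changes via the Pearson correlation coefficient, is easy to sketch: represent a workload window as a vector of per-query-type frequencies and flag a change when its correlation with the previous window drops below a threshold. The vectors and the 0.8 cutoff are illustrative assumptions, not values from the paper.

```python
import math

# Hedged sketch of Pearson-based workload-change detection. A workload
# window is a vector of per-query-type frequencies; a low correlation with
# the previous window signals a shift. The 0.8 cutoff is arbitrary.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def workload_changed(prev, cur, threshold=0.8):
    return pearson(prev, cur) < threshold

# Illustrative windows: % of SELECT / INSERT / UPDATE statements.
read_heavy  = [90, 5, 5]
still_reads = [85, 8, 7]
write_heavy = [10, 60, 30]
```

On a detected change, a tuner like the one described would re-cluster the workload (k-means++ in the paper) and switch to the matching empirical configuration.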
More sources

Dissertations / Theses on the topic "Reinforcement Learning in Databases"

1

Izquierdo, Ayala Pablo. "Learning comparison: Reinforcement Learning vs Inverse Reinforcement Learning: How well does inverse reinforcement learning perform in simple markov decision processes in comparison to reinforcement learning?" Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259371.

Full text
Abstract:
This research project elaborates a qualitative comparison between two different learning approaches, Reinforcement Learning (RL) and Inverse Reinforcement Learning (IRL) over the Gridworld Markov Decision Process. The interest focus will be set on the second learning paradigm, IRL, as it is considered to be relatively new and little work has been developed in this field of study. As observed, RL outperforms IRL, obtaining a correct solution in all the different scenarios studied. However, the behaviour of the IRL algorithms can be improved and this will be shown and analyzed as part of the sco…
APA, Harvard, Vancouver, ISO, and other styles
2

Seymour, B. J. "Aversive reinforcement learning." Thesis, University College London (University of London), 2010. http://discovery.ucl.ac.uk/800107/.

Full text
Abstract:
We hypothesise that human aversive learning can be described algorithmically by Reinforcement Learning models. Our first experiment uses a second-order conditioning design to study sequential outcome prediction. We show that aversive prediction errors are expressed robustly in the ventral striatum, supporting the validity of temporal difference algorithms (as in reward learning), and suggesting a putative critical area for appetitive-aversive interactions. With this in mind, the second experiment explores the nature of pain relief, which as expounded in theories of motivational opponency, is r…
APA, Harvard, Vancouver, ISO, and other styles
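The temporal-difference account referenced in the abstract above can be made concrete with a toy second-order conditioning chain: CS2 precedes CS1, which precedes an aversive outcome, and TD(0) propagates the outcome's negative value back to earlier predictors. The states, magnitudes, and learning rate are illustrative, not taken from the thesis.

```python
# Toy TD(0) model of second-order aversive conditioning: a chain
# CS2 -> CS1 -> shock (reward -1). The TD prediction error propagates the
# aversive value backward to earlier predictors. All numbers illustrative.
def td_learn(episodes=200, alpha=0.1, gamma=1.0):
    V = {"CS2": 0.0, "CS1": 0.0, "terminal": 0.0}
    for _ in range(episodes):
        # one trial: CS2, then CS1, then the aversive outcome
        transitions = [("CS2", "CS1", 0.0), ("CS1", "terminal", -1.0)]
        for s, s2, r in transitions:
            delta = r + gamma * V[s2] - V[s]   # TD prediction error
            V[s] += alpha * delta
    return V

V = td_learn()
```

After training, both conditioned stimuli carry negative value even though only CS1 is ever paired directly with the outcome, which is the signature of second-order conditioning that the thesis relates to striatal prediction-error signals.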
3

Akrour, Riad. "Robust Preference Learning-based Reinforcement Learning." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112236/document.

Full text
Abstract:
The contributions of this thesis centre on sequential decision-making and more specifically on Reinforcement Learning (RL). Rooted in statistical learning, alongside supervised and unsupervised learning, RL has gained popularity over the last two decades thanks to breakthroughs that are both practical and theoretical. RL assumes that the agent (learner) and its environment follow a Markovian stochastic decision process over a space of states and actions. The process is called a decision process because the agent is called upon to ch…
APA, Harvard, Vancouver, ISO, and other styles
4

Tabell, Johnsson Marco, and Ala Jafar. "Efficiency Comparison Between Curriculum Reinforcement Learning & Reinforcement Learning Using ML-Agents." Thesis, Blekinge Tekniska Högskola, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20218.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yang, Zhaoyuan Yang. "Adversarial Reinforcement Learning for Control System Design: A Deep Reinforcement Learning Approach." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu152411491981452.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cortesi, Daniele. "Reinforcement Learning in Rogue." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16138/.

Full text
Abstract:
In this work we use Reinforcement Learning to play the famous Rogue, a dungeon-crawler videogame and father of the rogue-like genre. By employing different algorithms we substantially improve on the results obtained in previous work, addressing and solving the problems that had arisen. We then devise and perform new experiments to test the limits of our own solution and encounter additional and unexpected issues in the process. In one of the investigated scenarios we clearly see that our approach is not yet enough even to perform better than a random agent, and we propose ideas for future work.
APA, Harvard, Vancouver, ISO, and other styles
7

Girgin, Sertan. "Abstraction In Reinforcement Learning." Phd thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608257/index.pdf.

Full text
Abstract:
Reinforcement learning is the problem faced by an agent that must learn behavior through trial-and-error interactions with a dynamic environment. Generally, the problem to be solved contains subtasks that repeat at different regions of the state space. Without any guidance an agent has to learn the solutions of all subtask instances independently, which degrades the learning performance. In this thesis, we propose two approaches to build connections between different regions of the search space, leading to better utilization of gained experience and accelerated learning. In the fir…
APA, Harvard, Vancouver, ISO, and other styles
8

Suay, Halit Bener. "Reinforcement Learning from Demonstration." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-dissertations/173.

Full text
Abstract:
Off-the-shelf Reinforcement Learning (RL) algorithms suffer from slow learning performance, partly because they are expected to learn a task from scratch merely through an agent's own experience. In this thesis, we show that learning from scratch is a limiting factor for the learning performance, and that when prior knowledge is available RL agents can learn a task faster. We evaluate relevant previous work and our own algorithms in various experiments. Our first contribution is the first implementation and evaluation of an existing interactive RL algorithm in a real-world domain with a human …
APA, Harvard, Vancouver, ISO, and other styles
9

Gao, Yang. "Argumentation accelerated reinforcement learning." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/26603.

Full text
Abstract:
Reinforcement Learning (RL) is a popular statistical Artificial Intelligence (AI) technique for building autonomous agents, but it suffers from the curse of dimensionality: the computational requirement for obtaining the optimal policies grows exponentially with the size of the state space. Integrating heuristics into RL has proven to be an effective approach to combat this curse, but deriving high-quality heuristics from people's (typically conflicting) domain knowledge is challenging, yet it received little research attention. Argumentation theory is a logic-based AI technique well-known for …
APA, Harvard, Vancouver, ISO, and other styles
10

Alexander, John W. "Transfer in reinforcement learning." Thesis, University of Aberdeen, 2015. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=227908.

Full text
Abstract:
The problem of developing skill repertoires autonomously in robotics and artificial intelligence is becoming ever more pressing. Currently, the issues of how to apply prior knowledge to new situations and which knowledge to apply have not been sufficiently studied. We present a transfer setting where a reinforcement learning agent faces multiple problem solving tasks drawn from an unknown generative process, where each task has similar dynamics. The task dynamics are changed by varying the transition function between states. The tasks are presented sequentially with the latest task presente…
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Reinforcement Learning in Databases"

1

Sutton, Richard S. Reinforcement Learning. Springer US, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wiering, Marco, and Martijn van Otterlo, eds. Reinforcement Learning. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27645-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sutton, Richard S., ed. Reinforcement Learning. Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3618-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lorenz, Uwe. Reinforcement Learning. Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-61651-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Nandy, Abhishek, and Manisha Biswas. Reinforcement Learning. Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3285-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sutton, Richard S., ed. Reinforcement Learning. Kluwer Academic Publishers, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lorenz, Uwe. Reinforcement Learning. Springer Berlin Heidelberg, 2024. http://dx.doi.org/10.1007/978-3-662-68311-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Jinna, Frank L. Lewis, and Jialu Fan. Reinforcement Learning. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-28394-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Xiao, Zhiqing. Reinforcement Learning. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-19-4933-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Merrick, Kathryn, and Mary Lou Maher. Motivated Reinforcement Learning. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-540-89187-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Reinforcement Learning in Databases"

1

Zap, Alexander, Tobias Joppen, and Johannes Fürnkranz. "Deep Ordinal Reinforcement Learning." In Machine Learning and Knowledge Discovery in Databases. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-46133-1_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Akrour, Riad, Marc Schoenauer, and Michèle Sebag. "APRIL: Active Preference Learning-Based Reinforcement Learning." In Machine Learning and Knowledge Discovery in Databases. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33486-3_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Michini, Bernard, and Jonathan P. How. "Bayesian Nonparametric Inverse Reinforcement Learning." In Machine Learning and Knowledge Discovery in Databases. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33486-3_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jong, Nicholas K., and Peter Stone. "Compositional Models for Reinforcement Learning." In Machine Learning and Knowledge Discovery in Databases. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04180-8_59.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Di Castro, Dotan, and Shie Mannor. "Adaptive Bases for Reinforcement Learning." In Machine Learning and Knowledge Discovery in Databases. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15880-3_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Guoxi, and Hisashi Kashima. "Batch Reinforcement Learning from Crowds." In Machine Learning and Knowledge Discovery in Databases. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26412-2_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Steccanella, Lorenzo, and Anders Jonsson. "State Representation Learning for Goal-Conditioned Reinforcement Learning." In Machine Learning and Knowledge Discovery in Databases. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26412-2_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Chi, Chetan Gupta, Ahmed Farahat, Kosta Ristovski, and Dipanjan Ghosh. "Equipment Health Indicator Learning Using Deep Reinforcement Learning." In Machine Learning and Knowledge Discovery in Databases. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-10997-4_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Rothkopf, Constantin A., and Christos Dimitrakakis. "Preference Elicitation and Inverse Reinforcement Learning." In Machine Learning and Knowledge Discovery in Databases. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23808-6_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bräm, Timo, Gino Brunner, Oliver Richter, and Roger Wattenhofer. "Attentive Multi-task Deep Reinforcement Learning." In Machine Learning and Knowledge Discovery in Databases. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-46133-1_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Reinforcement Learning in Databases"

1

Saranya, V., G. R. K. Murthy, Purnachandra Rao Alapati, M. Mythili, K. Swarnamughi, and B. Kiran Bala. "Optimizing English Lexical Databases with BERT and Reinforcement Learning." In 2025 Fifth International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies (ICAECT). IEEE, 2025. https://doi.org/10.1109/icaect63952.2025.10958921.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Junhao. "PostgreSQL Database Parameter Optimization Based on Reinforcement Learning." In 2024 9th International Conference on Intelligent Computing and Signal Processing (ICSP). IEEE, 2024. http://dx.doi.org/10.1109/icsp62122.2024.10743757.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

He, Yongfeng. "Design of Database Index Structure based on Optimized Deep Reinforcement Learning." In 2024 International Conference on Distributed Systems, Computer Networks and Cybersecurity (ICDSCNC). IEEE, 2024. https://doi.org/10.1109/icdscnc62492.2024.10939803.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Dong, Wenlong, Wei Liu, Rui Xi, Mengshu Hou, and Shuhuan Fan. "MLETune: Streamlining Database Knob Tuning via Multi-LLMs Experts Guided Deep Reinforcement Learning." In 2024 IEEE 30th International Conference on Parallel and Distributed Systems (ICPADS). IEEE, 2024. https://doi.org/10.1109/icpads63350.2024.00038.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Tembhekar, Trupti Deoram, Tanusha Mittal, M. Lakshminarayana, Naresh Kumar Sripada, Falguni Tlajiya, and Shreyasi Bhattacharya. "Deep Reinforcement Learning-Enhanced Query Optimization Engine for Distributed and Federated Database Management Systems." In 2025 3rd International Conference on Data Science and Information System (ICDSIS). IEEE, 2025. https://doi.org/10.1109/icdsis65355.2025.11070921.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Dowling, J., R. Cunningham, E. Curran, and V. Cahill. "Collaborative reinforcement learning of autonomic behaviour." In Proceedings. 15th International Workshop on Database and Expert Systems Applications, 2004. IEEE, 2004. http://dx.doi.org/10.1109/dexa.2004.1333556.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Rudowsky, I., O. Kulyba, M. Kunin, S. Parsons, and T. Raphan. "Reinforcement Learning Interfaces for Biomedical Database Systems." In Conference Proceedings. Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 2006. http://dx.doi.org/10.1109/iembs.2006.260484.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rudowsky, I., O. Kulyba, M. Kunin, S. Parsons, and T. Raphan. "Reinforcement Learning Interfaces for Biomedical Database Systems." In Conference Proceedings. Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 2006. http://dx.doi.org/10.1109/iembs.2006.4398892.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Singh, Lohit, and Dilip Kumar Sharma. "An architecture for extracting information from hidden web databases using intelligent agent technology through reinforcement learning." In 2013 IEEE Conference on Information & Communication Technologies (ICT). IEEE, 2013. http://dx.doi.org/10.1109/cict.2013.6558108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mai, Genting, Zilong He, Guangba Yu, Zhiming Chen, and Pengfei Chen. "CTuner: Automatic NoSQL Database Tuning with Causal Reinforcement Learning." In Internetware 2024: 15th Asia-Pacific Symposium on Internetware. ACM, 2024. http://dx.doi.org/10.1145/3671016.3674809.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Reinforcement Learning in Databases"

1

Singh, Satinder, Andrew G. Barto, and Nuttapong Chentanez. Intrinsically Motivated Reinforcement Learning. Defense Technical Information Center, 2005. http://dx.doi.org/10.21236/ada440280.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ghavamzadeh, Mohammad, and Sridhar Mahadevan. Hierarchical Multiagent Reinforcement Learning. Defense Technical Information Center, 2004. http://dx.doi.org/10.21236/ada440418.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Harmon, Mance E., and Stephanie S. Harmon. Reinforcement Learning: A Tutorial. Defense Technical Information Center, 1997. http://dx.doi.org/10.21236/ada323194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tadepalli, Prasad, and Alan Fern. Partial Planning Reinforcement Learning. Defense Technical Information Center, 2012. http://dx.doi.org/10.21236/ada574717.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ghavamzadeh, Mohammad, and Sridhar Mahadevan. Hierarchical Average Reward Reinforcement Learning. Defense Technical Information Center, 2003. http://dx.doi.org/10.21236/ada445728.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Johnson, Daniel W. Drive-Reinforcement Learning System Applications. Defense Technical Information Center, 1992. http://dx.doi.org/10.21236/ada264514.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cleland, Andrew. Bounding Box Improvement With Reinforcement Learning. Portland State University Library, 2000. http://dx.doi.org/10.15760/etd.6322.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Jiajie. Learning Financial Investment Strategies using Reinforcement Learning and 'Chan theory'. Iowa State University, 2022. http://dx.doi.org/10.31274/cc-20240624-946.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Baird, Leemon C., III, and A. H. Klopf. Reinforcement Learning With High-Dimensional, Continuous Actions. Defense Technical Information Center, 1993. http://dx.doi.org/10.21236/ada280844.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Obert, James, and Angie Shia. Optimizing Dynamic Timing Analysis with Reinforcement Learning. Office of Scientific and Technical Information (OSTI), 2019. http://dx.doi.org/10.2172/1573933.

Full text
APA, Harvard, Vancouver, ISO, and other styles