Journal articles on the topic 'Database Performance Tuning Query Tuning'

Consult the top 50 journal articles for your research on the topic 'Database Performance Tuning Query Tuning.'

1

Shahwan, Younis Ali, and Maseeh Hajar. "AI-Powered Database Management: Predictive Analytics for Performance Tuning." Engineering and Technology Journal 10, no. 05 (2025): 5100–5112. https://doi.org/10.5281/zenodo.15472012.

Abstract:
As data volumes and query complexities grow in modern applications, ensuring optimal database performance has become increasingly challenging. Traditional manual tuning approaches are reactive, time-consuming, and often lack adaptability to dynamic workloads. This paper explores the integration of Artificial Intelligence (AI) and predictive analytics into database management systems (DBMS) for proactive performance tuning. By leveraging machine learning models, such as regression analysis and anomaly detection, AI-powered systems can forecast performance degradation, recommend tuning actions, and optimize resource allocation in real time. The study reviews state-of-the-art techniques in AI-driven query optimization, index selection, and workload prediction. Experimental insights demonstrate significant improvements in query execution time, throughput, and overall system responsiveness. This paper concludes that predictive analytics not only enhances DBMS efficiency but also paves the way for autonomous database tuning in cloud and enterprise environments.  
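
The forecasting step this abstract describes can be illustrated with a toy regression model. The sketch below is not the authors' system; the telemetry features, threshold, and data are invented for demonstration.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Invented workload telemetry: one row per observation window.
    # Columns: active sessions, buffer-cache hit ratio, rows scanned per query.
    X = np.array([[20, 0.95, 1e4], [80, 0.90, 5e4],
                  [150, 0.80, 2e5], [300, 0.60, 9e5]])
    y = np.array([12.0, 35.0, 110.0, 480.0])       # mean query latency (ms)

    model = LinearRegression().fit(X, y)

    # Forecast latency for a projected workload and alert before it degrades.
    predicted_ms = model.predict(np.array([[400, 0.55, 1.5e6]]))[0]
    if predicted_ms > 200:
        print(f"predicted latency {predicted_ms:.0f} ms: schedule tuning actions")
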
2

Zhao, De Yu. "Research on Improving Oracle Query Performance in MES." Applied Mechanics and Materials 201-202 (October 2012): 39–42. http://dx.doi.org/10.4028/www.scientific.net/amm.201-202.39.

Abstract:
Tuning an Oracle database system is vital to the normal running of the whole system, but it is complicated work. SQL statement tuning is a very critical aspect of database performance tuning. It is an inherently complex activity requiring a high level of expertise in several domains: query optimization, to improve the execution plan selected by the query optimizer; access design, to identify missing access structures; and SQL design, to restructure and simplify the text of a badly written SQL statement. In this paper, the author analyzes the execution procedure of the Oracle optimizer and investigates how to improve Oracle database query performance in MES.
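
As background for the plan-improvement work described above, the usual first step in Oracle SQL tuning is to look at the plan the optimizer selected. A minimal sketch with the python-oracledb driver, where the connection details and the MES table name are placeholders:

    import oracledb

    conn = oracledb.connect(user="mes", password="secret", dsn="dbhost/orclpdb")
    cur = conn.cursor()

    # Ask the optimizer for its plan without executing the statement.
    cur.execute("EXPLAIN PLAN FOR "
                "SELECT * FROM work_orders WHERE status = 'OPEN'")

    # DBMS_XPLAN renders the plan table in readable form.
    cur.execute("SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY())")
    for (line,) in cur:
        print(line)
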
3

Memon, Muhammad Qasim, Jingsha He, Aasma Memon, Khurram Gulzar Rana, and Muhammad Salman Pathan. "Query Processing for Time Efficient Data Retrieval." Indonesian Journal of Electrical Engineering and Computer Science 9, no. 3 (2018): 784–88. https://doi.org/10.11591/ijeecs.v9.i3.pp784-788.

Abstract:
In a database management system (DBMS), retrieving data through Structured Query Language (SQL) is essential to finding a better execution plan for performance. In this paper, we incorporate database objects to optimize query execution time and cost by eliminating poorly written SQL statements. We propose a method of evolving and inserting database constraints as database objects embedded with queries, either adding them for the transactions required by the user or using them to detect queries whose performance can be improved. We analyzed several databases while processing queries and compared a real-time database workload, in which batches of transactions are invoked, against existing tuning approaches. These database objects are coded in a procedural-language environment with rules that make them effective, and they are merged into queries to offer an improved execution plan.
4

Bhattarai, Sushil, and Suman Thapaliya. "A Novel Approach to Self-tuning Database Systems Using Reinforcement Learning Techniques." NPRC Journal of Multidisciplinary Research 1, no. 7 (2024): 143–49. https://doi.org/10.3126/nprcjmr.v1i7.72480.

Abstract:
The rapid evolution of data-intensive applications has intensified the need for efficient and adaptive database systems. Traditional database tuning methods, relying on manual interventions and rule-based optimizations, often fall short in handling dynamic workloads and complex parameter interdependencies. This paper introduces a novel approach to self-tuning database systems using reinforcement learning (RL) techniques, enabling databases to autonomously optimize configurations such as indexing strategies, memory allocation, and query execution plans. The proposed framework significantly enhances performance, scalability, and resource utilization by leveraging RL’s ability to learn from interactions and adapt to changing environments. Experimental evaluations demonstrate up to a 45% improvement in query execution times and superior adaptability to workload variations compared to traditional methods. This study highlights RL's potential to transform database management, setting the stage for next-generation intelligent and autonomous data systems.
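
The RL formulation the paper builds on (states, actions, and rewards over configuration changes) can be gestured at with tabular Q-learning over a single knob. This toy loop is not the proposed framework; measure_latency() is a hypothetical stub standing in for executing the workload.

    import random

    buffer_sizes = [128, 256, 512, 1024]           # states: buffer pool size (MB)
    actions = [-1, 0, 1]                           # shrink, keep, grow one step
    Q = {(s, a): 0.0 for s in range(len(buffer_sizes)) for a in actions}
    alpha, gamma, epsilon = 0.3, 0.9, 0.2

    def measure_latency(size_mb):
        # Hypothetical stub: in reality, run the workload and time it.
        return 1000.0 / size_mb + random.random()

    state = 0
    for _ in range(200):
        if random.random() < epsilon:              # explore
            a = random.choice(actions)
        else:                                      # exploit current estimates
            a = max(actions, key=lambda x: Q[(state, x)])
        nxt = min(max(state + a, 0), len(buffer_sizes) - 1)
        reward = -measure_latency(buffer_sizes[nxt])   # lower latency, higher reward
        Q[(state, a)] += alpha * (reward + gamma * max(Q[(nxt, b)] for b in actions)
                                  - Q[(state, a)])
        state = nxt

    print("learned buffer size:", buffer_sizes[state], "MB")
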
5

Barbosa, Diogo, Le Gruenwald, Laurent D’Orazio, and Jorge Bernardino. "QRLIT: Quantum Reinforcement Learning for Database Index Tuning." Future Internet 16, no. 12 (2024): 439. http://dx.doi.org/10.3390/fi16120439.

Abstract:
Selecting indexes capable of reducing the cost of query processing in database systems is a challenging task, especially in large-scale applications. Quantum computing has been investigated with promising results in areas related to database management, such as query optimization, transaction scheduling, and index tuning. Promising results have also been seen when reinforcement learning is applied for database tuning in classical computing. However, there is no existing research with implementation details and experiment results for index tuning that takes advantage of both quantum computing and reinforcement learning. This paper proposes a new algorithm called QRLIT that uses the power of quantum computing and reinforcement learning for database index tuning. Experiments using the database TPC-H benchmark show that QRLIT exhibits superior performance and a faster convergence compared to its classical counterpart.
6

Oluwafemi Oloruntoba. "AI-Driven Autonomous Database Management: Self-Tuning, Predictive Query Optimization, and Intelligent Indexing in Enterprise IT Environments." World Journal of Advanced Research and Reviews 25, no. 2 (2025): 1558–80. https://doi.org/10.30574/wjarr.2025.25.2.0534.

Abstract:
The rapid growth of enterprise data and the increasing complexity of modern database systems have necessitated a shift from traditional manual database management to autonomous, AI-driven solutions. AI-driven autonomous database management systems (ADBMS) leverage machine learning, predictive analytics, and automation to optimize database performance, reduce administrative overhead, and enhance scalability in enterprise IT environments. Traditional database management approaches often suffer from inefficiencies related to query performance, indexing, workload tuning, and anomaly detection, leading to increased operational costs and performance bottlenecks. This paper explores the key components of AI-driven autonomous database management, focusing on self-tuning mechanisms, predictive query optimization, and intelligent indexing techniques. Self-tuning capabilities leverage AI to analyze workloads, optimize resource allocation, and dynamically adjust system parameters to maintain peak efficiency. Predictive query optimization utilizes deep learning algorithms to enhance query execution plans, reduce latency, and anticipate performance issues before they impact business operations. Additionally, intelligent indexing applies machine learning techniques to automate index selection, adaptation, and maintenance, ensuring optimal data retrieval and reducing query processing times. By integrating these AI-driven mechanisms, enterprises can achieve greater operational efficiency, improved database reliability, and reduced human intervention in performance tuning. The study also addresses security, compliance, and reliability concerns associated with autonomous database management, proposing best practices for AI-driven data governance. Future research directions include the integration of quantum computing for database acceleration, AI-driven anomaly detection for enhanced cybersecurity, and the application of reinforcement learning for real-time database optimization. This paper provides a strategic roadmap for enterprises looking to adopt AI-driven autonomous database solutions to drive innovation and competitive advantage.
7

Bianchi, Alexander, Andrew Chai, Vincent Corvinelli, Parke Godfrey, Jarek Szlichta, and Calisto Zuzarte. "Db2une: Tuning Under Pressure via Deep Learning." Proceedings of the VLDB Endowment 17, no. 12 (2024): 3855–68. http://dx.doi.org/10.14778/3685800.3685811.

Abstract:
Modern database systems including IBM Db2 have numerous parameters, "knobs," that require precise configuration to achieve optimal workload performance. Even for experts, manually "tuning" these knobs is a challenging process. We present Db2une, an automatic query-aware tuning system that leverages deep learning to maximize performance while minimizing resource usage. Via a specialized transformer-based query-embedding pipeline we name QBERT, Db2une generates context-aware representations of query workloads to feed as input to a stability-oriented, on-policy deep reinforcement learning model. In Db2une, we introduce a multi-phased, database meta-data driven training approach---which incorporates cost estimates, interpolation of these costs, and database statistics---to efficiently discover optimal tuning configurations without the need to execute queries. Thus, our model can scale to very large workloads, for which executing queries would be prohibitively expensive. Through experimental evaluation, we demonstrate Db2une's efficiency and effectiveness over a variety of workloads. We compare it against the state-of-the-art query-aware tuning systems and show that the system provides recommendations that surpass those of IBM experts.
8

Martani, Marlene, Hanny Juwitasary, and Arya Nata Gani Putra. "Analisis Alat Bantu Tuning Fisikal Basis Data pada Sql Server 2008." ComTech: Computer, Mathematics and Engineering Applications 5, no. 1 (2014): 334. http://dx.doi.org/10.21512/comtech.v5i1.2628.

Abstract:
Nowadays every company faces business competition that requires it to survive and be superior to its competitors. One strategy used by many companies is to use information technology to run their business processes. The use of information technology requires storage, commonly referred to as a database, to store and process data into useful information for the company. However, it was found that the greater the amount of data in the database, the more processing speed decreases, because the time needed to access the data becomes much longer. Slow data processing can decrease the company’s performance and lengthen the time needed to make decisions, which can be a challenge to achieving the company’s competitive advantage. This study analyzes techniques to improve the performance of the database system used by the company by physically tuning the SQL Server 2008 database. The purpose of this study is to improve the performance of the database by speeding up query processing. The research methodology comprises literature studies, analysis of the operation of the tuning tools that already exist in SQL Server 2008, evaluation of the applications that were created, and tuning methods that include query optimization and index creation. The result of this study is an evaluation of a physical tuning tool application that can integrate the functionality of other database tuning tools such as SQL Profiler and the Database Tuning Advisor.
9

Abbasi, Maryam, Marco V. Bernardo, Paulo Váz, José Silva, and Pedro Martins. "Adaptive and Scalable Database Management with Machine Learning Integration: A PostgreSQL Case Study." Information 15, no. 9 (2024): 574. http://dx.doi.org/10.3390/info15090574.

Abstract:
The increasing complexity of managing modern database systems, particularly in terms of optimizing query performance for large datasets, presents significant challenges that traditional methods often fail to address. This paper proposes a comprehensive framework for integrating advanced machine learning (ML) models within the architecture of a database management system (DBMS), with a specific focus on PostgreSQL. Our approach leverages a combination of supervised and unsupervised learning techniques to predict query execution times, optimize performance, and dynamically manage workloads. Unlike existing solutions that address specific optimization tasks in isolation, our framework provides a unified platform that supports real-time model inference and automatic database configuration adjustments based on workload patterns. A key contribution of our work is the integration of ML capabilities directly into the DBMS engine, enabling seamless interaction between the ML models and the query optimization process. This integration allows for the automatic retraining of models and dynamic workload management, resulting in substantial improvements in both query response times and overall system throughput. Our evaluations using the Transaction Processing Performance Council Decision Support (TPC-DS) benchmark dataset at scale factors of 100 GB, 1 TB, and 10 TB demonstrate a reduction of up to 42% in query execution times and a 74% improvement in throughput compared with traditional approaches. Additionally, we address challenges such as potential conflicts in tuning recommendations and the performance overhead associated with ML integration, providing insights for future research directions. This study is motivated by the need for autonomous tuning mechanisms to manage large-scale, heterogeneous workloads while answering key research questions, such as the following: (1) How can machine learning models be integrated into a DBMS to improve query optimization and workload management? (2) What performance improvements can be achieved through dynamic configuration tuning based on real-time workload patterns? Our results suggest that the proposed framework significantly reduces the need for manual database administration while effectively adapting to evolving workloads, offering a robust solution for modern large-scale data environments.
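
Outside the engine integration the authors describe, the core prediction task, learning query execution time from plan features, can be approximated as below; the features and training data are invented for illustration.

    from sklearn.ensemble import RandomForestRegressor

    # Invented plan features per query: estimated rows, estimated cost, join count.
    plans = [[1_000, 50.0, 0], [50_000, 900.0, 2], [2_000_000, 30_000.0, 4],
             [10_000, 400.0, 1], [500_000, 12_000.0, 3]]
    runtimes_ms = [4.0, 120.0, 5_600.0, 45.0, 2_100.0]   # measured executions

    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(plans, runtimes_ms)

    # Predict before running: useful for routing or admission control.
    print("predicted ms:", model.predict([[250_000, 7_000.0, 3]])[0])
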
10

Memon, Muhammad Qasim, Jingsha He, Aasma Memon, Khurram Gulzar Rana, and Muhammad Salman Pathan. "Query Processing for Time Efficient Data Retrieval." Indonesian Journal of Electrical Engineering and Computer Science 9, no. 3 (2018): 784. http://dx.doi.org/10.11591/ijeecs.v9.i3.pp784-788.

Abstract:
In a database management system (DBMS), retrieving data through Structured Query Language (SQL) is essential to finding a better execution plan for performance. In this paper, we incorporate database objects to optimize query execution time and cost by eliminating poorly written SQL statements. We propose a method of evolving and inserting database constraints as database objects embedded with queries, either adding them for the transactions required by the user or using them to detect queries whose performance can be improved. We analyzed several databases while processing queries and compared a real-time database workload, in which batches of transactions are invoked, against existing tuning approaches. These database objects are coded in a procedural-language environment with rules that make them effective, and they are merged into queries to offer an improved execution plan.
11

Yu, Tao, Zhaonian Zou, Weihua Sun, and Yu Yan. "Refactoring Index Tuning Process with Benefit Estimation." Proceedings of the VLDB Endowment 17, no. 7 (2024): 1528–41. http://dx.doi.org/10.14778/3654621.3654622.

Abstract:
Index tuning is a challenging task aiming to improve query performance by selecting the most effective indexes for a database and a workload. Existing automatic index tuning methods typically rely on "what-if tools" to evaluate the benefit of an index configuration, which is costly and sometimes inaccurate. In this paper, we propose RIBE, a novel method that effectively eliminates redundant queries from the workload and harnesses statistical information of query plans to enable fast and accurate estimation of the benefit of an index configuration. With RIBE, a considerable portion of what-if calls can be skipped, thereby reducing index tuning time and increasing estimation accuracy. At the heart of RIBE is a deep learning model based on attention mechanism that predicts the impact of indexes on queries. A practical advantage of RIBE is that it achieves both improved accuracy of benefit estimation and time savings without making any changes to DBMS implementation and index configuration enumeration algorithms. Our evaluation shows that RIBE can achieve competitive tuning results and 1--2 orders of magnitude faster performance compared with the tuning method based on the full workload, and RIBE also attains higher tuning quality and comparable efficiency against the tuning methods based on the state-of-the-art workload compression methods.
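
For context on what a single what-if call involves, the classical evaluation RIBE avoids can be reproduced on PostgreSQL with the HypoPG extension, which exposes hypothetical indexes to the planner. A sketch, assuming HypoPG is available and using invented table names:

    import psycopg2

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()
    cur.execute("CREATE EXTENSION IF NOT EXISTS hypopg")

    def plan_cost(sql):
        # Optimizer's estimated total cost for the chosen plan.
        cur.execute("EXPLAIN (FORMAT JSON) " + sql)
        return cur.fetchone()[0][0]["Plan"]["Total Cost"]

    query = "SELECT * FROM orders WHERE customer_id = 42"
    before = plan_cost(query)

    # Hypothetical index: visible to the planner, zero storage cost.
    cur.execute("SELECT * FROM hypopg_create_index("
                "'CREATE INDEX ON orders (customer_id)')")
    after = plan_cost(query)

    print(f"estimated benefit of the index: {before - after:.1f} cost units")
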
12

Siddiqui, Tarique, Wentao Wu, Vivek Narasayya, and Surajit Chaudhuri. "DISTILL." Proceedings of the VLDB Endowment 15, no. 10 (2022): 2019–31. http://dx.doi.org/10.14778/3547305.3547309.

Abstract:
Many database systems offer index tuning tools that help automatically select appropriate indexes for improving the performance of an input workload. Index tuning is a resource-intensive and time-consuming task requiring expensive optimizer calls for estimating the cost of queries over potential index configurations. In this work, we develop low-overhead techniques that can be leveraged by index tuning tools for reducing a large number of optimizer calls without making changes to the tuning algorithm or to the query optimizer. First, index tuning tools use rule-based techniques to generate a large number of syntactically-relevant indexes; however, a large proportion of such indexes are spurious and do not lead to a significant improvement in the performance of queries. We eliminate such indexes much earlier in the search by leveraging patterns in the workload, without making optimizer calls. Second, we learn cost models that exploit the similarity between query and index configuration pairs in the workload to efficiently estimate the cost of queries over a large number of index configurations using fewer optimizer calls. We perform an extensive evaluation over both real-world and synthetic benchmarks, and show that given the same set of input queries, indexes, and the search algorithm for exploration, our proposed techniques can lead to a median reduction in tuning time of 3X and a maximum of 12X compared to state-of-the-art tuning tools with similar quality of recommended indexes.
13

Brucato, Matteo, Tarique Siddiqui, Wentao Wu, Vivek Narasayya, and Surajit Chaudhuri. "Wred: Workload Reduction for Scalable Index Tuning." Proceedings of the ACM on Management of Data 2, no. 1 (2024): 1–26. http://dx.doi.org/10.1145/3639305.

Abstract:
Modern database systems offer index-tuning advisors that automatically identify a set of indexes to improve workload performance. Advisors leverage the optimizer's what-if API to optimize a query for a hypothetical index configuration. Because what-if calls constitute a major bottleneck of index tuning, existing techniques, such as workload compression, help reduce the number of what-if calls to speed up tuning. Unfortunately, even with small workloads and few what-if calls, tuning can still take hours due to the complexity of the queries (e.g., the number of joins, filters, group-by and order-by clauses), which increases their optimization time. This paper introduces workload reduction, a new complementary technique aimed at expediting index tuning by decreasing individual what-if call time without significantly affecting the quality of index tuning. We present an efficient workload reduction algorithm, called Wred, which rewrites each query in the original workload to eliminate column and table expressions unlikely to benefit from indexes, thereby accelerating what-if calls. We study its complexity and ability to maintain high index quality. We perform an extensive evaluation over industry benchmarks and real-world customer workloads, which shows that Wred results in a 3x median speedup in tuning efficiency over an industrial-strength state-of-the-art index advisor, with only a 3.7% median loss in improvement---where improvement is the total workload cost as estimated by the query optimizer---and results in up to 24.7x speedup with 1.8% improvement loss. Furthermore, combining Wred and Isum (a state-of-the-art workload compression technique for index tuning) results in higher speedups than either of the two techniques alone, with 10.5x median speedup and 5% median improvement loss.
14

Santosh Jaini. "Autonomous Databases: Leveraging Machine Learning and Neural Networks for Predictive Query Optimization, Self-Tuning, and Index Optimization in Multi-RDBMS Systems." International Journal for Research Publication and Seminar 13, no. 2 (2022): 378–86. http://dx.doi.org/10.36676/jrps.v13.i2.1600.

Abstract:
Autonomous databases are the new trend in modern database systems: databases managed by machine learning and neural networks for query prediction, self-tuning, and self-indexing. These systems decrease manual intervention in multi-relational database management systems (multi-RDBMS). This paper analyses the relevance of ML and NN in optimizing queries and automating the operation of databases. Simulation results on tested benchmark queries and real-time use cases show the extent of the query-processing speed increase and its accuracy. However, problems such as integrating these technologies into current structures and dealing with high-velocity data persist. The proposed solutions use graph neural networks to solve scalability problems. In conclusion, this research outlines the prospects for AI autonomous databases to improve performance in multi-RDBMS architectures.
15

Li, Guoliang, Xuanhe Zhou, and Lei Cao. "Machine learning for databases." Proceedings of the VLDB Endowment 14, no. 12 (2021): 3190–93. http://dx.doi.org/10.14778/3476311.3476405.

Abstract:
Machine learning techniques have been proposed to optimize databases. For example, traditional empirical database optimization techniques (e.g., cost estimation, join order selection, knob tuning, index and view advisors) cannot meet the high-performance requirements of large-scale database instances, various applications, and diversified users, especially on the cloud. Fortunately, machine learning based techniques can alleviate this problem by judiciously selecting optimization strategies. In this tutorial, we categorize database tasks into three typical problems that can be optimized by different machine learning models, including NP-hard problems (e.g., knob space exploration, index/view selection, partition-key recommendation for offline optimization; query rewrite, join order selection for online optimization), regression problems (e.g., cost/cardinality estimation, index/view benefit estimation, query latency prediction), and prediction problems (e.g., query workload prediction). We review existing machine learning based techniques to address these problems and outline research challenges.
16

Zhang, William, Wan Shen Lim, Matthew Butrovich, and Andrew Pavlo. "The Holon Approach for Simultaneously Tuning Multiple Components in a Self-Driving Database Management System with Machine Learning via Synthesized Proto-Actions." Proceedings of the VLDB Endowment 17, no. 11 (2024): 3373–87. http://dx.doi.org/10.14778/3681954.3682007.

Abstract:
Existing machine learning (ML) approaches to automatically optimize database management systems (DBMSs) only target a single configuration space at a time (e.g., knobs, query hints, indexes). Simultaneously tuning multiple configuration spaces is challenging due to the combined space's complexity. Previous tuning methods work around this by sequentially tuning individual spaces with a pool of tuners. However, these approaches struggle to coordinate their tuners and get stuck in local optima. This paper presents the Proto-X framework that holistically tunes multiple configuration spaces. The key idea of Proto-X is to identify similarities across multiple spaces, encode them in a high-dimensional model, and then synthesize "proto-actions" to navigate the organized space for promising configurations. We evaluate Proto-X against state-of-the-art DBMS tuning frameworks on tuning PostgreSQL for analytical and transactional workloads. By reasoning about configuration spaces that are orders of magnitude more complex than other frameworks (both in terms of quantity and variety), Proto-X discovers configurations that improve PostgreSQL's performance by up to 53% over the next best approach.
17

Warveen, Merza Eido, and Maseeh Yasin Hajar. "Machine Learning Approaches for Enhancing Query Optimization in Large Databases." Engineering and Technology Journal 10, no. 03 (2025): 4326–49. https://doi.org/10.5281/zenodo.15105850.

Abstract:
More effective query optimization strategies in large-scale databases are required due to the growing volume and complexity of data in contemporary applications. Performance inefficiencies result from traditional query optimization techniques, such as rule-based and cost-based approaches, which frequently find it difficult to manage dynamic and complicated workloads. By utilizing deep learning, reinforcement learning, and predictive analytics to enhance query execution plans, indexing, and workload management, machine learning (ML) has become a game-changing method for improving query optimization. With its many advantages—including workload-aware indexing, adaptive tuning, and real-time performance improvements—ML-driven optimization approaches are especially well-suited for distributed and cloud-based database setups. However, challenges remain, such as the need for more explainable AI-powered optimizers, security vulnerabilities, and the high computational costs of training machine learning models. To ensure reliable and efficient database management, future research should focus on creating hybrid optimization frameworks, strengthening security measures, and making machine learning-based decision-making more explainable. By addressing these challenges, machine learning-powered query optimization could open the door to smarter, more flexible, and scalable database systems.  
18

Azra Jabeen, Mohamed Ali. "SQL Server Optimization-Best Practices for Maximizing Performance." International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences 8, no. 4 (2020): 1–10. https://doi.org/10.5281/zenodo.14535769.

Abstract:
This paper explores the best practices for SQL Server optimization, offering a comprehensive guide to enhance the performance of database systems. In the data-driven world of today, sustaining high efficiency and responsiveness requires that SQL Server databases operate at their best. By addressing key aspects such as query tuning, indexing strategies, and resource management, it presents effective techniques to minimize latency and improve execution speed. It also highlights the importance of proper configuration, efficient use of memory, and effective database maintenance practices. Through these best practices, database administrators and developers can ensure that SQL Server operates at peak performance, supporting faster queries, reduced downtime, and seamless scalability. This paper serves as an invaluable resource for anyone seeking to optimize their SQL Server environment, ensuring better performance and reliability in real-world applications.
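
Two of the practices surveyed here, pruning SELECT lists and adding a covering index, look roughly as follows; the connection string, table, and columns are hypothetical.

    import pyodbc

    conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                          "SERVER=localhost;DATABASE=Sales;Trusted_Connection=yes")
    cur = conn.cursor()

    # Practice 1: name the needed columns instead of SELECT *, and qualify
    # the object with its schema.
    cur.execute("SELECT OrderId, OrderDate FROM dbo.Orders WHERE CustomerId = ?", 42)
    rows = cur.fetchall()

    # Practice 2: a covering index so the query above is answered from the index.
    cur.execute("CREATE NONCLUSTERED INDEX IX_Orders_CustomerId "
                "ON dbo.Orders (CustomerId) INCLUDE (OrderId, OrderDate)")
    conn.commit()
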
19

Peng, Yuchen, Ke Chen, Lidan Shou, Dawei Jiang, and Gang Chen. "AQUA: Automatic Collaborative Query Processing in Analytical Database." Proceedings of the VLDB Endowment 16, no. 12 (2023): 4006–9. http://dx.doi.org/10.14778/3611540.3611607.

Abstract:
Data analysts nowadays are keen to have analytical capabilities involving deep learning (DL). Collaborative queries, which employ relational operations to process structured data and DL models to process unstructured data, provide a powerful facility for DL-based in-database analysis. The classical approach to support collaborative queries in relational databases is to integrate DL models with user-defined functions (UDFs) in a general-purpose language (e.g., C++) to process unstructured data. This approach suffers from suboptimal performance as the opaque UDFs preclude the generation of an optimal query plan. A recent work, DL2SQL, addresses the problem of collaborative query optimization by first converting DL computations into SQL subqueries and then using a classical relational query optimizer to optimize the entire collaborative query. However, the DL2SQL approach compromises usability by requiring data analysts to manually manage DL-related data and tune query performance. To this end, this paper introduces AQUA, an analytical database designed for efficient collaborative query processing. Built on DL2SQL, AQUA automates translations from collaborative queries into SQL queries. To enhance usability, AQUA introduces two techniques: 1) a declarative scheme for DL-related data management, and 2) DL-specific optimizations for collaborative query processing, eliminating the burden of manual data management and performance tuning from the data analysts. We demonstrate the key contributions of AQUA via a web APP that allows the audience to perform collaborative queries on the CIFAR-10 dataset.
20

Vasilenko, N. K., A. V. Demin, and D. K. Ponomaryov. "Adaptive Cost Model for Query Optimization." Bulletin of Irkutsk State University. Series Mathematics 52 (2025): 137–52. https://doi.org/10.26516/1997-7670.2025.52.137.

Abstract:
The principal component of conventional database query optimizers is a cost model that is used to estimate the expected performance of query plans. The accuracy of the cost model has a direct impact on the optimality of the execution plans selected by the optimizer and thus on the resulting query latency. Several common parameters of cost models in modern DBMSs are related to the performance of CPU and I/O and are typically set by a database administrator upon system tuning. However, these performance characteristics are not stable, and therefore a single point estimate may not suffice for all DB load regimes. In this paper, we propose an Adaptive Cost Model (ACM) which dynamically optimizes CPU- and I/O-related plan cost parameters at DB runtime. By continuously monitoring query execution statistics and the state of the DB buffer cache, ACM adjusts cost parameters without the need for manual intervention by a database administrator. This allows for responding to changes in the workload and system performance, ensuring more optimal query execution plans. We describe the main ideas in the implementation of ACM and report on a preliminary experimental evaluation showing a 20% end-to-end latency improvement on the TPC-H benchmark.
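
A crude external analogue of ACM's idea on PostgreSQL is to derive random_page_cost from the observed buffer-cache hit ratio; the thresholds below are arbitrary assumptions, not values from the paper.

    import psycopg2

    conn = psycopg2.connect("dbname=test")
    conn.autocommit = True                 # ALTER SYSTEM cannot run in a transaction
    cur = conn.cursor()

    # Fraction of table reads served from the buffer cache.
    cur.execute("""SELECT sum(heap_blks_hit)::float /
                          nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0)
                   FROM pg_statio_user_tables""")
    hit_ratio = cur.fetchone()[0] or 0.0

    # Mostly-cached data behaves like cheap random I/O (thresholds are arbitrary).
    new_cost = 1.1 if hit_ratio > 0.99 else 2.0 if hit_ratio > 0.90 else 4.0
    cur.execute(f"ALTER SYSTEM SET random_page_cost = {new_cost}")
    cur.execute("SELECT pg_reload_conf()")
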
21

Madathala, Harikrishna, Balaji Barmavat, and Srinivasa Rao Thumala. "Performance Optimization of SAP HANA using AI-based Workload Predictions." International Journal of Innovative Research in Science,Engineering and Technology 12, no. 12 (2023): 15315–26. http://dx.doi.org/10.15680/ijirset.2023.1212047.

Abstract:
This research paper explores the application of artificial intelligence (AI) techniques for optimizing the performance of SAP HANA databases through predictive workload analysis and dynamic resource allocation. SAP HANA, as an in-memory, column-oriented relational database management system, presents unique challenges in performance tuning due to its complex architecture and diverse workload patterns. We propose a novel framework that leverages machine learning models to predict future workloads and intelligently allocate resources in real-time. Our approach demonstrates significant improvements in query response times, resource utilization, and overall system throughput compared to traditional optimization techniques. The study also addresses implementation challenges and outlines future research directions in this rapidly evolving field.
22

Vishnupriya, S. Devarajulu. "Key Solutions to Optimize Database SQL Queries." Journal of Scientific and Engineering Research 6, no. 12 (2019): 311–14. https://doi.org/10.5281/zenodo.13753398.

Abstract:
Optimizing SQL queries is crucial for enhancing the performance and efficiency of database-driven applications. This article explores key solutions to performance issues in SQL queries, with code samples and detailed explanations. Best practices such as using indexes, avoiding unnecessary columns in SELECT statements, using schema names with object names, and optimizing joins and subqueries, among other solutions, are discussed. By following these optimization techniques, developers can build a more efficient database with improved application performance.
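
In the spirit of the article's code samples, the sketch below contrasts a non-sargable SELECT * query with a tuned equivalent plus its supporting index; the schema is invented.

    # Before: retrieves every column and wraps the indexed column in a
    # function, which prevents index use (non-sargable).
    slow = "SELECT * FROM orders WHERE YEAR(order_date) = 2019"

    # After: name only the needed columns, qualify the table with its schema,
    # and keep the predicate sargable so an index on order_date can be used.
    fast = ("SELECT o.order_id, o.total FROM dbo.orders AS o "
            "WHERE o.order_date >= '2019-01-01' AND o.order_date < '2020-01-01'")

    # Supporting index for the rewritten predicate.
    index_ddl = "CREATE INDEX ix_orders_order_date ON dbo.orders (order_date)"

    for stmt in (slow, fast, index_ddl):
        print(stmt)
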
23

Murali Natti. "Optimizing Oracle Database Performance: Reducing Row Migration and Enhancing Access Efficiency by Tuning PCT Free and PCT Used." International Journal of Science and Research Archive 14, no. 2 (2025): 124–26. https://doi.org/10.30574/ijsra.2025.14.2.0577.

Abstract:
In Oracle databases, efficient data storage and retrieval are paramount for maintaining high performance, especially in systems with large datasets and frequent updates. A critical aspect of database performance is the management of data storage within blocks, which directly impacts how rows are stored and accessed. Oracle uses parameters such as PCT Free and PCT Used to control space allocation and manage how data is stored within database blocks. Improperly configured settings for these parameters can lead to significant performance degradation, especially in terms of row migration. Row migration occurs when a row, after being updated, becomes too large to fit into its original block, resulting in the row being moved to another block. This introduces inefficiencies, leading to increased disk I/O, fragmented blocks, and degraded query performance. This white paper explores a comprehensive approach to optimizing Oracle database performance by fine-tuning the PCT Free and PCT Used parameters, ultimately reducing row migration and enhancing access efficiency. By adjusting these settings based on workload patterns, table structures, and row update frequencies, organizations can minimize unnecessary block accesses, improve overall space utilization, and reduce the I/O overhead that hampers system performance. The white paper provides a detailed exploration of the problem, the methodology for tuning these parameters, and the results achieved through a practical case study.
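
The parameters discussed here are set per table; a sketch of retuning them and then counting migrated rows with Oracle's chained-rows analysis follows, with hypothetical names and values.

    import oracledb

    conn = oracledb.connect(user="app", password="secret", dsn="dbhost/orclpdb")
    cur = conn.cursor()

    # Leave more free space per block so updated rows can grow in place.
    # (PCTUSED applies to manual segment space management; ASSM ignores it.)
    cur.execute("ALTER TABLE orders PCTFREE 20 PCTUSED 40")

    # Count migrated/chained rows; CHAINED_ROWS is created by utlchain.sql.
    cur.execute("ANALYZE TABLE orders LIST CHAINED ROWS INTO chained_rows")
    cur.execute("SELECT COUNT(*) FROM chained_rows WHERE table_name = 'ORDERS'")
    print("migrated or chained rows:", cur.fetchone()[0])
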
24

Murali Natti. "Optimizing Oracle Database Performance: Reducing Row Migration and Enhancing Access Efficiency by Tuning PCT Free and PCT Used." International Journal of Science and Research Archive 12, no. 2 (2024): 3014–16. https://doi.org/10.30574/ijsra.2024.12.2.0577.

Abstract:
In Oracle databases, efficient data storage and retrieval are paramount for maintaining high performance, especially in systems with large datasets and frequent updates. A critical aspect of database performance is the management of data storage within blocks, which directly impacts how rows are stored and accessed. Oracle uses parameters such as PCT Free and PCT Used to control space allocation and manage how data is stored within database blocks. Improperly configured settings for these parameters can lead to significant performance degradation, especially in terms of row migration. Row migration occurs when a row, after being updated, becomes too large to fit into its original block, resulting in the row being moved to another block. This introduces inefficiencies, leading to increased disk I/O, fragmented blocks, and degraded query performance. This white paper explores a comprehensive approach to optimizing Oracle database performance by fine-tuning the PCT Free and PCT Used parameters, ultimately reducing row migration and enhancing access efficiency. By adjusting these settings based on workload patterns, table structures, and row update frequencies, organizations can minimize unnecessary block accesses, improve overall space utilization, and reduce the I/O overhead that hampers system performance. The white paper provides a detailed exploration of the problem, the methodology for tuning these parameters, and the results achieved through a practical case study.
25

Giannakouris, Victor, and Immanuel Trummer. "DBG-PT: A Large Language Model Assisted Query Performance Regression Debugger." Proceedings of the VLDB Endowment 17, no. 12 (2024): 4337–40. http://dx.doi.org/10.14778/3685800.3685869.

Abstract:
In this paper we explore the ability of Large Language Models (LLMs) in analyzing and comparing query plans, and resolving query performance regressions. We present DBG-PT, a query regression debugging framework powered by LLMs. DBG-PT keeps track of query execution instances, and detects slowdowns according to a user-defined regression factor. Once a regression is detected, DBG-PT leverages the capabilities of the underlying LLM in order to compare the regressed plan with a previously effective one, and comes up with tuning knob configurations in order to alleviate the regression. By exploiting textual information of the executed query plans, DBG-PT is able to integrate with close-to-zero implementation effort with any database system that supports the EXPLAIN clause. During the demonstration, we will showcase DBG-PT's ability to resolve query regressions using several real-world inspired scenarios, including plan changes because of index creations/deletions, or configuration changes. Furthermore, users will be able to experiment using ad-hoc, or predefined queries from the Join Order Benchmark (JOB) and TPC-H, and over MySQL and Postgres.
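
The regression-detection loop described above can be gestured at as follows; llm_complete() is a placeholder for whatever model endpoint DBG-PT would use, not a real API.

    REGRESSION_FACTOR = 2.0                    # user-defined slowdown threshold

    def llm_complete(prompt):
        # Placeholder for a real LLM endpoint; DBG-PT's actual model is not shown.
        return "(model suggestion would appear here)"

    def check_regression(query_id, history, latest_ms, latest_plan):
        best_ms, best_plan = history[query_id]         # best prior run and its plan
        if latest_ms > REGRESSION_FACTOR * best_ms:    # slowdown detected
            prompt = (f"Query {query_id} regressed from {best_ms:.0f} ms "
                      f"to {latest_ms:.0f} ms.\n"
                      f"Previous EXPLAIN plan:\n{best_plan}\n"
                      f"Current EXPLAIN plan:\n{latest_plan}\n"
                      "Suggest knob changes that could restore the earlier plan.")
            return llm_complete(prompt)
        return None
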
26

Vellanki, Ravi Babu. "PostgreSQL Configuration: Best Practices for Performance and Security." European Journal of Computer Science and Information Technology 13, no. 47 (2025): 172–82. https://doi.org/10.37745/ejcsit.2013/vol13n47172182.

Abstract:
PostgreSQL configuration significantly impacts database performance and security, yet default settings often prioritize compatibility over optimization. This article presents a comprehensive framework for PostgreSQL configuration, addressing critical aspects including memory allocation, query planning, security hardening, and monitoring. By examining the interdependencies between configuration parameters and their effects on system behavior under various workloads, the article provides a structured approach to database optimization. Memory allocation strategies focus on shared buffers, work memory, and background writer settings to maximize performance while preventing resource contention. Query performance optimization encompasses planner configuration, autovacuum tuning, and parallel execution capabilities to enhance throughput and reduce latency. Security hardening measures include network protection, authentication controls, privilege management, and vulnerability mitigation techniques to safeguard data while maintaining functionality. Comprehensive logging and monitoring strategies enable proactive identification of performance bottlenecks and security threats. Together, these best practices enable organizations to implement secure, high-performance PostgreSQL environments tailored to their specific requirements.
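
A common starting point consistent with this guidance is to derive memory settings from available RAM; the ratios below are widely quoted rules of thumb, not values taken from the article.

    import os

    # Total RAM in MB (POSIX systems).
    ram_mb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") // 2**20

    settings = {
        "shared_buffers": f"{ram_mb // 4}MB",            # ~25% of RAM
        "effective_cache_size": f"{ram_mb * 3 // 4}MB",  # ~75% of RAM
        "work_mem": f"{max(4, ram_mb // 256)}MB",        # per sort/hash node
        "maintenance_work_mem": f"{min(2048, ram_mb // 16)}MB",
    }
    for name, value in settings.items():
        print(f"{name} = {value}")
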
27

Colley, Derek, Clare Stanier, and Md Asaduzzaman. "Investigating the Effects of Object-Relational Impedance Mismatch on the Efficiency of Object-Relational Mapping Frameworks." Journal of Database Management 31, no. 4 (2020): 1–23. http://dx.doi.org/10.4018/jdm.2020100101.

Abstract:
The object-relational impedance mismatch (ORIM) problem characterises differences between the object-oriented and relational approaches to data access. Queries generated by object-relational mapping (ORM) frameworks are designed to overcome ORIM difficulties and can cause performance concerns in environments which use object-oriented paradigms. The aim of this paper is twofold, first presenting a survey of database practitioners on the effectiveness of ORM tools followed by an experimental investigation into the extent of operational concerns through the comparison of ORM-generated query performance and SQL query performance with a benchmark data set. The results show there are perceived difficulties in tuning ORM tools and distrust around their effectiveness. Through experimental testing, these views are validated by demonstrating that ORMs exhibit performance issues to the detriment of the query and the overall scalability of the ORM-led approach. Future work on establishing a system to support the query optimiser when parsing and preparing ORM-generated queries is outlined.
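
The paper's experimental comparison of ORM-generated and hand-written SQL can be reproduced in miniature with SQLAlchemy; SQLite and the one-table schema stand in for the benchmark data set.

    import time
    from sqlalchemy import create_engine, text, Column, Integer, String
    from sqlalchemy.orm import declarative_base, Session

    Base = declarative_base()

    class Customer(Base):
        __tablename__ = "customers"
        id = Column(Integer, primary_key=True)
        name = Column(String)

    engine = create_engine("sqlite://")        # in-memory stand-in database
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add_all([Customer(name=f"c{i}") for i in range(10_000)])
        session.commit()

        t0 = time.perf_counter()
        session.query(Customer).filter(Customer.name == "c42").all()
        t1 = time.perf_counter()
        session.execute(text("SELECT id, name FROM customers "
                             "WHERE name = 'c42'")).all()
        t2 = time.perf_counter()

    print(f"ORM: {t1 - t0:.6f}s  raw SQL: {t2 - t1:.6f}s")
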
28

Satish Vadlamani, Siddhey Mahadik, Shanmukha Eeti, Om Goel, Shalu Jain, and Raghav Agarwal. "Database Performance Optimization Techniques for Large-Scale Teradata Systems." Universal Research Reports 8, no. 4 (2021): 192–209. http://dx.doi.org/10.36676/urr.v8.i4.1386.

Abstract:
In the era of big data, optimizing database performance is critical for managing large-scale Teradata systems efficiently. This paper explores various techniques for enhancing performance, focusing on query optimization, data distribution strategies, and resource management. Query optimization involves analyzing execution plans and leveraging Teradata's parallel processing capabilities to reduce latency and increase throughput. Effective data distribution techniques, such as choosing appropriate primary indexes and employing partitioning strategies, significantly influence data retrieval speeds and overall system performance. Additionally, resource management techniques, including workload management and system tuning, play a vital role in balancing user demands and system capabilities. By implementing these strategies, organizations can ensure that their Teradata systems not only handle vast amounts of data but also provide timely insights for decision-making. The research also discusses the importance of continuous monitoring and performance assessment, highlighting tools and methodologies that facilitate ongoing optimization. Ultimately, this study aims to provide a comprehensive framework for database administrators and data engineers to enhance the performance of Teradata systems, ensuring they meet the growing demands of modern data environments. Through real-world case studies and performance metrics, we demonstrate the effectiveness of these optimization techniques, paving the way for more efficient and scalable database solutions.
29

Murali Natti. "Managing Connections Efficiently in PostgreSQL to Optimize CPU, I/O and Memory Usage." International Journal of Science and Research Archive 15, no. 1 (2025): 1726–29. https://doi.org/10.30574/ijsra.2025.15.1.0650.

Abstract:
Modern database management systems, such as PostgreSQL, require meticulous attention to connection management in order to optimize the allocation and utilization of crucial system resources including CPU, memory, and disk I/O. Efficient connection management is not merely about opening or closing connections—it involves implementing advanced strategies that ensure resources are used judiciously and that system performance remains robust even under high-load conditions. This article delves into the various methodologies that can be employed to enhance query performance and overall responsiveness of the database. It explores how connection pooling can drastically reduce the overhead associated with establishing new connections by reusing a finite pool of pre-established connections, thus saving on CPU cycles and minimizing memory consumption. Furthermore, the article discusses the critical role of tuning CPU usage through parallel query execution and the careful management of worker processes, which together ensure that complex queries are processed swiftly without overburdening the system's processing cores. Additionally, the discussion extends to optimizing I/O operations by configuring parameters like shared_buffers and work_mem so that frequently accessed data remains in memory, reducing the need for slower disk-based operations. Fine-tuning these settings allows the system to manage I/O workloads more efficiently, ensuring that query execution does not suffer due to excessive disk activity. The article also emphasizes the importance of strategic memory management to prevent issues such as memory bloat, thereby maintaining a balance between available resources and workload demands. Through a comprehensive exploration of these strategies and configuration best practices, database administrators are provided with a robust framework to achieve improved performance and scalability. This proactive approach not only enhances the system’s stability under heavy workloads but also paves the way for future growth, ensuring that PostgreSQL continues to deliver high responsiveness and efficient resource utilization in diverse operational environments.
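
The pooling strategy at the centre of this discussion looks like the following with psycopg2's built-in pool; the pool bounds and DSN are illustrative.

    from psycopg2.pool import ThreadedConnectionPool

    # A fixed pool of reusable connections instead of one connection per request.
    pool = ThreadedConnectionPool(minconn=2, maxconn=10, dsn="dbname=app")

    def run_query(sql, params=None):
        conn = pool.getconn()                # borrow an open connection
        try:
            with conn.cursor() as cur:
                cur.execute(sql, params)
                return cur.fetchall()
        finally:
            pool.putconn(conn)               # return it for reuse, still open

    print(run_query("SELECT count(*) FROM pg_stat_activity"))
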
30

Huang, Hanxian, Tarique Siddiqui, Rana Alotaibi, et al. "Sibyl: Forecasting Time-Evolving Query Workloads." Proceedings of the ACM on Management of Data 2, no. 1 (2024): 1–27. http://dx.doi.org/10.1145/3639308.

Abstract:
Database systems often rely on historical query traces to perform workload-based performance tuning. However, real production workloads are time-evolving, making historical queries ineffective for optimizing future workloads. To address this challenge, we propose SIBYL, an end-to-end machine learning-based framework that accurately forecasts a sequence of future queries, with the entire query statements, in various prediction windows. Drawing insights from real-workloads, we propose template-based featurization techniques and develop a stacked-LSTM with an encoder-decoder architecture for accurate forecasting of query workloads. We also develop techniques to improve forecasting accuracy over large prediction windows and achieve high scalability over large workloads with high variability in arrival rates of queries. Finally, we propose techniques to handle workload drifts. Our evaluation on four real workloads demonstrates that SIBYL can forecast workloads with an 87.3% median F1 score, and can result in 1.7× and 1.3× performance improvement when applied to materialized view selection and index selection applications, respectively.
31

Tharun Damera. "Optimizing system performance in large-scale backend architectures." World Journal of Advanced Research and Reviews 26, no. 1 (2025): 3083–97. https://doi.org/10.30574/wjarr.2025.26.1.1394.

Abstract:
This article explores strategies for optimizing system performance in large-scale backend architectures where user expectations for responsiveness continue to rise. It addresses how architectural complexity creates numerous bottlenecks across technology stacks and introduces techniques for identifying and eliminating performance issues. The article covers database query optimization, API endpoint efficiency, inter-service communication improvements, and scaling strategies for high-traffic systems including load balancing, caching implementations, and data sharding approaches. Advanced topics include database optimization techniques like connection pooling and read/write splitting, asynchronous processing patterns utilizing message queues and batch processing, runtime optimization through memory management and thread pool tuning, and observability practices including distributed tracing and performance testing. The discussion concludes with considerations for balancing consistency, availability, and performance in distributed systems through eventual consistency models, conflict resolution strategies, and failure isolation patterns.
32

Shankeshi, Raghu Murthy. "Enhancing Oracle Database Performance with AI-Driven Automation in Cloud Environments." International Journal of Novel Research and Development 6, no. 10 (2021): 1–11. https://doi.org/10.5281/zenodo.15106578.

Abstract:
As the complexity and size of cloud-hosted Oracle Database environments grow, using AI-driven automation to achieve performance, increase resource utilization, and decrease operational costs is becoming a requirement. The paper addresses the integration of artificial intelligence into database query optimization, indexing, workload balancing, anomaly detection, and self-healing processes in order to make databases more efficient. With the help of AI models, organizations can eliminate manual performance tuning and accomplish dynamic resource allocation as well as proactive handling of system anomalies, thus reducing query execution time and increasing database reliability. The study then elaborates on the advantages of AI optimization of the database, such as real-time workload management, intelligent indexing strategies, and proactive failure prevention. With database stability being the lifeline of any data operation, AI-powered anomaly detection mechanisms provide a significant boost by detecting irregular patterns and taking corrective action before system performance degrades to the point of failure. Another important feature is automated workload balancing, which evenly distributes processing power, avoids bottlenecks, and optimizes query throughput. These improvements deliver reduced downtime, increased system resilience, and economical use of cloud resources. Additionally, AI-based enterprise solutions help enterprises realize financial efficiency by leveraging adaptive provisioning of cloud resources, optimizing cloud expenditure. Typical database management techniques involve either over-provisioning of resources or under-utilization, resulting in unnecessary cost. In contrast, AI-based automation automatically scales resources according to workload requirements and makes cloud utilization cost-effective. Yet challenges such as data security, compliance risks, and reliance on cloud provider APIs remain before the potential of AI in database management can be fully leveraged.
33

Rumbaugh, Douglas B., Dong Xie, and Zhuoyue Zhao. "Towards Systematic Index Dynamization." Proceedings of the VLDB Endowment 17, no. 11 (2024): 2867–79. http://dx.doi.org/10.14778/3681954.3681969.

Abstract:
There is significant interest in examining large datasets using complex domain-specific queries. In many cases, these queries can be accelerated using specialized indexes. Unfortunately, the development of a practical index is difficult, because databases generally require additional features such as updates, concurrency support, crash recovery, etc. There are three major lines of work to alleviate the pain: (1) automatic index composition/tuning which composes indexes out of core data structure primitives to optimize for specific workloads; (2) generalized index templates which generalize common data structures such as B+-trees for custom queries over custom data types, and (3) data structure dynamization frameworks such as the Bentley-Saxe method which converts a static data structure into an updatable data structure with bounded additional query cost. The first two are limited to very specific queries and/or data structures and, thus, are not suitable for building a general index dynamization framework. The last one is more promising in its generality but also has limitations on query types, deletion support, and performance tuning. In this paper, we discuss the limitations of the classic index dynamization techniques and propose a path towards a more general and systematic solution. We demonstrate the viability of our framework by realizing it as a C++20 metaprogramming library and conducting case studies on four example queries with their corresponding static index structures. With this framework, many theoretical/early-stage index designs can easily be extended with support for updates, along with a wide tuning space for query/update performance trade-offs. This allows index designers to focus on efficient data layouts and query algorithms, thereby dramatically narrowing the gap between novel index designs and deployment.
34

Öztürk, Emir. "Improving Text-to-Sql Conversion for Low-Resource Languages Using Large Language Models." Bitlis Eren Üniversitesi Fen Bilimleri Dergisi 14, no. 1 (2025): 163–78. https://doi.org/10.17798/bitlisfen.1561298.

Abstract:
Accurate text-to-SQL conversion remains a challenge, particularly for low-resource languages like Turkish. This study explores the effectiveness of large language models (LLMs) in translating Turkish natural language queries into SQL, introducing a two-stage fine-tuning approach to enhance performance. Three widely used LLMs, Llama2, Llama3, and Phi3, are fine-tuned under two different training strategies: direct SQL fine-tuning and sequential fine-tuning, where models are first trained on Turkish instruction data before SQL fine-tuning. A total of six model configurations are evaluated using execution accuracy and logical form accuracy. The results indicate that Phi3 models outperform both Llama-based models and previously reported methods, achieving execution accuracy of up to 99.95% and logical form accuracy of 99.95%, exceeding the best scores in the literature by 5–10%. The study highlights the effectiveness of instruction-based fine-tuning in improving SQL query generation. It provides a detailed comparison of Llama-based and Phi-based models in text-to-SQL tasks, introduces a structured fine-tuning methodology designed for low-resource languages, and presents empirical evidence demonstrating the positive impact of strategic data augmentation on model performance. These findings contribute to the advancement of natural language interfaces for databases, particularly in languages with limited NLP resources. The scripts and models used during the training and testing phases of the study are publicly available at https://github.com/emirozturk/TT2SQL.
APA, Harvard, Vancouver, ISO, and other styles
35

Deb, Mrinal. "AI-Driven Adaptive Indexing and Query Optimization in Graph Databases." International Scientific Journal of Engineering and Management 04, no. 05 (2025): 1–9. https://doi.org/10.55041/isjem03746.

Full text
Abstract:
Graph databases have emerged as a pivotal solution for managing interconnected data, providing a more intuitive way to model relationships compared to traditional relational databases. As the complexity and scale of graph data increase, the need for efficient indexing and intelligent query optimization becomes paramount. This paper presents an AI-driven approach to adaptive indexing and query optimization in Neo4j, leveraging a movie dataset. By integrating Python-based preprocessing and fine-tuning an OpenAI language model on a custom schema, we demonstrate how natural language queries can be optimized into efficient Cypher queries. Our study covers the performance of simple, complex, recursive, and subquery-based queries and evaluates the effectiveness of AI-generated optimizations. Keywords: Graph Databases, Query Optimization, AI-Driven Indexing
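A small Python sketch of the kind of pipeline this abstract describes, using the official neo4j driver: create an index on a frequently filtered property, then run a parameterized Cypher query that can use it. The URI, credentials, and movie schema are assumptions for illustration, not the paper's setup.

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Adaptive indexing step: index the property that profiling shows is hot.
    session.run("CREATE INDEX movie_title IF NOT EXISTS "
                "FOR (m:Movie) ON (m.title)")
    # An AI-optimized query would be emitted as parameterized Cypher like this,
    # letting the planner use the index instead of scanning all Movie nodes.
    result = session.run(
        "MATCH (p:Person)-[:ACTED_IN]->(m:Movie {title: $title}) "
        "RETURN p.name AS actor", title="The Matrix")
    for record in result:
        print(record["actor"])

driver.close()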
APA, Harvard, Vancouver, ISO, and other styles
36

Ding, Bailu, Surajit Chaudhuri, Johannes Gehrke, and Vivek Narasayya. "DSB." Proceedings of the VLDB Endowment 14, no. 13 (2021): 3376–88. http://dx.doi.org/10.14778/3484224.3484234.

Full text
Abstract:
We describe a new benchmark, DSB, for evaluating both workload-driven and traditional database systems on modern decision support workloads. DSB is adapted from the widely-used industrial-standard TPC-DS benchmark. It enhances the TPC-DS benchmark with complex data distribution and challenging yet semantically meaningful query templates. DSB also introduces configurable and dynamic workloads to assess the adaptability of database systems. Since workload-driven and traditional database systems have different performance dimensions, including the additional resources required for tuning and maintaining the systems, we provide guidelines on evaluation methodology and metrics to report. We show a case study on how to evaluate both workload-driven and traditional database systems with the DSB benchmark. The code for the DSB benchmark is open sourced and is available at https://aka.ms/dsb.
APA, Harvard, Vancouver, ISO, and other styles
37

SRINIVASAN, JAGANNATHAN, YIN-HE JIANG, YONGGUANG ZHANG, and BHARAT BHARGAVA. "PERFORMANCE STUDY ON SUPPORTING OBJECTS IN O-RAID DISTRIBUTED DATABASE SYSTEM." International Journal of Cooperative Information Systems 02, no. 02 (1993): 225–47. http://dx.doi.org/10.1142/s0218215793000113.

Full text
Abstract:
O-Raid [1, 2] uses a layered approach to provide support for objects on top of a distributed relational database system called RAID [3]. It reuses the replication controller of RAID to allow replication of simple objects as well as replication of composite objects. In this paper, we first describe the experiments conducted on O-Raid that measure the overheads incurred in supporting objects through a layered implementation, and the overheads involved in replicating objects. The overheads are low (e.g., 4 ms for an insert query involving objects). We present experiments that evaluate three replication strategies for composite objects, namely full replication, selective replication, and no replication, in a two-site and a four-site O-Raid system. For the composite object experiments, the selective replication strategy demonstrated the flexibility of tuning the replication of member objects based on patterns of access. The experimentation is performed in different networking environments (LANs and WANs) to further evaluate the replication schemes. The results indicate that the selective replication scheme has greater benefits in WANs than in LANs.
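To make the three strategies concrete, here is a purely illustrative Python sketch of the placement decision behind selective replication: replicate a member object only at sites whose access frequency for it crosses a threshold, while full replication copies it everywhere and no replication keeps a single copy. The names and threshold are assumptions, not O-Raid's actual mechanism.

def place_replicas(access_freq, sites, strategy, threshold=0.2):
    """access_freq[site] = fraction of reads of this member object from `site`."""
    if strategy == "full":
        return set(sites)                         # copy at every site
    if strategy == "none":
        # Single home: the site that reads the object most often.
        return {max(sites, key=lambda s: access_freq.get(s, 0.0))}
    # Selective: replicate only where the object is read often enough to
    # amortize the write cost of keeping the extra copy current.
    return {s for s in sites if access_freq.get(s, 0.0) >= threshold}

sites = ["A", "B", "C", "D"]
freq = {"A": 0.55, "B": 0.30, "C": 0.10, "D": 0.05}
print(place_replicas(freq, sites, "selective"))   # {'A', 'B'}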
APA, Harvard, Vancouver, ISO, and other styles
38

Gopikrishna Maddali. "Enhancing Database Architectures with Artificial Intelligence (AI)." International Journal of Scientific Research in Science and Technology 12, no. 3 (2025): 296–308. https://doi.org/10.32628/ijsrst2512331.

Full text
Abstract:
Integrating artificial intelligence with database management systems brings intelligence, adaptability, and autonomy to the database world. Relational database management systems have long been the foundation for structuring data, yet they face challenges arising from modern computing and information-processing environments, such as scalability, real-time processing, the incorporation of unstructured data, and support for proactive decision-making. As a result, new approaches such as NoSQL and NewSQL emerged to address the varied and scalable needs of applications. AI techniques such as machine learning (ML), deep learning (DL), and natural language processing (NLP) have introduced advanced functions and efficiency optimizations into current database systems, including self-tuning, query optimization, predictive caching, and natural language interfaces that enable a database to operate autonomously while delivering high performance and reliability. This paper examines traditional and advanced DBMS architectures, the development and integration of AI-based DBMS, and other novelties such as federated learning and reinforcement learning-based caching.
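As one concrete reading of "predictive caching", the sketch below (an illustrative assumption, not the paper's design) scores keys with an exponential moving average of access frequency and admits into the cache only keys predicted to stay hot.

class PredictiveCache:
    """Toy ML-flavoured cache: an exponential moving average of per-key access
    rates acts as the 'prediction'; only keys trending hot are admitted."""
    def __init__(self, capacity=100, alpha=0.3, admit_score=0.5):
        self.capacity, self.alpha, self.admit_score = capacity, alpha, admit_score
        self.score = {}   # key -> EMA of accesses
        self.store = {}   # key -> cached value

    def record_access(self, key):
        # Decay all scores slightly, then bump the accessed key.
        for k in self.score:
            self.score[k] *= (1 - self.alpha)
        self.score[key] = self.score.get(key, 0.0) + self.alpha

    def get(self, key, loader):
        self.record_access(key)
        if key in self.store:
            return self.store[key]
        value = loader(key)                     # e.g., a database read
        if self.score[key] >= self.admit_score and len(self.store) < self.capacity:
            self.store[key] = value             # admit predicted-hot keys only
        return value

A key accessed once scores 0.3 and is not cached; a second access in quick succession lifts it past the 0.5 admission bar, so the policy caches repeat readers rather than one-off scans.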
APA, Harvard, Vancouver, ISO, and other styles
39

Madhuri Koripalli. "Intelligent assistants for data professionals: Copilots and agents." World Journal of Advanced Engineering Technology and Sciences 15, no. 2 (2025): 015–21. https://doi.org/10.30574/wjaets.2025.15.2.0470.

Full text
Abstract:
Intelligent assistants, including AI-driven copilots and specialized extensions, transform how data professionals interact with database environments. These tools leverage advanced language models and contextual understanding to automate routine tasks while providing sophisticated recommendations for query optimization, schema design, and performance tuning. By integrating with platforms like SQL Server Management Studio and Azure Data Studio, these assistants offer capabilities ranging from natural language query translation to predictive code completion and error prevention. Success stories across financial services, healthcare, and retail demonstrate their potential to accelerate development cycles, improve code quality, and democratize data access. However, implementation requires careful consideration of adoption frameworks, governance policies, and technical prerequisites. These systems face challenges despite their value, including performance limitations with complex queries, organizational resistance, potential skill erosion, and privacy concerns. The evolution of these intelligent companions represents a significant shift from passive tools to active collaborators in data management.
APA, Harvard, Vancouver, ISO, and other styles
40

Khopade, Vaibhavi Santosh, and Sonali Sagar Gholve. "Performance Optimization Techniques in Spring Boot Applications." International Journal of Advance and Applied Research S6, no. 23 (2025): 26–32. https://doi.org/10.5281/zenodo.15119135.

Full text
Abstract:
In the era of rapid digital transformation, the performance of web applications is paramount to ensure user satisfaction and operational efficiency. Spring Boot, a widely adopted framework for building Java-based applications, offers a plethora of features that facilitate rapid development and deployment. However, as applications scale, performance optimization becomes critical. This paper explores various performance optimization techniques specifically tailored for Spring Boot applications. We begin by defining key performance metrics and the significance of monitoring tools in assessing application health. The discussion then delves into architectural considerations, emphasizing the benefits of microservices and layered architecture. Subsequently, we examine database optimization strategies, including connection pooling, query optimization, and caching mechanisms, which significantly enhance data access speeds. Code optimization techniques, such as asynchronous processing and batch processing, are also analyzed to reduce latency and improve throughput. Furthermore, we address configuration tuning, JVM optimization, and effective load balancing strategies to ensure scalability and resilience.
APA, Harvard, Vancouver, ISO, and other styles
41

Abbasi, Maryam, Marco V. Bernardo, Paulo Váz, José Silva, and Pedro Martins. "Revisiting Database Indexing for Parallel and Accelerated Computing: A Comprehensive Study and Novel Approaches." Information 15, no. 8 (2024): 429. http://dx.doi.org/10.3390/info15080429.

Full text
Abstract:
While the importance of indexing strategies for optimizing query performance in database systems is widely acknowledged, the impact of rapidly evolving hardware architectures on indexing techniques has been an underexplored area. As modern computing systems increasingly leverage parallel processing capabilities, multi-core CPUs, and specialized hardware accelerators, traditional indexing approaches may not fully capitalize on these advancements. This comprehensive experimental study investigates the effects of hardware-conscious indexing strategies tailored for contemporary and emerging hardware platforms. Through rigorous experimentation on a real-world database environment using the industry-standard TPC-H benchmark, this research evaluates the performance implications of indexing techniques specifically designed to exploit parallelism, vectorization, and hardware-accelerated operations. By examining approaches such as cache-conscious B-Tree variants, SIMD-optimized hash indexes, and GPU-accelerated spatial indexing, the study provides valuable insights into the potential performance gains and trade-offs associated with these hardware-aware indexing methods. The findings reveal that hardware-conscious indexing strategies can significantly outperform their traditional counterparts, particularly in data-intensive workloads and large-scale database deployments. Our experiments show improvements ranging from 32.4% to 48.6% in query execution time, depending on the specific technique and hardware configuration. However, the study also highlights the complexity of implementing and tuning these techniques, as they often require intricate code optimizations and a deep understanding of the underlying hardware architecture. Additionally, this research explores the potential of machine learning-based indexing approaches, including reinforcement learning for index selection and neural network-based index advisors. While these techniques show promise, with performance improvements of up to 48.6% in certain scenarios, their effectiveness varies across different query types and data distributions. By offering a comprehensive analysis and practical recommendations, this research contributes to the ongoing pursuit of database performance optimization in the era of heterogeneous computing. The findings inform database administrators, developers, and system architects on effective indexing practices tailored for modern hardware, while also paving the way for future research into adaptive indexing techniques that can dynamically leverage hardware capabilities based on workload characteristics and resource availability.
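The SIMD-optimized hash probing evaluated in this study is implemented in low-level code, but its core idea, probing many keys in one data-parallel step instead of one at a time, can be sketched with NumPy vectorization as a software analogue. This is an illustration of the concept, not the study's implementation.

import numpy as np

def build_hash_index(keys, table_size=1 << 16):
    """Toy one-slot-per-bucket table; colliding keys overwrite (simplification)."""
    table_keys = np.full(table_size, -1, dtype=np.int64)
    slots = keys * 2654435761 % table_size     # multiplicative hashing
    table_keys[slots] = keys
    return table_keys

def probe_batch(table_keys, queries):
    # Data-parallel probe: hash and compare a whole batch of query keys at once,
    # the NumPy analogue of comparing several keys per SIMD instruction.
    slots = queries * 2654435761 % len(table_keys)
    return table_keys[slots] == queries

keys = np.arange(0, 50_000, 2, dtype=np.int64)   # even keys only
table = build_hash_index(keys)
hits = probe_batch(table, np.array([4, 7, 100, 33], dtype=np.int64))
print(hits)   # [ True False  True False]  (odd keys were never inserted)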
APA, Harvard, Vancouver, ISO, and other styles
42

Sethu, Sesha Synam Neeli. "Key Challenges and Strategies in Managing Databases for Data Science and Machine Learning." International Journal of Leading Research Publication 2, no. 3 (2021): 1–9. https://doi.org/10.5281/zenodo.15360136.

Full text
Abstract:
The convergence of data science and machine learning (ML) methodologies with enterprise-level data management systems necessitates a paradigm shift in database administration (DBA) practices. This integration presents significant hurdles, including the need for high-throughput data storage solutions (e.g., distributed NoSQL databases, columnar databases), real-time data streaming architectures (e.g., Apache Kafka, Apache Flink), robust data governance frameworks to ensure data quality and compliance (e.g., implementing data lineage tracking, metadata management), efficient management of heterogeneous data sources via ETL/ELT processes, and optimization strategies to mitigate the performance impact of ML model deployment and inference (e.g., model caching, query optimization techniques). Addressing these challenges requires a multi-faceted approach. This includes leveraging scalable database architectures (e.g., sharding, replication), implementing automated data manipulation and transformation processes (e.g., scripting with Python, leveraging cloud-based ETL services), and enforcing stringent security protocols using encryption, access control lists (ACLs), and intrusion detection systems. Furthermore, continuous professional development is crucial, encompassing expertise in areas such as AI-driven database auto-tuning, cloud-native database services (e.g., AWS RDS, Azure SQL Database, Google Cloud SQL), and containerization technologies (e.g., Docker, Kubernetes) for deploying and scaling ML workflows. By adopting these best practices, DBAs can ensure the efficiency, reliability, and scalability of data infrastructures essential for successful data science and ML initiatives.
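Of the strategies listed, sharding is the most mechanical; below is a hedged Python sketch of consistent hashing, one common way to assign rows to shards so that adding a node remaps only a small fraction of keys. This is a generic illustration, not tied to any specific product named above.

import bisect
import hashlib

class ConsistentHashRing:
    """Map keys to shards via a hash ring; adding a shard remaps ~1/N of keys."""
    def __init__(self, shards, vnodes=100):
        self.ring = []  # sorted list of (hash_position, shard_name)
        for shard in shards:
            self.add_shard(shard, vnodes)

    def _hash(self, value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add_shard(self, shard, vnodes=100):
        for v in range(vnodes):  # virtual nodes smooth the key distribution
            bisect.insort(self.ring, (self._hash(f"{shard}#{v}"), shard))

    def shard_for(self, key):
        pos = self._hash(key)
        i = bisect.bisect(self.ring, (pos, "")) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["shard-0", "shard-1", "shard-2"])
print(ring.shard_for("patient:12345"))   # deterministic shard assignment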
APA, Harvard, Vancouver, ISO, and other styles
43

Selvaraj, P., Venkatesh Kannan, and Bruno Voisin. "Modified Data Storage and Replication Mechanism with Frequent Use-Case Based Indexing." Journal of Computational and Theoretical Nanoscience 17, no. 12 (2020): 5229–37. http://dx.doi.org/10.1166/jctn.2020.9413.

Full text
Abstract:
Real-time applications demand high-speed and reliable data access from remote databases, making an effective logical data management strategy that handles simultaneous connections with better performance negotiation indispensable. This work considers an e-healthcare application and proposes MongoDB-based modified indexing and performance tuning methods. To cope with certain high-frequency use cases and their performance mandates, a flexible and efficient logical data management scheme may be preferred: by analysing the data dependencies, data decomposition concerns, and performance requirements of the specific use case of the medical application, a logical schema may be customized on an à-la-carte basis. This work focused on flexible logical data modeling schemes and the performance factors of the NoSQL database. The efficiency of unstructured database management in storing and retrieving e-healthcare data was analysed with a web-based tool. To enable faster data retrieval and query processing over the distributed nodes, a Spark-based storage engine was built on top of the MongoDB-based data storage management. With the Spark tool, the database was distributed in master–slave structures with suitable data replication mechanisms; failover was also implemented through the same replication mechanisms. This work combined MongoDB-based flexible schema modeling with Spark-based distributed computation over multiple chunks of data. To facilitate the eventual consistency and scalability aspects of e-healthcare applications, use-case-based indexing was proposed. With effective data management and faster query processing, horizontal scalability was increased. The overall efficiency and scalability of the proposed logical data management approach were analysed, and through simulation studies the proposed approach is claimed to boost the performance of big-data-based applications to a considerable extent.
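As an illustration of use-case-based indexing on MongoDB (the collection and field names are assumptions for an e-healthcare workload, not the paper's actual model), a compound index can be matched to a dominant access pattern with pymongo:

from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")
records = client["ehealth"]["patient_records"]

# Frequent use case: "latest vitals for one patient" -> a compound index that
# matches the query's equality filter (patient_id) and sort order (ts desc).
records.create_index([("patient_id", ASCENDING), ("ts", DESCENDING)],
                     name="by_patient_latest")

latest = records.find({"patient_id": "P-1001"}).sort("ts", -1).limit(10)
for doc in latest:
    print(doc["ts"], doc.get("heart_rate"))

Because the index order mirrors the query's filter-then-sort shape, MongoDB can stream results straight off the index without an in-memory sort.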
APA, Harvard, Vancouver, ISO, and other styles
44

Nida, Bhanu Raju. "SAP Core Data Services (CDS) Views: A Modern Approach to Data Modeling and Performance Optimization in SAP Ecosystem." International Scientific Journal of Engineering and Management 04, no. 03 (2025): 1–7. https://doi.org/10.55041/isjem02351.

Full text
Abstract:
SAP Core Data Services (CDS) Views represent a new approach to data modeling in large enterprises. They can outperform regular SQL Views by pushing code down to the SAP HANA in-memory database. However, their distinctive syntax and query performance tuning requirements mean that developers must learn ABAP CDS, CDS Annotations, and how to leverage SAP HANA in-memory features. If a CDS View is not properly designed, performance can degrade significantly; code pushdown, while beneficial, can become a drawback if developers do not follow best practices when creating CDS Views. Additionally, debugging CDS Views presents challenges: unlike traditional ABAP programs, ABAP Debugger breakpoints cannot be used, requiring SAP ABAP developers to learn tools like the SAP HANA SQL Analyzer and other Performance Trace utilities. Another consideration is that CDS Views are not fully backward compatible with legacy SAP ERP (ECC) systems. To unlock their full potential, including features like CDS Table Functions, companies need SAP HANA and SAP S/4HANA. Therefore, organizations must weigh the disadvantages before implementing CDS Views and carefully assess the cost and effort required to move processing to the database level while ensuring performance gains. Keywords—SAP Core Data Services, CDS Views, HANA, ABAP, Performance Optimization, Analytics
APA, Harvard, Vancouver, ISO, and other styles
45

R, Dhaya. "Analysis of Adaptive Image Retrieval by Transition Kalman Filter Approach based on Intensity Parameter." Journal of Innovative Image Processing 3, no. 1 (2021): 7–20. http://dx.doi.org/10.36548/jiip.2021.1.002.

Full text
Abstract:
Changes in the pixel information of retrieved records are very common in image processing. Image content extraction involves many parameters needed to reconstruct the image and access its information; the intensity level and edge parameters are especially important for reconstruction, and filtering techniques are used to retrieve images matching query images. In this research article, an adaptive Kalman filter function performs image retrieval with better accuracy and higher reliability than the previous existing method, Content-Based Image Retrieval (CBIR). The Kalman filter is incorporated with adaptive feature extraction in a transition framework for fine-tuning the Kalman gain. The feature-vector database analysis provides transparency in choosing images during retrieval from the query image dataset, yielding a higher retrieval rate. The virtual connection is activated once per process to improve the reliability of the procedure. In addition, this research article incorporates an adaptive updating prediction function into the estimation process. The proposed framework is constructed with an adaptive state-transition Kalman filtering technique to improve the retrieval rate. Finally, a retrieval rate of 96.2% was achieved in the image retrieval process, and performance measures such as accuracy, reliability, and computation time were compared with existing methods.
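For readers unfamiliar with the filter at the heart of this approach, the standard predict/update recursion (including the Kalman gain that the paper adaptively tunes) looks as follows in NumPy. The matrices here are generic placeholders, not the paper's feature-specific model.

import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x: state estimate, P: state covariance, z: new measurement,
    F: state transition, H: observation model, Q/R: process/measurement noise."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: K is the Kalman gain, the quantity being fine-tuned above.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1-D example: track an intensity value from noisy observations.
x, P = np.array([0.0]), np.array([[1.0]])
F = H = np.array([[1.0]]); Q = np.array([[1e-4]]); R = np.array([[0.1]])
for z in [0.9, 1.1, 1.0, 0.95]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print(x)  # converges toward ~1.0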
APA, Harvard, Vancouver, ISO, and other styles
46

Zhu, Rong, Lianggui Weng, Wenqing Wei, et al. "PilotScope: Steering Databases with Machine Learning Drivers." Proceedings of the VLDB Endowment 17, no. 5 (2024): 980–93. http://dx.doi.org/10.14778/3641204.3641209.

Full text
Abstract:
Learned databases, or AI4DB techniques, have rapidly developed in the last decade. Deploying machine learning (ML) and AI4DB algorithms into actual databases is the gold standard to examine their performance in practice. However, due to the complexity of database systems, the difference between ML and DB programming paradigms, and the diversity of ML models, the tasks of developing and deploying AI4DB algorithms into databases are prohibitively difficult. Most previous works focus on specific AI4DB algorithms and ML models whose deployment requires close cooperation between ML and DB developers and heavy engineering cost. In this paper, we design and implement PilotScope, an AI4DB middleware with a programming model that largely reduces such difficulties. With a novel abstraction of AI4DB algorithms for, e.g., knob tuning and query optimization, PilotScope consists of two classes of components, AI4DB drivers and DB interactors, with different programming paradigms and roles in AI4DB tasks. ML developers focus on designing and implementing AI4DB drivers, which are algorithmic workflows that collect statistics from databases, train ML models, make decisions, and optimize databases using learned models. AI4DB drivers interact with databases via DB interactors (e.g., for collecting data and enforcing actions in databases). DB developers focus on implementing these interactors on one or more database engines, with the interaction details hidden from ML developers. PilotScope supports a variety of AI4DB tasks, and the implementation of an AI4DB algorithm on PilotScope can be deployed in different databases with only minimal modifications. PilotScope is effective in benchmarking these AI4DB algorithms in real-world scenarios. We hope that PilotScope can significantly accelerate iteration on AI4DB research and make AI4DB techniques truly applicable in production.
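The split between AI4DB drivers and DB interactors can be pictured with a minimal Python interface sketch; the class and method names are our illustrative assumptions, not PilotScope's actual API.

from abc import ABC, abstractmethod

class DBInteractor(ABC):
    """Written by DB developers: hides engine-specific details from ML code."""
    @abstractmethod
    def collect_statistics(self, workload): ...
    @abstractmethod
    def apply_action(self, action): ...   # e.g., set a knob, inject a hint

class KnobTuningDriver:
    """Written by ML developers: an algorithmic workflow over any interactor.
    `model` is a hypothetical object exposing fit() and recommend()."""
    def __init__(self, interactor: DBInteractor, model):
        self.interactor, self.model = interactor, model

    def run(self, workload):
        stats = self.interactor.collect_statistics(workload)  # observe
        self.model.fit(stats)                                  # learn
        action = self.model.recommend()                        # decide
        self.interactor.apply_action(action)                   # act

# The same driver is reusable on PostgreSQL, MySQL, etc. by swapping in a
# different interactor implementation, which is the portability the
# middleware targets.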
APA, Harvard, Vancouver, ISO, and other styles
47

Kraska, Tim. "Towards instance-optimized data systems." Proceedings of the VLDB Endowment 14, no. 12 (2021): 3222–32. http://dx.doi.org/10.14778/3476311.3476392.

Full text
Abstract:
In recent years, we have seen increased interest in applying machine learning to system problems. For example, there has been work on applying machine learning to improve query optimization, indexing, storage layouts, scheduling, log-structured merge trees, sorting, compression, and sketches, among many other data management tasks. Arguably, the ideas behind these techniques are similar: machine learning is used to model the data and/or workload in order to derive a more efficient algorithm or data structure. Ultimately, these techniques will allow us to build "instance-optimized" systems: that is, systems that self-adjust to a given workload and data distribution to provide unprecedented performance without the need for tuning by an administrator. While many of these techniques promise orders-of-magnitude better performance in lab settings, there is still general skepticism about how practical the current techniques really are. The following is intended as a progress report on ML for Systems and its readiness for real-world deployments, with a focus on our projects done as part of the Data Systems and AI Lab (DSAIL) at MIT. By no means is it a comprehensive overview of all existing work, which has been steadily growing over the past several years, not only in the database community but also in the systems, networking, theory, PL, and many other adjacent communities.
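One of the simplest instance-optimized examples mentioned above is a learned index. The toy sketch below fits a linear model from keys to positions in a sorted array and corrects the prediction with a bounded local search; it is a generic illustration of the idea, not any particular system's design.

import numpy as np

class ToyLearnedIndex:
    """Linear-model index over a sorted array with a correction window."""
    def __init__(self, keys):
        self.keys = np.sort(np.asarray(keys))
        pos = np.arange(len(self.keys))
        self.slope, self.intercept = np.polyfit(self.keys, pos, 1)  # fit key->pos
        pred = self.slope * self.keys + self.intercept
        self.max_err = max(1, int(np.ceil(np.max(np.abs(pred - pos)))))

    def lookup(self, key):
        guess = int(self.slope * key + self.intercept)
        lo = max(0, guess - self.max_err)
        hi = min(len(self.keys), guess + self.max_err + 1)
        i = lo + np.searchsorted(self.keys[lo:hi], key)   # bounded local search
        return i if i < len(self.keys) and self.keys[i] == key else None

idx = ToyLearnedIndex(np.arange(0, 1_000_000, 7))  # near-linear key distribution
print(idx.lookup(700))    # position 100
print(idx.lookup(701))    # None (key not present)

The closer the data distribution is to the fitted model, the smaller the correction window, which is exactly the sense in which such a structure is "optimized for the instance".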
APA, Harvard, Vancouver, ISO, and other styles
48

Kraska, Tim, Tianyu Li, Samuel Madden, et al. "Check Out the Big Brain on BRAD: Simplifying Cloud Data Processing with Learned Automated Data Meshes." Proceedings of the VLDB Endowment 16, no. 11 (2023): 3293–301. http://dx.doi.org/10.14778/3611479.3611526.

Full text
Abstract:
The last decade of database research has led to the prevalence of specialized systems for different workloads. Consequently, organizations often rely on a combination of specialized systems, organized in a Data Mesh. Data meshes present significant challenges for system administrators, including picking the right system for each workload, moving data between systems, maintaining consistency, and correctly configuring each system. Many non-expert end users (e.g., data analysts or app developers) either cannot solve their business problems, or suffer from sub-optimal performance or cost due to this complexity. We envision BRAD, a cloud system that automatically integrates and manages data and systems into an instance-optimized data mesh, allowing users to efficiently store and query data under a unified data model (i.e., relational tables) without knowledge of underlying system details. With machine learning, BRAD automatically deduces the strengths and weaknesses of each engine through a combination of offline training and online probing. Then, BRAD uses these insights to route queries to the most suitable (combination of) system(s) for efficient execution. Furthermore, BRAD automates configuration tuning, resource scaling, and data migration across component systems, and makes recommendations for more impactful decisions, such as adding or removing systems. As such, BRAD exemplifies a new class of systems that utilize machine learning and the cloud to make complex data processing more accessible to end users, raising numerous new problems in database systems, machine learning, and the cloud.
APA, Harvard, Vancouver, ISO, and other styles
49

Murali Natti. "Reducing postgreSQL read and write latencies through optimized fillfactor and hot percentages for high-update applications." International Journal of Science and Research Archive 9, no. 2 (2023): 1059–62. https://doi.org/10.30574/ijsra.2023.9.2.0657.

Full text
Abstract:
In PostgreSQL, optimizing performance [1] for high-transaction, high-update applications is crucial for maintaining low latency and high throughput. One of the primary challenges faced in these environments is the default behavior of PostgreSQL, which can lead to row migration, the accumulation of dead tuples, and increased vacuum overhead due to frequent updates to the same rows. When data is updated frequently, PostgreSQL typically writes updated rows into new locations, which can result in row migration and the creation of "dead tuples" (old versions of rows that are no longer needed). This can slow down database performance because the system has to manage and clean up these dead tuples, which requires additional processing time and resources. Furthermore, PostgreSQL’s vacuum process, which is responsible for cleaning up these dead tuples, can add significant overhead, especially during peak transaction times. This paper proposes a performance-tuning strategy aimed at addressing these challenges by optimizing PostgreSQL’s fillfactor and Heap-Only Tuple (HOT) percentages. The fillfactor determines how much space is left in each data page for future updates, and by adjusting it to leave more space, we reduce the need for row migration. Additionally, by maximizing the efficiency of HOT updates—updates that allow changes to be made within the same data block rather than creating new tuples and moving them—we significantly reduce the overhead caused by dead tuples and row migration. By leveraging these two adjustments, this strategy leads to significant reductions in both read and write latencies, improving query performance and overall application responsiveness. This approach is particularly beneficial for applications with high-frequency updates, such as real-time data systems, where data is frequently modified, and transactional workloads, where consistent, low-latency performance is essential. In these environments, even small performance improvements can have a substantial impact on system efficiency and user experience. By focusing on reducing the time spent managing dead tuples and minimizing the need for row migration, PostgreSQL can be tuned to provide better performance and scalability in high-update, high-transaction settings.
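To ground the two knobs discussed above, here is a brief psycopg2 sketch (the connection string and table are illustrative placeholders) that lowers a table's fillfactor to leave free space in each page for HOT updates and then reads the HOT-update counter from pg_stat_user_tables:

import psycopg2

conn = psycopg2.connect("dbname=app user=postgres")   # placeholder DSN
conn.autocommit = True
cur = conn.cursor()

# Leave ~30% of each heap page free so updated row versions can stay on the
# same page, enabling Heap-Only Tuple (HOT) updates instead of row migration.
cur.execute("ALTER TABLE orders SET (fillfactor = 70);")
cur.execute("VACUUM FULL orders;")  # rewrite the table so the setting applies

# After a workload has run, compare HOT updates to total updates.
cur.execute("""SELECT n_tup_upd, n_tup_hot_upd
               FROM pg_stat_user_tables WHERE relname = 'orders';""")
updates, hot_updates = cur.fetchone()
print(f"HOT ratio: {hot_updates / max(updates, 1):.1%}")

A rising HOT ratio after the change indicates that more updates are staying within their original page, which is the latency reduction the paper targets.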
APA, Harvard, Vancouver, ISO, and other styles
50

Raihan Siddik, Muhammad, Mhd Arief Hasan, Andika Fajar Kesuma, Nurmala Sari, Shania Dwi Putri, and Qurrotul Uyun Harahap. "IMPLEMENTASI QUERY TUNING UNTUK PENINGKATAN PERFORMA PADA DATABASE BARANG MINI MARKET NAN." JATI (Jurnal Mahasiswa Teknik Informatika) 9, no. 2 (2025): 3183–87. https://doi.org/10.36040/jati.v9i2.13217.

Full text
Abstract:
Query tuning is a database performance optimization step in SQL Server. It aims to improve query execution efficiency by minimizing the use of resources such as processing time and memory consumption. In practice, query tuning involves analysis of query plans and indexes, along with techniques such as statistics updates, query restructuring, and proper index management. In addition, built-in SQL Server features such as the Database Engine Tuning Advisor and Query Store provide practical guidance for identifying performance bottlenecks. By applying query tuning effectively, the performance of database-backed applications can be improved significantly, ensuring fast and reliable data access. This study explores the main methods of query tuning and their impact on the performance of the SQL Server database system. Applying query tuning in this study yielded significant improvements in query execution efficiency: optimization of the goods (barang) table reduced execution time from 229 ms to 162 ms (29.26%), while a complex query with an additional index dropped from 223 ms to 140 ms (37.22%). Optimization strategies such as identifying slow queries, applying clustered and non-clustered indexes, and query refactoring had a positive impact on system performance, reducing execution time as well as CPU and memory usage.
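To illustrate the kind of non-clustered index applied in the study, here is a Python sketch using pyodbc against SQL Server; the database, table, and column names are generic placeholders inspired by the paper's goods table, not its actual schema.

import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=localhost;DATABASE=MiniMarket;"
                      "Trusted_Connection=yes;")
cur = conn.cursor()

# The study's pattern: identify a slow filtered query, then add a
# non-clustered covering index so the engine avoids table scans and
# key lookups. Before/after timings can be compared via Query Store.
cur.execute("""
    CREATE NONCLUSTERED INDEX IX_Barang_Kategori
    ON dbo.Barang (Kategori)
    INCLUDE (NamaBarang, Harga);   -- covering index for the query below
""")
conn.commit()

rows = cur.execute(
    "SELECT NamaBarang, Harga FROM dbo.Barang WHERE Kategori = ?",
    "minuman").fetchall()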
APA, Harvard, Vancouver, ISO, and other styles