Dissertations / Theses on the topic 'Database Management Systems'

Consult the top 50 dissertations / theses for your research on the topic 'Database Management Systems.'

You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Alkahtani, Mufleh M. "Modeling relational database management systems." Virtual Press, 1993. http://liblink.bsu.edu/uhtbin/catkey/865955.

Abstract:
Almost all of the database products developed over the past few years are based on what is called the relational approach. The purpose of this thesis is to characterize a relational database management system; we do this by studying the relational model in some depth. The relational model is not static; rather, it has been evolving over time. We trace the evolution of the relational model and also consider its ramifications for modern database systems.
Department of Computer Science
2

Beyers, Hector Quintus. "Database forensics : Investigating compromised database management systems." Diss., University of Pretoria, 2013. http://hdl.handle.net/2263/41016.

Abstract:
The use of databases has become an integral part of modern human life. Often the data contained within databases has substantial value to enterprises and individuals. As databases become a greater part of people's daily lives, they become increasingly interlinked with human behaviour. Negative aspects of this behaviour might include criminal activity, negligence and malicious intent. In these scenarios a forensic investigation is required to collect evidence to determine what happened at a crime scene and who is responsible for the crime. A large amount of the available research focuses on digital forensics, database security and databases in general, but little research exists on database forensics as such. It is difficult for a forensic investigator to conduct an investigation on a DBMS due to limited information on the subject and an absence of a standard approach to follow during a forensic investigation. Investigators therefore have to reference disparate sources of information on the topic of database forensics in order to compile a self-invented approach to investigating a database. A subsequent effect of this lack of research is that compromised DBMSs (DBMSs that have been attacked and so behave abnormally) are not considered or understood in the database forensics field. The concept of compromised DBMSs was illustrated in an article by Olivier, who suggested that the ANSI/SPARC model can be used to assist in a forensic investigation on a compromised DBMS. Based on the ANSI/SPARC model, the DBMS was divided into four layers known as the data model, data dictionary, application schema and application data. The extensional nature of the first three layers can influence the application data layer and ultimately manipulate the results produced on the application data layer. Thus, it becomes problematic to conduct a forensic investigation on a DBMS if the integrity of the extensional layers is in question and hence the results on the application data layer cannot be trusted. In order to recover the integrity of a layer of the DBMS, a clean layer (newly installed layer) could be used, but clean layers are not easy or always possible to configure on a DBMS, depending on the forensic scenario. Therefore a combination of clean and existing layers can be used to do a forensic investigation on a DBMS.

PROBLEM STATEMENT: The problem to be addressed is how to construct the appropriate combination of clean and existing layers for a forensic investigation on a compromised DBMS, and how to ensure the integrity of the forensic results.

APPROACH: The study divides the relational DBMS into four abstract layers, illustrates how the layers can be prepared to be either in a found or clean forensic state, and experimentally combines the prepared layers of the DBMS according to the forensic scenario. The study commences with background on the subjects of databases, digital forensics and database forensics respectively, to give the reader an overview of the literature that already exists in these fields. The study then discusses the four abstract layers of the DBMS and explains how the layers could influence one another. The clean and found environments are introduced because the DBMS differs from technologies where digital forensics has already been researched. The study then discusses each of the extensional abstract layers individually, and how and why an abstract layer can be converted to a clean or found state. A discussion of each extensional layer is required to understand how unique each layer of the DBMS is and how these layers could be combined in a way that enables a forensic investigator to conduct a forensic investigation on a compromised DBMS. It is illustrated that each layer is unique and could be corrupted in various ways. Therefore, each layer must be studied individually in a forensic context before all four layers are considered collectively. A forensic study is conducted on each abstract layer of the DBMS that has the potential to influence other layers and so deliver incorrect results. Ultimately, the DBMS is used as a forensic tool to extract evidence from its own encrypted data and data structures. The last chapter therefore illustrates how a forensic investigator can prepare a trustworthy forensic environment in which a forensic investigation can be conducted on an entire PostgreSQL DBMS, by constructing a combination of the appropriate forensic states of the abstract layers.

RESULTS: This study yields an empirically demonstrated approach to dealing with a compromised DBMS during a forensic investigation by making use of a combination of various states of abstract layers in the DBMS. Approaches are suggested for dealing with a forensic query on the data model, data dictionary and application schema layers of the DBMS. A forensic process is suggested for preparing the DBMS so that evidence can be extracted from it. The study also advises forensic investigators to consider alternative possibilities for how the DBMS could be attacked; these alternatives might not have been considered during investigations on DBMSs to date. Our methods have been tested by means of a practical example and have delivered promising results.

Dissertation (MEng)--University of Pretoria, 2013.
Electrical, Electronic and Computer Engineering
3

Fredstam, Marcus, and Gabriel Johansson. "Comparing database management systems with SQLAlchemy : A quantitative study on database management systems." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-155648.

Abstract:
Which database management system to use for a project is difficult to know in advance. Luckily, there are tools that help the developer apply the same database design to multiple different database management systems without having to change the code. In this thesis, we investigate the strengths of SQLAlchemy, an SQL toolkit for Python. We compared SQLite, PostgreSQL and MySQL using SQLAlchemy, and also compared a pure MySQL implementation against the SQLAlchemy results. We conclude that, for our database design, PostgreSQL was the best database management system and that, for the average SQL user, SQLAlchemy is an excellent substitute for writing plain SQL.
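To illustrate the portability claim at the heart of this thesis: a minimal SQLAlchemy sketch (not the authors' code; the table, names and connection URLs are invented) in which the same schema definition is reused across backends by swapping only the connection URL.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class User(Base):
    # One portable table definition; SQLAlchemy emits the right DDL per backend.
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

# Only the URL changes between database management systems.
urls = [
    "sqlite:///test.db",
    # "postgresql+psycopg2://user:pass@localhost/testdb",
    # "mysql+pymysql://user:pass@localhost/testdb",
]

for url in urls:
    engine = create_engine(url)
    Base.metadata.create_all(engine)   # same schema, different DBMS
    with Session(engine) as session:
        session.add(User(name="alice"))
        session.commit()
        print(url, session.query(User).count())
```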
4

Liang, Xing, and Yongyu Lu. "EVALUATION OF DATABASE MANAGEMENT SYSTEMS." Thesis, Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-6255.

Abstract:
Qualitative and quantitative analyses of different database management systems (DBMS) were performed in order to identify and compare those which address requirements such as public-domain licensing, freedom from charges, strong product support, ADO.NET Entity Framework compatibility, good performance and referential integrity, among others. More than 20 existing database management systems were selected as possible candidates. Qualitative analysis reduced that number to 4 candidate DBMSs (PostgreSQL, SQLite, Firebird and MySQL). Quantitative analysis was then used to test the performance of these 4 DBMSs while performing the most common structured query language (SQL) data manipulation statements (INSERT, UPDATE, DELETE and SELECT). Referential integrity and ease of installation were also evaluated for these 4 DBMSs. As a result, Firebird emerged as the DBMS that best addressed all desired requirements.
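A sketch of the kind of measurement such a quantitative analysis performs, timing the four DML statements. This uses Python's built-in sqlite3 module for self-containment; the table and row count are arbitrary, not the study's actual harness.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT)")

def timed(label, fn):
    # Measure one operation with a monotonic high-resolution clock.
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.4f} s")

rows = [(i, f"value-{i}") for i in range(10_000)]
timed("INSERT", lambda: (conn.executemany("INSERT INTO items VALUES (?, ?)", rows), conn.commit()))
timed("SELECT", lambda: conn.execute("SELECT COUNT(*) FROM items").fetchone())
timed("UPDATE", lambda: (conn.execute("UPDATE items SET val = 'x' WHERE id % 2 = 0"), conn.commit()))
timed("DELETE", lambda: (conn.execute("DELETE FROM items WHERE id % 2 = 1"), conn.commit()))
```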
5

Peng, Rui. "Live video database management systems." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4609.

Abstract:
With the proliferation of inexpensive cameras and the availability of high-speed wired and wireless networks, networks of distributed cameras are becoming an enabling technology for a broad range of interdisciplinary applications in domains such as public safety and security, manufacturing, transportation, and healthcare. Today's live video processing systems on networks of distributed cameras, however, are designed for specific classes of applications. To provide a generic query processing platform for applications of distributed camera networks, we designed and implemented a new class of general purpose database management systems, the live video database management system (LVDBMS). We view networked video cameras as a special class of interconnected storage devices, and allow the user to formulate ad hoc queries over real-time live video feeds. In the first part of this dissertation, an Internet scale framework for sharing and dissemination of general sensor data is presented. This framework provides a platform for general sensor data to be published, searched, shared, and delivered across the Internet. The second part is the design and development of a Live Video Database Management System. LVDBMS allows users to easily focus on events of interest from a multitude of distributed video cameras by posing continuous queries on the live video streams. In the third part, a distributed in-memory database approach is proposed to enhance the LVDBMS with an important capability of tracking objects across cameras.
Thesis (Ph.D.)--University of Central Florida, 2010. Includes bibliographical references (p. 96-101).
Department of Electrical Engineering and Computer Science
6

Lo, Chi Lik Eric. "Test automation for database management systems and database applications /." Zürich : ETH, 2007. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17271.

7

Bhasker, Bharat. "Query processing in heterogeneous distributed database management systems." Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/39437.

Abstract:
The goal of this work is to present an advanced query processing algorithm formulated and developed in support of heterogeneous distributed database management systems. Heterogeneous distributed database management systems view the integrated data through a uniform global schema. The query processing algorithm described here produces an inexpensive strategy for a query expressed over the global schema. The research addresses the following aspects of query processing: (1) formulation of a low-level query language to express the fundamental heterogeneous database operations; (2) translation of a query expressed over the global schema to an equivalent query expressed over a conceptual schema; (3) an estimation methodology to derive the intermediate result sizes of the database operations; (4) a query decomposition algorithm to generate an efficient sequence of the basic database operations to answer the query. This research addressed the first issue by developing an algebraic query language called cluster algebra. The cluster algebra consists of the following operations: (a) selection, union, intersection and difference, which are extensions of their relational algebraic counterparts to heterogeneous databases; (b) normal-join and normal-projection, which replace their counterparts, join and projection, in the relational algebra; (c) two new operators, embed and unembed, to restructure the database schema. The second issue, query translation, was addressed by the development of an algorithm that translates a cluster algebra query expressed over the virtual views to an equivalent cluster algebra query expressed over the conceptual databases. A non-parametric estimation methodology to estimate the result size of a cluster algebra operation was developed to address the third issue. Finally, this research developed a query decomposition algorithm, applicable to relational and non-relational databases, that decomposes a query by computing all profitable semi-join operations, followed by the determination of the best sequence of join operations per processing site. The join optimization is performed by formulating a zero-one integer linear program that uses the non-parametric estimation technique to compute the sizes of intermediate results. The query processing algorithm was implemented in the context of DAVID, a heterogeneous distributed database management system.
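For readers unfamiliar with the semi-join reduction mentioned above, a toy sketch of the idea with invented data and relations held as plain lists: only the join-column values travel between sites, shrinking the relation shipped back for the final join.

```python
# Site A holds orders; site B holds customers. A semi-join ships only the
# join keys of the orders to site B, so that B returns just the customers
# that will actually participate in the join.
orders = [(1, "c1"), (2, "c3"), (3, "c1")]                               # at site A
customers = [("c1", "Ada"), ("c2", "Bob"), ("c3", "Cy"), ("c4", "Dee")]  # at site B

join_keys = {cust_id for _, cust_id in orders}         # small set shipped A -> B
reduced = [c for c in customers if c[0] in join_keys]  # semi-join evaluated at B

# The final join at site A touches only the reduced relation shipped back.
result = [(oid, cid, name) for (oid, cid) in orders
          for (cid2, name) in reduced if cid == cid2]
print(result)
```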
Ph. D.
8

Goralwalla, Iqbal A. "Temporality in object database management systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ29042.pdf.

9

Karatasios, Labros G. "Software engineering with database management systems." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27272.

10

Moolman, George Christiaan. "A relational database management systems approach to system design." Thesis, Virginia Tech, 1992. http://hdl.handle.net/10919/43628.

Abstract:
Systems are developed to fulfill certain requirements. Several system design configurations usually can fulfill the technical requirements, but at different equivalent life-cycle costs. The problem is how to manipulate and evaluate different system configurations so that the required system effectiveness can be achieved at a minimum equivalent cost. It is also important to have a good definition of all the major consequences of each design configuration. For each alternative configuration considered, it is useful to know the number of units to deploy, the inventory and other logistic requirements, as well as the sensitivity of the system to changes in input variable values. An intelligent relational database management system is defined to solve the problem described. Table structures are defined to maintain the required data elements and algorithms are constructed to manipulate the data to provide the necessary information. The methodology is as follows: Customer requirements are analyzed in functional terms. Feasible design alternatives are considered and defined as system design configurations. The reliability characteristics of each system configuration are determined, initially from a system-level allocation, and later determined from test and evaluation data. A maintenance analysis is conducted to determine the inventory requirements (using reliability data) and the other logistic requirements for each design configuration. A vector of effectiveness measures can be developed for each customer, depending on objectives, constraints, and risks. These effectiveness measures, consisting of a combination of performance and cost measures, are used to aid in objectively deciding which alternative is preferred. Relationships are defined between the user requirements, the reliability and maintainability of the system, the number of units deployed, the inventory level, and other logistic characteristics of the system. A heuristic procedure is developed to interactively manipulate these parameters to obtain a good solution to the problem with technical performance and cost measures as criteria. Although it is not guaranteed that the optimal solution will be found, a feasible solution close to the optimal will be found. Eventually the user will have, at any time, the ability to change the value of any parameter modeled. The impact on the total system will subsequently be made visible.
Master of Science
11

Moolman, G. Chris. "A relational database management systems approach to system design /." This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-07102009-040421/.

12

Scheuerl, S. "Modelling recovery in database systems." Thesis, University of St Andrews, 1998. http://hdl.handle.net/10023/13482.

Abstract:
The execution of modern database applications requires the co-ordination of a number of components such as: the application itself, the DBMS, the operating system, the network and the platform. The interaction of these components makes understanding the overall behaviour of the application a complex task. As a result, the effectiveness of optimisations is often difficult to predict. Three techniques commonly available to analyse system behaviour are empirical measurement, simulation-based analysis and analytical modelling. The ideal technique is one that provides accurate results at low cost. This thesis investigates the hypothesis that analytical modelling can be used to study the behaviour of DBMSs with sufficient accuracy. In particular, the work focuses on a new model for costing recovery mechanisms called MaStA and determines whether the model can be used effectively to guide the selection of mechanisms. To verify the effectiveness of the model, a validation framework is developed. Database workloads are executed on the flexible Flask architecture on different platforms. Flask is designed to minimise the dependencies between DBMS components and is used in the framework to allow the same workloads to be executed on various recovery mechanisms. Empirical analysis of executing the workloads is used to validate the assumptions about CPU, I/O and workload that underlie MaStA. Once validated, the utility of the model is illustrated by using it to select the mechanisms that provide optimum performance for given database applications. By showing that analytical modelling can be used in the selection of recovery mechanisms, the work presented makes a contribution towards a database architecture in which the implementation of all components may be selected to provide optimum performance.
13

Lepinioti, Konstantina. "Data mining and database systems : integrating conceptual clustering with a relational database management system." Thesis, Bournemouth University, 2011. http://eprints.bournemouth.ac.uk/17765/.

Abstract:
Many clustering algorithms have been developed and improved over the years to cater for large scale data clustering. However, much of this work has been in developing numeric based algorithms that use efficient summarisations to scale to large data sets. There is a growing need for scalable categorical clustering algorithms as, although numeric based algorithms can be adapted to categorical data, they do not always produce good results. This thesis presents a categorical conceptual clustering algorithm that can scale to large data sets using appropriate data summarisations. Data mining is distinguished from machine learning by the use of larger data sets that are often stored in database management systems (DBMSs). Many clustering algorithms require data to be extracted from the DBMS and reformatted for input to the algorithm. This thesis presents an approach that integrates conceptual clustering with a DBMS. The presented approach makes the algorithm main memory independent and supports on-line data mining.
14

Ibrahim, Karim. "Management of Big Annotations in Relational Database Management Systems." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/272.

Abstract:
Annotations play a key role in understanding and describing data, and annotation management has become an integral component of most emerging applications such as scientific databases. Scientists need to exchange not only data but also their thoughts, comments and annotations on the data. Annotations represent comments, lineage of data, descriptions and much more. Therefore, several annotation management techniques have been proposed to handle annotations efficiently and abstractly. However, with the increasing scale of collaboration and the extensive use of annotations among users and scientists, the number and size of the annotations may far exceed the size of the original data itself, and current annotation management techniques do not address large-scale annotation management. In this work, we tackle Big annotations from three different perspectives: (1) user-centric annotation propagation, (2) proactive annotation management and (3) InsightNotes summary-based querying. We capture users' preferences in profiles and personalize annotation propagation at query time by reporting the most relevant annotations (per tuple) for each user based on a time plan. We provide three time-based plans and support static and dynamic profiles for each user. We support proactive annotation management, which suggests data tuples to be annotated when a new annotation references a data value that the user has not annotated precisely. Moreover, we extend InsightNotes (summary-based annotation management in relational databases) with a query language that enables the user to query annotation summaries and add predicates on the summaries themselves. Our system is implemented inside PostgreSQL.
15

Zou, Beibei 1974. "Data mining with relational database management systems." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82456.

Abstract:
With the increasing demands of transforming raw data into information and knowledge, data mining has become an important field for the discovery of useful information and hidden patterns in huge datasets. Both machine learning and database research have made major contributions to the field of data mining. However, little effort has been made to improve the scalability of the algorithms applied in data mining tasks. Scalability is crucial for data mining algorithms, since they often have to handle large datasets. In this thesis we take a step in this direction by extending a popular machine learning package, Weka 3.4, to handle large datasets that cannot fit into main memory by relying on relational database technology. Weka3.4-DB is implemented to store data in, and access data from, DB2 using a generally loose coupling approach. Additionally, a semi-tight coupling is applied to optimize the data manipulation methods by implementing core functionality within the database. Based on the DB2 storage implementation, Weka3.4-DB achieves better scalability while still providing a general interface for developers to implement new algorithms without the need for database or SQL knowledge.
16

Dempster, Euan W. "Performance prediction for parallel database management systems." Thesis, Heriot-Watt University, 2004. http://hdl.handle.net/10399/341.

17

Helmer, Sven. "Performance enhancements for advanced database management systems /." [S.l. : s.n.], 2000. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB8952361.

18

Jeong, Byeong-Soo. "Indexing in parallel database systems." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/8189.

19

Fu, Gregory Chung Yin. "Skyline queries in database systems /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?COMP%202003%20FU.

Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 51-52). Also available in electronic version. Access restricted to campus users.
20

Mühlberger, Ralf Maximilian. "Data management for interoperable systems /." [St. Lucia, Qld.], 2001. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe16277.pdf.

21

Jakobovits, Rex M. "The Web interfacing repository manager : a framework for developing web-based experiment management systems /." Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/7007.

22

Kanne, Carl-Christian. "Core technologies for native XML database management systems." [S.l. : s.n.], 2003. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10605041.

23

Scott, Heidi. "User-level I/O for database management systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ59400.pdf.

24

Tomov, Neven T. "Modelling parallel database management systems for performance prediction." Thesis, Heriot-Watt University, 1999. http://hdl.handle.net/10399/1263.

25

Eaglestone, B. M. "Semantic-constraint modelling for small database management systems." Thesis, University of Huddersfield, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.354878.

26

Sayli, Ayla. "Semantic query optimization in relational database management systems." Thesis, University of Essex, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284608.

27

CASTRO, LIESTER CRUZ. "TUNING OF DATABASE MANAGEMENT SYSTEMS IN VIRTUALIZED ENVIRONMENTS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2017. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=34071@1.

Abstract:
Due to the huge amount of data in current applications, Relational Database Management Systems (RDBMS) are increasingly used in virtualized environments. This increases the input/output (I/O) requirements of the related workloads, and device virtualization and virtual machine scheduling introduce a large overhead for I/O-intensive applications. The goal of our work is to propose strategies that improve the performance of the I/O operations managed by RDBMSs in virtualized environments. Through an intelligent allocation of computational resources, we fine-tune both the actions of the virtualized environment's scheduler and the parameters of the databases involved. To this end, we developed a system that works in coordination with the different virtualization layers. We present experiments that evaluate and measure the impact of the proposed approach.
28

Chan, Francis. "Knowledge management in Naval Sea Systems Command : a structure for performance driven knowledge management initiative." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02sep%5FChan.pdf.

Abstract:
Thesis (M.S. in Product Development)--Naval Postgraduate School, September 2002.
Thesis advisor(s): Mark E. Nissen, Donald H. Steinbrecher. Includes bibliographical references (p. 113-117). Also available online.
29

Coats, Sidney M. (Sidney Mark). "The Object-Oriented Database Editor." Thesis, University of North Texas, 1989. https://digital.library.unt.edu/ark:/67531/metadc500921/.

Abstract:
Because of an interest in object-oriented database systems, designers have created systems to store and manipulate specific sets of abstract data types that belong to the real world environment they represent. Unfortunately, the advantage of these systems is also a disadvantage since no single object-oriented database system can be used for all applications. This paper describes an object-oriented database management system called the Object-oriented Database Editor (ODE) which overcomes this disadvantage by allowing designers to create and execute an object-oriented database that represents any type of environment and then to store it and simulate that environment. As conditions within the environment change, the designer can use ODE to alter that environment without loss of data. ODE provides a flexible environment for the user; it is efficient; and it can run on a personal computer.
30

Landbris, Johan. "A Non-functional evaluation of NoSQL Database Management Systems." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-46804.

Abstract:
NoSQL is basically a family name for all database management systems (DBMS) that are not relational DBMSs. The fast growth of social networks has led to a huge amount of unstructured data that NoSQL DBMSs are supposed to handle better than relational DBMSs. Most comparisons performed are between relational and NoSQL DBMSs. In this paper, the comparison is instead about non-functional properties of different types of NoSQL DBMSs. Three of the most common NoSQL types are document stores, key-value stores and column stores, and the most used DBMSs of those types are MongoDB, Redis and Apache Cassandra. After working with the databases and performing YCSB benchmarking, the conclusion is that if the database should handle an enormous amount of data, Cassandra is most probably the best choice. If speed is the most important property and all data fits within memory, Redis is probably the best-suited database. If the database needs to be flexible and versatile, MongoDB is probably the best choice.
31

Clark, Allan M. "A framework for monitoring resources in database management systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ36015.pdf.

32

Chaudhri, Akmal Bashir. "A systematic performance study of object database management systems." Thesis, City University London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.243885.

33

Floratos, Sofoklis. "High Performance Iterative Processing in Relational Database Management Systems." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1605909940057503.

34

Jiao, Toni C. "Database development and intranet based image included database management system for ballistic firearm identification system." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2003. https://ro.ecu.edu.au/theses/1551.

Abstract:
The process of imaging, collecting and searching a cartridge case to identify its suspected firearm is a time-consuming procedure. Within this study, a cartridge case identification database management system in an Intranet environment is designed and implemented, enabling firearm examiners from different forensic laboratories to engage in firearm identification without the constraints of time and location. Specifically, the study investigates an appropriate database management system for an image-based, Intranet-secured ballistic firearm identification database. The results demonstrate that a computerized firearm identification system can be implemented on an Intranet with a secure, scalable and performant Intranet database management system.
35

Fischer, Ulrike. "Forecasting in Database Systems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-133281.

Abstract:
Time series forecasting is a fundamental prerequisite for decision-making processes and crucial in a number of domains such as production planning and energy load balancing. In the past, forecasting was often performed by statistical experts in dedicated software environments outside of current database systems. However, forecasts are increasingly required by non-expert users or have to be computed fully automatically without any human intervention. Furthermore, we can observe an ever increasing data volume and the need for accurate and timely forecasts over large multi-dimensional data sets. As most data subject to analysis is stored in database management systems, a rising trend addresses the integration of forecasting inside a DBMS. Yet, many existing approaches follow a black-box style and try to keep changes to the database system as minimal as possible. While such approaches are more general and easier to realize, they miss significant opportunities for improved performance and usability. In this thesis, we introduce a novel approach that seamlessly integrates time series forecasting into a traditional database management system. In contrast to flash-back queries that allow a view on the data in the past, we have developed a Flash-Forward Database System (F2DB) that provides a view on the data in the future. It supports a new query type - a forecast query - that enables forecasting of time series data and is automatically and transparently processed by the core engine of an existing DBMS. We discuss necessary extensions to the parser, optimizer, and executor of a traditional DBMS. We furthermore introduce various optimization techniques for three different types of forecast queries: ad-hoc queries, recurring queries, and continuous queries. First, we ease the expensive model creation step of ad-hoc forecast queries by reducing the amount of processed data with traditional sampling techniques. Second, we decrease the runtime of recurring forecast queries by materializing models in a specialized index structure. However, a large number of time series as well as high model creation and maintenance costs require a careful selection of such models. Therefore, we propose a model configuration advisor that determines a set of forecast models for a given query workload and multi-dimensional data set. Finally, we extend forecast queries with continuous aspects allowing an application to register a query once at our system. As new time series values arrive, we send notifications to the application based on predefined time and accuracy constraints. All of our optimization approaches intend to increase the efficiency of forecast queries while ensuring high forecast accuracy.
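To make the forecast-query concept concrete, a hedged illustration: the FORECAST clause below is invented for exposition and is not claimed to be F2DB's actual grammar. The point is that the future-looking request becomes part of the SQL the engine itself plans and optimizes.

```python
# Hypothetical forecast-query syntax, issued through a standard DB-API
# connection; the FORECAST clause and its horizon argument are illustrative.
query = """
    SELECT month, SUM(load_mw)
    FROM energy_demand
    GROUP BY month
    FORECAST 12            -- ask the engine for 12 future values
"""
# cursor.execute(query)    # an F2DB-style engine would plan this like any query
# print(cursor.fetchall()) # historical rows followed by forecast rows
```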
36

Zhang, Heng. "Efficient database management based on complex association rules." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-31917.

Abstract:
The large amount of data accumulated by applications is stored in a database. Because of this large amount, name conflicts or missing values sometimes occur, which prevents certain types of analysis. In this work, we solve the name conflict problem by comparing the similarity of the data and changing the test data into the form of a given template dataset. Studies on data use many methods to discover knowledge from a given dataset. One popular method is association rules mining, which can find associations between items. This study unifies the incomplete data based on association rules. However, most rules produced by traditional association rules mining are item-to-item rules, which are a less than perfect solution to the problem. The data recovery system is based on complex association rules and is able to find two more types of association rules: prefix pattern-to-item rules and suffix pattern-to-item rules. Using complex association rules, several missing values are filled in. In order to find the frequent prefixes and frequent suffixes, the system uses an FP-tree to reduce time, cost and redundancy. A phrase segmentation method, which splits a sentence into several phrases based on the stickiness of adjacent words, can also be used with this system. Additionally, methods such as data compression and hash maps were used to speed up the search.
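A toy sketch of the repair idea: mine item-to-item co-occurrences from complete rows, then use the strongest rule to fill a missing value. The data and the scoring are invented for illustration; the thesis's prefix/suffix pattern rules generalize this simple case.

```python
from collections import Counter

rows = [
    ("London", "UK"), ("Paris", "FR"), ("London", "UK"),
    ("Paris", "FR"), ("London", None),   # a row with a missing country
]

# Count city -> country co-occurrences over complete rows only.
pairs = Counter((city, country) for city, country in rows if country)

def fill(city):
    # Pick the most frequent consequent for this antecedent, if any exists.
    candidates = {co: n for (c, co), n in pairs.items() if c == city}
    return max(candidates, key=candidates.get) if candidates else None

repaired = [(c, co if co else fill(c)) for c, co in rows]
print(repaired)   # the None is filled with 'UK'
```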
37

Mishra, Rajesh S. M. Massachusetts Institute of Technology. "Information transmission in the MIMIC II clinical database." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/59265.

Abstract:
Thesis (S.M. in System Design and Management)--Massachusetts Institute of Technology, Engineering Systems Division, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 127-129).
The promise of the Electronic Medical Record (EMR) to store, retrieve, and communicate medical information effectively for a caregiver team has remained largely unfulfilled since its advent in the late 1960's. Previous studies have cited that the communication function of the EMR is critical to its successful adoption. Based on Mediated Agent Interaction theory, this study proposes a message-based model of transmission of clinical information in the EMR. This model is implemented on an existing ICU clinical database, MIMIC II, to produce a database of transmission events. Three metrics for information transmission are derived from exploratory and object-attribute analyses: transmission volume, duration, and load (or rate). Also derived is a set of features that includes a patient's clinical conditions (with acuity scores and mortality), caregiver type and distribution, care-unit locations, duration of care, and types of clinical records. This list of features is reduced to a set of explanatory variables using correlation and univariate logistic regression. Bayesian Network (BN) models are constructed to predict levels of the transmission metrics. BN models show high prediction accuracy for measuring various levels of messaging volume and load, but marginal accuracy for messaging duration. Results from these methods suggest that the volume of information transmitted in the ICU for adult patients is primarily through charts entered by nurses and respiratory technicians (RTs). The amount of data recorded by RTs increases for patients with higher acuity scores, but transmission from nurses decreases for these patients. The rate at which information is transmitted in the ICU for adult patients is directly related to the rate at which notes and charts are entered, as well as the care-unit location where the data is recorded. Further study is required to investigate factors influencing the length of time information is transmitted in the ICU. This study's findings are based on data recorded by caregivers as clinical observations. Further study is necessary to corroborate these results with clinical communications data, including evidence of reception of clinical information by caregivers. The model proposed by this study may be used as a basis for future research and to discover other patterns of clinical communications.
by Rajesh Mishra.
S.M. in System Design and Management
38

Sheth, Amit Pravin. "Adaptive concurrency control for distributed database systems /." The Ohio State University, 1985. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487262513408523.

39

Jansson, Jens, Alexandar Vukosavljevic, and Ismet Catovic. "Performance comparison between multi-model, key-value and documental NoSQL database management systems." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19857.

Abstract:
This study conducted an experiment that compares the multi-model NoSQL DBMS ArangoDB with other NoSQL DBMSs in terms of the average response time of queries. The DBMSs compared in this experiment are the following: Redis, MongoDB, Couchbase and OrientDB. The hypothesis answered in this study is the following: "There is a significant difference between ArangoDB, OrientDB, Couchbase, Redis and MongoDB in terms of the average response time of queries". This is examined by comparing the average response time of 1 000, 100 000 and 1 000 000 queries between these database systems. The results show that ArangoDB performs worse than the other DBMSs. Examples of future work include using additional DBMSs in the same experiment and replacing ArangoDB with another multi-model DBMS to decide whether such DBMSs in general perform worse than single-model DBMSs.
40

Visavapattamawon, Suwanna. "Application of active rules to support database integrity constraints and view management." CSUSB ScholarWorks, 2001. https://scholarworks.lib.csusb.edu/etd-project/1981.

Abstract:
The project demonstrates the enforcement of integrity constraints in both the conventional and active database systems. The project implements a more complex user-defined constraint, a complicated view and more detailed database auditing on the active database system.
41

Bryant, Miranda A. "Representing meaningful provenance in scientific workflow systems." Laramie, Wyo. : University of Wyoming, 2007. http://proquest.umi.com/pqdweb?did=1402176091&sid=1&Fmt=2&clientId=18949&RQT=309&VName=PQD.

42

Hoecherl, Joseph A. "A prototype web-enabled information management and decision support system for Army aviation logistics management." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Sep%5FHoecherl.pdf.

43

Huang, Jianyuan. "Computer science graduate project management system." CSUSB ScholarWorks, 2007. https://scholarworks.lib.csusb.edu/etd-project/3250.

Abstract:
This project is a development and tracking system for graduate students in the Department of Computer Science of CSUSB. It covers front-end web site development, back-end database design and security. The website provides secure access to information about ideas for projects, the status of ongoing projects, and reports of finished projects, using MySQL and Apache Tomcat.
44

Ritsch, Roland. "Optimization and evaluation of array queries in database management systems." [S.l. : s.n.], 1999. http://deposit.ddb.de/cgi-bin/dokserv?idn=959772502.

45

Ryeng, Norvald. "Improving Query Processing Performance in Large Distributed Database Management Systems." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-14695.

Abstract:
The dream of computing power as readily available as the electricity in a wall socket is coming closer to reality with the arrival of grid and cloud computing. At the same time, databases grow to sizes beyond what can be efficiently managed by single server systems. There is a need for efficient distributed database management systems (DBMSs). Current distributed DBMSs are not built to scale to more than tens or hundreds of sites (i.e., nodes or computers). Users of grid and cloud computing expect not only almost infinite scalability, i.e., at least to thousands of sites, but also that the scale is adapted automatically to meet the demand, whether it increases or decreases. This is a challenge to current distributed DBMSs. In this thesis, the focus is on how to improve performance of query processing in large distributed DBMSs where coordination between sites has been reduced in order to increase scalability. The challenge is for the sites to make decisions that are globally beneficial when their view of the complete system is limited. The main contributions of this thesis are methods to increase failure resilience of aggregation queries, adaptively place data on different sites and locate these sites afterwards, and cache intermediate results of query processing. The study of failure resilience in aggregation queries presented in this thesis shows that different aggregation functions react differently to failures and that countermeasures must be adapted to each function. A low-cost method to increase accuracy is proposed. The dynamic data placement method presented in this thesis allows data to be fragmented, allocated, and replicated to adapt to the current system configuration and workload. Fragments are split, coalesced, reallocated, and replicated during query processing to improve query processing performance by allowing more data to be accessed locally. The proposed lookup method uses range indexing to make it possible to efficiently identify the sites that store relevant data for a query, with low overhead when data is updated. During query execution, a number of intermediate results are produced, and this thesis proposes a method to cache these results and use them to answer other, similar queries. In particular, a caching method to improve execution times of top-k queries is presented. Results of experiments in simulators and on an implementation in the DASCOSA-DB distributed DBMS prototype show that these methods lead to significant savings in query execution time.
46

Nguyen, Long Phi M. Eng Massachusetts Institute of Technology. "Exploring learned join algorithm selection in relational database management systems." Thesis, Massachusetts Institute of Technology, 2021. https://hdl.handle.net/1721.1/130706.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021
Cataloged from the official PDF of thesis.
Includes bibliographical references (page 81).
Query optimizers, crucial components of relational database management systems, are responsible for generating efficient query execution plans. Despite many advances in the database community over the last few decades, most popular relational database management systems today still use cost-based optimizers that do not always model the underlying data's characteristics accurately. These cost-based optimizers brutally slow down a query if they make even one gross underestimate of a database table's cardinality. In this work, we improve on native cost-based optimizer performance by identifying the most ideal join algorithms for query execution plans in two popular relational database management systems, PostgreSQL and Microsoft SQL. First, we gather baseline query execution times for the entire IMDb Join Order Benchmark under different subsets of usable join algorithms to show that no subset yields high performance across all queries. We then show that it is feasible to use deep reinforcement learning to choose one of these subsets for each query seen and achieve far better performance on the intensive JOB queries. Finally, we introduce the idea of k-edits, showing results that indicate that for some queries, isolating just 1 "bad" join and changing its join algorithm can yield better performance. Our work suggests that reinforcement learning with both coarse and fine decisions shows huge potential for the future of query optimization and relational database management systems.
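PostgreSQL does expose per-session planner flags that disable individual join algorithms, which is the kind of knob a learned selector such as this can act on. A minimal sketch of forcing one subset (the connection string and query are illustrative; psycopg2 assumed):

```python
import psycopg2

# Connection string is hypothetical; any JOB-style IMDb schema would do.
conn = psycopg2.connect("dbname=imdb")
cur = conn.cursor()

# Restrict the planner to a subset of join algorithms for this session,
# mirroring the subsets compared above: here, hash joins only.
cur.execute("SET enable_nestloop = off")
cur.execute("SET enable_mergejoin = off")
cur.execute("SET enable_hashjoin = on")

# Inspect the plan the restricted optimizer now chooses.
cur.execute("EXPLAIN SELECT * FROM title t JOIN cast_info ci ON t.id = ci.movie_id")
for (line,) in cur.fetchall():
    print(line)
```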
by Long Phi Nguyen.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
47

Ansari, Hamon. "Performance Comparison of Two Database Management Systems MySQL vs MongoDB." Thesis, Umeå universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-155398.

Abstract:
Databases are commonly used today in a vast number of applications. The main point of using databases is to be able to store and access data quickly and securely. These databases need to be able to perform different operations as fast as possible without losing data. The two main database technologies used today are NoSQL and SQL (Structured Query Language) databases. NoSQL is an umbrella term for all DBMSs (database management systems) which do not use SQL the way relational databases do; NoSQL stands for non-SQL, non-relational or not only SQL. In this thesis, one DBMS from each database technology is compared to the other. The comparison is based on space allocation when they contain different amounts of records, and on time performance when executing different operations on different amounts of records. The operations tested for speed performance were insertion, select, update and remove. The results showed that MySQL allocated less space when containing large amounts of records, while MongoDB was faster in almost all test cases for every operation.
48

Bartlang, Udo. "Architecture and methods for flexible content management in peer-to-peer systems." Wiesbaden : Vieweg + Teubner Research, 2010. http://site.ebrary.com/id/10383036.

49

Agrawal, Tanu. "Fear and desire in systems design : negotiating database usefulness." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/42392.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, 2008.
Includes bibliographical references (p. 224-235).
Databases are ubiquitous. They are used for a host of functions including coordination, decision making, and memory archiving. Despite their importance and ubiquity, databases continue to frustrate us, often departing from the goals originally set for them. If databases are such essential ingredients for organizations, what diminishes their usefulness? Besides the nature of the data that is entered into the database, usefulness is also shaped by the fields, features, and functionalities that the database designers originally construct, which then shape the kind of data that can be entered into the system. This dissertation examines the process of database design and the assumptions and concerns adopted by the stakeholders involved in it. I conducted a year-long ethnographic study of a university that has been engaged in creating a self-sustaining Environment Health and Safety system to manage research-related hazards and to ensure regulatory compliance. The integrated database system was envisioned as a tool that would allow the university to observe and improve compliance practices while keeping records that would be available for self-auditing and government inspection. My research observations suggest that actors imagine diverse purposes that the database, when complete, should serve. These purposes - entailing the three themes of accountability, efficiency and comparability - appear to guide the design process. As these imagined purposes gain momentum, they translate into both desires and fears for the features of the database. For instance, when efficiency is imagined as a purpose, it creates a desire for features such as drop-down menus that are easy to enter information into. The inclusion of such features, however, creates a fear of oversimplification. Through a negotiated process, features such as text boxes are added to address the fears. Yet, every design change negotiated within the database system creates ripple effects with regard to other purposes, generating the need for still further changes. The process of database design becomes highly dynamic and the final database system is a negotiated compromise between multiple trade-offs over time. By juxtaposing these fears and desires, and through the use of causal-flow models, I articulate the process by which databases depart from their original goals.
by Tanu Agrawal.
Ph.D.
50

Xu, Zichen. "Power-Performance Tradeoffs in Database Systems." Scholar Commons, 2009. https://scholarcommons.usf.edu/etd/95.

Abstract:
With the total energy consumption of computing systems increasing at a steep rate, much attention has been paid to the design of energy-efficient computing systems and applications. So far, database system design has focused on improving the performance of query processing. The objective of this study is to explore the potential for energy conservation in relational database management systems. The hypothesis is: by modifying the query optimizer in a database management system (DBMS) to take the energy cost of query plans into consideration, we will be able to reduce the energy usage of database servers and control the tradeoffs between energy consumption and system performance. In this thesis, we provide an in-depth anatomy of typical queries in various benchmarks and qualitatively analyze their energy profiles. The results of extensive experiments show that power savings in the range of 11% to 22% can be achieved by equipping the DBMS with a simple query optimizer that selects query plans based on both estimated processing time and energy requirements. We advocate that more research effort be invested in the design and evaluation of power-aware DBMSs in the hope of reaching higher levels of energy efficiency.
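The core optimizer change the abstract describes amounts to a plan-cost function that blends time and energy estimates. A minimal sketch with invented weights and estimates, not the thesis's actual model:

```python
def plan_cost(est_time, est_energy, alpha=0.5):
    # alpha = 1.0 reproduces a pure performance-based optimizer;
    # alpha = 0.0 optimizes for energy alone. Both estimates are
    # assumed pre-normalized to [0, 1] before mixing.
    return alpha * est_time + (1 - alpha) * est_energy

# Two candidate plans for the same query (normalized estimates).
plans = {"hash_join": (0.40, 0.70), "index_nestloop": (0.55, 0.30)}
best = min(plans, key=lambda p: plan_cost(*plans[p], alpha=0.5))
print(best)   # index_nestloop wins once energy is weighted in
```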