To see the other types of publications on this topic, follow the link: Database Management System (DBMS).

Dissertations / Theses on the topic 'Database Management System (DBMS)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Database Management System (DBMS).'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Wijesekera, Primal. "Scalable Database Management System (DBMS) architecture with Innesto." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/43363.

Full text
Abstract:
Database management systems (DBMS) have been at the core of information systems for decades, and their importance continues to grow with the current surge in user demand and the rising need to handle big data. With the recent emergence of new styles of deployment in the cloud, decades-old DBMS architectures have been seriously challenged by their inability to scale beyond a single computing node and to handle big data. This new requirement has spawned new directions in scaling data storage architectures. Most of the work that has surfaced lacks applicability across domains because it targets only a specific domain. We present a novel scalable architecture implemented using a distributed spatial partitioning tree (SPT). The new architecture replaces only the storage layer of a conventional DBMS, leaving its applicability across domains intact while providing strict consistency and isolation. Indexing and locking are two important components of a relational DBMS that become potential bottlenecks when scaling. Our SPT-based approach provides a novel, scalable alternative for these components. Our evaluations using the TPC-C workload show that they are capable of scaling beyond a single computing node and support more concurrent users than a single-node conventional system. We believe our contributions are an important first step towards the goal of a scalable, cloud-aware, and full-featured DBMS as a service.
APA, Harvard, Vancouver, ISO, and other styles
2

Fredstam, Marcus, and Gabriel Johansson. "Comparing database management systems with SQLAlchemy : A quantitative study on database management systems." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-155648.

Full text
Abstract:
Which database management system to use for a project is difficult to know in advance. Luckily, there are tools that help the developer apply the same database design to multiple database management systems without having to change the code. In this thesis, we investigate the strengths of SQLAlchemy, an SQL toolkit for Python. We compared SQLite, PostgreSQL and MySQL using SQLAlchemy, and also compared a pure MySQL implementation against the results from SQLAlchemy. We conclude that, for our database design, PostgreSQL was the best database management system and that, for the average SQL user, SQLAlchemy is an excellent substitute for writing plain SQL.
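For readers unfamiliar with the toolkit, the following minimal sketch (the table, column names and connection URLs are illustrative, not taken from the thesis) shows how SQLAlchemy lets the same declarative model run against SQLite, PostgreSQL or MySQL by changing only the engine URL:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

# The same model runs on any supported backend; only the URL changes,
# e.g. "postgresql+psycopg2://..." or "mysql+pymysql://...".
engine = create_engine("sqlite:///demo.db")
Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)
with Session() as session:
    session.add(User(name="Alice"))
    session.commit()
    print(session.query(User).filter_by(name="Alice").count())
```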
APA, Harvard, Vancouver, ISO, and other styles
3

Lacordais, Sophie. "Analisi e confronto di Database Management System per la gestione di serie temporali IoT." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
The thesis presents a qualitative and quantitative comparison of database management systems for handling IoT time series. The comparison is carried out between an RDBMS and a TS-DBMS: specifically, PostgreSQL and InfluxDB are studied, respectively. First, the context in which time-series data are most needed, and their use, is introduced. After presenting the environment in which these data are found, the basic architecture of the systems and their configuration are defined. Next, the techniques and queries that define the tests carried out on both databases are illustrated. Together with each implemented test, the corresponding results, obtained by computing the mean execution time, are shown. Once the test results are collected, they are compared, with a brief commentary on each result with reference to the tables and graphs obtained. In conclusion, considerations on the performance of the databases are provided and possible future developments are identified.
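As a rough illustration of how such a comparison can be scripted (the connection details, table/measurement names and queries below are placeholders, not the thesis setup, and the InfluxDB 1.x Python client is assumed), one can time the same aggregation on both systems and average the wall-clock time over several runs:

```python
import time
import psycopg2
from influxdb import InfluxDBClient  # influxdb-python (InfluxDB 1.x) client

def mean_runtime(run_query, repetitions=10):
    """Average wall-clock time of a query over several executions."""
    times = []
    for _ in range(repetitions):
        start = time.perf_counter()
        run_query()
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

pg = psycopg2.connect("dbname=iot user=postgres")
def pg_query():
    with pg.cursor() as cur:
        cur.execute("SELECT date_trunc('hour', ts), avg(value) "
                    "FROM readings GROUP BY 1")
        cur.fetchall()

influx = InfluxDBClient(host="localhost", port=8086, database="iot")
def influx_query():
    influx.query("SELECT mean(value) FROM readings GROUP BY time(1h)")

print("PostgreSQL:", mean_runtime(pg_query), "s")
print("InfluxDB:  ", mean_runtime(influx_query), "s")
```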
APA, Harvard, Vancouver, ISO, and other styles
4

Jäkel, Tobias. "Role-based Data Management." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-224416.

Full text
Abstract:
Database systems build an integral component of today’s software systems and as such they are the central point for storing and sharing a software system’s data while ensuring global data consistency at the same time. Introducing the primitives of roles and their accompanied metatype distinction in modeling and programming languages, results in a novel paradigm of designing, extending, and programming modern software systems. In detail, roles as modeling concept enable a separation of concerns within an entity. Along with its rigid core, an entity may acquire various roles in different contexts during its lifetime and thus, adapts its behavior and structure dynamically during runtime. Unfortunately, database systems, as important component and global consistency provider of such systems, do not keep pace with this trend. The absence of a metatype distinction, in terms of an entity’s separation of concerns, in the database system results in various problems for the software system in general, for the application developers, and finally for the database system itself. In case of relational database systems, these problems are concentrated under the term role-relational impedance mismatch. In particular, the whole software system is designed by using different semantics on various layers. In case of role-based software systems in combination with relational database systems this gap in semantics between applications and the database system increases dramatically. Consequently, the database system cannot directly represent the richer semantics of roles as well as the accompanied consistency constraints. These constraints have to be ensured by the applications and the database system loses its single point of truth characteristic in the software system. As the applications are in charge of guaranteeing global consistency, their development requires more effort in data management. Moreover, the software system’s data management is distributed over several layers, which results in an unstructured software system architecture. To overcome the role-relational impedance mismatch and bring the database system back in its rightful position as single point of truth in a software system, this thesis introduces the novel and tripartite RSQL approach. It combines a novel database model that represents the metatype distinction as first class citizen in a database system, an adapted query language on the database model’s basis, and finally a proper result representation. Precisely, RSQL’s logical database model introduces Dynamic Data Types, to directly represent the separation of concerns within an entity type on the schema level. On the instance level, the database model defines the notion of a Dynamic Tuple that combines an entity with the notion of roles and thus, allows for dynamic structure adaptations during runtime without changing an entity’s overall type. These definitions build the main data structures on which the database system operates. Moreover, formal operators connecting the query language statements with the database model data structures, complete the database model. The query language, as external database system interface, features an individual data definition, data manipulation, and data query language. Their statements directly represent the metatype distinction to address Dynamic Data Types and Dynamic Tuples, respectively. As a consequence of the novel data structures, the query processing of Dynamic Tuples is completely redesigned. 
As the last piece of a complete database integration of the role-based notion and its accompanying metatype distinction, we specify the RSQL Result Net as the result representation. It provides a novel result structure and offers functionality to navigate through query results. Finally, we evaluate all three RSQL components against a relational database system. This assessment clearly demonstrates the benefits of fully integrating the roles concept into the database.
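The role concept itself is independent of RSQL's syntax; a minimal Python sketch (our own illustration, with invented class and attribute names, not the thesis's implementation) shows an entity with a rigid core acquiring and dropping roles at runtime without changing its type:

```python
class Entity:
    """Rigid core: identity and core attributes never change type."""
    def __init__(self, name):
        self.name = name
        self._roles = {}                 # context -> role instance

    def acquire(self, context, role):
        self._roles[context] = role

    def drop(self, context):
        self._roles.pop(context, None)

    def as_(self, context):
        return self._roles.get(context)

class StudentRole:
    def __init__(self, matriculation_no):
        self.matriculation_no = matriculation_no

class EmployeeRole:
    def __init__(self, salary):
        self.salary = salary

p = Entity("Tobias")
p.acquire("university", StudentRole("123"))
p.acquire("company", EmployeeRole(4000))
p.drop("university")                     # structure adapts at runtime
print(p.name, p.as_("company").salary)
```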
APA, Harvard, Vancouver, ISO, and other styles
5

Behzadnia, Peyman. "Dynamic Energy-Aware Database Storage and Operations." Scholar Commons, 2018. http://scholarcommons.usf.edu/etd/7125.

Full text
Abstract:
Energy consumption has become a first-class optimization goal in the design and implementation of data-intensive computing systems. This is particularly true for the design of database management systems (DBMS), which are among the most important servers in the software stack of modern data centers. The data storage system is one of the essential components of a database and has been the target of many research efforts aimed at reducing its energy consumption. In previous work, dynamic power management (DPM) techniques that make real-time decisions to transition the disks to low-power modes are normally used to save energy in storage systems. In this research, we tackle the limitations of earlier DPM proposals and design a dynamic, energy-aware disk storage system for database servers. We introduce a DPM optimization model integrated with a model predictive control (MPC) strategy to minimize the power consumption of the disk-based storage system while satisfying given performance requirements. It dynamically determines the state of the disks and plans inter-disk data fragment migration to achieve a desirable balance between power consumption and query response time. Furthermore, by analyzing our optimization model to identify structural properties of optimal solutions, we propose a fast heuristic DPM algorithm that can be integrated into large-scale disk storage systems, where finding the optimal solution may take too long, to achieve near-optimal power savings within a short computation time. The proposed ideas are evaluated through simulations using an extensive set of synthetic workloads. The results show that our solution achieves up to 1.65 times more energy saving while providing up to 1.67 times shorter response time compared to the best existing algorithm in the literature. Stream join is a dynamic and expensive database operation that performs the join in real time on continuous data streams. Stream joins, also known as window joins, impose high computational cost and potentially higher energy consumption than other database operations, and thus we also tackle the energy efficiency of stream join processing in this research. Given the strong linear correlation between energy efficiency and performance of in-memory parallel join algorithms in database servers, we study the parallelization of stream join algorithms on multicore processors to achieve energy efficiency and high performance. Equi-join is the most frequent type of join in query workloads, and the symmetric hash join (SHJ) algorithm is the most effective algorithm for evaluating equi-joins over data streams. To the best of our knowledge, we are the first to propose a shared-memory parallel symmetric hash join algorithm on multicore CPUs. Furthermore, we introduce a novel parallel hash-based stream join algorithm, called chunk-based pairing hash join, that aims at raising data throughput and scalability. We also tackle parallel processing of multi-way stream joins, where more than two input data streams are involved in the join operation. To the best of our knowledge, we are also the first to propose an in-memory parallel multi-way hash-based stream join on multicore processors. Experimental evaluation of our proposed parallel algorithms demonstrates high throughput, significant scalability, and low latency while reducing energy consumption.
Our parallel symmetric hash join and chunk-based pairing hash join achieve up to 11 times and 12.5 times more throughput, respectively, than the state-of-the-art parallel stream join algorithm. These two algorithms also provide up to around 22 times and 24.5 times more throughput, respectively, compared to non-parallel (sequential) stream join computation with a single processing thread.
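To make the algorithmic idea concrete, here is a compact single-threaded sketch of a symmetric hash join over two tuple streams (our own simplification; the thesis parallelises this across cores and adds windowing):

```python
import itertools
from collections import defaultdict

def interleave(r, s):
    """Round-robin merge of two finite streams, tagging each tuple's origin."""
    for a, b in itertools.zip_longest(r, s):
        if a is not None:
            yield "R", a
        if b is not None:
            yield "S", b

def symmetric_hash_join(stream_r, stream_s, key_r, key_s):
    """Probe-then-insert on both sides: results are emitted as tuples arrive."""
    hash_r, hash_s = defaultdict(list), defaultdict(list)
    for side, tup in interleave(stream_r, stream_s):
        if side == "R":
            k = key_r(tup)
            for match in hash_s[k]:      # probe the opposite hash table first
                yield tup, match
            hash_r[k].append(tup)        # then insert into this side's table
        else:
            k = key_s(tup)
            for match in hash_r[k]:
                yield match, tup
            hash_s[k].append(tup)

r = [(1, "a"), (2, "b")]
s = [(2, "x"), (1, "y"), (3, "z")]
print(list(symmetric_hash_join(r, s, key_r=lambda t: t[0], key_s=lambda t: t[0])))
```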
APA, Harvard, Vancouver, ISO, and other styles
6

Lehner, Wolfgang. "Energy-Efficient In-Memory Database Computing." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-115547.

Full text
Abstract:
The efficient and flexible management of large datasets is one of the core requirements of modern business applications. Having access to consistent and up-to-date information is the foundation for operational, tactical, and strategic decision making. Within the last few years, the database community sparked a large number of extremely innovative research projects to push the envelope in the context of modern database system architectures. In this paper, we outline requirements and influencing factors to identify some of the hot research topics in database management systems. We argue that—even after 30 years of active database research—the time is right to rethink some of the core architectural principles and come up with novel approaches to meet the requirements of the next decades in data management. The sheer number of diverse and novel (e.g., scientific) application areas, the existence of modern hardware capabilities, and the need of large data centers to become more energy-efficient will be the drivers for database research in the years to come.
APA, Harvard, Vancouver, ISO, and other styles
7

Kanchev, Kancho. "Employee Management System." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-1048.

Full text
Abstract:

This report presents the development of an information system for managing staff data within a small company or organization. The system, as developed, is called Employee Management System. It consists of a functionally related GUI (application program) and a database.

The choice of the programming tools is individual and particular.

APA, Harvard, Vancouver, ISO, and other styles
8

Kernert, David. "Density-Aware Linear Algebra in a Column-Oriented In-Memory Database System." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-210043.

Full text
Abstract:
Linear algebra operations appear in nearly every application in advanced analytics, machine learning, and various science domains. To this day, many data analysts and scientists tend to use statistics software packages or hand-crafted solutions for their analyses. In the era of the data deluge, however, external statistics packages and custom analysis programs that often run on single workstations are incapable of keeping up with the vast increase in data volume and size. In particular, there is an increasing demand from scientists for large-scale data manipulation, orchestration, and advanced data management capabilities. These are among the key features of a mature relational database management system (DBMS). With the rise of main-memory database systems, it has now become feasible to also consider applications that build on linear algebra. This thesis presents a deep integration of linear algebra functionality into an in-memory, column-oriented database system. In particular, this work shows that it has become feasible to execute linear algebra queries on large data sets directly in a DBMS-integrated engine (LAPEG), without the need to transfer data and without being restricted by hard disk latencies. From the various application examples cited in this work, we deduce a number of requirements that are relevant for a database system with linear algebra functionality. Besides the deep integration of matrices and numerical algorithms, these include optimization of expressions, transparent matrix handling, scalability and data parallelism, and data manipulation capabilities. These requirements are addressed by our linear algebra engine. In particular, the core contributions of this thesis are: firstly, we show that the columnar storage layer of an in-memory DBMS allows an easy adoption of efficient sparse matrix data types and algorithms. Furthermore, we show that the execution of linear algebra expressions benefits significantly from techniques inspired by database technology. In a novel way, we implemented several of these optimization strategies in LAPEG's optimizer (SpMachO), which uses an advanced density estimation method (SpProdest) to predict the matrix density of intermediate results. Moreover, we present an adaptive matrix data type, AT Matrix, to obviate the need for scientists to select appropriate matrix representations. The tiled substructure of AT Matrix is exploited by our matrix multiplication to saturate the different sockets of a multicore main-memory platform, reaching a speed-up of up to 6x compared to alternative approaches. Finally, a major part of this thesis is devoted to the topic of data manipulation, where we propose a matrix manipulation API and present different mutable matrix types to enable fast inserts and deletes. We conclude that our linear algebra engine is well suited to processing dynamic, large matrix workloads in an optimized way. In particular, the DBMS-integrated LAPEG fills the linear algebra gap and makes columnar in-memory DBMSs attractive as an efficient, scalable ad-hoc analysis platform for scientists.
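The density concern is easy to reproduce with standard tools; the sketch below (using SciPy purely to illustrate the problem that SpProdest addresses, not its actual estimation method; sizes and densities are arbitrary) compares the density of two sparse operands with the density of their product:

```python
import scipy.sparse as sp

input_density = 0.01
A = sp.random(2000, 2000, density=input_density, format="csr")
B = sp.random(2000, 2000, density=input_density, format="csr")

def density(m):
    """Fraction of non-zero entries in a sparse matrix."""
    return m.nnz / (m.shape[0] * m.shape[1])

C = A @ B   # intermediate results can be far denser than the inputs
print(f"density(A)   = {density(A):.4f}")
print(f"density(B)   = {density(B):.4f}")
print(f"density(A*B) = {density(C):.4f}")
```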
APA, Harvard, Vancouver, ISO, and other styles
9

Kanani, Saleh. "A method to evaluate database management systems for Big Data : focus on spatial data." Thesis, Luleå tekniska universitet, Datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74172.

Full text
Abstract:
Spatial big data is growing exponentially, at the highest rate of any data type, due to the extensive growth in the use of sensors, IoT, and spatial data generated by mobile devices. Maintaining, processing, and using such data efficiently, effectively, and with high performance has therefore become one of the top priorities for database management system providers; spatial database features and data types have become serious criteria in evaluating database management systems intended to serve as the back end for spatial applications and services. With the exponential growth of data and the introduction of new data types, "Big Data" has become a strongly focused area that has gained the attention of different sectors, from academia, industry, and government to other organizations and studies. The rising trend toward high-resolution and large-scale geographical information systems has resulted in more companies providing location-based applications and services; finding a proper database management system that supports spatial big data features, with multi-model big data support, and that is reliable and affordable has therefore become a business need for many companies. Choosing the proper solution for any software project can be crucial because of the total cost and the desired functionality that any product may bring to the solution. Migration is also a very complicated and costly procedure that many companies should avoid, which underlines how critical it is to choose the right solution based on the specific needs of an organization. Companies providing spatial applications and services are growing, with the common concern of delivering successful solutions and robust services. One of the most significant elements that ensures a service's, and hence the provider's, reputation and positive image is the service's high availability. Possible future work for the thesis could be to develop the framework into a decision support solution for IT businesses with an emphasis on spatial features. Another possibility for future work would be to evaluate the framework by testing it on many other DBMSs.
APA, Harvard, Vancouver, ISO, and other styles
10

Battaglia, Bruno. "Studio e valutazione di database management system per la gestione di serie temporali." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/17270/.

Full text
Abstract:
The thesis focuses on time series and their management. After explaining what a time series is and giving some use cases, the dissertation goes on to list the families of DBMSs and the criteria by which to evaluate them. Next, the model implemented by each DBMS is described and, after a brief outline of it, the techniques used for managing and analysing time series are covered. After that, the techniques for modelling a database capable of handling historical series are examined, and all the DBMSs under consideration are analysed against the aforementioned criteria. A comparison, also in tabular form, is accompanied by a description intended to guide the reader to a quick understanding of the differences, strengths and weaknesses of each TSDB. Finally, the conclusions that seemed most appropriate after the work carried out are drawn, key points on which to focus future work are identified, and other lines of work are proposed that could not be pursued for lack of additional time and of software complete with all of its functionality.
APA, Harvard, Vancouver, ISO, and other styles
11

Tolvaišis, Andrius. "DBVS praplėtimo nauju funkcionalumu galimybių tyrimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20100826_110401-00307.

Full text
Abstract:
Duomenų bazių valdymo sistema (DBVS) yra pagrindas beveik visų šiuolaikinių informacinių sistemų (IS). Iš esmės kiekvienas verslo, mokslo arba valdžios valdymo procesas remiasi duomenų baze. Interneto plėtra tik paspartino šią tendenciją – šiandien duomenų bazių operacijos yra kiekvieno duomenų pakeitimo didesniuose tinklalapiuose, paieškos arba apsipirkimo internete variklis [1]. Šiuo metu rinkoje yra didelis komercinių ir nemokamų (taip pat ir atviro kodo) duomenų bazių valdymo sistemų (DBVS) pasirinkimas, pavyzdžiui: Oracle, Microsoft SQL Server, IBM DB2, Microsoft Access, MySQL, PostgreSQL. Kiekviena jų turi savo privalumų ir trukumų. Tačiau informacinių sistemų projektavimo eiga, naudojant šias DBVS ir neatsižvelgiant į jų ypatumus, yra panaši: suprojektuojama duomenų bazė (sukuriamos lentelės, nustatomi jų tarpusavio ryšiai), rašomos užklausos, kuriamos (arba generuojamos) duomenų įvedimo/redagavimo formos bei kuriamos duomenų išrinkimo ataskaitos. Ši informacinių sistemų kūrimo tvarka yra nusistovėjusi per daugelį metų. Tačiau DB projektavimo procesas taptų lengvesnis, pakeitus IS projektavimo procesą taip, kad realizacijos metu iš pradžių būtų kuriamos formos, o tik po to iš sukurtų formų būtų generuojama duomenų bazė. Toks IS kūrimo procesas leistų iš dalies automatizuotų DB projektavimą. Be to, galutinai suderinus prototipus su užsakovu, užtektų tik sugeneruoti DB, t.y. nereikėtų iš naujo kurti formų, o sistema sugeneruotų DB bei automatiškai susietų formų laukus... [toliau žr. visą tekstą]
The database management system (DBMS) is the foundation of almost every modern business information system. Virtually every administrative process in business, science or government relies on a database. There are many DBMS products nowadays, such as Oracle, Microsoft SQL Server, IBM DB2, Microsoft Access, MySQL and PostgreSQL. Each of them has its advantages and disadvantages. But the database design process using these DBMSs is the same: at the first stage we need to create the database (tables and the relationships between them), then we need to create (or generate with a wizard) forms for data input/modification and reports for data selection. However, the database design process would become easier if it were changed so that the forms are created first and the database is then generated from the form data, with the forms automatically associated with the database tables. The task of this research is to extend a chosen free, open-source DBMS with new functions that enable forms and the database to be developed using new methods: automated database generation from forms and automatic association of forms with database tables. The OpenOffice.org Base DBMS and the Java programming language have been chosen for the implementation of this task. This thesis consists of analysis, design, user manual, experimental and conclusion parts.
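A toy sketch of the forms-first idea (the form description, field names and types are invented for illustration; the actual work extends OpenOffice.org Base in Java) is to treat a form definition as the source from which the CREATE TABLE statement is generated:

```python
# Hypothetical form description: one form per table, one field per column.
form = {
    "name": "customer",
    "fields": [
        {"label": "id",    "type": "integer", "primary_key": True},
        {"label": "name",  "type": "varchar(100)"},
        {"label": "email", "type": "varchar(100)"},
    ],
}

def ddl_from_form(form):
    """Generate a CREATE TABLE statement from a form definition."""
    cols = []
    for f in form["fields"]:
        col = f'{f["label"]} {f["type"].upper()}'
        if f.get("primary_key"):
            col += " PRIMARY KEY"
        cols.append(col)
    return f'CREATE TABLE {form["name"]} (\n  ' + ",\n  ".join(cols) + "\n);"

print(ddl_from_form(form))
```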
APA, Harvard, Vancouver, ISO, and other styles
12

Hall, Andrew Brian. "DJ: Bridging Java and Deductive Databases." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/33383.

Full text
Abstract:

Modern society is intrinsically dependent on the ability to manage data effectively. While relational databases have been the industry standard for the past quarter century, recent growth in data volumes and complexity requires novel data management solutions. These trends revitalized the interest in deductive databases and highlighted the need for column-oriented data storage. However, programming technologies for enterprise computing were designed for the relational data management model (i.e., row-oriented data storage). Therefore, developers cannot easily incorporate emerging data management solutions into enterprise systems.

To address the problem above, this thesis presents Deductive Java (DJ), a system that enables enterprise programmers to use a column oriented deductive database in their Java applications. DJ does so without requiring that the programmer become proficient in deductive databases and their non-standardized, vendor-specific APIs. The design of DJ incorporates three novel features: (1) tailoring orthogonal persistence technology to the needs of a deductive database with column-oriented storage; (2) using Java interfaces as a primary mapping construct, thereby simplifying method call interception; (3) providing facilities to deploy light-weight business rules.
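Feature (2) can be illustrated in a language-neutral way: the hypothetical Python proxy below intercepts attribute access and answers it from a backing fact store, which is roughly the role that interface-based method call interception plays on the Java side of DJ (the class, store layout and data here are invented for illustration, not DJ's API):

```python
class QueryProxy:
    """Intercepts attribute access and answers it from a backing fact store."""
    def __init__(self, entity_id, store):
        self._entity_id = entity_id
        self._store = store              # e.g. {"salary": {("alice",): 4000}}

    def __getattr__(self, attribute):
        facts = self._store.get(attribute, {})
        try:
            return facts[(self._entity_id,)]
        except KeyError:
            raise AttributeError(attribute)

store = {"salary": {("alice",): 4000}, "dept": {("alice",): "R&D"}}
alice = QueryProxy("alice", store)
print(alice.salary, alice.dept)          # looks like field access, resolved as a query
```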

DJ was developed in partnership with LogicBlox Inc., an Atlanta based technology startup.


Master of Science
APA, Harvard, Vancouver, ISO, and other styles
13

Paičienė, Kristina. "Optikos įmonės kompiuterizuotos IS sukūrimas ir tyrimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2004. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2004~D_20040920_123554-74048.

Full text
Abstract:
Many small enterprises in Lithuania do not use information systems for their accounting. This is because almost all of the accounting software already developed is quite complex, expensive, and has many additional features that are not useful for a small enterprise. It was therefore decided to develop dedicated software for goods accounting. The user interface and data structure should be adapted to the specific functions of a small optical enterprise. The purposes of the developed information system are to increase work and accounting quality, to decrease the time needed for accounting, to avoid saving redundant information, to automate and simplify the creation of analytical reports, to avoid mistakes in accounting, and to make accounting more efficient. In the process of developing this information system, functional and non-functional, managerial and common requirement issues were analyzed. Models of data flow, data structure, and applications were used in the requirements specification. The architecture of the components and the software structure are also provided in this project. The project was implemented using Microsoft Access 2000. A database and a graphical user interface were created, and the integrated Microsoft Visual Basic for Applications was used to perform programming tasks. The capabilities of this software are fully sufficient for these tasks. The selected design techniques and tools had proved themselves in solving software for small... [to full text]
APA, Harvard, Vancouver, ISO, and other styles
14

Atila, Yavuz Vural. "Design and implementation of a multimedia DBMS sound management integration /." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA245774.

Full text
Abstract:
Thesis (M.S. in Engineering Science)--Naval Postgraduate School, December 1990.
Thesis Advisor(s): Lum, Vincent Y. Second Reader: Hsiao, David. "December 1990." Description based on title screen as viewed on ...
DTIC Descriptor(s): Data bases, data management, sound, systems engineering, interfaces, alphanumeric data, integration, user needs, media, microcomputers, records, storage, sun, computers, local area networks, management, environments, theses.
DTIC Identifier(s): Data bases, systems engineering, sound, data management, MDBMS (multimedia database management system), theses, man computer interface, catalogs, data storage systems.
Author(s) subject terms: Multimedia Database Management System, Multimedia, DBMS, MDBMS, Sound Media Management, Sound Database.
Includes bibliographical references (p. 62-64). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
15

Thomas, Jeffrey Alan. "P2 : a lightweight DBMS generator /." Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Carmo, Samuel Sullivan. "Sistema de gerenciamento da informação: alterações neurológicas em chagásicos crônicos não-cardíacos." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/17/17140/tde-26052010-105637/.

Full text
Abstract:
O presente trabalho ocupa-se no desenvolvimento de um sistema computacional de gerenciamento da informação para auxiliar os estudos científicos sobre o sistema nervoso de chagásicos crônicos não-cardíacos. O objetivo é desenvolver o sistema requerido, pelo pressuposto de praticidade nas análises decorrentes da investigação. O método utilizado para desenvolver este sistema computacional, dedicado ao gerenciamento das informações da pesquisa sobre as alterações neurológicas de seus sujeitos, foi; compor o arquétipo de metas e a matriz de levantamento de requisitos das variantes do sistema; listar os atributos, domínios e qualificações das suas variáveis; elaborar o quadro de escolha de equipamentos e aplicativos necessários para sua implantação física e lógica e; implantá-lo mediante uma modelagem de base de dados, e uma programação lógica de algoritmos. Como resultado o sistema foi desenvolvido. A discussão de análise é; a saber, que a informatização pode tornar mais eficaz as operações de cadastro, consulta e validação de campo, além da formatação e exportação de tabelas pré-tratadas para análises estatísticas, atuando assim como uma ferramenta do método científico. Ora, a argumentação lógica é que a confiabilidade das informações computacionalmente registradas é aumentada porque o erro humano é diminuído na maioria dos processamentos. Como discussão de cerramento, estudos dotados de razoável volume de variáveis e sujeitos de pesquisa são mais bem geridos caso possuam um sistema dedicado ao gerenciamento de suas informações.
This work presents the development of a computer-based information management system to support scientific studies of the nervous system of non-cardiac chronic Chagas disease patients. The goal is to develop the required system on the assumption that it will make the analysis of research results more practical. The methods used to develop this computer system, dedicated to managing the information of research on the neurological disorders of its human research subjects, were: composing the archetype of targets and the requirements elicitation matrix for the system variants; listing the attributes, qualifications and domains of its variables; drawing up the framework for choosing the equipment and applications required for its physical and logical implementation; and deploying it through data modelling, an adapted entity-relationship diagram and programmed logic algorithms. As a result, the required system was developed. The analytical discussion is that computerization makes data processing faster and safer. The most practical information management processes are the operations of registration, queries and field validation, as well as basic and advanced queries of records, in addition to formatting and exporting tables pre-treated for statistical analysis. The logical argument is that the reliability of computationally recorded information is increased because the bias of human error is absent from most of the steps, including several data processing operations. In closing, scientific studies with a reasonable number of variables and research subjects are better managed if they have a dedicated system for managing their information.
APA, Harvard, Vancouver, ISO, and other styles
17

Nayeem, Fatima, and M. Vijayakamal. "Policies Based Intrusion Response System for DBMS." IJCSN, 2012. http://hdl.handle.net/10150/271494.

Full text
Abstract:
Relational databases are built on the relational model proposed by Dr. E. F. Codd. The relational model has become a consistent and widely used data model for DBMSs worldwide. Databases built on this model are efficient at storing and retrieving data, besides providing authentication through credentials. However, there may be many other attacks apart from stealing credentials and intruding into the database. Adversaries may always try to intrude into a relational database for monetary or other gains [1]. Relational databases are subject to malicious attacks because they hold valuable business data that is sensitive in nature. Monitoring such databases continuously is an inevitable task, given the importance of the database. It is among the top five database strategies identified by Gartner research for eliminating data leaks in organizations [2]. There are regulations from governments, such as the US, with respect to managing data securely; regulations such as HIPAA, GLBA, and PCI are mentioned as examples.
Intrusion detection systems play an important role in detecting online intrusions and providing the necessary alerts. Intrusion detection can also be performed for relational databases. An intrusion response system for a relational database is essential to protect it from external and internal attacks. We propose a new intrusion response system for relational databases based on database response policies. We have developed an interactive language that helps database administrators determine the responses to be provided by the response system to the malicious requests encountered by the relational database. We also maintain a policy database that holds the policies of the response system. Algorithms for searching for the suitable policies are designed and implemented. Matching the right policies and policy administration are the two problems addressed in this paper, to ensure faster action and prevent any malicious changes being made to policy objects. Cryptography is also used in the process of protecting the relational database from attacks. The experimental results reveal that the proposed response system is effective and useful.
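The following toy matcher (our own sketch; the policy fields, wildcard convention and response actions are invented, not the paper's policy language) conveys the idea of selecting a response policy for an anomalous database request:

```python
# Hypothetical policy store: each policy describes the requests it covers
# and the response the system should take when such a request is flagged.
policies = [
    {"role": "clerk", "table": "salaries", "action_type": "UPDATE",
     "response": "suspend session and alert DBA"},
    {"role": "*", "table": "audit_log", "action_type": "DELETE",
     "response": "block request"},
]

def matches(policy, request):
    """A policy matches when every field is a wildcard or equals the request's value."""
    return all(policy[k] in ("*", request[k]) for k in ("role", "table", "action_type"))

def respond(request):
    for policy in policies:              # first matching policy wins in this sketch
        if matches(policy, request):
            return policy["response"]
    return "log only"

anomalous = {"role": "clerk", "table": "salaries", "action_type": "UPDATE"}
print(respond(anomalous))
```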
APA, Harvard, Vancouver, ISO, and other styles
18

Pua, Chai Seng. "Process algebra approach to parallel DBMS performance modelling." Thesis, Heriot-Watt University, 1999. http://hdl.handle.net/10399/1262.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Soo, Michael Dennis 1962. "Constructing a temporal database management system." Diss., The University of Arizona, 1996. http://hdl.handle.net/10150/290685.

Full text
Abstract:
Temporal database management systems provide integrated support for the storage and retrieval of time-varying information. Despite the extensive research which has been performed in this area over the last fifteen years, no commercial products exist and few viable prototypes have been constructed. It is our thesis that through the use of the proper abstractions, it is possible to construct a temporal database management system with robust semantics, without sacrificing performance, and with minimal implementation cost. Our approach parallels the development of relational database management systems, beginning with a theoretically sound abstract model, and then developing the underlying techniques to efficiently implement it. The major theme underlying this research is practicality, in terms of both semantics and implementation. We will show that expressive temporal semantics can be supported while still maintaining reasonable performance, and with relatively small implementation effort. This is made possible, in part, by minimally extending the relational model to support time, thereby allowing the reuse or easy adaptation of well-established relational technology. In particular, we investigate how relational database design, algebras, architectures, and query evaluation can be adapted or extended to the temporal context. Our aim is that software vendors could incorporate these results into existing non-temporal, commercial products with relatively small effort.
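A minimal illustration of the "minimally extended relational model" idea (the schema, dates and helper below are ours, not the dissertation's design) is to attach a valid-time interval to each tuple and define a timeslice operation over it:

```python
from datetime import date

# Each tuple carries a closed-open valid-time interval [start, end).
employees = [
    ("Ann", "Sales",     date(1990, 1, 1), date(1993, 6, 1)),
    ("Ann", "Marketing", date(1993, 6, 1), date(9999, 12, 31)),
    ("Bob", "Sales",     date(1992, 3, 1), date(1995, 1, 1)),
]

def timeslice(relation, instant):
    """Return the conventional (snapshot) relation valid at a given instant."""
    return [(name, dept) for name, dept, start, end in relation
            if start <= instant < end]

print(timeslice(employees, date(1994, 1, 1)))
# [('Ann', 'Marketing'), ('Bob', 'Sales')]
```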
APA, Harvard, Vancouver, ISO, and other styles
20

Papastathi, Maria. "Database management system using IDEF methodologies." Thesis, University of Salford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261919.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Zou, Hanzheng. "Build an Inventory Tracking System." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-1580.

Full text
Abstract:

This thesis paper introduces the process of building an inventory tracking system at a local Swedish company. The related project supports the thesis paper and is also for the company's own use. The software product of this project is an application for managing various types of instruments at the company SWECO-Vaxjo. It will play an important role in the company's future management work.

In this thesis paper, the candidate techniques and theories for implementing this system are discussed, and in the end a good solution to the problem is presented.

APA, Harvard, Vancouver, ISO, and other styles
22

Jonsson, Josefine. "Change And Version Management Of Transport Network Data Between Different Database Models : A Case Study On The Swedish National Road Database." Thesis, KTH, Geoinformatik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254520.

Full text
Abstract:
The Swedish Road Administration wants to compile all the national road database data from the Swedish Mapping, Cadastral and Land Registration Authority using a Geographical Information System compiler in order to increase the efficiency of the data flow between their respective databases. The objective of this master's thesis has been to build a software solution that takes changed private road data from the Swedish Mapping, Cadastral and Land Registration Authority as input and processes it into the OpenTNF standard format. This would enable automatic processing of private road data into the national road database at the Swedish Road Administration. The work is divided into four parts: 1. researching standards for databases and version control; 2. planning the methodology using different resources; 3. developing a software solution; 4. analysis. The chosen software is FME by Safe Software. A number of shortcomings, such as the lack of information on the practical input for the future ANDA system, were discovered, and therefore some assumptions and simplifications had to be made. Using these assumptions and examples, a functioning solution was created according to the OpenTNF and INSPIRE standards. The examples fill that gap in knowledge and provide a greater understanding of the use of the INSPIRE and OpenTNF standards for transport networks. An analysis and a discussion of the existing solution, bottlenecks, faults in the existing database, and version management between the databases, related to the research found, are presented. Workflows for different examples of the software solution can be seen in the results. The national road database suffers from a low implementation rate, which creates issues for building new applications and for adapting to the ever-changing nature of planning. Creating software for automatic updates of network data is crucial for the Swedish Road Administration to implement technologies that depend on frequent updates, such as self-driving vehicles.
APA, Harvard, Vancouver, ISO, and other styles
23

Beyers, Hector Quintus. "Database forensics : Investigating compromised database management systems." Diss., University of Pretoria, 2013. http://hdl.handle.net/2263/41016.

Full text
Abstract:
The use of databases has become an integral part of modern human life. Often the data contained within databases has substantial value to enterprises and individuals. As databases become a greater part of people’s daily lives, it becomes increasingly interlinked with human behaviour. Negative aspects of this behaviour might include criminal activity, negligence and malicious intent. In these scenarios a forensic investigation is required to collect evidence to determine what happened on a crime scene and who is responsible for the crime. A large amount of the research that is available focuses on digital forensics, database security and databases in general but little research exists on database forensics as such. It is difficult for a forensic investigator to conduct an investigation on a DBMS due to limited information on the subject and an absence of a standard approach to follow during a forensic investigation. Investigators therefore have to reference disparate sources of information on the topic of database forensics in order to compile a self-invented approach to investigating a database. A subsequent effect of this lack of research is that compromised DBMSs (DBMSs that have been attacked and so behave abnormally) are not considered or understood in the database forensics field. The concept of compromised DBMSs was illustrated in an article by Olivier who suggested that the ANSI/SPARC model can be used to assist in a forensic investigation on a compromised DBMS. Based on the ANSI/SPARC model, the DBMS was divided into four layers known as the data model, data dictionary, application schema and application data. The extensional nature of the first three layers can influence the application data layer and ultimately manipulate the results produced on the application data layer. Thus, it becomes problematic to conduct a forensic investigation on a DBMS if the integrity of the extensional layers is in question and hence the results on the application data layer cannot be trusted. In order to recover the integrity of a layer of the DBMS a clean layer (newly installed layer) could be used but clean layers are not easy or always possible to configure on a DBMS depending on the forensic scenario. Therefore a combination of clean and existing layers can be used to do a forensic investigation on a DBMS. PROBLEM STATEMENT The problem to be addressed is how to construct the appropriate combination of clean and existing layers for a forensic investigation on a compromised DBMS, and ensure the integrity of the forensic results. APPROACH The study divides the relational DBMS into four abstract layers, illustrates how the layers can be prepared to be either in a found or clean forensic state, and experimentally combines the prepared layers of the DBMS according to the forensic scenario. The study commences with background on the subjects of databases, digital forensics and database forensics respectively to give the reader an overview of the literature that already exists in these relevant fields. The study then discusses the four abstract layers of the DBMS and explains how the layers could influence one another. The clean and found environments are introduced due to the fact that the DBMS is different to technologies where digital forensics has already been researched. The study then discusses each of the extensional abstract layers individually, and how and why an abstract layer can be converted to a clean or found state. 
A discussion of each extensional layer is required to understand how unique each layer of the DBMS is and how these layers could be combined in a way that enables a forensic investigator to conduct a forensic investigation on a compromised DBMS. It is illustrated that each layer is unique and could be corrupted in various ways. Therefore, each layer must be studied individually in a forensic context before all four layers are considered collectively. A forensic study is conducted on each abstract layer of the DBMS that has the potential to influence other layers to deliver incorrect results. Ultimately, the DBMS will be used as a forensic tool to extract evidence from its own encrypted data and data structures. Therefore, the last chapter shall illustrate how a forensic investigator can prepare a trustworthy forensic environment where a forensic investigation could be conducted on an entire PostgreSQL DBMS by constructing a combination of the appropriate forensic states of the abstract layers. RESULTS The result of this study yields an empirically demonstrated approach on how to deal with a compromised DBMS during a forensic investigation by making use of a combination of various states of abstract layers in the DBMS. Approaches are suggested on how to deal with a forensic query on the data model, data dictionary and application schema layer of the DBMS. A forensic process is suggested on how to prepare the DBMS to extract evidence from the DBMS. Another function of this study is that it advises forensic investigators to consider alternative possibilities on how the DBMS could be attacked. These alternatives might not have been considered during investigations on DBMSs to date. Our methods have been tested at hand of a practical example and have delivered promising results.
Dissertation (MEng)--University of Pretoria, 2013.
gm2014
Electrical, Electronic and Computer Engineering
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
24

Ma, Xuesong 1975. "Data mining using relational database management system." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98757.

Full text
Abstract:
With the wide availability of huge amounts of data and the imminent demands to transform the raw data into useful information and knowledge, data mining has become an important research field both in the database area and the machine learning areas. Data mining is defined as the process to solve problems by analyzing data already present in the database and discovering knowledge in the data. Database systems provide efficient data storage, fast access structures and a wide variety of indexing methods to speed up data retrieval. Machine learning provides theory support for most of the popular data mining algorithms. Weka-DB combines properties of these two areas to improve the scalability of Weka, which is an open source machine learning software package. Weka implements most of the machine learning algorithms using main memory based data structure, so it cannot handle large datasets that cannot fit into main memory. Weka-DB is implemented to store the data into and access the data from DB2, so it achieves better scalability than Weka. However, the speed of Weka-DB is much slower than Weka because secondary storage access is more expensive than main memory access. In this thesis we extend Weka-DB with a buffer management component to improve the performance of Weka-DB. Furthermore, we increase the scalability of Weka-DB even further by putting further data structures into the database, which uses a buffer to access the data in database. Furthermore, we explore another method to improve the speed of the algorithms, which takes advantage of the data access properties of machine learning algorithms.
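The buffer idea can be sketched in a few lines (a generic LRU cache in Python, shown only to illustrate the kind of component added between the algorithms and DB2, not Weka-DB's actual code):

```python
from collections import OrderedDict

class LRUBuffer:
    """Keeps the most recently used rows in memory; evicts the least recent one."""
    def __init__(self, capacity, fetch_from_db):
        self.capacity = capacity
        self.fetch_from_db = fetch_from_db      # e.g. a function issuing a SELECT
        self.cache = OrderedDict()

    def get(self, row_id):
        if row_id in self.cache:
            self.cache.move_to_end(row_id)      # hit: mark as most recently used
            return self.cache[row_id]
        row = self.fetch_from_db(row_id)        # miss: go to secondary storage
        self.cache[row_id] = row
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict least recently used row
        return row

buffer = LRUBuffer(capacity=1000, fetch_from_db=lambda rid: ("row", rid))
print(buffer.get(42))
```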
APA, Harvard, Vancouver, ISO, and other styles
25

Albazi, Adnan. "An architecture for expert database system." Thesis, University of Bradford, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264613.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Pilewski, Frank Michael. "System design of a discrepancy reporting system." Master's thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-03302010-020515/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Ercin, Nazif Ilker. "Fmdbms - A Fuzzy Mpeg-7 Database Management System." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614441/index.pdf.

Full text
Abstract:
Continuous progress in multimedia research in recent years has led to a proliferation of multimedia applications in everyday life. The ever-growing demand for high-performance multimedia applications creates the need for new and efficient storage and retrieval techniques. There are numerous studies in the literature attempting to describe the content of these multimedia documents. The Moving Picture Experts Group's XML-based MPEG-7 is one such standard, making it possible to describe multimedia content in terms of both low- and high-level properties. The MPEG-7 DDL allows defining new types using already defined types. Within the past ten years, it has become a widely accepted standard in multimedia applications. In this thesis, an XML database application is developed to manage MPEG-7 descriptions, utilizing eXist XML DB as the database management system and a Java application as the front end. The MPEG-7 Description Schemes are extended by introducing fuzzy semantic types, such as FuzzyObject and FuzzyEvent, using the MPEG-7 DDL. From this point of view, the application of fuzzy XML methods to the MPEG-7 standard is a novel approach.
APA, Harvard, Vancouver, ISO, and other styles
28

Yao, Bin. "Building an interoperable distributed image database management system." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0011/MQ59905.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Fathy, Sherif Kassem. "Exploring parallelism with object oriented database management system." Thesis, University of Kent, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.280322.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Guadagnini, Luca. "Dionysius : a Peer-to-peer Database Management System." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5399.

Full text
Abstract:
With the introduction of the peer-to-peer paradigm in the world of software, many applications have been created to exploit this architecture. Most of them provide a data-sharing service to users connected to a network, and programs such as Napster, Gnutella, eMule and BitTorrent have become the so-called killer applications. However, some effort has been spent on developing other solutions that use the peer-to-peer paradigm. In the case of databases, several projects have started with the general purpose of sharing data sets with other databases. They generally push the idea of providing the data contained in their database schemas to other peers in the network, introducing concepts such as schema matching, mapping tables and others that are necessary to establish connections and send data. The thesis analyses some of these projects to see which of them is the most clearly defined and best supported by concepts and definitions. The Hyperion Project of the University of Toronto, in collaboration with the University of Trento, is the most promising, and it aims to be one of the first peer-to-peer database management systems. However, the common idea of equating the peer-to-peer paradigm with data sharing - in the way presented by applications such as Napster - leads to many difficulties: it is hard to handle the data sets, some operations must be done manually, and there are cases where the peer-to-peer paradigm is not applied at all. For this reason, the goal is to define and demonstrate the concept of a peer-to-peer database built from scratch, with a DBMS suited to it. A real definition of a peer-to-peer database has never been given, and here for the first time we try to give one according to our vision. The definition depends on some precise concepts, such as the global schema - the original design of the database -, the sub-schema - a well-defined logical subset of the entities of the original schema - and binding tables - necessary to allow the creation of constraints and relations among the entities. Then, to show the validity of these concepts and how a management system for peer-to-peer databases can be developed and used, a prototype (named Dionysius) was realized by modifying HSQLDB - an ordinary DBMS developed in Java - and adding the peer-to-peer platform using the JXTA library set.
APA, Harvard, Vancouver, ISO, and other styles
31

洪宜偉 and Edward Hung. "Data cube system design: an optimization problem." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31222730.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Hung, Edward. "Data cube system design : an optimization problem /." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B21852340.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Kola, Marin. "Progettazione ed implementazione di un database per la gestione della mappa della connettivita urbana utilizzando tecnologie nosql." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/9696/.

Full text
Abstract:
The thesis first introduces the concept of Big Data, describing its main characteristics, its use, where it comes from and the opportunities it can bring. It then explains the reasons that led to the birth of the NoSQL movement, such as the need to manage Big Data while maintaining a structure that stays flexible over time. After a comparison with traditional systems, these DBMSs are classified into different families, touching on the structural concepts on which they are based and then explaining how they work. Next, the document-oriented database MongoDB is described. Its structural details, the concepts on which it is based and the goals it pursues are examined in depth, before analysing important functions in detail, such as insert and delete operations, as well as how to query the database. Thanks to the characteristics that make it very performant, MongoDB was used as the database backing a web application that displays the urban connectivity map.
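A small PyMongo sketch (the collection name, fields and coordinates are invented for illustration, not taken from the thesis) shows the document insertion and geospatial querying that such a connectivity-map application relies on:

```python
from pymongo import MongoClient, GEOSPHERE

client = MongoClient("mongodb://localhost:27017")
nodes = client["connectivity"]["nodes"]

# A 2dsphere index enables geospatial queries on GeoJSON points.
nodes.create_index([("location", GEOSPHERE)])

nodes.insert_one({
    "name": "access_point_42",
    "signal_dbm": -61,
    "location": {"type": "Point", "coordinates": [11.3426, 44.4949]},  # lon, lat
})

# Find access points within 500 m of a given position.
nearby = nodes.find({
    "location": {"$near": {
        "$geometry": {"type": "Point", "coordinates": [11.34, 44.49]},
        "$maxDistance": 500,
    }}
})
for doc in nearby:
    print(doc["name"], doc["signal_dbm"])
```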
APA, Harvard, Vancouver, ISO, and other styles
34

Park, Seong Seung. "The development of a database management system for library loan management." Thesis, Monterey, California. Naval Postgraduate School, 1990. http://hdl.handle.net/10945/30707.

Full text
Abstract:
Approved for public release, distribution is unlimited
This thesis deals with the procedures for and the issues in the analysis, design, and implementation of Library Loan Management System (LLMS). LLMS is a low-volume real-time transaction processing system intended for small or medium size libraries. It is designed to provide such library functions as library cataloging, patron registration, circulation, and reference services based on a relational database management system. We implemented prototype LLMS to run on IBM PC/AT or XT compatible microcomputer using dBASE IV. The developed prototype system has been documented in this thesis. We also discuss some issues in implementing LLMS in a networked environment.
APA, Harvard, Vancouver, ISO, and other styles
35

Lepinioti, Konstantina. "Data mining and database systems : integrating conceptual clustering with a relational database management system." Thesis, Bournemouth University, 2011. http://eprints.bournemouth.ac.uk/17765/.

Full text
Abstract:
Many clustering algorithms have been developed and improved over the years to cater for large scale data clustering. However, much of this work has been in developing numeric based algorithms that use efficient summarisations to scale to large data sets. There is a growing need for scalable categorical clustering algorithms as, although numeric based algorithms can be adapted to categorical data, they do not always produce good results. This thesis presents a categorical conceptual clustering algorithm that can scale to large data sets using appropriate data summarisations. Data mining is distinguished from machine learning by the use of larger data sets that are often stored in database management systems (DBMSs). Many clustering algorithms require data to be extracted from the DBMS and reformatted for input to the algorithm. This thesis presents an approach that integrates conceptual clustering with a DBMS. The presented approach makes the algorithm main memory independent and supports on-line data mining.
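One way to read the abstract's point about "appropriate data summarisations" is that per-attribute value counts can be computed inside the DBMS instead of exporting the raw rows to the clustering algorithm. The sketch below only illustrates that general idea with sqlite3 and an invented table; it is not the conceptual clustering algorithm developed in the thesis.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE animals (covering TEXT, legs TEXT, habitat TEXT)")
    con.executemany("INSERT INTO animals VALUES (?, ?, ?)", [
        ("fur", "4", "land"), ("feathers", "2", "air"),
        ("scales", "0", "water"), ("fur", "2", "land"),
    ])

    # Summarise each categorical attribute as value counts via GROUP BY, so the
    # clustering step can work on compact summaries rather than raw rows.
    summary = {}
    for attr in ("covering", "legs", "habitat"):
        rows = con.execute(
            f"SELECT {attr}, COUNT(*) FROM animals GROUP BY {attr}").fetchall()
        summary[attr] = dict(rows)

    print(summary)   # e.g. {'covering': {'feathers': 1, 'fur': 2, 'scales': 1}, ...}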
APA, Harvard, Vancouver, ISO, and other styles
36

Gåfvels, Niklas. "Searching Web Feeds from a Functional Database Management System." Thesis, Uppsala University, Department of Information Technology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-110893.

Full text
Abstract:

Web feeds are a popular technique to distribute information about contents of web pages. RSS and Atom are two standards used to syndicate web contents as web feeds. This project investigates how to make different kinds of Internet web feeds searchable by implementing a general wrapper for web feeds in an extensible and functional DBMS, Amos II. The system, RSS-Amos, makes it possible to search the contents of any RSS or Atom based web feed using the query language AmosQL. New web feeds simply have to be declared to the system in order to make them searchable. The system guarantees that added feeds always are up to date when queries are made. The wrapper is implemented in Java using the ROME API from java.net. The project includes an evaluation of the performance of the system. Due to the fact that the actual data sources are located on the Internet, a cache of read feeds has been implemented to improve performance. The cache makes queries over 150 times faster.
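RSS-Amos itself is written in Java on top of ROME and Amos II, so the following Python sketch is only an analogy for the two ideas the abstract highlights: a generic wrapper that parses any RSS or Atom feed, and a cache that avoids re-fetching a feed on every query. The feedparser package, the refresh interval and the example URL are assumptions for illustration.

    import time
    import feedparser   # third-party package: pip install feedparser

    _cache = {}          # url -> (fetch_time, parsed feed)
    MAX_AGE = 300        # seconds before a cached feed is considered stale

    def get_feed(url):
        """Return a parsed feed, re-fetching only when the cache entry is stale."""
        now = time.time()
        if url not in _cache or now - _cache[url][0] > MAX_AGE:
            _cache[url] = (now, feedparser.parse(url))
        return _cache[url][1]

    def search(url, keyword):
        """Crude 'query' over a feed: entry titles containing the keyword."""
        feed = get_feed(url)
        return [e.title for e in feed.entries if keyword.lower() in e.title.lower()]

    print(search("https://example.org/feed.xml", "database"))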

APA, Harvard, Vancouver, ISO, and other styles
37

Spear, Ronald L. "A relational/object-oriented database management system : R/OODBMS." Thesis, Monterey, California. Naval Postgraduate School, 1992. http://hdl.handle.net/10945/24026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Wyrick, Lynn A. "Implementation of a distributed object-oriented database management system." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/25987.

Full text
Abstract:
Distributed database management systems provide for more flexible and efficient processing. Research in object-oriented database management systems is revealing an abundance of additional benefits that cannot be provided by more traditional database management systems. The Naval Military Personnel Command (NMPC) is used as a case study to evaluate the requirements of transitioning from a centralized to a distributed database management system. Features and characteristics of both distributed and object-oriented database management systems are used to determine the appropriate configuration for different application environments. The distributed and object-oriented concepts are evaluated in detail in order to allow an organization to appropriately select the type of system to meet their needs. Transition requirements for NMPC, in particular, are identified and a suggested plan of action is presented. Keywords: Theses, Database implementation, Database design, Distributed architecture, KBSA (Knowledge Base Software Assistant).
APA, Harvard, Vancouver, ISO, and other styles
39

Moolman, G. Chris. "A relational database management systems approach to system design /." This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-07102009-040421/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Moolman, George Christiaan. "A relational database management systems approach to system design." Thesis, Virginia Tech, 1992. http://hdl.handle.net/10919/43628.

Full text
Abstract:
Systems are developed to fulfill certain requirements. Several system design configurations usually can fulfill the technical requirements, but at different equivalent life-cycle costs. The problem is how to manipulate and evaluate different system configurations so that the required system effectiveness can be achieved at a minimum equivalent cost. It is also important to have a good definition of all the major consequences of each design configuration. For each alternative configuration considered, it is useful to know the number of units to deploy, the inventory and other logistic requirements, as well as the sensitivity of the system to changes in input variable values. An intelligent relational database management system is defined to solve the problem described. Table structures are defined to maintain the required data elements and algorithms are constructed to manipulate the data to provide the necessary information. The methodology is as follows: Customer requirements are analyzed in functional terms. Feasible design alternatives are considered and defined as system design configurations. The reliability characteristics of each system configuration are determined, initially from a system-level allocation, and later determined from test and evaluation data. A maintenance analysis is conducted to determine the inventory requirements (using reliability data) and the other logistic requirements for each design configuration. A vector of effectiveness measures can be developed for each customer, depending on objectives, constraints, and risks. These effectiveness measures, consisting of a combination of performance and cost measures, are used to aid in objectively deciding which alternative is preferred. Relationships are defined between the user requirements, the reliability and maintainability of the system, the number of units deployed, the inventory level, and other logistic characteristics of the system. A heuristic procedure is developed to interactively manipulate these parameters to obtain a good solution to the problem with technical performance and cost measures as criteria. Although it is not guaranteed that the optimal solution will be found, a feasible solution close to the optimal will be found. Eventually the user will have, at any time, the ability to change the value of any parameter modeled. The impact on the total system will subsequently be made visible.
Master of Science
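The abstract's heuristic - manipulating the number of deployed units and the inventory level until the required effectiveness is met at acceptable cost - can be pictured with a toy search like the one below. The effectiveness and cost formulas are invented purely to make the loop concrete; they are not the models or the relational implementation described in the thesis.

    # Toy parameter search: find the cheapest (units, spares) combination
    # that meets a required effectiveness level.
    REQUIRED_EFFECTIVENESS = 0.90

    def effectiveness(units, spares):
        # Invented model: more units and more spares raise effectiveness,
        # with diminishing returns.
        return 1 - 0.5 ** units * 0.8 ** spares

    def life_cycle_cost(units, spares):
        # Invented model: acquisition cost plus inventory holding cost.
        return units * 120_000 + spares * 15_000

    best = None
    for units in range(1, 10):
        for spares in range(0, 20):
            if effectiveness(units, spares) >= REQUIRED_EFFECTIVENESS:
                cost = life_cycle_cost(units, spares)
                if best is None or cost < best[0]:
                    best = (cost, units, spares)

    print("cheapest feasible configuration:", best)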
APA, Harvard, Vancouver, ISO, and other styles
41

So, Maria Yuen-Ling. "A database system for the management of numerical experiments." Thesis, University of Ottawa (Canada), 1987. http://hdl.handle.net/10393/5098.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Burghardt, Josef. "Database system for teaching German." Virtual Press, 1992. http://liblink.bsu.edu/uhtbin/catkey/834506.

Full text
Abstract:
It is not revolutionary to say that repetition and practical experience are very important aspects of learning about and understanding a topic. This is especially true for languages, particularly from the point of view of vocabulary. Like many other processes that deal with gaining knowledge, studying foreign words involves a lot of side work: for instance the selection of words, or their presentation for the actual training. The purpose of this thesis is to automate the study of vocabulary. To do so, an intelligent software package was developed. Divided into three parts, the project takes into account the language point of view, the studying point of view, and the computer science point of view. The fundamental idea used to accomplish this goal is a relational database system. It is utilized by software programs that solve their tasks of data management, data manipulation, storage and retrieval in an efficient way. The system is developed for English-speaking persons studying German as a foreign language. And since every language has its own nature, this naturally influences all levels and aspects of the design and utilization of the database.
Department of Computer Science
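As an illustration of the kind of "side work" the abstract says the system automates - selecting which words to practise - here is a small sqlite3 sketch. The table layout and the selection rule (least-practised, least-known words first) are assumptions for illustration, not the schema or algorithm used in the thesis.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE vocabulary (
                       english TEXT, german TEXT,
                       times_asked INTEGER DEFAULT 0,
                       times_correct INTEGER DEFAULT 0)""")
    con.executemany("INSERT INTO vocabulary (english, german) VALUES (?, ?)", [
        ("house", "das Haus"), ("tree", "der Baum"), ("book", "das Buch"),
    ])

    # Select the next practice batch: least-known words first.
    batch = con.execute("""SELECT english, german FROM vocabulary
                           ORDER BY times_correct, times_asked
                           LIMIT 2""").fetchall()
    for english, german in batch:
        print(f"{english} -> {german}")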
APA, Harvard, Vancouver, ISO, and other styles
43

Daly, William G. "A graphical management system for semantic muiltimedia databases." Thesis, University of York, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.235727.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Xu, Bing. "A visual query facility for DISIMA image database management system." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0012/MQ60196.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Nes, Nicolaas Johannes. "Image database management system[s] design considerations, algorithms and architecture." [S.l. : Amsterdam : s.n.] ; Universiteit van Amsterdam [Host], 2000. http://dare.uva.nl/document/55891.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Petras, Juraj Carleton University Dissertation Computer Science. "Modeling and implementation in an object-oriented database management system." Ottawa, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
47

Walldén, Marcus, and Aylin Özkan. "A graph database management system for a logistics-related service." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-205184.

Full text
Abstract:
Higher demands on database systems have lead to an increased popularity of certain database system types in some niche areas. One such niche area is graph networks, such as social networks or logistics networks. An analysis made on such networks often focus on complex relational patterns that sometimes can not be solved efficiently by traditional relational databases, which has lead to the infusion of some specialized non-relational database systems. Some of the database systems that have seen a surge in popularity in this area are graph database systems. This thesis presents a prototype of a logistics network-related service using a graph database management system called Neo4j, which currently is the most popular graph database management system in use. The logistics network covered by the service is based on existing data from PostNord, Sweden’s biggest provider of logistics solutions, and primarily focuses on customer support and business to business. By creating a prototype of the service this thesis strives to indicate some of the positive and negative aspects of a graph database system, as well as give an indication of how a service using a graph database system could be created. The results indicate that Neo4j is very intuitive and easy to use, which would make it optimal for prototyping and smaller systems, but due to the used evaluation method more research in this area would need to be carried out in order to confirm these conclusions.
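To give a flavour of what querying such a prototype can look like, the snippet below uses the official neo4j Python driver and a Cypher query over an invented logistics graph (Terminal nodes connected by ROUTE relationships). The connection details, labels and relationship types are assumptions made for illustration; they are not PostNord's data model or the exact queries used in the thesis.

    from neo4j import GraphDatabase   # official Neo4j Python driver

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    # Shortest route between two terminals in a hypothetical logistics graph.
    query = """
    MATCH (a:Terminal {name: $src}), (b:Terminal {name: $dst}),
          p = shortestPath((a)-[:ROUTE*]-(b))
    RETURN [n IN nodes(p) | n.name] AS route
    """

    with driver.session() as session:
        record = session.run(query, src="Stockholm", dst="Malmo").single()
        if record is not None:
            print(" -> ".join(record["route"]))

    driver.close()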
APA, Harvard, Vancouver, ISO, and other styles
48

Singh, Parmjit. "Web based forensic information management system." Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4721.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2006.
Title from document title page. Document formatted into pages; contains xiii, 316 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 315-316).
APA, Harvard, Vancouver, ISO, and other styles
49

Kamuhanda, Dany. "Visualising M-learning system usage data." Thesis, Nelson Mandela Metropolitan University, 2015. http://hdl.handle.net/10948/11015.

Full text
Abstract:
Data storage is an important practice for organisations that want to track their progress. The evolution of data storage technologies from manual methods of storing data on paper or in spreadsheets, to the automated methods of using computers to automatically log data into databases or text files has brought an amount of data that is beyond the level of human interpretation and comprehension. One way of addressing this issue of interpreting large amounts of data is data visualisation, which aims to convert abstract data into images that are easy to interpret. However, people often have difficulty in selecting an appropriate visualisation tool and visualisation techniques that can effectively visualise their data. This research proposes the processes that can be followed to effectively visualise data. Data logged from a mobile learning system is visualised as a proof of concept to show how the proposed processes can be followed during data visualisation. These processes are summarised into a model that consists of three main components: the data, the visualisation techniques and the visualisation tool. There are two main contributions in this research: the model to visualise mobile learning usage data and the visualisation of the usage data logged from a mobile learning system. The mobile learning system usage data was visualised to demonstrate how students used the mobile learning system. Visualisation of the usage data helped to convert the data into images (charts and graphs) that were easy to interpret. The evaluation results indicated that the proposed process and resulting visualisation techniques and tool assisted users in effectively and efficiently interpreting large volumes of mobile learning system usage data.
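The abstract's third component, the visualisation tool, can be as simple as a charting library fed with aggregated usage counts. The matplotlib sketch below uses made-up usage figures and is only meant to illustrate turning logged counts into an easily interpreted chart; it is not the tool or the model evaluated in the thesis.

    import matplotlib.pyplot as plt

    # Hypothetical aggregated usage counts logged by a mobile learning system.
    features = ["Login", "View notes", "Take quiz", "Post question", "Download"]
    counts = [430, 310, 220, 90, 150]

    plt.bar(features, counts)
    plt.ylabel("Number of logged events")
    plt.title("M-learning system usage by feature")
    plt.tight_layout()
    plt.show()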
APA, Harvard, Vancouver, ISO, and other styles
50

Beernink, Kathleen A. "A conceptual database design of a Naval shore command management information system." Thesis, Monterey, Calif. : Naval Postgraduate School, 1992. http://handle.dtic.mil/100.2/ADA250091.

Full text
APA, Harvard, Vancouver, ISO, and other styles