
Dissertations / Theses on the topic 'Database relation schema'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 34 dissertations / theses for your research on the topic 'Database relation schema.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Wheeler, Jared Thomas. "Extracting a Relational Database Schema from a Document Database." UNF Digital Commons, 2017. http://digitalcommons.unf.edu/etd/730.

Abstract:
As NoSQL databases become increasingly used, more methodologies emerge for migrating from relational databases to NoSQL databases. Meanwhile, there is a lack of methodologies that assist in migration in the opposite direction, from NoSQL to relational. As software is iterated upon, use cases may change. A system which was originally developed with a NoSQL database may accrue needs which require Atomicity, Consistency, Isolation, and Durability (ACID) features that NoSQL systems lack, such as consistency across nodes or consistency across re-used domain objects. Shifting requirements could result in the system being changed to utilize a relational database. While there are some tools available to transfer data between an existing document database and an existing relational database, there has been no work on automatically generating the relational database based upon the data already in the NoSQL system. Not taking the existing data into account can lead to inconsistencies during data migration. This thesis describes a methodology to automatically generate a relational database schema from the implicit schema of a document database. This thesis also includes details of how the methodology is implemented, and what could be enhanced in future work.
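To make the notion of an implicit schema concrete, here is a minimal, hypothetical sketch (not the thesis's actual methodology) of inferring relational DDL from JSON-like documents; the collection name, field names, and type-mapping table are all assumptions:

```python
from collections import defaultdict

# Hypothetical mapping from Python value types to SQL column types.
SQL_TYPES = {int: "INTEGER", float: "DOUBLE PRECISION", bool: "BOOLEAN", str: "TEXT"}

def infer_tables(collection, documents):
    """Collect column types per table; nested objects become child tables with FKs."""
    tables = defaultdict(dict)                       # table name -> {column: SQL type}
    for doc in documents:
        for field, value in doc.items():
            if isinstance(value, dict):              # nested object -> child table
                child = f"{collection}_{field}"
                for name, cols in infer_tables(child, [value]).items():
                    tables[name].update(cols)
                tables[collection][f"{field}_id"] = f"INTEGER REFERENCES {child}"
            else:
                tables[collection][field] = SQL_TYPES.get(type(value), "TEXT")
    return tables

def to_ddl(tables):
    return [f"CREATE TABLE {name} ("
            + ", ".join(f"{col} {typ}" for col, typ in cols.items()) + ");"
            for name, cols in tables.items()]

docs = [{"title": "Intro", "pages": 12, "author": {"name": "Ada"}}]
print("\n".join(to_ddl(infer_tables("book", docs))))
```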
2

Al, Arrayedh Osama M. "Evolution of synthesised relational database schemas." Thesis, University of Nottingham, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.321460.

3

Wright, Christopher. "Mutation analysis of relational database schemas." Thesis, University of Sheffield, 2015. http://etheses.whiterose.ac.uk/12059/.

Abstract:
The schema is the key artefact used to describe the structure of a relational database, specifying how data will be stored and the integrity constraints used to ensure it is valid. It is therefore surprising that to date little work has addressed the problem of schema testing, which aims to identify mistakes in the schema early in software development. Failure to do so may lead to critical faults, which may cause data loss or degradation of data quality, remaining undetected until later when they will prove much more costly to fix. This thesis explores how mutation analysis – a technique commonly used in software testing to evaluate test suite quality – can be applied to evaluate data generated to exercise the integrity constraints of a relational database schema. By injecting faults into the constraints, modelling both faults of omission and commission, this enables the fault-finding capability of test suites generated by different techniques to be compared. This is essential to empirically evaluate further schema testing research, providing a means of assessing the effectiveness of proposed techniques. To mutate the integrity constraints of a schema, a collection of novel mutation operators is proposed and their implementation described. These allow an empirical evaluation of an existing data generation approach, demonstrating the effectiveness of the mutation analysis technique and identifying a configuration that killed 94% of mutants on average. Cost-effective algorithms for automatically removing equivalent mutants and other ineffective mutants are then proposed and evaluated, revealing a third of mutation scores to be mutation adequate and reducing the time taken by an average of 7%. Finally, the execution cost problem is confronted, with a range of optimisation strategies being applied that consistently improve efficiency, reducing the time taken by several hours in the best case and by as much as 99% on average for one DBMS.
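As a rough illustration of what a schema mutation operator does, the sketch below (regex-based and simplified, not one of the operators actually implemented in the thesis) produces one mutant per NOT NULL constraint removed from a table definition, modelling a fault of omission:

```python
import re

def not_null_removal_mutants(create_table_sql):
    """Yield one mutant schema per NOT NULL constraint removed."""
    for match in re.finditer(r"\s+NOT NULL", create_table_sql, flags=re.IGNORECASE):
        start, end = match.span()
        yield create_table_sql[:start] + create_table_sql[end:]

schema = ("CREATE TABLE account (id INT PRIMARY KEY, "
          "owner VARCHAR(30) NOT NULL, balance INT NOT NULL);")
for i, mutant in enumerate(not_null_removal_mutants(schema), start=1):
    print(f"-- mutant {i}")
    print(mutant)
```

A test suite whose data never exercises the weakened constraint cannot kill such a mutant, which is what the mutation score measures.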
4

Kutan, Kent. "Transformation of relational schema into static object schema." Master's thesis, This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-02022010-020303/.

5

Boháč, Martin. "Perzistence XML v relační databázi." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237200.

Abstract:
The aim of this thesis is to create a client for the xDB database that supports visualization and management of XML documents and schemas. The first part introduces XML, XML schemas (DTD, XML Schema, RelaxNG, etc.) and related technologies. After that, the thesis deals with the problem of XML persistence and focuses on the mapping techniques necessary for efficient storage in a relational database. The main part is devoted to the design and implementation of the client application XML Admin, which is programmed in Java. The application uses the XML:DB interface to communicate with the xDB database. It supports storing XML documents in a collection and the XPath language for querying them. The final section is devoted to application performance testing and a comparison with the existing native database eXist.
6

Lee, Anna. "Transformation of set schema into relational structures." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26431.

Abstract:
This thesis describes a new approach to relational database design using the SET conceptual model. The SET conceptual model is used for information modelling. The database schema generated from the information modelling is called the SET schema. The SET schema consists of the declarations of all the sets of the database schema. A domain graph can be constructed based on the information declared in the SET schema. A domain graph is a directed graph with nodes labelled with declared sets and arcs labelled with degree information. Each arc in the domain graph points to a node S from a node labelled with an immediate domain predecessor of S. The new method of table design for the relational database involves partitioning the domain graph into mutually exclusive <1,1>-connected components based on the degree information. These components (subgraphs) are then transformed into tree structures. These trees are extended to include the domain predecessors of their nodes to make them predecessor total. The projections of these extended trees onto the value sets labelling their leaf nodes form a set of relations which can be represented by tables. This table design method is described and presented in this thesis, along with a program that automates the method. Given a schema of the SET model, together with some degree information about defined sets that a user must calculate based on the intention of the defined sets, the program produces a relational database schema that will record data for the SET schema correctly and completely.
7

Tanaka, Mitsuru. "Classifier System Learning of Good Database Schema." ScholarWorks@UNO, 2008. http://scholarworks.uno.edu/td/859.

Abstract:
This thesis presents an implementation of a learning classifier system which learns good database schema. The system is implemented in Java using the NetBeans development environment, which provides a good control for the GUI components. The system contains four components: a user interface, a rule and message system, an apportionment of credit system, and genetic algorithms. The input of the system is a set of simple database schemas and the objective for the classifier system is to keep the good database schemas which are represented by classifiers. The learning classifier system is given some basic knowledge about database concepts or rules. The result showed that the system could decrease the bad schemas and keep the good ones.
8

Shamsedin Tekieh, Razieh Sadat. "An XML-based framework for electronic business document integration with relational databases." University of New South Wales, Information Systems, Technology & Management, Australian School of Business, 2009. http://handle.unsw.edu.au/1959.4/43695.

Abstract:
Small and medium enterprises (SMEs) are becoming increasingly engaged in B2B interactions. The ubiquity of the Internet and the quasi-reliance on electronic document exchanges with larger trading partners have fostered this move. The main technical challenge that this brings to SMEs is that of business document integration: they need to exchange business documents with heterogeneous document formats and also integrate these documents with internal information systems. Often they cannot afford expensive, customized and proprietary solutions for document exchange and storage; rather, they need cost-effective approaches designed based on open standards and backed by easy-to-use information systems. In this dissertation, we investigate the problem of business document integration for SMEs following a design science methodology. We propose a framework and conceptual architecture for a business document integration system (BDIS). By studying existing business document formats, we recommend using the GS1 XML standard format as the intermediate format for business documents in BDIS. The GS1 standards are widely used in supply chains and logistics globally. We present an architecture for BDIS consisting of two layers: one for the design of an internal information system based on relational databases, capable of storing XML business documents, and the other enabling the exchange of heterogeneous business documents at runtime. For the design layer, we leverage existing XML schema conversion approaches, and extend them, to propose a customized and novel approach for converting GS1 XML document schemas into relational schemas. For the runtime layer, we propose wrappers as architectural components for the conversion of various electronic document formats into the GS1 XML format. We demonstrate our approach through a case study involving a GS1 XML business document. We have implemented a prototype BDIS and evaluated and compared it with existing research and commercial tools for XML to relational schema conversion. The results show that it generates operational and simpler relational schemas for GS1 XML documents. In conclusion, the proposed framework enables SMEs to engage effectively in electronic business.
9

Macák, Martin. "Editor relačních tabulek." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-412810.

Abstract:
This thesis deals with aspects of using a common, universal relational table editor as a simple information system which would be fully independent of the underlying database system and partially independent of the underlying database schema. One part of the thesis deals with the potential of using such a universal information system to create a framework that allows fast and easy development of small and medium information systems. The practical part of the thesis is an application which implements the basics of a simple relational table editor, is fully independent of the underlying database provider and schema, and serves as a demonstrative table editor.
10

Janse, van Rensburg J., and H. Vermaak. "The design of a JADE compliant manufacturing ontology and accompanying relational database schema." Interim : Interdisciplinary Journal, Vol 10 , Issue 1: Central University of Technology Free State Bloemfontein, 2011. http://hdl.handle.net/11462/333.

Abstract:
To enable meaningful and consistent communication between different software systems in a particular domain (such as manufacturing, law or medicine), a standardised vocabulary and communication language is required by all the systems involved. Concepts in the domain about which the systems want to communicate are formalized in an ontology by establishing the meaning of concepts and creating relationships between them. The inputs to this process are found by analysing the physical domain and its processes. The resulting ontology structure is a computer-usable representation of the physical domain about which the systems want to communicate. To enable the long-term persistence of the actual data contained in these concepts and the enforcement of various business rules, a sufficiently powerful database system is required. This paper presents the design of a manufacturing ontology and its accompanying relational database schema that will be used in a manufacturing test domain.
11

Saini, Varun. "Visualization Schemas: A User Interface Extending Relational Data Schemas for Flexible, Multiple-View Visualization of Diverse Databases." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/32621.

Abstract:
Information visualizations utilizing multiple coordinated views allow users to rapidly explore complex data spaces and discover complex relationships. Most multiple-view visualizations are static with regard to the views that they provide and the coordinations that they support. Despite significant progress in the field of Information Visualization and the development of novel interaction techniques, user interfaces for exploring data have lacked flexibility. As a result, the vast quantities of information rapidly being collected in databases are underutilized and opportunities for advancement of knowledge are lost. This research proposes the central concept of visualization schemas, based on the Snap-Together Visualization (Snap) model and analogous to the successful database concept of data schemas, which will enable dynamic specification of information visualizations for any given database without programming. Relational databases provide significant flexibility to organize, store, and manipulate an infinite variety of complex data collections. This flexibility is enabled by the concept of relational data schemas, which allow data owners to easily design custom databases according to their unique needs. We bring the same level of flexibility to visualization development through visualization schemas. Visualization Schemas is a conceptual model, user interface, and software architecture, while Fusion is the implemented system; together they enable users to rapidly and dynamically construct personalized multi-view visualization workspaces by coordinating visualizations in ways unforeseen by the original developers.
12

Lozano, Aparicio Jose Martin. "Data exchange from relational databases to RDF with target shape schemas." Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I063.

Abstract:
Resource Description Framework (RDF) is a graph data model which has recently found use for publishing data from relational databases on the web. We investigate data exchange from relational databases to RDF graphs with target shapes schemas. Essentially, data exchange models a process of transforming an instance of a relational schema, called the source schema, into an RDF graph constrained by a target schema, according to a set of rules called source-to-target tuple-generating dependencies. The output RDF graph is called a solution. Because the tuple-generating dependencies define this process in a declarative fashion, there might be many possible solutions or no solution at all. We study a constructive relational-to-RDF data exchange setting with target shapes schemas, which is composed of a relational source schema, a shapes schema for the target, and a set of mappings that use IRI constructors; furthermore, we assume that any two IRI constructors are non-overlapping. We propose a visual mapping language (VML) that helps non-expert users to specify mappings in this setting. Moreover, we develop a tool called ShERML that performs data exchange with the use of VML, and for users who want to understand the model behind VML mappings, we define R2VML, a text-based mapping language that captures VML and presents a succinct syntax for defining mappings. We investigate the problem of checking consistency: a data exchange setting is consistent if for every input source instance there is at least one solution. We show that the consistency problem is coNP-complete and provide a static analysis algorithm that allows deciding whether the setting is consistent or not. We also study the problem of computing certain answers. An answer is certain if it holds in every solution. Typically, certain answers are computed using a universal solution; however, in our setting a universal solution might not exist. Thus, we introduce the notion of a universal simulation solution, which always exists and allows computing certain answers to any class of queries that is robust under simulation. One such class is nested regular expressions (NREs) that are forward, i.e., do not use the inverse operation. Using a universal simulation solution renders the computation of certain answers to forward NREs tractable in data complexity. Finally, we investigate the shapes schema elicitation problem, which consists of constructing a target shapes schema from a constructive relational-to-RDF data exchange setting without a target shapes schema. We identify two desirable properties of a good target schema: soundness, i.e., every produced RDF graph is accepted by the target schema; and completeness, i.e., every RDF graph accepted by the target schema can be produced. We propose an elicitation algorithm that is sound for any schema-less data exchange setting and complete for a large practical class of schema-less settings.
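For intuition, here is a minimal, hypothetical sketch of a single source-to-target mapping that uses an IRI constructor (a URI template applied to key attributes); the table, templates, and predicates are invented for illustration and do not reproduce the VML or R2VML syntax defined in the thesis:

```python
def iri(template, **values):
    """An IRI constructor: a URI template applied to key attributes.
    (The thesis assumes any two such constructors are non-overlapping.)"""
    return "<" + template.format(**values) + ">"

def person_rule(rows):
    """One mapping rule: Person(id, name, city_id) -> name and livesIn triples."""
    for row in rows:
        subject = iri("http://example.org/person/{id}", id=row["id"])
        yield (subject, "<http://example.org/name>", f'"{row["name"]}"')
        yield (subject, "<http://example.org/livesIn>",
               iri("http://example.org/city/{id}", id=row["city_id"]))

for s, p, o in person_rule([{"id": 1, "name": "Ada", "city_id": 7}]):
    print(s, p, o, ".")
```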
13

Fatrdla, Pavel. "Porovnání technologií pro objektově relační mapování." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237102.

Abstract:
This diploma thesis deals with contemporary object-relational mapping (ORM) technologies for Java. It also briefly describes competing technologies for persisting objects in files, object databases and object-relational databases. The main part of the thesis, however, concerns the persistence of objects in relational databases using ORM frameworks. The work begins by studying the general methods and issues that these frameworks have to solve. Next, it selects and describes in depth several ORM frameworks, which are later demonstrated on a demo application. The following part describes the problems I faced while implementing persistence using these frameworks. Finally, the frameworks are evaluated and compared.
14

Voigt, Hannes. "Flexibility in Data Management." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-136681.

Abstract:
With the ongoing expansion of information technology, new fields of application requiring data management emerge virtually every day. In our knowledge culture increasing amounts of data and work force organized in more creativity-oriented ways also radically change traditional fields of application and question established assumptions about data management. For instance, investigative analytics and agile software development move towards a very agile and flexible handling of data. As the primary facilitators of data management, database systems have to reflect and support these developments. However, traditional database management technology, in particular relational database systems, is built on assumptions of relatively stable application domains. The need to model all data up front in a prescriptive database schema earned relational database management systems the reputation among developers of being inflexible, dated, and cumbersome to work with. Nevertheless, relational systems still dominate the database market. They are a proven, standardized, and interoperable technology, well-known in IT departments with a work force of experienced and trained developers and administrators. This thesis aims at resolving the growing contradiction between the popularity and omnipresence of relational systems in companies and their increasingly bad reputation among developers. It adapts relational database technology towards more agility and flexibility. We envision a descriptive schema-comes-second relational database system, which is entity-oriented instead of schema-oriented; descriptive rather than prescriptive. The thesis provides four main contributions: (1) a flexible relational data model, which frees relational data management from having a prescriptive schema; (2) autonomous physical entity domains, which partition self-descriptive data according to their schema properties for better query performance; (3) a freely adjustable storage engine, which allows adapting the physical data layout used to properties of the data and of the workload; and (4) a self-managed indexing infrastructure, which autonomously collects and adapts index information under the presence of dynamic workloads and evolving schemas. The flexible relational data model is the thesis' central contribution. It describes the functional appearance of the descriptive schema-comes-second relational database system. The other three contributions improve components in the architecture of database management systems to increase the query performance and the manageability of descriptive schema-comes-second relational database systems. We are confident that these four contributions can help paving the way to a more flexible future for relational database management technology.
15

Solihin, Wawan. "A simplified BIM data representation using a relational database schema for an efficient rule checking system and its associated rule checking language." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54831.

Abstract:
Efforts to automate building rule checking have not brought us anywhere near the ultimate goal of fully automating the rule checking process. With the advancement in BIM and the latest tools and computing capability, we have what is necessary to achieve it, and yet challenges still abound. This research takes a holistic approach to solve the issue by first examining the rule complexity and its logic structure. Three major aspects of the rules are addressed in this research. The first is a new approach to transform BIM data into a simple database schema and to make it easily query-able by adopting the data warehouse approach. Geometry and spatial operations are also commonly needed for automating rules, and therefore the second approach is to integrate these into a database in the form of multiple representations. The third is a standardized rule language, called BIMRL, that leverages the database query capability integrated with its geometry and spatial query capability. It is designed for a non-programmatic approach to rule definition that is suitable for typical rule experts. A rule definition takes the form of a triplet command, a CHECK – EVALUATE – ACTION statement, that can be chained to support more complex rules. A prototype system has been developed as a proof-of-concept, using selected rules taken from various sources, to demonstrate the validity of the approach in solving the challenges of automating building rule checking.
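For a rough sense of the triplet structure, the sketch below expresses one rule as plain data; it is illustrative only, the predicate and the door-width threshold are invented, and it does not use the actual BIMRL syntax:

```python
# One hypothetical rule: collect doors, evaluate a clear-width condition, act on failures.
rule = {
    "check": "Door",                                          # which elements to collect
    "evaluate": lambda door: door["clear_width_mm"] >= 850,   # pass/fail condition
    "action": "flag as non-compliant",                        # what to do on failure
}

doors = [{"id": "D1", "clear_width_mm": 900}, {"id": "D2", "clear_width_mm": 800}]
for door in doors:
    if not rule["evaluate"](door):
        print(rule["action"], door["id"])                     # -> flag as non-compliant D2
```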
16

Petrikas, Giedrius. "OWL transformavimas į reliacinių duomenų bazių schemas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20100826_105919-45823.

Abstract:
Ontology descriptions are typically used in the Semantic Web (Web 2.0), but nowadays they find more and more application in everyday information systems. A well-formed ontology must have correct syntax and an unambiguous, machine-understandable interpretation, so that it can clearly define the fundamental concepts and relationships of the problem domain. Ontologies are increasingly used in many applications: business process and information integration, search and navigation. Such applications require scalability and performance, efficient storage, and manipulation of large-scale ontological data. Semantic Web languages are dedicated to ontology development: the Resource Description Framework (RDF) and its schema language RDFS, and the Web Ontology Language (OWL), which consists of three sublanguages – OWL Lite, OWL Description Logic (DL) and OWL Full. When ontology-based systems grow in scope and volume, the reasoners of expert systems become unsuitable. In such circumstances, storing ontologies in relational databases becomes a relevant need for the Semantic Web and enterprises. This work answers the question of how OWL ontologies can be efficiently transformed into relational database schemas: an algorithm is proposed which fully automatically transforms ontologies represented in OWL into RDB schemas. Some concepts, e.g. ontology classes and properties, are mapped to relational tables, relations and attributes; others (constraints) are stored as metadata in special tables. Using both direct mapping and metadata, it is possible to obtain appropriate relational structures and not to lose the... [to full text]
17

Vyšniauskas, Ernestas. "OWL ontologijų transformavimas į reliacinių duomenų bazių schemas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2007. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2007~D_20070115_115940-61347.

Abstract:
The current work has arisen with respect to the growing importance of ontology modelling in information systems development. Due to the emerging technologies of the Semantic Web, it is desirable to use the Web Ontology Language OWL for this purpose. On the other hand, relational database technology has ensured the best facilities for storing, updating and manipulating the information of a problem domain. This work covers an analysis of the process by which an ontology of a particular domain described in OWL may be transformed and stored in a relational database. Algorithms for the transformation of a domain ontology, described in OWL, into a relational database are proposed. According to these algorithms, ontology classes are mapped to relational tables, properties to relations and attributes, and constraints to metadata. The proposed algorithm is capable of transforming all of OWL Lite and part of the OWL DL syntax. Further expansion of the algorithm to cover more capabilities of OWL should be based on the same principles. A prototypical tool performing the transformations has been implemented as an add-in for the ontology development tool Protégé. The methodology of transformation is illustrated with an example.
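To make the mapping directions concrete, here is a small, hypothetical sketch (class and property names are invented; this is not the code of the Protégé add-in): classes become tables, datatype properties become columns, object properties become foreign keys, and remaining axioms are kept as metadata rows:

```python
classes = ["Person", "Company"]
datatype_properties = {"name": ("Person", "TEXT"), "founded": ("Company", "INTEGER")}
object_properties = {"worksFor": ("Person", "Company")}     # property: (domain, range)
other_axioms = [("worksFor", "maxCardinality", "1")]        # constraints kept as metadata

ddl = [f"CREATE TABLE {c} (id INTEGER PRIMARY KEY);" for c in classes]
for prop, (domain, sql_type) in datatype_properties.items():
    ddl.append(f"ALTER TABLE {domain} ADD COLUMN {prop} {sql_type};")
for prop, (domain, range_class) in object_properties.items():
    ddl.append(f"ALTER TABLE {domain} ADD COLUMN {prop}_id "
               f"INTEGER REFERENCES {range_class}(id);")
ddl.append("CREATE TABLE owl_metadata (subject TEXT, axiom TEXT, value TEXT);")
ddl += [f"INSERT INTO owl_metadata VALUES ('{s}', '{a}', '{v}');"
        for s, a, v in other_axioms]
print("\n".join(ddl))
```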
18

Ivan, Luković. "Integracija šema modula baze podataka informacionog sistema." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 1996. http://dx.doi.org/10.2298/NS1996LUKOVICIVAN.

Abstract:
Parallel and independent work of a number of designers on different information system modules (i.e. subsystems), identified by the initial functional decomposition of the real system, necessarily leads to mutually inconsistent database (db) module schemas. The thesis considers the problems concerning automatic detection of collisions that can appear during the simultaneous design of different db module schemas, and the integration of db module schemas into the unique information system db schema. All possible types of db module schema collisions have been identified. A necessary and sufficient condition of strong and intensional db module schema compatibility has been formulated and proved, which made it possible to formalize the process of checking the strong and intensional compatibility of db module schemas and to construct the appropriate algorithms. The integration process of the unique (strong covering) db schema, on the basis of compatible db module schemas, is formalized as well. The methodology of applying the algorithms for compatibility checking and unique db schema integration is also presented.
19

Městka, Milan. "Tvorba databázové aplikace a řešení pro Business Intelligence." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2012. http://www.nusl.cz/ntk/nusl-223400.

Abstract:
The theme of this master's thesis is the design of software support for business intelligence. The design is realized in cooperation with the corporation ZZN Pelhřimov a.s. The introduction focuses on a theoretical description of business intelligence and data mining, and on the development environment in which the project is designed; the corporation is also characterized there. The main part covers data collection and the definition of individual modules. The conclusion presents several types of analysis of the collected data, from which measures to improve the current state of the corporation can be drawn.
20

Chytil, Martin. "Adaptation of Relational Database Schema." Master's thesis, 2012. http://www.nusl.cz/ntk/nusl-313837.

Abstract:
In the presented work we study the evolution of a database schema and its impact on related issues. The work reviews important problems related to changes in the respective storage of the data and describes existing approaches to these problems. In detail, the work analyzes the impact of database schema changes on the database queries that relate to the particular schema. The approach presented in this thesis shows the ability to model database queries together with a database schema model, and describes a solution for adapting database queries to the evolved database schema. Finally, the work contains a number of experiments that verify the proposed solution.
21

Plachý, Milan. "Fuzzy databáze založená na E-R schématu." Master's thesis, 2012. http://www.nusl.cz/ntk/nusl-306028.

Abstract:
This text is especially intended for those who are interested in fuzzy logic and its application in relational databases. It is mainly focused on the concept of a fuzzyfied relational database and the implementation of such a database. The text consists of two parts: the theoretical aspects of fuzzyfication and the implementation part. The selected extension is based on a fuzzy E-R model so that the requirements of the real world can be better met. The paper also describes existing solutions at different levels of fuzzyfication. Part of the work is the design and implementation of simple software for querying a fuzzyfied relational database. This work should also serve as a guide for the design and implementation of a fuzzy database.
22

Smith, David Frank. "A comparison of seven relational database schemas." 1987. http://hdl.handle.net/2097/23671.

23

Chang, Fang-Jing, and 張芳菁. "XRoute+:A Relational Database Schema for Storing XML Documents." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/357db9.

Abstract:
In this paper, we propose a relational database schema for storing XML documents, called XRoute+. Chih-Chiang Lee previously proposed a schema, called XRoute, which cannot accurately access parts of XML data in a relational database. To access XML data accurately, we modify parts of Lee's schema in this paper. We also use the path idea of Jiang H. et al. and design two tables for storing XML data information in our relational database schema. The major purpose of our relational database schema is to achieve better performance when accessing XML data in the relational database. Further, to increase the performance of data access, we propose an efficient algorithm to translate XML queries (XQuery) into structured query language (SQL) commands. In terms of those SQL commands, we can easily access XML data in a relational database system and achieve better performance. The capabilities of the algorithm are verified in a series of experiments using our relational database schema. Most of the experimental results show better performance than Lee's.
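A minimal sketch of the general path-table idea follows; the table layout and the query are assumptions for illustration, not the actual XRoute+ schema or its XQuery-to-SQL translation algorithm:

```python
import sqlite3
import xml.etree.ElementTree as ET

def shred(xml_text, cur):
    """Store each element under its root-to-node path so path queries become plain SQL."""
    cur.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, path TEXT, text TEXT)")
    def walk(elem, path):
        cur.execute("INSERT INTO node (path, text) VALUES (?, ?)",
                    (path, (elem.text or "").strip()))
        for child in elem:
            walk(child, path + "/" + child.tag)
    root = ET.fromstring(xml_text)
    walk(root, "/" + root.tag)

cur = sqlite3.connect(":memory:").cursor()
shred("<books><book><title>XML</title></book></books>", cur)
# The XQuery path /books/book/title becomes an ordinary SQL predicate on the path column:
cur.execute("SELECT text FROM node WHERE path = ?", ("/books/book/title",))
print(cur.fetchall())                                # [('XML',)]
```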
24

Li, Shing-Han, and 李興漢. "Translating a Relational Database Schema into an Extended Entity-Relationship Database Schema: a Data Mining and Data Dictionary Approach." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/72625653472998993971.

Abstract:
The schema translation technique involves semantic reconstruction and the mapping of the original schema into the new schema. Traditionally, a database administrator determines the database schema. The database administrator may not fully understand the user's view of the real world; therefore, data semantics may be missed in the system analysis phase. During the database design phase, semantic meanings are also lost in the existing logical schema once the conceptual model has been mapped into the logical model, and they are difficult to recapture. Much research has been done to solve this problem. Unfortunately, these works cannot automatically recapture the missing semantics; they require extra information or knowledge from users to identify them. After recapturing the semantics of the original conceptual database schema during schema translation, we also need a sophisticated knowledge base that can store this knowledge. Unfortunately, much current research overlooks this step. Most of the research uses only a flat file or a simple database to store this knowledge. These structures cannot fully represent the recaptured knowledge and cause a distortion problem during database schema translation. The main focus of this thesis is to investigate a default automatic database schema translation mechanism for information systems reengineering. The translation mechanism uses a systematic approach, accompanied by data mining techniques, to recapture most of the EER conceptual model semantics through the existing relational data dictionary system (DDS) and the databases. The entire system retrieves an existing database schema through its relational DDS, translates it to an EER database schema, and stores the new schema into the EER DDS. A prototype system, DARDSTS (Default Automatic Relational Database Schema Translation System), has been implemented by the author to prove the hypothesis. The kernel of the system is an EER DDS designed by the author. Finally, a practical approach has been taken to evaluate the system performance, and future work is also explored.
25

Wu, Huei-ru, and 吳蕙如. "The Research on Portable Relational Database Based on XML Schema." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/68660243077450724888.

Abstract:
Due to its extensibility and self-definability, XML is becoming the de facto standard for data exchange on the Internet. Since it is a fairly new descriptive language for data structures, it lacks some utilities, such as indexing and querying tools, normally found in existing relational database environments. As a result, new users of this language may face difficulties in querying XML data. To lessen the burden of learning XQuery, the query language for XML, we develop a system that applies the querying techniques of relational databases to XML data. In our research, we first convert the schema of the XML data to a relational schema and feed the data into the relational database. Users then have the ability to use various relational techniques to query the data. To ensure the outcome fulfills at least the standard of second normal form, we develop a set of rules to remove the duplicity of XML data and maintain the integrity of the output data. Our experimental results are positive and we believe the system can largely reduce the effort of querying XML data.
26

Fann, Sheng-Jey, and 范勝傑. "An Access Control Scheme on Relational Databases." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/79233951496462223596.

27

Rahmati, Sanaz. "Converting relational database to XML schema and vice versa using ContextMap." Thesis, 2006. http://spectrum.library.concordia.ca/8858/1/MR14334.pdf.

Abstract:
An important issue in modern information systems and e-commerce applications is providing support for inter-operability of independent data sources. A broad variety of data is available on the web in distinct heterogeneous sources, stored under different formats: database formats (e.g., the relational model), semi-structured formats (e.g., DTD, SGML or XSD schema), scientific formats, etc. Integration of such types of data is an increasingly important problem. Nonetheless, the effort involved in such integration is, in practice, considerable: conversion of database schemas from one format to another requires writing and managing complex data transformation programs or queries. Realizing the importance of converting structured schemas to semi-structured schemas and vice versa, we have developed a method that successfully performs the conversion. Our approach benefits highly from the reverse engineering of a structured or semi-structured schema into a ContextMap representation, and leads to both identifying and understanding all components of an existing database system and the relationships among them. We have successfully developed a prototype, called CODAX, for the schema conversion process. While more sophisticated techniques are required in this context, we believe the ideas proposed in this work lend themselves to useful analysis and tracing tools for schema conversion.
28

CHEN, SHAO-CHUNG, and 陳紹中. "A study for schema transformation between relational and column-based databases." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/z52w69.

Abstract:
A relational database describes the relationships between data entities through key values, and normalization removes inconsistency and redundancy, resulting in a logical database schema (hereinafter referred to as the schema). The database is therefore designed around its schema rather than around individual records. Extending, modifying, and migrating a schema requires expensive maintenance, and because the data structure is not easy to change, data processing depends on hardware performance alone. Big data databases have many advantages over traditional relational databases. Storage costs are low: the same kind of expansion can be achieved simply by adding commodity computers or server nodes. Data processing performance is better: a distributed hardware architecture spreads data across multiple devices for processing, taking much less time than a traditional relational database. Data structure changes are easy: fields can be extended without limit and without defining data types in advance, which makes changes to the data structure easier. For these reasons, migrating traditional relational databases to column-based big data databases is a likely future trend. This study takes a system maintenance perspective and plans the conversion 'top down'. The ER model of the relational database is extended to an EER model, and the schema changes as the data model changes. The EER model is object-oriented, and the data model of a column-based database is also an object-oriented concept, so rules for converting between them are found through these similar data models. As long as these rules are used, the schema of a traditional relational database can be converted to the schema of a column-based database without expensive maintenance costs.
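As a rough illustration of the kind of rewrite involved (the thesis derives its conversion rules from an EER model; the tables and folding rule below are invented for this sketch), a parent table and its child table can be folded into one wide row keyed by the parent key, with the child rows stored under a column-family-like nested map:

```python
orders = [{"order_id": 1, "customer": "Ada"}]
order_items = [{"order_id": 1, "sku": "A7", "qty": 2},
               {"order_id": 1, "sku": "B3", "qty": 1}]

def to_column_rows(parents, children, key, qualifier):
    """Fold a 1:N relational pair into column-family-style rows keyed by `key`."""
    rows = {}
    for p in parents:
        rows[p[key]] = {"info": {k: v for k, v in p.items() if k != key},
                        "items": {}}
    for c in children:
        cell = {k: v for k, v in c.items() if k != key}
        rows[c[key]]["items"][cell.pop(qualifier)] = cell   # column qualifier = child key
    return rows

print(to_column_rows(orders, order_items, "order_id", "sku"))
# {1: {'info': {'customer': 'Ada'}, 'items': {'A7': {'qty': 2}, 'B3': {'qty': 1}}}}
```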
29

Steyn, Genevieve Lee. "The design of a database of resources for rational therapy." Diss., 1999. http://hdl.handle.net/10500/16116.

Abstract:
The purpose of this study is to design a database of resources for rational therapy. An investigation of the current health situation and the reorientation towards primary health care (PHC) in South Africa evidenced the need for a database of resources which would meet the demand for rational therapy information made on the Helderberg College Library by various user groups, as well as make a contribution to the national health information infrastructure. Rational therapy is viewed as an approach within PHC that is rational, common-sense, wholistic and credible, focusing on the prevention and maintenance of health. A model of the steps in database design was developed. A user study identified users' requirements for design and the conceptual schema was developed. The entities, attributes, relationships and policies were presented and graphically summarised in an Entity-Relationship (E-R) diagram. The conceptual schema is the blueprint for further design and implementation of the database.
30

Patil, Priti. "Holistic Source-centric Schema Mappings For XML-on-RDBMS." Thesis, 2004. http://etd.iisc.ernet.in/handle/2005/1403.

31

Patil, Priti. "Holistic Source-centric Schema Mappings For XML-on-RDBMS." Thesis, 2005. https://etd.iisc.ac.in/handle/2005/1403.

32

Chou, Shih-Ming, and 周世民. "High-Performance Query Processing Techniques and Schema Definitions for XML-Relational Databases with Inter-Referenced Supported." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/91698073547013211854.

Abstract:
The design of XML-relational databases has become an active research topic in the recent decade. Many excellent approaches have been proposed to store and manipulate XML data in relational databases. The most critical issue in designing a high-performance XML-relational database is the mapping technique which translates XML data into and from relational data. Researchers and vendors have proposed many well-designed mapping techniques; however, most previous work has a potential performance problem when processing XML data with cross-reference relationships. In this thesis, we propose a novel model-mapping-schema-based approach, called XPred+, to store and process cross-reference XML data more efficiently. The proposed approach can reduce significant join costs when processing cross-reference XML data. The basic idea is to store data and its referenced data in the same place such that the number of join operations can be reduced when processing cross-reference data. In particular, for every node in a given XML document, we store its referenced data's ID within the node itself. This eliminates the join operation for cross-reference access, so that the performance of query processing can be improved. The capability of our proposed approach was verified by a series of simulation experiments, in which we obtained encouraging results.
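A tiny sketch of the core idea, using an in-memory table in place of a real DBMS (the column names are assumptions, not the XPred+ schema): each shredded node row also carries the id of the node it references, so following a cross-reference is a single lookup rather than a join:

```python
# Each node row stores the id of the node it references (None if it references nothing).
nodes = {
    10: {"tag": "author", "value": "Ada Lovelace", "ref_id": None},
    20: {"tag": "book", "value": "Notes on the Analytical Engine", "ref_id": 10},
}

def referenced(node_id):
    """Resolve a cross-reference with one lookup instead of a join."""
    ref = nodes[node_id]["ref_id"]
    return nodes[ref] if ref is not None else None

print(referenced(20)["value"])        # 'Ada Lovelace' -- the book's referenced author
```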
33

Voigt, Hannes. "Flexibility in Data Management." Doctoral thesis, 2013. https://tud.qucosa.de/id/qucosa%3A27720.

Abstract:
With the ongoing expansion of information technology, new fields of application requiring data management emerge virtually every day. In our knowledge culture increasing amounts of data and work force organized in more creativity-oriented ways also radically change traditional fields of application and question established assumptions about data management. For instance, investigative analytics and agile software development move towards a very agile and flexible handling of data. As the primary facilitators of data management, database systems have to reflect and support these developments. However, traditional database management technology, in particular relational database systems, is built on assumptions of relatively stable application domains. The need to model all data up front in a prescriptive database schema earned relational database management systems the reputation among developers of being inflexible, dated, and cumbersome to work with. Nevertheless, relational systems still dominate the database market. They are a proven, standardized, and interoperable technology, well-known in IT departments with a work force of experienced and trained developers and administrators. This thesis aims at resolving the growing contradiction between the popularity and omnipresence of relational systems in companies and their increasingly bad reputation among developers. It adapts relational database technology towards more agility and flexibility. We envision a descriptive schema-comes-second relational database system, which is entity-oriented instead of schema-oriented; descriptive rather than prescriptive. The thesis provides four main contributions: (1) a flexible relational data model, which frees relational data management from having a prescriptive schema; (2) autonomous physical entity domains, which partition self-descriptive data according to their schema properties for better query performance; (3) a freely adjustable storage engine, which allows adapting the physical data layout used to properties of the data and of the workload; and (4) a self-managed indexing infrastructure, which autonomously collects and adapts index information under the presence of dynamic workloads and evolving schemas. The flexible relational data model is the thesis' central contribution. It describes the functional appearance of the descriptive schema-comes-second relational database system. The other three contributions improve components in the architecture of database management systems to increase the query performance and the manageability of descriptive schema-comes-second relational database systems. We are confident that these four contributions can help paving the way to a more flexible future for relational database management technology.
34

Mikuš, Tomáš. "Aktualizace XML dat." Master's thesis, 2012. http://www.nusl.cz/ntk/nusl-310953.

Abstract:
Updating XML data is a very broad area in which a number of difficult problems must be solved, from designing a language with sufficient expressive power to building an XML data repository able to apply the changes, and there are few ways to deal with them. From this perspective, this work is dedicated narrowly to the XQuery language, specifically its extension for updates, for which the candidate recommendation by the W3C was published only recently. Another specialization of this work is its focus on XML data stored in an object-relational database, where the repository enforces the validity of documents against a schema described in XML Schema. This requirement, combined with the possibility of updating the data in the repository, leads to contradictory requirements. In this thesis, a language based on XQuery is designed, the evaluation of update queries of this language over the store is designed and implemented, and the store itself, in an object-relational database, is described and implemented.