To see the other types of publications on this topic, follow the link: XML Databases.

Dissertations / Theses on the topic 'XML Databases'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'XML Databases.'

Next to every source in the list of references is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Ustunkaya, Ekin. "Fuzzy Querying In Xml Databases." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12605729/index.pdf.

Full text
Abstract:
Real-world information containing subjective opinions and judgments has created the need to represent complex and imprecise data in databases. Additionally, the challenge of transferring information between databases whose data storage methods are not compatible has been an important research topic. Extensible Markup Language (XML) has the potential to meet these challenges since it can represent complex and imprecise data. In this thesis, an XML-based fuzzy data representation and querying system is designed and implemented. The resulting system enables fuzzy querying on XML documents using XQuery, a language for querying XML documents. In the system, complex and imprecise data are represented using XML combined with the fuzzy representation. In addition to fuzzy querying, the system enables restructuring of XML Schemas by merging elements of the XML documents. Using this feature, one can generate a new XML Schema and new XML documents from the existing documents according to this new schema. XML data used in the system are retrieved from the Internet by Web Services, which can make use of XML's capabilities to transfer data, and the XML documents are stored in a native XML database management system.
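As a rough illustration of the fuzzy-querying idea described above (not the thesis's actual XQuery-based system; the element names, the sample document and the membership parameters are all invented for this sketch), a fuzzy predicate such as "price around 20000" can be evaluated over XML data with a membership function and a threshold:

```python
import xml.etree.ElementTree as ET

# Hypothetical sample document; element names are illustrative only.
DOC = """
<cars>
  <car><model>A</model><price>18000</price></car>
  <car><model>B</model><price>24500</price></car>
  <car><model>C</model><price>41000</price></car>
</cars>
"""

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_select(root, threshold=0.5):
    """Return (model, membership) pairs for cars whose price is
    'around 20000' with membership at or above the threshold."""
    results = []
    for car in root.findall("car"):
        price = float(car.findtext("price"))
        mu = triangular(price, 10000, 20000, 30000)
        if mu >= threshold:
            results.append((car.findtext("model"), round(mu, 2)))
    return results

root = ET.fromstring(DOC)
print(fuzzy_select(root))
```

In a real system of this kind, the membership functions and thresholds would come from the fuzzy representation embedded in the XML documents, and the selection would be expressed in XQuery rather than in Python.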
APA, Harvard, Vancouver, ISO, and other styles
2

Lam, Franky Shung Lai, Chemical Sciences & Engineering, Faculty of Engineering, UNSW. "Optimization techniques for XML databases." Awarded by: University of New South Wales. Chemical Sciences & Engineering, 2007. http://handle.unsw.edu.au/1959.4/40702.

Full text
Abstract:
In this thesis, we address several fundamental concerns of maintaining and querying huge ordered labelled trees. We focus on practical implementation issues of storage, updating and query optimization in an XML database management system. Specifically, we address the XML order maintenance problem, efficient evaluation of structural joins, intrinsic skew handling in joins, succinct storage of XML data, and update synchronization of mobile XML data.
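Structural joins of the kind mentioned above are commonly evaluated over region-encoded nodes with a stack-based merge. The sketch below is a generic illustration of that family of algorithms (not the thesis's specific technique), assuming both inputs carry (start, end) region labels sorted by start position:

```python
def structural_join(ancestors, descendants):
    """Stack-based structural join in the spirit of the stack-tree
    algorithms: both inputs are lists of (start, end) region labels
    sorted by start; returns every (ancestor, descendant) pair in
    which the ancestor's region encloses the descendant's."""
    stack, out = [], []
    ai = 0
    for d in descendants:
        # Push every ancestor candidate that starts before d.
        while ai < len(ancestors) and ancestors[ai][0] < d[0]:
            # Discard stacked ancestors that end before this candidate starts.
            while stack and stack[-1][1] < ancestors[ai][0]:
                stack.pop()
            stack.append(ancestors[ai])
            ai += 1
        # Discard stacked ancestors that end before d starts.
        while stack and stack[-1][1] < d[0]:
            stack.pop()
        # Every ancestor still on the stack encloses d.
        out.extend((a, d) for a in stack)
    return out
```

For example, with `ancestors = [(1, 10), (2, 5)]` and `descendants = [(3, 4), (6, 7)]` the join returns the three enclosing pairs in a single pass over both lists, without comparing every ancestor against every descendant.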
APA, Harvard, Vancouver, ISO, and other styles
3

Maatuk, Abdelsalam. "Migrating relational databases into object-based and XML databases." Thesis, Northumbria University, 2009. http://nrl.northumbria.ac.uk/3374/.

Full text
Abstract:
Rapid changes in information technology, the emergence of object-based and WWW applications, and the interest of organisations in securing benefits from new technologies have made information systems re-engineering in general, and database migration in particular, an active research area. In order to improve the functionality and performance of existing systems, the re-engineering process requires identifying and understanding all of the components of such systems. An underlying database is one of the most important components of an information system. A considerable body of data is stored in relational databases (RDBs), yet they have limited support for the complex structures and user-defined data types provided by relatively recent databases such as object-based and XML databases. Instead of throwing away the large amount of data stored in RDBs, it is more appropriate to enrich and convert such data for use by new systems. Most research into the migration of RDBs into object-based/XML databases has concentrated on schema translation and on accessing and publishing RDB data using newer technology, while little attention has been paid to the conversion of data and the preservation of data semantics, e.g., inheritance and integrity constraints. In addition, existing work does not appear to provide a solution for more than one target database. Thus, research on the migration of RDBs is not fully developed. We propose a solution that offers automatic migration of an RDB as a source into recent database technologies as targets, based on available standards such as ODMG 3.0, SQL4 and XML Schema. A canonical data model (CDM) is proposed to bridge the semantic gap between an RDB and the target databases. The CDM preserves and enhances the metadata of existing RDBs to fit the essential characteristics of the target databases. The adoption of standards is essential for increased portability, flexibility and constraint preservation.
This thesis contributes a solution for migrating RDBs into object-based and XML databases. The solution takes an existing RDB as input, enriches its metadata representation with the required explicit semantics, and constructs an enhanced relational schema representation (RSR). Based on the RSR, a CDM is generated which is enriched with the RDB's constraints and data semantics that may not have been explicitly expressed in the RDB metadata. The CDM so obtained facilitates both schema translation and data conversion. We design sets of rules for translating the CDM into each of the three target schemas, and provide algorithms for converting RDB data into the target formats based on the CDM. A prototype of the solution has been implemented, which generates the three target databases. Experimental study has been conducted to evaluate the prototype. The experimental results show that the target schemas resulting from the prototype and those generated by existing manual mapping techniques were comparable. We have also shown that the source and target databases were equivalent, and demonstrated that the solution, conceptually and practically, is feasible, efficient and correct.
APA, Harvard, Vancouver, ISO, and other styles
4

Arion, Andrei. "XML access modules : towards physical data independence in XML databases." Paris 11, 2007. http://www.theses.fr/2007PA112288.

Full text
Abstract:
The purpose of this thesis is to design a framework for achieving physical data independence in XML databases. We first propose the XML Access Modules, a rich tree pattern language featuring multiple returned nodes, nesting, structural identifiers and optional nodes, and we show how it can be used to uniformly describe a large set of XML storage schemes, indices and materialized views. The second part of this thesis focuses on the problem of XQuery rewriting using XML Access Modules. As a first step of our rewriting approach, we present an algorithm to extract XML Access Module patterns from XQuery, and we show that the patterns we identify are strictly larger than in previous works; in particular, they may span over nested XQuery blocks. We characterize the complexity of tree pattern containment (a key subproblem of rewriting) and of rewriting itself, under the constraints expressed by a structural summary, whose enhanced form also entails integrity constraints. We also show how to exploit the structural identifiers from the view definitions in order to enhance the rewriting opportunities.
APA, Harvard, Vancouver, ISO, and other styles
5

Skoglund, Robin. "Full-Text Search in XML Databases." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9837.

Full text
Abstract:

The Extensible Markup Language (XML) has become an increasingly popular format for representing and exchanging data. Its flexible and extensible syntax makes it suitable for representing structured data, textual information, or a mixture of both. The popularization of XML has led to the development of a new database type: XML databases serve as repositories for large collections of XML documents, and seek to provide the same benefits for XML data as relational databases provide for relational data: indexing, transactional processing, failsafe physical storage, querying of collections, etc. There are two standardized query languages for XML, XQuery and XPath, both powerful for querying and navigating the structure of XML. However, they offer limited support for full-text search and cannot be used alone for typical Information Retrieval (IR) applications. To address IR-related issues in XML, a new standard is emerging as an extension to XPath and XQuery: XQuery and XPath Full Text 1.0 (XQFT). XQFT is carefully investigated to determine how well-known IR techniques apply to XML, and the characteristics of full-text search and indexing in existing XML databases are described in a state-of-the-art study. Based on findings from the literature and a source code review, the design and implementation of XQFT is discussed; first in general terms, then in the context of Oracle Berkeley DB XML (BDB XML). Experimental support for XQFT is enabled in BDB XML, and a few experiments are conducted to evaluate functional aspects of the XQFT implementation. A scheme for full-text indexing in BDB XML is proposed. The full-text index acts as an augmented version of an inverted list, and is implemented on top of an Oracle Berkeley DB database. Tokens are used as keys, with data tuples for each distinct (document, path) combination the token occurs in. Lookups in the index are based on keywords, and should allow answering various queries without materializing data.
Investigation shows that XML-based IR with XQFT is not fundamentally different from traditional text-based IR. Full-text queries rely on linguistic tokens, which, in XQFT, are derived from nodes without considering the XML structure. Further, it is found that full-text indexing is crucial for query efficiency in large document collections. In summary, common issues with full-text search are present in XML-based IR, and are addressed in the same manner as in text-based IR.
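The index scheme described in the abstract (tokens as keys, with a data tuple per distinct (document, path) combination) can be sketched as a plain inverted list. This toy version uses an in-memory dict where the real implementation sits on top of an Oracle Berkeley DB database, and the tokenizer is a deliberate oversimplification:

```python
import re
from collections import defaultdict

def tokenize(text):
    """Deliberately crude tokenizer: lowercase, split on non-alphanumerics."""
    return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

class XmlFullTextIndex:
    """Toy inverted list keyed by token, storing the distinct
    (document, path) combinations each token occurs in."""
    def __init__(self):
        self.index = defaultdict(set)

    def add(self, doc, path, text):
        """Index the text content found at `path` inside `doc`."""
        for token in tokenize(text):
            self.index[token].add((doc, path))

    def lookup(self, keyword):
        """Keyword lookup without touching the documents themselves."""
        return sorted(self.index.get(keyword.lower(), set()))

idx = XmlFullTextIndex()
idx.add("a.xml", "/book/title", "Native XML Databases")
idx.add("b.xml", "/article/abstract", "Full-text search in XML")
print(idx.lookup("XML"))
```

Because a lookup returns (document, path) pairs directly from the index, many queries can be answered without materializing the underlying XML data, which is the point the abstract makes.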

APA, Harvard, Vancouver, ISO, and other styles
6

Chen, Bin. "Integration of multiple databases using XML." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0017/MQ48260.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jiang, Haifeng. "Efficient structural query processing in XML databases /." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?COMP%202004%20JIANG.

Full text
Abstract:
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 115-125). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
8

Lau, Ho Lam. "The development of the nested relational sequence model to support XML databases /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20LAU.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 87-96). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
9

Kunovský, Tomáš. "Temporální XML databáze." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255389.

Full text
Abstract:
The primary goal of this work is the implementation of a temporal XML database in Java. Databases for XML documents and temporal databases are described, with emphasis on their query languages, and the problem of data storage in temporal databases is also analysed. The source code of the resulting application is publicly available as open source.
APA, Harvard, Vancouver, ISO, and other styles
10

Hammerschmidt, Beda Christoph. "KeyX selective key-oriented indexing in native XML-databases /." [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=97915989X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Boháč, Martin. "Perzistence XML v relační databázi." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237200.

Full text
Abstract:
The aim of this thesis is to create a client for the xDB database that supports visualization and management of XML documents and schemas. The first part introduces XML, XML schema languages (DTD, XML Schema, RELAX NG, etc.) and related technologies. The thesis then deals with the problem of XML persistence and focuses on the mapping techniques necessary for efficient storage in a relational database. The main part is devoted to the design and implementation of the client application XML Admin, which is programmed in Java. The application uses the XML:DB interface to communicate with the xDB database. It supports storing XML documents in a collection and the XPath language for querying them. The final section is devoted to performance testing of the application and a comparison with the existing native XML database eXist.
APA, Harvard, Vancouver, ISO, and other styles
12

Mulchandani, Mukesh K. "Updating XML views of relational data." Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0429103-200545.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Isikman, Omer Ozgun. "An Automated Conversion Of Temporal Databases Into Xml With Fuzziness Option." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612398/index.pdf.

Full text
Abstract:
The importance of incorporating time in databases has been well realized by the community, and time-varying databases have been extensively studied by researchers. The main idea is to model up-to-date changes to data since it became available. Time information is mostly overlaid on traditional databases, and an extensional time dimension helps in inquiring on past data; this all becomes possible only once the idea is realized and favored by commercial database management systems. Unfortunately, one disadvantage of temporal database management systems is that they have not been commercialized. Firstly, XML (eXtensible Markup Language) is a de facto standard for data interchange, and hence integrating XML as the data model was decided. The motivation for the work described in this thesis is two-fold: transferring databases into XML while changing crisp values into fuzzy variables describing fuzzy sets; and, second, bitemporal databases, which form one interesting type of temporal database. Thus, the purpose is to suggest a completely automated system that converts any bitemporal database to its fuzzy XML schema definition. The implemented temporal database operators are independent of the database content. Fuzzy elements are capable of having different membership functions and varying numbers of linguistic variables. A scheme for determining membership function parameters is proposed. Finally, fuzzy queries have also been implemented as part of the system.
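The fuzzification step the abstract describes, turning crisp values into linguistic variables backed by membership functions, can be sketched as follows. The attribute, the term names and the membership parameters are invented for illustration; the thesis proposes a scheme for determining such parameters automatically:

```python
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms for a salary attribute (parameters invented).
TERMS = {
    "low":    (0, 15000, 30000),
    "medium": (20000, 40000, 60000),
    "high":   (50000, 70000, 90000),
}

def fuzzify(value, terms=TERMS):
    """Map a crisp value to its linguistic terms with membership > 0."""
    return {t: round(triangular(value, *p), 2)
            for t, p in terms.items() if triangular(value, *p) > 0}

def to_fuzzy_xml(name, value):
    """Emit a fuzzy XML element carrying the crisp value and its terms."""
    items = "".join(f'<term name="{t}" mu="{m}"/>'
                    for t, m in fuzzify(value).items())
    return f'<{name} crisp="{value}">{items}</{name}>'

print(to_fuzzy_xml("salary", 55000))
```

A value near a term's peak maps to a single linguistic variable, while a value between peaks (such as 55000 here) belongs partially to two terms, which is exactly what the fuzzy XML representation has to capture.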
APA, Harvard, Vancouver, ISO, and other styles
14

Pradeep, Kris. "XML as a data exchange medium for DoD legacy databases." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02Jun%5FPradeep.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Hall, David. "An XML-based Database of Molecular Pathways." Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-3717.

Full text
Abstract:

Research on protein-protein interactions produces vast quantities of data, and there exist a large number of databases with data from this research. Many of these databases offer the data for download on the web in a number of different formats, many of them XML-based.

With the arrival of these XML-based formats, and especially standardized formats such as PSI-MI, SBML and BioPAX, there is a need for searching in data represented in XML. We wanted to investigate the capabilities of XML query tools when it comes to searching in this data. Due to the large datasets, we concentrated on native XML database systems, which in addition to search in XML data also offer storage and indexing specially suited for XML documents.

A number of queries were tested on data exported from the databases IntAct and Reactome using the XQuery language. Both simple and advanced queries were performed. The simpler queries consisted of tasks such as listing information on a specified protein or counting the number of reactions.

One central issue with protein-protein interactions is to find pathways, i.e. series of interconnected chemical reactions between proteins. This problem involves graph searches, and since we suspected that the complex queries it required would be slow, we also developed a C++ program using a graph toolkit.

The simpler queries were performed relatively fast. Pathway searches in the native XML databases took a long time even for short searches, while the C++ program achieved much faster pathway searches.
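A pathway search of the kind described above is essentially a shortest-path graph search. A minimal breadth-first sketch, with a made-up reaction graph and Python standing in for the thesis's C++ program, looks like this:

```python
from collections import deque

def find_pathway(reactions, start, goal):
    """Breadth-first search for a pathway: `reactions` maps each
    protein/compound to the products reachable in one reaction step.
    Returns the shortest chain of nodes from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in reactions.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical reaction graph (names invented for illustration).
net = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(find_pathway(net, "A", "E"))
```

Expressing the same traversal as a recursive XQuery over the XML documents is possible, but, as the abstract reports, a dedicated graph program is much faster for this workload.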

APA, Harvard, Vancouver, ISO, and other styles
16

Madiraju, Praveen. "Global Semantic Integrity Constraint Checking for a System of Databases." Digital Archive @ GSU, 2005. http://digitalarchive.gsu.edu/cs_diss/1.

Full text
Abstract:
In today’s emerging information systems, it is natural to have data distributed across multiple sites. We define a System of Databases (SyDb) as a collection of autonomous and heterogeneous databases. R-SyDb (System of Relational Databases) is a restricted form of SyDb, referring to a collection of relational databases, which are independent. Similarly, X-SyDb (System of XML Databases) refers to a collection of XML databases. Global integrity constraints ensure the integrity and consistency of data spanning multiple databases. In this dissertation, we present (i) Constraint Checker, a general framework of a mobile agent based approach for checking global constraints on R-SyDb, and (ii) XConstraint Checker, a general framework for checking global XML constraints on X-SyDb. Furthermore, we formalize multiple efficient algorithms for varying semantic integrity constraints involving both arithmetic and aggregate predicates. The algorithms take as input an update statement and the list of all global semantic integrity constraints with arithmetic or aggregate predicates, and output sub-constraints to be executed on remote sites. The algorithms are efficient since (i) the constraint check is carried out at compile time, i.e. before executing the update statement, saving time and resources by avoiding rollbacks, and (ii) the implementation exploits parallelism. We have also implemented prototypes of the systems and algorithms for both R-SyDb and X-SyDb, and we present performance evaluations of the system.
APA, Harvard, Vancouver, ISO, and other styles
17

Ulliana, Federico. "Types for Detecting XML Query-Update Independence." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00757597.

Full text
Abstract:
Over the last decade, XML has become one of the main formats for representing and exchanging data on the Web. Detecting independence between a query and an update, which holds when the update has no impact on the query, is a crucial problem for the efficient management of tasks such as view maintenance, concurrency control and security. This thesis presents a new static analysis technique for detecting independence between XML queries and updates in the case where the data are typed by a schema. The contribution of the thesis rests on a richer notion of type than employed so far in the literature. Instead of characterizing the elements of an XML document used or affected by a query or update with a set of labels, they are characterized by a set of label chains, corresponding to the paths traversed during the evaluation of the expression on a document valid for the schema. The independence analysis results from the development of a type inference system for chains. This precise analysis raises an important and difficult question related to recursive schemas: since an infinite set of chains can be inferred in that case, is it possible, and how, to reduce this to an effective, finite analysis. The thesis therefore presents a sound and complete approximation technique ensuring a finite analysis. Studying this technique led to the development of algorithms for an efficient implementation of the analysis, and to a large series of tests validating both the quality of the approach and its efficiency.
APA, Harvard, Vancouver, ISO, and other styles
18

Čižinská, Martina. "Srovnání nativních XML databází z hlediska správy XML dat." Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-199568.

Full text
Abstract:
This work is based on research into the problems of native XML databases. The main goal is to compare selected database systems in the area of user control and management of XML data. The database products are tested with the XMark benchmark, using its XQuery queries and test XML data. Final comparisons and recommendations for use are based on the concluding evaluation and findings. The results of the research may improve orientation in the problem area, and could help with the selection of a suitable database product for storing XML data.
APA, Harvard, Vancouver, ISO, and other styles
19

Halle, Robert F. "Extensible Markup Language (XML) based analysis and comparison of heterogeneous databases." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA393736.

Full text
Abstract:
Thesis (M.S. in Software Engineering) Naval Postgraduate School, June 2001.
Thesis advisor(s): Berzins, Valdis. "June 2001." Includes bibliographical references (p. 137-138). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
20

Ramesh, Kartic. "An XML based scalable implementation of Temporal Databases using Parametric Model." [Ames, Iowa : Iowa State University], 2010. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1476339.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Farooqi, Norah. "Applying dynamic trust based access control to improve XML databases' security." Thesis, University of Sheffield, 2013. http://etheses.whiterose.ac.uk/4468/.

Full text
Abstract:
XML (Extensible Mark-up Language) databases are an active research area. The topic of security in XML databases is important as it includes protecting sensitive data and providing a secure environment to users. Trust based access is an established technique in many fields, such as networks and distributed systems, but it has not previously been used for XML databases. In Trust Based Access Control, user privileges are calculated dynamically depending on the user’s behaviour. In this thesis, the novel idea of applying Trust Based Access Control (TBAC) for XML databases has been developed. This approach improves security and provides dynamic access control for XML databases. It manages access policy depending on users’ trustworthiness and prevents unauthorised processes, malicious transactions, and misuse from both outsiders and insiders. A practical Trust Based Access Control system for XML databases was evaluated. The dynamic access control has been tested from security, scalability, functionality, performance, and storage perspectives. The experimental results illustrate the flexibility of Trust Values and the scalability of the system with small to large XML databases and with various numbers of users. The results show that the main research idea of this study is worth pursuing and the system could be developed further.
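The core mechanism of trust-based access control, privileges computed dynamically from observed behaviour, can be sketched generically as follows. The threshold, reward and penalty weights here are invented for illustration and are not the thesis's actual trust model:

```python
class TrustBasedAccess:
    """Minimal sketch of dynamic trust-based access control: a user's
    trust value rises slowly with well-formed actions and drops sharply
    on violations; operations are allowed only while the trust value
    stays at or above the policy threshold. All weights are illustrative."""
    def __init__(self, threshold=0.5, reward=0.05, penalty=0.3):
        self.threshold, self.reward, self.penalty = threshold, reward, penalty
        self.trust = {}

    def register(self, user, initial=0.6):
        self.trust[user] = initial

    def record(self, user, ok):
        """Update the user's trust value after an observed action."""
        t = self.trust[user]
        t = min(1.0, t + self.reward) if ok else max(0.0, t - self.penalty)
        self.trust[user] = round(t, 2)

    def allowed(self, user):
        return self.trust.get(user, 0.0) >= self.threshold

tbac = TrustBasedAccess()
tbac.register("alice")
tbac.record("alice", ok=False)   # e.g. a malicious transaction
print(tbac.allowed("alice"))
```

The asymmetry between reward and penalty is the usual design choice in trust models: trust is slow to earn and quick to lose, which is what lets such a policy react to misuse by both insiders and outsiders.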
APA, Harvard, Vancouver, ISO, and other styles
22

Leonard, Jonathan Lee. "Strategies for Encoding XML Documents in Relational Databases: Comparisons and Contrasts." Digital Commons @ East Tennessee State University, 2006. https://dc.etsu.edu/etd/2213.

Full text
Abstract:
The rise of XML as a de facto standard for document and data exchange has created a need to store and query XML documents in relational databases, today's de facto standard for data storage. Two common strategies for storing XML documents in relational databases, a process known as document shredding, are Interval encoding and ORDPATH Encoding. Interval encoding, which uses a fixed mapping for shredding XML documents, tends to favor selection queries, at a potential cost of O(N) for supporting insertion queries. ORDPATH Encoding, which uses a looser mapping for shredding XML, supports fixed-cost insertions, at a potential cost of longer-running selection queries. Experiments conducted for this research suggest that the breakeven point between the two algorithms occurs when users offer an average 1 insertion to every 5.6 queries, relative to documents of between 1.5 MB and 4 MB in size. However, heterogeneous tests of varying mixes of selects and inserts indicate that Interval always outperforms ORDPATH for mixes ranging from 76% selects to 88% selects. Queries for this experiment and sample documents were drawn from the XMark benchmark suite.
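Interval encoding, one of the two shredding strategies compared above, assigns each element a (start, end) region number so that ancestor tests become simple interval containment. A minimal sketch (the tuple layout is illustrative, not the exact schema used in the experiments):

```python
def interval_encode(node, rows=None, counter=None, parent=None):
    """Shred a nested (tag, children) tree into relational tuples
    (tag, start, end, parent_start) using interval (region) numbering:
    an ancestor's interval strictly contains those of its descendants."""
    if rows is None:
        rows, counter = [], [0]
    tag, children = node
    counter[0] += 1
    start = counter[0]
    row = [tag, start, None, parent]   # end is filled in after the subtree
    rows.append(row)
    for child in children:
        interval_encode(child, rows, counter, start)
    counter[0] += 1
    row[2] = counter[0]
    return [tuple(r) for r in rows]

def is_ancestor(a, d):
    """Constant-time ancestor test on (tag, start, end, parent) rows."""
    return a[1] < d[1] and d[2] < a[2]

doc = ("book", [("title", []), ("chapter", [("section", [])])])
rows = interval_encode(doc)
print(rows)
```

The O(N) insertion cost mentioned above follows directly from this layout: inserting a node forces the start/end numbers of all later nodes to be shifted, whereas ORDPATH labels are designed to absorb insertions without relabelling existing nodes.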
APA, Harvard, Vancouver, ISO, and other styles
23

Shamsedin Tekieh, Razieh Sadat, Information Systems, Technology & Management, Australian School of Business, UNSW. "An XML-based framework for electronic business document integration with relational databases." Publisher: University of New South Wales. Information Systems, Technology & Management, 2009. http://handle.unsw.edu.au/1959.4/43695.

Full text
Abstract:
Small and medium enterprises (SMEs) are becoming increasingly engaged in B2B interactions. The ubiquity of the Internet and the quasi-reliance on electronic document exchanges with larger trading partners have fostered this move. The main technical challenge that this brings to SMEs is that of business document integration: they need to exchange business documents in heterogeneous document formats and also integrate these documents with internal information systems. Often they cannot afford expensive, customized and proprietary solutions for document exchange and storage; rather, they need cost-effective approaches based on open standards and backed by easy-to-use information systems. In this dissertation, we investigate the problem of business document integration for SMEs following a design science methodology. We propose a framework and conceptual architecture for a business document integration system (BDIS). By studying existing business document formats, we recommend using the GS1 XML standard format as the intermediate format for business documents in BDIS. The GS1 standards are widely used in supply chains and logistics globally. We present an architecture for BDIS consisting of two layers: one for the design of an internal information system based on relational databases, capable of storing XML business documents, and the other enabling the exchange of heterogeneous business documents at runtime. For the design layer, we leverage existing XML schema conversion approaches, and extend them, to propose a customized and novel approach for converting GS1 XML document schemas into relational schemas. For the runtime layer, we propose wrappers as architectural components for the conversion of various electronic document formats into the GS1 XML format. We demonstrate our approach through a case study involving a GS1 XML business document. We have implemented a prototype BDIS.
We have evaluated and compared it with existing research and commercial tools for XML to relational schema conversion. The results show that it generates operational and simpler relational schemas for GS1 XML documents. In conclusion, the proposed framework enables SMEs to engage effectively in electronic business.
APA, Harvard, Vancouver, ISO, and other styles
24

Goodfellow, Martin Hugh. "Algebraic methods for incremental maintenance and updates of views within XML databases." Thesis, University of Strathclyde, 2014. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=24341.

Full text
Abstract:
Within XML data management, the performance of queries has been improved by using materialised views. However, modifications to the XML documents must be reflected in these views; this is known as the view maintenance problem. Conversely, updates to a view must be reflected in the XML source documents; this is the view update problem. Fully recalculating these views or documents to reflect the changes is inefficient. To address this, a number of distinct methods reported in the literature address either incremental view maintenance or incremental view update. This thesis develops a consistent incremental algebraic approach to view maintenance and view update using generic operators. This approach further differs from related work in that it supports views with multiple returned nodes. Generally, the data sets to be incrementally maintained are smaller in the view update case. Therefore, it was necessary to investigate the circumstances in which converting view maintenance into view update gave better performance. Finally, dynamic reasoning on updates was considered, to determine whether it improved the performance of the proposed view maintenance and view update methods. The system was implemented using features of XML stores and XML query evaluation engines, including structural identifiers for XML and structural join algorithms. Methods for incrementally handling the view maintenance and view update problems are presented, and the benefits of these methods over existing algorithms are established by means of experiments. These experiments also demonstrate the benefit of translating view maintenance updates into view updates, where applicable, and the benefits of dynamic reasoning. The main contribution of this thesis is the development of similar incremental algebraic methods which provide a consistent solution to the view maintenance and view update problems.
The originality of these methods is their ability to handle statement-level updates using generic operators and views returning data from multiple nodes.
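The incremental idea at the heart of this thesis can be caricatured in a few lines. The following is only a toy sketch (the function names and the list-based "view" are ours for illustration, not the thesis's algebraic operators): a materialized view absorbs a source update by examining only the delta, instead of re-running the full query.

```python
# Hypothetical sketch: maintain a materialized "view" (nodes matching a
# predicate) incrementally, instead of re-running the query after each update.

def make_view(source, predicate):
    """Full evaluation: scan every node once."""
    return [node for node in source if predicate(node)]

def apply_insert(view, node, predicate):
    """Incremental maintenance: only the inserted node is examined."""
    if predicate(node):
        view.append(node)

def apply_delete(view, node):
    """Incremental maintenance: only the deleted node is examined."""
    if node in view:
        view.remove(node)

# Source document nodes, abstracted here as (tag, text) pairs.
source = [("author", "Smith"), ("title", "XML Views"), ("author", "Jones")]
is_author = lambda n: n[0] == "author"

view = make_view(source, is_author)               # full evaluation, once
apply_insert(view, ("author", "Lee"), is_author)  # touches one node
apply_delete(view, ("author", "Smith"))           # touches one node
```

The cost of each maintenance step depends on the delta, not on the document size, which is the benefit the thesis quantifies for its algebraic operators.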
APA, Harvard, Vancouver, ISO, and other styles
25

Ramani, Ramasubramanian. "A toolkit for managing XML data with a relational database management system." [Gainesville, Fla.] : University of Florida, 2001. http://etd.fcla.edu/etd/uf/2001/anp1308/Thesis.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Florida, 2001.
Title from first page of PDF file. Document formatted into pages; contains x, 54 p.; also contains graphics. Vita. Includes bibliographical references (p. 50-53).
APA, Harvard, Vancouver, ISO, and other styles
26

Schnädelbach, Astrid. "RelAndXML a system to manage XML-based course material with object-relational databases /." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=971568545.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Teixeira, Marcus Vinícius Carneiro. "Gerenciamento de anotações de biosseqüências utilizando associações entre ontologias e esquemas XML." Universidade Federal de São Carlos, 2008. https://repositorio.ufscar.br/handle/ufscar/384.

Full text
Abstract:
Universidade Federal de Sao Carlos
Bioinformatics aims at providing computational tools for the development of genome research. Among those tools are annotation systems and Database Management Systems (DBMSs) that, associated with ontologies, allow the formalization of both domain concepts and data schemas. The data yielded by genome research are often textual, lack a regular structure, and require schema evolution. Due to these aspects, semi-structured DBMSs offer great potential for manipulating such data. Thus, this work presents an architecture for biosequence annotation based on XML databases. Within this architecture, special attention was given to database design and to the manual annotation task performed by researchers. Hence, the architecture provides an interface that uses an ontology-driven model for modeling and generating XML schemas, as well as a manual annotation interface prototype that uses molecular biology domain ontologies, such as the Gene Ontology and the Sequence Ontology. These interfaces were tested by users experienced in Bioinformatics and Databases, who evaluated them through questionnaires. The answers gave good assessments on issues such as the utility of the tools and how much they speed up database design. The proposed architecture extends and improves Bio-TIM, an annotation system developed by the Database Group of the Computer Science Department of the Federal University of São Carlos (UFSCar).
A Bioinformática é uma área da ciência que visa suprir pesquisas de genomas com ferramentas computacionais que permitam o seu desenvolvimento tecnológico. Dentre essas ferramentas estão os ambientes de anotação e os Sistemas Gerenciadores de Bancos de Dados (SGBDs) que, associados a ontologias, permitem a formalização de conceitos do domínio e também dos esquemas de dados. Os dados produzidos em projetos genoma são geralmente textuais e sem uma estrutura de tipo regular, além de requerer evolução de esquemas. Por suas características, SGBDs semi-estruturados oferecem enorme potencial para tratar tais dados. Assim, este trabalho propõe uma arquitetura para um ambiente de anotação de biosseqüências baseada na persistência dos dados anotados em bancos de dados XML. Neste trabalho, priorizou-se o projeto de bancos de dados e também o apoio à anotação manual realizada por pesquisadores. Assim, foi desenvolvida uma interface que utiliza ontologias para guiar a modelagem de dados e a geração de esquemas XML. Adicionalmente, um protótipo de interface de anotação manual foi desenvolvido, o qual faz uso de ontologias do domínio de biologia molecular, como a Gene Ontology e a Sequence Ontology. Essas interfaces foram testadas por usuários com experiências nas áreas de Bioinformática e Banco de Dados, os quais responderam a questionários para avaliá-las. O resultado apresentou qualificações muito boas em diversos quesitos avaliados, como exemplo agilidade e utilidade das ferramentas. A arquitetura proposta visa estender e aperfeiçoar o ambiente de anotação Bio-TIM, desenvolvido pelo grupo de Banco de Dados do Departamento de Computação da Universidade Federal de São Carlos (UFSCar).
APA, Harvard, Vancouver, ISO, and other styles
28

Murphy, Brian R. "Order-sensitive XML query processing over relational sources." Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0505103-123753.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: computation pushdown; XML; order-based Xquery processing; relational database; ordered SQL queries; data model mapping; XQuery; XML data mapping; SQL; XML algebra rewrite rules; XML document order. Includes bibliographical references (p. 64-67).
APA, Harvard, Vancouver, ISO, and other styles
29

Mergen, Sérgio Luis Sardi. "Casamento de esquemas XML e esquemas relacionais." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2005. http://hdl.handle.net/10183/10421.

Full text
Abstract:
O casamento entre esquemas XML e esquemas relacionais é necessário em diversas aplicações, tais como integração de informação e intercâmbio de dados. Tipicamente o casamento de esquemas é um processo manual, talvez suportado por uma interface gráfica. No entanto, o casamento manual de esquemas muito grandes é um processo dispendioso e sujeito a erros. Disto surge a necessidade de técnicas (semi)-automáticas de casamento de esquemas que auxiliem o usuário fornecendo sugestões de casamento, dessa forma reduzindo o esforço manual aplicado nesta tarefa. Apesar deste tema já ter sido estudado na literatura, o casamento entre esquemas XML e esquemas relacionais é ainda um tema em aberto. Isto porque os trabalhos existentes ou se aplicam para esquemas definidos no mesmo modelo, ou são genéricos demais para o problema em questão. O objetivo desta dissertação é o desenvolvimento de técnicas específicas para o casamento de esquemas XML e esquemas relacionais. Tais técnicas exploram as particularidades existentes entre estes esquemas para inferir valores de similaridade entre eles. As técnicas propostas são avaliadas através de experimentos com esquemas do mundo real.
The matching between XML schemas and relational schemas has many applications, such as information integration and data exchange. Typically, schema matching is done manually by domain experts, sometimes using a graphical tool. However, the matching of large schemas is a time-consuming and error-prone task. The use of (semi-)automatic schema matching techniques can help the user in finding the correct matches, thereby reducing his labor. The schema matching problem has already been addressed in the literature. Nevertheless, the matching of XML schemas and relational schemas is still an open issue. This comes from the fact that the existing work is either specific to schemas designed in the same model, or too generic for the problem in discussion. The main goal of this dissertation is to develop specific techniques for the matching of XML schemas and relational schemas. Such techniques exploit the particularities found when analyzing the two schemas together, and use these cues to leverage the matching process. The techniques are evaluated by running experiments with real-world schemas.
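For illustration only (this is not the dissertation's matcher, and every name below is hypothetical), a crude matcher between XML element names and relational column names can be built on a normalized string similarity; real matchers layer structural and type cues on top of this.

```python
# Illustrative sketch: score candidate matches between XML element names and
# relational column names with a normalized string similarity, after stripping
# common naming conventions (camelCase vs snake_case, hyphens).
from difflib import SequenceMatcher

def normalize(name):
    return name.lower().replace("_", "").replace("-", "")

def similarity(xml_name, column_name):
    return SequenceMatcher(None, normalize(xml_name), normalize(column_name)).ratio()

def best_matches(xml_names, column_names, threshold=0.6):
    """For each XML name, keep the best-scoring column above the threshold."""
    matches = {}
    for x in xml_names:
        score, best = max((similarity(x, c), c) for c in column_names)
        if score >= threshold:
            matches[x] = best
    return matches

xml_elements = ["firstName", "zip-code", "phone"]
columns = ["first_name", "zipcode", "fax"]
result = best_matches(xml_elements, columns)
# "phone" finds no column above the threshold and is left unmatched
```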
APA, Harvard, Vancouver, ISO, and other styles
30

Sanz, Blasco Ismael. "Flexible techniques for heterogeneous XML data retrieval." Doctoral thesis, Universitat Jaume I, 2007. http://hdl.handle.net/10803/10373.

Full text
Abstract:
The progressive adoption of XML by new communities of users has motivated the appearance of applications that require the management of large and complex collections, which present a large amount of heterogeneity. Some relevant examples are present in the fields of bioinformatics, cultural heritage, ontology management and geographic information systems, where heterogeneity is not only reflected in the textual content of documents, but also in the presence of rich structures which cannot be properly accounted for using fixed schema definitions. Current approaches for dealing with heterogeneous XML data are, however, mainly focused at the content level, whereas at the structural level only a limited amount of heterogeneity is tolerated; for instance, weakening the parent-child relationship between nodes into the ancestor-descendant relationship.
The main objective of this thesis is devising new approaches for querying heterogeneous XML collections. This general objective has several implications: First, a collection can present different levels of heterogeneity in different granularity levels; this fact has a significant impact in the selection of specific approaches for handling, indexing and querying the collection. Therefore, several metrics are proposed for evaluating the level of heterogeneity at different levels, based on information-theoretical considerations. These metrics can be employed for characterizing collections, and clustering together those collections which present similar characteristics.
Second, the high structural variability implies that query techniques based on exact tree matching, such as the standard XPath and XQuery languages, are not suitable for heterogeneous XML collections. As a consequence, approximate querying techniques based on similarity measures must be adopted. Within the thesis, we present a formal framework for the creation of similarity measures which is based on a study of the literature that shows that most approaches for approximate XML retrieval (i) are highly tailored to very specific problems and (ii) use similarity measures for ranking that can be expressed as ad-hoc combinations of a set of 'basic' measures. Some examples of these widely used measures are tf-idf for textual information and several variations of edit distances. Our approach wraps these basic measures into generic, parametrizable components that can be combined into complex measures by exploiting the composite pattern, commonly used in Software Engineering. This approach also allows us to integrate seamlessly highly specific measures, such as protein-oriented matching functions.
Finally, these measures are employed for the approximate retrieval of data in a context of high structural heterogeneity, using a new approach based on the concepts of pattern and fragment. In our context, a pattern is a concise representation of the information needs of a user, and a fragment is a match of a pattern found in the database. A pattern consists of a set of tree-structured elements: basically an XML subtree that is intended to be found in the database, but with a flexible semantics that is strongly dependent on a particular similarity measure. For example, depending on the measure, the particular hierarchy of elements, or the ordering of siblings, may or may not be deemed relevant when searching for occurrences in the database.
Fragment matching, as a query primitive, can deal with a much higher degree of flexibility than existing approaches. In this thesis we provide exhaustive and top-k query algorithms. In the latter case, we adopt an approach that does not require the similarity measure to be monotonic, as all previous XML top-k algorithms (usually based on Fagin's algorithm) do. We also present two extensions which are important in practical settings: a specification for the integration of the aforementioned techniques into XQuery, and a clustering algorithm that is useful for managing complex result sets.
All of the algorithms have been implemented as part of ArHeX, a toolkit for the development of multi-similarity XML applications, which supports fragment-based queries through an extension of the XQuery language, and includes graphical tools for designing similarity measures and querying collections. We have used ArHeX to demonstrate the effectiveness of our approach using both synthetic and real data sets, in the context of a biomedical research project.
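The composite-pattern idea described above can be sketched in a few lines (the names here are ours, not ArHeX's actual API): basic measures are plain callables on nodes, and a composite measure combines them while exposing the same callable interface, so composites can themselves be nested inside larger composites.

```python
# Sketch of similarity measures combined via the composite pattern.
# All function and field names are illustrative assumptions, not ArHeX's API.

def tag_similarity(a, b):
    """Basic measure: do two nodes carry the same tag?"""
    return 1.0 if a["tag"] == b["tag"] else 0.0

def text_overlap(a, b):
    """Basic measure: Jaccard word overlap between the nodes' text content."""
    wa, wb = set(a["text"].split()), set(b["text"].split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def weighted(components):
    """Composite measure: a weighted average of other measures.
    Because the result is again a (node, node) -> float callable,
    composites can be nested arbitrarily."""
    total = sum(w for w, _ in components)
    def measure(a, b):
        return sum(w * m(a, b) for w, m in components) / total
    return measure

sim = weighted([(2, tag_similarity), (1, text_overlap)])
x = {"tag": "protein", "text": "kinase domain"}
y = {"tag": "protein", "text": "kinase binding site"}
score = sim(x, y)   # (2 * 1.0 + 1 * 0.25) / 3
```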
APA, Harvard, Vancouver, ISO, and other styles
31

Rode, Henning. "Methods and cost models for XPath Query Processing in main memory databases." [S.l. : s.n.], 2003. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB11051694.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Wong, Hing Kwok. "Bidirectional transformation between relational data and XML document with semantic preservation and incremental maintenance /." access full-text access abstract and table of contents, 2005. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?phd-cs-b19887619a.pdf.

Full text
Abstract:
Thesis (Ph.D.)--City University of Hong Kong, 2005.
"Submitted to Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy" Includes bibliographical references (leaves 218-226)
APA, Harvard, Vancouver, ISO, and other styles
33

Vodňanský, Daniel. "Porovnání schématu relační databáze a struktur formátu XML." Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-198451.

Full text
Abstract:
The work deals with the relationship between the relational model and XML document schemas, including its technological and pragmatic aspects. It defines the theoretical field of data modeling at the conceptual level and the two mentioned implementation models at the physical level. The aim is to answer the question of when, in the design and development of an application or system, it is appropriate to proceed with one of these models. Furthermore, this work also provides a general procedure for mapping a conceptual schema into XML schema structures, along with solutions to problems that can arise during the mapping process. The problem is approached by analyzing two real-world cases, timetables of public transportation and the information system of a swimming school, formalized through a mechanism of predicate logic. Unlike most works on a similar topic, this one takes a pragmatic view of the problem: the concept of data, their origin, their target user and their structuring.
APA, Harvard, Vancouver, ISO, and other styles
34

Sousa, Flávio Rubens de Carvalho. "RepliX: Um mecanismo para a replicação de dados XML." Universidade Federal do Ceará, 2007. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=1361.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
XML has become a widely used standard for data representation and exchange in applications. The growing usage of XML creates a need for efficient storage and recovery systems for XML data. Native XML DBs (NXDBs) are being developed to target this demand. NXDBs implement many characteristics that are common to traditional DBs, such as storage, indexing, query processing, transactions and replication. Most existing solutions solve the replication issue through traditional techniques. However, the flexibility of XML data imposes new challenges, so new replication techniques ought to be developed. To improve the performance and availability of NXDBs, this thesis proposes RepliX, a mechanism for XML data replication that takes into account the main characteristics of this data type, making it possible to reduce the response time in query processing and improving the fault-tolerance property of such systems. Although there are several replication protocols, using the group communication abstraction for communication and fault detection has proven to be a good solution, since this abstraction provides efficient message exchanging techniques and reliability guarantees. RepliX uses this strategy, organizing the sites into an update group and a read-only group in such a way that allows for the use of load balancing among the sites, and makes the system less susceptible to faults, since there is no single point of failure in each group. In order to evaluate RepliX, a new replication layer was implemented on top of an existing NXDB to introduce the characteristics of the proposed mechanism. Several experiments using this layer were conducted, and their results confirm the mechanism's efficiency considering the different aspects of a replicated database, improving its performance considerably, as well as its availability.
XML tem se tornado um padrão amplamente utilizado na representação e troca de dados em aplicações. Devido a essa crescente utilização do XML, torna-se necessária a existência de sistemas eficientes de armazenamento e recuperação de dados XML. Estão sendo desenvolvidos para este fim Bancos de Dados XML Nativos (BDXNs). Estes bancos implementam muitas das características presentes em Bancos de Dados tradicionais, tais como armazenamento, indexação, processamento de consultas, transações e replicação. Tratando-se especificamente de replicação, a maioria das soluções existentes resolve essa questão apenas utilizando técnicas tradicionais. Todavia, a flexibilidade dos dados XML impõe novos desafios, de modo que novas técnicas de replicação devem ser desenvolvidas. Para melhorar o desempenho e a disponibilidade dos BDXNs, esta dissertação propõe o RepliX, um mecanismo para replicação de dados XML que considera as principais características desses dados. Dessa forma, é possível melhorar o tempo de resposta no processamento de consultas e tornar esses sistemas mais tolerantes a falhas. Dentre vários tipos de protocolos de replicação, a utilização da abstração de comunicação em grupos como estratégia de comunicação e detecção de falhas mostra-se uma solução eficaz, visto que essa abstração possui técnicas eficientes para troca de mensagens e provê garantias de confiabilidade. Essa estratégia é utilizada no RepliX, que organiza os sites em dois grupos: de atualização e de leitura, permitindo assim balanceamento de carga entre os sites, além de tornar o sistema menos sensível a falhas, já que não há um ponto de falha único em cada grupo. Para validar o RepliX, uma nova camada de replicação foi implementada em um BDXN, a fim de introduzir as características e os comportamentos descritos no mecanismo proposto.
Experimentos foram feitos usando essa camada e os resultados obtidos atestam a sua eficácia considerando diferentes aspectos de um banco de dados replicado, melhorando o desempenho desses bancos de dados consideravelmente bem como sua disponibilidade.
APA, Harvard, Vancouver, ISO, and other styles
35

Sousa, Flávio Rubens de Carvalho. "RepliX: Um mecanismo para a replicação de dados XML." reponame:Repositório Institucional da UFC, 2007. http://www.repositorio.ufc.br/handle/riufc/18058.

Full text
Abstract:
SOUSA, Flávio Rubens de Carvalho. RepliX: Um mecanismo para a replicação de dados XML. 2007. 77 f. : Dissertação (mestrado) - Universidade Federal do Ceará, Centro de Ciências, Departamento de Computação, Fortaleza-CE, 2007.
XML tem se tornado um padrão amplamente utilizado na representação e troca de dados em aplicações. Devido a essa crescente utilização do XML, torna-se necessária a existência de sistemas eficientes de armazenamento e recuperação de dados XML. Estão sendo desenvolvidos para este fim Bancos de Dados XML Nativos (BDXNs). Estes bancos implementam muitas das características presentes em Bancos de Dados tradicionais, tais como armazenamento, indexação, processamento de consultas, transações e replicação. Tratando-se especificamente de replicação, a maioria das soluções existentes resolve essa questão apenas utilizando técnicas tradicionais. Todavia, a flexibilidade dos dados XML impõe novos desafios, de modo que novas técnicas de replicação devem ser desenvolvidas. Para melhorar o desempenho e a disponibilidade dos BDXNs, esta dissertação propõe o RepliX, um mecanismo para replicação de dados XML que considera as principais características desses dados. Dessa forma, é possível melhorar o tempo de resposta no processamento de consultas e tornar esses sistemas mais tolerantes a falhas. Dentre vários tipos de protocolos de replicação, a utilização da abstração de comunicação em grupos como estratégia de comunicação e detecção de falhas mostra-se uma solução eficaz, visto que essa abstração possui técnicas eficientes para troca de mensagens e provê garantias de confiabilidade. Essa estratégia é utilizada no RepliX, que organiza os sites em dois grupos: de atualização e de leitura, permitindo assim balanceamento de carga entre os sites, além de tornar o sistema menos sensível a falhas, já que não há um ponto de falha único em cada grupo. Para validar o RepliX, uma nova camada de replicação foi implementada em um BDXN, a fim de introduzir as características e os comportamentos descritos no mecanismo proposto.
Experimentos foram feitos usando essa camada e os resultados obtidos atestam a sua eficácia considerando diferentes aspectos de um banco de dados replicado, melhorando o desempenho desses bancos de dados consideravelmente bem como sua disponibilidade.
XML has become a widely used standard for data representation and exchange in applications. The growing usage of XML creates a need for efficient storage and recovery systems for XML data. Native XML DBs (NXDBs) are being developed to target this demand. NXDBs implement many characteristics that are common to traditional DBs, such as storage, indexing, query processing, transactions and replication. Most existing solutions solve the replication issue through traditional techniques. However, the flexibility of XML data imposes new challenges, so new replication techniques ought to be developed. To improve the performance and availability of NXDBs, this thesis proposes RepliX, a mechanism for XML data replication that takes into account the main characteristics of this data type, making it possible to reduce the response time in query processing and improving the fault-tolerance property of such systems. Although there are several replication protocols, using the group communication abstraction for communication and fault detection has proven to be a good solution, since this abstraction provides efficient message exchanging techniques and reliability guarantees. RepliX uses this strategy, organizing the sites into an update group and a read-only group in such a way that allows for the use of load balancing among the sites, and makes the system less susceptible to faults, since there is no single point of failure in each group. In order to evaluate RepliX, a new replication layer was implemented on top of an existing NXDB to introduce the characteristics of the proposed mechanism. Several experiments using this layer were conducted, and their results confirm the mechanism's efficiency considering the different aspects of a replicated database, improving its performance considerably, as well as its availability.
APA, Harvard, Vancouver, ISO, and other styles
36

Jindra, Petr. "Verzování databází." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2013. http://www.nusl.cz/ntk/nusl-230800.

Full text
Abstract:
This diploma thesis deals with the versioning of databases. The first chapters briefly describe the main problems that need to be solved. The second part covers the chosen database systems and gives an introduction to the SQL language. The last part describes the development of a program that solves the outlined problems.
APA, Harvard, Vancouver, ISO, and other styles
37

Senellart, Pierre. "XML probabiliste: Un modèle de données pour le Web." Habilitation à diriger des recherches, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00758055.

Full text
Abstract:
Data extracted from the Web are laden with uncertainty: they may contain contradictions or result from inherently uncertain processes such as data integration or automatic information extraction. In this habilitation thesis, I present probabilistic XML data models, the way they can be used to represent Web data, and the complexity of various data management operations over these models. I give an exhaustive state of the art of the field, with emphasis on my own contributions. I conclude with a summary of my future research projects.
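As a toy illustration of one common probabilistic XML model (a simplified p-document with independent probabilities attached to nodes; not necessarily the exact models surveyed in this habilitation), the probability that a node appears in a random document is the product of the probabilities along its root path:

```python
# A probabilistic XML tree: each node carries an independent existence
# probability "p", conditioned on its parent existing. This is a simplified,
# illustrative model, not a specific system's data structure.

tree = {
    "tag": "person", "p": 1.0, "children": [
        {"tag": "name", "p": 1.0, "children": []},
        {"tag": "email", "p": 0.8, "children": [
            {"tag": "verified", "p": 0.5, "children": []},
        ]},
    ],
}

def existence_probability(node, tag, prefix=1.0):
    """Sum, over all nodes with the given tag, of the product of the
    probabilities along the node's root path."""
    p = prefix * node["p"]
    total = p if node["tag"] == tag else 0.0
    for child in node["children"]:
        total += existence_probability(child, tag, p)
    return total

prob = existence_probability(tree, "verified")   # 1.0 * 0.8 * 0.5
```

Even this toy model shows why query answering becomes a counting problem: each answer must be weighted by the probability of the worlds in which it holds.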
APA, Harvard, Vancouver, ISO, and other styles
38

Parthepan, Vijayeandra. "Efficient Schema Extraction from a Collection of XML Documents." TopSCHOLAR®, 2011. http://digitalcommons.wku.edu/theses/1061.

Full text
Abstract:
The eXtensible Markup Language (XML) has become the standard format for data exchange on the Internet, providing interoperability between different business applications. Such wide use results in large volumes of heterogeneous XML data, i.e., XML documents conforming to different schemas. Although schemas are important in many business applications, they are often missing in XML documents. In this thesis, we present a suite of algorithms that are effective in extracting schema information from a large collection of XML documents. We propose using the cost of NFA simulation to compute the Minimum Description Length (MDL) to rank the inferred schemas. We also study using the frequencies of the sample inputs to improve the precision of the schema extraction. Furthermore, we propose an evaluation framework to quantify the quality of the extracted schema. Experimental studies are conducted on various data sets to demonstrate the efficiency and efficacy of our approach.
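A drastically simplified sketch of the schema-extraction idea (not the thesis's NFA/MDL method): scan sample documents and record, for each element tag, the set of child tags observed, yielding a crude content-model summary from which a schema could be drafted.

```python
# Infer a crude content model from sample XML documents: for each element
# tag, collect the child tags that were ever observed beneath it.
import xml.etree.ElementTree as ET
from collections import defaultdict

def infer_children(documents):
    model = defaultdict(set)
    for doc in documents:
        root = ET.fromstring(doc)
        stack = [root]
        while stack:
            node = stack.pop()
            for child in node:          # iterate direct subelements
                model[node.tag].add(child.tag)
                stack.append(child)
    return {tag: sorted(kids) for tag, kids in model.items()}

samples = [
    "<book><title>A</title><author>X</author></book>",
    "<book><title>B</title><year>1999</year></book>",
]
schema = infer_children(samples)
# schema["book"] collects every child tag seen across the sample
```

Real extraction must go further, deciding order, repetition and optionality, which is where ranking candidate schemas by description length comes in.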
APA, Harvard, Vancouver, ISO, and other styles
39

Schuhart, Henrike. "Design and implementation of a database programming language for XML-based applications." Berlin Aka, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2890794&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Marina, Sahakyan. "Optimisation des mises à jours XML pour les systèmes main-memory: implémentation et expériences." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00641579.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Williams, Clifton James. "Network application server using Extensible Mark-up Language (XML) to support distributed databases and 3D environments." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA401644.

Full text
Abstract:
Thesis (M.S. in Computer Science and M.S. in Information Systems Technology) Naval Postgraduate School, December 2001.
Thesis advisor(s): Brutzman, Don; Dolk, Daniel. Includes bibliographical references (p. 277-281). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
42

Simmons, Steven A. "Analysis and prototyping of the United States Marine Corps Total Force Administration System (TFAS), Echelon II : a web enabled database for the small unit leader /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02sep%5FSimmons%5FSteven.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Schneider, Jan, Héctor Cárdenas, and José Alfonso Talamantes. "Using Web Services for Transparent Access to Distributed Databases." Thesis, Jönköping University, JTH, Computer and Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-940.

Full text
Abstract:
This thesis presents a strategy for integrating distributed systems with the aid of web services. The research focuses on three subjects: web services, distributed database systems, and their application to a real-life project.

To define the context of this thesis, we present the research methodology that guides the investigation, along with the general concepts of the runtime environment and architecture of web services.

The major contribution of this thesis is a solution for the Chamber Trade in Sweden and VNemart in Vietnam: we derive the requirement specification from the needs of the SPIDER project and produce a software design specification using distributed databases and web services.

As results, we present the software implementation and show how the software meets the previously defined requirements. For future web services developments, this document provides guidance on best practices in this subject.
APA, Harvard, Vancouver, ISO, and other styles
44

Dang-Ngoc, Tuyet-Tram. "Fédération de données semi-structurées avec XML." Phd thesis, Université de Versailles-Saint Quentin en Yvelines, 2003. http://tel.archives-ouvertes.fr/tel-00733510.

Full text
Abstract:
Unlike traditional data, semi-structured data are irregular: data may be missing, similar concepts may be represented by different data types, and even the structures may be poorly known. This absence of a predefined schema, which makes it possible to account for any data from the outside world, has the drawback of complicating the algorithms for integrating data from different sources. We propose a mediation architecture based entirely on XML. The objective of this mediation architecture is to federate distributed data sources of different types. It relies on XQuery, a functional language designed for querying XML documents. The mediator analyzes queries expressed in XQuery and distributes the execution of the query over the different sources before recomposing the results. Query evaluation must exploit the specific characteristics of the data as much as possible and allow effective optimization. We describe XAlgebra, an algebra based on operators designed for XML. The purpose of this algebra is to build execution plans for evaluating XQuery queries and to process tuples of XML trees. These execution plans must be describable by a cost model, and the plan with minimum cost is selected for execution. In this thesis, we define a cost model for semi-structured data adapted to our algebra. The data sources (DBMSs, web servers, search engines) can be very heterogeneous; they may have very different data-processing capabilities, as well as cost models that are more or less well defined. To integrate this information into the mediation architecture, we must determine how to communicate it between the mediator and the sources, and how to integrate it.
To this end, we use XML-based languages such as XML Schema and MathML to export metadata, cost formulas, and source capabilities. This exported information is communicated through an application programming interface named XML/DBC. Finally, various optimizations specific to the mediation architecture must be considered. For this purpose, we introduce a semantic cache based on a DBMS prototype that efficiently stores XML data natively.
APA, Harvard, Vancouver, ISO, and other styles
45

Acar, Esra. "Efficient index structures for video databases." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609322/index.pdf.

Full text
Abstract:
Content-based retrieval of multimedia data is still an active research area, and efficient retrieval of video data has proven to be a difficult task for content-based video retrieval systems. In this thesis, a Content-Based Video Retrieval (CBVR) system is presented that adapts two different index structures, namely Slim-Tree and BitMatrix, to efficiently retrieve videos based on low-level features such as color, texture, shape and motion. The system represents the low-level features of video data with MPEG-7 descriptors extracted from video shots using the MPEG-7 reference software, and stores them in a native XML database. The low-level descriptors used in the study are Color Layout (CL), Dominant Color (DC), Edge Histogram (EH), Region Shape (RS) and Motion Activity (MA). An Ordered Weighted Averaging (OWA) operator aggregates these features in both Slim-Tree and BitMatrix to compute the final similarity between any two objects. The system supports three types of queries: exact-match queries, k-NN queries and range queries. The experiments in this study cover index construction, index update, query response time and retrieval efficiency, using the ANMRR performance metric and precision/recall scores. The experimental results show that BitMatrix combined with the Ordered Weighted Averaging method is superior for content-based video retrieval systems.
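The OWA operator mentioned above has a simple definition: sort the per-feature similarity scores in decreasing order, then take a weighted sum. A minimal sketch, with hypothetical scores and weights (the thesis's actual weights for CL, DC, EH, RS and MA are not given here):

```python
# Minimal sketch of the Ordered Weighted Averaging (OWA) operator used to
# fuse per-feature similarity scores; the weights below are invented.
def owa(scores, weights):
    """OWA: order the scores decreasingly, then take the weighted sum."""
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

# Similarities of two video shots on three features (hypothetical values).
feature_similarities = [0.2, 0.9, 0.5]
weights = [0.5, 0.3, 0.2]  # must sum to 1
print(owa(feature_similarities, weights))  # approximately 0.64
```

Because the weights attach to rank positions rather than to specific features, OWA can behave like max, min or plain averaging depending on how the weight vector is chosen.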
APA, Harvard, Vancouver, ISO, and other styles
46

Dieter, Jahn. "Performance comparison of relational and native-xml databases using the semantics of the land command and control information exchange data model (LC2IEDM)." Thesis, Monterey, California. Naval Postgraduate School, 2005. http://hdl.handle.net/10945/2118.

Full text
Abstract:
Approved for public release, distribution unlimited
Efforts to improve the military decision and action cycle have centered on automating the command and control process and improving interoperability among joint and coalition forces. However, information automation by itself can lead to increased operator overload when the way this information is stored and presented is not structured and consistently filtered. The majority of messaging systems store information in a document-centric free-text format that makes it difficult for command and control systems, relational databases, software agents and web portals to intelligently search the information. Consistent structure and semantic meaning are essential when integrating these capabilities. Military-grade implementations must also provide high performance. A widely accepted platform-independent technology standard for representing document-centric information is the Extensible Markup Language (XML). XML supports the structured representation of information in context through the use of metadata. By using an XML Schema generated from MIP's Land Command and Control Information Exchange Data Model (LC2IEDM), it is feasible to compare the syntactic strength of human-readable XML documents with the semantics of LC2IEDM as used within a relational database. The insert, update, retrieve and delete performance of a native-XML database is compared against that of a relational database management system (RDBMS) implementing the same command and control data model (LC2IEDM). Additionally, the compression and parsing performance advantages of various binary XML compression schemes are investigated. Experimental measurements and analytic comparisons are made to determine whether the performance of a native-XML database is a disadvantage to the use of XML. Finally, because of the globally significant potential of these interoperability improvements, a number of look-ahead items for future work are proposed.
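The retrieve-performance comparison at the heart of this thesis can be illustrated in miniature: the same records stored relationally and as an XML tree, with the same lookup timed against both. This is only a toy sketch; SQLite stands in for the RDBMS, the `unit` schema and data are invented, and LC2IEDM itself is far richer.

```python
import sqlite3
import time
import xml.etree.ElementTree as ET

# The same invented records, stored two ways.
records = [(i, f"unit-{i}") for i in range(1000)]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE unit (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO unit VALUES (?, ?)", records)

root = ET.Element("units")
for i, name in records:
    ET.SubElement(root, "unit", id=str(i)).text = name

# Retrieve one record relationally (indexed primary-key lookup) ...
t0 = time.perf_counter()
row = db.execute("SELECT name FROM unit WHERE id = 500").fetchone()
t_sql = time.perf_counter() - t0

# ... and from the XML tree (linear scan via an XPath predicate).
t0 = time.perf_counter()
elem = root.find("./unit[@id='500']")
t_xml = time.perf_counter() - t0

print(row[0], elem.text, f"sql={t_sql:.6f}s xml={t_xml:.6f}s")
```

The thesis performs this comparison at scale, over the full insert/update/retrieve/delete cycle and with a real native-XML database rather than an in-memory tree.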
APA, Harvard, Vancouver, ISO, and other styles
47

Mohamed-Amine, Baazizi. "Analyse statique pour l'optimisation des mises à jour de documents XML temporels." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00765066.

Full text
Abstract:
Recent years have been marked by the massive adoption of XML as a format for exchanging and representing data stored on the web. This evolution has been accompanied by the development of languages for querying and manipulating XML data, and by the implementation of several systems for storing and processing such data. Among these systems, main-memory engines were developed to address the specific needs of applications that do not require the advanced features of traditional DBMSs. These engines offer the same functionality as traditional systems, except that, unlike the latter, they must load documents entirely into main memory in order to process them. Consequently, these systems are limited in the size of the documents they can handle. In this thesis we focus on the evolution of XML data and the management of its temporal dimension. The thesis comprises two parts with the common objective of developing efficient methods for processing large XML documents using main-memory engines. In the first part we focus on updating static XML documents. We propose an optimization technique based on XML projection and on the use of schemas. Projection is a method that was originally proposed for queries, in order to overcome the limitations of main-memory engines; using it for updates raises new problems, notably the propagation of update effects. The second part is devoted to building and maintaining temporal documents, still under the memory constraint, with the added requirement of generating documents that are efficient from a storage point of view. Our contribution consists of two methods.
The first method applies in the general case, where no information is used to build the temporal documents. It is designed to operate in streaming fashion and can therefore process documents with virtually no size limit. The second method applies when the changes are specified by updates. It uses the projection paradigm, which additionally allows it to handle large documents and to generate temporal documents that are satisfactory from a storage point of view.
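The projection idea described above can be sketched concretely: prune a document down to the paths a query or update actually touches, so the main-memory engine loads far less data. A minimal sketch, with an invented document and projection paths (the thesis's schema-based path inference is not shown):

```python
import xml.etree.ElementTree as ET

# Sketch of XML projection: keep only the subtrees a query needs.
doc = ET.fromstring(
    "<lib><book><title>T1</title><review>long text...</review></book>"
    "<book><title>T2</title><review>more text...</review></book></lib>")

def project(elem, keep):
    """Remove every descendant whose tag is not needed by the query."""
    for child in list(elem):
        if child.tag in keep:
            project(child, keep)
        else:
            elem.remove(child)
    return elem

# A query over book titles never touches <review>, so it can be pruned.
projected = project(doc, {"book", "title"})
print(ET.tostring(projected, encoding="unicode"))
```

For updates, as the abstract notes, the pruned parts must later be merged back so the update's effects propagate to the full document, which is where the new difficulties arise.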
APA, Harvard, Vancouver, ISO, and other styles
48

Karlsson, Stefan. "Experimental Database Export/Import for InPUT." Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-21089.

Full text
Abstract:
The Intelligent Parameter Utilization Tool (InPUT) is a format and API for the cross-language description of experiments, which makes it possible to define experiments and their contexts at an abstract level in the form of XML- and archive-based descriptors. By using experimental descriptors, programs can be reconfigured without having to be recoded and recompiled, and the experimental results of third parties can be reproduced independently of the programming language and algorithm implementation. Previously, InPUT has supported the export and import of experimental descriptors to/from XML documents, archive files and LaTeX tables. The overall aim of this project was to develop an SQL database design that allows for the export, import, querying, updating and deletion of experimental descriptors, to implement the design as an extension of the Java implementation of InPUT (InPUTj), and to verify the general applicability of the implementation by modeling real-world use cases. The use cases covered everything from simple database transactions involving simple descriptors to complex database transactions involving complex descriptors. In addition, it was investigated whether queries and updates of descriptors execute more rapidly if the descriptors are stored in databases in accordance with the created SQL schema, with the queries and updates handled by the DBMS PostgreSQL, or if the descriptors are stored directly in files and the queries and updates are handled by the default XML-processing engine of InPUTj (JDOM). The results of the test cases indicate that the former usually allows for a faster execution of queries, while the latter usually allows for a faster execution of updates. Using database-stored descriptors instead of file-based descriptors offers many advantages, such as making it significantly easier and less costly to manage, analyze and exchange large amounts of experimental data.
However, database-stored descriptors complement file-based descriptors rather than replace them. The goals of the project were achieved, and the different types of database transactions involving descriptors can now be handled via a simple API provided by a Java facade class.
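The core export step, decomposing an XML descriptor into rows so the DBMS can query it without re-parsing the file, can be sketched as follows. This is a hypothetical miniature: SQLite stands in for PostgreSQL, and the descriptor, table and parameter names are invented (the real InPUT schema is considerably richer).

```python
import sqlite3
import xml.etree.ElementTree as ET

# An invented experimental descriptor in the spirit of InPUT.
descriptor = """
<experiment id="exp1">
  <param name="populationSize" value="100"/>
  <param name="mutationRate" value="0.05"/>
</experiment>
"""

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE param (experiment TEXT, name TEXT, value TEXT)")

# Export: shred the descriptor's parameters into rows.
root = ET.fromstring(descriptor)
for p in root.findall("param"):
    db.execute("INSERT INTO param VALUES (?, ?, ?)",
               (root.get("id"), p.get("name"), p.get("value")))

# A query the file-based engine would answer by re-parsing the document:
rate = db.execute(
    "SELECT value FROM param WHERE name = 'mutationRate'").fetchone()[0]
print(rate)  # 0.05
```

The trade-off the thesis measures follows directly from this design: queries become index lookups, while updates must keep the rows and the descriptor consistent, which costs more than rewriting a file.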
APA, Harvard, Vancouver, ISO, and other styles
49

Braganholo, Vanessa de Paula. "From XML to relational view updates: applying old solutions to solve a new problem." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2004. http://hdl.handle.net/10183/4952.

Full text
Abstract:
XML has become an important medium for data exchange, and is frequently used as an interface to - i.e. a view of - a relational database. Although lots of work have been done on querying relational databases through XML views, the problem of updating relational databases through XML views has not received much attention. In this work, we give the first steps towards solving this problem. Using query trees to capture the notions of selection, projection, nesting, grouping, and heterogeneous sets found throughout most XML query languages, we show how XML views expressed using query trees can be mapped to a set of corresponding relational views. Thus, we transform the problem of updating relational databases through XML views into a classical problem of updating relational databases through relational views. We then show how updates on the XML view are mapped to updates on the corresponding relational views. Existing work on updating relational views can then be leveraged to determine whether or not the relational views are updatable with respect to the relational updates, and if so, to translate the updates to the underlying relational database. Since query trees are a formal characterization of view definition queries, they are not well suited for end-users. We then investigate how a subset of XQuery can be used as a top-level language, and show how query trees can be used as an intermediate representation of view definitions expressed in this subset.
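The mapping from a nested XML view to flat relational views can be illustrated in miniature: nesting becomes a foreign key from the inner relation to the outer one. A sketch with invented element names and data (the thesis's query trees, which drive this shredding in general, are not modeled here):

```python
import xml.etree.ElementTree as ET

# An invented nested XML view over customers and their orders.
view = """
<customers>
  <customer id="c1"><name>Ana</name>
    <order id="o1"><total>30</total></order>
    <order id="o2"><total>45</total></order>
  </customer>
</customers>
"""

root = ET.fromstring(view)
customer_view, order_view = [], []
for c in root.findall("customer"):
    customer_view.append((c.get("id"), c.findtext("name")))
    for o in c.findall("order"):
        # The nesting becomes a foreign key from order back to customer.
        order_view.append((o.get("id"), c.get("id"), o.findtext("total")))

print(customer_view)  # [('c1', 'Ana')]
print(order_view)     # [('o1', 'c1', '30'), ('o2', 'c1', '45')]
```

Once the XML view is expressed as such relational views, an update on the XML side (say, changing an order total) reduces to an update on `order_view`, and classical view-updatability results apply.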
APA, Harvard, Vancouver, ISO, and other styles
50

Butnaru, Bogdan. "Optimisation de requêtes XQuery dans des bases de données XML distribuées sur des réseaux pair-à-pair." Phd thesis, Université de Versailles-Saint Quentin en Yvelines, 2012. http://tel.archives-ouvertes.fr/tel-00768416.

Full text
Abstract:
This thesis addresses distributed XML databases based on peer-to-peer networks. Our approach is unique in that it targets the processing of the full XQuery language rather than a reduced language specific to the indexes used. The XQ2P system presented in this thesis embodies this architecture; it takes the form of a complete collection of fundamental software building blocks for developing similar applications. The peer-to-peer layer is provided by P2PTester, a framework offering modules for basic P2P functionality and a distributed system for tests and simulations. A P2P-adapted version of the TwigStack algorithm, using a structural index based on node numbering, is integrated into it. Together with a query pre-processing component, it allows XQ2P to evaluate structural queries over the distributed database efficiently. An alternative version of the same algorithm is also used to evaluate most XQuery queries efficiently. One of the major novelties of XQuery 3.0 is the handling of time series. We defined a model for processing this type of data, using the XML model to represent values and XQuery 3.0 queries to manipulate them. We add to XQ2P an index adapted to this model; horizontal partitioning of long time series, optimized operators and a technique for parallel evaluation of sub-expressions enable the efficient execution of operations on large data volumes.
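The node-numbering structural index mentioned above can be sketched concretely: structural-join algorithms in the TwigStack family typically assign each node an interval from a preorder traversal, so that ancestry reduces to interval containment. A minimal sketch under that assumption, with an invented document:

```python
import xml.etree.ElementTree as ET

# Region numbering: each node gets (start, end) from a preorder traversal;
# node a is an ancestor of node d iff a's interval contains d's.
def number_nodes(elem, counter=None, index=None):
    if counter is None:
        counter, index = [0], {}
    counter[0] += 1
    start = counter[0]
    for child in elem:
        number_nodes(child, counter, index)
    counter[0] += 1
    index[elem] = (start, counter[0])
    return index

def is_ancestor(index, a, d):
    sa, ea = index[a]
    sd, ed = index[d]
    return sa < sd and ed < ea

root = ET.fromstring("<a><b><c/></b><d/></a>")
idx = number_nodes(root)
b, c = root.find("b"), root.find("b/c")
print(is_ancestor(idx, root, c), is_ancestor(idx, b, root))  # True False
```

The appeal of this encoding in a P2P setting is that the ancestor test needs only the two intervals, not the document itself, so structural predicates can be checked on whichever peer holds the numbered nodes.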
APA, Harvard, Vancouver, ISO, and other styles