Academic literature on the topic 'XML (Document markup language) Data mining. Database management'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'XML (Document markup language) Data mining. Database management.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "XML (Document markup language) Data mining. Database management"

1

Fong, Joseph, Herbert Shiu, and Jenny Wong. "Methodology for Data Conversion from XML Documents to Relations Using Extensible Stylesheet Language Transformation." International Journal of Software Engineering and Knowledge Engineering 19, no. 02 (March 2009): 249–81. http://dx.doi.org/10.1142/s0218194009004131.

Full text
Abstract:
Extensible Markup Language (XML) has been used for data transport and data transformation, while the business sector continues to store critical business data in relational databases. Extracting relational data and formatting it into XML documents, and then converting XML documents back to relational structures, has become a major daily activity, so it is important to have an efficient methodology for this conversion between XML documents and relational data. This paper aims to perform data conversion from XML documents into relational databases. It proposes a prototype and algorithms for the conversion process; the pre-process is schema translation using an XML schema definition. The proposed approach is based on the needs of an Order Information System and suggests a methodology that gains the benefits provided by both XML technology and relational database management systems. The methodology is a stepwise procedure using XML schema definitions and Extensible Stylesheet Language Transformations (XSLT) to ensure that data constraints are not sacrificed during conversion. The data conversion is implemented by decomposing the XML document's hierarchical tree model into normalized relations interrelated by their artifact primary keys and foreign keys; the transformation process is performed by XSLT. The paper also demonstrates the entire conversion process through a detailed case study.
APA, Harvard, Vancouver, ISO, and other styles
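The shredding step described in the abstract above, decomposing a hierarchical XML document into normalized relations linked by generated keys, can be sketched as follows. This is a minimal illustration, not the paper's method: the element names and the toy order document are invented, and plain Python stands in for the XSLT-driven transformation the authors use.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical order document, loosely modelled on the paper's
# Order Information System setting (element names are assumptions).
DOC = """
<orders>
  <order id="O1"><customer>Ada</customer>
    <item><sku>X1</sku><qty>2</qty></item>
    <item><sku>X2</sku><qty>1</qty></item>
  </order>
</orders>
"""

def shred(xml_text):
    """Decompose the hierarchical XML into two normalized relations,
    linking child rows to their parents with generated foreign keys."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders(pk INTEGER PRIMARY KEY, oid TEXT, customer TEXT)")
    con.execute("CREATE TABLE items(pk INTEGER PRIMARY KEY, order_fk INTEGER, sku TEXT, qty INTEGER)")
    root = ET.fromstring(xml_text)
    for opk, order in enumerate(root.findall("order"), start=1):
        con.execute("INSERT INTO orders VALUES (?,?,?)",
                    (opk, order.get("id"), order.findtext("customer")))
        for item in order.findall("item"):
            # order_fk is the artifact key tying the child relation back
            # to its parent; pk is assigned automatically by SQLite.
            con.execute("INSERT INTO items(order_fk, sku, qty) VALUES (?,?,?)",
                        (opk, item.findtext("sku"), int(item.findtext("qty"))))
    return con

con = shred(DOC)
rows = con.execute(
    "SELECT o.oid, i.sku, i.qty FROM orders o "
    "JOIN items i ON i.order_fk = o.pk ORDER BY i.pk").fetchall()
```

Joining the two relations back together recovers the original parent-child associations, which is the round-trip property the paper's methodology is designed to preserve.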
2

Thuraisingham, Bhavani. "Web Information Management and Its Application to Electronic Commerce." International Journal on Artificial Intelligence Tools 08, no. 02 (June 1999): 107–17. http://dx.doi.org/10.1142/s0218213099000087.

Full text
Abstract:
This paper describes various aspects of web information management, with particular emphasis on its application to electronic commerce. We first provide a brief overview of the web. We then discuss concepts for web database management, as database management is a key part of information management; these include data models and architectures, query processing, transaction management, metadata management, storage issues, and integrity and security. Next we discuss various web information management technologies such as multimedia, visualization, data mining and warehousing, and knowledge management, followed by emerging standards such as Java Database Connectivity (JDBC), the Extensible Markup Language (XML), and middleware standards such as Object Request Brokers (ORB) and Remote Method Invocation (RMI). Finally we discuss how web data management technologies can be applied to the important area of electronic commerce.
APA, Harvard, Vancouver, ISO, and other styles
3

Penev, Lyubomir, Donat Agosti, Teodor Georgiev, Viktor Senderov, Guido Sautter, Terry Catapano, and Pavel Stoev. "The Open Biodiversity Knowledge Management (eco-)System: Tools and Services for Extraction, Mobilization, Handling and Re-use of Data from the Published Literature." Biodiversity Information Science and Standards 2 (May 17, 2018). http://dx.doi.org/10.3897/biss.2.25748.

Full text
Abstract:
The Open Biodiversity Knowledge Management System (OBKMS) is an end-to-end, eXtensible Markup Language (XML)- and Linked Open Data (LOD)-based ecosystem of tools and services that encompasses the entire process of authoring, submission, review, publication, dissemination, and archiving of biodiversity literature, as well as the text mining of published biodiversity literature (Fig. 1). These capabilities lead to the creation of interoperable, computable, and reusable biodiversity data with provenance linking facts to publications. OBKMS is the result of a joint endeavour by Plazi and Pensoft lasting many years. The system was developed with the support of several biodiversity informatics projects: initially the Virtual Biodiversity Research and Access Network for Taxonomy (ViBRANT), followed by pro-iBiosphere, the European Biodiversity Observation Network (EU BON), and Biosystematics, informatics and genomics of the big 4 insect groups (BIG4). The system includes the following key components:
- ARPHA Journal Publishing Platform: a journal publishing platform based on the TaxPub XML extension of the National Library of Medicine (NLM) Journal Publishing Document Type Definition (DTD), Version 3.0. Its advanced ARPHA-BioDiv component deals with integrated biodiversity data and narrative publishing (Penev et al. 2017).
- GoldenGATE Imagine: an environment for marking up, enhancing, and extracting text and data from PDF files, supporting the TaxonX XML schema. It has specific enhancements for articles containing descriptions of taxa ("taxonomic treatments") in the field of biological systematics, but its core features may be used for general purposes as well.
- Biodiversity Literature Repository (BLR): a public repository hosted at Zenodo (CERN) for published articles (PDF and XML) and images extracted from articles.
- Ocellus/Zenodeo: a search interface for the images stored at BLR.
- TreatmentBank: an XML-based repository for taxonomic treatments and the data extracted from literature therein.
- The OpenBiodiv knowledge graph: a biodiversity knowledge graph built according to the Linked Open Data (LOD) principles. It uses the RDF data model and the SPARQL Protocol and RDF Query Language (SPARQL), is open to the public, and is powered by the OpenBiodiv-O ontology (Senderov et al. 2018).
- OpenBiodiv portal: semantic search and browsing for the biodiversity knowledge graph, plus multiple semantic apps packaging specific views of the graph.
- Supporting tools: Pensoft Markup Tool (PMT), ARPHA Writing Tool (AWT), ReFindit, R libraries for working with RDF and for converting XML to RDF (ropenbio, RDF4R), and the Plazi RDF converter, web services, and APIs.
As part of OBKMS, Plazi and Pensoft offer the following services beyond supplying the software toolkit:
- Digitization through imaging and text capture of paper-based or digitally born (PDF) legacy literature.
- XML markup of both legacy and newly published literature (journals and books).
- Data extraction and markup of taxonomic names, literature references, taxonomic treatments, and organism occurrence records.
- Export and storage of text, images, and structured data in data repositories.
- Linking and semantic enhancement of text and data, bibliographic references, taxonomic treatments, illustrations, organism occurrences, and organism traits.
- Re-packaging of extracted information into new, user-demanded outputs via semantic apps at the OpenBiodiv portal.
- Re-publishing of legacy literature (e.g., the Flora, Fauna, and Mycota series, important biodiversity monographs, etc.).
- Semantic open access publishing (including data publishing) of journals and books.
- Integration of biodiversity information from legacy and newly published literature into interoperable biodiversity repositories and platforms (Global Biodiversity Information Facility (GBIF), Encyclopedia of Life (EOL), Species-ID, Plazi, Wikidata, and others).
In this presentation we make the case for why OpenBiodiv is an essential tool for advancing biodiversity science. Our argument is that through OpenBiodiv, biodiversity science takes a step towards the ideals of open science (Senderov and Penev 2016). Furthermore, by linking data from various silos, OpenBiodiv allows for the discovery of hidden facts. A particular example of how OpenBiodiv can advance biodiversity science is its solution to "taxonomic anarchy" (Garnett and Christidis 2017). "Taxonomic anarchy" is a term coined by Garnett and Christidis to denote the instability of taxonomic names as symbols for taxonomic meaning. They propose an "authoritarian" top-down approach to stabilize the naming of species. OpenBiodiv, on the other hand, relies on taxonomic concepts as integrative units, so integration can occur through the alignment of taxonomic concepts via the Region Connection Calculus (RCC-5) (Franz and Peet 2009). The alignment is created "democratically" by the users of the system, but no consensus is forced, and "anarchy" is avoided by using unambiguous taxonomic concept labels (Franz et al. 2016) in addition to Linnean names.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "XML (Document markup language) Data mining. Database management"

1

Ho, Wai-shing. "Techniques for managing and analyzing unconventional data." Click to view the E-thesis via HKUTO, 2004. http://sunzi.lib.hku.hk/hkuto/record/B39849028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ho, Wai-shing, and 何偉成. "Techniques for managing and analyzing unconventional data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B39849028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kroeze, Jan Hendrik. "Developing an XML-based, exploitable linguistic database of the Hebrew text of Gen. 1:1-2:3." Pretoria : [s.n.], 2008. http://upetd.up.ac.za/thesis/available/etd-07282008-121520/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Huang, Yuzhou. "Duplicate detection in XML Web data /." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?CSED%202009%20HUANG.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wang, Lian, and 王漣. "Mining information from XML documents for query performance enhancement." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30497486.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ramani, Ramasubramanian. "A toolkit for managing XML data with a relational database management system." [Gainesville, Fla.] : University of Florida, 2001. http://etd.fcla.edu/etd/uf/2001/anp1308/Thesis.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Florida, 2001.
Title from first page of PDF file. Document formatted into pages; contains x, 54 p.; also contains graphics. Vita. Includes bibliographical references (p. 50-53).
APA, Harvard, Vancouver, ISO, and other styles
7

Lee, Yau-tat Thomas, and 李猷達. "Formalisms on semi-structured and unstructured data schema computations." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43703914.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lee, Yau-tat Thomas. "Formalisms on semi-structured and unstructured data schema computations." Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B43703914.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Tatarinov, Igor. "Semantic data sharing with a peer data management system /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/6942.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

von Wenckstern, Michael. "Web applications using the Google Web Toolkit." Master's thesis, Technische Universitaet Bergakademie Freiberg, Universitaetsbibliothek "Georgius Agricola", 2013. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-115009.

Full text
Abstract:
This diploma thesis describes how to create or convert traditional Java programs to desktop-like rich internet applications with the Google Web Toolkit. The Google Web Toolkit is an open source development environment which translates Java code to browser- and device-independent HTML and JavaScript. Most parts of the GWT framework, including the Java-to-JavaScript compiler as well as important security issues of websites, will be introduced. The well-known Agricola board game will be implemented in the Model-View-Presenter pattern to show that complex user interfaces can be created with the Google Web Toolkit. Finally, the Google Web Toolkit framework will be compared with JavaServer Faces to find out which toolkit is the right one for the next web project.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "XML (Document markup language) Data mining. Database management"

1

Soft computing in XML data management: Intelligent systems from decision making to data mining, Web intelligence and computer vision. Berlin: Springer, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Unland, Rainer, Ela Hunt, Michael Rys, and SpringerLink (Online service), eds. Database and XML Technologies: 6th International XML Database Symposium, XSym 2009, Lyon, France, August 24, 2009. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Geva, Shlomo. Focused Retrieval and Evaluation: 8th International Workshop of the Initiative for the Evaluation of XML Retrieval, INEX 2009, Brisbane, Australia, December 7-9, 2009, Revised and Selected Papers. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Geva, Shlomo. Focused Retrieval of Content and Structure: 10th International Workshop of the Initiative for the Evaluation of XML Retrieval, INEX 2011, Saarbrücken, Germany, December 12-14, 2011, Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kumar-Chatterjee, Pav, ed. DB2 pureXML cookbook: Master the power of IBM's hybrid data server. Indianapolis: International Business Machines Corp., 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chaudhri, Akmal B., ed. XML-based data management and multimedia engineering--EDBT 2002: EDBT 2002 workshops XMLDM, MDDE, and YRWS, Prague, Czech Republic, March 24-28, 2002: proceedings. New York: Springer, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kerr, W. Scott. Data integration using virtual repositories. [Toronto]: Kerr, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Special Edition Using XML Schema. Upper Saddle River: Pearson Education, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

International XML Database Symposium (7th 2010 Singapore). Database and XML technologies: 7th International XML Database Symposium, XSym 2010, Singapore, September 17, 2010 : proceedings. Berlin: Springer, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Structuring XML documents. Upper Saddle River, NJ: Prentice Hall PTR, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "XML (Document markup language) Data mining. Database management"

1

Dweib, Ibrahim, and Joan Lu. "Automatic Mapping of XML Documents into Relational Database." In Advances in Data Mining and Database Management, 180–86. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-1975-3.ch013.

Full text
Abstract:
Extensible Markup Language (XML) is nowadays one of the most important standard media used for exchanging and representing data on the Internet. Storing, updating, and retrieving the huge amounts of web services data such as XML is an attractive area of research for researchers and database vendors. In this chapter, the authors propose and develop a new mapping model, called MAXDOR, for storing, rebuilding, updating, and querying XML documents using a relational database, without making use of any XML schemas in the mapping process. The model addresses the problem of bridging the structural gap between ordered, hierarchical XML and the unordered, tabular relational model, so that relational database systems can be used to store, update, and query XML data. A multiple-linked list is used to maintain the XML document structure, manage the process of updating document contents, and retrieve document contents efficiently. Experiments are done to evaluate the MAXDOR model. MAXDOR is compared with other well-known models available in the literature (Tatarinov et al., 2002; Torsten et al., 2004) using the total expected value of XML document rebuilding execution time and token insertion execution time.
APA, Harvard, Vancouver, ISO, and other styles
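The linked-storage idea summarized in the abstract above, keeping sibling links between stored tokens so that document order survives in an unordered table, can be sketched as follows. The field names and in-memory table are invented for illustration and are not MAXDOR's actual schema; the point is only that an insertion rewires two links instead of renumbering global positions.

```python
# Minimal sketch of linked token storage (field names are assumptions):
# each row keeps a pointer to its successor, so document order is
# recoverable from an otherwise unordered table.
rows = {}  # node_id -> {"label": str, "next": node_id or None}

def append_chain(labels):
    """Load an initial token sequence as a singly linked chain."""
    for i, label in enumerate(labels):
        rows[i] = {"label": label, "next": i + 1 if i + 1 < len(labels) else None}

def insert_after(node_id, new_id, label):
    """Insert a new token after node_id by rewiring two links;
    no other row needs to change."""
    rows[new_id] = {"label": label, "next": rows[node_id]["next"]}
    rows[node_id]["next"] = new_id

def rebuild(start=0):
    """Rebuild the document's token sequence by following the links."""
    out, cur = [], start
    while cur is not None:
        out.append(rows[cur]["label"])
        cur = rows[cur]["next"]
    return out

append_chain(["<order>", "<sku>", "</order>"])  # tokens of a tiny document
insert_after(1, 99, "</sku>")                   # update touches only two rows
```

This locality of updates is exactly what the chapter measures with its token-insertion execution time: position-encoding schemes must renumber on insert, while link-based storage does not.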
2

Al-Hamadani, Badya, and Joan Lu. "Introduction." In Advances in Data Mining and Database Management, 91–95. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-1975-3.ch007.

Full text
Abstract:
The eXtensible Markup Language (XML) is a World Wide Web Consortium (W3C) recommendation which has been widely used in both commerce and research. As the importance of XML documents increases, the need to deal with these documents increases as well. This chapter illustrates the methodology that has been used throughout the research, discussing all its parts and how these parts were adopted in the research.
APA, Harvard, Vancouver, ISO, and other styles
3

Rusu, Laura Irina, Wenny Rahayu, and David Taniar. "Mining Association Rules from XML Documents." In Web Data Management Practices, 79–103. IGI Global, 2007. http://dx.doi.org/10.4018/978-1-59904-228-2.ch004.

Full text
Abstract:
This chapter presents some of the existing mining techniques for extracting association rules out of XML documents, in the context of rapid changes in the Web knowledge discovery area. The initiative for this study was driven by the fast emergence of XML (eXtensible Markup Language) as a standard language for representing semi-structured data and as a new standard for exchanging information between different applications. The data exchanged as XML documents become richer every day, so the necessity not only to store these large volumes of XML data for later use, but to mine them as well to discover interesting information, has become obvious. The hidden knowledge can be used in various ways, for example to decide on a business issue or to make predictions about future e-customer behaviour in a web application. One type of knowledge which can be discovered in a collection of XML documents relates to association rules between parts of a document, and this chapter presents some of the top techniques for extracting them.
APA, Harvard, Vancouver, ISO, and other styles
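The core measures behind association-rule mining mentioned in the abstract above, support and confidence computed over transactions extracted from XML, can be sketched as follows. The document structure and element names here are invented for illustration; the chapter surveys full techniques, whereas this shows only the basic computation.

```python
import xml.etree.ElementTree as ET

# Toy clickstream log encoded as XML (structure is illustrative).
# Each <visit> is one transaction; each <page> is an item.
XML = """<log>
  <visit><page>home</page><page>cart</page></visit>
  <visit><page>home</page><page>cart</page><page>pay</page></visit>
  <visit><page>home</page></visit>
</log>"""

def transactions(xml_text):
    """Extract each <visit> as a set of items (pages)."""
    root = ET.fromstring(xml_text)
    return [frozenset(p.text for p in v.findall("page"))
            for v in root.findall("visit")]

def support(itemset, txns):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in txns) / len(txns)

def confidence(lhs, rhs, txns):
    """Confidence of the rule lhs -> rhs."""
    return support(lhs | rhs, txns) / support(lhs, txns)

txns = transactions(XML)
```

For instance, the rule {cart} -> {pay} holds with confidence support({cart, pay}) / support({cart}), which on this toy log is 0.5: half of the visits that reach the cart go on to pay.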
4

Chen, Yangjun. "Path-Oriented Queries and Tree Inclusion Problem." In Encyclopedia of Database Technologies and Applications, 472–79. IGI Global, 2005. http://dx.doi.org/10.4018/978-1-59140-560-3.ch079.

Full text
Abstract:
With the rapid advance of the Internet, management of structured documents such as XML documents has become more and more important (Marchiori, 1998). As a simplified version of SGML, XML is recommended by the W3C (World Wide Web Consortium, 1998a; 1998b) as a document description meta-language to exchange and manipulate data and documents on the WWW. It has been used to encode various types of data in a wide range of application domains, including a Chemical Markup Language for exchanging data about molecules, the Open Financial Exchange for swapping financial data between banks, and between banks and customers, as well as a Geographical Markup Language for searching geographical information (Bosak, 1997; Zhang & Gruenwald, 2001). In addition, a growing number of legacy systems are being adapted to output data in the form of XML documents.
APA, Harvard, Vancouver, ISO, and other styles
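A path-oriented query of the kind the article studies selects nodes by the labels along root-to-node paths rather than by position. As a small illustration, the limited XPath subset of Python's standard library can stand in for the full query languages discussed (the chemical-flavoured document and its element names are invented, echoing the Chemical Markup Language example in the abstract):

```python
import xml.etree.ElementTree as ET

# Illustrative molecule-like document (names are assumptions).
doc = ET.fromstring(
    "<mol><atom kind='C'/><ring><atom kind='O'/><atom kind='C'/></ring></mol>"
)

# A path-oriented query: all <atom> descendants with kind='C',
# regardless of which branch of the tree they sit on.
carbons = doc.findall(".//atom[@kind='C']")

# Document-order traversal of every <atom>, whatever its depth.
kinds = [a.get("kind") for a in doc.iter("atom")]
```

Evaluating `.//atom[@kind='C']` amounts to checking whether the pattern path is included in each root-to-node path of the document tree, which is the tree inclusion problem the article connects to query evaluation.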
5

de la Torre Díez, Isabel, Roberto Hornero Sánchez, Miguel López Coronado, and María Isabel López Gálvez. "Electronic Health Records System Using HL7 and DICOM in Ophthalmology." In Biomedical Knowledge Management, 42–60. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-266-4.ch004.

Full text
Abstract:
Health Level Seven (HL7) and Digital Imaging and Communications in Medicine (DICOM) standards are strongly influencing Electronic Health Records (EHRs) standardization. In this chapter, we present a web-based application, TeleOftalWeb 3.2, to store and exchange EHRs in ophthalmology by using HL7 Clinical Document Architecture (CDA) and DICOM standards. EHRs are stored in the native Extensible Markup Language (XML) database, dbXML 2.0. Application architecture is triple-layered with two database servers (MySQL 5.0 and dbXML) and one application server (Tomcat 5.5.9). Physicians can access and retrieve patient medical information and all types of medical images through web browsers. For security, all data transmissions are carried over encrypted Internet connections such as the Secure Sockets Layer (SSL) and Hypertext Transfer Protocol over SSL (HTTPS). The application verifies the standards related to privacy and confidentiality. The application is being tested by physicians from the University Institute of Applied Ophthalmobiology (IOBA), Spain.
APA, Harvard, Vancouver, ISO, and other styles
6

Algarín, Alberto De la Rosa, Steven A. Demurjian, Timoteus B. Ziminski, Yaira K. Rivera Sánchez, and Robert Kuykendall. "Securing XML with Role-Based Access Control." In E-Health and Telemedicine, 487–522. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-8756-1.ch025.

Full text
Abstract:
Today's applications are often constructed by bringing together functionality from multiple systems that utilize varied technologies (e.g. application programming interfaces, Web services, cloud computing, data mining) and alternative standards (e.g. XML, RDF, OWL, JSON, etc.) for communication. Most such applications achieve interoperability via the eXtensible Markup Language (XML), the de facto document standard for information exchange in domains such as library repositories, collaborative software development, health informatics, etc. The use of a common data format facilitates exchange and interoperability across heterogeneous systems, but challenges in the aspect of security arise (e.g. sharing policies, ownership, permissions, etc.). In such situations, one key security challenge is to integrate the local security (existing systems) into a global solution for the application being constructed and deployed. In this chapter, the authors present a Role-Based Access Control (RBAC) security framework for XML, which utilizes extensions to the Unified Modeling Language (UML) to generate eXtensible Access Control Markup Language (XACML) policies that target XML schemas and instances for any application, and provides both the separation and reconciliation of local and global security policies across systems. To demonstrate the framework, they provide a case study in health care, using the XML standards Health Level Seven's (HL7) Clinical Document Architecture (CDA) and the Continuity of Care Record (CCR). These standards are utilized for the transportation of private and identifiable information between stakeholders (e.g. a hospital with an electronic health record, a clinic's electronic health record, a pharmacy system, etc.), requiring not only a high level of security but also compliance to legal entities. 
For this reason, it is not only necessary to secure private information, but for its application to be flexible enough so that updating security policies that affect millions of documents does not incur a large monetary or computational cost; such privacy could similarly involve large banks and credit card companies that have similar information to protect to deter identity theft. The authors demonstrate the security framework with two in-house developed applications: a mobile medication management application and a medication reconciliation application. They also detail future trends that present even more challenges in providing security at global and local levels for platforms such as Microsoft HealthVault, Harvard SMART, Open mHealth, and open electronic health record systems. These platforms utilize XML, equivalent information exchange document standards (e.g., JSON), or semantically augmented structures (e.g., RDF and OWL). Even though the primary use of these platforms is in healthcare, they present a clear picture of how diverse the information exchange process can be. As a result, they represent challenges that are domain independent, thus becoming concrete examples of future trends and issues that require a robust approach towards security.
APA, Harvard, Vancouver, ISO, and other styles
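The enforcement idea in the abstract above, deriving from a role a filtered view of an XML document, can be sketched as follows. This is a deliberately simplified stand-in for the XACML policies the chapter generates: the roles, tag names, and allow-list policy format are invented for illustration, and real XACML evaluation is far richer.

```python
import xml.etree.ElementTree as ET

# Toy role -> readable-elements policy (all names are invented;
# a real system would express this in XACML against an XML schema).
POLICY = {"physician": {"record", "diagnosis", "medication"},
          "billing":   {"record", "medication"}}

RECORD = "<record><diagnosis>flu</diagnosis><medication>X</medication></record>"

def view_for(role, xml_text):
    """Return a copy of the document with every element the role
    may not read pruned out (simplified RBAC enforcement)."""
    allowed = POLICY[role]
    root = ET.fromstring(xml_text)  # fresh parse, original text untouched
    def prune(elem):
        for child in list(elem):
            if child.tag not in allowed:
                elem.remove(child)
            else:
                prune(child)
        return elem
    return ET.tostring(prune(root), encoding="unicode")
```

Under this sketch, a billing role sees the medication element but not the diagnosis, while a physician sees both: the same instance yields different views per role, which is the separation of local and global policy concerns the chapter's framework targets.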
7

Algarín, Alberto De la Rosa, Steven A. Demurjian, Timoteus B. Ziminski, Yaira K. Rivera Sánchez, and Robert Kuykendall. "Securing XML with Role-Based Access Control." In Architectures and Protocols for Secure Information Technology Infrastructures, 334–65. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-4514-1.ch013.

Full text
Abstract:
Today’s applications are often constructed by bringing together functionality from multiple systems that utilize varied technologies (e.g. application programming interfaces, Web services, cloud computing, data mining) and alternative standards (e.g. XML, RDF, OWL, JSON, etc.) for communication. Most such applications achieve interoperability via the eXtensible Markup Language (XML), the de facto document standard for information exchange in domains such as library repositories, collaborative software development, health informatics, etc. The use of a common data format facilitates exchange and interoperability across heterogeneous systems, but challenges in the aspect of security arise (e.g. sharing policies, ownership, permissions, etc.). In such situations, one key security challenge is to integrate the local security (existing systems) into a global solution for the application being constructed and deployed. In this chapter, the authors present a Role-Based Access Control (RBAC) security framework for XML, which utilizes extensions to the Unified Modeling Language (UML) to generate eXtensible Access Control Markup Language (XACML) policies that target XML schemas and instances for any application, and provides both the separation and reconciliation of local and global security policies across systems. To demonstrate the framework, they provide a case study in health care, using the XML standards Health Level Seven’s (HL7) Clinical Document Architecture (CDA) and the Continuity of Care Record (CCR). These standards are utilized for the transportation of private and identifiable information between stakeholders (e.g. a hospital with an electronic health record, a clinic’s electronic health record, a pharmacy system, etc.), requiring not only a high level of security but also compliance to legal entities. 
For this reason, it is not only necessary to secure private information, but for its application to be flexible enough so that updating security policies that affect millions of documents does not incur a large monetary or computational cost; such privacy could similarly involve large banks and credit card companies that have similar information to protect to deter identity theft. The authors demonstrate the security framework with two in-house developed applications: a mobile medication management application and a medication reconciliation application. They also detail future trends that present even more challenges in providing security at global and local levels for platforms such as Microsoft HealthVault, Harvard SMART, Open mHealth, and open electronic health record systems. These platforms utilize XML, equivalent information exchange document standards (e.g., JSON), or semantically augmented structures (e.g., RDF and OWL). Even though the primary use of these platforms is in healthcare, they present a clear picture of how diverse the information exchange process can be. As a result, they represent challenges that are domain independent, thus becoming concrete examples of future trends and issues that require a robust approach towards security.
APA, Harvard, Vancouver, ISO, and other styles