Academic literature on the topic 'Semantic Web RDF OWL Datenintegration'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Semantic Web RDF OWL Datenintegration.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Semantic Web RDF OWL Datenintegration"

1

Song, Lan, Li Xia Lei, Hong Wang, and Jun Hong Hua. "Research on Ontology-Based Semantic Reasoning." Advanced Materials Research 171-172 (December 2010): 136–39. http://dx.doi.org/10.4028/www.scientific.net/amr.171-172.136.

Full text
Abstract:
As a newly emerging web, the Semantic Web has recently drawn considerable attention from both academia and industry. Nowadays, RDF, RDF Schema, OWL, etc. have become commonly used languages in the Semantic Web. This paper describes the ontology language and description logic, shows the relationship between them, and finally presents a reasoning path for transitive closure in an ontology document.
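The transitive-closure reasoning the abstract mentions can be illustrated with a minimal sketch over pairs of class names. This is a generic fixpoint computation, not the paper's own algorithm, and the class names are invented for illustration:

```python
# Minimal sketch: transitive closure of an rdfs:subClassOf-style relation,
# computed as a fixpoint. Class names are invented for illustration.

def transitive_closure(pairs):
    """Repeatedly join (a, b) and (b, c) into (a, c) until no new pairs appear."""
    closure = set(pairs)
    while True:
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if new <= closure:
            return closure
        closure |= new

subclass_of = {
    ("Dog", "Mammal"),
    ("Mammal", "Animal"),
    ("Animal", "LivingThing"),
}

closed = transitive_closure(subclass_of)
assert ("Dog", "LivingThing") in closed  # entailed, though never asserted
```

An OWL reasoner derives such entailments as part of classification; the sketch only isolates the transitivity step.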
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Gang, Jie Lin, Qing Qi Long, and Zhi Juan Hu. "OWL-Based Description for Agent." Advanced Materials Research 217-218 (March 2011): 1218–23. http://dx.doi.org/10.4028/www.scientific.net/amr.217-218.1218.

Full text
Abstract:
This paper presents a detailed formal specification of agents and their properties and abilities, based on the Web Ontology Language (OWL). It allows an agent to be specified entirely using standard mark-up languages from the Semantic Web community, namely RDF, RDF Schema and OWL. The basic agent components are identified and their implementation using ontology development tools is described. The description improves the consistency, interoperability and maintainability of agent programs. Therefore, design errors in the early development stages can be efficiently detected and avoided.
3

Wang, Wen Li, Min Huang, and Ying Wang. "Construction of XBRL Semantic Metamodel and Knowledge Base Based on Ontology." Applied Mechanics and Materials 571-572 (June 2014): 1119–28. http://dx.doi.org/10.4028/www.scientific.net/amm.571-572.1119.

Full text
Abstract:
In order to improve the interoperability of XBRL-format financial reporting on the semantic level, a novel XBRL financial reporting metamodel and a fact data semantic metamodel are proposed, which use Semantic Web technologies and ontology theory. Then, an XBRL knowledge base is constructed based on this metamodel. Using the metamodel-based translation mechanism from XBRL to OWL/RDF, all the semantic information in XBRL taxonomy and instance documents is translated into OWL ontology and RDF instances. Finally, a knowledge base covering the semantic information of the financial reporting domain is constructed.
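The translation idea in the abstract — taxonomy elements become OWL classes, reported facts become RDF instance triples — can be sketched minimally. The concept names, URIs, and helper functions below are invented for illustration and are not the paper's metamodel:

```python
# Sketch of an XBRL-to-OWL/RDF translation, with invented names:
# taxonomy concepts -> OWL class declarations, facts -> instance triples.

def taxonomy_to_owl(concepts):
    """Turn XBRL-like concept names into OWL class declarations (as triples)."""
    return [(f"ex:{c}", "rdf:type", "owl:Class") for c in concepts]

def fact_to_rdf(report_id, concept, value):
    """Turn one reported fact into an RDF instance triple."""
    return (f"ex:report/{report_id}", f"ex:{concept}", value)

classes = taxonomy_to_owl(["Assets", "Liabilities"])
fact = fact_to_rdf("2014-Q1", "Assets", "1000000")
```

The real translation also carries over calculation and presentation linkbases; the sketch covers only the class/instance split.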
4

Sequeda, Juan F., Syed Hamid Tirmizi, Oscar Corcho, and Daniel P. Miranker. "Survey of directly mapping SQL databases to the Semantic Web." Knowledge Engineering Review 26, no. 4 (December 2011): 445–86. http://dx.doi.org/10.1017/s0269888911000208.

Full text
Abstract:
The Semantic Web anticipates integrated access to a large number of information sources on the Internet represented as Resource Description Framework (RDF). Given the large number of websites that are backed by SQL databases, methods that automate the translation of those databases to RDF are crucial. One approach, taken by a number of researchers, is to directly map the SQL schema to an equivalent Web Ontology Language (OWL) or RDF Schema representation, which in turn implies an RDF representation for the relational data. This paper reviews this research and derives a consolidated, overarching set of translation rules expressible as a stratified Datalog program. We present all the possible key combinations in an SQL schema and consider their implied semantic properties. We review the approaches and characterize them with respect to the scope of their coverage of SQL constructs.
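The direct-mapping idea the survey consolidates — table to class, column to property, row to resource — can be sketched in a few lines. The base URI, table layout, and data below are invented for illustration; the W3C Direct Mapping and the surveyed approaches differ in URI and key handling:

```python
# Sketch of a direct mapping from a relational table to RDF triples:
# each row becomes a subject URI, each column a predicate.
# Base URI, table, and rows are invented for illustration.

BASE = "http://example.org/"

def direct_map(table, pk, rows):
    triples = []
    for row in rows:
        subject = f"{BASE}{table}/{row[pk]}"
        triples.append((subject, "rdf:type", f"{BASE}{table}"))
        for col, val in row.items():
            triples.append((subject, f"{BASE}{table}#{col}", val))
    return triples

rows = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
triples = direct_map("Person", "id", rows)
```

Foreign keys, which the paper analyses in depth, would additionally map to object properties linking two subject URIs.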
5

Nurmikko-Fuller, Terhi, Daniel Bangert, Alan Dix, David Weigl, and Kevin Page. "Building Prototypes Aggregating Musicological Datasets on the Semantic Web." Bibliothek Forschung und Praxis 42, no. 2 (June 1, 2018): 206–21. http://dx.doi.org/10.1515/bfp-2018-0025.

Full text
Abstract:
Semantic Web technologies such as RDF, OWL, and SPARQL can be successfully used to bridge complementary musicological information. In this paper, we describe, compare, and evaluate the datasets and workflows used to create two such aggregator projects: In Collaboration with In Concert, and JazzCats, both of which bring together a cluster of smaller projects containing concert and performance metadata.
6

Viola, Fabio, Luca Roffia, Francesco Antoniazzi, Alfredo D’Elia, Cristiano Aguzzi, and Tullio Salmon Cinotti. "Interactive 3D Exploration of RDF Graphs through Semantic Planes." Future Internet 10, no. 8 (August 17, 2018): 81. http://dx.doi.org/10.3390/fi10080081.

Full text
Abstract:
This article presents Tarsier, a tool for the interactive 3D visualization of RDF graphs. Tarsier is mainly intended to support teachers introducing students to Semantic Web data representation formalisms and developers in the debugging of applications based on Semantic Web knowledge bases. The tool proposes the metaphor of semantic planes as a way to visualize an RDF graph. A semantic plane contains all the RDF terms sharing a common concept; it can be created, and further split into several planes, through a set of UI controls or through SPARQL 1.1 queries, with full support for OWL and RDFS. Thanks to the 3D visualization, links between semantic planes can be highlighted, and the user can navigate within the 3D scene to find the best perspective to analyze data. Data can be gathered from generic SPARQL 1.1 protocol services. We believe that Tarsier will enhance the human friendliness of semantic technologies by: (1) helping newcomers assimilate new data representation formats; and (2) increasing inspection capabilities to detect relevant situations even in complex RDF graphs.
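At its core, the "semantic plane" metaphor partitions the terms of an RDF graph by a shared concept, most simply by rdf:type. A minimal sketch of that grouping step, with invented triples (Tarsier itself builds planes from SPARQL 1.1 queries and renders them in 3D):

```python
# Sketch: group subjects of an RDF graph into "planes" keyed by rdf:type.
# The graph data is invented for illustration.
from collections import defaultdict

def semantic_planes(triples):
    """Return a mapping from class to the set of subjects typed with it."""
    planes = defaultdict(set)
    for s, p, o in triples:
        if p == "rdf:type":
            planes[o].add(s)
    return dict(planes)

graph = [
    ("ex:alice", "rdf:type", "ex:Person"),
    ("ex:bob", "rdf:type", "ex:Person"),
    ("ex:acme", "rdf:type", "ex:Company"),
    ("ex:alice", "ex:worksFor", "ex:acme"),
]
planes = semantic_planes(graph)
```

The ex:worksFor triple is what the tool would render as a link crossing two planes.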
7

Sireteanu, Alexandru Napoleon. "A Survey of Web Ontology Languages and Semantic Web Services." Annals of the Alexandru Ioan Cuza University - Economics 60, no. 1 (July 1, 2013): 42–53. http://dx.doi.org/10.2478/aicue-2013-0005.

Full text
Abstract:
In the beginning, the World Wide Web was syntactic and its content was only readable by humans. The modern web combines existing web technologies with knowledge representation formalisms. In this sense, the Semantic Web proposes the mark-up of content on the web using formal ontologies that structure essential data for the purpose of comprehensive machine understanding. On the syntactic level, standardization is an important topic. Many standards which can be used to integrate different information sources have evolved. Besides classical database interfaces like ODBC, web-oriented standard languages like HTML, XML, RDF and OWL are increasing in importance. As the World Wide Web offers the greatest potential for sharing information, we base our paper on these evolving standards.
8

Schlicht, Anne, and Heiner Stuckenschmidt. "Peer-to-Peer Reasoning for Interlinked Ontologies." International Journal of Semantic Computing 04, no. 01 (March 2010): 27–58. http://dx.doi.org/10.1142/s1793351x10000948.

Full text
Abstract:
The Semantic Web is commonly perceived as a web of partially interlinked machine-readable data. This data is inherently distributed and resembles the structure of the web in terms of resources being provided by different parties at different physical locations. A number of infrastructures for storing and querying distributed semantic web data, primarily encoded in RDF, have been developed. While there are first attempts at integrating RDF Schema reasoning into distributed query processing, almost all work on description logic reasoning as a basis for implementing inference in the Web Ontology Language OWL still assumes a centralized approach, where the complete terminology has to be present on a single system and all inference steps are carried out on this system. We have designed and implemented a distributed reasoning method that preserves soundness and completeness of reasoning under the original OWL import semantics and has beneficial properties regarding parallel computation and the overhead caused by communication effort and additional derivations. The method is based on sound and complete resolution methods for the description logic [Formula: see text] that we modify to work in a distributed setting.
9

Khamis, Khamis Abdul Latif, Luo Zhong, and Hua Zhu Song. "Study on Digital Content Representation from Direct Label Graph to RDF/OWL Language into Semantic Web." Applied Mechanics and Materials 644-650 (September 2014): 3304–9. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.3304.

Full text
Abstract:
An increasing number of publications and the growing consumption of media data on the social and dynamic web have allowed ontology technology to grow unpredictably. News agencies, cultural heritage sites, social media companies and ordinary users contribute a large portion of media content across the web community. These huge amounts of media content are generally accessed via standardized and proprietary metadata formats through the Semantic Web. But nearly all cases need specific, standardized, and more expressive methods to represent media data in the knowledge representation paradigm. This paper proposes proper methods to express media ontology based on the nature of media data. At first, an RDF graph representation model is used to show the expressive power of domain classification with direct label graph concepts. Secondly, events and object class domains are used to express the relational properties of media content. Finally, the events and object class domain is expressed in the RDF/OWL language, as the preferable and standardized language to represent media data in the Semantic Web.
10

Yeh, Ching-Long, Chun-Fu Chang, and Po-Shen Lin. "Ontology-Based Personal Annotation Management on Semantic Peer Network to Facilitating Collaborations in e-Learning." International Journal of Handheld Computing Research 2, no. 2 (April 2011): 20–33. http://dx.doi.org/10.4018/jhcr.2011040102.

Full text
Abstract:
The trend of services on the web is toward collaborative use of web resources. Semantic Web technology is used to build an integrated infrastructure for the new services. This paper develops a distributed knowledge-based system using RDF/OWL technology on peer-to-peer networks to provide the basis for building personal social collaboration services for e-Learning. This paper extends the current tools accompanying lecture content to make annotations sharable using the distributed knowledge base.

Dissertations / Theses on the topic "Semantic Web RDF OWL Datenintegration"

1

Pérez de Laborda Schwankhart, Cristian. "Incorporating relational data into the Semantic Web." [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=982420390.

Full text
2

Koron, Ronald Dean. "Developing a Semantic Web Crawler to Locate OWL Documents." Wright State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wright1347937844.

Full text
3

Langer, André, and Martin Gaedke. "SemProj: ein Semantic Web - basiertes System zur Unterstützung von Workflow- und Projektmanagement." [S.l. : s.n.], 2008.

Find full text
4

Darr, Timothy, Ronald Fernandes, John Hamilton, Charles Jones, and Annette Weisenseel. "Semantic Web Technologies for T&E Metadata Verification and Validation." International Foundation for Telemetering, 2009. http://hdl.handle.net/10150/606008.

Full text
Abstract:
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada
The vision of the semantic web is to unleash the next generation of information sharing and interoperability by encoding meaning into the symbols that are used to describe various computational capabilities within the World Wide Web or other networks. This paper describes the application of semantic web technologies to Test and Evaluation (T&E) metadata verification and validation. Verification is a quality process that is used to evaluate whether or not a product, service, or system complies with a regulation, specification, or conditions imposed at the start of a development phase or which exists in the organization. Validation is the process of establishing documented evidence that provides a high degree of assurance that a product, service, or system accomplishes its intended requirements. While this often involves acceptance and suitability with external customers, automation provides significant assistance to the customers.
5

Lehmann, Jens. "Learning OWL Class Expressions." Doctoral thesis, Universitätsbibliothek Leipzig, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-38351.

Full text
Abstract:
With the advent of the Semantic Web and Semantic Technologies, ontologies have become one of the most prominent paradigms for knowledge representation and reasoning. The popular ontology language OWL, based on description logics, became a W3C recommendation in 2004 and a standard for modelling ontologies on the Web. In the meantime, many studies and applications using OWL have been reported in research and industrial environments, many of which go beyond Internet usage and employ the power of ontological modelling in other fields such as biology, medicine, software engineering, knowledge management, and cognitive systems. However, recent progress in the field faces a lack of well-structured ontologies with large amounts of instance data due to the fact that engineering such ontologies requires a considerable investment of resources. Nowadays, knowledge bases often provide large volumes of data without sophisticated schemata. Hence, methods for automated schema acquisition and maintenance are sought. Schema acquisition is closely related to solving typical classification problems in machine learning, e.g. the detection of chemical compounds causing cancer. In this work, we investigate both the underlying machine learning techniques and their application to knowledge acquisition in the Semantic Web. In order to leverage machine-learning approaches for solving these tasks, it is required to develop methods and tools for learning concepts in description logics or, equivalently, class expressions in OWL.

In this thesis, it is shown that methods from Inductive Logic Programming (ILP) are applicable to learning in description logic knowledge bases. The results provide foundations for the semi-automatic creation and maintenance of OWL ontologies, in particular in cases when extensional information (i.e. facts, instance data) is abundantly available, while corresponding intensional information (schema) is missing or not expressive enough to allow powerful reasoning over the ontology in a useful way. Such situations often occur when extracting knowledge from different sources, e.g. databases, or in collaborative knowledge engineering scenarios, e.g. using semantic wikis. It can be argued that being able to learn OWL class expressions is a step towards enriching OWL knowledge bases in order to enable powerful reasoning, consistency checking, and improved querying possibilities. In particular, plugins for OWL ontology editors based on learning methods are developed and evaluated in this work. The developed algorithms are not restricted to ontology engineering and can handle other learning problems. Indeed, they lend themselves to generic use in machine learning in the same way as ILP systems do. The main difference, however, is the employed knowledge representation paradigm: ILP traditionally uses logic programs for knowledge representation, whereas this work rests on description logics and OWL. This difference is crucial when considering Semantic Web applications as target use cases, as such applications hinge centrally on the chosen knowledge representation format for knowledge interchange and integration. The work in this thesis can be understood as a broadening of the scope of research and applications of ILP methods. This goal is particularly important since the number of OWL-based systems is already increasing rapidly and can be expected to grow further in the future. The thesis starts by establishing the necessary theoretical basis and continues with the specification of algorithms. It also contains their evaluation and, finally, presents a number of application scenarios.

The research contributions of this work are threefold. The first contribution is a complete analysis of desirable properties of refinement operators in description logics. Refinement operators are used to traverse the target search space and are, therefore, a crucial element in many learning algorithms. Their properties (completeness, weak completeness, properness, redundancy, infinity, minimality) indicate whether a refinement operator is suitable for being employed in a learning algorithm. The key research question is which of those properties can be combined. It is shown that there is no ideal, i.e. complete, proper, and finite, refinement operator for expressive description logics, which indicates that learning in description logics is a challenging machine learning task. A number of other new results for different property combinations are also proven. The need for these investigations has already been expressed in several articles prior to this PhD work. The theoretical limitations, which were shown as a result of these investigations, provide clear criteria for the design of refinement operators. In the analysis, as few assumptions as possible were made regarding the used description language. The second contribution is the development of two refinement operators. The first operator supports a wide range of concept constructors and it is shown that it is complete and can be extended to a proper operator. It is the most expressive operator designed for a description language so far. The second operator uses the light-weight language EL and is weakly complete, proper, and finite. It is straightforward to extend it to an ideal operator, if required. It is the first published ideal refinement operator in description logics. While the two operators differ a lot in their technical details, they both use background knowledge efficiently. The third contribution is the actual learning algorithms using the introduced operators. New redundancy elimination and infinity-handling techniques are introduced in these algorithms.

According to the evaluation, the algorithms produce very readable solutions, while their accuracy is competitive with the state-of-the-art in machine learning. Several optimisations for achieving scalability of the introduced algorithms are described, including a knowledge base fragment selection approach, a dedicated reasoning procedure, and a stochastic coverage computation approach. The research contributions are evaluated on benchmark problems and in use cases. Standard statistical measurements such as cross validation and significance tests show that the approaches are very competitive. Furthermore, the ontology engineering case study provides evidence that the described algorithms can solve the target problems in practice. A major outcome of the doctoral work is the DL-Learner framework. It provides the source code for all algorithms and examples as open-source and has been incorporated in other projects.
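The downward refinement at the heart of the thesis can be illustrated with a toy operator that specialises a named class by replacing it with a direct subclass. The hierarchy and operator below are invented and far simpler than the thesis's operators, which work on full OWL class expressions:

```python
# Toy downward refinement operator over a named-class hierarchy.
# The hierarchy is invented; real DL refinement operators also handle
# intersections, existential restrictions, negation, etc.

SUBCLASSES = {
    "Thing": ["Person", "Place"],
    "Person": ["Researcher", "Student"],
}

def refine(cls):
    """Return the direct specialisations (one refinement step) of a class."""
    return SUBCLASSES.get(cls, [])

# A learner would start at "Thing" and repeatedly apply refine(),
# keeping the candidates that best cover positive examples.
candidates = refine("Thing")
```

Properties such as completeness or properness, analysed in the thesis, are statements about how such an operator traverses the space of all expressions.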
6

Qu, Xiaoyan Angela. "Discovery and Prioritization of Drug Candidates for Repositioning Using Semantic Web-based Representation of Integrated Diseasome-Pharmacome Knowledge." University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1254403900.

Full text
7

Santandrea, Luca. "Semantic web approach for italian graduates' surveys: the AlmaLaurea ontology proposal." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15884/.

Full text
Abstract:
The growing development and promotion of data transparency in public administration covers many aspects, including university education. Numerous datasets released in Linked Open Data format are currently available at the national and international level. Among the publicly available information, concepts concerning the employment and the number of graduates stand out. Despite the progress made, the lack of a standard methodology for describing statistical information on graduates makes it difficult to compare certain facts across different data sources. At the national level, the AlmaLaurea surveys fill the information gap caused by the heterogeneity of sources by offering centralized statistics on the profile of graduates and their employment status, updated annually. The aim of this thesis project is the creation of a domain ontology that describes various characteristics of graduates, while at the same time promoting the structured definition of AlmaLaurea data and its subsequent publication in the Linked Open Data context. The project, carried out with the help of Semantic Web technologies, finally proposes the creation of a SPARQL endpoint and a web interface for querying and visualizing the structured data.
8

Ouksili, Hanane. "Exploration et interrogation de données RDF intégrant de la connaissance métier." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLV069.

Full text
Abstract:
An increasing number of datasets is published on the Web, expressed in languages proposed by the W3C to describe Web data, such as RDF, RDF(S) and OWL. The Web has become an unprecedented source of information available to users and applications, but the meaningful usage of this information source is still a challenge. Querying these data sources requires knowledge of a formal query language such as SPARQL, but it mainly suffers from the lack of knowledge about the source itself, which is required in order to target the resources and properties relevant for the specific needs of the application. The work described in this thesis addresses the exploration of RDF data sources. This exploration is done in two complementary ways: discovering the themes or topics representing the content of the data source, and providing support for an alternative way of querying the data sources by using keywords instead of a query formulated in SPARQL. The proposed exploration approach thus combines two complementary strategies: thematic exploration and keyword search. Theme discovery from an RDF dataset consists in identifying a set of sub-graphs, not necessarily disjoint, such that each one represents a set of semantically related resources defining a theme according to the point of view of the user. These themes can be used to enable a thematic exploration of the data source, where users can target the relevant themes and limit their exploration to the resources composing them. Keyword search is a simple and intuitive way of querying data sources. In the case of RDF datasets, this search raises several problems, such as indexing graph elements, identifying the graph fragments relevant for a specific query, aggregating these fragments to build the query results, and ranking the results.

In our work, we address these different problems and propose an approach which takes a keyword query as input and provides a ranked list of sub-graphs, each one representing a candidate result for the query. For both keyword search and theme identification in RDF data sources, we have taken into account external knowledge in order to capture the users' needs, or to bridge the gap between the concepts invoked in a query and those of the data source. This external knowledge could be domain knowledge allowing the user's need expressed by a query to be refined, or the definition of themes to be sharpened. In our work, we have proposed a formalization of this external knowledge and have introduced the notion of pattern to this end. These patterns represent equivalences between properties and paths in the graph representing the source. They are evaluated and integrated in the exploration process to improve the quality of the results.
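The keyword-search pipeline the abstract outlines — match query keywords against graph elements, collect the relevant fragments, rank them — can be illustrated with a deliberately naive sketch. The scoring function and data are invented for illustration; the thesis's approach builds and ranks aggregated sub-graphs rather than single triples:

```python
# Naive sketch of keyword search over RDF triples: score each triple by
# how many query keywords appear in its terms, return matches ranked.
# The graph and scoring are invented for illustration.

def keyword_search(triples, keywords):
    """Return matching triples, best-scoring first."""
    def score(triple):
        text = " ".join(triple).lower()
        return sum(1 for kw in keywords if kw.lower() in text)
    hits = [(score(t), t) for t in triples if score(t) > 0]
    return [t for s, t in sorted(hits, key=lambda x: -x[0])]

graph = [
    ("ex:thesis1", "ex:topic", "keyword search"),
    ("ex:thesis1", "ex:author", "Ouksili"),
    ("ex:thesis2", "ex:topic", "graph exploration"),
]
results = keyword_search(graph, ["keyword", "graph"])
```

A real system would additionally expand matches through the patterns (property and path equivalences) described above.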
9

Croset, Samuel. "Drug repositioning and indication discovery using description logics." Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/246260.

Full text
Abstract:
Drug repositioning is the discovery of new indications for approved or failed drugs. This practice is commonly done within the drug discovery process in order to adjust or expand the application line of an active molecule. Nowadays, an increasing number of computational methodologies aim at predicting repositioning opportunities in an automated fashion. Some approaches rely on the direct physical interaction between molecules and protein targets (docking) and some methods consider more abstract descriptors, such as a gene expression signature, in order to characterise the potential pharmacological action of a drug (Chapter 1). On a fundamental level, repositioning opportunities exist because drugs perturb multiple biological entities (on- and off-targets), themselves involved in multiple biological processes. Therefore, a drug can play multiple roles or exhibit various modes of action responsible for its pharmacology. The work done for my thesis aims at characterising these various modes and mechanisms of action for approved drugs, using a mathematical framework called description logics. In this regard, I first specify how living organisms can be compared to complex black box machines and how this analogy can help to capture biomedical knowledge using description logics (Chapter 2). Secondly, the theory is implemented in the Functional Therapeutic Chemical Classification System (FTC - https://www.ebi.ac.uk/chembl/ftc/), a resource defining over 20,000 new categories representing the modes and mechanisms of action of approved drugs. The FTC also indexes over 1,000 approved drugs, which have been classified into the mode of action categories using automated reasoning. The FTC is evaluated against a gold standard, the Anatomical Therapeutic Chemical Classification System (ATC), in order to characterise its quality and content (Chapter 3). Finally, from the information available in the FTC, a series of drug repositioning hypotheses were generated and made publicly available via a web application (https://www.ebi.ac.uk/chembl/research/ftc-hypotheses). A subset of the hypotheses related to cardiovascular hypertension as well as to Alzheimer's disease is discussed in more detail, as an example of an application (Chapter 4). The work performed illustrates how valuable new biomedical knowledge can be automatically generated by integrating and leveraging the content of publicly available resources using description logics and automated reasoning. The newly created classification (FTC) is a first attempt to formally and systematically characterise the function or role of approved drugs using the concept of mode of action. The open hypotheses derived from the resource are available to the community to analyse and design further experiments.
10

Langer, André. "SemProj: Ein Semantic Web – basiertes System zur Unterstützung von Workflow- und Projektmanagement." Master's thesis, Universitätsbibliothek Chemnitz, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200800307.

Full text
Abstract:
With more than 120 million registered internet addresses (as of March 2007), the Internet today represents the largest information medium of our time. Every day the Internet grows by an unmanageable amount of information. This information is frequently stored in documents that use the Hypertext Markup Language for markup. This system has proven itself since the beginning of the nineties, because it enables the individual user to annotate document content with presentation instructions in a simple and efficient way and to publish it independently on the Internet. When the corresponding resource is retrieved, this layout information can easily be evaluated by a computer program and used to render the content. Although both the layout information and the actual document content are available in a textual format, machines have so far been able to process the users' text content only to a very limited extent. While human users have no difficulty identifying the meaning of individual texts on a web page, to a computer they are in principle merely a sequence of ASCII characters. If it became possible for a computer program to capture and process the meaning of information efficiently, entirely new applications with results of much higher quality would become possible in the worldwide data network. Users could pose queries to dedicated agents which autonomously set out in search of suitable results; information from different sources could not only be linked on a semantic level, but new information not explicitly contained in it could even be derived. Approaches for annotating documents with semantic metadata have existed for some time.
For a long time, however, this meant providing the information redundantly in a separate document format, which is why none of the concepts gained acceptance among private users; as a consequence, particular research interest has arisen in recent months in finding ways to embed semantic information directly into existing HTML documents without significant additional effort. This diploma thesis investigates these new possibilities in the area of collaborative work. The goal is to develop a web application for handling typical project management tasks which can analyse, prepare and process any information from a semantic point of view and which can be used across systems, independently of the concrete application domain and platform. The concepts Microformats and RDFa are highlighted in particular and examined for weaknesses and future potential.
With currently more than 120 million registered internet domains (March 2007), the World Wide Web arguably represents the most comprehensive information resource of all time. The amount of available information increases by a staggering bulk of data every day. This information is often embedded in documents which utilize the Hypertext Markup Language. This enables the user to mark up certain layout properties of a text in an easy and efficient fashion and to publish the final document containing both layout and data information. A computer application is then able to extract style information from the document resource and to use it in order to render the resulting website. Although layout information and data are both represented in a textual manner, machines have so far hardly been capable of processing user content. Whereas human consumers have no problem identifying and understanding the sense of several paragraphs on a website, to a machine they basically represent only a concatenation of ASCII characters. If it were possible to efficiently disclose the sense of a word or phrase to a computer program for further processing, astounding new applications with high-quality results would become possible. Users could create queries for specialized agents which autonomously search the web for adequate matches. Moreover, the data of multiple information sources could be linked and processed together on a semantic level, so that, above all, new information not explicitly stated could be inferred. Approaches for enhancing documents with semantic metadata already exist; however, many of them involve the redundant provision of this information in a specialized document format. As a consequence, none of these concepts became a widely used method, and research turned again to finding ways to embed semantic annotations into an ordinary HTML document without huge additional effort.
The present thesis focuses on an analysis of these new concepts and possibilities in the area of collaborative work. The objective is to develop the prototype of a web application for managing typical challenges in the realm of project and workflow management. Any available information should be processable from a semantic viewpoint, including analysis, conditioning and reuse, independently of a specific application domain and system platform. Microformats and RDFa are two of these relatively new concepts which enable an application to extract semantic information from a document resource; they are therefore examined in particular and compared with respect to their advantages and disadvantages in the context of a “Semantic Web”.
APA, Harvard, Vancouver, ISO, and other styles
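The core idea of the thesis, embedding semantic statements in ordinary HTML via RDFa attributes, can be sketched with the standard-library HTML parser. The markup, the `dc:`/`ex:` vocabulary terms, and the extraction logic below are invented for illustration; a real application would use a full RDFa 1.1 processor.

```python
# Hedged sketch of RDFa-style extraction: pulling (subject, property, value)
# statements out of attributes embedded in ordinary HTML. The vocabulary
# and markup are hypothetical; this is not a complete RDFa processor.
from html.parser import HTMLParser

class RDFaExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.subject = None            # current subject set by @about
        self.pending_property = None   # @property whose value is element text
        self.triples = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "about" in a:                    # new subject resource
            self.subject = a["about"]
        if "property" in a:
            if "content" in a:              # literal value given in @content
                self.triples.append((self.subject, a["property"], a["content"]))
            else:                           # literal value is the element text
                self.pending_property = a["property"]

    def handle_data(self, data):
        if self.pending_property and data.strip():
            self.triples.append((self.subject, self.pending_property, data.strip()))
            self.pending_property = None

# hypothetical project-management markup of the kind the thesis targets
html_doc = """
<div about="#task-1">
  <span property="dc:title">Prepare kickoff meeting</span>
  <span property="ex:deadline" content="2008-06-01">next month</span>
</div>
"""

parser = RDFaExtractor()
parser.feed(html_doc)
```

The same HTML renders normally in a browser, while the embedded statements remain machine-extractable, which is exactly the "no redundant second format" property the thesis argues for.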

Books on the topic "Semantic Web RDF OWL Datenintegration"

1

Allemang, Dean, and James Hendler. Semantic Web for the Working Ontologist: Modeling in RDF, RDFS and OWL. Amsterdam: Morgan Kaufmann Publishers/Elsevier, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Segaran, Toby. Programming the Semantic Web. Beijing: O'Reilly, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Evans, Colin. Programming the Semantic Web: Build Flexible Applications with Graph Data. Sebastopol, USA: O'Reilly, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Alesso, H. P. Thinking on the Web. John Wiley & Sons Inc, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Alesso, H. P. Developing Semantic Web Services. CRC Press LLC, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Alesso, H. Peter, and Craig F. Smith. Developing Semantic Web Services. AK Peters, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Developing Semantic Web Services. John Wiley & Sons, Inc., 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Semantische Technologien: Grundlagen - Konzepte - Anwendungen. Heidelberg, Germany: Spektrum Akademischer Verlag, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Semantic Web RDF OWL Datenintegration"

1

Allemang, Dean, Irene Polikoff, and Ralph Hodgson. "Enterprise Architecture Reference Modeling in OWL/RDF." In The Semantic Web – ISWC 2005, 844–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11574620_60.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

van Assem, Mark, Maarten R. Menken, Guus Schreiber, Jan Wielemaker, and Bob Wielinga. "A Method for Converting Thesauri to RDF/OWL." In The Semantic Web – ISWC 2004, 17–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30475-3_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

ter Horst, Herman J. "Combining RDF and Part of OWL with Rules: Semantics, Decidability, Complexity." In The Semantic Web – ISWC 2005, 668–84. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11574620_48.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Paris, Pierre-Henri, Fayçal Hamdi, and Samira Si-said Cherfi. "A Study About the Use of OWL 2 Semantics in RDF-Based Knowledge Graphs." In The Semantic Web: ESWC 2020 Satellite Events, 181–85. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62327-2_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Altamirano Di Luca, Marlon A., and Neilys González Benítez. "Comparative Study of RDF and OWL Ontology Languages as Support for the Semantic Web." In Communications in Computer and Information Science, 3–12. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-42517-3_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Arenas, Marcelo, Georg Gottlob, and Andreas Pieris. "Querying the Semantic Web via Rules." In Applications and Practices in Ontology Design, Extraction, and Reasoning. IOS Press, 2020. http://dx.doi.org/10.3233/ssw200044.

Full text
Abstract:
The problem of querying RDF data is a central issue for the development of the Semantic Web. The query language SPARQL has become the standard language for querying RDF since its W3C standardization in 2008. However, the 2008 version of this language missed some important functionalities: reasoning capabilities to deal with RDFS and OWL vocabularies, navigational capabilities to exploit the graph structure of RDF data, and a general form of recursion much needed to express some natural queries. To overcome those limitations, a new version of SPARQL, called SPARQL 1.1, was released in 2013, which includes entailment regimes for RDFS and OWL vocabularies, and a mechanism to express navigation patterns through regular expressions. Nevertheless, there are useful navigation patterns that cannot be expressed in SPARQL 1.1, and the language lacks a general mechanism to express recursive queries. This chapter is a gentle introduction to a tractable rule-based query language, in fact, an extension of Datalog with value invention, stratified negation, and falsum, that is powerful enough to define SPARQL queries enhanced with the desired functionalities, focusing on a core fragment of the OWL 2 QL profile of OWL 2.
APA, Harvard, Vancouver, ISO, and other styles
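The recursion this abstract discusses can be made concrete with a minimal sketch: the transitive closure that SPARQL 1.1 writes as a property path (e.g. `ex:subClassOf+`) computed as a naive Datalog fixpoint. The triples and the `ex:` prefix are illustrative, not from the chapter.

```python
# Hedged sketch of a Datalog-style recursive query over RDF-like triples:
# computing all pairs reachable over ex:subClassOf, i.e. the transitive
# closure SPARQL 1.1 expresses with the property path ex:subClassOf+.
# Data and prefix are hypothetical.

triples = {
    ("ex:Beagle", "ex:subClassOf", "ex:Dog"),
    ("ex:Dog",    "ex:subClassOf", "ex:Mammal"),
    ("ex:Mammal", "ex:subClassOf", "ex:Animal"),
}

def transitive_closure(facts, predicate):
    """Naive fixpoint for the rule: path(X,Z) :- path(X,Y), path(Y,Z),
    seeded with the direct edges for the given predicate."""
    closure = {(s, o) for s, p, o in facts if p == predicate}
    while True:
        # join the closure with itself to derive new pairs
        new = {(s, o2) for (s, o) in closure for (o_, o2) in closure if o == o_}
        if new <= closure:          # fixpoint reached: nothing new derived
            return closure
        closure |= new

paths = transitive_closure(triples, "ex:subClassOf")
```

Naive evaluation recomputes all joins each round; the Datalog extensions the chapter introduces admit far more efficient (and more expressive) evaluation, but the fixpoint idea is the same.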
7

Barzdins, Guntis. "From Databases to Ontologies." In Semantic Web Engineering in the Knowledge Society, 242–66. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-112-4.ch010.

Full text
Abstract:
This chapter introduces the UML profile for OWL as an essential instrument for bridging the gap between legacy relational databases and OWL ontologies. We address one of the long-standing relational database design problems, where the initial conceptual model (a semantically clear domain conceptualization ontology) gets “lost” during conversion into the normalized database schema. The problem is that such “loss” makes the database inaccessible for direct query by domain experts familiar with the conceptual model only. This problem can be avoided by exporting the database into RDF according to the original conceptual model (OWL ontology) and formulating semantically clear queries in SPARQL over the RDF database. Through a detailed example we show how the UML/OWL profile facilitates this new and promising approach.
APA, Harvard, Vancouver, ISO, and other styles
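The export step the chapter describes, mapping relational rows to RDF triples that follow the conceptual model rather than the normalized schema, can be sketched as follows. The table, columns, and `ex:` ontology terms are invented for illustration.

```python
# Hedged sketch of a relational-to-RDF export: each row of a (hypothetical)
# person table becomes triples phrased in terms of the conceptual model
# (ex:Person, ex:worksFor), not the normalized schema, so that domain
# experts can query it directly in SPARQL.

rows = [
    {"person_id": 1, "name": "Alice", "employer_id": 10},
    {"person_id": 2, "name": "Bob",   "employer_id": 10},
]

def row_to_triples(row):
    # mint a subject URI for the row, then emit ontology-level statements
    subject = f"ex:person/{row['person_id']}"
    return [
        (subject, "rdf:type", "ex:Person"),
        (subject, "ex:name", row["name"]),
        (subject, "ex:worksFor", f"ex:org/{row['employer_id']}"),
    ]

graph = [t for row in rows for t in row_to_triples(row)]
```

Real deployments use declarative mapping languages for this step rather than hand-written converters, but the shape of the transformation is the same.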
8

Farkas, Csilla. "Data Confidentiality on the Semantic Web." In Web and Information Security, 73–90. IGI Global, 2006. http://dx.doi.org/10.4018/978-1-59140-588-7.ch004.

Full text
Abstract:
This chapter investigates the threat of unwanted Semantic Web inferences. We survey the current efforts to detect and remove unwanted inferences, identify research gaps, and recommend future research directions. We begin with a brief overview of Semantic Web technologies and reasoning methods, followed by a description of the inference problem in traditional databases. In the context of the Semantic Web, we study two types of inferences: (1) entailments defined by the formal semantics of the Resource Description Framework (RDF) and the RDF Schema (RDFS) and (2) inferences supported by semantic languages like the Web Ontology Language (OWL). We compare the Semantic Web inferences to the inferences studied in traditional databases. We show that the inference problem exists on the Semantic Web and that existing security methods do not fully prevent indirect data disclosure via inference channels.
APA, Harvard, Vancouver, ISO, and other styles
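The inference threat the chapter studies can be illustrated with RDFS entailment rule rdfs9: if C is a subclass of D and x has type C, then x has type D, so a triple that was never explicitly asserted becomes derivable. The class and individual names below are hypothetical.

```python
# Hedged sketch of an RDFS inference channel via rule rdfs9:
# (x rdf:type C) and (C rdfs:subClassOf D) entail (x rdf:type D).
# The sensitive membership below is never stated explicitly, yet it
# follows from the formal semantics. All names are illustrative.

explicit = {
    ("ex:CovertAgent", "rdfs:subClassOf", "ex:Employee"),
    ("ex:alice", "rdf:type", "ex:CovertAgent"),
}

def rdfs9_closure(triples):
    """Apply rule rdfs9 until no new rdf:type triples are derived."""
    facts = set(triples)
    while True:
        derived = {
            (x, "rdf:type", d)
            for (x, p1, c) in facts if p1 == "rdf:type"
            for (c2, p2, d) in facts if p2 == "rdfs:subClassOf" and c == c2
        }
        if derived <= facts:    # fixpoint: nothing new entailed
            return facts
        facts |= derived

entailed = rdfs9_closure(explicit)
```

A security policy that only inspects asserted triples would miss the derived statement, which is exactly the gap between syntactic access control and semantic entailment that the chapter surveys.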
9

Farkas, Csilla. "Data Confidentiality on the Semantic Web." In Information Security and Ethics, 3309–20. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-937-3.ch221.

Full text
Abstract:
This chapter investigates the threat of unwanted Semantic Web inferences. We survey the current efforts to detect and remove unwanted inferences, identify research gaps, and recommend future research directions. We begin with a brief overview of Semantic Web technologies and reasoning methods, followed by a description of the inference problem in traditional databases. In the context of the Semantic Web, we study two types of inferences: (1) entailments defined by the formal semantics of the Resource Description Framework (RDF) and the RDF Schema (RDFS) and (2) inferences supported by semantic languages like the Web Ontology Language (OWL). We compare the Semantic Web inferences to the inferences studied in traditional databases. We show that the inference problem exists on the Semantic Web and that existing security methods do not fully prevent indirect data disclosure via inference channels.
APA, Harvard, Vancouver, ISO, and other styles
10

Hogan, Aidan, Andreas Harth, and Axel Polleres. "Scalable Authoritative OWL Reasoning for the Web*." In Semantic Services, Interoperability and Web Applications, 131–77. IGI Global, 2011. http://dx.doi.org/10.4018/978-1-60960-593-3.ch006.

Full text
Abstract:
In this chapter, the authors discuss the challenges of performing reasoning on large scale RDF datasets from the Web. Using ter Horst’s pD* fragment of OWL as a base, the authors compose a rule-based framework for application to Web data: they argue their decisions using observations of undesirable examples taken directly from the Web. The authors further temper their OWL fragment through consideration of “authoritative sources”, which counteracts an observed behaviour they term “ontology hijacking”: new ontologies published on the Web re-defining the semantics of existing entities resident in other ontologies. They then present their system for performing rule-based forward-chaining reasoning, which they call SAOR: Scalable Authoritative OWL Reasoner. Based upon observed characteristics of Web data and reasoning in general, they design their system to scale: the system is based upon a separation of terminological data from assertional data and comprises a lightweight in-memory index, on-disk sorts and file-scans. The authors evaluate their methods on a dataset in the order of a hundred million statements collected from real-world Web sources and present scale-up experiments on a dataset in the order of a billion statements collected from the Web. In this republished version, the authors also present extended discussion reflecting upon recent developments in the area of scalable RDFS/OWL reasoning, some of which has drawn inspiration from the original publication (Hogan, et al., 2009).
APA, Harvard, Vancouver, ISO, and other styles
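The "authoritative source" idea in this abstract, accepting a schema triple only when it comes from the source that is authoritative for the term it describes, can be sketched as a simple filter over crawled terminological triples. The namespace mapping, sources, and triples below are invented for illustration and are not SAOR's actual algorithm.

```python
# Hedged sketch of authoritative filtering against "ontology hijacking":
# a crawled schema (terminological) triple is kept only if its source is
# authoritative for the term it redefines. Data here is hypothetical.

# term -> web source authoritative for it (e.g. the ontology defining it)
authoritative_source = {
    "foaf:Person": "http://xmlns.com/foaf/",
    "foaf:Agent":  "http://xmlns.com/foaf/",
}

# (schema triple, source it was crawled from)
schema_triples = [
    (("foaf:Person", "rdfs:subClassOf", "foaf:Agent"), "http://xmlns.com/foaf/"),
    # "ontology hijacking": a third party redefining foaf:Person
    (("foaf:Person", "rdfs:subClassOf", "evil:Spammer"), "http://evil.example/"),
]

def accepted(triple, source):
    """Keep a schema triple only if its source is authoritative
    for the subject term being (re)defined."""
    subject = triple[0]
    return authoritative_source.get(subject) == source

# build the terminological index (T-Box) from accepted triples only;
# assertional data would be streamed against this index separately
tbox = [t for (t, src) in schema_triples if accepted(t, src)]
```

The separation shown in the last step, a small filtered terminological index held in memory while assertional triples are streamed past it, is the scalability strategy the chapter describes.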