Dissertations / Theses on the topic 'Semantic knowledge representation'
Consult the top 50 dissertations / theses for your research on the topic 'Semantic knowledge representation.'
Kachintseva, Dina (Dina D.). "Semantic knowledge representation and analysis." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/76983.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 103).
Natural language is the means through which humans convey meaning to each other - each word or phrase is a label, or name, for an internal representation of a concept. This internal representation is built up from repeated exposure to particular examples, or instances, of a concept. The way in which we learn that a particular entity in our environment is a "bird" comes from seeing countless examples of different kinds of birds and combining these experiences to form a mental representation of the concept. Consequently, each individual's understanding of a concept is slightly different, depending on their experiences. A person living in a place where the predominant types of birds are ostriches and emus will have a different representation of birds than a person who predominantly sees penguins, even if the two people speak the same language. This thesis presents a semantic knowledge representation that incorporates this fuzziness and context-dependence of concepts. In particular, this thesis provides several algorithms for learning the meaning behind text by using a dataset of experiences to build up an internal representation of the underlying concepts. Furthermore, several methods are proposed for learning new concepts by discovering patterns in the dataset and using them to compile representations for unnamed ideas. Essentially, these methods learn new concepts without knowing the particular label - or word - used to refer to them.

Words are not the only way in which experiences can be described - numbers can often communicate a situation more precisely than words. In fact, many qualitative concepts can be characterized using a set of numeric values. For instance, the qualitative concepts of "young" or "strong" can be characterized using a range of ages or strengths that are equally context-specific and fuzzy. A young adult corresponds to a different range of ages from a young child or a young puppy.
By examining the sorts of numeric values that are associated with a particular word in a given context, a person can build up an understanding of the concept. This thesis presents algorithms that use a combination of qualitative and numeric data to learn the meanings of concepts. Ultimately, this thesis demonstrates that this combination of qualitative and quantitative data enables more accurate and precise learning of concepts.
by Dina Kachintseva.
M.Eng.
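Kachintseva's abstract describes learning qualitative concepts such as "young" from numeric, context-dependent examples. As a rough illustration only (a toy sketch of our own, not code from the thesis; every name here is invented), one can fold observed (concept, context, value) triples into per-context intervals:

```python
from collections import defaultdict

def learn_ranges(observations):
    """Fold (concept, context, value) observations into the min/max
    interval seen per (concept, context), a crude stand-in for a
    fuzzier, experience-based representation."""
    ranges = defaultdict(lambda: [float("inf"), float("-inf")])
    for concept, context, value in observations:
        lo, hi = ranges[(concept, context)]
        ranges[(concept, context)] = [min(lo, value), max(hi, value)]
    return dict(ranges)

obs = [
    ("young", "human", 18), ("young", "human", 25),
    ("young", "puppy", 0.2), ("young", "puppy", 1.5),
]
# "young" maps to different numeric ranges in different contexts
print(learn_ranges(obs))
```

The same word thus ends up with a different numeric extension per context, which is the fuzziness the abstract argues for.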
Robinson, Sally Jane. "Semantic knowledge representation and access in children with genetic disorders." Thesis, University of Essex, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.435580.
Full text
Barb, Adrian S. "Knowledge representation and exchange of visual patterns using semantic abstractions." Diss., Columbia, Mo. : University of Missouri-Columbia, 2008. http://hdl.handle.net/10355/6674.
Full text
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on July 21, 2009). Includes bibliographical references.
Alirezaie, Marjan. "Semantic Analysis Of Multi Meaning Words Using Machine Learning And Knowledge Representation." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70086.
Full text
Matikainen, Tiina Johanna. "Semantic Representation of L2 Lexicon in Japanese University Students." Diss., Temple University Libraries, 2011. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/133319.
Full text
Ed.D.
In a series of studies using semantic relatedness judgment response times, Jiang (2000, 2002, 2004a) has claimed that L2 lexical entries fossilize with their equivalent L1 content or something very close to it. In another study using a more productive test of lexical knowledge (Jiang 2004b), however, the evidence for this conclusion was less clear. The present study is a partial replication of Jiang (2004b) with Japanese learners of English. The aims of the study are to investigate the influence of the first language (L1) on second language (L2) lexical knowledge, to investigate whether lexical knowledge displays frequency-related, emergent properties, and to investigate the influence of the L1 on the acquisition of L2 word pairs that have a common L1 equivalent. The data came from a sentence completion task completed by 244 participants, who were shown sentence contexts in which they chose between L2 word pairs sharing a common equivalent in the students' first language, Japanese. The data were analyzed using the statistical analyses available in the programming environment R to quantify the participants' ability to discriminate between synonymous and non-synonymous use of these L2 word pairs. The results showed a strong bias against synonymy for all word pairs; the participants tended to make a distinction between the two synonymous items by assigning each word a distinct meaning. With the non-synonymous items, lemma frequency was closely related to the participants' success in choosing the correct word in the word pair. In addition, lemma frequency and the degree of similarity between the words in the word pair were closely related to the participants' overall knowledge of the non-synonymous meanings of the vocabulary items. The results suggest that the participants had a stronger preference for non-synonymous options than for the synonymous option. This suggests that the learners might have adopted a one-word, one-meaning learning strategy (Willis, 1998).
The reasonably strong relationship between several of the usage-based statistics and the item measures from R suggests that with exposure, learners are better able to use words in ways that are similar to native speakers of English, to differentiate between appropriate and inappropriate contexts and to recognize the boundary separating semantic overlap and semantic uniqueness. Lexical similarity appears to play a secondary role, in combination with frequency, in learners' ability to differentiate between appropriate and inappropriate contexts when using L2 word pairs that have a single translation in the L1.
Temple University--Theses
Figueiras, Paulo Alves. "A framework for supporting knowledge representation – an ontological based approach." Master's thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/7576.
Full text
The World Wide Web has had a tremendous impact on society and business in just a few years by making information instantly available. During this transition from physical to electronic means for information transport, the content and encoding of information have remained natural language, identified only by its URL. Today, this is perhaps the most significant obstacle to streamlining business processes via the web. In order that processes may execute without human intervention, knowledge sources, such as documents, must become more machine understandable and must contain other information besides their main contents and URLs. The Semantic Web is a vision of a future web of machine-understandable data. On a machine understandable web, it will be possible for programs to easily determine what knowledge sources are about. This work introduces a conceptual framework and its implementation to support the classification and discovery of knowledge sources, supported by the above vision, where such sources' information is structured and represented through a mathematical vector that semantically pinpoints the relevance of those knowledge sources within the domain of interest of each user. The presented work also addresses the enrichment of such knowledge representations, using the statistical relevance of keywords based on the classical vector space model concept, and extending it with ontological support, by using concepts and semantic relations, contained in a domain-specific ontology, to enrich knowledge sources' semantic vectors. Semantic vectors are compared against each other in order to obtain the similarity between them, and better support end users with knowledge source retrieval capabilities.
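The enrichment the abstract describes, keyword weights in the classical vector space model extended with related ontology concepts and then compared for similarity, can be sketched as follows. This is a toy illustration under our own assumptions; the RELATED table, the alpha weight and the function names are invented, not the thesis's model:

```python
import math

# toy stand-in for a domain ontology: term -> related concepts
RELATED = {"turbine": ["energy", "generator"], "contract": ["tender"]}

def semantic_vector(term_weights, alpha=0.5):
    """Enrich a keyword-weight vector with ontology neighbours,
    added at a fraction alpha of the original weight."""
    vec = dict(term_weights)
    for term, w in term_weights.items():
        for rel in RELATED.get(term, []):
            vec[rel] = vec.get(rel, 0.0) + alpha * w
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

doc = semantic_vector({"turbine": 1.0})
query = semantic_vector({"energy": 1.0})
print(cosine(doc, query))  # nonzero only because of the enrichment
```

Without the ontological expansion, the two vectors share no keyword and their similarity is zero; the expanded vectors overlap on "energy".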
Alirezaie, Marjan. "Bridging the Semantic Gap between Sensor Data and Ontological Knowledge." Doctoral thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-45908.
Full text
Babalola, Olubi Oluyomi. "A model based framework for semantic interpretation of architectural construction drawings." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/47553.
Full text
Chee, Tahir Aidid. "A framework for the semantic representation of energy policies related to electricity generation." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:2c1f7a3c-4464-4bd0-b40b-67a0ad419529.
Full text
Lister, Kendall. "Toward semantic interoperability for software systems." Connect to thesis, 2008. http://repository.unimelb.edu.au/10187/3594.
Full text
In order to interact and collaborate effectively, agents, whether human or software, must be able to communicate through common understandings and compatible conceptualisations. Ontological differences that occur either from pre-existing assumptions or as side-effects of the process of specification are a fundamental obstacle that must be overcome before communication can occur. Similarly, the integration of information from heterogeneous sources is an unsolved problem. Efforts have been made to assist integration, through both methods and mechanisms, but automated integration remains an unachieved goal. Communication and information integration are problems of meaning and interaction, or semantic interoperability. This thesis contributes to the study of semantic interoperability by identifying, developing and evaluating three approaches to the integration of information. These approaches have in common that they are lightweight in nature, pragmatic in philosophy and general in application.
The first work presented is an effort to integrate a massive, formal ontology and knowledge-base with semi-structured, informal heterogeneous information sources via a heuristic-driven, adaptable information agent. The goal of the work was to demonstrate a process by which task-specific knowledge can be identified and incorporated into the massive knowledge-base in such a way that it can be generally re-used. The practical outcome of this effort was a framework that illustrates a feasible approach to providing the massive knowledge-base with an ontologically-sound mechanism for automatically generating task-specific information agents to dynamically retrieve information from semi-structured information sources without requiring machine-readable meta-data.
The second work presented is based on reviving a previously published and neglected algorithm for inferring semantic correspondences between fields of tables from heterogeneous information sources. An adapted form of the algorithm is presented and evaluated on relatively simple and consistent data collected from web services in order to verify the original results, and then on poorly-structured and messy data collected from web sites in order to explore the limits of the algorithm. The results are presented via standard measures and are accompanied by detailed discussions on the nature of the data encountered and an analysis of the strengths and weaknesses of the algorithm and the ways in which it complements other approaches that have been proposed.
Acknowledging the cost and difficulty of integrating semantically incompatible software systems and information sources, the third work presented is a proposal and a working prototype for a web site to facilitate the resolving of semantic incompatibilities between software systems prior to deployment, based on the commonly-accepted software engineering principle that the cost of correcting faults increases exponentially as projects progress from phase to phase, with post-deployment corrections being significantly more costly than those performed earlier in a project’s life. The barriers to collaboration in software development are identified and steps taken to overcome them. The system presented draws on the recent collaborative successes of social and collaborative on-line projects such as SourceForge, Del.icio.us, digg and Wikipedia and a variety of techniques for ontology reconciliation to provide an environment in which data definitions can be shared, browsed and compared, with recommendations automatically presented to encourage developers to adopt data definitions compatible with previously developed systems.
In addition to the experimental works presented, this thesis contributes reflections on the origins of semantic incompatibility with a particular focus on interaction between software systems, and between software systems and their users, as well as detailed analysis of the existing body of research into methods and techniques for overcoming these problems.
Chungoora, Nitishal. "A framework to support semantic interoperability in product design and manufacture." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/5897.
Full text
Assefa, Shimelis G. "Human concept cognition and semantic relations in the unified medical language system: A coherence analysis." Thesis, University of North Texas, 2007. https://digital.library.unt.edu/ark:/67531/metadc4008/.
Full text
Nguyen, Vinh Thi Kim. "Semantic Web Foundations for Representing, Reasoning, and Traversing Contextualized Knowledge Graphs." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1516147861789615.
Full text
Magka, Despoina. "Foundations and applications of knowledge representation for structured entities." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:4a3078cc-5770-4a9b-81d4-8bc52b41e294.
Full text
Assefa, Shimelis G., and Brian C. O'Connor. "Human concept cognition and semantic relations in the unified medical language system: a coherence analysis." [Denton, Tex.] : University of North Texas, 2007. http://digital.library.unt.edu/permalink/meta-dc-4008.
Full text
Madhavan, Jayant. "Using known schemas and mappings to construct new semantic mappings." Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/6852.
Full text
Sudre, Gustavo. "Characterizing the Spatiotemporal Neural Representation of Concrete Nouns Across Paradigms." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/315.
Full text
Ceroni, Samuele. "Time-evolving knowledge graphs based on Poirot: dynamic representation of patients' voices." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23095/.
Full text
Hargreaves, Nigel. "Novel processes for smart grid information exchange and knowledge representation using the IEC common information model." Thesis, Brunel University, 2013. http://bura.brunel.ac.uk/handle/2438/7671.
Full text
Qu, Xiaoyan Angela. "Discovery and Prioritization of Drug Candidates for Repositioning Using Semantic Web-based Representation of Integrated Diseasome-Pharmacome Knowledge." University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1254403900.
Full text
Castles, Ricky Thomas. "A Knowledge Map-Centric Feedback-Based Approach to Information Modeling and Academic Assessment." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/26069.
Full text
Ph. D.
Charbel, Nathalie. "Semantic Representation of a Heterogeneous Document Corpus for an Innovative Information Retrieval Model : Application to the Construction Industry." Thesis, Pau, 2018. http://www.theses.fr/2018PAUU3025/document.
Full text
The recent advances of Information and Communication Technology (ICT) have resulted in the development of several industries. Adopting semantic technologies has proven to have several benefits, enabling a better representation of the data and empowering reasoning capabilities over it, especially within an Information Retrieval (IR) application. These technologies have, however, seen few industrial applications, as there are still unresolved issues, such as the shift from heterogeneous interdependent documents to semantic data models and the representation of the search results while considering relevant contextual information. In this thesis, we address two main challenges. The first one focuses on the representation of the collective knowledge embedded in a heterogeneous document corpus, covering both the domain-specific content of the documents and other structural aspects such as their metadata, their dependencies (e.g., references), etc. The second one focuses on providing users with innovative search results from the heterogeneous document corpus, helping them interpret the information that is relevant to their inquiries and track cross-document dependencies.

To cope with these challenges, we first propose a semantic representation of a heterogeneous document corpus that generates a semantic graph covering both the structural and the domain-specific dimensions of the corpus. Then, we introduce a novel data structure for query answers, extracted from this graph, which embeds core information together with structural-based and domain-specific context. In order to provide such query answers, we propose an innovative query processing pipeline, which involves query interpretation, search, ranking, and presentation modules, with a focus on the search and ranking modules. Our proposal is generic, as it can be applied in different domains.
In this thesis, however, it has been evaluated in the Architecture, Engineering and Construction (AEC) industry using real-world construction projects.
Guizol, Léa. "Partitioning semantics for entity resolution and link repairs in bibliographic knowledge bases." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20188/document.
Full text
We propose a qualitative entity resolution approach to repair links in a bibliographic knowledge base. Our research question is: "How to detect and repair erroneous links in a bibliographic knowledge base using qualitative methods?" The proposed approach is decomposed into two major parts. The first contribution consists in a partitioning semantics using symbolic criteria in order to detect erroneous links. The second one consists in a repair algorithm restoring link quality. We implemented our approach and proposed qualitative and quantitative evaluation for the partitioning semantics, as well as proving certain properties for the repair algorithms.
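The partitioning idea in the abstract, grouping bibliographic records that a symbolic criterion links together so that links contradicting the partition stand out, can be illustrated with a simple union-find partition. This is our own illustrative reconstruction; the record format and criterion are invented, not Guizol's actual semantics:

```python
def partition(records, same):
    """Partition records into groups linked by the symbolic
    criterion `same`, using a small union-find structure."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if same(records[i], records[j]):
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(records)):
        groups.setdefault(find(i), []).append(records[i])
    return list(groups.values())

recs = [{"author": "guizol, l."}, {"author": "guizol, l."}, {"author": "ren, y."}]
same_author = lambda a, b: a["author"] == b["author"]
print([len(g) for g in partition(recs, same_author)])  # [2, 1]
```

A link in the knowledge base that crosses two groups of such a partition is then a candidate for repair.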
Ren, Yuan. "Tractable reasoning with quality guarantee for expressive description logics." Thesis, University of Aberdeen, 2014. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=217884.
Full text
Sjöö, Kristoffer. "Functional understanding of space : Representing spatial knowledge using concepts grounded in an agent's purpose." Doctoral thesis, KTH, Datorseende och robotik, CVAP, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-48400.
Full text
QC 20111125
Breux, Yohan. "Du capteur à la sémantique : contribution à la modélisation d'environnement pour la robotique autonome en interaction avec l'humain." Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS059/document.
Full text
Autonomous robotics is successfully used in controlled industrial environments where instructions follow predetermined implementation plans. Domestic robotics is the challenge of the years to come and involves several new problems: we have to move from a closed, bounded world to an open one. A robot can no longer rely only on its raw sensor data, as they merely show the absence or presence of things. It should also understand why objects are in its environment, as well as the meaning of its tasks. Besides, it has to interact with human beings and therefore has to share their conceptualization through natural language. Indeed, each language is in itself an abstract and compact representation of the world which links up a variety of concrete and abstract concepts. However, real observations are more complex than our simplified semantic representation. Thus the two can come into conflict: this is the price for a finite representation of an "infinite" world.

To address those challenges, we propose in this thesis a global architecture bringing together different modalities of environment representation. It allows a physical representation to be related to abstract concepts expressed in natural language. The inputs of our system are two-fold: sensor data feed the perception modality, whereas textual information and human interaction are linked to the semantic modality. The novelty of our approach is the introduction of an intermediate modality based on instances (physical realizations of semantic concepts). Among other things, it allows perceptual data to be connected, indirectly and without contradiction, to knowledge in natural language. We propose in this context an original method to automatically generate an ontology for the description of physical objects. On the perception side, we investigate some properties of image descriptors extracted from intermediate layers of convolutional neural networks.
In particular, we show their relevance for instance representation, as well as their use for estimating similarity transformations. We also propose a method to relate instances to our object-oriented ontology which, under the open-world assumption, can be seen as an alternative to classical classification methods. Finally, the global flow of our system is illustrated through the description of the user request management processes.
Le, Pendu Paea Jean-Francois 1974. "Ontology databases." Thesis, University of Oregon, 2010. http://hdl.handle.net/1794/10575.
Full text
On the one hand, ontologies provide a means of formally specifying complex descriptions and relationships about information in a way that is expressive yet amenable to automated processing and reasoning. When data are annotated using terms from an ontology, the instances inherit formal semantics. Compared to an ontology, which may have as few as a dozen or as many as tens of thousands of terms, the annotated instances for the ontology are often several orders of magnitude larger, from millions to possibly trillions of instances. Unfortunately, existing reasoning techniques cannot scale to these sizes. On the other hand, relational database management systems provide mechanisms for storing, retrieving, and maintaining the integrity of large amounts of data. Relational database management systems are well known for scaling to extremely large sizes of data, some claiming to manage over a quadrillion data items. This dissertation defines ontology databases as a mapping from ontologies to relational databases in order to combine the expressiveness of ontologies with the scalability of relational databases. This mapping is sound and, under certain conditions, complete. That is, the database behaves like a knowledge base which is faithful to the semantics of a given ontology. What distinguishes this work is the treatment of the relational database management system as an active reasoning component rather than as a passive storage and retrieval system. The main contributions this dissertation will highlight include: (i) the theory and implementation particulars for mapping ontologies to databases, (ii) subsumption based reasoning, (iii) inconsistency detection, (iv) scalability studies, and (v) information integration (specifically, information exchange). This work is novel because it is the first attempt to embed a logical reasoning system, specified by a Semantic Web ontology, into a plain relational database management system using active database technologies.
This work also introduces the not-gadget, which relaxes the closed-world assumption and increases the expressive power of the logical system without significant cost. This work also demonstrates how to deploy the same framework as an information integration system for data exchange scenarios, which is an important step toward semantic information integration over distributed data repositories.
Committee in charge: Dejing Dou, Chairperson, Computer & Information Science; Zena Ariola, Member, Computer & Information Science; Christopher Wilson, Member, Computer & Information Science; Monte Westerfield, Outside Member, Biology
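The dissertation's central move, treating the relational database as an active reasoning component, can be illustrated with a minimal sketch (our own toy, not the dissertation's actual schema): a trigger materialises the subsumption Dog ⊑ Animal, so querying the superclass table already returns the subclass's instances.

```python
import sqlite3

# One table per ontology class; an active rule (trigger) propagates
# instances along the class hierarchy at insert time.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Animal(id TEXT PRIMARY KEY);
CREATE TABLE Dog(id TEXT PRIMARY KEY);
CREATE TRIGGER dog_is_animal AFTER INSERT ON Dog
BEGIN
    INSERT OR IGNORE INTO Animal(id) VALUES (NEW.id);
END;
""")
con.execute("INSERT INTO Dog VALUES ('rex')")
# the superclass query "entails" the Dog fact without a separate reasoner
print(con.execute("SELECT id FROM Animal").fetchall())  # [('rex',)]
```

The reasoning step happens inside the database at insert time, which is what makes the storage layer an active, rather than passive, component.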
Qadeer, Shahab. "Integration of Recommendation and Partial Reference Alignment Algorithms in a Session based Ontology Alignment System." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-73135.
Full text
Franco Salvador, Marc. "A Cross-domain and Cross-language Knowledge-based Representation of Text and its Meaning." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/84285.
Full text
Natural Language Processing (NLP) is a field of computer science, artificial intelligence and computational linguistics focused on the interactions between machines and human language. One of its greatest challenges is enabling machines to infer the meaning of human natural language. To this end, several representations of meaning and context have been proposed, achieving competitive performance. However, these representations still have room for improvement in cross-domain and cross-language scenarios. In this thesis we study the use of knowledge graphs as a cross-domain and cross-language representation of text and its meaning. A knowledge graph is a graph that expands and relates the original concepts belonging to a set of words. Its properties are achieved by using a wide-coverage multilingual semantic network as a knowledge base, which provides coverage of hundreds of languages and millions of general and specific human concepts. As the starting point of our research, we employ knowledge-graph-based features, together with traditional features and meta-learning, for the NLP task of single- and cross-domain polarity classification. The analysis and conclusions of that work show evidence that knowledge graphs capture meaning in a domain-independent way. The next part of our research takes advantage of the multilingual semantic network and focuses on Information Retrieval (IR) tasks. We first propose a similarity analysis model based entirely on knowledge graphs for cross-language plagiarism detection. We then improve that model to cover out-of-vocabulary words and verb tenses, and apply it to the cross-language tasks of document retrieval, categorisation, and plagiarism detection. Finally, we study the use of knowledge graphs for the NLP tasks of community question answering, native language identification, and language variety identification. The contributions of this thesis highlight the potential of knowledge graphs as a cross-domain and cross-language representation of text and its meaning in NLP and IR tasks. These contributions have been published in several international journals and conferences.
Franco Salvador, M. (2017). A Cross-domain and Cross-language Knowledge-based Representation of Text and its Meaning [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/84285
Chen, Jieying. "Knowledge Extraction from Description Logic Terminologies." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS531.
Full text
An increasing number of ontologies of large sizes have been developed and made available in repositories such as the NCBO BioPortal. Ensuring access to the most relevant knowledge contained in large ontologies has been identified as an important challenge. To this end, in this thesis, we propose three different notions: minimal ontology modules (sub-ontologies that preserve all entailments over a given vocabulary), best ontology excerpts (a small number of axioms that best capture the knowledge regarding the vocabulary while allowing for a degree of semantic loss) and projection modules (sub-ontologies of a target ontology that entail the subsumption, instance and conjunctive queries that follow from a reference ontology). For computing minimal modules and best excerpts, we introduce the notion of a subsumption justification as an extension of a justification (a minimal set of axioms needed to preserve a logical consequence) to capture the subsumption knowledge between a term and all other terms in the vocabulary. Similarly, we introduce the notion of projection justifications, which entail the consequences of the three different query types, in order to compute projection modules. Finally, we evaluate our approaches by applying a prototype implementation to large ontologies.
Reul, Quentin H. "Role of description logic reasoning in ontology matching." Thesis, University of Aberdeen, 2012. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=186278.
Full text
Münnich, Stefan. "Ontologien als semantische Zündstufe für die digitale Musikwissenschaft?" De Gruyter, Berlin / Boston, 2018. https://slub.qucosa.de/id/qucosa%3A36849.
Full text
Ontologies play a crucial role in the formalised representation of knowledge and information, as well as in the infrastructure of the Semantic Web. Despite early initiatives driven by libraries and memory institutions, German musicology as a whole has turned to the subject only slowly. In an overview, the author addresses basic concepts, challenges, and approaches for ontology design, and identifies models and use cases with promising applications for a 'semantic' digital musicology.
Bate, Andrew. "Consequence-based reasoning for SRIQ ontologies." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:6b35e7d0-199c-4db9-ac8a-7f78256e5fb8.
Full text
Armas Romero, Ana. "Ontology module extraction and applications to ontology classification." Thesis, University of Oxford, 2015. http://ora.ox.ac.uk/objects/uuid:4ec888f4-b7c0-4080-9d9a-3c46c91f67e3.
Full text
Gängler, Thomas. "Semantic Federation of Musical and Music-Related Information for Establishing a Personal Music Knowledge Base." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-72434.
Full text
Botha, Antonie Christoffel. "A new framework for a technological perspective of knowledge management." Thesis, Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-06262008-123525/.
Full text
Hughes, Tracey D. "Visualizing Epistemic Structures of Interrogative Domain Models." Youngstown State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1227294380.
Full text
Harkouken Saiah, Kenza. "Etude et définition de mécanismes sémantiques dans les environnements virtuels pour améliorer la crédibilité comportementale des agents : utilisation d'ontologies de services." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066690/document.
Full text
This work is part of the Terra Dynamica project, whose objective was to populate a virtual city with agents that simulate pedestrians and vehicles. The aim of our work is to make agents understand their environment so that they can produce credible behaviors. The first solutions proposed for the semantic modeling of virtual environments still keep a link with the pre-existing graphic representation of the environment. However, the semantic information represented in this kind of approach is difficult for the agents to use in complex reasoning beyond navigation algorithms. In this thesis we present a semantic representation model of the environment that provides the agents with data on the use of environmental objects, so that the decision mechanism can produce credible behaviors. Furthermore, in response to the constraints inherent in urban simulation, our approach can handle a large number of agents in real time. Our model is based on the principle that environmental objects provide services, of varying quality, for performing actions. We therefore represent the semantic information about objects and their use as services in an ontology of services. We use this ontology to compute a quality-of-service (QoS) score that ranks the different objects capable of performing the same action. We can thus compare the services offered by different objects and provide the agents with the best objects for carrying out their actions with behavioral credibility. To assess the impact of our model on the credibility of the produced behaviors, we defined an evaluation protocol for semantic representation models of virtual environments. In this protocol, observers assess the credibility of behaviors produced by the simulator using a semantic model of the environment. Through this evaluation, we show that our model can simulate agents whose behavior is deemed credible by human observers. We also present a qualitative assessment of the ability of our model to scale and meet the constraints of a real-time simulation. This evaluation shows that the characteristics of our model's architecture allow it to respond in a reasonable amount of time to requests from a large number of agents.
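The service-and-QoS scheme described in the abstract might look like the following minimal sketch. The `Service` fields, the tie-breaking rule and all object names are illustrative assumptions, not the thesis' actual model.

```python
from dataclasses import dataclass

@dataclass
class Service:
    obj: str         # environment object offering the service
    action: str      # action the service realises (e.g. "sit")
    quality: float   # QoS score in [0, 1]
    distance: float  # metres from the agent

def best_objects(services, action, k=2):
    """Rank objects able to perform `action` by quality,
    breaking ties with proximity (illustrative QoS ordering)."""
    candidates = [s for s in services if s.action == action]
    candidates.sort(key=lambda s: (-s.quality, s.distance))
    return [s.obj for s in candidates[:k]]

services = [
    Service("bench", "sit", 0.9, 12.0),
    Service("chair", "sit", 0.9, 4.0),
    Service("ledge", "sit", 0.4, 1.0),
    Service("kiosk", "buy", 0.8, 6.0),
]
best_objects(services, "sit")  # -> ["chair", "bench"]
```

Keeping the ranking as a cheap sort over pre-computed scores is one way such a model could stay responsive for many agents in real time.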
Palazzo, Luiz Antonio Moro. "Representação de conhecimento : programação em lógica e o modelo das hiperredes." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1991. http://hdl.handle.net/10183/24180.
Full text
In spite of its inherent undecidability and the negation problem, extensions of first-order logic have been shown to overcome the question of monotonicity, establishing knowledge representation schemata with virtually universal expressiveness. However, one still has to solve, or at least reduce, the consequences of the control problem, which constrains the use of logic-based systems to small or medium-sized applications. Investigations in this direction [BOW 85] [MON 88] indicate that the key to overcoming the inferential explosion resides in the proper representation of knowledge structure, in order to have some control over possible derivations. The hypernets model [GEO 85] seems to reach such a goal, considering its high structural power and the features it offers for dealing with descriptive, operational and organizational knowledge. Besides, the simplicity and syntactic uniformity of its primitive notions allows a very clear definition of its semantics, based, for instance, on graphs. This work is an attempt to associate logic programming with the hypernets formalism in order to obtain a new model that preserves the expressiveness of the former and the heuristic and structural power of the latter. First we try to get a clear notion of the nature of knowledge and its main aspects, in order to characterize the knowledge representation problem. Some knowledge representation schemata (production systems, semantic networks, frame systems, logic programming and the Krypton language) are studied and characterized from the point of view of their expressiveness, heuristic power and notational convenience. Logic programming is the subject of a deeper study, under the model-theoretic and proof-theoretic approaches.
Logic programming systems, in particular the Prolog language and metalevel extensions, are investigated as knowledge representation schemata, considering their syntactic and semantic aspects and their relations with database management systems. The hypernets model is presented, introducing the concepts of hypernode, hyperrelation and prototype, as well as the particular properties of those entities. The Hyper language, for the handling of hypernets, is formally specified. Prolog is used as a formalism for the representation of knowledge bases structured as hypernets. Under this approach a knowledge base is seen as a (possibly empty) set of structured objects, classified as hypernodes, hyperrelations or prototypes. A mechanism for top-down reasoning on hypernets is proposed, introducing the concepts of aspect and vision, which are treated as first-class objects in the sense that they can be assigned as values to variables. We study the requirements for the construction of a knowledge base management system from the point of view of the user's needs, knowledge engineering support and implementation issues, supporting the concepts and abstractions (classification, generalization, association and aggregation) associated with the proposed model. Based on the conclusions of this study, a knowledge base management system (called Rhesus, referring to its experimental objectives) is proposed, intended to confirm the technical viability of developing applications based on logic and hypernets.
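The thesis encodes hypernets in Prolog; as a loose analogue only, the sketch below models two of the primitives (hypernodes and prototypes with default inheritance) in Python dictionaries. All names and the lookup rule are invented for illustration, and hyperrelations are omitted.

```python
# Prototypes supply defaults; hypernodes may override them locally.
prototypes = {"bird": {"can_fly": True}}
hypernodes = {
    "tweety":  {"is_a": "bird"},
    "penguin": {"is_a": "bird", "can_fly": False},
}

def lookup(node, attr):
    """Top-down lookup: a hypernode's own facts shadow its prototype's."""
    facts = hypernodes[node]
    if attr in facts:
        return facts[attr]
    proto = prototypes.get(facts.get("is_a"), {})
    return proto.get(attr)

lookup("tweety", "can_fly")   # inherited from the "bird" prototype
lookup("penguin", "can_fly")  # overridden locally
```

The shadowing rule is a simple way to picture the non-monotonic, frame-like behavior that structured representations add on top of plain logic programming.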
Hughes, Cameron A. "Epistemic Structures of Interrogative Domains." Youngstown State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1227285777.
Full text
García González, Roberto. "A semantic web approach to digital rights management." Doctoral thesis, Universitat Pompeu Fabra, 2006. http://hdl.handle.net/10803/7538.
Full textLa contribució d'aquesta tesi és aplicar una aproximació semàntica basada en ontologies web a la gestió de drets digitals. Es desenvolupa una Ontologia del Copyright on les peces bàsiques són un model de creació, el drets de copyright i les accions que és poden dur a terme sobre els continguts. Aquesta ontologia facilita el desenvolupament de sistemes de gestió de drets.
També s'ha aplicat l'enfocament semàntic als principals llenguatges d'expressió de drets. S'han integrat amb l'ontologia per tal d'avaluar-la i a la vegada s'han enriquit amb la seva base semàntica. Finalment, tot això s'ha posat en pràctica en un sistema semàntic de gestió de drets.
One of the main requirements of digital rights management on the Web is a shared language for copyright representation. Current approaches are based on purely syntactic solutions, which are simple yet difficult to put into practice.
The contribution of this thesis is to apply a semantic approach based on web ontologies to digital rights management. It develops a Copyright Ontology whose basic pieces are a creation model, the copyright rights and the actions that can be carried out on content. This ontology facilitates the development of rights management systems.
The semantic approach has also been applied to the main rights expression languages. They have been integrated with the ontology in order to evaluate it and, at the same time, have been enriched with its semantic grounding. Finally, all of this has been put into practice in a semantic digital rights management system.
Lacroix, Timothée. "Décompositions tensorielles pour la complétion de bases de connaissance." Thesis, Paris Est, 2020. http://www.theses.fr/2020PESC1002.
Full text
In this thesis, we focus on the problem of link prediction in binary tensors of order three and four containing only positive observations. Tensors of this type appear in web recommender systems, in bioinformatics for the completion of protein-interaction databases, and more generally for the completion of knowledge bases. We benchmark our completion methods on knowledge bases representing a variety of relational data and scales. Our approach parallels that of matrix completion: we optimize a non-convex regularized empirical risk objective over low-rank tensors. Our method is empirically validated on several databases, performing better than the state of the art. These performances, however, can only be reached at ranks that would not scale to full modern knowledge bases such as Wikidata. We therefore focus on the Tucker decomposition, which is more expressive than the canonical decomposition but also harder to optimize. By fixing the adaptive algorithm Adagrad, we obtain a method to efficiently optimize Tucker decompositions with a fixed random core tensor. With this method, we obtain improved performances on several benchmarks for a limited number of parameters per entity. Finally, we study the case of temporal knowledge bases, in which predicates are only valid over certain time intervals. We propose a low-rank formulation and a regularizer adapted to the temporal structure of the problem, and obtain better performances than the state of the art.
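To make the link-prediction setting concrete: a trained canonical (CP) decomposition scores a triple by a trilinear product of its factors. The sketch below shows only the scoring and ranking step; the rank-2 embedding values are invented for illustration, whereas a system like the one in the thesis learns such factors by minimizing a regularized empirical risk.

```python
def cp_score(e_s, w_p, e_o):
    """Trilinear CP score: sum_r e_s[r] * w_p[r] * e_o[r]."""
    return sum(a * b * c for a, b, c in zip(e_s, w_p, e_o))

# Toy rank-2 embeddings (invented values, not trained).
entities = {
    "paris":  [1.0, 0.1],
    "france": [0.9, 0.2],
    "tokyo":  [0.1, 1.0],
    "japan":  [0.2, 0.9],
}
relations = {"capital_of": [1.0, 1.0]}

def answer(subject, predicate):
    """Rank candidate objects for the query (subject, predicate, ?)."""
    e_s, w_p = entities[subject], relations[predicate]
    return sorted(
        (o for o in entities if o != subject),
        key=lambda o: -cp_score(e_s, w_p, entities[o]),
    )

answer("paris", "capital_of")[0]  # -> "france"
```

A Tucker decomposition would additionally mix the factor dimensions through a core tensor, which is what makes it more expressive and, as the abstract notes, harder to optimize.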
Cori, Marcel. "Modèles pour la représentation et l'interrogation de données textuelles et de connaissances." Paris 7, 1987. http://www.theses.fr/1987PA077047.
Full textBaring-Gould, Sengan. "SemNet : the knowledge representation of LOLITA." Thesis, Durham University, 2000. http://etheses.dur.ac.uk/4284/.
Full textBénard, Jeremy. "Import, export et traduction sémantiques génériques basés sur une ontologie de langages de représentation de connaissances." Thesis, La Réunion, 2017. http://www.theses.fr/2017LARE0021/document.
Full text
Knowledge representation languages (KRLs) are languages for representing and sharing information in a logical form. There are many KRLs. Each KRL has one abstract structural model and can have multiple notations. These models and notations were designed to meet different modeling or computational needs, as well as different preferences. Current tools that manage or translate knowledge representations (KRs) support only one or a few KRLs and hardly enable their end users to adapt the models and notations of these KRLs. This thesis helps to solve these practical problems and the following research problem: "Can a KR import function and a KR export function be specified in a generic way and, if so, how can their resources be specified?". This thesis is part of a larger project whose overall objective is to facilitate i) the sharing and reuse of knowledge related to software components, and ii) knowledge presentations. The approach followed in this thesis is based on an ontology of KRLs named KRLO, and therefore on a formal representation of these KRLs. KRLO has three important and original features to which this thesis contributed: i) it represents KRL models of different families in a uniform way, ii) it includes an ontology of KRL notations, and iii) it specifies generic functions for KR import and export in various KRLs. This thesis contributed to the improvement of the first version of KRLO (KRLO_2014), which contained modeling inaccuracies that made it difficult or inconvenient to use, and to the creation of its second version. This thesis also contributed to the specification and operationalization of "Structure_map", a function that enables any other function using a loop to be written in a modular and configurable way. Its use makes it possible to create and organize these functions into an ontology of software components.
To implement a generic export function based on KRLO, I developed SRS (Structure_map based Request Solver), a KR retrieval tool enabling the use of KR path expressions. SRS interprets all these functions and thus provides an experimental validation of both the use of this primitive (Structure_map) and the use of KRLO. Directly or indirectly, SRS and KRLO may be used by GTH (Global Technologies Holding), the partner company of this thesis.
Shahwan, Ahmad. "Processing Geometric Models of Assemblies to Structure and Enrich them with Functional Information." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM023/document.
Full text
The digital mock-up (DMU) of a product has taken a central position in the product development process (PDP). It provides the geometric reference of the product assembly, as it defines the shape of each individual component as well as the way components are put together. However, observations show that this geometric model is no more than a conventional representation of what the real product is. Additionally, because of its pivotal role, the DMU is increasingly required to provide information beyond mere geometry for use in different stages of the PDP. An increasingly urgent demand is functional information at different levels of the geometric representation of the assembly. This information is shown to be essential in phases such as geometric pre-processing for finite element analysis (FEA). In this work, an automated method is put forward that enriches the geometric model, i.e. the product DMU, with the functional information needed for FEA preparation. To this end, the initial geometry is restructured at different levels according to functional annotation needs. Prevailing industrial practices and representation conventions are taken into account in order to functionally interpret the purely geometric model that provides the starting point of the proposed method.
Sjö, Kristoffer. "Semantics and Implementation of Knowledge Operators in Approximate Databases." Thesis, Linköping University, Department of Computer and Information Science, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2438.
Full text
In order that epistemic formulas might be coupled with approximate databases, it is necessary to have a well-defined semantics for the knowledge operator and a method of reducing epistemic formulas to approximate formulas. In this thesis, two possible definitions of a semantics for the knowledge operator are proposed for use together with an approximate relational database:
* One based upon logical entailment (the dominant notion of knowledge in the literature); sound and complete rules for reduction to approximate formulas are explored and found not to be applicable to all formulas.
* One based upon algorithmic computability (in order to be practically feasible); the correspondence to the above operator on the one hand, and to the deductive capability of the agent on the other hand, is explored.
Also, an inductively defined semantics for a "know whether" operator is proposed and tested. Finally, an algorithm implementing the above is proposed, implemented in Java, and tested.
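One concrete way to read an entailment-based knowledge operator over an approximate relation is sketched below, using a rough-set-style pair of lower and upper approximations. This framing and all tuples are illustrative assumptions for intuition, not the thesis' actual formalization.

```python
# Approximate relation: a lower approximation (tuples certainly in)
# and an upper approximation (tuples possibly in); lower ⊆ upper.
lower = {("alice", "pilot")}
upper = {("alice", "pilot"), ("bob", "pilot")}

def known(tup):
    """K-operator: the database *knows* tup only if tup holds in
    every completion, i.e. tup is in the lower approximation."""
    return tup in lower

def known_whether(tup):
    """'Know whether': tup is either certainly true, or certainly
    false (outside even the upper approximation)."""
    return tup in lower or tup not in upper
```

Under this reading, a tuple in the gap between the approximations ("bob" above) is exactly the case where the database neither knows the fact nor knows its negation.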
Bandyopadhyay, Bortik. "Querying Structured Data via Informative Representations." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1595447189545086.
Full text
Guérin, Clément. "Proposition d'un cadre pour l'analyse automatique, l'interprétation et la recherche interactive d'images de bande dessinée." Thesis, La Rochelle, 2014. http://www.theses.fr/2014LAROS024/document.
Full text
Since the beginning of the twenty-first century, the cultural industries, both in France and worldwide, have been going through a massive and historic mutation. They have had to adapt to emerging digital technology, represented by the Internet and new handheld devices such as smartphones and tablets. Although some industries have successfully transferred part of their activity to the digital market and are close to finding a sound business model, the comic books industry keeps looking for the right solution and has not yet produced anything as convincing as the music or movie offers. While many young authors and writers use their creativity to produce works designed specifically for digital media, others are focused on the preservation and development of the already existing heritage. So far, efforts have concentrated on the transfer from printed to digital support, with special attention given to the specific features of digital media and how they can be used to create new reading conventions. There have also been concerns about content indexing, which is a hard task given the large amount of material created since the very beginning of comics history. From a scientific point of view, there are several issues related to these goals. First, it implies being able to identify the underlying structure of a comic book page. This comes through the extraction of the page's components, and their validation and correction based on the representation and reasoning capacities of two ontologies. The first one focuses on the representation of image analysis concepts, and the second one represents comic books domain knowledge. Second, special attention is given to the semantic enhancement of the extracted elements, based on their spatial relations to each other and on their own characteristics. These annotations can relate to single elements (e.g. the position of a panel in the reading sequence) or to the links between several elements (e.g. the text pronounced by a character).
Suarez, John Freddy Garavito. "Ontologias e DSLs na geração de sistemas de apoio à decisão, caso de estudo SustenAgro." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-26072017-113829/.
Full text
Decision support systems (DSSs) organize and process data and information to generate results that support decision making in a specific domain. They integrate knowledge from domain experts in each of their components: models, data, mathematical operations (that process the data) and analysis results. In traditional development methodologies, this knowledge must be interpreted and used by software developers to implement DSSs, because domain experts cannot formalize it in a computable model that can be integrated into DSSs. In practice, the knowledge modeling process is carried out by the developers, biasing domain knowledge and hindering the agile development of DSSs (as domain experts cannot modify code directly). To solve this problem, a method and web tool are proposed that use ontologies, in the Web Ontology Language (OWL), to represent expert knowledge, and a domain-specific language (DSL) to model DSS behavior. Ontologies in OWL are computable knowledge representations, which allow the definition of DSSs in a format understandable and accessible to both humans and machines. This method was used to create the Decisioner framework for the instantiation of DSSs. Decisioner automatically generates DSSs from an ontology and a description in its DSL, including the DSS interface (using a Web Components library). An online ontology editor, using a simplified format, allows domain experts to change the ontology and immediately see the consequences of their changes in the DSS. The method was validated through the instantiation of the SustenAgro DSS, using the Decisioner framework. The SustenAgro DSS evaluates the sustainability of sugarcane production systems in the center-south region of Brazil. Evaluations by sustainability experts from Embrapa Environment (partners in this project) showed that domain experts are capable of changing the ontology and the DSL program used, without the help of software developers, and that the system produced correct sustainability analyses.
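The ontology-plus-DSL generation idea can be miniaturized as follows. The indicator names, weights and the arrow syntax are invented stand-ins, not the actual SustenAgro ontology or the Decisioner DSL.

```python
# Hypothetical mini-ontology: sustainability indicators and weights.
ontology = {
    "soil_quality": {"weight": 0.5},
    "water_use":    {"weight": 0.3},
    "biodiversity": {"weight": 0.2},
}

# Tiny illustrative DSL: one rule per line, "<indicator> -> <score 0..10>".
program = """
soil_quality -> 8
water_use -> 6
biodiversity -> 4
"""

def evaluate(ontology, program):
    """Compute a weighted sustainability index from the DSL program,
    validating every indicator against the ontology."""
    total = 0.0
    for line in program.strip().splitlines():
        name, value = (part.strip() for part in line.split("->"))
        if name not in ontology:
            raise ValueError(f"unknown indicator: {name}")
        total += ontology[name]["weight"] * float(value)
    return total

evaluate(ontology, program)  # weighted index: 0.5*8 + 0.3*6 + 0.2*4
```

Because the DSL is checked against the ontology rather than against code, a domain expert editing either artifact gets immediate feedback, which is the property the thesis' evaluation with Embrapa experts tests at full scale.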