To see the other types of publications on this topic, follow the link: Semantic knowledge representation.

Dissertations / Theses on the topic 'Semantic knowledge representation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Semantic knowledge representation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Kachintseva, Dina (Dina D. ). "Semantic knowledge representation and analysis." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/76983.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 103).
Natural language is the means through which humans convey meaning to each other - each word or phrase is a label, or name, for an internal representation of a concept. This internal representation is built up from repeated exposure to particular examples, or instances, of a concept. The way in which we learn that a particular entity in our environment is a "bird" comes from seeing countless examples of different kinds of birds, and combining these experiences to form a mental representation of the concept. Consequently, each individual's understanding of a concept is slightly different, depending on their experiences. A person living in a place where the predominant types of birds are ostriches and emus will have a different representation of birds than a person who predominantly sees penguins, even if the two people speak the same language. This thesis presents a semantic knowledge representation that incorporates this fuzziness and context-dependence of concepts. In particular, this thesis provides several algorithms for learning the meaning behind text by using a dataset of experiences to build up an internal representation of the underlying concepts. Furthermore, several methods are proposed for learning new concepts by discovering patterns in the dataset and using them to compile representations for unnamed ideas. Essentially, these methods learn new concepts without knowing the particular label - or word - used to refer to them. Words are not the only way in which experiences can be described - numbers can often communicate a situation more precisely than words. In fact, many qualitative concepts can be characterized using a set of numeric values. For instance, the qualitative concepts of "young" or "strong" can be characterized using a range of ages or strengths that are equally context-specific and fuzzy. A young adult corresponds to a different range of ages from a young child or a young puppy. By examining the sorts of numeric values that are associated with a particular word in a given context, a person can build up an understanding of the concept. This thesis presents algorithms that use a combination of qualitative and numeric data to learn the meanings of concepts. Ultimately, this thesis demonstrates that this combination of qualitative and quantitative data enables more accurate and precise learning of concepts.
by Dina Kachintseva.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
2

Robinson, Sally Jane. "Semantic knowledge representation and access in children with genetic disorders." Thesis, University of Essex, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.435580.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Barb, Adrian S. "Knowledge representation and exchange of visual patterns using semantic abstractions." Diss., Columbia, Mo. : University of Missouri-Columbia, 2008. http://hdl.handle.net/10355/6674.

Full text
Abstract:
Thesis (Ph. D.)--University of Missouri-Columbia, 2008.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on July 21, 2009). Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
4

Alirezaie, Marjan. "Semantic Analysis Of Multi Meaning Words Using Machine Learning And Knowledge Representation." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70086.

Full text
Abstract:
The present thesis addresses machine learning in a domain of natural language phrases that are names of universities. It describes two approaches to this problem and a software implementation that has made it possible to evaluate and compare them. In general terms, the system's task is to learn to 'understand' the significance of the various components of a university name, such as the city or region where the university is located, the scientific disciplines that are studied there, or the name of a famous person which may be part of the university name. A concrete test for whether the system has acquired this understanding is when it is able to compose a plausible university name given some components that should occur in the name. In order to achieve this capability, our system learns the structure of available names of some universities in a given data set, i.e. it acquires a grammar for the microlanguage of university names. One of the challenges is that the system may encounter ambiguities due to multi-meaning words. This problem is addressed using a small ontology that is created during the training phase. Both domain knowledge and grammatical knowledge are represented using decision trees, which are an efficient method for concept learning. Besides inductive inference, their role is to partition the data set into a hierarchical structure which is used for resolving ambiguities. The present report also defines some modifications in the definitions of parameters, for example a parameter for entropy, which enable the system to deal with cognitive uncertainties. Our method for automatic syntax acquisition, ADIOS, is an unsupervised learning method. This method is described and discussed here, including a report on the outcome of the tests using our data set. The software used in this project was implemented in C.
APA, Harvard, Vancouver, ISO, and other styles
5

Matikainen, Tiina Johanna. "Semantic Representation of L2 Lexicon in Japanese University Students." Diss., Temple University Libraries, 2011. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/133319.

Full text
Abstract:
CITE/Language Arts
Ed.D.
In a series of studies using semantic relatedness judgment response times, Jiang (2000, 2002, 2004a) has claimed that L2 lexical entries fossilize with their equivalent L1 content or something very close to it. In another study using a more productive test of lexical knowledge (Jiang 2004b), however, the evidence for this conclusion was less clear. The present study is a partial replication of Jiang (2004b) with Japanese learners of English. The aims of the study are to investigate the influence of the first language (L1) on second language (L2) lexical knowledge, to investigate whether lexical knowledge displays frequency-related, emergent properties, and to investigate the influence of the L1 on the acquisition of L2 word pairs that have a common L1 equivalent. A sentence completion task was completed by 244 participants, who were shown sentence contexts in which they chose between L2 word pairs sharing a common equivalent in the students' first language, Japanese. The data were analyzed using the statistical analyses available in the programming environment R to quantify the participants' ability to discriminate between synonymous and non-synonymous use of these L2 word pairs. The results showed a strong bias against synonymy for all word pairs; the participants tended to make a distinction between the two synonymous items by assigning each word a distinct meaning. With the non-synonymous items, lemma frequency was closely related to the participants' success in choosing the correct word in the word pair. In addition, lemma frequency and the degree of similarity between the words in the word pair were closely related to the participants' overall knowledge of the non-synonymous meanings of the vocabulary items. The results suggest that the participants had a stronger preference for non-synonymous options than for the synonymous option. This suggests that the learners might have adopted a one-word, one-meaning learning strategy (Willis, 1998). The reasonably strong relationship between several of the usage-based statistics and the item measures from R suggests that with exposure learners are better able to use words in ways that are similar to native speakers of English, to differentiate between appropriate and inappropriate contexts and to recognize the boundary separating semantic overlap and semantic uniqueness. Lexical similarity appears to play a secondary role, in combination with frequency, in learners' ability to differentiate between appropriate and inappropriate contexts when using L2 word pairs that have a single translation in the L1.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
6

Figueiras, Paulo Alves. "A framework for supporting knowledge representation – an ontological based approach." Master's thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/7576.

Full text
Abstract:
Dissertation submitted to obtain the degree of Master in Electrical and Computer Engineering
The World Wide Web has had a tremendous impact on society and business in just a few years by making information instantly available. During this transition from physical to electronic means for information transport, the content and encoding of information has remained natural language and is only identified by its URL. Today, this is perhaps the most significant obstacle to streamlining business processes via the web. In order that processes may execute without human intervention, knowledge sources, such as documents, must become more machine understandable and must contain other information besides their main contents and URLs. The Semantic Web is a vision of a future web of machine-understandable data. On a machine understandable web, it will be possible for programs to easily determine what knowledge sources are about. This work introduces a conceptual framework and its implementation to support the classification and discovery of knowledge sources, supported by the above vision, where such sources’ information is structured and represented through a mathematical vector that semantically pinpoints the relevance of those knowledge sources within the domain of interest of each user. The presented work also addresses the enrichment of such knowledge representations, using the statistical relevance of keywords based on the classical vector space model concept, and extending it with ontological support, by using concepts and semantic relations, contained in a domain-specific ontology, to enrich knowledge sources’ semantic vectors. Semantic vectors are compared against each other, in order to obtain the similarity between them, and better support end users with knowledge source retrieval capabilities.
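The abstract above describes ranking knowledge sources by comparing ontology-enriched keyword vectors, but leaves the comparison measure unspecified; cosine similarity is the classical choice in the vector space model it builds on. The following is a minimal sketch of that idea, assuming a simple multiplicative boost for terms that match ontology concepts; the concept set, boost factor, and sample weights are invented for illustration, not taken from the thesis.

```python
import math

# Sketch: keyword weights (e.g., TF-IDF scores) are boosted when the term
# matches a concept in a domain ontology, then two "semantic vectors" are
# compared with cosine similarity. All names and numbers are illustrative.

ONTOLOGY_CONCEPTS = {"ontology", "semantic", "knowledge"}  # hypothetical
ONTOLOGY_BOOST = 1.5                                       # hypothetical

def enrich(vector):
    """Scale up the weights of terms that are concepts in the ontology."""
    return {term: w * ONTOLOGY_BOOST if term in ONTOLOGY_CONCEPTS else w
            for term, w in vector.items()}

def cosine(u, v):
    """Classical vector-space similarity between two sparse term vectors."""
    dot = sum(w * v.get(term, 0.0) for term, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

source = enrich({"semantic": 0.7, "vector": 0.4, "retrieval": 0.3})
profile = enrich({"semantic": 0.6, "ontology": 0.5, "web": 0.2})
print(f"similarity: {cosine(source, profile):.3f}")
```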
APA, Harvard, Vancouver, ISO, and other styles
7

Alirezaie, Marjan. "Bridging the Semantic Gap between Sensor Data and Ontological Knowledge." Doctoral thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-45908.

Full text
Abstract:
The rapid growth of sensor data can potentially enable a better awareness of the environment for humans. In this regard, interpretation of data needs to be human-understandable. For this, data interpretation may include semantic annotations that hold the meaning of numeric data. This thesis is about bridging the gap between quantitative data and qualitative knowledge to enrich the interpretation of data. There are a number of challenges which make the automation of the interpretation process non-trivial. Challenges include the complexity of sensor data, the amount of available structured knowledge and the inherent uncertainty in data. Under the premise that high level knowledge is contained in ontologies, this thesis investigates the use of current techniques in ontological knowledge representation and reasoning to confront these challenges. Our research is divided into three phases, where the focus of the first phase is on the interpretation of data for domains which are semantically poor in terms of available structured knowledge. During the second phase, we studied publicly available ontological knowledge for the task of annotating multivariate data. Our contribution in this phase is about applying a diagnostic reasoning algorithm to available ontologies. Our studies during the last phase have been focused on the design and development of a domain-independent ontological representation model equipped with a non-monotonic reasoning approach with the purpose of annotating time-series data. Our last contribution is related to coupling the OWL-DL ontology with a non-monotonic reasoner. The experimental platforms used for validation consist of a network of sensors which include gas sensors whose generated data is complex. A secondary data set includes time series medical signals representing physiological data, as well as a number of publicly available ontologies such as NCBO Bioportal repository.
APA, Harvard, Vancouver, ISO, and other styles
8

Babalola, Olubi Oluyomi. "A model based framework for semantic interpretation of architectural construction drawings." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/47553.

Full text
Abstract:
The study addresses the automated translation of architectural drawings from 2D Computer Aided Drafting (CAD) data into a Building Information Model (BIM), with emphasis on the nature, possible role, and limitations of a drafting language Knowledge Representation (KR) on the problem and process. The central idea is that CAD to BIM translation is a complex diagrammatic interpretation problem requiring a domain (drafting language) KR to render it tractable and that such a KR can take the form of an information model. Formal notions of drawing-as-language have been advanced and studied quite extensively for close to 25 years. The analogy implicitly encourages comparison between problem structures in both domains, revealing important similarities and offering guidance from the more mature field of Natural Language Understanding (NLU). The primary insight we derive from NLU involves the central role that a formal language description plays in guiding the process of interpretation (inferential reasoning), and the notable absence of a comparable specification for architectural drafting. We adopt a modified version of Engelhard's approach which expresses drawing structure in terms of a symbol set, a set of relationships, and a set of compositional frameworks in which they are composed. We further define an approach for establishing the features of this KR, drawing upon related work on conceptual frameworks for diagrammatic reasoning systems. We augment this with observation of human subjects performing a number of drafting interpretation exercises and derive some understanding of its inferential nature therefrom. We consider this indicative of the potential range of inferential processes a computational drafting model should ideally support. The KR is implemented as an information model using the EXPRESS language because it is in the public domain and is the implementation language of the target Industry Foundation Classes (IFC) model. We draw extensively from the IFC library to demonstrate that it can be applied in this manner, and apply the MVD methodology in defining the scope and interface of the DOM and IFC. This simplifies the IFC translation process significantly and minimizes the need for mapping. We conclude on the basis of selective implementations that a model reflecting the principles and features we define can indeed provide needed and otherwise unavailable support in drafting interpretation and other problems involving reasoning with this class of diagrammatic representations.
APA, Harvard, Vancouver, ISO, and other styles
9

Chee, Tahir Aidid. "A framework for the semantic representation of energy policies related to electricity generation." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:2c1f7a3c-4464-4bd0-b40b-67a0ad419529.

Full text
Abstract:
Energy models are optimisation tools which aid in the formulation of energy policies. Built on mathematics, the strength of these models lies in their ability to process numerical data, which in turn allows for the generation of an electricity generation mix that incorporates economic and environmental aspects. Nevertheless, a comprehensive formulation of an electricity generation mix should include aspects associated with politics and society, an evaluation of which requires the consideration of non-numerical qualitative information. Unfortunately, the use of energy models for optimisation coupled with the evaluation of information other than numerical data is a complicated task. Two prerequisites must be fulfilled for energy models to consider political and societal aspects. First, the information associated with politics and society in the context of energy policies must be identified and defined. Second, a software tool which automatically converts both quantitative and qualitative data into mathematical expressions for optimisation is required. We propose a software framework which uses a semantic representation based on ontologies. Our semantic representation contains both qualitative and quantitative data. The semantic representation is integrated into an Optimisation Modelling System which outputs a model consisting of a set of mathematical expressions. The system uses ontologies, engineering models, logic inference and linear programming. To demonstrate our framework, a Prototype Energy Modelling System which accepts energy policy goals and targets as inputs and outputs an optimised electricity generation mix has been developed. To validate the capabilities of our prototype, a case study has been conducted. This thesis discusses the framework, prototype and case study.
APA, Harvard, Vancouver, ISO, and other styles
10

Lister, Kendall. "Toward semantic interoperability for software systems." Connect to thesis, 2008. http://repository.unimelb.edu.au/10187/3594.

Full text
Abstract:
“In an ill-structured domain you cannot, by definition, have a pre-compiled schema in your mind for every circumstance and context you may find ... you must be able to flexibly select and arrange knowledge sources to most efficaciously pursue the needs of a given situation.” [57]
In order to interact and collaborate effectively, agents, whether human or software, must be able to communicate through common understandings and compatible conceptualisations. Ontological differences that arise either from pre-existing assumptions or as side-effects of the process of specification are a fundamental obstacle that must be overcome before communication can occur. Similarly, the integration of information from heterogeneous sources is an unsolved problem. Efforts have been made to assist integration, through both methods and mechanisms, but automated integration remains an unachieved goal. Communication and information integration are problems of meaning and interaction, or semantic interoperability. This thesis contributes to the study of semantic interoperability by identifying, developing and evaluating three approaches to the integration of information. These approaches have in common that they are lightweight in nature, pragmatic in philosophy and general in application.
The first work presented is an effort to integrate a massive, formal ontology and knowledge-base with semi-structured, informal heterogeneous information sources via a heuristic-driven, adaptable information agent. The goal of the work was to demonstrate a process by which task-specific knowledge can be identified and incorporated into the massive knowledge-base in such a way that it can be generally re-used. The practical outcome of this effort was a framework that illustrates a feasible approach to providing the massive knowledge-base with an ontologically-sound mechanism for automatically generating task-specific information agents to dynamically retrieve information from semi-structured information sources without requiring machine-readable meta-data.
The second work presented is based on reviving a previously published and neglected algorithm for inferring semantic correspondences between fields of tables from heterogeneous information sources. An adapted form of the algorithm is presented and evaluated on relatively simple and consistent data collected from web services in order to verify the original results, and then on poorly-structured and messy data collected from web sites in order to explore the limits of the algorithm. The results are presented via standard measures and are accompanied by detailed discussions on the nature of the data encountered and an analysis of the strengths and weaknesses of the algorithm and the ways in which it complements other approaches that have been proposed.
Acknowledging the cost and difficulty of integrating semantically incompatible software systems and information sources, the third work presented is a proposal and a working prototype for a web site to facilitate the resolving of semantic incompatibilities between software systems prior to deployment, based on the commonly-accepted software engineering principle that the cost of correcting faults increases exponentially as projects progress from phase to phase, with post-deployment corrections being significantly more costly than those performed earlier in a project’s life. The barriers to collaboration in software development are identified and steps taken to overcome them. The system presented draws on the recent collaborative successes of social and collaborative on-line projects such as SourceForge, Del.icio.us, digg and Wikipedia and a variety of techniques for ontology reconciliation to provide an environment in which data definitions can be shared, browsed and compared, with recommendations automatically presented to encourage developers to adopt data definitions compatible with previously developed systems.
In addition to the experimental works presented, this thesis contributes reflections on the origins of semantic incompatibility with a particular focus on interaction between software systems, and between software systems and their users, as well as detailed analysis of the existing body of research into methods and techniques for overcoming these problems.
APA, Harvard, Vancouver, ISO, and other styles
11

Chungoora, Nitishal. "A framework to support semantic interoperability in product design and manufacture." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/5897.

Full text
Abstract:
It has been recognised that the ability to communicate the meaning of concepts and their intent within and across system boundaries, for supporting key decisions in product design and manufacture, is impaired by the semantic interoperability issues that are presently encountered. This work contributes to the field of semantic interoperability in product design and manufacture. It draws on the understanding and application of relevant concepts from the computer science world, notably ontology-based approaches, to help resolve semantic interoperability problems. A novel ontological approach, identified as the Semantic Manufacturing Interoperability Framework (SMIF), has been proposed following an exploration of the important requirements to be satisfied. The framework, built on top of a Common Logic-based ontological formalism, consists of a manufacturing foundation to capture the semantics of core feature-based design and manufacture concepts, over which the specialisation of domain models can take place. Furthermore, the framework supports the mechanisms for allowing the reconciliation of semantics, thereby improving the knowledge sharing capability between heterogeneous domains that need to interoperate and have been based on the same manufacturing foundation. This work also analyses a number of test case scenarios, where the framework has been deployed for fostering knowledge representation and reconciliation of models involving products with standard hole features and their related machining process sequences. The test cases have shown that the Semantic Manufacturing Interoperability Framework (SMIF) provides effective support towards achieving semantic interoperability in product design and manufacture. Proposed extensions to the framework are additionally identified so as to provide a view on imminent future work.
APA, Harvard, Vancouver, ISO, and other styles
12

Assefa, Shimelis G. "Human concept cognition and semantic relations in the unified medical language system: A coherence analysis." Thesis, University of North Texas, 2007. https://digital.library.unt.edu/ark:/67531/metadc4008/.

Full text
Abstract:
There is almost a universal agreement among scholars in information retrieval (IR) research that knowledge representation needs improvement. As a core component of an IR system, improvement of the knowledge representation system has so far involved manipulation of this component based on principles such as vector space, probabilistic approach, inference network, and language modeling, yet the required improvement is still far from fruition. One promising approach that is highly touted to offer a potential solution exists in the cognitive paradigm, where knowledge representation practice should involve, or start from, modeling the human conceptual system. This study, based on two related cognitive theories, the theory-based approach to concept representation and the psychological theory of semantic relations, ventured to explore the connection between the human conceptual model and the knowledge representation model (represented by samples of concepts and relations from the unified medical language system, UMLS). Guided by these cognitive theories and based on related and appropriate data-analytic tools, such as nonmetric multidimensional scaling, hierarchical clustering, and content analysis, this study aimed to conduct an exploratory investigation to answer four related questions. Divided into two groups, a total of 89 research participants took part in two sets of cognitive tasks. The first group (49 participants) sorted 60 food names into categories, followed by simultaneous description of the derived categories to explain the rationale for category judgment. The second group (40 participants) sorted 47 semantic relations (the nonhierarchical associative types) into 5 categories known a priori. Three datasets resulted from the cognitive tasks: food-sorting data, relation-sorting data, and free, unstructured text of category descriptions. Using the data-analytic tools mentioned, data analysis was carried out and important results and findings were obtained that offer plausible explanations to the four research questions. Major results include the following: (a) through discriminant analysis, category members were predicted consistently 70% of the time; (b) the categorization bases are largely simplified rules, naïve explanations, and features; (c) individuals' theoretical explanations remain valid and stay stable across category members; (d) the human conceptual model can be fairly reconstructed in a low-dimensional space where 93% of the variance in the dimensional space is accounted for by the subjects' performance; (e) participants consistently classify 29 of the 47 semantic relations; and (f) individuals perform better in the functional and spatial dimensions of the semantic relations classification task and perform poorly in the conceptual dimension.
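As a rough illustration of the nonmetric multidimensional scaling step named above, the sketch below embeds sorting-task dissimilarities in a two-dimensional space. The food names, the co-assignment counts, and the convention of taking dissimilarity as one minus the co-assignment rate are assumptions made for illustration, not the study's actual data or procedure.

```python
import numpy as np
from sklearn.manifold import MDS

# Sketch: convert sorting-task co-occurrence counts (how many of the 49
# sorters placed two items in the same category) into dissimilarities and
# embed them with nonmetric MDS. Items and counts are invented.
n_sorters = 49
items = ["apple", "banana", "bread", "cheese"]  # hypothetical food names
co_counts = np.array([[49, 40,  5,  3],
                      [40, 49,  6,  4],
                      [ 5,  6, 49, 20],
                      [ 3,  4, 20, 49]])
dissim = 1.0 - co_counts / n_sorters  # 0 = always sorted together

mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
coords = mds.fit_transform(dissim)
for item, (x, y) in zip(items, coords):
    print(f"{item:8s} -> ({x: .3f}, {y: .3f})")
print(f"stress: {mds.stress_:.3f}")
```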
APA, Harvard, Vancouver, ISO, and other styles
13

Nguyen, Vinh Thi Kim. "Semantic Web Foundations for Representing, Reasoning, and Traversing Contextualized Knowledge Graphs." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1516147861789615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Magka, Despoina. "Foundations and applications of knowledge representation for structured entities." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:4a3078cc-5770-4a9b-81d4-8bc52b41e294.

Full text
Abstract:
Description Logics form a family of powerful ontology languages widely used by academics and industry experts to capture and intelligently manage knowledge about the world. A key advantage of Description Logics is their amenability to automated reasoning that enables the deduction of knowledge that has not been explicitly stated. However, in order to ensure decidability of automated reasoning algorithms, suitable restrictions are usually enforced on the shape of structures that are expressible using Description Logics. As a consequence, Description Logics fall short of expressive power when it comes to representing cyclic structures, which abound in life sciences and other disciplines. The objective of this thesis is to explore ontology languages that are better suited for the representation of structured objects. It is suggested that an alternative approach which relies on nonmonotonic existential rules can provide a promising candidate for modelling such domains. To this end, we have built a comprehensive theoretical and practical framework for the representation of structured entities along with a surface syntax designed to allow the creation of ontological descriptions in an intuitive way. Our formalism is based on nonmonotonic existential rules and exhibits a favourable balance between expressive power and computational as well as empirical tractability. In order to ensure decidability of reasoning, we introduce a number of acyclicity criteria that strictly generalise many of the existing ones. We also present a novel stratification condition that properly extends 'classical' stratification and allows for capturing both definitional and conditional aspects of complex structures. The applicability of our formalism is supported by a prototypical implementation, which is based on an off-the-shelf answer set solver and is tested over a realistic knowledge base. Our experimental results demonstrate improvement of up to three orders of magnitude in comparison with previous evaluation efforts and also expose numerous modelling errors of a manually curated biochemical knowledge base. Overall, we believe that our work lays the practical and theoretical foundations of an ontology language that is well-suited for the representation of structured objects. From a modelling point of view, our approach could stimulate the adoption of a different and expressive reasoning paradigm for which robustly engineered mature reasoners are available; it could thus pave the way for the representation of a broader spectrum of knowledge. At the same time, our theoretical contributions reveal useful insights into logic-based knowledge representation and reasoning. Therefore, our results should be of value to ontology engineers and knowledge representation researchers alike.
APA, Harvard, Vancouver, ISO, and other styles
15

Assefa, Shimelis G. O'Connor Brian C. "Human concept cognition and semantic relations in the unified medical language system a coherence analysis /." [Denton, Tex.] : University of North Texas, 2007. http://digital.library.unt.edu/permalink/meta-dc-4008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Madhavan, Jayant. "Using known schemas and mappings to construct new semantic mappings /." Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/6852.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Sudre, Gustavo. "Characterizing the Spatiotemporal Neural Representation of Concrete Nouns Across Paradigms." Research Showcase @ CMU, 2012. http://repository.cmu.edu/dissertations/315.

Full text
Abstract:
Most of the work investigating the representation of concrete nouns in the brain has focused on the locations that code the information. We present a model to study the contributions of perceptual and semantic features to the neural code representing concepts over time and space. The model is evaluated using magnetoencephalography data from different paradigms and not only corroborates previous findings regarding a distributed code, but provides further details about how the encoding of different subcomponents varies in the space-time spectrum. The model also successfully generalizes to novel concepts that it has never seen during training, which argues for the combination of specific properties in forming the meaning of concrete nouns in the brain. The results across paradigms are in agreement when the main differences among the experiments (namely, the number of repetitions of the stimulus, the task the subjects performed, and the type of stimulus provided) were taken into consideration. More specifically, these results suggest that features specific to the physical properties of the stimuli, such as word length and right-diagonalness, are encoded in posterior regions of the brain in the first hundreds of milliseconds after stimulus onset. Then, properties inherent to the nouns, such as "is it alive?" and "can you pick it up?", are represented in the signal starting at about 250 ms, focusing on more anterior parts of the cortex. The code for these different features was found to be distributed over time and space, and it was common for several regions to simultaneously code for a particular property. Moreover, most anterior regions were found to code for multiple features, and a complex temporal profile could be observed for the majority of properties. For example, some features inherent to the nouns were encoded earlier than others, and the extent of time in which these properties could be decoded varied greatly among them. These findings complement much of the work previously described in the literature, and offer new insights about the temporal aspects of the neural encoding of concrete nouns. This model provides a spatiotemporal signature of the representation of objects in the brain. Paired with data from carefully-designed paradigms, the model is an important tool with which to analyze the commonalities of the neural code across stimulus modalities and tasks performed by the subjects.
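The study's actual model is not specified in this abstract, but the general shape of time-resolved decoding, fitting a separate predictor per time window and asking when a semantic feature such as "is it alive?" becomes decodable, can be sketched as below. The data are random placeholders, and the window size, ridge penalty, and train/test split are assumptions, not the study's pipeline.

```python
import numpy as np

# Sketch of time-resolved decoding: for each window of (simulated) MEG
# data, fit a closed-form ridge regression predicting one semantic feature
# and report held-out correlation. All data here are random placeholders.
rng = np.random.default_rng(0)
n_nouns, n_sensors, n_times = 60, 102, 50
X = rng.standard_normal((n_nouns, n_sensors, n_times))  # fake recordings
y = rng.standard_normal(n_nouns)                        # fake feature

train, test = np.arange(40), np.arange(40, 60)
alpha = 10.0  # ridge penalty (assumed)

for t0 in range(0, n_times, 10):  # non-overlapping 10-sample windows
    Xw = X[:, :, t0:t0 + 10].reshape(n_nouns, -1)
    A = Xw[train].T @ Xw[train] + alpha * np.eye(Xw.shape[1])
    w = np.linalg.solve(A, Xw[train].T @ y[train])
    r = np.corrcoef(Xw[test] @ w, y[test])[0, 1]
    print(f"window {t0:2d}-{t0 + 9:2d}: held-out correlation {r: .3f}")
```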
APA, Harvard, Vancouver, ISO, and other styles
18

Ceroni, Samuele. "Time-evolving knowledge graphs based on Poirot: dynamic representation of patients' voices." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23095/.

Full text
Abstract:
Nowadays people are spending more and more time online: this is a permanent change that produces a huge amount of diversified data, like never before, which needs to be managed in order to extract knowledge from it. This also involves social media, which produces free-text information that is very difficult to process but occasionally very useful. For instance, in the field of rare diseases, our specific testing context, this could make it possible to organize the voices of patients and caregivers, which are difficult to gather otherwise. People who are affected by a rare disease often strive to find enough information about it. Indeed, not much material is available online, and the number of doctors qualified for those specific diseases is quite limited. Social networks then become the best place to exchange ideas and opinions. The main difficulty in finding useful information on social networks, though, is that text gets lost quickly, and it is not straightforward to give it a semantic structure and to dynamically evolve this representation over time. In the literature there are techniques that manage to transform unstructured data into useful information, extracting it using artificial intelligence. These techniques are often quite expressive and are able to precisely convert data into knowledge, but they are not directly connected to text sources, nor to a system that stores the extracted information and allows it to be updated. Consequently, they are not well automated for incrementally keeping information up to date as new text is provided, so a manual process is needed to do it. The contribution proposed in this thesis focuses on how to use these technologies to keep information in order over time, enhancing its usability and freshness. It consists of a system that connects the text source providers to the built knowledge graph, which contains the knowledge acquired and updated.
APA, Harvard, Vancouver, ISO, and other styles
19

Hargreaves, Nigel. "Novel processes for smart grid information exchange and knowledge representation using the IEC common information model." Thesis, Brunel University, 2013. http://bura.brunel.ac.uk/handle/2438/7671.

Full text
Abstract:
The IEC Common Information Model (CIM) is of central importance in enabling smart grid interoperability. Its continual development aims to meet the needs of the smart grid for semantic understanding and knowledge representation for a widening domain of resources and processes. With smart grid evolution the importance of information and data management has become an increasingly pressing issue not only because far more data is being generated using modern sensing, control and measuring devices but also because information is now becoming recognised as the ‘integral component’ that facilitates the optimal flexibility required of the smart grid. This thesis looks at the impacts of CIM implementation upon the landscape of smart grid issues and presents research from within National Grid contributing to three key areas in support of further CIM deployment. Taking the issue of Enterprise Information Management first, an information management framework is presented for CIM deployment at National Grid. Following this the development and demonstration of a novel secure cloud computing platform to handle such information is described. Power system application (PSA) models of the grid are partial knowledge representations of a shared reality. To develop the completeness of our understanding of this reality it is necessary to combine these representations. The second research contribution reports on a novel methodology for a CIM-based model repository to align PSA representations and provide a knowledge resource for building utility business intelligence of the grid. The third contribution addresses the need for greater integration of information relating to energy storage, an essential aspect of smart energy management. It presents the strategic rationale for integrated energy modeling and a novel extension to the existing CIM standards for modeling grid-scale energy storage. Significantly, this work has already contributed to a larger body of work on modeling Distributed Energy Resources currently under development at the Electric Power Research Institute (EPRI) in the USA.
APA, Harvard, Vancouver, ISO, and other styles
20

Qu, Xiaoyan Angela. "Discovery and Prioritization of Drug Candidates for Repositioning Using Semantic Web-based Representation of Integrated Diseasome-Pharmacome Knowledge." University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1254403900.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Castles, Ricky Thomas. "A Knowledge Map-Centric Feedback-Based Approach to Information Modeling and Academic Assessment." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/26069.

Full text
Abstract:
The structure of education has changed dramatically in the last few decades. Despite major changes in how students are learning, there has not been as dramatic of a shift in how student learning is assessed. Standard letter grades are still the paradigm for evaluating a student's mastery of course content and the grade point average is still one of the largest determining factors in judging a graduate's academic aptitude. This research presents a modern approach to modeling knowledge and evaluating students. Based upon the model of a closed-loop feedback controller, it considers education as a system with an instructor determining the set of knowledge he or she wishes to impart to students, the instruction method as a transfer function, and evaluation methods serving as sensors to provide feedback determining the subset of the information students have learned. This method uses comprehensive concept maps to depict all of the concepts and relationships an educator intends to cover and student maps to depict the subset of knowledge that students have mastered. Concept inventories are used as an assessment tool to determine, at the conceptual level, what students have learned. Each question in the concept inventory is coupled with one or more components of a comprehensive concept map and, based upon the answers students give to concept inventory questions, those components may or may not appear in a student's knowledge map. The level of knowledge a student demonstrates of each concept and relationship is presented in his or her student map using a color scheme tied to the levels of learning in Bloom's taxonomy. Topological principles are used to establish metrics to quantify the distance between two students' knowledge maps and the distance between a student's knowledge map and the corresponding comprehensive concept map (see the sketch after this abstract). A method is also developed for forming aggregate maps representative of the knowledge of a group of students. Aggregate maps can be formed for entire classes of students or based upon various demographics including race and gender. XML schemas have been used throughout this research to encapsulate the information in both comprehensive maps and student maps and to store correlations between concept inventory questions and corresponding comprehensive map components. Three software packages have been developed: to store concept inventories in an XML schema, to process student responses to concept inventory questions and generate student maps as a result, and to generate aggregate maps. The methods presented herein have been applied to two learning units that are part of two freshman engineering courses at Virginia Tech. Example student maps and aggregate maps are included for these course units.
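The topological metrics themselves are not given in the abstract, so the sketch below shows only one simple possibility: treating each map as a set of (concept, relation, concept) edges and taking the Jaccard distance between edge sets. The sample maps are invented, and this particular formula is an assumption for illustration, not the measure defined in the dissertation.

```python
# Sketch: a concept map as a set of (concept, relation, concept) edges,
# compared by Jaccard distance (0 = identical maps, 1 = no shared edges).
# Both the maps and the choice of metric are illustrative assumptions.

def map_distance(map_a, map_b):
    """Jaccard distance between two concept maps' edge sets."""
    union = map_a | map_b
    if not union:
        return 0.0
    return 1.0 - len(map_a & map_b) / len(union)

comprehensive = {("voltage", "relates-to", "current"),
                 ("current", "flows-through", "resistor"),
                 ("power", "product-of", "voltage-and-current")}
student = {("voltage", "relates-to", "current"),
           ("current", "flows-through", "resistor")}
print(f"distance to comprehensive map: "
      f"{map_distance(student, comprehensive):.3f}")
```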
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
22

Charbel, Nathalie. "Semantic Representation of a Heterogeneous Document Corpus for an Innovative Information Retrieval Model : Application to the Construction Industry." Thesis, Pau, 2018. http://www.theses.fr/2018PAUU3025/document.

Full text
Abstract:
The recent advances of Information and Communication Technology (ICT) have brought radical transformations to several industry sectors. Adopting semantic technologies has demonstrated several benefits, enabling a better representation of data and empowering reasoning capabilities over it, especially within an Information Retrieval (IR) application. There are, however, still few industrial applications, as unresolved issues remain, such as the shift from heterogeneous interdependent documents to semantic data models and the representation of search results that takes relevant contextual information into account. In this thesis, we address two main challenges. The first one focuses on the representation of the collective knowledge embedded in a heterogeneous document corpus, covering both the domain-specific content of the documents and other structural aspects such as their metadata, their dependencies (e.g., references), etc. The second one focuses on providing users with innovative search results from the heterogeneous document corpus, helping them interpret the information that is relevant to their inquiries and track cross-document dependencies. To cope with these challenges, we first propose a semantic representation of a heterogeneous document corpus that generates a semantic graph covering both the structural and the domain-specific dimensions of the corpus. Then, we introduce a novel data structure for query answers, extracted from this graph, which embeds core information together with structural-based and domain-specific context. In order to provide such query answers, we propose an innovative query processing pipeline, which involves query interpretation, search, ranking, and presentation modules, with a focus on the search and ranking modules. Our proposal is generic, as it can be applied in different domains. However, in this thesis, it has been experimented with in the Architecture, Engineering and Construction (AEC) industry using real-world construction projects.
APA, Harvard, Vancouver, ISO, and other styles
23

Guizol, Léa. "Partitioning semantics for entity resolution and link repairs in bibliographic knowledge bases." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20188/document.

Full text
Abstract:
We propose a qualitative entity resolution approach to repair links in a bibliographic knowledge base. Our research question is: "How to detect and repair erroneous links in a bibliographic knowledge base using qualitative methods?" The proposed approach is decomposed into two major parts. The first contribution consists in a partitioning semantics using symbolic criteria in order to detect erroneous links. The second one consists in a repair algorithm restoring link quality. We implemented our approach and proposed a qualitative and quantitative evaluation for the partitioning semantics, as well as proving certain properties of the repair algorithms.
APA, Harvard, Vancouver, ISO, and other styles
24

Ren, Yuan. "Tractable reasoning with quality guarantee for expressive description logics." Thesis, University of Aberdeen, 2014. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=217884.

Full text
Abstract:
DL-based ontologies have been widely used as knowledge infrastructures in knowledge management systems and on the Semantic Web. The development of efficient, sound and complete reasoning technologies has been a central topic in DL research. Recently, the paradigm shift from professional to novice users, and from standalone and static to inter-linked and dynamic applications, raises new challenges: Can users build and evolve ontologies, both static and dynamic, with features provided by expressive DLs, while still enjoying efficient reasoning as in tractable DLs, without worrying too much about the quality (soundness and completeness) of results? To answer these challenges, this thesis investigates the problem of tractable and quality-guaranteed reasoning for ontologies in expressive DLs. The thesis develops syntactic approximation, a consequence-based reasoning procedure with worst-case PTime complexity, theoretically sound and empirically high-recall results, for ontologies constructed in DLs more expressive than any tractable DL. The thesis shows that a set of semantic completeness-guarantee conditions can be identified to efficiently check if such a procedure is complete. Many ontologies tested in the thesis, including difficult ones for an off-the-shelf reasoner, satisfy such conditions. Furthermore, the thesis presents a stream reasoning mechanism to update reasoning results on dynamic ontologies without complete re-computation. Such a mechanism implements the Delete-and-Re-derive strategy with a truth maintenance system, and can help to reduce unnecessary over-deletion and re-derivation in stream reasoning and to improve its efficiency. As a whole, the thesis develops a worst-case tractable, guaranteed sound, conditionally complete and empirically high-recall reasoning solution for both static and dynamic ontologies in expressive DLs. Some techniques presented in the thesis can also be used to improve the performance and/or completeness of other existing reasoning solutions. The results can further be generalised and extended to support a wider range of knowledge representation formalisms, especially when a consequence-based algorithm is available.
APA, Harvard, Vancouver, ISO, and other styles
25

Sjöö, Kristoffer. "Functional understanding of space : Representing spatial knowledge using concepts grounded in an agent's purpose." Doctoral thesis, KTH, Datorseende och robotik, CVAP, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-48400.

Full text
Abstract:
This thesis examines the role of function in representations of space by robots - that is, dealing directly and explicitly with those aspects of space and objects in space that serve some purpose for the robot. It is suggested that taking function into account helps increase the generality and robustness of solutions in an unpredictable and complex world, and the suggestion is affirmed by several instantiations of functionally conceived spatial models. These include perceptual models for the "on" and "in" relations based on support and containment; context-sensitive segmentation of 2-D maps into regions distinguished by functional criteria; and, learned predictive models of the causal relationships between objects in physics simulation. Practical application of these models is also demonstrated in the context of object search on a mobile robotic platform.
QC 20111125
APA, Harvard, Vancouver, ISO, and other styles
26

Breux, Yohan. "Du capteur à la sémantique : contribution à la modélisation d'environnement pour la robotique autonome en interaction avec l'humain." Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS059/document.

Full text
Abstract:
Autonomous robotics is successfully used in controlled industrial environments where instructions follow predetermined implementation plans. Domestic robotics is the challenge of years to come and involves several new problems: we have to move from a closed, bounded world to an open one. A robot can no longer rely only on its raw sensor data, as they merely show the absence or presence of things. It should also understand why objects are in its environment, as well as the meaning of its tasks. Besides, it has to interact with human beings and therefore has to share their conceptualization through natural language. Indeed, each language is in itself an abstract and compact representation of the world which links up a variety of concrete and abstract concepts. However, real observations are more complex than our simplified semantic representations. Thus they can come into conflict: this is the price for a finite representation of an "infinite" world. To address those challenges, we propose in this thesis a global architecture bringing together different modalities of environment representation. It makes it possible to relate a physical representation to abstract concepts expressed in natural language. The inputs of our system are two-fold: sensor data feed the perception modality, whereas textual information and human interaction are linked to the semantic modality. The novelty of our approach is in the introduction of an intermediate modality based on instances (physical realizations of semantic concepts). Among other things, it allows perceptual data to be connected, indirectly and without contradiction, to knowledge in natural language. We propose in this context an original method to automatically generate an ontology for the description of physical objects. On the perception side, we investigate some properties of image descriptors extracted from intermediate layers of convolutional neural networks. In particular, we show their relevance for instance representation as well as their use for the estimation of similarity transformations. We also propose a method to relate instances to our object-oriented ontology which, under the assumption of an open world, can be seen as an alternative to classical classification methods. Finally, the global flow of our system is illustrated through the description of our user request management processes.
APA, Harvard, Vancouver, ISO, and other styles
27

Le, Pendu Paea Jean-Francois 1974. "Ontology databases." Thesis, University of Oregon, 2010. http://hdl.handle.net/1794/10575.

Full text
Abstract:
On the one hand, ontologies provide a means of formally specifying complex descriptions and relationships about information in a way that is expressive yet amenable to automated processing and reasoning. When data are annotated using terms from an ontology, the instances take on formal semantics. Compared to an ontology, which may have as few as a dozen or as many as tens of thousands of terms, the annotated instances for the ontology are often several orders of magnitude more numerous, from millions to possibly trillions of instances. Unfortunately, existing reasoning techniques cannot scale to these sizes. On the other hand, relational database management systems provide mechanisms for storing, retrieving, and maintaining the integrity of large amounts of data. Relational database management systems are well known for scaling to extremely large sizes of data, some claiming to manage over a quadrillion data items. This dissertation defines ontology databases as a mapping from ontologies to relational databases, in order to combine the expressiveness of ontologies with the scalability of relational databases. This mapping is sound and, under certain conditions, complete. That is, the database behaves like a knowledge base that is faithful to the semantics of a given ontology. What distinguishes this work is the treatment of the relational database management system as an active reasoning component rather than as a passive storage and retrieval system. The main contributions this dissertation highlights include: (i) the theory and implementation particulars for mapping ontologies to databases, (ii) subsumption-based reasoning, (iii) inconsistency detection, (iv) scalability studies, and (v) information integration (specifically, information exchange). This work is novel because it is the first attempt to embed a logical reasoning system, specified by a Semantic Web ontology, into a plain relational database management system using active database technologies. This work also introduces the not-gadget, which relaxes the closed-world assumption and increases the expressive power of the logical system without significant cost. This work also demonstrates how to deploy the same framework as an information integration system for data exchange scenarios, which is an important step toward semantic information integration over distributed data repositories.
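As a toy illustration of the active-reasoning idea (the schema, class names, and trigger below are hypothetical, not the dissertation's actual mapping): each ontology class becomes a table, and a trigger materialises the subsumption Student ⊑ Person at insertion time, so a plain SQL query already sees the inferred fact.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One unary table per ontology class (hypothetical mapping).
cur.execute("CREATE TABLE Person  (id TEXT PRIMARY KEY)")
cur.execute("CREATE TABLE Student (id TEXT PRIMARY KEY)")

# Active rule: asserting Student(x) derives Person(x), i.e. Student ⊑ Person.
cur.execute("""
    CREATE TRIGGER student_isa_person
    AFTER INSERT ON Student
    BEGIN
        INSERT OR IGNORE INTO Person(id) VALUES (NEW.id);
    END
""")

cur.execute("INSERT INTO Student(id) VALUES ('alice')")
print(cur.execute("SELECT id FROM Person").fetchall())  # [('alice',)] -- inferred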
Committee in charge: Dejing Dou, Chairperson, Computer & Information Science; Zena Ariola, Member, Computer & Information Science; Christopher Wilson, Member, Computer & Information Science; Monte Westerfield, Outside Member, Biology
APA, Harvard, Vancouver, ISO, and other styles
28

Qadeer, Shahab. "Integration of Recommendation and Partial Reference Alignment Algorithms in a Session based Ontology Alignment System." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-73135.

Full text
Abstract:
SAMBO is a system that assists users in aligning and merging two ontologies (i.e. in finding inter-ontology relationships). The user performs an alignment process with the help of mapping suggestions. The objective of this thesis work is to extend the existing system with new components: multiple sessions, integration of an ontology alignment strategy, a recommendation system, integration of a system that can use results from previous sessions, and integration of a partial reference alignment (PRA) that can be used to filter mapping suggestions. Most of the theoretical work already existed, but it was important to study and implement how these components can be integrated into the system and how they can work together.
APA, Harvard, Vancouver, ISO, and other styles
29

Franco, Salvador Marc. "A Cross-domain and Cross-language Knowledge-based Representation of Text and its Meaning." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/84285.

Full text
Abstract:
Natural Language Processing (NLP) is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human languages. One of its most challenging aspects involves enabling computers to derive meaning from human natural language. To do so, several meaning or context representations have been proposed with competitive performance. However, these representations still have room for improvement when working in a cross-domain or cross-language scenario. In this thesis we study the use of knowledge graphs as a cross-domain and cross-language representation of text and its meaning. A knowledge graph is a graph that expands and relates the original concepts belonging to a set of words. We obtain its characteristics using a wide-coverage multilingual semantic network as a knowledge base. This provides coverage of hundreds of languages and millions of general and specific concepts. As a starting point of our research we employ knowledge graph-based features - along with other traditional ones and meta-learning - for the NLP task of single- and cross-domain polarity classification. The analysis and conclusions of that work provide evidence that knowledge graphs capture meaning in a domain-independent way. The next part of our research takes advantage of the multilingual semantic network and focuses on cross-language Information Retrieval (IR) tasks. First, we propose a fully knowledge graph-based model of similarity analysis for cross-language plagiarism detection. Next, we improve that model to cover out-of-vocabulary words and verbal tenses, and apply it to cross-language document retrieval, categorisation, and plagiarism detection. Finally, we study the use of knowledge graphs for the NLP tasks of community question answering, native language identification, and language variety identification. The contributions of this thesis demonstrate the potential of knowledge graphs as a cross-domain and cross-language representation of text and its meaning for NLP and IR tasks. These contributions have been published in several international conferences and journals.
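A toy sketch of the knowledge-graph representation (the four-entry network below is an invented stand-in for the wide-coverage multilingual semantic network the thesis uses): word sets are expanded into concept graphs, and texts are compared through graph overlap.

```python
# Toy stand-in for a multilingual semantic network: concept -> related concepts.
# (Hypothetical data; the thesis relies on a wide-coverage network instead.)
NETWORK = {
    "bank":  {"finance", "river"},
    "money": {"finance", "economy"},
    "shore": {"river", "coast"},
    "loan":  {"finance", "debt"},
}

def knowledge_graph(words):
    """Expand a bag of words into the set of concepts it touches."""
    graph = set()
    for w in words:
        graph |= {w} | NETWORK.get(w, set())
    return graph

def similarity(words_a, words_b):
    """Jaccard overlap of the two expanded graphs (a simple proxy)."""
    a, b = knowledge_graph(words_a), knowledge_graph(words_b)
    return len(a & b) / len(a | b)

print(similarity({"bank", "money"}, {"loan"}))   # shares 'finance' via expansion
print(similarity({"bank", "shore"}, {"loan"}))
```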
Franco Salvador, M. (2017). A Cross-domain and Cross-language Knowledge-based Representation of Text and its Meaning [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/84285
APA, Harvard, Vancouver, ISO, and other styles
30

Chen, Jieying. "Knowledge Extraction from Description Logic Terminologies." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS531.

Full text
Abstract:
An increasing number of large ontologies have been developed and made available in repositories such as the NCBO BioPortal. Ensuring access to the most relevant knowledge contained in large ontologies has been identified as an important challenge. To this end, we propose three different notions in this thesis: minimal ontology modules (sub-ontologies that preserve all entailments over a given vocabulary), best ontology excerpts (a certain small number of axioms that best capture the knowledge about the vocabulary while allowing a degree of semantic loss), and projection modules (sub-ontologies of a target ontology that entail the subsumption, instance, and conjunctive queries that follow from a reference ontology). For computing minimal modules and best excerpts, we introduce the notion of subsumption justification as an extension of justification (a minimal set of axioms needed to preserve a logical consequence) that captures the subsumption knowledge between a term and all other terms in the vocabulary. Similarly, we introduce the notion of projection justifications, which entail the consequences of the three kinds of queries, in order to compute projection modules. Finally, we evaluate our approaches by applying a prototype implementation to large ontologies.
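To make the notion of justification concrete, here is a minimal black-box sketch over toy Horn rules (the thesis works with description logic terminologies; the deletion-based minimisation shown is a standard technique, not necessarily the algorithm used there):

```python
def entails(axioms, goal):
    """Naive forward chaining; each axiom is (frozenset_of_premises, conclusion)."""
    facts, changed = set(), True
    while changed:
        changed = False
        for body, head in axioms:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return goal in facts

def justification(axioms, goal):
    """Deletion-based (black-box) minimisation: shrink the axiom set while
    the entailment survives; what remains is one minimal justification."""
    core = list(axioms)
    for ax in list(core):
        rest = [a for a in core if a != ax]
        if entails(rest, goal):
            core = rest          # ax was not needed for this entailment
    return core

# A asserted as a fact, A -> B, B -> C, plus an unrelated A -> D.
axioms = [(frozenset(), "A"), (frozenset({"A"}), "B"),
          (frozenset({"B"}), "C"), (frozenset({"A"}), "D")]
print(justification(axioms, "C"))   # keeps only the three axioms yielding C
```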
APA, Harvard, Vancouver, ISO, and other styles
31

Reul, Quentin H. "Role of description logic reasoning in ontology matching." Thesis, University of Aberdeen, 2012. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=186278.

Full text
Abstract:
Semantic interoperability is essential on the Semantic Web to enable different information systems to exchange data. Ontology matching has been recognised as a means to achieve semantic interoperability on the Web by identifying similar information in heterogeneous ontologies. Existing ontology matching approaches have two major limitations. The first limitation relates to similarity metrics, which provide a pessimistic value when considering complex objects such as strings and conceptual entities. The second limitation relates to the role of description logic reasoning. In particular, most approaches disregard implicit information about entities as a source of background knowledge. In this thesis, we first present a new similarity function, called the degree of commonality coefficient, to compute the overlap between two sets based on the similarity between their elements. The results of our evaluations show that the degree of commonality performs better than traditional set similarity metrics in the ontology matching task. Secondly, we have developed the Knowledge Organisation System Implicit Mapping (KOSIMap) framework, which differs from existing approaches by using description logic reasoning (i) to extract implicit information as background knowledge for every entity, and (ii) to remove inappropriate correspondences from an alignment. The results of our evaluation show that the use of Description Logic in the ontology matching task can increase coverage. We identify people interested in ontology matching and reasoning techniques as the target audience of this work
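As a rough illustration of the intent behind such a set-similarity function (the formula below is a hedged reconstruction for illustration, not the thesis's exact coefficient): each element is credited with its best match in the other set, so near-matches count where plain set intersection would score zero.

```python
def commonality(set_a, set_b, sim):
    """Overlap of two sets where each element contributes its best match in
    the other set. A hedged reconstruction, not the thesis's exact formula."""
    if not set_a or not set_b:
        return 0.0
    best_a = sum(max(sim(a, b) for b in set_b) for a in set_a)
    best_b = sum(max(sim(a, b) for a in set_a) for b in set_b)
    return (best_a + best_b) / (len(set_a) + len(set_b))

# Toy string similarity: exact match scores 1, shared first letter 0.5.
sim = lambda x, y: 1.0 if x == y else (0.5 if x[0] == y[0] else 0.0)
print(commonality({"colour", "cost"}, {"color"}, sim))  # 0.5 vs. Jaccard's 0.0
```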
APA, Harvard, Vancouver, ISO, and other styles
32

Münnich, Stefan. "Ontologien als semantische Zündstufe für die digitale Musikwissenschaft?" De Gruyter, Berlin / Boston, 2018. https://slub.qucosa.de/id/qucosa%3A36849.

Full text
Abstract:
Ontologies play a crucial role for the formalised representation of knowledge and information as well as for the infrastructure of the semantic web. Despite early initiatives that were driven by libraries and memory institutions, German musicology as a whole has turned very slowly to the subject. In an overview the author addresses basic concepts, challenges, and approaches for ontology design and identifies models and use cases with promising applications for a ‚semantic‘ digital musicology.
APA, Harvard, Vancouver, ISO, and other styles
33

Bate, Andrew. "Consequence-based reasoning for SRIQ ontologies." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:6b35e7d0-199c-4db9-ac8a-7f78256e5fb8.

Full text
Abstract:
Description logics (DLs) are knowledge representation formalisms with numerous applications and well-understood model-theoretic semantics and computational properties. SRIQ is a DL that provides the logical underpinning for the semantic web language OWL 2, which is the W3C standard for knowledge representation on the web. A central component of most DL applications is an efficient and scalable reasoner, which provides services such as consistency testing and classification. Despite major advances in DL reasoning algorithms over the last decade, however, ontologies are still encountered in practice that cannot be handled by existing DL reasoners. Consequence-based calculi are a family of reasoning techniques for DLs. Such calculi have proved very effective in practice and enjoy a number of desirable theoretical properties. Up to now, however, they were proposed for either Horn DLs (which do not support disjunctive reasoning), or for DLs without cardinality constraints. In this thesis we present a novel consequence-based algorithm for TBox reasoning in SRIQ - a DL that supports both disjunctions and cardinality constraints. Combining the two features is non-trivial since the intermediate consequences that need to be derived during reasoning cannot be captured using DLs themselves. Furthermore, cardinality constraints require reasoning over equality, which we handle using the framework of ordered paramodulation - a state-of-the-art method for equational theorem proving. We thus obtain a calculus that can handle an expressive DL, while still enjoying all the favourable properties of existing consequence-based algorithms, namely optimal worst-case complexity, one-pass classification, and pay-as-you-go behaviour. To evaluate the practicability of our calculus, we implemented it in Sequoia - a new DL reasoning system. Empirical results show substantial robustness improvements over well-established algorithms and implementations, and performance competitive with closely related work.
APA, Harvard, Vancouver, ISO, and other styles
34

Armas, Romero Ana. "Ontology module extraction and applications to ontology classification." Thesis, University of Oxford, 2015. http://ora.ox.ac.uk/objects/uuid:4ec888f4-b7c0-4080-9d9a-3c46c91f67e3.

Full text
Abstract:
Module extraction is the task of computing a (preferably small) fragment M of an ontology O that preserves a class of entailments over a signature of interest ∑. Existing practical approaches ensure that M preserves all second-order entailments of O over ∑, which is a stronger condition than is required in many applications. In the first part of this thesis, we propose a novel approach to module extraction which, based on a reduction to a datalog reasoning problem, makes it possible to compute modules that are tailored to preserve only specific kinds of entailments. This leads to obtaining modules that are often significantly smaller than those produced by other practical approaches, as shown in an empirical evaluation. In the second part of this thesis, we consider the application of module extraction to the optimisation of ontology classification. Classification is a fundamental reasoning task in ontology design, and there is currently a wide range of reasoners that provide this service. Reasoners aimed at so-called lightweight ontology languages are much more efficient than those aimed at more expressive ones, but they do not offer completeness guarantees for ontologies containing axioms outside the relevant language. We propose an original approach to classification based on exploiting module extraction techniques to divide the workload between a general purpose reasoner and a more efficient reasoner for a lightweight language in such a way that the bulk of the workload is assigned to the latter. We show how the proposed approach can be realised using two particular module extraction techniques, including the one presented in the first part of the thesis. Furthermore, we present the results of an empirical evaluation that shows that this approach can lead to a significant performance improvement in many cases.
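For intuition, a coarse signature-reachability sketch of module extraction (it over-approximates the locality-based modules that existing practical approaches compute, and is not the datalog-based method proposed in the thesis):

```python
def signature_module(axioms, signature):
    """Keep every axiom whose terms meet the signature of interest, letting
    kept axioms extend that signature until a fixpoint is reached.
    axioms: list of (axiom_id, frozenset_of_terms)."""
    sig, module, kept = set(signature), [], set()
    changed = True
    while changed:
        changed = False
        for ax_id, terms in axioms:
            if ax_id not in kept and terms & sig:
                module.append((ax_id, terms))
                kept.add(ax_id)
                sig |= terms          # the axiom may pull new terms into scope
                changed = True
    return module

axioms = [("ax1", frozenset({"Cat", "Animal"})),
          ("ax2", frozenset({"Animal", "LivingThing"})),
          ("ax3", frozenset({"Car", "Vehicle"}))]
print(signature_module(axioms, {"Cat"}))  # ax1, then ax2; ax3 stays out
```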
APA, Harvard, Vancouver, ISO, and other styles
35

Gängler, Thomas. "Semantic Federation of Musical and Music-Related Information for Establishing a Personal Music Knowledge Base." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-72434.

Full text
Abstract:
Music is perceived and described very subjectively by every individual. Nowadays, people often get lost in their steadily growing, multi-placed digital music collections. Existing music player and management applications run into trouble when dealing with the poor metadata that is predominant in personal music collections. Several music information services assist users by providing tools for precisely organising their music collection, or for presenting new insights into their own music library and listening habits. However, music consumers still cannot seamlessly interact with all these auxiliary services directly from the place where they access their music. To profit from the manifold music and music-related knowledge that is, or can be, made available via various information services, this information has to be gathered up, semantically federated, and integrated into a uniform knowledge base that can represent this data to users in a personalised, appropriate visualisation. This personalised semantic aggregation of music metadata from several sources is the gist of this thesis. The outlined solution concentrates in particular on users' needs regarding music collection management, which can vary strongly between individuals. The author's proposal, the personal music knowledge base (PMKB), consists of a client-server architecture with uniform communication endpoints and an ontological knowledge representation model format that is able to represent the versatile information of its use cases. The PMKB concept covers the complete information-flow life cycle, including the processes of user account initialisation, information service choice, individual information extraction, and proactive update notification. The PMKB implementation makes use of Semantic Web technologies. This work explains in particular the knowledge representation part of the PMKB vision. Several new Semantic Web ontologies are defined, or existing ones heavily modified, to meet the requirements of a personalised semantic federation of music and music-related data for managing personal music collections. The outcome is, amongst others:
• a new vocabulary for describing the playback domain,
• another one for representing information service categorisations and quality ratings, and
• one that unites the beneficial parts of the existing advanced user modelling ontologies.
The introduced vocabularies can be readily utilised in conjunction with the existing Music Ontology framework. Some RDFizers that make use of the outlined ontologies in their mapping definitions illustrate the practical fitness of these specifications. A social evaluation method is applied to examine the reuse, application, and feedback of the vocabularies explained in this work. This analysis shows that it is good practice to properly publish Semantic Web ontologies with the help of Linked Data principles and basic SEO techniques, to easily reach the searching audience, to avoid duplicates of such KR specifications and, last but not least, to directly establish a "shared understanding". Thanks to their project independence, the proposed vocabularies can be deployed in any knowledge representation model that needs their capacities. This thesis added its value to making the vision of a personal music knowledge base come true.
APA, Harvard, Vancouver, ISO, and other styles
36

Botha, Antonie Christoffel. "A new framework for a technological perspective of knowledge management." Thesis, Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-06262008-123525/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Hughes, Tracey D. "Visualizing Epistemic Structures of Interrogative Domain Models." Youngstown State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1227294380.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Harkouken, Saiah Kenza. "Etude et définition de mécanismes sémantiques dans les environnements virtuels pour améliorer la crédibilité comportementale des agents : utilisation d'ontologies de services." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066690/document.

Full text
Abstract:
This work is part of the Terra Dynamica project, whose objective was to populate a virtual city with agents that simulate pedestrians and vehicles. The aim of our work is to make the environment understandable by the simulation agents so that they can produce credible behaviors. The first solutions proposed for the semantic modeling of virtual environments still keep a dependency link with the pre-existing graphic representation of the environment. However, the semantic information represented in this kind of approach is difficult for the agents to use when performing complex reasoning procedures outside the navigation algorithms. In this thesis we present a semantic representation model of the environment that provides the agents with data on the use of environmental objects, in order to allow the decision mechanism to produce credible behaviors. Furthermore, in response to the constraints inherent to urban simulation, our approach is capable of handling a large number of agents in real time. Our model is based on the principle that environmental objects offer services for performing actions with different qualities. We therefore represented the semantic information of objects related to their use as services in an ontology of services. We used this ontology to compute a quality of service (QoS) that allows us to rank the different objects that can perform the same action. Thus, we can compare the services offered by different objects in order to provide the agents with the best objects for carrying out their actions and exhibiting behavioral credibility. To assess the impact of our model on the credibility of the produced behaviors, we defined an evaluation protocol for semantic representation models of virtual environments. In this protocol, observers assess the credibility of behaviors produced by the simulator using a semantic model of the environment. Through this evaluation, we show that our model can simulate agents whose behavior is judged credible by human observers. We also present a qualitative assessment of the ability of our model to scale and to meet the constraints of a real-time simulation. This evaluation allowed us to show that the characteristics of our model's architecture let us respond in a reasonable time to requests from a large number of agents.
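A toy rendering of QoS-ranked object selection (the services and scores below are invented for illustration; the thesis derives the QoS from an ontology of services):

```python
# Each object advertises services with a quality score (hypothetical data).
OBJECTS = {
    "bench":     {"sit": 0.9, "sleep": 0.4},
    "chair":     {"sit": 0.8},
    "park_lawn": {"sit": 0.3, "sleep": 0.6},
}

def best_object(action):
    """Rank every object offering a service for `action` by its QoS."""
    candidates = [(qos[action], name)
                  for name, qos in OBJECTS.items() if action in qos]
    return max(candidates)[1] if candidates else None

print(best_object("sit"))    # bench (QoS 0.9)
print(best_object("sleep"))  # park_lawn (QoS 0.6)
```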
APA, Harvard, Vancouver, ISO, and other styles
39

Palazzo, Luiz Antonio Moro. "Representação de conhecimento : programação em lógica e o modelo das hiperredes." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1991. http://hdl.handle.net/10183/24180.

Full text
Abstract:
In spite of its inherent undecidability and the negation problem, extensions of first-order logic have been shown to be able to overcome the question of monotonicity, establishing knowledge representation schemata with virtually universal expressiveness. However, one still has to solve, or at least to reduce, the consequences of the control problem, which constrains the use of logic-based systems to small or medium-sized applications. Investigations in this direction [BOW 85] [MON 88] indicate that the key to overcoming the inferential explosion resides in proper knowledge structuring, in order to allow some control over the possible derivations. The hypernets model [GEO 85] seems to reach such a goal, considering its high structural power and the features it offers for dealing with descriptive, operational, and organizational knowledge. Besides, the simplicity and syntactical uniformity of its primitive notions allow a very clear definition of its semantics, based, for instance, on graphs. This work is an attempt to associate logic programming with the hypernets formalism, in order to obtain a new model preserving the expressiveness of the former and the heuristic and structural power of the latter. First we try to get a clear notion of the nature of knowledge and its main aspects, intending to characterize the knowledge representation problem. Some knowledge representation schemata (production systems, semantic networks, frame systems, logic programming, and the Krypton language) are studied and characterized from the point of view of their expressiveness, heuristic power, and notational convenience. Logic programming is the subject of a deeper study, under the model-theoretic and proof-theoretic approaches. Logic programming systems - in particular the Prolog language and meta-level extensions - are investigated as knowledge representation schemata, considering their syntactic and semantic aspects and their relation to database management systems. The hypernets model is presented, introducing the concepts of hypernode, hyperrelation, and prototype, as well as the particular properties of those entities. The Hyper language, for the handling of hypernets, is formally specified. Prolog is used as a formalism for the representation of knowledge bases structured as hypernets. Under this approach a knowledge base is seen as a (possibly empty) set of structured objects, which are classified as hypernodes, hyperrelations, or prototypes. A mechanism for top-down reasoning on hypernets is proposed, introducing the concepts of aspect and vision, which are taken as first-class objects in the sense that they can be assigned as values to variables. We study the requirements for the construction of a knowledge base management system from the point of view of the user's needs, knowledge engineering support, and implementation issues, so as to actually support the concepts and abstractions (classification, generalization, association, and aggregation) associated with the proposed model. Based on the conclusions of this study, a knowledge base management system (called Rhesus, referring to its experimental objectives) is proposed and specified, intending to confirm the technical viability of developing applications based on logic and hypernets.
APA, Harvard, Vancouver, ISO, and other styles
40

Hughes, Cameron A. "Epistemic Structures of Interrogative Domains." Youngstown State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1227285777.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

García, González Roberto. "A semantic web approach to digital rights management." Doctoral thesis, Universitat Pompeu Fabra, 2006. http://hdl.handle.net/10803/7538.

Full text
Abstract:
One of the main requirements of web digital rights management is a shared language for copyright representation. Current approaches are based on purely syntactic solutions, which are simplistic and difficult to put into practice.

The contribution of this thesis is to apply a semantic approach, based on web ontologies, to digital rights management. It develops a Copyright Ontology whose basic pieces are a creation model, the copyrights, and the actions that can be carried out on the content. This ontology facilitates the development of rights management systems.

The semantic approach has also been applied to the main rights expression languages. They have been integrated with the ontology in order to evaluate it and, at the same time, they have been enriched with its semantic grounding. Finally, all of this has been put into practice in a semantic digital rights management system.
APA, Harvard, Vancouver, ISO, and other styles
42

Lacroix, Timothée. "Décompositions tensorielles pour la complétion de bases de connaissance." Thesis, Paris Est, 2020. http://www.theses.fr/2020PESC1002.

Full text
Abstract:
In this thesis, we focus on the problem of link prediction in binary tensors of order three and four containing positive observations only. Tensors of this type appear in web recommender systems, in bioinformatics for the completion of protein interaction databases, or more generally for the completion of knowledge bases. We benchmark our completion methods on knowledge bases, which represent relational data of varied kinds and scales. Our approach parallels that of matrix completion: we optimize a non-convex, regularized empirical risk objective over low-rank tensors. Our method is empirically validated on several databases, performing better than the state of the art. These performances, however, can only be reached for ranks that would not scale to full modern knowledge bases such as Wikidata. We then focus on the Tucker decomposition, which is more expressive than the canonical decomposition but also harder to optimize. By correcting the adaptive algorithm Adagrad, we obtain a method to efficiently optimize Tucker decompositions with a fixed random core tensor. With these methods, we obtain improved completion performance for a small number of parameters per entity. Finally, we study the case of temporal knowledge bases, in which predicates are only valid over certain time intervals. We propose a low-rank formulation and a regularizer adapted to the temporal structure of the problem, and obtain better performance than the state of the art.
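For concreteness, a minimal numpy sketch of the canonical (CP) scoring scheme underlying this line of work, where a triple (subject, relation, object) is scored by a trilinear product of embeddings (random vectors stand in for learned factors):

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, rank = 5, 2, 4

# Stand-ins for the learned factors of a rank-4 CP decomposition.
E_subj = rng.normal(size=(n_entities, rank))   # subject embeddings
R      = rng.normal(size=(n_relations, rank))  # relation embeddings
E_obj  = rng.normal(size=(n_entities, rank))   # object embeddings

def score(s, r, o):
    """CP score <e_s, w_r, e_o> = sum_k e_s[k] * w_r[k] * e_o[k]."""
    return float(np.sum(E_subj[s] * R[r] * E_obj[o]))

# Rank every candidate object for the query (subject=0, relation=1, ?):
print(sorted(range(n_entities), key=lambda o: -score(0, 1, o)))
```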
APA, Harvard, Vancouver, ISO, and other styles
43

Cori, Marcel. "Modèles pour la représentation et l'interrogation de données textuelles et de connaissances." Paris 7, 1987. http://www.theses.fr/1987PA077047.

Full text
Abstract:
These models combine rule-based knowledge bases with semantic networks. Data are represented by acyclic graphs, ordered or semi-ordered, as well as by graph grammars. Finding the answer to a question reduces to finding morphisms between structures. The representations are built automatically through graph rewriting rules.
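A brute-force sketch of the idea that answering reduces to morphism search (toy graphs and labels, invented for illustration): find assignments of the query's variables to data nodes that preserve every labelled edge.

```python
from itertools import product

# Labelled edges (source, label, target); uppercase names are query variables.
DATA  = {("max", "owns", "car1"), ("car1", "colour", "red"),
         ("max", "likes", "ann")}
QUERY = {("X", "owns", "Y"), ("Y", "colour", "red")}

def morphisms(query, data):
    """Enumerate assignments of query variables to data nodes that map every
    query edge onto a data edge (constants stay fixed; not necessarily injective)."""
    variables = sorted({n for s, _, t in query for n in (s, t) if n[0].isupper()})
    nodes = sorted({n for s, _, t in data for n in (s, t)})
    for image in product(nodes, repeat=len(variables)):
        m = dict(zip(variables, image))
        subst = lambda n: m.get(n, n)
        if all((subst(s), lbl, subst(t)) in data for s, lbl, t in query):
            yield m

print(list(morphisms(QUERY, DATA)))  # [{'X': 'max', 'Y': 'car1'}]
```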
APA, Harvard, Vancouver, ISO, and other styles
44

Baring-Gould, Sengan. "SemNet : the knowledge representation of LOLITA." Thesis, Durham University, 2000. http://etheses.dur.ac.uk/4284/.

Full text
Abstract:
Many systems of Knowledge Representation exist, but none were designed specifically for general purpose large scale natural language processing. This thesis introduces a set of metrics to evaluate the suitability of representations for this purpose, derived from an analysis of the problems such processing introduces. These metrics address three broad categories of question: Is the representation sufficiently expressive to perform its task? What implications has its design on the architecture of the system using it? What inefficiencies are intrinsic to its design? An evaluation of existing Knowledge Representation systems reveals that none of them satisfies the needs of general purpose large scale natural language processing. To remedy this lack, this thesis develops a new representation: SemNet. SemNet benefits not only from the detailed requirements analysis but also from insights gained from its use as the core representation of the large scale general purpose system LOLITA (Large-scale Object-based Linguistic Interactor, Translator, and Analyser). The mapping process between Natural language and representation is presented in detail, showing that the representation achieves its goals in practice.
APA, Harvard, Vancouver, ISO, and other styles
45

Bénard, Jeremy. "Import, export et traduction sémantiques génériques basés sur une ontologie de langages de représentation de connaissances." Thesis, La Réunion, 2017. http://www.theses.fr/2017LARE0021/document.

Full text
Abstract:
Knowledge Representation Languages (KRLs) are languages enabling information to be represented and shared in a logical form. There are many KRLs. Each KRL has one abstract structural model and can have multiple notations. These models and notations were designed to meet different modeling or computational needs, as well as different preferences. Current tools managing or translating knowledge representations (KRs) allow the use of only one or a few KRLs and do not enable - or hardly enable - their end users to adapt the models and notations of these KRLs. This thesis helps to solve these practical problems and the following original research problem: "Can a KR import function and a KR export function be specified in a generic way and, if so, how can their resources be specified?". This thesis is part of a larger project whose overall objective is to facilitate i) the sharing and reuse of knowledge related to software components, and ii) knowledge presentations. The approach followed in this thesis is based on an ontology of KRLs named KRLO, and therefore on a formal representation of these KRLs. KRLO has three important and original features to which this thesis contributed: i) it represents KRL models of different families in a uniform way, ii) it includes an ontology of KRL notations, and iii) it specifies generic functions for KR import and export in various KRLs. This thesis contributed to the improvement of the first version of KRLO (KRLO_2014) and to the creation of its second version. KRLO_2014 contained modeling inaccuracies that made it difficult or inconvenient to use. This thesis also contributed to the specification and operationalization of "Structure_map", a function that makes it possible to write, in a modular and configurable way, any other function that uses a loop. Its use makes it possible to create and organize these functions into an ontology of software components. To implement a generic export function based on KRLO, I developed SRS (Structure_map based Request Solver), a KR retrieval tool enabling the use of KR path expressions. SRS interprets all these functions. SRS thus provides an experimental validation for both the use of this primitive (Structure_map) and the use of KRLO. Directly or indirectly, SRS and KRLO may be used by GTH (Global Technologies Holding), the partner company of this thesis.
APA, Harvard, Vancouver, ISO, and other styles
46

Shahwan, Ahmad. "Processing Geometric Models of Assemblies to Structure and Enrich them with Functional Information." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM023/document.

Full text
Abstract:
The digital mock-up (DMU) of a product has taken a central position in the product development process (PDP). It provides the geometric reference of the product assembly, as it defines the shape of each individual component as well as the way components are put together. However, observations show that this geometric model is no more than a conventional representation of what the real product is. Additionally, because of its pivotal role, the DMU is more and more required to provide information beyond mere geometry, to be used in different stages of the PDP. An increasingly urgent demand is functional information at different levels of the geometric representation of the assembly. This information is shown to be essential in phases such as geometric pre-processing for finite element analysis (FEA) purposes. In this work, an automated method is put forward that enriches the geometric model extracted from a DMU with the functional information needed for FEA preparation. To this end, the initial geometry is restructured at different levels according to functional annotation needs. Prevailing industrial practices and representation conventions are taken into account in order to functionally interpret the purely geometric model that provides the starting point of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
47

Sjö, Kristoffer. "Semantics and Implementation of Knowledge Operators in Approximate Databases." Thesis, Linköping University, Department of Computer and Information Science, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2438.

Full text
Abstract:

In order that epistemic formulas might be coupled with approximate databases, it is necessary to have a well-defined semantics for the knowledge operator and a method of reducing epistemic formulas to approximate formulas. In this thesis, two possible definitions of a semantics for the knowledge operator are proposed for use together with an approximate relational database:

* One based upon logical entailment (the dominating notion of knowledge in the literature); sound and complete rules for reduction to approximate formulas are explored and found not to be applicable to all formulas.

* One based upon algorithmic computability (in order to be practically feasible); the correspondence to the above operator on the one hand, and to the deductive capability of the agent on the other hand, is explored.

Also, an inductively defined semantics for a"know whether"-operator, is proposed and tested. Finally, an algorithm implementing the above is proposed, carried out using Java, and tested.
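As a rough illustration of the ideas above, the following sketch models an approximate relation by rough-set style lower and upper approximations and evaluates a knowledge operator K and a "know whether" operator Kw against it. The three-valued encoding and the example tuples are assumptions chosen for illustration, not the thesis's exact semantics or its Java implementation.

    # Minimal sketch of knowledge operators over an approximate relation,
    # modelled here as lower/upper approximations (rough-set style).
    LOWER = {("tweety",)}                  # tuples certainly in the relation
    UPPER = {("tweety",), ("opus",)}       # tuples possibly in the relation

    def truth(t):
        """Three-valued membership: True, False, or None (unknown)."""
        if t in LOWER:
            return True
        if t not in UPPER:
            return False
        return None                        # boundary region: undecided

    def K(t):
        """Knowledge operator: the agent knows t holds."""
        return truth(t) is True

    def Kw(t):
        """'Know whether': the agent knows t holds or knows it does not,
        i.e. t lies outside the boundary region."""
        return truth(t) is not None

    for t in [("tweety",), ("opus",), ("woodstock",)]:
        print(t[0], "K:", K(t), "Kw:", Kw(t))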

APA, Harvard, Vancouver, ISO, and other styles
48

Bandyopadhyay, Bortik. "Querying Structured Data via Informative Representations." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1595447189545086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Guérin, Clément. "Proposition d'un cadre pour l'analyse automatique, l'interprétation et la recherche interactive d'images de bande dessinée." Thesis, La Rochelle, 2014. http://www.theses.fr/2014LAROS024/document.

Full text
Abstract:
The digital landscape of French and world culture has undergone major upheavals over the past fifteen years, with historic mutations of media from their traditional format to digital format, taking advantage of new means of communication and of the mobile devices that have now become widespread. Alongside cultural forms that have completed, or are about to complete, their transition to digital, comics are still struggling to find their place in the fully dematerialized space. In parallel with the emergence of young authors creating specifically for these new reading media (computers, tablets and smartphones), several actors in the socio-economic world are interested in promoting the existing heritage. Efforts focus as much on adapting works to new reading paradigms as on indexing their content to facilitate information retrieval in databases of digitized albums or in collections of rare works. The problem is twofold. First, it is a matter of being able to identify the structure of a comics page based on primitives extracted by image analysis, validated and corrected through the joint action of two ontologies: the first handling low-level image extractions, the second modeling the classical composition rules of Franco-Belgian comics. Second, the emphasis is placed on the semantic enrichment of the elements identified as individual components of a page, relying on the spatial relations they maintain with each other as well as on their intrinsic physical characteristics. These annotations may concern single elements (the position of a panel in the reading sequence) or links between elements (text spoken by a character).
Since the beginning of the twenty-first century, the cultural industry, both in France and worldwide, has been through a massive, historic mutation. It has had to adapt to the emerging digital technology represented by the Internet and new handheld devices such as smartphones and tablets. Although some industries have successfully transferred part of their activity to the digital market and are close to finding a sound business model, the comic book industry is still looking for the right solution and has not yet produced anything as convincing as the music or movie offers. While many young authors and writers use their creativity to produce works designed specifically for digital media, others focus on the preservation and development of the existing heritage. So far, efforts have concentrated on the transfer from printed to digital support, with special attention given to the specific features of the new media and how they can be used to create new reading conventions. There have also been concerns about content indexing, a hard task given the large amount of material created since the very beginning of comics history. From a scientific point of view, these goals raise several issues. First, one must be able to identify the underlying structure of a comic book page. This is achieved through the extraction of the page's components and their validation and correction, based on the representation and reasoning capacities of two ontologies: the first focuses on the representation of image analysis concepts, and the second represents comic book domain knowledge. Second, special attention is given to the semantic enhancement of the extracted elements, based on their spatial relations to each other and on their own characteristics. These annotations can relate to single elements (e.g. the position of a panel in the reading sequence) or to links between several elements (e.g. the text spoken by a character).
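As an illustration of the kind of spatial reasoning involved, the sketch below orders extracted panel bounding boxes into a Franco-Belgian (left-to-right, top-to-bottom) reading sequence and tests whether a speech balloon is contained in a panel. The row-grouping tolerance and box format are invented simplifications, not the ontology rules developed in the thesis.

    # Illustrative sketch: ordering extracted panels into a reading sequence
    # and attaching a balloon to its enclosing panel via spatial containment.
    def reading_order(panels, row_tol=30):
        """Sort panel boxes (x, y, w, h) top-to-bottom, then left-to-right,
        grouping panels whose tops are within row_tol pixels into one row."""
        rows = []
        for p in sorted(panels, key=lambda b: b[1]):
            if rows and abs(rows[-1][0][1] - p[1]) <= row_tol:
                rows[-1].append(p)
            else:
                rows.append([p])
        ordered = []
        for row in rows:
            ordered.extend(sorted(row, key=lambda b: b[0]))
        return ordered

    def contains(panel, balloon):
        """True when the balloon box lies entirely inside the panel box."""
        px, py, pw, ph = panel
        bx, by, bw, bh = balloon
        return px <= bx and py <= by and bx + bw <= px + pw and by + bh <= py + ph

    panels = [(320, 10, 300, 200), (10, 15, 300, 200), (10, 230, 610, 200)]
    balloon = (40, 40, 120, 60)
    order = reading_order(panels)
    print("reading order:", order)
    print("balloon in first panel:", contains(order[0], balloon))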
APA, Harvard, Vancouver, ISO, and other styles
50

Suarez, John Freddy Garavito. "Ontologias e DSLs na geração de sistemas de apoio à decisão, caso de estudo SustenAgro." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-26072017-113829/.

Full text
Abstract:
Decision Support Systems (DSSs) organize and process data and information to generate results that support decision making in a specific domain. They integrate domain experts' knowledge in each of their components: models, data, mathematical operations (that process the data) and analysis results. In traditional development methodologies, this knowledge must be interpreted and used by software developers to implement the DSSs, because domain experts cannot formalize it in a computable model that can be integrated into the DSSs. In practice, the knowledge modeling process is carried out by the developers, biasing the domain knowledge and hindering the agile development of DSSs (since the experts do not modify the code directly). To solve this problem, a method and web tool are proposed that use ontologies, in the Web Ontology Language (OWL), to represent experts' knowledge, and a Domain Specific Language (DSL) to model the behavior of DSSs. Ontologies in OWL are a computable knowledge representation that makes it possible to define DSSs in a format understandable and accessible to humans and machines. This method was used to create the Decisioner Framework for instantiating DSSs. Decisioner automatically generates DSSs from an ontology and a description in the DSL, including the DSS interface (using a Web Components library). An online ontology editor, which uses a simplified format, allows domain experts to modify aspects of the ontology and immediately see the consequences of their changes in the DSS. The method was validated by instantiating the SustenAgro DSS in the Decisioner Framework. The SustenAgro DSS evaluates the sustainability of sugarcane production systems in the center-south region of Brazil. Evaluations conducted by sustainability experts from Embrapa Meio Ambiente (partners in this project) showed that experts are able to change the ontology and DSL used, without the help of programmers, and that the system produces correct sustainability analyses.
Decision Support Systems (DSSs) organize and process data and information to generate results that support decision making in a specific domain. They integrate knowledge from domain experts in each of their components: models, data, mathematical operations (that process the data) and analysis results. In traditional development methodologies, this knowledge must be interpreted and used by software developers to implement DSSs, because domain experts cannot formalize it in a computable model that can be integrated into DSSs. In practice, the knowledge modeling process is carried out by the developers, biasing the domain knowledge and hindering the agile development of DSSs (as domain experts cannot modify code directly). To solve this problem, a method and web tool are proposed that use ontologies, in the Web Ontology Language (OWL), to represent experts' knowledge, and a Domain Specific Language (DSL) to model DSS behavior. Ontologies in OWL are a computable knowledge representation, which allows DSSs to be defined in a format understandable and accessible to humans and machines. This method was used to create the Decisioner Framework for the instantiation of DSSs. Decisioner automatically generates DSSs from an ontology and a description in its DSL, including the DSS interface (using a Web Components library). An online ontology editor, using a simplified format, allows domain experts to change the ontology and immediately see the consequences of their changes in the DSS. The method was validated through the instantiation of the SustenAgro DSS using the Decisioner Framework. The SustenAgro DSS evaluates the sustainability of sugarcane production systems in the center-south region of Brazil. Evaluations by sustainability experts from Embrapa Environment (partners in this project) showed that domain experts are able to change the ontology and DSL program used, without the help of software developers, and that the system produces correct sustainability analyses.
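The following sketch conveys the division of labor the abstract describes, with a dictionary standing in for the OWL ontology and a small declarative rule standing in for the DSL program. The indicator names, weights, and thresholds are invented for illustration and are not SustenAgro's actual model.

    # Minimal sketch of the ontology-plus-DSL idea: domain experts edit the
    # indicator weights (the "ontology", a plain dict here for brevity),
    # while a tiny declarative rule plays the role of the DSL program.
    ontology = {
        "indicators": {
            "soil_quality":   {"weight": 0.4},
            "water_use":      {"weight": 0.3},
            "energy_balance": {"weight": 0.3},
        }
    }

    # A DSL-like rule: weighted aggregation, then threshold classification.
    dsl_program = {"aggregate": "weighted_sum",
                   "classes": [(0.7, "sustainable"), (0.4, "transitional"),
                               (0.0, "unsustainable")]}

    def evaluate(scores, ontology, program):
        """Run the 'DSL' against the expert-edited indicator weights."""
        total = sum(ontology["indicators"][k]["weight"] * v
                    for k, v in scores.items())
        for threshold, label in program["classes"]:
            if total >= threshold:
                return total, label

    print(evaluate({"soil_quality": 0.8, "water_use": 0.6,
                    "energy_balance": 0.7}, ontology, dsl_program))

Because the weights and thresholds live in data rather than code, an expert can change them and immediately re-run the evaluation, which is the workflow the Decisioner Framework's ontology editor is described as enabling.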
APA, Harvard, Vancouver, ISO, and other styles