A selection of scholarly literature on the topic "Semi-Semantic knowledge base"

Format your source in APA, MLA, Chicago, Harvard, and other styles


Browse the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Semi-Semantic knowledge base".

Next to each work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, where these are available in the metadata.

Journal articles on the topic "Semi-Semantic knowledge base":

1

Willmes, Christian, Finn Viehberg, Sarah Esteban Lopez, and Georg Bareth. "CRC806-KB: A Semantic MediaWiki Based Collaborative Knowledge Base for an Interdisciplinary Research Project." Data 3, no. 4 (October 25, 2018): 44. http://dx.doi.org/10.3390/data3040044.

Full text of the source
Citation styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
In the frame of an interdisciplinary research project concerned with data from heterogeneous domains, such as archaeology, the cultural sciences, and the geosciences, a web-based Knowledge Base system was developed to facilitate and improve research collaboration between the project participants. The presented system is based on a Wiki enhanced with a semantic extension, which makes it possible to store and query structured data within the Wiki. Using an additional open-source tool for Schema-Driven Development of the data model and the structure of the Knowledge Base improved the collaborative data-model development process, as well as the semi-automation of data imports and updates. The paper presents the system architecture, as well as some example applications of a collaborative Wiki-based Knowledge Base infrastructure.
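The semantic-extension idea in this abstract can be illustrated with a toy sketch: wiki pages carry property/value annotations and a query filters pages by them, loosely analogous to Semantic MediaWiki's annotations and #ask queries. All page titles, properties, and values below are invented for illustration.

```python
# Hypothetical sketch of structured data stored on wiki pages and a
# simple query over it (a toy analogue of Semantic MediaWiki's #ask).

class WikiPage:
    def __init__(self, title):
        self.title = title
        self.properties = {}   # property name -> list of values

    def annotate(self, prop, value):
        self.properties.setdefault(prop, []).append(value)

def ask(pages, prop, value):
    """Return titles of pages annotated with prop=value (a toy #ask)."""
    return [p.title for p in pages if value in p.properties.get(prop, [])]

site = WikiPage("Site_A"); site.annotate("Located in", "Spain")
cave = WikiPage("Cave_B"); cave.annotate("Located in", "Germany")
spanish_sites = ask([site, cave], "Located in", "Spain")
```

The real system, of course, persists such annotations in the wiki database and answers queries declaratively; the sketch only shows the data shape involved.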
2

Monteiro, Luciane Lena Pessanha, and Mark Douglas de Azevedo Jacyntho. "Use of Linked Data principles for semantic management of scanned documents." Transinformação 28, no. 2 (August 2016): 241–51. http://dx.doi.org/10.1590/2318-08892016000200010.

Abstract:
The study addresses the use of the Semantic Web and Linked Data principles proposed by the World Wide Web Consortium for the development of a Web application for the semantic management of scanned documents. The main goal is to record scanned documents, describing them in a way that machines can understand and process, filtering content and assisting users in searching for such documents when a decision-making process is under way. To this end, machine-understandable metadata, created through the use of reference Linked Data ontologies, are associated with the documents, creating a knowledge base. To further enrich the process, a (semi-)automatic mashup of these metadata with data from the Web of Linked Data is carried out, considerably increasing the scope of the knowledge base and making it possible to extract new data related to the content of stored documents from the Web and combine them, without the user making any effort or perceiving the complexity of the whole process.
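A minimal sketch of the mashup step described above: document metadata are held as (subject, predicate, object) triples, and the local knowledge base is enriched with external Linked Data triples about resources it already mentions. All identifiers (doc:42, dc:creator, the remote triples) are illustrative, not taken from the paper.

```python
# Hedged sketch: a local triple store for scanned-document metadata is
# enriched with triples fetched from a (pretend) Linked Data endpoint.

local_kb = {
    ("doc:42", "dc:title", "Purchase order 1997"),
    ("doc:42", "dc:creator", "person:jsmith"),
}

# Pretend these came from dereferencing person:jsmith on the Web of Data.
remote = {
    ("person:jsmith", "foaf:name", "J. Smith"),
    ("person:jsmith", "org:memberOf", "org:acme"),
}

def mashup(kb, external):
    """Add external triples whose subject already appears as an object in kb."""
    objects = {o for (_, _, o) in kb}
    return kb | {t for t in external if t[0] in objects}

enriched = mashup(local_kb, remote)
```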
3

Rojas-Garcia, Juan, and Pamela Faber. "Extraction of Terms Related to Named Rivers." Languages 4, no. 3 (June 27, 2019): 46. http://dx.doi.org/10.3390/languages4030046.

Abstract:
EcoLexicon is a terminological knowledge base on environmental science, whose design permits the geographic contextualization of data. For the geographic contextualization of landform concepts, this paper presents a semi-automatic method for extracting terms associated with named rivers (e.g., Mississippi River). Terms were extracted from a specialized corpus, where named rivers were automatically identified. Statistical procedures were applied for selecting both terms and rivers in distributional semantic models to construct the conceptual structures underlying the usage of named rivers. The rivers sharing associated terms were also clustered and represented in the same conceptual network. The results showed that the method successfully described the semantic frames of named rivers with explanatory adequacy, according to the premises of Frame-Based Terminology.
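The distributional step sketched in this abstract, comparing terms by the named rivers they co-occur with, can be illustrated with cosine similarity over co-occurrence vectors. The counts below are invented; the paper derives them from a specialized corpus.

```python
# Toy distributional semantics: terms are vectors of co-occurrence counts
# with named rivers; similar terms share rivers in their contexts.
from math import sqrt

# co-occurrence counts: term -> {river: count} (illustrative numbers)
cooc = {
    "flood":   {"Mississippi": 8, "Nile": 6},
    "delta":   {"Mississippi": 5, "Nile": 7},
    "glacier": {"Rhone": 4},
}

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

sim = cosine(cooc["flood"], cooc["delta"])      # high: rivers shared
dis = cosine(cooc["flood"], cooc["glacier"])    # zero: no river shared
```

Clustering rivers that share associated terms, as the paper does, would then amount to grouping rows of the transposed matrix by the same similarity.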
4

Celik, Duygu, and Atilla Elci. "Semantic composition of business processes using Armstrong's Axioms." Knowledge Engineering Review 29, no. 2 (March 2014): 248–64. http://dx.doi.org/10.1017/s0269888914000083.

Abstract:
Lack of sufficient semantic description in the content of Web services makes it difficult to find and compose suitable Web services during analysis, search, and matching processes. Semantic Web Services are Web services enhanced with formal semantic descriptions that provide well-defined meaning. With this added semantics, user demands can be met through logical deductions that reach resolutions automatically. We have developed an inference-based semantic business process composition agent (SCA). The semantic composition agent system is responsible for the synthesis of new services from existing ones in a semi-automatic fashion. The SCA System composes available Web Ontology Language atomic processes for Web services, utilizing Revised Armstrong's Axioms (RAAs) to infer functional dependencies. RAAs are embedded in the knowledge-base ontologies of the SCA System. Experiments show that the proposed SCA System produces process sequences as a composition plan that satisfies the user's requirements for a complex task. The novelty of the SCA System is that, for the first time, Armstrong's Axioms are revised and used for semantic-based planning and inferencing of Web services.
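The paper revises Armstrong's Axioms for service composition; the classical axioms themselves are well illustrated by the standard attribute-closure algorithm, which decides whether a functional dependency X → Y follows from a given FD set. This is textbook material, not the paper's revised (RAA) variant.

```python
# Attribute closure under a set of functional dependencies. Reflexivity,
# augmentation, and transitivity (Armstrong's Axioms) are implicit in the
# fixpoint iteration. Attribute sets are written as strings of letters.

def closure(attrs, fds):
    """Closure of attribute set attrs under fds (list of (lhs, rhs) pairs)."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

fds = [("A", "B"), ("B", "C")]
implied = "C" in closure({"A"}, fds)   # A -> C follows by transitivity
```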
5

Rangel, Carlos Ramón, Junior Altamiranda, Mariela Cerrada, and Jose Aguilar. "Procedure Based on Semantic Similarity for Merging Ontologies by Non-Redundant Knowledge Enrichment." International Journal of Knowledge Management 14, no. 2 (April 2018): 16–36. http://dx.doi.org/10.4018/ijkm.2018040102.

Abstract:
The merging of two ontologies is mostly approached as the enrichment of one of the input ontologies, i.e., the knowledge of the aligned concepts from one ontology is copied into the other. As a consequence, the resulting new ontology extends the original knowledge of the base ontology, but the unaligned concepts of the other ontology are not considered in the new extended ontology. On the other hand, there are expert-aided semi-automatic approaches to accomplish the task of including the knowledge that is left out of the resulting merged ontology and debugging possible concept redundancy. To address the need to include all the knowledge of the ontologies to be merged without redundancy, this article proposes an automatic approach for merging ontologies based on semantic similarity measures and an exhaustive search among the closest concepts. The authors' approach was compared to other merging algorithms, and good results are obtained in terms of completeness, relationships, and properties, without creating redundancy.
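The non-redundant merge idea can be sketched as follows: concepts from the second ontology are copied into the merged ontology only when no sufficiently similar concept is already present. Word overlap (Jaccard) stands in for the semantic similarity measures the paper actually uses, and the concept names are invented.

```python
# Sketch of similarity-gated merging: keep unaligned concepts, skip
# near-duplicates. Jaccard word overlap is a crude stand-in for a real
# semantic similarity measure.

def jaccard(a, b):
    wa, wb = set(a.lower().split("_")), set(b.lower().split("_"))
    return len(wa & wb) / len(wa | wb)

def merge(base, other, threshold=0.5):
    merged = list(base)
    for concept in other:
        if all(jaccard(concept, c) < threshold for c in merged):
            merged.append(concept)   # unaligned concept: keep it
    return merged

o1 = ["water_body", "river"]
o2 = ["river_bed", "water_body", "glacier"]
result = merge(o1, o2)
```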
6

Zhou, Lu-jie, Zhi-peng Zhao, and Jian-wu Dang. "Combining BERT Model with Semi-Supervised Incremental Learning for Heterogeneous Knowledge Fusion of High-Speed Railway On-Board System." Computational Intelligence and Neuroscience 2022 (May 31, 2022): 1–15. http://dx.doi.org/10.1155/2022/9948218.

Abstract:
On-board system fault knowledge base (KB) is a collection of fault causes, maintenance methods, and interrelationships among on-board modules and components of high-speed railways, which plays a crucial role in knowledge-driven dynamic operation and maintenance (O&M) decisions for on-board systems. To solve the problem of multi-source heterogeneity of on-board system O&M data, an entity matching (EM) approach using the BERT model and semi-supervised incremental learning is proposed. The heterogeneous knowledge fusion task is formulated as a pairwise binary classification task of entities in the knowledge units. Firstly, the deep semantic features of fault knowledge units are obtained by BERT. We also investigate the effectiveness of knowledge unit features extracted from different hidden layers of the model on heterogeneous knowledge fusion during model fine-tuning. To further improve the utilization of unlabeled test samples, a semi-supervised incremental learning strategy based on pseudo labels is devised. By selecting entity pairs with high confidence to generate pseudo labels, the label sample set is expanded to realize incremental learning and enhance the knowledge fusion ability of the model. Furthermore, the model’s robustness is strengthened by embedding-based adversarial training in the fine-tuning stage. Based on the on-board system’s O&M data, this paper constructs the fault KB and compares the model with other solutions developed for related matching tasks, which verifies the effectiveness of this model in the heterogeneous knowledge fusion task of the on-board system.
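The pseudo-labelling strategy described above can be sketched independently of the BERT model: unlabeled entity pairs whose predicted match probability is confidently high or low are promoted to the training set, while ambiguous pairs stay unlabeled. The scores and pair identifiers below are invented; in the paper they come from the fine-tuned model.

```python
# Sketch of confidence-based pseudo-label selection for semi-supervised
# incremental learning on entity-matching pairs.

def select_pseudo_labels(scored_pairs, hi=0.9, lo=0.1):
    """scored_pairs: list of (pair_id, match_probability)."""
    pseudo = []
    for pair_id, p in scored_pairs:
        if p >= hi:
            pseudo.append((pair_id, 1))   # confident match
        elif p <= lo:
            pseudo.append((pair_id, 0))   # confident non-match
    return pseudo                          # ambiguous pairs stay unlabeled

scores = [("e1-e2", 0.97), ("e1-e3", 0.55), ("e2-e4", 0.03)]
new_labels = select_pseudo_labels(scores)
```

In the full loop, `new_labels` would be appended to the labeled set and the classifier retrained, which is the incremental part of the scheme.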
7

Wang, Tiexin, Jingwen Cao, Chuanqi Tao, Zhibin Yang, Yi Wu, and Bohan Li. "A Configurable Semantic-Based Transformation Method towards Conceptual Models." Discrete Dynamics in Nature and Society 2020 (September 27, 2020): 1–14. http://dx.doi.org/10.1155/2020/6718087.

Abstract:
Conceptual models are built to depict and analyze complex systems. They are made of concepts and relationships among these concepts. In a particular domain, conceptual models help different stakeholders reach a clear and unified view of domain problems. However, the process of building conceptual models is time-consuming, tedious, and requires expertise. To improve the efficiency of the building process, this paper proposes a configurable semantic-based (semi-)automatic conceptual model transformation methodology (SbACMT) that tries to reuse existing conceptual models to generate new models. SbACMT contains three parts: (i) a configurable semantic relatedness computing method built on the structured linguistic knowledge base ConceptNet (SRCM-CNet); (ii) a specific meta-model, following the Ecore standard, that defines the rules for applying SRCM-CNet to different conceptual models to automatically detect transformation mappings; and (iii) a multistep matching and transformation process that employs SRCM-CNet. A case study is carried out to detail the working mechanism of SbACMT. Furthermore, through a systematic analysis of this case study, we validate the performance of SbACMT. We show that SbACMT can support the automatic transformation process of conceptual models (e.g., class diagrams). The scalability of SbACMT can be improved by adapting the meta-model and predefined syntax transformation rules.
8

León-Paredes, Gabriel Alejandro, Liliana Ibeth Barbosa-Santillán, Juan Jaime Sánchez-Escobar, and Antonio Pareja-Lora. "Ship-SIBISCaS: A First Step towards the Identification of Potential Maritime Law Infringements by means of LSA-Based Image." Scientific Programming 2019 (March 3, 2019): 1–14. http://dx.doi.org/10.1155/2019/1371328.

Abstract:
Maritime safety and security are being constantly jeopardized. Therefore, identifying maritime flow irregularities (semi-)automatically may be crucial to ensure maritime security in the future. This paper presents a Ship Semantic Information-Based, Image Similarity Calculation System (Ship-SIBISCaS), which constitutes a first step towards the automatic identification of this kind of maritime irregularities. In particular, the main goal of Ship-SIBISCaS is to automatically identify the type of ship depicted in a given image (such as abandoned, cargo, container, hospital, passenger, pirate, submersible, three-decker, or warship) and, thus, classify it accordingly. This classification is achieved in Ship-SIBISCaS by finding out the similarity of the ship image and/or description with other ship images and descriptions included in its knowledge base. This similarity is calculated by means of an LSA algorithm implementation that is run on a parallel architecture consisting of CPUs and GPUs (i.e., a heterogeneous system). This implementation of the LSA algorithm has been trained with a collection of texts, extracted from Wikipedia, that associate some semantic information to ImageNet ship images. Thanks to its parallel architecture, the indexing process of this image retrieval system has been accelerated 10 times (for the 1261 ships included in ImageNet). The range of the precision of the image similarity method is 46% to 92% with 100% recall (that is, a 100% coverage of the database).
9

Laukaitis, Algirdas, and Neda Laukaitytė. "Semi-Automatic Ontological Alignment of Digitized Books Parallel Corpora." Mokslas - Lietuvos ateitis 13 (July 2, 2021): 1–8. http://dx.doi.org/10.3846/mla.2021.15034.

Abstract:
In this paper, we present a method for integrating general ontology management with the alignment of a digitized-books paraphrase corpus compiled from a bilingual parallel corpus. We show that our method can improve ontology development and consistency checking when semantic parsing and machine translation are added to the process of general knowledge management. Additionally, we argue that the focus on one's favorite books adds a gamification factor to the knowledge management process. A new formalism of semantic parsing ontological alignments is introduced, and its use for ontology development and consistency checking is discussed. It is shown that existing general ontologies require many more axioms than are currently available in order to explain the unaligned content of books. A proactive learning approach is suggested as part of the solution to improve the development of ontology predicates and axioms. The WordNet, FrameNet, and SUMO ontologies are used as the starting knowledge base of the paraphrase corpus semantic alignment method.
10

García-Manotas, Ignacio, Eduardo Lupiani, Francisco García-Sánchez, and Rafael Valencia-García. "Populating Knowledge Based Decision Support Systems." International Journal of Decision Support System Technology 2, no. 1 (January 2010): 1–20. http://dx.doi.org/10.4018/jdsst.2010101601.

Abstract:
Knowledge-based decision support systems (KBDSS) support business and organizational decision-making activities on the basis of the knowledge available concerning the domain in question. One of the main problems with knowledge bases is that their construction is a time-consuming task. A number of methodologies have been proposed in the context of the Semantic Web to assist in the development of ontology-based knowledge bases. In this paper, we present a technique for populating knowledge bases from semi-structured text which takes advantage of the semantic underpinnings provided by ontologies. This technique has been tested and evaluated in the financial domain.
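Ontology-guided population from semi-structured text, as described above, can be sketched as mapping key-value fields of a record to ontology properties and admitting only the properties the ontology declares. The ontology, field map, and record below are invented for illustration, not taken from the paper's financial-domain evaluation.

```python
# Sketch: extract (class, property, value) facts from a key-value record,
# keeping only fields the (toy) ontology declares for the class.
import re

ONTOLOGY = {"Company": {"hasTicker", "hasSector"}}   # class -> properties
FIELD_MAP = {"ticker": "hasTicker", "sector": "hasSector", "ceo": "hasCEO"}

def populate(text, cls="Company"):
    facts = []
    for key, value in re.findall(r"(\w+):\s*([^\n]+)", text):
        prop = FIELD_MAP.get(key.lower())
        if prop in ONTOLOGY[cls]:          # drop fields the ontology lacks
            facts.append((cls, prop, value.strip()))
    return facts

record = "ticker: ACME\nsector: Manufacturing\nceo: J. Doe"
facts = populate(record)
```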

Dissertations on the topic "Semi-Semantic knowledge base":

1

Mrabet, Yassine. "Approches hybrides pour la recherche sémantique de l'information : intégration des bases de connaissances et des ressources semi-structurées." PhD thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00737282.

Abstract:
Semantic information retrieval has gained new momentum with Semantic Web technologies. Standard languages now allow software systems to communicate through data expressed in the vocabulary of domain ontologies that describe explicit semantics. This "semantic" access to information requires the availability of knowledge bases describing instances of the domain ontologies. However, these knowledge bases, although increasingly rich, contain relatively little information compared with the volume of information contained in Web documents. Semantic information retrieval thus reaches certain limits compared with classical information retrieval, which exploits these documents more extensively. These limits show up concretely as missing instances of concepts and relations in knowledge bases built from Web documents. In this thesis we study two different research directions for answering semantic queries in such cases. Our first study concerns the reformulation of users' semantic queries in order to reach relevant document fragments in place of the sought facts missing from the knowledge bases. The second problem we study is the enrichment of knowledge bases with relation instances. We propose two solutions to these problems that exploit semi-structured documents annotated with concepts or concept instances. A key point of these solutions is that they can discover instances of semantic relations without relying on lexico-syntactic or structural regularities in the documents. We situate the two approaches in the literature and evaluate them on several real corpora extracted from the Web. The results obtained on corpora of bibliographic citations, calls for papers, and geographic data show that these solutions can indeed retrieve new relation instances from heterogeneous documents while effectively controlling their precision.
2

Ben Marzouka, Wissal. "Traitement possibiliste d'images, application au recalage d'images." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2022. http://www.theses.fr/2022IMTA0271.

Abstract:
In this work, we propose a possibilistic geometric registration system that merges the semantic knowledge and the gray-level knowledge of the images to be registered. Existing geometric registration methods are based on an analysis of sensor-level knowledge during primitive detection as well as during matching. The evaluation of the results of these methods shows limits in precision caused by the large number of outliers. The main idea of our proposed approach is to transform the two images to be registered into a set of projections from the original images (source and target). This set is composed of images called "possibility maps", each of which has a single content and presents a possibilistic distribution of one semantic class of the two original images. The proposed geometric registration system, based on possibility theory, covers two contexts: a supervised context and an unsupervised context. For the first case, we propose a supervised classification method based on possibility theory using learning models. For the unsupervised context, we propose a possibilistic clustering method using the FCM-multicentroid method. The two proposed methods yield the sets of semantic classes of the two images to be registered. We then create the knowledge bases for the proposed possibilistic registration system. We have improved the quality of existing geometric registration in terms of precision, reduction in the number of false landmarks, and optimization of time complexity.

Book chapters on the topic "Semi-Semantic knowledge base":

1

Gorroñogoitia, Jesús, Dragan Radolović, Zoe Vasileiou, Georgios Meditskos, Anastasios Karakostas, Stefanos Vrochidis, and Michail Bachras. "The SODALITE Model-Driven Approach." In Deployment and Operation of Complex Software in Heterogeneous Execution Environments, 23–52. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04961-3_3.

Abstract:
The specification of deployment topologies for complex applications distributed across multiple heterogeneous infrastructures is a difficult process that encompasses multiple modeling tasks, engaging several actors, including application ops experts, resource experts on the specification of the target infrastructure resources, quality experts on application optimization, and application administrators on deployment governance. SODALITE proposes a novel infrastructure-as-code (IaC) modeling framework that provides a model-driven engineering approach for authoring application- and infrastructure-level specifications. This chapter introduces the SODALITE IDE and the IaC services. The IDE enables SODALITE expert roles to model (conforming to the SODALITE DSMLs) and generate IaC artefacts facilitating application deployment. Experts are assisted in the modeling phase by the semantic knowledge inference and validation capabilities of a Knowledge Base (KB), which is populated with IaC descriptions for resources semi-automatically discovered from target heterogeneous infrastructures. The IDE leverages the SODALITE IaC services for automatic target image preparation and IaC artifact generation upon deployment.
2

Reeve, Lawrence, and Hyoil Han. "A Comparison of Semantic Annotation Systems for Text-Based Web Documents." In Web Semantics & Ontology, 165–88. IGI Global, 2006. http://dx.doi.org/10.4018/978-1-59140-905-2.ch006.

Abstract:
The Semantic Web promises new as well as extended applications, such as concept searching, custom Web page generation, and question-answering systems. Semantic annotation is a key component for the realization of the Semantic Web. The volume of existing and new documents on the Web makes manual annotation problematic. Semi-automatic semantic annotation systems, which we call platforms because of their extensibility and composability of services, have been designed to alleviate this burden for text-based Web documents. These semantic annotation platforms provide services supporting annotation, including ontology and knowledge base access and storage, information extraction, programming interfaces, and end-user interfaces. This chapter defines a framework for examining semantic annotation platform differences based on platform characteristics, surveys several recent platform implementations, defines a classification scheme based on information extraction method used, and discusses general platform architecture.
3

Tsou, Ming-Cheng. "Geographic Information Retrieval and Text Mining on Chinese Tourism Web Pages." In Models for Capitalizing on Web Engineering Advancements, 219–39. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0023-2.ch012.

Abstract:
The World Wide Web (WWW) offers an enormous wealth of information and data, and assembles a tremendous amount of knowledge. Much of this knowledge, however, comprises either non-structured data or semi-structured data. To make use of these unexploited or underexploited resources more efficiently, the management of information and data gathering has become an essential task for research and development. In this paper, the author examines the task of researching a hostel or homestay using the Google search web service as a base search engine. From the search results, mining, retrieving and sorting out location and semantic data were carried out by combining the Chinese Word Segmentation System with text mining technology to find geographic information gleaned from web pages. The results obtained from this particular searching method allowed users to get closer to the answers they sought and achieve greater accuracy, as the results included graphics and textual geographic information. In the future, this method may be suitable for and applicable to various types of queries, analyses, geographic data collection, and in managing spatial knowledge related to different keywords within a document.
4

García-Manotas, Ignacio, Eduardo Lupiani, Francisco García-Sánchez, and Rafael Valencia-García. "Populating Knowledge Based Decision Support Systems." In Integrated and Strategic Advancements in Decision Making Support Systems, 1–20. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-1746-9.ch001.

Abstract:
Knowledge-based decision support systems (KBDSS) support business and organizational decision-making activities on the basis of the knowledge available concerning the domain in question. One of the main problems with knowledge bases is that their construction is a time-consuming task. A number of methodologies have been proposed in the context of the Semantic Web to assist in the development of ontology-based knowledge bases. In this paper, we present a technique for populating knowledge bases from semi-structured text which takes advantage of the semantic underpinnings provided by ontologies. This technique has been tested and evaluated in the financial domain.
5

Sanchez-Alonso, Salvador, Miguel-Ángel Sicilia, and Elena Garcia-Barriocanal. "Ontologies and Contracts in the Automation of Learning Object Management Systems." In Web-Based Intelligent E-Learning Systems, 216–34. IGI Global, 2006. http://dx.doi.org/10.4018/978-1-59140-729-4.ch011.

Abstract:
Current standardized e-learning systems are centred on the concept of learning object. Unfortunately, specifications and standards in the field do not provide details about the use of well-known knowledge representations for the sake of automating some processes, like selection and composition of learning objects, or adaptation to the user or platform. Precise usage specifications for ontologies in e-learning would foster automation in learning systems, but this requires concrete, machine-oriented interpretations for metadata elements. This chapter focuses on ontologies as shared knowledge representations that can be used to obtain enhanced learning object metadata records in order to enable automated or semi-automated consistent processes inside Learning Management Systems. In particular, two efforts towards enhancing automation are presented: a contractual approach based on pre- and post-conditions, and the so-called process semantic conformance profiles.
6

Mendes, David, and Irene Pimenta Rodrigues. "A Semantic Web Pragmatic Approach to Develop Clinical Ontologies, and thus Semantic Interoperability, based in HL7 v2.xml Messaging." In Information Systems and Technologies for Enhancing Health and Social Care, 205–14. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-3667-5.ch014.

Abstract:
The ISO/HL7 27931:2009 standard intends to establish a global interoperability framework for healthcare applications. However, being a messaging related protocol, it lacks a semantic foundation for interoperability at a machine treatable level intended through the Semantic Web. There is no alignment between the HL7 V2.xml message payloads and a meaning service like a suitable ontology. Careful application of Semantic Web tools and concepts can ease the path to the fundamental concept of Shared Semantics. In this chapter, the Semantic Web and Artificial Intelligence tools and techniques that allow aligned ontology population are presented and their applicability discussed. The authors present the coverage of HL7 RIM inadequacy for ontology mapping and how to circumvent it, NLP techniques for semi-automated ontology population, and the current trends about knowledge representation and reasoning that concur to the proposed achievement.
7

Ko, Andrea, and Saira Gillani. "Ontology Maintenance Through Semantic Text Mining." In Innovations, Developments, and Applications of Semantic Web and Information Systems, 350–71. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5042-6.ch013.

Abstract:
Manual ontology population and enrichment are complex tasks that require professional experience and considerable effort. The authors' paper deals with the challenges and possible solutions for semi-automatic ontology enrichment and population. ProMine makes two main contributions: a semantic-based text mining approach for automatically identifying domain-specific knowledge elements, and the automatic categorization of these extracted knowledge elements using Wiktionary. The ProMine ontology enrichment solution was applied in the IT audit domain of an e-learning system. After seven cycles of applying ProMine, the number of automatically identified new concepts increased significantly, and ProMine categorized the new concepts with high precision and recall.
8

Fanizzi, Nicola, Claudia d’Amato, and Floriana Esposito. "Evolutionary Conceptual Clustering Based on Induced Pseudo-Metrics." In Advances in Semantic Web and Information Systems, 257–80. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-992-2.ch012.

Abstract:
We present a method based on clustering techniques to detect possible/probable novel concepts or concept drift in a Description Logics knowledge base. The method exploits a semi-distance measure defined for individuals, that is based on a finite number of dimensions corresponding to a committee of discriminating features (concept descriptions). A maximally discriminating group of features is obtained with a randomized optimization method. In the algorithm, the possible clusterings are represented as medoids (w.r.t. the given metric) of variable length. The number of clusters is not required as a parameter, the method is able to find an optimal choice by means of evolutionary operators and a proper fitness function. An experimentation proves the feasibility of our method and its effectiveness in terms of clustering validity indices. With a supervised learning phase, each cluster can be assigned with a refined or newly constructed intensional definition expressed in the adopted language.
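The semi-distance over a committee of discriminating features described above can be sketched concretely: individuals are projected onto a committee of boolean features, and the distance is the fraction of features on which two individuals disagree. The committee and individuals below are toy stand-ins for the Description Logic concept checks used in the chapter.

```python
# Sketch of the feature-committee semi-distance between individuals.

def semi_distance(ind_a, ind_b, committee):
    """committee: list of predicates mapping an individual to bool."""
    disagreements = sum(1 for f in committee if f(ind_a) != f(ind_b))
    return disagreements / len(committee)

committee = [
    lambda x: x["legs"] == 4,
    lambda x: x["aquatic"],
    lambda x: x["size"] > 10,
]
dog   = {"legs": 4, "aquatic": False, "size": 1}
cat   = {"legs": 4, "aquatic": False, "size": 1}
whale = {"legs": 0, "aquatic": True,  "size": 100}

d_same = semi_distance(dog, cat, committee)     # identical projections
d_diff = semi_distance(dog, whale, committee)   # disagree on every feature
```

The chapter's randomized optimization then searches for a committee that maximizes how well this measure discriminates between individuals, which the sketch leaves out.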
9

Xue, Xingsi, and Junfeng Chen. "Semi-Automatic Sensor Ontology Matching Based on Interactive Multi-Objective Evolutionary Algorithm." In Handbook of Research on Advancements of Swarm Intelligence Algorithms for Solving Real-World Problems, 27–42. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-3222-5.ch002.

Abstract:
Since different sensor ontologies are developed independently and for different requirements, a concept in one sensor ontology could be described with different terminologies or in a different context in another sensor ontology, which leads to the ontology heterogeneity problem. To bridge the semantic gap between sensor ontologies, the authors propose a semi-automatic sensor ontology matching technique based on an Interactive MOEA (IMOEA), which can utilize the user's knowledge to direct the MOEA's search. In particular, the authors construct a new multi-objective optimization model for the sensor ontology matching problem and design an IMOEA with a t-dominance rule to solve it. In experiments, the benchmark track and anatomy track from the Ontology Alignment Evaluation Initiative (OAEI) and two pairs of real sensor ontologies are used to test the performance of the authors' proposal. The experimental results show the effectiveness of the approach.
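The multi-objective machinery underneath any MOEA-based matcher rests on Pareto dominance. The sketch below shows the standard dominance test and non-dominated filtering for minimized objectives; the paper's interactive t-dominance rule refines this with user feedback, which is not reproduced here, and the objective values are hypothetical.

```python
# Standard Pareto-dominance check and non-dominated filtering (minimization).
# This is generic MOEA machinery, not the paper's interactive t-dominance rule.

def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (all <=, at least one <)."""
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))

def nondominated(front):
    """Keep only solutions not dominated by any other solution in the front."""
    return [f for f in front if not any(dominates(g, f) for g in front if g != f)]

# Hypothetical (1 - precision, 1 - recall) objectives for candidate alignments.
candidates = [(0.2, 0.5), (0.3, 0.3), (0.5, 0.2), (0.4, 0.6)]
print(nondominated(candidates))  # (0.4, 0.6) is dominated by (0.2, 0.5)
```

An interactive MOEA would periodically show such a front to the user and bias selection toward the region the user prefers.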
10

Serafini, Luciano, Artur d’Avila Garcez, Samy Badreddine, Ivan Donadello, Michael Spranger, and Federico Bianchi. "Chapter 17. Logic Tensor Networks: Theory and Applications." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2021. http://dx.doi.org/10.3233/faia210498.

Abstract:
The recent availability of large-scale data combining multiple data modalities has opened various research and commercial opportunities in Artificial Intelligence (AI). Machine Learning (ML) has achieved important results in this area mostly by adopting a sub-symbolic distributed representation. It is generally accepted now that such purely sub-symbolic approaches can be data inefficient and struggle at extrapolation and reasoning. By contrast, symbolic AI is based on rich, high-level representations ideally based on human-readable symbols. Despite being more explainable and having success at reasoning, symbolic AI usually struggles when faced with incomplete knowledge or inaccurate, large data sets and combinatorial knowledge. Neurosymbolic AI attempts to benefit from the strengths of both approaches combining reasoning with complex representation of knowledge and efficient learning from multiple data modalities. Hence, neurosymbolic AI seeks to ground rich knowledge into efficient sub-symbolic representations and to explain sub-symbolic representations and deep learning by offering high-level symbolic descriptions for such learning systems. Logic Tensor Networks (LTN) are a neurosymbolic AI system for querying, learning and reasoning with rich data and abstract knowledge. LTN introduces Real Logic, a fully differentiable first-order language with concrete semantics such that every symbolic expression has an interpretation that is grounded onto real numbers in the domain. In particular, LTN converts Real Logic formulas into computational graphs that enable gradient-based optimization. This chapter presents the LTN framework and illustrates its use on knowledge completion tasks to ground the relational predicates (symbols) into a concrete interpretation (vectors and tensors). It then investigates the use of LTN on semi-supervised learning, learning of embeddings and reasoning. 
LTN has been applied recently to many important AI tasks, including semantic image interpretation, ontology learning and reasoning, and reinforcement learning, which use LTN for supervised classification, data clustering, semi-supervised learning, embedding learning, reasoning and query answering. The chapter presents some of the main recent applications of LTN before analyzing results in the context of related work and discussing the next steps for neurosymbolic AI and LTN-based AI models.
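The core of Real Logic, as the abstract describes it, is that every formula gets a differentiable truth degree in [0, 1]. The hand-rolled sketch below is not the LTN library's API: it grounds two hypothetical predicates as sigmoids over 1-d features, uses product-based fuzzy connectives, and approximates the universal quantifier with a mean aggregator.

```python
import numpy as np

# A hand-rolled sketch of Real Logic's fuzzy semantics (assumed simplification,
# not the LTN library API): connectives map truth degrees in [0, 1] to [0, 1].

def Not(a):          return 1.0 - a
def And(a, b):       return a * b                 # product t-norm
def Implies(a, b):   return 1.0 - a + a * b       # Reichenbach implication
def Forall(truths):  return float(np.mean(truths))  # mean aggregator

# Hypothetical predicate groundings: sigmoid "neural" predicates on 1-d inputs.
def smoker(x):  return 1.0 / (1.0 + np.exp(-x))
def cough(x):   return 1.0 / (1.0 + np.exp(-(x - 0.5)))

domain = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
# Satisfaction degree of: forall x. Smoker(x) -> Cough(x)
sat = Forall(Implies(smoker(domain), cough(domain)))
print(round(sat, 3))
```

Because every operation here is differentiable, `sat` could serve as a training objective, which is exactly how LTN turns formulas into computational graphs for gradient-based optimization.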

Conference papers on the topic "Semi-Semantic knowledge base":

1

Pasini, Tommaso. "The Knowledge Acquisition Bottleneck Problem in Multilingual Word Sense Disambiguation." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/687.

Abstract:
Word Sense Disambiguation (WSD) is the task of identifying the meaning of a word in a given context. It lies at the base of Natural Language Processing as it provides semantic information for words. In the last decade, great strides have been made in this field, and much effort has been devoted to mitigating the knowledge acquisition bottleneck problem, i.e., the problem of semantically annotating texts at a large scale and in different languages. This issue is ubiquitous in WSD as it hinders the creation of both multilingual knowledge bases and manually curated training sets. In this work, we first introduce the reader to the task of WSD through a short historical digression and then take stock of the advancements made to alleviate the knowledge acquisition bottleneck problem. To that end, we survey the literature on manual, semi-automatic and automatic approaches to creating English and multilingual corpora tagged with sense annotations, and present a clear overview of supervised models for WSD. Finally, we provide our view of the future directions that we foresee for the field.
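To make the WSD task concrete, here is a sketch of simplified Lesk, a classic knowledge-based baseline from the broader WSD literature (it is not a contribution of this survey): pick the sense whose gloss shares the most words with the context. The toy sense inventory stands in for a real repository such as WordNet or BabelNet.

```python
# Simplified Lesk word sense disambiguation over a toy sense inventory.
# The sense keys and glosses below are hypothetical stand-ins for a real
# lexical knowledge base.

SENSES = {
    "bank%1": "financial institution that accepts deposits and makes loans",
    "bank%2": "sloping land beside a body of water such as a river",
}

def simplified_lesk(context, senses):
    """Return the sense whose gloss has the largest word overlap with the context."""
    ctx = set(context.lower().split())
    overlaps = {s: len(ctx & set(gloss.lower().split()))
                for s, gloss in senses.items()}
    return max(overlaps, key=overlaps.get)

print(simplified_lesk("she sat on the bank of the river", SENSES))       # bank%2
print(simplified_lesk("the bank approved the loan deposits", SENSES))    # bank%1
```

The knowledge acquisition bottleneck shows up precisely here: supervised alternatives to Lesk need sense-annotated corpora, which are expensive to produce at scale and across languages.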
2

Wang, Weizhen. "Semi-supervised Semantic Segmentation Network based on Knowledge Distillation." In 2021 IEEE 4th Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC). IEEE, 2021. http://dx.doi.org/10.1109/imcec51613.2021.9482145.

3

"SEMANTIC CLASSIFICATION OF UNKNOWN WORDS BASED ON GRAPH-BASED SEMI-SUPERVISED CLUSTERING." In International Conference on Knowledge Engineering and Ontology Development. SciTePress - Science and and Technology Publications, 2011. http://dx.doi.org/10.5220/0003633100370046.

4

Marquez, Alejandra, and Alex Cuadros. "3D Medical Image Segmentation based on 3D Convolutional Neural Networks." In LatinX in AI at Neural Information Processing Systems Conference 2018. Journal of LatinX in AI Research, 2018. http://dx.doi.org/10.52591/lxai201812031.

Abstract:
A neural network is a mathematical model that is able to perform a task automatically or semi-automatically after learning from the human knowledge we provide. Moreover, a Convolutional Neural Network (CNN) is a type of sophisticated neural network that has been shown to efficiently learn tasks related to the area of image analysis (among other areas). One example of these tasks is image segmentation, which aims to find regions or separable objects within an image. A more specific type, called semantic segmentation, ensures that each region has a semantic meaning by giving it a label or class. Since neural networks can automate the task of semantic segmentation of images, they have been very useful in the medical field, where they are applied to the segmentation of organs or abnormalities (tumors). This thesis project therefore addresses the task of semantic segmentation of volumetric medical images obtained by Magnetic Resonance Imaging (MRI). Volumetric images are composed of a set of 2D images that together represent a volume. We use a pre-existing Three-dimensional Convolutional Neural Network (3D CNN) architecture for the binary semantic segmentation of organs in volumetric images. We discuss the data preprocessing process, as well as specific aspects of the 3D CNN architecture. Finally, we propose a variation in the formulation of the loss function (also called the objective function) used for training the 3D CNN, to improve pixel-wise segmentation results. We present performance comparisons between the proposed loss function and other pre-existing loss functions on two medical image segmentation datasets.
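The abstract does not state the proposed loss variant, so the sketch below shows only the standard soft Dice loss, a common starting point for binary volumetric segmentation that such variations typically build on.

```python
import numpy as np

# Standard soft Dice loss for binary volumetric segmentation (a common baseline;
# the thesis's specific loss variation is not reproduced here).

def soft_dice_loss(pred, target, eps=1e-6):
    """pred: predicted probabilities, target: binary mask, same 3-D shape."""
    pred, target = pred.ravel(), target.ravel()
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

# Toy 4x4x4 volume with a 2x2x2 foreground cube.
target = np.zeros((4, 4, 4))
target[1:3, 1:3, 1:3] = 1.0
perfect = target.copy()
print(soft_dice_loss(perfect, target))        # ~0 for a perfect prediction
print(soft_dice_loss(1.0 - target, target))   # ~1 for the complement
```

Dice-style losses are popular here because medical volumes are heavily class-imbalanced: the organ usually occupies a small fraction of the voxels, which plain cross-entropy handles poorly.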
5

Obradovic, Ines, Mario Milicevic, Boris Vrdoljak, and Krunoslav Zubrinic. "Ontology-based Approaches to Medical Data Integration." In Human Systems Engineering and Design (IHSED 2021) Future Trends and Applications. AHFE International, 2021. http://dx.doi.org/10.54941/ahfe1001111.

Abstract:
Medical data come in a variety of forms and are stored in different types of databases. Some are structured relational stores, while others are semi-structured or unstructured, such as the emerging NoSQL data stores. A lot of valuable data can be found in the form of unstructured text, such as clinical notes and discharge letters. To analyze and discover hidden patterns and extract knowledge from them, the data should be integrated. In the field of medicine, many ontologies have been created to provide a common basis for information exchange and to improve semantic interoperability. In this paper, we provide an overview of ontology-based integration approaches for various sources of medical data. We also identify current challenges and provide directions for future research.
6

Yan, Yuguang, Wen Li, Hanrui Wu, Huaqing Min, Mingkui Tan, and Qingyao Wu. "Semi-Supervised Optimal Transport for Heterogeneous Domain Adaptation." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/412.

Abstract:
Heterogeneous domain adaptation (HDA) aims to exploit knowledge from a heterogeneous source domain to improve learning performance in a target domain. Since the feature spaces of the source and target domains differ, transferring knowledge is extremely difficult. In this paper, we propose a novel semi-supervised algorithm for HDA by exploiting the theory of optimal transport (OT), a powerful tool originally designed for aligning two different distributions. To match the samples between heterogeneous domains, we propose to preserve the semantic consistency between heterogeneous domains by incorporating label information into the entropic Gromov-Wasserstein discrepancy, which is a metric in OT for different metric spaces, resulting in a new semi-supervised scheme. Via the new scheme, the target and transported source samples with the same label are enforced to follow similar distributions. Lastly, based on the Kullback-Leibler metric, we develop an efficient algorithm to optimize the resultant problem. Comprehensive experiments on both synthetic and real-world datasets demonstrate the effectiveness of the proposed method.
7

Chen, Muhao, Yingtao Tian, Kai-Wei Chang, Steven Skiena, and Carlo Zaniolo. "Co-training Embeddings of Knowledge Graphs and Entity Descriptions for Cross-lingual Entity Alignment." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/556.

Abstract:
Multilingual knowledge graph (KG) embeddings provide latent semantic representations of entities and structured knowledge with cross-lingual inferences, which benefit various knowledge-driven cross-lingual NLP tasks. However, precisely learning such cross-lingual inferences is usually hindered by the low coverage of entity alignment in many KGs. Since many multilingual KGs also provide literal descriptions of entities, in this paper, we introduce an embedding-based approach which leverages a weakly aligned multilingual KG for semi-supervised cross-lingual learning using entity descriptions. Our approach performs co-training of two embedding models, i.e. a multilingual KG embedding model and a multilingual literal description embedding model. The models are trained on a large Wikipedia-based trilingual dataset where most entity alignments are unknown during training. Experimental results show that the performance of the proposed approach on the entity alignment task improves at each iteration of co-training, and eventually reaches a stage at which it significantly surpasses previous approaches. We also show that our approach has promising abilities for zero-shot entity alignment, and cross-lingual KG completion.
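The alignment-proposal step that drives such a co-training loop can be sketched in isolation: each view embeds entities, and pairs that one view scores confidently are handed to the other view as extra training seeds. The embeddings and threshold below are hypothetical toy values, not the paper's models.

```python
import numpy as np

# Toy sketch of a co-training alignment-proposal step (not the paper's models):
# propose entity pairs whose cross-view cosine similarity clears a threshold.

def propose_alignments(emb_a, emb_b, threshold=0.9):
    """Return (i, j) index pairs with cosine similarity above the threshold."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sim = a @ b.T
    return [(i, j) for i in range(sim.shape[0])
            for j in range(sim.shape[1]) if sim[i, j] > threshold]

# Hypothetical KG-view and description-view embeddings for three entities.
kg_view = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
desc_view = np.array([[0.9, 0.1], [0.1, 0.9], [0.7, 0.8]])
print(propose_alignments(kg_view, desc_view))
```

In a full co-training loop, each iteration would retrain both embedding models on the union of the seed alignments and the newly proposed pairs, which is why performance can improve iteration by iteration.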
