
Dissertations / Theses on the topic 'Algorithmic knowledge'


Consult the top 50 dissertations / theses for your research on the topic 'Algorithmic knowledge.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Hartland, Joanne. "The machinery of medicine : an analysis of algorithmic approaches to medical knowledge and practice." Thesis, University of Bath, 1993. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357868.

Full text
2

Sjö, Kristoffer. "Semantics and Implementation of Knowledge Operators in Approximate Databases." Thesis, Linköping University, Department of Computer and Information Science, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2438.

Full text
Abstract:

In order that epistemic formulas might be coupled with approximate databases, it is necessary to have a well-defined semantics for the knowledge operator and a method of reducing epistemic formulas to approximate formulas. In this thesis, two possible definitions of a semantics for the knowledge operator are proposed for use together with an approximate relational database:

* One based upon logical entailment (the dominant notion of knowledge in the literature); sound and complete rules for reduction to approximate formulas are explored and found not to be applicable to all formulas.

* One based upon algorithmic computability (in order to be practically feasible); the correspondence to the above operator on the one hand, and to the deductive capability of the agent on the other hand, is explored.

Also, an inductively defined semantics for a "know whether" operator is proposed and tested. Finally, an algorithm implementing the above is proposed, implemented in Java, and tested.
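As a rough illustration of the first (entailment-style) operator, here is a minimal sketch assuming rough-set-style lower and upper approximations of a relation; the class, names and semantics are illustrative assumptions, not the thesis's actual definitions.

```python
# Hypothetical sketch: "know" and "know whether" operators over an
# approximate relation stored as lower/upper approximation sets.

class ApproximateRelation:
    def __init__(self, lower, upper):
        # lower approximation: tuples certainly in the relation
        # upper approximation: tuples possibly in the relation
        assert lower <= upper
        self.lower, self.upper = lower, upper

    def knows(self, t):
        """K(R(t)): the agent knows R(t) iff t is certain."""
        return t in self.lower

    def knows_whether(self, t):
        """Knows whether R(t) iff t is certain or certainly excluded."""
        return t in self.lower or t not in self.upper

r = ApproximateRelation(lower={("alice",)}, upper={("alice",), ("bob",)})
print(r.knows(("bob",)))            # False: only possible, not known
print(r.knows_whether(("carol",)))  # True: certainly not in the relation
```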

3

Hawasly, Majd. "Policy space abstraction for a lifelong learning agent." Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/9931.

Full text
Abstract:
This thesis is concerned with policy space abstractions that concisely encode alternative ways of making decisions, and with the discovery, learning, adaptation and use of these abstractions. This work is motivated by the problem faced by autonomous agents that operate within a domain for long periods of time, and hence have to learn to solve many different task instances that share some structural attributes. An example of such a domain is an autonomous robot in a dynamic domestic environment. Such environments raise the need for transfer of knowledge, so as to eliminate the need for long learning trials after deployment. Typically, these tasks would be modelled as sequential decision making problems, including path optimisation for navigation tasks, or Markov Decision Process models for more general tasks. Learning within such models often takes the form of online learning or reinforcement learning. However, handling issues such as knowledge transfer and multiple task instances requires notions of structure and hierarchy, and that raises several questions that form the topic of this thesis: (a) can an agent acquire such hierarchies in policies in an online, incremental manner; (b) can we devise mathematically rigorous ways to abstract policies based on qualitative attributes; (c) when it is inconvenient to employ prolonged trial and error learning, can we devise alternate algorithmic methods for decision making in a lifelong setting?

The first contribution of this thesis is an algorithmic method for incrementally acquiring hierarchical policies. Working within the framework of options (temporally extended actions) in reinforcement learning, we present a method for discovering persistent subtasks that define useful options for a particular domain. Our algorithm builds on a probabilistic mixture model in state space to define a generalised and persistent form of ‘bottlenecks’, and suggests suitable policy fragments to make options. In order to continuously update this hierarchy, we devise an incremental process which runs in the background and takes care of proposing and forgetting options. We evaluate this framework in simulated worlds, including the RoboCup 2D simulation league domain.

The second contribution of this thesis is in defining abstractions in terms of equivalence classes of trajectories. Utilising recently developed techniques from computational topology, in particular the concept of persistent homology, we show that a library of feasible trajectories can be retracted to representative paths that may be sufficient for reasoning about plans at the abstract level. We present a complete framework, starting from a novel construction of a simplicial complex that describes higher-order connectivity properties of a spatial domain, to methods for computing the homology of this complex at varying resolutions. The resulting abstractions are motion primitives that may be used as topological options, contributing a novel criterion for option discovery. This is validated by experiments in simulated 2D robot navigation, and in manipulation using a physical robot platform.

Finally, we develop techniques for solving a family of related, but different, problem instances through policy reuse of a finite policy library acquired over the agent's lifetime. This represents an alternative approach when traditional methods such as hierarchical reinforcement learning are not computationally feasible. We abstract the policy space using a non-parametric model of the performance of policies in multiple task instances, so that decision making is posed as a Bayesian choice regarding what to reuse. This is one approach to transfer learning that is motivated by the needs of practical long-lived systems. We show the merits of such Bayesian policy reuse in simulated real-time interactive systems, including online personalisation and surveillance.
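The third contribution lends itself to a compact illustration. Below is a minimal sketch of the Bayesian policy reuse idea under assumed specifics: a fixed performance model perf[i, j] for policy i on task type j, a Gaussian observation model, and a greedy selection rule; none of these details are taken from the thesis itself.

```python
import numpy as np

rng = np.random.default_rng(0)
perf = np.array([[0.9, 0.2],       # policy 0 on task types A, B
                 [0.3, 0.8]])      # policy 1 on task types A, B
sigma = 0.1                        # assumed observation noise
belief = np.array([0.5, 0.5])      # prior over task types

def select_policy(belief):
    # greedy choice: maximise expected return under the current belief
    return int(np.argmax(perf @ belief))

def update_belief(belief, policy, observed_return):
    # Bayes rule with a Gaussian likelihood around the modelled return
    lik = np.exp(-((observed_return - perf[policy]) ** 2) / (2 * sigma**2))
    post = belief * lik
    return post / post.sum()

true_task = 1                      # unknown to the agent
for episode in range(5):
    pi = select_policy(belief)
    ret = perf[pi, true_task] + rng.normal(0, sigma)
    belief = update_belief(belief, pi, ret)
    print(episode, pi, belief.round(3))
```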
4

Chen, Hsinchun, and Tobun Dorbin Ng. "An Algorithmic Approach to Concept Exploration in a Large Knowledge Network (Automatic Thesaurus Consultation): Symbolic Branch-and-Bound Search vs. Connectionist Hopfield Net Activation." Wiley Periodicals, Inc, 1995. http://hdl.handle.net/10150/105241.

Full text
Abstract:
Artificial Intelligence Lab, Department of MIS, University of Arizona
This paper presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge-based systems and to alleviate the limitations of the manual browsing approach, we have developed two spreading activation-based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g., multiple thesauri). One algorithm, which is based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The second algorithm, which is based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process). Both algorithms can be adopted for automatic, multiple-thesauri consultation. We tested these two algorithms on a large text-based knowledge network of about 13,000 nodes (terms) and 80,000 directed links in the area of computing technologies. This knowledge network was created from two external thesauri and one automatically generated thesaurus. We conducted experiments to compare the behaviors and performances of the two algorithms with the hypertext-like browsing process. Our experiment revealed that manual browsing achieved higher term recall but lower term precision in comparison to the algorithmic systems. However, it was also a much more laborious and cognitively demanding process. In document retrieval, there were no statistically significant differences in document recall and precision between the algorithms and the manual browsing process. In light of the effort required by the manual browsing process, our proposed algorithmic approach presents a viable option for efficiently traversing large-scale, multiple thesauri (knowledge networks).
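As a flavour of the second (connectionist) algorithm, here is a toy sketch of Hopfield-style spreading activation over a small term network; the terms, weights and stopping rule are invented for illustration and are not the paper's actual data.

```python
import numpy as np

terms = ["data mining", "knowledge discovery", "neural network",
         "hopfield net", "thesaurus"]
W = np.array([[0.0, 0.9, 0.3, 0.1, 0.2],   # symmetric association weights
              [0.9, 0.0, 0.2, 0.1, 0.4],
              [0.3, 0.2, 0.0, 0.8, 0.0],
              [0.1, 0.1, 0.8, 0.0, 0.0],
              [0.2, 0.4, 0.0, 0.0, 0.0]])

def spread(query_idx, iters=50, eps=1e-4):
    a = np.zeros(len(terms))
    a[query_idx] = 1.0                      # clamp the query terms
    for _ in range(iters):
        nxt = 1 / (1 + np.exp(-(W @ a)))    # sigmoid of the net input
        nxt[query_idx] = 1.0                # keep query terms active
        done = np.abs(nxt - a).max() < eps  # convergence test
        a = nxt
        if done:
            break
    return sorted(zip(terms, a.round(3)), key=lambda x: -x[1])

print(spread([0]))  # concepts that "converge" for the query "data mining"
```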
5

Goyder, Matthew. "Knowledge Accelerated Algorithms and the Knowledge Cache." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339763385.

Full text
6

Harispe, Sébastien. "Knowledge-based Semantic Measures : From Theory to Applications." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20038/document.

Full text
Abstract:
The notions of semantic proximity, distance, and similarity have long been considered essential for the elaboration of numerous cognitive processes, and are therefore of major importance for the communities involved in the development of artificial intelligence. This thesis studies the diversity of semantic measures which can be used to compare lexical entities, concepts and instances by analysing corpora of texts and knowledge representations (e.g., ontologies). Strengthened by the development of Knowledge Engineering and Semantic Web technologies, these measures are arousing increasing interest in both academic and industrial fields. This manuscript begins with an extensive state-of-the-art review which presents numerous contributions proposed by several communities, and underlines the diversity and interdisciplinary nature of this domain. Thanks to this work, despite the apparent heterogeneity of semantic measures, we were able to distinguish common properties and therefore propose a general classification of existing approaches. Our work goes on to look more specifically at measures which take advantage of knowledge representations expressed by means of semantic graphs, e.g. RDF(S) graphs. We show that these measures rely on a reduced set of abstract primitives and that, even if they have generally been defined independently in the literature, most of them are only specific expressions of generic parametrised measures. This result leads us to the definition of a unifying theoretical framework for semantic measures, which can be used to: (i) design new measures, (ii) study theoretical properties of measures, (iii) guide end-users in the selection of measures adapted to their usage context. The relevance of this framework is demonstrated in its first practical applications which show, for instance, how it can be used to perform theoretical and empirical analyses of measures with a previously unattained level of detail. Interestingly, this framework provides a new insight into semantic measures and opens interesting perspectives for their analysis. Having uncovered a flagrant lack of generic and efficient software solutions dedicated to (knowledge-based) semantic measures, a lack which clearly hampers both the use and analysis of semantic measures, we consequently developed the Semantic Measures Library (SML): a generic software library dedicated to the computation and analysis of semantic measures. The SML can be used to take advantage of hundreds of measures defined in the literature or those derived from the parametrised functions introduced by the proposed unifying framework. These measures can be analysed and compared using the functionalities provided by the library. The SML is accompanied by extensive documentation, community support and software solutions which enable non-developers to take full advantage of the library.
In broader terms, this project proposes to federate the various communities involved in this domain in order to create an interdisciplinary synergy around the notion of semantic measures: http://www.semantic-measures-library.org. This thesis also presents several algorithmic and theoretical contributions related to semantic measures: (i) an innovative method for the comparison of instances defined in a semantic graph, whose benefits we underline in particular for the definition of content-based recommendation systems, (ii) a new approach to compare concepts defined in overlapping taxonomies, (iii) algorithmic optimisations for the computation of a specific type of semantic measure, and (iv) a semi-supervised learning technique which can be used to identify semantic measures adapted to a specific usage context, while simultaneously taking into account the uncertainty associated with the benchmark in use. These contributions have been validated by several international and national publications.
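For readers unfamiliar with knowledge-based semantic measures, a classic instance of the family the SML covers is Lin's information-content similarity; the sketch below computes it over a toy taxonomy with made-up frequencies (it is not SML code).

```python
import math

# Toy taxonomy and corpus counts (assumed values for illustration).
parent = {"cat": "mammal", "dog": "mammal", "mammal": "animal",
          "animal": None}
freq = {"cat": 10, "dog": 12, "mammal": 30, "animal": 60}
total = freq["animal"]

def ancestors(c):
    out = []
    while c is not None:
        out.append(c)
        c = parent[c]
    return out

def ic(c):
    # information content: -log p(c), rarer concepts are more informative
    return -math.log(freq[c] / total)

def lin(c1, c2):
    common = [a for a in ancestors(c1) if a in set(ancestors(c2))]
    lcs = max(common, key=ic)  # most informative common ancestor
    return 2 * ic(lcs) / (ic(c1) + ic(c2))

print(round(lin("cat", "dog"), 3))  # ~0.408 with these toy counts
```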
7

何淑瑩 and Shuk-ying Ho. "Knowledge representation with genetic algorithms." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31222638.

Full text
8

Ho, Shuk-ying. "Knowledge representation with genetic algorithms /." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22030256.

Full text
9

Correa, Leonardo de Lima. "Uma proposta de algoritmo memético baseado em conhecimento para o problema de predição de estruturas 3-D de proteínas." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/156640.

Full text
Abstract:
Memetic algorithms are evolutionary metaheuristics intrinsically concerned with the exploitation and incorporation of all available knowledge about the problem under study. In this dissertation, we present a knowledge-based memetic algorithm to tackle the three-dimensional protein structure prediction problem without the explicit use of template experimentally determined structures. The algorithm was divided into two main steps of processing: (i) sampling and initialization of the algorithm solutions; and (ii) optimization of the structural models from the previous stage. The first step aims to generate and classify several structural models for a given target protein, using the Angle Probability List strategy, in order to define different structural groups and to create better structures to serve as the initial individuals of the memetic algorithm. The Angle Probability List takes advantage of structural knowledge stored in the Protein Data Bank in order to reduce the complexity of the conformational search space. The second step of the method consists in the optimization of the structures generated in the first stage, through the application of the proposed memetic algorithm, which uses a tree-structured population, where each node can be seen as an independent subpopulation that interacts with others, via global search operations, aiming at information sharing, population diversity, and better exploration of the multimodal search space of the problem. The method also encompasses ad-hoc global search operators, whose objective is to increase the exploration capacity of the method with respect to the characteristics of the protein structure prediction problem, combined with the Artificial Bee Colony algorithm, which is used as a local search technique applied to each node of the tree. The proposed algorithm was tested on a set of 24 amino acid sequences, as well as compared with two reference methods in the protein structure prediction area, Rosetta and QUARK. The results show the ability of the method to predict three-dimensional protein structures with folds similar to those of experimentally determined protein structures, in terms of the structural metrics Root-Mean-Square Deviation and Global Distance Total Score Test. We also show that our method was able to reach results comparable to Rosetta and QUARK, and in some cases it outperformed them, corroborating the effectiveness of our proposal.
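To make the tree-structured population concrete, here is a schematic toy in which each node of a small binary tree holds a subpopulation, a stand-in local search plays the role of the Artificial Bee Colony step, and nodes periodically pass their best individual to their parent; the fitness function is invented and has nothing to do with protein energetics.

```python
import random

random.seed(3)

def fitness(x):
    return -sum((xi - 0.5) ** 2 for xi in x)   # toy landscape

def local_search(ind):
    # stand-in for the per-node Artificial Bee Colony step
    cand = [min(1, max(0, xi + random.gauss(0, 0.05))) for xi in ind]
    return cand if fitness(cand) > fitness(ind) else ind

# 7-node binary tree of subpopulations, parents found by index arithmetic
nodes = {i: [[random.random() for _ in range(4)] for _ in range(5)]
         for i in range(7)}
parent = {i: (i - 1) // 2 for i in range(1, 7)}

for generation in range(30):
    for i, pop in nodes.items():
        nodes[i] = [local_search(ind) for ind in pop]
    for i in range(1, 7):                      # share best with parent node
        best = max(nodes[i], key=fitness)
        worst = min(range(5), key=lambda k: fitness(nodes[parent[i]][k]))
        nodes[parent[i]][worst] = best[:]

print(round(fitness(max(nodes[0], key=fitness)), 4))
```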
10

Johnson, Maury E. "Planning Genetic Algorithm: Pursuing Meta-knowledge." NSUWorks, 1999. http://nsuworks.nova.edu/gscis_etd/611.

Full text
Abstract:
This study focuses on improving business planning by proposing a series of artificial intelligence techniques to facilitate the integration of decision support systems and expert system paradigms. The continued evolution of the national information infrastructure, open systems interconnectivity, and electronic data interchange makes the inclusion of a back-end genetic algorithm approach increasingly plausible. By using a back-end genetic algorithm, meta-planning knowledge could be collected, extended to external data sources, and utilized to improve business decision making.
11

López, Vallverdú Joan Albert. "Knowledge-based incremental induction of clinical algorithms." Doctoral thesis, Universitat Rovira i Virgili, 2012. http://hdl.handle.net/10803/97210.

Full text
Abstract:
The current approaches for the induction of medical procedural knowledge suffer from several drawbacks: the structures produced may not be explicit medical structures; they are based only on statistical measures that do not necessarily respect the medical criteria which can be essential to guarantee medically correct structures; and they are not prepared to deal with the incremental arrival of new data. In this thesis we propose a methodology to automatically induce medically correct clinical algorithms (CAs) from hospital databases. These CAs are represented according to the SDA knowledge model. The methodology considers relevant background knowledge and it is able to work in an incremental way. The methodology has been tested in the domains of hypertension, diabetes mellitus and the comorbidity of both diseases. As a result, we propose a repository of background knowledge for these pathologies and provide the SDA diagrams obtained. Later analyses show that the results are medically correct and comprehensible when validated with health care professionals.
12

McCallum, Thomas Edward Reid. "Understanding how knowledge is exploited in Ant algorithms." Thesis, University of Edinburgh, 2005. http://hdl.handle.net/1842/880.

Full text
Abstract:
Ant algorithms were first written about in 1991 and since then they have been applied to many problems with great success. During these years the algorithms themselves have been modified for improved performance and also been influenced by research in other fields. Since the earliest Ant algorithms, heuristics and local search have been the primary knowledge sources. This thesis asks the question "how is knowledge used in Ant algorithms?" To answer this question three Ant algorithms are implemented. The first is the Graph based Ant System (GBAS), a theoretical model not previously implemented, and the others are two influential algorithms, the Ant System and Max-Min Ant System. A comparison is undertaken to show that the theoretical model empirically models what happens in the other two algorithms. In doing so, the thesis explores whether different pheromone matrices (representing the internal knowledge) have a significant effect on the behaviour of the algorithm. It is shown that only under extreme parameter settings does the behaviour of Ant System and Max-Min Ant System differ from that of GBAS. The thesis continues by investigating how inaccurate knowledge is used when it is the heuristic that is at fault. This study reveals that Ant algorithms are not good at dealing with this information, and if they do use a heuristic they must rely on it providing valid guidance. An additional benefit of this study is that it shows heuristics may offer more control over the exploration-exploitation trade-off than is afforded by other parameters. The second point where knowledge enters the algorithm is through the local search. The thesis looks at what happens to the performance of the Ant algorithms when a local search is used and how this affects the parameters of the algorithm. It is shown that the addition of a local search method does change the behaviour of the algorithm and that the strength of the method has a strong influence on how the parameters are chosen. The final study focuses on whether Ant algorithms are effective for driving a local search method. The thesis demonstrates that these algorithms are not as effective as some simpler fixed and variable neighbourhood search methods.
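The two knowledge entry points discussed (heuristic and pheromone) are easy to see in code. Below is a minimal Ant System sketch for a tiny four-city TSP with invented distances: the heuristic eta = 1/d enters the transition rule, and the pheromone matrix tau carries learned knowledge through evaporation and deposit.

```python
import random

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
n, alpha, beta, rho, Q = 4, 1.0, 2.0, 0.5, 1.0
tau = [[1.0] * n for _ in range(n)]            # uniform initial pheromone

def tour_length(t):
    return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

def build_tour():
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        i = tour[-1]
        # transition weights: pheromone^alpha * heuristic^beta
        w = [(j, (tau[i][j] ** alpha) * ((1 / dist[i][j]) ** beta))
             for j in unvisited]
        total = sum(p for _, p in w)
        r, acc = random.uniform(0, total), 0.0
        for j, p in w:                         # roulette-wheel selection
            acc += p
            if acc >= r:
                tour.append(j); unvisited.remove(j); break
    return tour

best = None
for it in range(50):
    tours = [build_tour() for _ in range(8)]
    for i in range(n):                         # evaporation
        for j in range(n):
            tau[i][j] *= (1 - rho)
    for t in tours:                            # deposit proportional to quality
        for k in range(n):
            a, b = t[k], t[(k + 1) % n]
            tau[a][b] += Q / tour_length(t)
            tau[b][a] += Q / tour_length(t)
    cand = min(tours, key=tour_length)
    if best is None or tour_length(cand) < tour_length(best):
        best = cand
print(best, tour_length(best))
```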
13

Tomczak, Jakub. "Algorithms for knowledge discovery using relation identification methods." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2563.

Full text
Abstract:
In this work, a coherent survey of problems connected with relational knowledge representation, and of methods for achieving it, is presented. The proposed approach is demonstrated on three applications: an economic case, a biomedical case and a benchmark dataset. All crucial definitions are formulated and three main methods for the relation identification problem are described. Moreover, different identification methods are presented for specific relational models and observation types.
Double Diploma Programme; Polish supervisor: Prof. Jerzy Świątek, Wrocław University of Technology.
14

Mallen, Jason. "Utilising incomplete domain knowledge in an information theoretic guided inductive knowledge discovery algorithm." Thesis, University of Portsmouth, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295773.

Full text
15

Ghai, Vishal V. "Knowledge Based Approach Using Neural Networks for Predicting Corrosion Rate." Ohio University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1132954243.

Full text
16

Tosatto, Silvio Carlo Ermanno. "Protein structure prediction improving and automating knowledge-based approaches /." [S.l. : s.n.], 2002. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10605023.

Full text
17

Bauer, Sebastian [Verfasser]. "Algorithms for knowledge integration in biomedical sciences / Sebastian Bauer." Berlin : Freie Universität Berlin, 2012. http://d-nb.info/1029850844/34.

Full text
18

MARTINS, ISNARD THOMAS. "KNOWLEDGE DISCOVERY IN POLICE CRIMINAL RECORDS: ALGORITHMS AND SYSTEMS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=14011@1.

Full text
Abstract:
This dissertation proposes a methodology to extract knowledge from databases of police criminal records. The scope of the proposed methodology comprises the full cycle of treatment of the criminal records, from the extraction of word radicals, including the construction of specialized dictionaries to support entity extraction, up to the development of criminal scenarios shaped into a relationship matrix. The scenarios are converted into intelligence maps for the analysis of criminal connections and the discovery of knowledge aimed at investigating and clarifying crimes. The intelligence maps extracted are represented by link networks which are subsequently treated as capacitated graphs. Analyses of the connections extracted are carried out using the shortest path method in graphs, self-organizing neural maps, and indicators of social relationships. The method proposed in this study helps reveal evidence that was concealed by the complexity of textual information, and discover knowledge based on criminal connections by applying hybrid algorithms. The proposed methodology was tested using databases of criminal police records related to drug traffic organizations and crimes that caused major social disturbances in Rio de Janeiro, Brazil, from 1999 to 2003.
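As a small illustration of the graph side of this pipeline, the sketch below finds the shortest chain of associations between two actors in a toy link network using breadth-first search; the actors and edges are invented.

```python
from collections import deque

links = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
         "D": ["B", "C", "E"], "E": ["D"]}

def shortest_chain(src, dst):
    prev, frontier = {src: None}, deque([src])
    while frontier:
        u = frontier.popleft()
        if u == dst:                 # reconstruct the chain backwards
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in links.get(u, []):
            if v not in prev:
                prev[v] = u
                frontier.append(v)
    return None

print(shortest_chain("A", "E"))      # e.g. ['A', 'B', 'D', 'E']
```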
19

Lisena, Pasquale. "Knowledge-based music recommendation : models, algorithms and exploratory search." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS614.

Full text
Abstract:
Representing information about music is a complex activity that involves different sub-tasks. This thesis manuscript mostly focuses on classical music, researching how to represent and exploit its information. The main goal is the investigation of strategies of knowledge representation and discovery applied to classical music, involving subjects such as Knowledge-Base population, metadata prediction, and recommender systems. We propose a complete workflow for the management of music metadata using Semantic Web technologies. We introduce a specialised ontology and a set of controlled vocabularies for the different concepts specific to music. Then, we present an approach for converting data, in order to go beyond the librarian practice currently in use, relying on mapping rules and interlinking with controlled vocabularies. Finally, we show how these data can be exploited. In particular, we study approaches based on embeddings computed on structured metadata, titles, and symbolic music for ranking and recommending music. Several demo applications have been realised to test the preceding approaches and resources.
20

Doan, William. "Temporal Closeness in Knowledge Mobilization Networks." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34756.

Full text
Abstract:
In this thesis we study the impact of time in the analysis of social networks. To do this, we represent a knowledge mobilization network, Knowledge-Net, both as a standard static graph and as a time-varying graph, and study the differences between the two. For our study, we implemented some temporal metrics and added them to Gephi, an open source software package for graph and network analysis which already contains some static metrics. We then used that software to obtain our results. Knowledge-Net is a network built using the knowledge mobilization concept. In social science, knowledge mobilization is defined as the use of knowledge towards the achievement of goals. Networks built using the knowledge mobilization concept make more visible the relations among heterogeneous human and non-human individuals, organizational actors and non-human mobilization actors. A time-varying graph is a graph whose nodes and edges appear and disappear over time. A journey in a time-varying graph is the equivalent of a path in a static graph. The notion of shortest path in a static graph has three variations in a time-varying graph: the shortest journey is the journey with the least number of temporal hops, the fastest journey is the journey that takes the least amount of time, and the foremost journey is the journey that arrives the soonest. Of those three, we focus on the foremost journey for our analysis.
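A foremost journey admits a compact Dijkstra-like computation. The sketch below assumes a simplified model in which each edge (u, v) is usable only at a single time step, traversal is instantaneous, and waiting at a node is allowed; the toy edge list is invented.

```python
import heapq

# edges[u] = list of (v, t): the link u -> v exists only at time t
edges = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": [("d", 5)]}

def foremost(source, t0=0):
    arrival = {source: t0}            # earliest known arrival times
    heap = [(t0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > arrival.get(u, float("inf")):
            continue                  # stale heap entry
        for v, t_edge in edges.get(u, []):
            # the edge can be taken if we are at u no later than t_edge
            if t_edge >= t and t_edge < arrival.get(v, float("inf")):
                arrival[v] = t_edge
                heapq.heappush(heap, (t_edge, v))
    return arrival

print(foremost("a"))  # {'a': 0, 'b': 1, 'c': 2, 'd': 5}
```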
21

Ding, Yingjia. "Knowledge retention with genetic algorithms by multiple levels of representation." Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-12052009-020026/.

Full text
22

Katerinchuk, Valeri. "Heuristic multicast routing algorithms in WSNs with incomplete network knowledge." Thesis, King's College London (University of London), 2018. https://kclpure.kcl.ac.uk/portal/en/theses/heuristic-multicast-routing-algorithms-in-wsns-with-incomplete-network-knowledge(91a1331e-b2ef-40ba-91f6-7eb03e6296cb).html.

Full text
23

Arthur, Kwabena(Kwabena K. ). "On the use of prior knowledge in deep learning algorithms." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127151.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 54-56).
Machine learning algorithms have seen increasing use in the field of computational imaging. In the past few decades, rapid computing hardware developments, such as in GPUs, advances in mathematical optimization, and the availability of large public-domain databases have made these algorithms increasingly attractive for several imaging problems. While these algorithms have excelled in tests of generalizability, there is the underlying question of whether these "black-box" approaches are indeed learning the correct tasks. Is there a way for us to incorporate prior knowledge into the underlying framework? In this work, we examine how prior information on a task can be incorporated to make more efficient use of deep learning algorithms. First, we investigate the case of phase retrieval. We use our prior knowledge of light propagation, and embed an approximation of the physical model into our training scheme. We test this on imaging in extremely dark conditions, with as low as 1 photon per pixel on average. Secondly, we investigate the case of image enhancement. We take advantage of the composite nature of the task of transforming a low-resolution, low-dynamic-range image into a higher-resolution, higher-dynamic-range image. We also investigate the application of mixed losses in this multi-task scheme, learning more efficiently from the composite tasks.
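The phase-retrieval part of this approach can be caricatured in a few lines: a network proposes a phase image, a differentiable stand-in for the light-propagation operator maps it back to a simulated measurement, and the loss enforces consistency with the recorded one. The forward model, network and shapes below are assumptions for illustration, not the thesis's actual architecture.

```python
import torch

def forward_model(phase):
    # Stand-in for free-space propagation: intensity of the FFT of the
    # complex field exp(i * phase). The real operator is an assumption.
    field = torch.exp(1j * phase)
    return torch.abs(torch.fft.fft2(field)) ** 2

net = torch.nn.Sequential(                 # toy reconstruction network
    torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(8, 1, 3, padding=1))

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
measurement = torch.rand(1, 1, 32, 32)     # fake low-photon measurement

for step in range(100):
    phase = net(measurement)               # predicted phase image
    simulated = forward_model(phase[0, 0]) # physics-based re-measurement
    loss = torch.mean((simulated - measurement[0, 0]) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()
```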
24

Zhang, Xiaoyu. "Effective Search in Online Knowledge Communities: A Genetic Algorithm Approach." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/35059.

Full text
Abstract:
Online Knowledge Communities, also known as online forums, are popular web-based tools that allow members to seek and share knowledge. Documents answering a wide variety of questions are produced in the process of knowledge exchange. The social network of members in an Online Knowledge Community is an important factor for improving search precision. However, prior ranking functions do not exploit this information when handling such documents. In this study, we try to resolve the problem of finding authoritative documents for a user query within an Online Knowledge Community. Unlike prior ranking functions, which consider content-based, hyperlink-based, or document-structure-based features, we explored the Online Knowledge Community social network structure and members' social interaction activities to design features that can gauge the two major factors affecting users' knowledge adoption decisions: argument quality and source credibility. We then designed a customized Genetic Algorithm to adjust the weights for the new features we proposed. We compared the performance of our ranking strategy with several other baselines on real-world data from www.vbcity.com/forums/. The evaluation results demonstrated that our method could noticeably improve user search satisfaction. We concluded that our approach, based on the knowledge adoption model and a Genetic Algorithm, is a better ranking strategy for the Online Knowledge Community.
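The weight-tuning step lends itself to a toy sketch: a genetic algorithm evolves the weights that combine a handful of ranking features, scored against relevance judgements by precision at k. The features, data and fitness below are invented stand-ins, not the thesis's actual setup.

```python
import random

random.seed(1)
FEATURES = 4      # e.g. text match, replies, votes, author rank (assumed)
docs = [[random.random() for _ in range(FEATURES)] for _ in range(30)]
relevant = set(random.sample(range(30), 8))

def fitness(w, k=8):
    # precision at k for the ranking induced by weight vector w
    ranked = sorted(range(30),
                    key=lambda d: -sum(x * y for x, y in zip(w, docs[d])))
    return len([d for d in ranked[:k] if d in relevant]) / k

def evolve(pop=20, gens=40):
    P = [[random.random() for _ in range(FEATURES)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        parents = P[: pop // 2]                  # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, FEATURES)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(FEATURES)       # Gaussian mutation
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        P = parents + children
    return max(P, key=fitness)

w = evolve()
print([round(x, 2) for x in w], round(fitness(w), 2))
```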
Master of Science
25

Hennessy, Sara Catherine Barnard. "The role of conceptual knowledge in children's acquisition of arithmetic algorithms." Thesis, University College London (University of London), 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295184.

Full text
26

Evans, Brian Lawrence. "A knowledge-based environment for the design and analysis of multidimensional multirate signal processing algorithms." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/15623.

Full text
27

Fukuda, Kyoko. "Computer-Enhanced Knowledge Discovery in Environmental Science." Thesis, University of Canterbury. Mathematics and Statistics, 2009. http://hdl.handle.net/10092/2140.

Full text
Abstract:
Encouraging the use of computer algorithms, by developing new algorithms and introducing little-known ones for environmental science problems, is a significant contribution, as it provides knowledge discovery tools to extract new aspects of results and draw new insights, additional to those from general statistical methods. Conducting analysis with appropriately chosen methods, in terms of quality of performance and results, computation time, flexibility and applicability to data of various natures, will help decision making in the policy development and management process for environmental studies. This thesis has three fundamental aims and motivations. Firstly, to develop a flexibly applicable attribute selection method, Tree Node Selection (TNS), and a decision tree assessment tool, Tree Node Selection for assessing decision tree structure (TNS-A), both of which use decision trees pre-generated by the widely used C4.5 decision tree algorithm as their information source, to identify important attributes from data. TNS supports cost-effective and efficient data collection and policy making by selecting fewer, but important, attributes, and TNS-A provides a tool to assess the decision tree structure and extract information on the relationship between attributes and decisions. Secondly, to introduce the use of new, theoretical or little-known computer algorithms, such as the K-Maximum Subarray Algorithm (K-MSA) and Ant-Miner, by adjusting and maximizing their applicability and practicality for environmental science problems, to bring new insights. Additionally, the advanced statistical and mathematical method Singular Spectrum Analysis (SSA) is demonstrated as a data pre-processing method to help improve C4.5 results on noisy measurements. Thirdly, to promote, encourage and motivate environmental scientists to use the ideas and methods developed in this thesis. The methods were tested with benchmark data and various real environmental science problems: sea container contamination, the Weed Risk Assessment model and weed spatial analysis for New Zealand Biosecurity, air pollution, climate and health, and defoliation imagery. The outcome of this thesis will be to introduce the concept and techniques of data mining, a process of knowledge discovery from databases, to environmental science researchers in New Zealand and overseas, by collaborating on future research so that, together with future policy and management, we can maintain and sustain a healthy environment to live in.
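For context, the base case of the K-Maximum Subarray problem mentioned above (K = 1) is solved by Kadane's classic linear-time algorithm, sketched below; K-MSA generalises it to report the K highest-sum subarrays.

```python
def max_subarray(xs):
    """Kadane's algorithm: best contiguous sum and its index range."""
    best_sum, best_range = float("-inf"), (0, 0)
    cur_sum, cur_start = 0, 0
    for i, x in enumerate(xs):
        if cur_sum <= 0:
            cur_sum, cur_start = x, i   # start a fresh subarray here
        else:
            cur_sum += x                # extend the current subarray
        if cur_sum > best_sum:
            best_sum, best_range = cur_sum, (cur_start, i)
    return best_sum, best_range

print(max_subarray([3, -5, 4, 2, -1, 6, -9]))  # (11, (2, 5))
```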
28

Gwynne, Matthew. "Hierarchies for efficient clausal entailment checking : with applications to satisfiability and knowledge compilation." Thesis, Swansea University, 2014. https://cronfa.swan.ac.uk/Record/cronfa42854.

Full text
29

Carral, David. "Efficient Reasoning Algorithms for Fragments of Horn Description Logics." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1491317096530938.

Full text
30

DIETRICH, ERIC STANLEY. "COMPUTER THOUGHT: PROPOSITIONAL ATTITUDES AND META-KNOWLEDGE (ARTIFICIAL INTELLIGENCE, SEMANTICS, PSYCHOLOGY, ALGORITHMS)." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/188116.

Full text
Abstract:
Though artificial intelligence scientists frequently use words such as "belief" and "desire" when describing the computational capacities of their programs and computers, they have completely ignored the philosophical and psychological theories of belief and desire. Hence, their explanations of computational capacities which use these terms are frequently little better than folk-psychological explanations. Conversely, though philosophers and psychologists attempt to couch their theories of belief and desire in computational terms, they have consistently misunderstood the notions of computation and computational semantics. Hence, their theories of such attitudes are frequently inadequate. A computational theory of propositional attitudes (belief and desire) is presented here. It is argued that the theory of propositional attitudes put forth by philosophers and psychologists entails that propositional attitudes are a kind of abstract data type. This refined computational view of propositional attitudes bridges the gap between artificial intelligence, philosophy and psychology. Lastly, it is argued that this theory of propositional attitudes has consequences for meta-processing and consciousness in computers.
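Read charitably as a computing claim, "propositional attitudes are a kind of abstract data type" can be pictured as an interface of operations that is independent of any particular representation; the sketch below is purely illustrative and is not taken from the dissertation.

```python
# A belief "ADT": what matters is the interface (acquire, revise, query),
# not the underlying representation, which could be swapped out freely.

class BeliefStore:
    def __init__(self):
        self._facts = set()        # one possible representation

    def adopt(self, p):            # acquire a belief
        self._facts.add(p)

    def retract(self, p):          # revise by giving a belief up
        self._facts.discard(p)

    def believes(self, p):         # query the attitude
        return p in self._facts

b = BeliefStore()
b.adopt("it is raining")
print(b.believes("it is raining"))   # True
print(b.believes("it is snowing"))   # False
```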
31

Gheyas, Iffat A. "Novel computationally intelligent machine learning algorithms for data mining and knowledge discovery." Thesis, University of Stirling, 2009. http://hdl.handle.net/1893/2152.

Full text
Abstract:
This thesis addresses three major issues in data mining regarding feature subset selection in large dimensionality domains, plausible reconstruction of incomplete data in cross-sectional applications, and forecasting univariate time series. For the automated selection of an optimal subset of features in real time, we present an improved hybrid algorithm: SAGA. SAGA combines the ability of Simulated Annealing to avoid being trapped in local minima with the very high convergence rate of the crossover operator of Genetic Algorithms, the strong local search ability of greedy algorithms and the high computational efficiency of generalized regression neural networks (GRNN). For imputing missing values and forecasting univariate time series, we propose a homogeneous neural network ensemble. The proposed ensemble consists of a committee of Generalized Regression Neural Networks (GRNNs) trained on different subsets of features generated by SAGA, and the predictions of the base classifiers are combined by a fusion rule. This approach makes it possible to discover all important interrelations between the values of the target variable and the input features. The proposed ensemble scheme has two innovative features which make it stand out amongst ensemble learning algorithms: (1) the ensemble makeup is optimized automatically by SAGA; and (2) GRNN is used for both the base classifiers and the top-level combiner classifier. Because of GRNN, the proposed ensemble is a dynamic weighting scheme. This is in contrast to existing ensemble approaches, which rely on simple voting or static weighting strategies. The basic idea of the dynamic weighting procedure is to give a higher reliability weight to those scenarios that are similar to the new ones. The simulation results demonstrate the validity of the proposed ensemble model.
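The flavour of the SA/GA hybrid can be conveyed in a toy: candidate feature subsets are bred by one-point crossover, perturbed by single-bit flips, and accepted by a Metropolis rule under a cooling temperature. The fitness function below is a made-up surrogate; in the thesis it would be a GRNN evaluated on the data.

```python
import math
import random

random.seed(0)
N = 12                                   # number of candidate features
useful = {1, 4, 7}                       # hidden "true" subset (toy)

def fitness(mask):
    hits = len({i for i in range(N) if mask[i]} & useful)
    return hits - 0.05 * sum(mask)       # reward hits, penalise size

def neighbour(mask):
    m = mask[:]
    m[random.randrange(N)] ^= 1          # flip one feature (SA move)
    return m

def crossover(a, b):
    cut = random.randrange(1, N)         # GA-style one-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(6)]
T = 1.0
for step in range(500):
    i, j = random.sample(range(len(pop)), 2)
    cand = neighbour(crossover(pop[i], pop[j]))
    worst = min(range(len(pop)), key=lambda k: fitness(pop[k]))
    delta = fitness(cand) - fitness(pop[worst])
    # Metropolis acceptance lets worse subsets in early, rarely later
    if delta > 0 or random.random() < math.exp(delta / T):
        pop[worst] = cand
    T *= 0.995                           # cooling schedule

best = max(pop, key=fitness)
print([i for i in range(N) if best[i]], round(fitness(best), 2))
```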
32

Périssé, Amélie. "Color formulation algorithms improvement through expert knowledge integration for automotive effect paints." Thesis, Pau, 2020. http://www.theses.fr/2020PAUU3025.

Full text
Abstract:
Nowadays, the automotive coating market is governed by a demand for deep and vibrant colors with effects. In this field, the requirement is very high because the color is associated with a sign of quality. In a typical collision, different parts of the vehicle may be damaged. The damaged part must be repaired, sanded and prepared before being painted. To reduce costs, the body shop must then prepare a paint that matches the color well, and do so as fast as possible. It is therefore necessary for the formulation of the repair coating to reproduce the effects, both colored and textured, from absorbent or effect pigments (aluminum particles, pearlescent materials …), starting from a characterization of the vehicle coating concerned. It is relatively simple to qualify the colored effects from the reflectance curves and then the CIELab coordinates. However, the description of the texturing effect generated by the distribution of effect particles at the microstructure scale is quite complex. The metrological treatment of these perceptual properties is still in its early stages. The parameters used do not necessarily correspond directly to the phenomena actually perceived by the human eye. As part of this thesis work, the mobilization of expert knowledge through various sessions of free sorting and brainstorming on coated samples made it possible to identify genuinely perceptual texture descriptors. These descriptors have been the subject of "objective" evaluations by experienced observers. They thus made it possible to associate a quantitative evaluation scale with each descriptor. This stage of the present thesis work allowed the establishment of ground truth data materialized by a set of reference samples representing different ordered levels of a descriptor. These ground truth data were then used to design a set of measurable physical texture descriptors that were directly correlated to the perceptual scales constructed in the previous step. In the procedure developed, the human eye has been replaced by a digital camera acting as a tristimulus integrator of radiometric information. The image acquisition phase was a decisive step in the process: it was necessary to reproduce the conditions of evaluation of the properties perceived, recognized and retained during the various stages involving expert human observers. It was then possible to characterize the texture phenomena by image analysis and to correlate them with the values of the previously defined mean observer.
33

Honeycutt, Matthew Burton. "Knowledge frontier discovery a thesis presented to the faculty of the Graduate School, Tennessee Technological University /." Click to access online, 2009. http://proquest.umi.com/pqdweb?index=29&did=1908036131&SrchMode=1&sid=1&Fmt=6&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1264775728&clientId=28564.

Full text
34

Chen, Xiaodong. "Temporal data mining : algorithms, language and system for temporal association rules." Thesis, Manchester Metropolitan University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297977.

Full text
Abstract:
Studies on data mining are being pursued in many different research areas, such as Machine Learning, Statistics, and Databases. The work presented in this thesis is based on the database perspective of data mining. The main focuses are on the temporal aspects of data mining problems, especially association rule discovery, and on issues in the integration of data mining and database systems. Firstly, a theoretical framework for temporal data mining is proposed in this thesis. Within this framework, not only potential patterns but also temporal features associated with the patterns are expected to be discovered. Calendar time expressions are suggested to represent temporal features, and the minimum frequency of patterns is introduced as a new threshold in the model of temporal data mining. The framework also emphasises the components necessary to support temporal data mining tasks.

As a specialisation of the proposed framework, the problem of mining temporal association rules is investigated. The methodology adopted in this thesis is to discover potential temporal rules by alternately applying special search techniques for various restricted problems in an interactive and iterative process. Three forms of interesting mining tasks for temporal association rules with certain constraints are identified: the discovery of valid time periods of association rules, the discovery of periodicities of association rules, and the discovery of association rules with temporal features. The search techniques and algorithms for these individual tasks are developed and presented in this thesis.

Finally, an integrated query and mining system (IQMS) is presented in this thesis, covering the description of an interactive query and mining interface (IQMI) supplied by the IQMS system, the presentation of an SQL-like temporal mining language (TML) able to express various data mining tasks for temporal association rules, and the suggestion of an IQMI-based interactive data mining process. The implementation of this system demonstrates an alternative approach to the integration of DBMS and data mining functions.
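The first of those three tasks, finding rules valid in a given time period, can be caricatured as: restrict the transactions to a calendar expression, then count support and confidence inside that window only. The sketch below does exactly that on invented shopping data.

```python
from itertools import combinations
from collections import Counter

transactions = [
    ("2024-06-01", {"sunscreen", "hat"}),
    ("2024-06-08", {"sunscreen", "hat", "water"}),
    ("2024-06-15", {"sunscreen", "water"}),
    ("2024-12-03", {"gloves", "hat"}),
]

def rules_in_period(month, min_sup=0.5, min_conf=0.8):
    # restrict to the calendar window, then count itemsets inside it
    window = [items for date, items in transactions
              if date.startswith(month)]
    n = len(window)
    counts = Counter()
    for items in window:
        for r in (1, 2):
            counts.update(map(frozenset, combinations(sorted(items), r)))
    out = []
    for pair, c in counts.items():
        if len(pair) == 2 and c / n >= min_sup:
            for lhs in pair:
                conf = c / counts[frozenset([lhs])]
                if conf >= min_conf:
                    rhs = (pair - {lhs}).pop()
                    out.append((lhs, rhs, c / n, conf))
    return out

print(rules_in_period("2024-06"))  # e.g. hat -> sunscreen, valid in June
```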
35

Haidar, Ali Doureid. "Equipment selection in opencast mining using a hybrid knowledge base system and genetic algorithms." Thesis, London South Bank University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336376.

Full text
36

Jiahui, Yu. "Research on collaborative filtering algorithm based on knowledge graph and long tail." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18828.

Full text
Abstract:
Background: With the popularization of the Internet and the development of information technology, network information has grown explosively and the problem of information overload [1] has become prominent. To help users find the information they are interested in among this mass of information, and to help information producers bring their content to the attention of users, recommendation systems came into being. Objectives: However, the sparsity problem, the neglect of semantic information, and the failure to consider coverage limit the effectiveness of traditional recommendation systems to some extent. This thesis addresses these problems. Methods: The performance of the recommendation system is improved by constructing a domain knowledge graph and applying knowledge embedding technology (openKE), combined with a collaborative filtering algorithm based on long-tail theory. Three experiments verify the recommendation performance of the proposed approach and its ability to surface long-tail information, comparing it with several other collaborative filtering algorithms. Results: The results show that the proposed approach improves precision, recall and coverage, and has a better ability to mine long-tail information. Conclusion: The proposed method improves recommendation performance by reducing the sparsity of the matrix and mining the semantic information between items. At the same time, long-tail theory is taken into account, so that users can be recommended more items that may be of interest.
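As a rough sketch of two of the ingredients discussed — a long-tail-aware re-ranking of collaborative-filtering scores and the coverage metric — the following fragment illustrates the idea; the `alpha` penalty and helper names are hypothetical, and the knowledge-graph embedding step (openKE) is not reproduced:

```python
import math
from collections import Counter

def long_tail_rerank(scores, item_popularity, alpha=0.5):
    """scores: {item: cf_score}; penalise popular items by alpha*log(1+popularity)."""
    return sorted(scores,
                  key=lambda i: scores[i] - alpha * math.log1p(item_popularity[i]),
                  reverse=True)

def coverage(recommendation_lists, catalogue):
    """Fraction of the catalogue appearing in at least one user's list."""
    recommended = set().union(*map(set, recommendation_lists))
    return len(recommended & set(catalogue)) / len(catalogue)

pop = Counter({"hit": 1000, "mid": 50, "rare": 2})
scores = {"hit": 0.9, "mid": 0.8, "rare": 0.75}
print(long_tail_rerank(scores, pop))                         # ['rare', 'mid', 'hit']
print(coverage([["hit", "rare"]], ["hit", "mid", "rare"]))   # 0.666...
```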
APA, Harvard, Vancouver, ISO, and other styles
37

Florez, Omar Ulises. "Knowledge Extraction in Video Through the Interaction Analysis of Activities." DigitalCommons@USU, 2013. https://digitalcommons.usu.edu/etd/1720.

Full text
Abstract:
Video constitutes a massive amount of data containing complex interactions between moving objects. Extracting knowledge from this type of information creates a demand for video analytics systems that uncover statistical relationships between activities and learn the correspondence between content and labels. These remain open research problems of high complexity when multiple actors perform activities simultaneously, videos contain noise, and streaming scenarios are considered. The techniques introduced in this dissertation provide a basis for analyzing video. The primary contributions of this research are new algorithms for the efficient search of activities in video, scene understanding based on interactions between activities, and the prediction of labels for new scenes.
APA, Harvard, Vancouver, ISO, and other styles
38

Fabregat, Traver Diego [Verfasser]. "Knowledge-based automatic generation of linear algebra algorithms and code / Diego Fabregat Traver." Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2014. http://d-nb.info/1052303080/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Jun, Chen. "Biologically inspired optimisation algorithms for transparent knowledge extraction allied to engineering materials processing." Thesis, University of Sheffield, 2010. http://etheses.whiterose.ac.uk/579/.

Full text
Abstract:
Traditionally, modelling tasks involve building mathematical equations that best describe the underlying process. Such a modelling practice normally requires a deep understanding of the system under investigation, which is why it is often referred to as knowledge-driven modelling. In contrast, knowledge extraction from data (or data-driven modelling), inspired principally by artificial intelligence techniques, is based on limited knowledge of the modelled process and relies on data describing the input-output mappings. Such a process is able to make abstractions and generalisations of the process and often plays a role complementary to knowledge-driven modelling. The Fuzzy Rule-Based System (FRBS) has been found more appealing for such knowledge extraction than other 'black-box' modelling techniques, owing to its ability to provide human-understandable knowledge. However, such interpretability is only semi-inherent in the FRBS. Without special caution, one can easily end up with an FRBS whose predictions are as good as those of 'black-box' modelling methods but whose interpretability is equally poor. Hence, extracting a transparent (interpretable) FRBS is reckoned to be of a multi-objective nature with often conflicting outcomes, which motivates the use of bio-inspired optimisation paradigms, more specifically Artificial Immune Systems, in this research project. In a bid to further improve the overall predictive performance, especially for scattered and uncertain data sets, an error-correction scheme is proposed whereby the original predictive model is compensated via the predicted error. The proposed immune optimisation framework was tested extensively on several benchmark problems and compared with other salient techniques; consistently better performance was obtained. The immune-based modelling approach was further applied to several real data sets taken from the steel industry, viz. Tensile Strength (TS), Elongation and Reduction of Area (ROA), all featuring high-dimensional, nonlinear and sparse data spaces. Results show that the proposed modelling approach is capable of eliciting not only accurate but also transparent FRBSs. Such a transparent FRBS establishes the required predictions of the mechanical properties of materials, which on the one hand can help metallurgists to further understand the underlying mechanisms of alloy processing, and on the other hand will automate and simplify alloy design. Charpy toughness (impact energy), a data set characterised by scatter and uncertainty, was used to validate the proposed error-correction mechanism and proved its validity. The project is part of the research activities currently conducted in the Institute for Microstructural and Mechanical Process Engineering: The University of Sheffield (IMMPETUS).
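The error-correction scheme — compensating the original predictive model via a second model fitted to its residuals — can be sketched generically; the least-squares models below are stand-ins for illustration, not the thesis's immune-optimised FRBS:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)   # noisy target

def fit_linear(F, t):
    """Least-squares fit with a bias term; returns a prediction function."""
    A = np.hstack([F, np.ones((len(F), 1))])
    w, *_ = np.linalg.lstsq(A, t, rcond=None)
    return lambda G: np.hstack([G, np.ones((len(G), 1))]) @ w

base = fit_linear(X, y)                          # crude base model
residuals = y - base(X)                          # what the base model misses
corrector = fit_linear(np.hstack([X, X ** 3]), residuals)  # richer error model

def corrected(Z):
    """Compensate the base prediction with the predicted error."""
    return base(Z) + corrector(np.hstack([Z, Z ** 3]))

print(np.abs(y - base(X)).mean(), np.abs(y - corrected(X)).mean())
```

The second number should come out smaller: the corrector recovers structure the base model leaves in its residuals.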
APA, Harvard, Vancouver, ISO, and other styles
40

Krajča, Petr. "Advanced algorithms for formal concept analysis." Diss., Online access via UMI:, 2009.

Find full text
Abstract:
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Systems Science and Industrial Engineering, 2009.
Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
41

Truong, Quoc Hung. "Knowledge-based 3D point clouds processing." Phd thesis, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-00977434.

Full text
Abstract:
The modeling of real-world scenes through capturing 3D digital data has proven to be both useful and applicable in a variety of industrial and surveying applications. Entire scenes are generally captured by laser scanners and represented by large unorganized point clouds, possibly along with additional photogrammetric data. A typical challenge in processing such point clouds and data lies in detecting and classifying objects that are present in the scene. In addition to the presence of noise, occlusions and missing data, such tasks are often hindered by the irregularity of the capturing conditions both within the same dataset and from one dataset to another. Given the complexity of the underlying problems, recent processing approaches attempt to exploit semantic knowledge for identifying and classifying objects. In the present thesis, we propose a novel approach that makes use of intelligent knowledge management strategies for processing 3D point clouds as well as identifying and classifying objects in digitized scenes. Our approach extends the use of semantic knowledge to all stages of the processing, including the guidance of the individual data-driven processing algorithms. The complete solution consists in a multi-stage iterative concept based on three factors: the modeled knowledge, the package of algorithms, and a classification engine. The goal of the present work is to select and guide algorithms following an adaptive and intelligent strategy for detecting objects in point clouds. Experiments with two case studies demonstrate the applicability of our approach. The studies were carried out on scans of the waiting area of an airport and along the tracks of a railway. In both cases the goal was to detect and identify objects within a defined area. Results show that our approach succeeded in identifying the objects of interest while using various data types.
APA, Harvard, Vancouver, ISO, and other styles
42

Barber, T. J. "Project control strategies using an intelligent knowledge-based system and a heuristic algorithm." Thesis, University of Brighton, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.551028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Makai, Matthew Charles. "Incorporating Design Knowledge into Genetic Algorithm-based White-Box Software Test Case Generators." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/32029.

Full text
Abstract:
This thesis shows how to incorporate Unified Modeling Language sequence diagrams into genetic algorithm-based automated test case generators to increase the code coverage of their resulting test cases. Automated generation of test data through evolutionary testing was proven feasible in prior research. In those previous investigations, the metrics used to determine the effectiveness of a test generation method were the percentages of statement and branch coverage achieved. However, the code coverage realized in those studies often converged at suboptimal percentages due to a lack of guidance in conditional statements. This study compares the coverage percentages of 16 different Java programs when test cases are automatically generated with and without incorporating the associated UML sequence diagrams. It introduces the Evolutionary Test Case Generator (ETCG), an automatic test case generator based on genetic algorithms that can incorporate sequence diagrams to direct the heuristic search process and facilitate evolutionary testing. When the generator used sequence diagrams, the resulting test cases showed an average improvement of 21% in branch coverage and 8% in statement coverage over test cases produced without them.
Master of Science
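As a hedged sketch of how sequence-diagram information can steer a coverage-driven fitness function (the subsequence bonus below is an illustrative assumption, not the ETCG's actual fitness definition):

```python
def fitness(covered_branches, all_branches, call_trace, diagram_order, bonus=0.2):
    """Reward branch coverage, plus a bonus when the trace of method calls
    follows the order prescribed by a UML sequence diagram."""
    coverage = len(covered_branches) / len(all_branches)
    it = iter(call_trace)
    # diagram_order must appear as a subsequence of the observed call trace
    follows = all(any(call == step for call in it) for step in diagram_order)
    return coverage + (bonus if follows else 0.0)

print(fitness({"b1", "b2"}, {"b1", "b2", "b3", "b4"},
              ["init", "login", "query", "logout"],
              ["login", "query"]))        # 0.5 coverage + 0.2 bonus = 0.7
```

Individuals whose executions respect the diagram's call order are thus favoured, guiding the search through otherwise ungu­ided conditional branches.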
APA, Harvard, Vancouver, ISO, and other styles
44

Gandhi, Sachin. "Learning from a Genetic Algorithm with Inductive Logic Programming." Ohio University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1125511501.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Thorstensson, Niklas. "A knowledge-based grapheme-to-phoneme conversion for Swedish." Thesis, University of Skövde, Department of Computer Science, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-731.

Full text
Abstract:

A text-to-speech system is a complex system consisting of several different modules such as grapheme-to-phoneme conversion, articulatory and prosodic modelling, voice modelling etc.

This dissertation is aimed at the creation of the initial part of a text-to-speech system, i.e. the grapheme-to-phoneme conversion, designed for Swedish. The problem area at hand is the conversion of orthographic text into a phonetic representation that can be used as a basis for a future complete text-to-speech system.

The central issue of the dissertation is the grapheme-to-phoneme conversion and the elaboration of rules and algorithms required to achieve this task. The dissertation aims to prove that it is possible to make such a conversion by a rule-based algorithm with reasonable performance. Another goal is to find a way to represent phonotactic rules in a form suitable for parsing. It also aims to find and analyze problematic structures in written text compared to phonetic realization.

This work proposes a knowledge-based grapheme-to-phoneme conversion system for Swedish. The system suggested here is implemented, tested, evaluated and compared to other existing systems. The results achieved are promising, and show that the system is fast, with a high degree of accuracy.
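A minimal sketch of the longest-match, rule-based conversion strategy might look as follows; the toy rule table is purely illustrative and not the thesis's Swedish rule set:

```python
RULES = {                          # toy grapheme -> phoneme rules (illustrative)
    "sj": "ɧ", "tj": "ɕ", "ck": "k",
    "a": "a", "e": "e", "i": "i", "o": "u", "u": "ʉ",
    "k": "k", "s": "s", "t": "t",
}

def g2p(word):
    phonemes, i = [], 0
    while i < len(word):
        for length in (2, 1):              # longest match first
            chunk = word[i:i + length]
            if chunk in RULES:
                phonemes.append(RULES[chunk])
                i += length
                break
        else:
            i += 1                         # skip graphemes with no rule
    return phonemes

print(g2p("sjuk"))   # ['ɧ', 'ʉ', 'k']
```

A full system would extend this with phonotactic constraints and exception lexica for the problematic orthographic structures the dissertation analyses.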

APA, Harvard, Vancouver, ISO, and other styles
46

Gebser, Martin. "Proof theory and algorithms for answer set programming." Phd thesis, Universität Potsdam, 2011. http://opus.kobv.de/ubp/volltexte/2011/5542/.

Full text
Abstract:
Answer Set Programming (ASP) is an emerging paradigm for declarative programming, in which a computational problem is specified by a logic program such that particular models, called answer sets, match solutions. ASP faces a growing range of applications, demanding high-performance tools able to solve complex problems. ASP integrates ideas from a variety of neighboring fields. In particular, automated techniques to search for answer sets are inspired by Boolean Satisfiability (SAT) solving approaches. While the latter have firm proof-theoretic foundations, ASP lacks formal frameworks for characterizing and comparing solving methods. Furthermore, sophisticated search patterns of modern SAT solvers, successfully applied in areas such as model checking and verification, are not yet established in ASP solving. We address these deficiencies, for one, by providing proof-theoretic frameworks that allow for characterizing, comparing, and analyzing approaches to answer set computation. For another, we devise modern ASP solving algorithms that integrate and extend state-of-the-art techniques for Boolean constraint solving. We thus contribute to the understanding of existing ASP solving approaches and their interconnections, as well as to their enhancement by incorporating sophisticated search patterns. The central idea of our approach is to identify atomic as well as composite constituents of a propositional logic program with Boolean variables. This enables us to describe fundamental inference steps and to selectively combine them in proof-theoretic characterizations of various ASP solving methods. In particular, we show that the different concepts of case analysis applied by existing ASP solvers entail mutual exponential separations regarding their best-case complexities. We also develop a generic proof-theoretic framework amenable to language extensions, and we point out that exponential separations can likewise be obtained due to case analyses on them. We further exploit fundamental inference steps to derive Boolean constraints characterizing answer sets. These enable the conception of ASP solving algorithms that include search patterns of modern SAT solvers, while also allowing for direct technology transfer between the areas of ASP and SAT solving. Beyond the search for one answer set of a logic program, we address the enumeration of answer sets and of their projections to a subvocabulary, respectively. The algorithms we develop enable repetition-free enumeration in polynomial space without being intrusive, i.e., they do not necessitate any modifications of computations before an answer set is found. Our approach to ASP solving is implemented in clasp, a state-of-the-art Boolean constraint solver that has successfully participated in recent solver competitions. Although we do not address here the implementation techniques of clasp or all of its features, we present the principles of its success in the context of ASP solving.
APA, Harvard, Vancouver, ISO, and other styles
47

Dieng, Cheikh Tidiane. "Etude et implantation de l'extraction de requetes frequentes dans les bases de donnees multidimensionnelles." Thesis, Cergy-Pontoise, 2011. http://www.theses.fr/2011CERG0530.

Full text
Abstract:
The problem of mining frequent queries in a database has motivated many research efforts during the last two decades. This is because many interesting patterns, such as association rules, exact or approximative functional dependencies and exact or approximative conditional functional dependencies, can be easily retrieved, which is not possible using standard techniques. However, mining frequent queries in a relational database is not easy because, on the one hand, the size of the search space is huge (it encompasses all possible queries that can be addressed to a given database), and on the other hand, testing whether two queries are equivalent (which entails redundant support computations) is NP-complete. In this thesis, we focus on projection-selection-join queries, assuming that the database is defined over a star schema. In this setting, we define a pre-ordering (≼) between queries and prove the following basic properties: 1. the support measure is anti-monotonic with respect to ≼, and 2. defining q ≡ q′ if and only if q ≼ q′ and q′ ≼ q, all equivalent queries have the same support. The main contributions of the thesis are, on the one hand, to formally study the properties of the pre-ordering and equivalence relation mentioned above, and on the other hand, to propose a levelwise, Apriori-like algorithm for computing all frequent queries in a relational database defined over a star schema. Moreover, this algorithm has been implemented, and the reported experiments show that, with our approach, runtime is acceptable, even in the case of large fact tables.
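The anti-monotonicity of support (property 1 above) is what makes a levelwise search feasible; a generic Apriori-style skeleton over queries abstracted as sets of atoms might look like this (the `support` callback and the toy database are assumptions, not the thesis's actual query evaluation):

```python
from itertools import combinations

def frequent_queries(atoms, support, min_sup):
    """Levelwise search: if a query is infrequent, none of its ≼-larger
    refinements can be frequent, so they are never generated."""
    level = [frozenset([a]) for a in atoms]
    frequent, k = [], 1
    while level:
        survivors = [q for q in level if support(q) >= min_sup]
        frequent.extend(survivors)
        k += 1
        prev = set(survivors)
        # candidate generation with standard Apriori pruning:
        # keep a k-set only if all its (k-1)-subsets are frequent
        cands = {a | b for a, b in combinations(survivors, 2) if len(a | b) == k}
        level = [c for c in cands
                 if all(frozenset(s) in prev for s in combinations(c, k - 1))]
    return frequent

# Toy support: fraction of "tuples" satisfying every atom of the query.
db = [{"p", "q"}, {"p"}, {"p", "q", "r"}]
sup = lambda q: sum(q <= t for t in db) / len(db)
print(frequent_queries(["p", "q", "r"], sup, 0.5))
```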
APA, Harvard, Vancouver, ISO, and other styles
48

Ben, Mohamed Khalil. "Traitement de requêtes conjonctives avec négation : algorithmes et expérimentations." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2010. http://tel.archives-ouvertes.fr/tel-00563217.

Full text
Abstract:
In this thesis, we are interested in problems at the crossroads of two fields: databases and knowledge bases. We consider two equivalent problems concerning conjunctive queries with negation: query containment and the evaluation of a Boolean query under the open-world assumption. We reformulate these problems as a deduction problem in a fragment of first-order logic. We then refine existing algorithmic schemes and propose new algorithms. To study and compare them experimentally, we propose a random instance generator and analyse the influence of its parameters on the difficulty of instances of the studied problem. Finally, using this experimental methodology, we compare the contributions of the different refinements and the algorithms with one another.
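For purely positive conjunctive queries, containment reduces to the existence of a homomorphism; the brute-force check below illustrates only this positive building block, which the thesis's algorithms for the negation case refine (the atom encoding — lowercase strings as variables — is an assumption for this sketch):

```python
from itertools import product

def homomorphism_exists(src_atoms, dst_atoms):
    """Atoms are tuples (predicate, term, ...); lowercase terms are variables.
    Returns True iff some variable assignment maps src_atoms into dst_atoms."""
    vars_ = sorted({t for a in src_atoms for t in a[1:] if t.islower()})
    consts = sorted({t for a in dst_atoms for t in a[1:]})
    for assignment in product(consts, repeat=len(vars_)):
        h = dict(zip(vars_, assignment))
        image = {(a[0], *[h.get(t, t) for t in a[1:]]) for a in src_atoms}
        if image <= set(dst_atoms):
            return True
    return False

q1 = [("R", "A", "B"), ("R", "B", "C")]   # uppercase terms act as constants
q2 = [("R", "x", "y")]                    # x, y are variables
print(homomorphism_exists(q2, q1))        # True: map x -> A, y -> B
```

With negated atoms, a single homomorphism test no longer suffices, which is precisely why the problem jumps in complexity and calls for the refined algorithms the thesis develops and benchmarks.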
APA, Harvard, Vancouver, ISO, and other styles
49

Haldavnekar, Nikhil. "An algorithm and implementation for extracting schematic and semantic knowledge from relational database systems." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE0000541.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Abu-Hakima, Suhayya. "DR: the diagnostic remodeler algorithm for automated model acquisition through fault knowledge re-use." Dissertation, Carleton University, Department of Systems and Computer Engineering, Ottawa, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles