Dissertations / Theses on the topic 'Algorithmic knowledge'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Algorithmic knowledge.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Hartland, Joanne. "The machinery of medicine : an analysis of algorithmic approaches to medical knowledge and practice." Thesis, University of Bath, 1993. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357868.
Sjö, Kristoffer. "Semantics and Implementation of Knowledge Operators in Approximate Databases." Thesis, Linköping University, Department of Computer and Information Science, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2438.
In order to couple epistemic formulas with approximate databases, it is necessary to have a well-defined semantics for the knowledge operator and a method of reducing epistemic formulas to approximate formulas. In this thesis, two possible definitions of a semantics for the knowledge operator are proposed for use together with an approximate relational database:
* One based upon logical entailment (the dominant notion of knowledge in the literature); sound and complete rules for reduction to approximate formulas are explored and found not to be applicable to all formulas.
* One based upon algorithmic computability (in order to be practically feasible); its correspondence to the entailment-based operator on the one hand, and to the deductive capability of the agent on the other, is explored.
Also, an inductively defined semantics for a "know whether" operator is proposed and tested. Finally, an algorithm implementing the above is proposed, implemented in Java, and tested.
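The two readings of the knowledge operator summarized above can be illustrated on a toy approximate relation, where a tuple is known only if it lies in the lower approximation. This is a minimal sketch for illustration; the relation, the names and the lower/upper encoding are invented here and are not the thesis's actual semantics:

```python
# Toy approximate database: a relation is kept as a lower approximation
# (tuples certainly in the relation) and an upper approximation
# (tuples possibly in the relation), with lower being a subset of upper.

def know(lower, upper, tuple_):
    """K(R(t)): t is known to hold iff it is in the lower approximation."""
    return tuple_ in lower

def know_whether(lower, upper, tuple_):
    """'Know whether' R(t): t is either certainly in or certainly out."""
    return tuple_ in lower or tuple_ not in upper

# Example: an approximate unary relation Bird
lower = {"tweety"}                 # certainly birds
upper = {"tweety", "opus"}         # possibly birds

print(know(lower, upper, "tweety"))        # True: certainly a bird
print(know_whether(lower, upper, "opus"))  # False: membership undecided
print(know_whether(lower, upper, "rock"))  # True: certainly not a bird
```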
Hawasly, Majd. "Policy space abstraction for a lifelong learning agent." Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/9931.
Chen, Hsinchun, and Tobun Dorbin Ng. "An Algorithmic Approach to Concept Exploration in a Large Knowledge Network (Automatic Thesaurus Consultation): Symbolic Branch-and-Bound Search vs. Connectionist Hopfield Net Activation." Wiley Periodicals, Inc, 1995. http://hdl.handle.net/10150/105241.
This paper presents a framework for knowledge discovery and concept exploration. In order to enhance the concept exploration capability of knowledge-based systems and to alleviate the limitations of the manual browsing approach, we have developed two spreading activation-based algorithms for concept exploration in large, heterogeneous networks of concepts (e.g., multiple thesauri). One algorithm, which is based on the symbolic AI paradigm, performs a conventional branch-and-bound search on a semantic net representation to identify other highly relevant concepts (a serial, optimal search process). The second algorithm, which is based on the neural network approach, executes the Hopfield net parallel relaxation and convergence process to identify 'convergent' concepts for some initial queries (a parallel, heuristic search process). Both algorithms can be adopted for automatic, multiple-thesauri consultation. We tested these two algorithms on a large text-based knowledge network of about 13,000 nodes (terms) and 80,000 directed links in the area of computing technologies. This knowledge network was created from two external thesauri and one automatically generated thesaurus. We conducted experiments to compare the behaviors and performances of the two algorithms with the hypertext-like browsing process. Our experiment revealed that manual browsing achieved higher term recall but lower term precision in comparison to the algorithmic systems. However, it was also a much more laborious and cognitively demanding process. In document retrieval, there were no statistically significant differences in document recall and precision between the algorithms and the manual browsing process. In light of the effort required by the manual browsing process, our proposed algorithmic approach presents a viable option for efficiently traversing large-scale, multiple thesauri (knowledge networks).
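The spreading-activation idea described in this abstract can be sketched on a toy concept network. The concepts, link weights, squashing function and threshold below are invented for illustration and greatly simplify the parallel relaxation process of the actual system:

```python
# Sketch of spreading activation over a tiny directed concept network,
# in the spirit of the Hopfield-net relaxation described above. All
# concepts, weights, and thresholds here are invented placeholders.

import math

links = {  # directed, weighted concept links (a toy "thesaurus")
    "neural networks": {"machine learning": 0.9, "hopfield net": 0.8},
    "machine learning": {"data mining": 0.7},
    "hopfield net": {"optimization": 0.6},
    "data mining": {},
    "optimization": {},
}

def activate(seeds, passes=5, threshold=0.5):
    """Repeatedly propagate activation until the network stabilizes,
    then keep the concepts whose activation clears the threshold."""
    act = {c: 0.0 for c in links}
    for s in seeds:
        act[s] = 1.0
    for _ in range(passes):
        new = dict(act)
        for c, nbrs in links.items():
            for n, w in nbrs.items():
                # sigmoid-squashed incoming weighted activation
                new[n] = max(new[n], 1 / (1 + math.exp(-4 * (w * act[c] - 0.5))))
        act = new
    return {c for c, a in act.items() if a >= threshold}

# Weakly linked "optimization" fails to converge; the rest activate.
print(sorted(activate({"neural networks"})))
```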
Goyder, Matthew. "Knowledge Accelerated Algorithms and the Knowledge Cache." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339763385.
Harispe, Sébastien. "Knowledge-based Semantic Measures : From Theory to Applications." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20038/document.
The notions of semantic proximity, distance, and similarity have long been considered essential for the elaboration of numerous cognitive processes, and are therefore of major importance for the communities involved in the development of artificial intelligence. This thesis studies the diversity of semantic measures which can be used to compare lexical entities, concepts and instances by analysing corpora of texts and knowledge representations (e.g., ontologies). Strengthened by the development of Knowledge Engineering and Semantic Web technologies, these measures are arousing increasing interest in both academic and industrial fields. This manuscript begins with an extensive state-of-the-art survey which presents numerous contributions proposed by several communities, and underlines the diversity and interdisciplinary nature of this domain. Thanks to this work, despite the apparent heterogeneity of semantic measures, we were able to distinguish common properties and therefore propose a general classification of existing approaches. Our work goes on to look more specifically at measures which take advantage of knowledge representations expressed by means of semantic graphs, e.g. RDF(S) graphs. We show that these measures rely on a reduced set of abstract primitives and that, even if they have generally been defined independently in the literature, most of them are only specific expressions of generic parametrised measures. This result leads us to the definition of a unifying theoretical framework for semantic measures, which can be used to: (i) design new measures, (ii) study theoretical properties of measures, (iii) guide end-users in the selection of measures adapted to their usage context. The relevance of this framework is demonstrated in its first practical applications which show, for instance, how it can be used to perform theoretical and empirical analyses of measures with a previously unattained level of detail.
Interestingly, this framework provides new insight into semantic measures and opens interesting perspectives for their analysis. Having uncovered a flagrant lack of generic and efficient software solutions dedicated to (knowledge-based) semantic measures, a lack which clearly hampers both the use and analysis of semantic measures, we consequently developed the Semantic Measures Library (SML): a generic software library dedicated to the computation and analysis of semantic measures. The SML can be used to take advantage of hundreds of measures defined in the literature or derived from the parametrised functions introduced by the proposed unifying framework. These measures can be analysed and compared using the functionalities provided by the library. The SML is accompanied by extensive documentation, community support and software solutions which enable non-developers to take full advantage of the library. In broader terms, this project proposes to federate the several communities involved in this domain in order to create an interdisciplinary synergy around the notion of semantic measures: http://www.semantic-measures-library.org. This thesis also presents several algorithmic and theoretical contributions related to semantic measures: (i) an innovative method for the comparison of instances defined in a semantic graph, whose benefits we underline in particular for the definition of content-based recommendation systems, (ii) a new approach to compare concepts defined in overlapping taxonomies, (iii) algorithmic optimisations for the computation of a specific type of semantic measure, and (iv) a semi-supervised learning technique which can be used to identify semantic measures adapted to a specific usage context, while simultaneously taking into account the uncertainty associated with the benchmark in use. These contributions have been validated by several international and national publications.
何淑瑩 and Shuk-ying Ho. "Knowledge representation with genetic algorithms." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31222638.
Ho, Shuk-ying. "Knowledge representation with genetic algorithms /." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22030256.
Correa, Leonardo de Lima. "Uma proposta de algoritmo memético baseado em conhecimento para o problema de predição de estruturas 3-D de proteínas." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/156640.
Memetic algorithms are evolutionary metaheuristics intrinsically concerned with exploiting and incorporating all available knowledge about the problem under study. In this dissertation, we present a knowledge-based memetic algorithm to tackle the three-dimensional protein structure prediction problem without the explicit use of experimentally determined template structures. The algorithm was divided into two main processing steps: (i) sampling and initialization of the algorithm solutions; and (ii) optimization of the structural models from the previous stage. The first step aims to generate and classify several structural models for a given target protein through the Angle Probability List strategy, in order to define different structural groups and create better structures with which to initialize the individuals of the memetic algorithm. The Angle Probability List takes advantage of structural knowledge stored in the Protein Data Bank in order to reduce the complexity of the conformational search space. The second step of the method consists of optimizing the structures generated in the first stage by applying the proposed memetic algorithm, which uses a tree-structured population in which each node can be seen as an independent subpopulation that interacts with others through global search operations, aiming at information sharing, population diversity, and better exploration of the multimodal search space of the problem. The method also encompasses ad hoc global search operators, whose objective is to increase the exploration capacity of the method with respect to the characteristics of the protein structure prediction problem, combined with the Artificial Bee Colony algorithm used as a local search technique applied to each node of the tree.
The proposed algorithm was tested on a set of 24 amino acid sequences and compared with two reference methods in the protein structure prediction area, Rosetta and QUARK. The results show the ability of the method to predict three-dimensional protein structures with folds similar to those of the experimentally determined structures, in terms of the structural metrics Root-Mean-Square Deviation and Global Distance Test Total Score. We also show that our method was able to reach results comparable to Rosetta and QUARK, and in some cases outperformed them, corroborating the effectiveness of our proposal.
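As a rough illustration of the memetic architecture described in this abstract (a tree-structured population with local search applied at each node), here is a minimal sketch on a toy objective. The operators, tree shape and greedy local search are simplified placeholders, not the thesis's actual method or the Artificial Bee Colony algorithm:

```python
# Skeleton of a memetic algorithm with a tree-structured population.
# Toy quadratic objective; all parameters are illustrative assumptions.

import random

random.seed(0)

def fitness(x):            # toy stand-in for an energy function
    return sum(v * v for v in x)

def local_search(x, steps=20, step=0.1):
    """Greedy random perturbation, standing in for a real local search."""
    best = x
    for _ in range(steps):
        cand = [v + random.uniform(-step, step) for v in best]
        if fitness(cand) < fitness(best):
            best = cand
    return best

def memetic(dim=3, nodes=7, generations=30):
    # each tree node holds one representative solution; node i's parent
    # in the binary tree is node (i - 1) // 2, with node 0 as the root
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(nodes)]
    for _ in range(generations):
        for i in range(1, nodes):
            parent = (i - 1) // 2
            # global search: uniform crossover with the tree parent
            child = [random.choice(pair) for pair in zip(pop[i], pop[parent])]
            child = local_search(child)          # memetic refinement
            if fitness(child) < fitness(pop[i]):
                pop[i] = child
            if fitness(pop[i]) < fitness(pop[parent]):  # promote upward
                pop[i], pop[parent] = pop[parent], pop[i]
    return min(pop, key=fitness)

best = memetic()
print(fitness(best))  # should be close to the optimum 0.0
```

The upward promotion step mimics the information sharing between subpopulations along the tree that the abstract describes.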
Johnson, Maury E. "Planning Genetic Algorithm: Pursuing Meta-knowledge." NSUWorks, 1999. http://nsuworks.nova.edu/gscis_etd/611.
López Vallverdú, Joan Albert. "Knowledge-based incremental induction of clinical algorithms." Doctoral thesis, Universitat Rovira i Virgili, 2012. http://hdl.handle.net/10803/97210.
McCallum, Thomas Edward Reid. "Understanding how knowledge is exploited in Ant algorithms." Thesis, University of Edinburgh, 2005. http://hdl.handle.net/1842/880.
Tomczak, Jakub. "Algorithms for knowledge discovery using relation identification methods." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2563.
Full textDouble Diploma Programme, polish supervisor: prof. Jerzy Świątek, Wrocław University of Technology
Mallen, Jason. "Utilising incomplete domain knowledge in an information theoretic guided inductive knowledge discovery algorithm." Thesis, University of Portsmouth, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295773.
Full textGhai, Vishal V. "Knowledge Based Approach UsIng Neural Networks for Predicting Corrosion Rate." Ohio University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1132954243.
Tosatto, Silvio Carlo Ermanno. "Protein structure prediction improving and automating knowledge-based approaches /." [S.l. : s.n.], 2002. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10605023.
Bauer, Sebastian [Verfasser]. "Algorithms for knowledge integration in biomedical sciences / Sebastian Bauer." Berlin : Freie Universität Berlin, 2012. http://d-nb.info/1029850844/34.
MARTINS, ISNARD THOMAS. "KNOWLEDGE DISCOVERY IN POLICE CRIMINAL RECORDS: ALGORITHMS AND SYSTEMS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=14011@1.
Full textEsta Tese propõe uma metodologia para extração de conhecimento em bases de históricos criminais. A abrangência da metodologia proposta envolve todo o ciclo de tratamento dos históricos criminais, desde a extração de radicais temáticos, passando pela construção de dicionários especializados para apoio à extração de entidades até o desenvolvimento de cenários criminais em formato de uma matriz de relacionamentos. Os cenários são convertidos em Mapas de Inteligência destinados à análise de vínculos criminais e descoberta de conhecimento para investigação e elucidação de delitos. Os Mapas de Inteligência extraídos são representados por redes de vínculos, posteriormente tratados como um grafo capacitado. Análises de associações extraídas serão desenvolvidas, utilizando métodos de caminho mais curto em grafos, mapas neurais autoorganizáveis e indicadores de relacionamentos sociais. O método proposto nesta pesquisa permite a visão de indícios ocultos pela complexidade das informações textuais e a descoberta de conhecimento entre associações criminais aplicando-se algoritmos híbridos. A metodologia proposta foi testada utilizando bases de documentos criminais referentes à quadrilhas de narcotraficantes e casos de crimes de maior comoção social ocorridos no Rio de Janeiro entre 1999 e 2003.
This dissertation proposes a methodology to extract knowledge from databases of police criminal records. The scope of the proposed methodology comprises the full cycle of treatment of the criminal records, from the extraction of word radicals, including the construction of specialized dictionaries to support entity extraction, up to the development of criminal scenarios shaped into a relationship matrix. The scenarios are converted into intelligence maps for the analysis of criminal connections and the discovery of knowledge aimed at investigating and clarifying crimes. The intelligence maps extracted are represented by networks of links which are subsequently treated as capacitated graphs. Analyses of the connections extracted are carried out using the shortest-path method in graphs, self-organizing neural maps, and indicators of social relationships. The method proposed in this study helps reveal evidence concealed by the complexity of textual information, and discover knowledge based on criminal connections by applying hybrid algorithms. The proposed methodology was tested using databases of criminal police records related to drug traffic organizations and crimes that caused major social disturbances in Rio de Janeiro, Brazil, from 1999 to 2003.
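The link analysis described above relies in part on shortest-path computation over a graph of associations. A minimal sketch with an invented toy network (the names and weights are illustrative only, not data from the thesis):

```python
# Dijkstra's algorithm over a toy weighted network of associations;
# a lower edge weight models a stronger (closer) link.

import heapq

def shortest_path(graph, start, goal):
    """Return (total cost, path) of the cheapest route, or (inf, [])."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

graph = {
    "suspect A": {"suspect B": 1, "suspect C": 4},
    "suspect B": {"suspect C": 1, "suspect D": 5},
    "suspect C": {"suspect D": 1},
    "suspect D": {},
}

print(shortest_path(graph, "suspect A", "suspect D"))
# → (3, ['suspect A', 'suspect B', 'suspect C', 'suspect D'])
```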
Lisena, Pasquale. "Knowledge-based music recommendation : models, algorithms and exploratory search." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS614.
Representing the information about music is a complex activity that involves different sub-tasks. This thesis manuscript mostly focuses on classical music, researching how to represent and exploit its information. The main goal is the investigation of strategies of knowledge representation and discovery applied to classical music, involving subjects such as Knowledge-Base population, metadata prediction, and recommender systems. We propose a complete workflow for the management of music metadata using Semantic Web technologies. We introduce a specialised ontology and a set of controlled vocabularies for the different concepts specific to music. Then, we present an approach for converting data, in order to go beyond the librarian practice currently in use, relying on mapping rules and interlinking with controlled vocabularies. Finally, we show how these data can be exploited. In particular, we study approaches based on embeddings computed on structured metadata, titles, and symbolic music for ranking and recommending music. Several demo applications have been realised for testing the previous approaches and resources.
Doan, William. "Temporal Closeness in Knowledge Mobilization Networks." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34756.
Ding, Yingjia. "Knowledge retention with genetic algorithms by multiple levels of representation." Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-12052009-020026/.
Katerinchuk, Valeri. "Heuristic multicast routing algorithms in WSNs with incomplete network knowledge." Thesis, King's College London (University of London), 2018. https://kclpure.kcl.ac.uk/portal/en/theses/heuristic-multicast-routing-algorithms-in-wsns-with-incomplete-network-knowledge(91a1331e-b2ef-40ba-91f6-7eb03e6296cb).html.
Full textArthur, Kwabena(Kwabena K. ). "On the use of prior knowledge in deep learning algorithms." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127151.
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 54-56).
Machine learning algorithms have seen increasing use in the field of computational imaging. In the past few decades, rapid developments in computing hardware such as GPUs, advances in mathematical optimization, and the availability of large public-domain databases have made these algorithms increasingly attractive for several imaging problems. While these algorithms have excelled in tests of generalizability, there is the underlying question of whether these "black-box" approaches are indeed learning the correct tasks. Is there a way for us to incorporate prior knowledge into the underlying framework? In this work, we examine how prior information on a task can be incorporated to make more efficient use of deep learning algorithms. First, we investigate the case of phase retrieval. We use our prior knowledge of light propagation and embed an approximation of the physical model into our training scheme. We test this on imaging in extremely dark conditions with as low as 1 photon per pixel on average. Secondly, we investigate the case of image enhancement. We take advantage of the composite nature of the task of transforming a low-resolution, low-dynamic-range image into a higher-resolution, higher-dynamic-range image. We also investigate the application of mixed losses in this multi-task scheme, learning more efficiently from the composite tasks.
S.M. Massachusetts Institute of Technology, Department of Mechanical Engineering
Zhang, Xiaoyu. "Effective Search in Online Knowledge Communities: A Genetic Algorithm Approach." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/35059.
Master of Science
Hennessy, Sara Catherine Barnard. "The role of conceptual knowledge in children's acquisition of arithmetic algorithms." Thesis, University College London (University of London), 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295184.
Evans, Brian Lawrence. "A knowledge-based environment for the design and analysis of multidimensional multirate signal processing algorithms." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/15623.
Fukuda, Kyoko. "Computer-Enhanced Knowledge Discovery in Environmental Science." Thesis, University of Canterbury. Mathematics and Statistics, 2009. http://hdl.handle.net/10092/2140.
Gwynne, Matthew. "Hierarchies for efficient clausal entailment checking : with applications to satisfiability and knowledge compilation." Thesis, Swansea University, 2014. https://cronfa.swan.ac.uk/Record/cronfa42854.
Carral, David. "Efficient Reasoning Algorithms for Fragments of Horn Description Logics." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1491317096530938.
DIETRICH, ERIC STANLEY. "COMPUTER THOUGHT: PROPOSITIONAL ATTITUDES AND META-KNOWLEDGE (ARTIFICIAL INTELLIGENCE, SEMANTICS, PSYCHOLOGY, ALGORITHMS)." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/188116.
Gheyas, Iffat A. "Novel computationally intelligent machine learning algorithms for data mining and knowledge discovery." Thesis, University of Stirling, 2009. http://hdl.handle.net/1893/2152.
Périssé, Amélie. "Color formulation algorithms improvement through expert knowledge integration for automotive effect paints." Thesis, Pau, 2020. http://www.theses.fr/2020PAUU3025.
Full textNowadays, the automotive coating market is governed by a demand for deep and vibrant colors with effects. In this field, the requirement is very high because the color is associated with a sign of quality. In a typical collision, different parts of the vehicle may be damaged. The damaged part must be repaired, sanded and prepared before being painted. To reduce costs, the body shop must then prepare a paint with a good color matching, and thus as fast as possible. It is therefore necessary for the formulation of the repair coating to reproduce the effects, both colored and textured, from absorbent or effect pigments (aluminum particles, pearlescent materials …) from a characterization of the concerned vehicle coating. It is relatively simple to qualify the colored effects from the reflectance curves and then the CIELab coordinates. However, the description of the texturing effect generated by the distribution of effect particles at the microstructure scale is quite complex. The metrological approach of the perceptive properties is still at its beginnings. The parameters used do not necessarily correspond directly to the phenomena actually perceived by the human eye. As part of this thesis work, the mobilization of expert knowledge through various sessions of free sorting and brainstorming on coated samples made it possible to highlight really perceptive texture descriptors. These descriptors have been the subject of "objective" evaluations by experienced observers. They thus made it possible to associate a quantitative evaluation scale with each descriptor. This stage of the present thesis work allowed the establishment of ground truth data materialized by a set of reference samples representing different ordered levels of a descriptor. These ground truth data were then used to design a set of measurable physical texture descriptors that were directly correlated to perceptual scales constructed in the previous step. 
In the procedure developed, the human eye has been replaced by a digital camera acting as a tristimulus integrator of radiometric information. The image acquisition phase was a decisive step in the process: it was necessary to reproduce the conditions under which the perceived properties were evaluated, recognized and retained during the various stages involving expert human observers. It was then possible to characterize the texture phenomena by image analysis and to correlate them with the values of the previously defined mean observer.
Honeycutt, Matthew Burton. "Knowledge frontier discovery a thesis presented to the faculty of the Graduate School, Tennessee Technological University /." Click to access online, 2009. http://proquest.umi.com/pqdweb?index=29&did=1908036131&SrchMode=1&sid=1&Fmt=6&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1264775728&clientId=28564.
Chen, Xiaodong. "Temporal data mining : algorithms, language and system for temporal association rules." Thesis, Manchester Metropolitan University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297977.
Haidar, Ali Doureid. "Equipment selection in opencast mining using a hybrid knowledge base system and genetic algorithms." Thesis, London South Bank University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336376.
Jiahui, Yu. "Research on collaborative filtering algorithm based on knowledge graph and long tail." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18828.
Full textFlorez, Omar Ulises. "Knowledge Extraction in Video Through the Interaction Analysis of Activities Knowledge Extraction in Video Through the Interaction Analysis of Activities." DigitalCommons@USU, 2013. https://digitalcommons.usu.edu/etd/1720.
Fabregat Traver, Diego [Verfasser]. "Knowledge-based automatic generation of linear algebra algorithms and code / Diego Fabregat Traver." Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2014. http://d-nb.info/1052303080/34.
Jun, Chen. "Biologically inspired optimisation algorithms for transparent knowledge extraction allied to engineering materials processing." Thesis, University of Sheffield, 2010. http://etheses.whiterose.ac.uk/579/.
Full textKrajča, Petr. "Advanced algorithms for formal concept analysis." Diss., Online access via UMI:, 2009.
Includes bibliographical references.
Truong, Quoc Hung. "Knowledge-based 3D point clouds processing." PhD thesis, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-00977434.
Barber, T. J. "Project control strategies using an intelligent knowledge-based system and a heuristic algorithm." Thesis, University of Brighton, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.551028.
Makai, Matthew Charles. "Incorporating Design Knowledge into Genetic Algorithm-based White-Box Software Test Case Generators." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/32029.
Master of Science
Gandhi, Sachin. "Learning from a Genetic Algorithm with Inductive Logic Programming." Ohio University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1125511501.
Thorstensson, Niklas. "A knowledge-based grapheme-to-phoneme conversion for Swedish." Thesis, University of Skövde, Department of Computer Science, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-731.
A text-to-speech system is a complex system consisting of several different modules such as grapheme-to-phoneme conversion, articulatory and prosodic modelling, voice modelling etc.
This dissertation is aimed at the creation of the initial part of a text-to-speech system, i.e. the grapheme-to-phoneme conversion, designed for Swedish. The problem area at hand is the conversion of orthographic text into a phonetic representation that can be used as a basis for a future complete text-to-speech system.
The central issue of the dissertation is the grapheme-to-phoneme conversion and the elaboration of rules and algorithms required to achieve this task. The dissertation aims to prove that it is possible to make such a conversion by a rule-based algorithm with reasonable performance. Another goal is to find a way to represent phonotactic rules in a form suitable for parsing. It also aims to find and analyze problematic structures in written text compared to phonetic realization.
This work proposes a knowledge-based grapheme-to-phoneme conversion system for Swedish. The system suggested here is implemented, tested, evaluated and compared to other existing systems. The results achieved are promising, and show that the system is fast, with a high degree of accuracy.
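The rule-based conversion strategy described in this abstract can be sketched with a handful of ordered rewrite rules. The rules below are a tiny invented subset for illustration, not the thesis's actual Swedish rule set, which requires context-sensitive rules and exception handling:

```python
# Minimal sketch of rule-based grapheme-to-phoneme conversion:
# scan the word left to right, applying the first matching rule;
# unmatched letters pass through unchanged. Rules are illustrative.

RULES = [          # ordered: longest / most specific match first
    ("stj", "ɧ"),  # e.g. "stjärna"
    ("sk", "ɧ"),   # only before front vowels in reality (check omitted)
    ("tj", "ɕ"),
    ("ng", "ŋ"),
]

def to_phonemes(word):
    out, i = [], 0
    while i < len(word):
        for graph, phon in RULES:
            if word.startswith(graph, i):
                out.append(phon)
                i += len(graph)
                break
        else:               # no rule matched: copy the letter through
            out.append(word[i])
            i += 1
    return "".join(out)

print(to_phonemes("stjärna"))  # → "ɧärna"
print(to_phonemes("långt"))    # → "låŋt"
```

Ordering the rule list from most to least specific is what keeps "stj" from being consumed as "s" + "tj".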
Gebser, Martin. "Proof theory and algorithms for answer set programming." PhD thesis, Universität Potsdam, 2011. http://opus.kobv.de/ubp/volltexte/2011/5542/.
Full textAntwortmengenprogrammierung (engl. Answer Set Programming; ASP) ist ein Paradigma zum deklarativen Problemlösen, wobei Problemstellungen durch logische Programme beschrieben werden, sodass bestimmte Modelle, Antwortmengen genannt, zu Lösungen korrespondieren. Die zunehmenden praktischen Anwendungen von ASP verlangen nach performanten Werkzeugen zum Lösen komplexer Problemstellungen. ASP integriert diverse Konzepte aus verwandten Bereichen. Insbesondere sind automatisierte Techniken für die Suche nach Antwortmengen durch Verfahren zum Lösen des aussagenlogischen Erfüllbarkeitsproblems (engl. Boolean Satisfiability; SAT) inspiriert. Letztere beruhen auf soliden beweistheoretischen Grundlagen, wohingegen es für ASP kaum formale Systeme gibt, um Lösungsmethoden einheitlich zu beschreiben und miteinander zu vergleichen. Weiterhin basiert der Erfolg moderner Verfahren zum Lösen von SAT entscheidend auf fortgeschrittenen Suchtechniken, die in gängigen Methoden zur Antwortmengenberechnung nicht etabliert sind. Diese Arbeit entwickelt beweistheoretische Grundlagen und fortgeschrittene Suchtechniken im Kontext der Antwortmengenberechnung. Unsere formalen Beweissysteme ermöglichen die Charakterisierung, den Vergleich und die Analyse vorhandener Lösungsmethoden für ASP. Außerdem entwerfen wir moderne Verfahren zum Lösen von ASP, die fortgeschrittene Suchtechniken aus dem SAT-Bereich integrieren und erweitern. Damit trägt diese Arbeit sowohl zum tieferen Verständnis von Lösungsmethoden für ASP und ihrer Beziehungen untereinander als auch zu ihrer Verbesserung durch die Erschließung fortgeschrittener Suchtechniken bei. Die zentrale Idee unseres Ansatzes besteht darin, Atome und komposite Konstrukte innerhalb von logischen Programmen gleichermaßen mit aussagenlogischen Variablen zu assoziieren. Dies ermöglicht die Isolierung fundamentaler Inferenzschritte, die wir in formalen Charakterisierungen von Lösungsmethoden für ASP selektiv miteinander kombinieren können. 
Darauf aufbauend zeigen wir, dass unterschiedliche Einschränkungen von Fallunterscheidungen zwangsläufig zu exponentiellen Effizienzunterschieden zwischen den charakterisierten Methoden führen. Wir generalisieren unseren beweistheoretischen Ansatz auf logische Programme mit erweiterten Sprachkonstrukten und weisen analytisch nach, dass das Treffen bzw. Unterlassen von Fallunterscheidungen auf solchen Konstrukten ebenfalls exponentielle Effizienzunterschiede bedingen kann. Die zuvor beschriebenen fundamentalen Inferenzschritte nutzen wir zur Extraktion inhärenter Bedingungen, denen Antwortmengen genügen müssen. Damit schaffen wir eine Grundlage für den Entwurf moderner Lösungsmethoden für ASP, die fortgeschrittene, ursprünglich für SAT konzipierte, Suchtechniken mit einschließen und darüber hinaus einen transparenten Technologietransfer zwischen Verfahren zum Lösen von ASP und SAT erlauben. Neben der Suche nach einer Antwortmenge behandeln wir ihre Aufzählung, sowohl für gesamte Antwortmengen als auch für Projektionen auf ein Subvokabular. Hierfür entwickeln wir neuartige Methoden, die wiederholungsfreies Aufzählen in polynomiellem Platz ermöglichen, ohne die Suche zu beeinflussen und ggf. zu behindern, bevor Antwortmengen berechnet wurden.
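The answer-set semantics this thesis builds on can be illustrated by brute force on a tiny ground program via the Gelfond-Lifschitz reduct. This sketch only demonstrates the semantics; it bears no resemblance to the advanced search techniques the thesis develops:

```python
# Brute-force answer sets of a ground normal logic program: X is an
# answer set iff X equals the least model of the reduct of the program
# with respect to X (Gelfond-Lifschitz).

from itertools import chain, combinations

# Rules as (head, positive body, negative body)
program = [
    ("p", [], ["q"]),   # p :- not q.
    ("q", [], ["p"]),   # q :- not p.
    ("r", ["p"], []),   # r :- p.
]

atoms = {a for h, pos, neg in program for a in [h] + pos + neg}

def minimal_model(definite_rules):
    """Least model of a negation-free program, by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def answer_sets(program):
    for xs in chain.from_iterable(combinations(sorted(atoms), k)
                                  for k in range(len(atoms) + 1)):
        x = set(xs)
        # Reduct: drop rules whose negative body intersects X,
        # then delete the remaining negative literals.
        reduct = [(h, pos) for h, pos, neg in program if not (set(neg) & x)]
        if minimal_model(reduct) == x:
            yield x

print(sorted(map(sorted, answer_sets(program))))  # → [['p', 'r'], ['q']]
```

The two answer sets mirror the even negative loop between p and q: choosing p forces r, while choosing q blocks both.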
Dieng, Cheikh Tidiane. "Étude et implantation de l'extraction de requêtes fréquentes dans les bases de données multidimensionnelles." Thesis, Cergy-Pontoise, 2011. http://www.theses.fr/2011CERG0530.
Full textThe problem of mining frequent queries in a database has motivated many research efforts during the last two decades. This is so because many interesting patterns, such as association rules, exact or approximative functional dependencies and exact or approximative conditional functional dependencies can be easily retrieved, which is not possible using standard techniques.However, the problem mining frequent queries in a relational database is not easy because, on the one hand, the size of the search space is huge (because encompassing all possible queries that can be addressed to a given database), and on the other hand, testing whether two queries are equivalent (which entails redundant support computations) is NP-Complete.In this thesis, we focus on projection-selection-join queries, assuming that the database is defined over a star schema. In this setting, we define a pre-ordering (≼) between queries and we prove the following basic properties:1. The support measure is anti-monotonic with respect to ≼, and2. Defining q ≡ q′ if and only if q ≼ q′ and q′ ≼ q, all equivalent queries have the same support.The main contributions of the thesis are, on the one hand to formally sudy properties of the pre-ordering and the equivalence relation mentioned above, and on the other hand, to prose a levewise, Apriori like algorithm for the computation of all frequent queries in a relational database defined over a star schema. Moreover, this algorithm has been implemented and the reported experiments show that, in our approach, runtime is acceptable, even in the case of large fact tables
Ben Mohamed, Khalil. "Traitement de requêtes conjonctives avec négation : algorithmes et expérimentations." PhD thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2010. http://tel.archives-ouvertes.fr/tel-00563217.
Haldavnekar, Nikhil. "An algorithm and implementation for extracting schematic and semantic knowledge from relational database systems." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE0000541.
Full textAbu-Hakima, Suhayya Carleton University Dissertation Engineering Systems and Computer. "DR: the diagnostic remodeler algorithm for automated model acquisition through fault knowledge re-use." Ottawa, 1994.