
Dissertations / Theses on the topic 'Logical data'



Consult the top 50 dissertations / theses for your research on the topic 'Logical data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Martin, Bryan. "Collocation of Data in a Multi-temperate Logical Data Warehouse." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1573569703200564.

2

Becker, Katinka. "Logical Analysis of Biological Data." Berlin: Freie Universität Berlin, 2021. http://d-nb.info/1241541779/34.

3

Ferreira, Paulo Jorge Abreu Duarte. "Information flow analysis using data-dependent logical propositions." Master's thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/8451.

Abstract:
Dissertation submitted to obtain the degree of Master in Computer Engineering
A significant number of today's software systems are designed around database systems that store business information, as well as data relevant to access control enforcement, such as user profiles and permissions. Thus, the code implementing security mechanisms is scattered across the application code, often replicated at different architectural layers, each one written in its own programming language and with its own data format. Several approaches address this problem by integrating the development of all application layers in a single programming language. For instance, languages like Ur/Web and LiveWeb/lDB provide static verification of security policies related to access control, ensuring that access control code is correctly placed. However, these approaches provide limited support for the task of ensuring that information is not indirectly leaked because of implementation errors. In this thesis, we present a type-based information-flow analysis for a core language based on lDB, whose security levels are logical propositions depending on actual data. This approach allows for accurate tracking of information throughout a database-backed software system, statically detecting the information leaks that may occur, with precision at the table-cell level. In order to validate our approach, we discuss the implementation of a proof-of-concept extension to the LiveWeb framework and the concerns involved in the development of a medium-sized application in our language.
4

Chung, Koo-Don. "Data transfer over multiplexed logical data links sharing a single physical circuit." Diss., Georgia Institute of Technology, 1990. http://hdl.handle.net/1853/9201.

5

Alavi, Arash. "Data retrieval single layer networks of logical memory neurons." Thesis, Brunel University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302820.

6

Higa, Kunihiko. "End user logical database design: The structured entity model approach." Diss., The University of Arizona, 1988. http://hdl.handle.net/10150/184539.

Abstract:
We live in the Information Age. The effective use of information to manage organizational resources is the key to an organization's competitive power. Thus, a database plays a major role in the Information Age. A well designed database contains relevant, nonredundant, and consistent data. However, a well designed database is rarely achieved in practice. One major reason for this problem is the lack of effective support for logical database design. Since the late 1980s, various methodologies for database design have been introduced, based on the relational model, the functional model, the semantic database model, and the entity structure model. They all have, however, a common drawback: the successful design of database systems requires the experience, skills, and competence of a database analyst/designer. Unfortunately, such database analyst/designers are a scarce resource in organizations. The Structured Entity Model (SEM) method, as an alternative diagrammatic method developed by this research, facilitates the logical design phases of database system development. Because of the hierarchical structure and decomposition constructs of SEM, it can help a novice designer in performing top-down structured analysis and design of databases. SEM also achieves high semantic expressiveness by using a frame representation for entities and three general association categories (aspect, specialization, and multiple decomposition) for relationships. This also enables SEM to have high potential as a knowledge representation scheme for an integrated heterogeneous database system. Unlike most methods, the SEM method does not require designers to have knowledge of normalization theory in order to design a logical database. Thus, an end-user will be able to complete logical database design successfully using this approach. In this research, existing data models used for a logical database design were first studied. Second, the framework of SEM and the design approach using SEM were described and then compared with other data models and their use. Third, the effectiveness of the SEM method was validated in two experiments using novice designers and by a case analysis. In the final chapter of this dissertation, future research directions, such as the design of a logical database design expert system based on the SEM method and applications of this approach to other problems, such as the problems in integration of multiple databases and in an intelligent mail system, are discussed.
7

Phipps, Cassandra J. "Migrating an Operational Database Schema to Data Warehouse Schemas." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1019667418.

8

Mallur, Vikram. "A Model for Managing Data Integrity." Thesis, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/20233.

Abstract:
Consistent, accurate and timely data are essential to the functioning of a modern organization. Managing the integrity of an organization’s data assets in a systematic manner is a challenging task in the face of continuous update, transformation and processing to support business operations. Classic approaches to constraint-based integrity focus on logical consistency within a database and reject any transaction that violates consistency, but leave unresolved how to fix or manage violations. More ad hoc approaches focus on the accuracy of the data and attempt to clean data assets after the fact, using queries to flag records with potential violations and using manual efforts to repair. Neither approach satisfactorily addresses the problem from an organizational point of view. In this thesis, we provide a conceptual model of constraint-based integrity management (CBIM) that flexibly combines both approaches in a systematic manner to provide improved integrity management. We perform a gap analysis that examines the criteria that are desirable for efficient management of data integrity. Our approach involves creating a Data Integrity Zone and an On Deck Zone in the database for separating the clean data from data that violates integrity constraints. We provide tool support for specifying constraints in a tabular form and generating triggers that flag violations of dependencies. We validate this by performing case studies on two systems used to manage healthcare data: PAL-IS and iMED-Learn. Our case studies show that using views to implement the zones does not cause any significant increase in the running time of a process.
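To make the two-zone idea in this abstract concrete, the following minimal Python sketch partitions records into a Data Integrity Zone and an On Deck Zone based on declared constraints; the constraint names, record layout, and flagging scheme are illustrative assumptions, not the thesis's CBIM tooling.

```python
# Minimal sketch of the two-zone idea from the abstract: records that satisfy
# all declared constraints go to the Data Integrity Zone, the rest are parked
# in the On Deck Zone for later repair. Constraint names and record layout
# are hypothetical.

from typing import Callable, Dict, List, Tuple

Record = Dict[str, object]
Constraint = Tuple[str, Callable[[Record], bool]]

CONSTRAINTS: List[Constraint] = [
    ("age_is_positive", lambda r: isinstance(r.get("age"), int) and r["age"] > 0),
    ("email_present",   lambda r: bool(r.get("email"))),
]

def violations(record: Record) -> List[str]:
    """Return the names of all constraints the record violates."""
    return [name for name, check in CONSTRAINTS if not check(record)]

def partition(records: List[Record]):
    integrity_zone, on_deck_zone = [], []
    for rec in records:
        broken = violations(rec)
        if broken:
            # Analogue of a trigger flagging the row instead of rejecting it.
            on_deck_zone.append({**rec, "_violations": broken})
        else:
            integrity_zone.append(rec)
    return integrity_zone, on_deck_zone

if __name__ == "__main__":
    clean, dirty = partition([
        {"age": 34, "email": "a@example.org"},
        {"age": -1, "email": ""},
    ])
    print(len(clean), "clean record(s),", len(dirty), "flagged record(s)")
```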
9

Hagward, Anders. "Using Git Commit History for Change Prediction : An empirical study on the predictive potential of file-level logical coupling." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-172998.

Abstract:
In recent years, a new generation of distributed version control systems has taken the place of the aging centralized ones, with Git arguably being the most popular distributed system today. We investigate the potential of using Git commit history to predict files that are often changed together. Specifically, we look at the rename tracking heuristic found in Git, and the impact it has on prediction performance. By applying a data mining algorithm to five popular GitHub repositories we extract logical coupling – inter-file dependencies not necessarily detectable by static analysis – on which we base our change prediction. In addition, we examine whether certain commits are better suited for change prediction than others; we define a bug fix commit as a commit that resolves one or more issues in the associated issue tracking system and compare the prediction performance of such commits. While our findings do not reveal any notable differences in prediction performance when rename information is disregarded, they suggest that extracting coupling from, and predicting on, bug fix commits in particular could lead to predictions that are both more accurate and more numerous.
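The file-level logical coupling described above can be approximated with a short script that counts pairwise co-changes in the commit history; the repository path, the commit-size cutoff, and the simple pair count are illustrative assumptions rather than the thesis's exact mining algorithm.

```python
# Rough sketch: mine file-level logical coupling (files that tend to change
# together) from git history, as in the abstract. The repository path and the
# simple pairwise co-change count are illustrative assumptions.

import subprocess
from collections import Counter
from itertools import combinations

def commit_file_sets(repo_path: str):
    """Yield the set of files touched by each commit."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--name-only", "--pretty=format:__COMMIT__"],
        capture_output=True, text=True, check=True,
    ).stdout
    files = set()
    for line in out.splitlines():
        if line == "__COMMIT__":
            if files:
                yield files
            files = set()
        elif line.strip():
            files.add(line.strip())
    if files:
        yield files

def logical_coupling(repo_path: str, max_commit_size: int = 30) -> Counter:
    """Count how often each pair of files was changed in the same commit."""
    pairs = Counter()
    for files in commit_file_sets(repo_path):
        if len(files) > max_commit_size:      # skip huge sweeping commits
            continue
        for a, b in combinations(sorted(files), 2):
            pairs[(a, b)] += 1
    return pairs

def predict_co_changes(pairs: Counter, changed_file: str, top: int = 5):
    """Given one changed file, suggest files that historically change with it."""
    scores = Counter()
    for (a, b), n in pairs.items():
        if a == changed_file:
            scores[b] += n
        elif b == changed_file:
            scores[a] += n
    return scores.most_common(top)

# Example usage (hypothetical repository path and file name):
# pairs = logical_coupling("/path/to/repo")
# print(predict_co_changes(pairs, "src/main.c"))
```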
10

Lopes, Siqueira Thiago Luis. "The Design of Vague Spatial Data Warehouses." Doctoral thesis, Universite Libre de Bruxelles, 2015. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/221701.

Abstract:
Spatial data warehouses (SDW) and spatial online analytical processing (SOLAP) enhance decision making by enabling spatial analysis combined with multidimensional analytical queries. A SDW is an integrated and voluminous multidimensional database containing both conventional and spatial data. SOLAP allows querying SDWs with multidimensional queries that select spatial data that satisfy a given topological relationship and that aggregate spatial data. Existing SDW and SOLAP applications mostly consider phenomena represented by spatial data having exact locations and sharp boundaries. They neglect the fact that spatial data may be affected by imperfections, such as spatial vagueness, which prevents distinguishing an object from its neighborhood. A vague spatial object does not have a precisely defined boundary and/or interior. Thus, it may have a broad boundary and a blurred interior, and is composed of parts that certainly belong to it and parts that possibly belong to it. Although several real-world phenomena are characterized by spatial vagueness, no approach in the literature addresses both spatial vagueness and the design of SDWs nor provides multidimensional analysis over vague spatial data. These shortcomings motivated the elaboration of this doctoral thesis, which addresses both vague spatial data warehouses (vague SDWs) and vague spatial online analytical processing (vague SOLAP). A vague SDW is a SDW that comprises vague spatial data, while vague SOLAP allows querying vague SDWs. The major contributions of this doctoral thesis are: (i) the Vague Spatial Cube (VSCube) conceptual model, which enables the creation of conceptual schemata for vague SDWs using data cubes; (ii) the Vague Spatial MultiDim (VSMultiDim) conceptual model, which enables the creation of conceptual schemata for vague SDWs using diagrams; (iii) guidelines for designing relational schemata and integrity constraints for vague SDWs, and for extending the SQL language to enable vague SOLAP; (iv) the Vague Spatial Bitmap Index (VSB-index), which improves the performance to process queries against vague SDWs. The applicability of these contributions is demonstrated in two applications of the agricultural domain, by creating conceptual schemata for vague SDWs, transforming these conceptual schemata into logical schemata for vague SDWs, and efficiently processing queries over vague SDWs.
Doctorate in Engineering Sciences and Technology
Location of the public defense: Universidade Federal de São Carlos, São Carlos, SP, Brazil.
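The abstract's notion of a vague spatial object, composed of parts that certainly belong to it and parts that possibly belong to it, can be illustrated with a toy grid-cell model; the class and its operations are a hedged sketch, not the VSCube or VSMultiDim models themselves.

```python
# Toy illustration of a "vague spatial object": a region made of cells that
# certainly belong to it (kernel) and cells that only possibly belong to it
# (conjecture). The grid-cell representation and the operations below are
# illustrative assumptions, not the VSCube/VSMultiDim models themselves.

class VagueRegion:
    def __init__(self, kernel, conjecture):
        self.kernel = set(kernel)                       # certainly inside
        self.conjecture = set(conjecture) - self.kernel # possibly inside

    def extent(self):
        """All cells that may belong to the region."""
        return self.kernel | self.conjecture

    def union(self, other):
        kernel = self.kernel | other.kernel
        conjecture = (self.extent() | other.extent()) - kernel
        return VagueRegion(kernel, conjecture)

    def intersection(self, other):
        kernel = self.kernel & other.kernel
        conjecture = (self.extent() & other.extent()) - kernel
        return VagueRegion(kernel, conjecture)

    def possibly_overlaps(self, other) -> bool:
        return bool(self.extent() & other.extent())

    def certainly_overlaps(self, other) -> bool:
        return bool(self.kernel & other.kernel)

# Example: two crop fields with blurred borders (cells as (x, y) pairs).
a = VagueRegion({(0, 0), (0, 1)}, {(1, 0), (1, 1)})
b = VagueRegion({(1, 1), (2, 1)}, {(1, 0)})
print(a.certainly_overlaps(b), a.possibly_overlaps(b))  # False True
```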
11

Yongyi, Yuan. "Investigation and implementation of data transmission look-ahead D flip-flops." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2529.

Abstract:

This thesis investigates four D flip-flops with data transmission look-ahead circuits. Logical effort and power-delay products are used to resize all the transistor widths along the critical path in µm CMOS technology. The main goal is to verify and prove that this kind of circuit can be used when the input data have low switching probabilities. A comparison of the average energy consumption of normal D flip-flops and D flip-flops with look-ahead circuits shows that the flip-flops with look-ahead circuits consume less power when the data switching activity is low.
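For orientation, the standard logical-effort delay model and the power-delay product that the abstract relies on can be written as follows; this is textbook background (the Sutherland-Sproull-Harris logical effort model), not equations or data taken from the thesis itself.

```latex
% Standard logical-effort gate delay and power-delay product (background only):
% d  = normalized gate delay, g = logical effort, h = electrical effort,
% p  = parasitic delay, PDP = power-delay product.
\[
  d = g\,h + p, \qquad h = \frac{C_{\text{out}}}{C_{\text{in}}},
  \qquad \mathrm{PDP} = P_{\text{avg}} \cdot t_{d}.
\]
```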

12

Jäkel, Tobias. "Role-based Data Management." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-224416.

Abstract:
Database systems are an integral component of today's software systems and as such they are the central point for storing and sharing a software system's data while ensuring global data consistency at the same time. Introducing the primitives of roles and their accompanying metatype distinction in modeling and programming languages results in a novel paradigm of designing, extending, and programming modern software systems. In detail, roles as a modeling concept enable a separation of concerns within an entity. Along with its rigid core, an entity may acquire various roles in different contexts during its lifetime and thus adapts its behavior and structure dynamically at runtime. Unfortunately, database systems, as an important component and the global consistency provider of such systems, do not keep pace with this trend. The absence of a metatype distinction, in terms of an entity's separation of concerns, in the database system results in various problems for the software system in general, for the application developers, and finally for the database system itself. In the case of relational database systems, these problems are subsumed under the term role-relational impedance mismatch. In particular, the whole software system is designed using different semantics on its various layers. For role-based software systems combined with relational database systems, this gap in semantics between the applications and the database system increases dramatically. Consequently, the database system cannot directly represent the richer semantics of roles or the accompanying consistency constraints. These constraints have to be ensured by the applications, and the database system loses its single-point-of-truth characteristic in the software system. As the applications are in charge of guaranteeing global consistency, their development requires more effort in data management. Moreover, the software system's data management is distributed over several layers, which results in an unstructured software system architecture. To overcome the role-relational impedance mismatch and bring the database system back to its rightful position as the single point of truth in a software system, this thesis introduces the novel, tripartite RSQL approach. It combines a novel database model that represents the metatype distinction as a first-class citizen in a database system, a query language adapted to this database model, and finally a proper result representation. Precisely, RSQL's logical database model introduces Dynamic Data Types to directly represent the separation of concerns within an entity type at the schema level. At the instance level, the database model defines the notion of a Dynamic Tuple that combines an entity with the notion of roles and thus allows for dynamic structure adaptations at runtime without changing an entity's overall type. These definitions build the main data structures on which the database system operates. Moreover, formal operators connecting the query language statements with the database model's data structures complete the database model. The query language, as the external database system interface, features its own data definition, data manipulation, and data query languages. Their statements directly represent the metatype distinction to address Dynamic Data Types and Dynamic Tuples, respectively. As a consequence of the novel data structures, the query processing of Dynamic Tuples is completely redesigned. As the last piece of a complete database integration of the role-based notion and its accompanying metatype distinction, we specify the RSQL Result Net as the result representation. It provides a novel result structure and features functionality to navigate through query results. Finally, we evaluate all three RSQL components in comparison to a relational database system. This assessment clearly demonstrates the benefits of the role concept's full database integration.
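A toy Python sketch of the metatype distinction described above: a rigid core entity that acquires and drops roles at runtime, loosely mirroring the idea behind RSQL's Dynamic Tuples; the class names and the attribute lookup are illustrative assumptions, not RSQL's actual data structures or query processing.

```python
# Toy sketch of the role-based idea behind RSQL's Dynamic Tuples: a rigid core
# entity acquires and drops roles at runtime, changing structure and behaviour
# without changing its core type. Class names are illustrative only.

class Role:
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = attributes

class DynamicTuple:
    def __init__(self, core_type, **core_attributes):
        self.core_type = core_type
        self.core = core_attributes
        self.roles = {}                 # role name -> Role instance

    def acquire(self, role: Role):
        self.roles[role.name] = role

    def drop(self, role_name: str):
        self.roles.pop(role_name, None)

    def plays(self, role_name: str) -> bool:
        return role_name in self.roles

    def attribute(self, name):
        """Look up an attribute in the core first, then in acquired roles."""
        if name in self.core:
            return self.core[name]
        for role in self.roles.values():
            if name in role.attributes:
                return role.attributes[name]
        raise KeyError(name)

person = DynamicTuple("Person", name="Ada")
person.acquire(Role("Student", university="TU Dresden", matriculation=4711))
print(person.plays("Student"), person.attribute("university"))
person.drop("Student")          # structure adapts at runtime
```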
13

Chambon, Arthur. "Caractérisation logique de données : application aux données biologiques." Thesis, Angers, 2017. http://www.theses.fr/2017ANGE0030/document.

Abstract:
Analysis of groups of binary data is now a challenge given the amount of collected data. It can be achieved by logic-based approaches. These approaches identify subsets of relevant Boolean attributes to characterize the observations of a group and may help the user to better understand the properties of this group. This thesis presents an approach for characterizing groups of binary data by identifying a minimal subset of attributes that allows data from different groups to be distinguished. We have precisely defined the multiple characterization problem and proposed new algorithms that can be used to solve its different variants. Our data characterization approach can be extended to the search for patterns in the framework of logical analysis of data. A pattern can be considered as a partial explanation of the positive observations that can be used by practitioners, for instance for diagnosis purposes. Many patterns may exist and several preference criteria can be added in order to focus on more restricted sets of patterns (prime patterns, strong patterns, ...). We propose a comparison between these two methodologies as well as algorithms for generating patterns. A further purpose is to precisely study the properties of the computed solutions with regard to the topological properties of the instances. Experiments are conducted on real biological data sets.
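The characterization problem described above can be illustrated with a simple greedy heuristic that picks attributes until every pair of observations from different groups is distinguished; this set-cover-style sketch is only an illustration of the problem, not one of the thesis's algorithms.

```python
# Greedy sketch of the characterization problem in the abstract: choose a small
# subset of Boolean attributes such that every observation of group A differs
# from every observation of group B on at least one chosen attribute. This is a
# simple set-cover heuristic, not the thesis's exact algorithms.

from itertools import product

def greedy_characterization(group_a, group_b):
    """group_a, group_b: lists of equal-length 0/1 tuples. Returns attribute indices."""
    n_attrs = len(group_a[0])
    # Universe: every (a, b) pair must be distinguished by some chosen attribute.
    uncovered = {(i, j) for i, j in product(range(len(group_a)), range(len(group_b)))}
    chosen = []
    while uncovered:
        # Pick the attribute that distinguishes the most still-uncovered pairs.
        best, best_covered = None, set()
        for attr in range(n_attrs):
            if attr in chosen:
                continue
            covered = {(i, j) for (i, j) in uncovered
                       if group_a[i][attr] != group_b[j][attr]}
            if len(covered) > len(best_covered):
                best, best_covered = attr, covered
        if best is None:          # groups contain identical observations
            raise ValueError("groups cannot be separated")
        chosen.append(best)
        uncovered -= best_covered
    return chosen

positives = [(1, 0, 1, 0), (1, 1, 1, 0)]
negatives = [(0, 0, 1, 1), (1, 0, 0, 1)]
print(greedy_characterization(positives, negatives))  # e.g. [3]
```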
14

Lourenço, Gustavo Vilaça. "Análise de algoritmos distribuídos para escalonamento em Data Grids." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/76/76132/tde-21082012-162544/.

Abstract:
It is a known result that in Data Grids, where processing involves large amounts of data, it can be more effective to schedule processes to run on the sites that already hold the data than to transfer the data to the site where the process that requires them has been scheduled. Existing studies are based on small numbers of sites, with centralized knowledge about the state of the various sites. This option is not scalable for grids with large numbers of participants. This work analyzes distributed versions, using only local information, of the process scheduling and data replication algorithms, showing the effect of the site interconnection topologies on their performance. It is observed that, when only local information is available due to topological constraints, different conclusions are reached about the best process scheduling and data replication algorithms.
15

Voigt, Hannes. "Flexibility in Data Management." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-136681.

Abstract:
With the ongoing expansion of information technology, new fields of application requiring data management emerge virtually every day. In our knowledge culture, increasing amounts of data and a work force organized in more creativity-oriented ways also radically change traditional fields of application and question established assumptions about data management. For instance, investigative analytics and agile software development move towards a very agile and flexible handling of data. As the primary facilitators of data management, database systems have to reflect and support these developments. However, traditional database management technology, in particular relational database systems, is built on assumptions of relatively stable application domains. The need to model all data up front in a prescriptive database schema earned relational database management systems the reputation among developers of being inflexible, dated, and cumbersome to work with. Nevertheless, relational systems still dominate the database market. They are a proven, standardized, and interoperable technology, well-known in IT departments with a work force of experienced and trained developers and administrators. This thesis aims at resolving the growing contradiction between the popularity and omnipresence of relational systems in companies and their increasingly bad reputation among developers. It adapts relational database technology towards more agility and flexibility. We envision a descriptive schema-comes-second relational database system, which is entity-oriented instead of schema-oriented; descriptive rather than prescriptive. The thesis provides four main contributions: (1) a flexible relational data model, which frees relational data management from having a prescriptive schema; (2) autonomous physical entity domains, which partition self-descriptive data according to their schema properties for better query performance; (3) a freely adjustable storage engine, which allows the physical data layout to be adapted to the properties of the data and of the workload; and (4) a self-managed indexing infrastructure, which autonomously collects and adapts index information in the presence of dynamic workloads and evolving schemas. The flexible relational data model is the thesis' central contribution. It describes the functional appearance of the descriptive schema-comes-second relational database system. The other three contributions improve components in the architecture of database management systems to increase the query performance and the manageability of descriptive schema-comes-second relational database systems. We are confident that these four contributions can help pave the way to a more flexible future for relational database management technology.
16

Jabba, Molinares Daladier. "A data link layer in support of swarming of autonomous underwater vehicles." [Tampa, Fla] : University of South Florida, 2009. http://purl.fcla.edu/usf/dc/et/SFE0003261.

17

Siqueira, Thiago Luís Lopes. "The design of vague spatial data warehouses." Universidade Federal de São Carlos, 2015. https://repositorio.ufscar.br/handle/ufscar/298.

Abstract:
Spatial data warehouses (SDW) and spatial online analytical processing (SOLAP) enhance decision making by enabling spatial analysis combined with multidimensional analytical queries. A SDW is an integrated and voluminous multidimensional database containing both conventional and spatial data. SOLAP allows querying SDWs with multidimensional queries that select spatial data that satisfy a given topological relationship and that aggregate spatial data. Existing SDW and SOLAP applications mostly consider phenomena represented by spatial data having exact locations and sharp boundaries. They neglect the fact that spatial data may be affected by imperfections, such as spatial vagueness, which prevents distinguishing an object from its neighborhood. A vague spatial object does not have a precisely defined boundary and/or interior. Thus, it may have a broad boundary and a blurred interior, and is composed of parts that certainly belong to it and parts that possibly belong to it. Although several real-world phenomena are characterized by spatial vagueness, no approach in the literature addresses both spatial vagueness and the design of SDWs nor provides multidimensional analysis over vague spatial data. These shortcomings motivated the elaboration of this doctoral thesis, which addresses both vague spatial data warehouses (vague SDWs) and vague spatial online analytical processing (vague SOLAP). A vague SDW is a SDW that comprises vague spatial data, while vague SOLAP allows querying vague SDWs. The major contributions of this doctoral thesis are: (i) the Vague Spatial Cube (VSCube) conceptual model, which enables the creation of conceptual schemata for vague SDWs using data cubes; (ii) the Vague Spatial MultiDim (VSMultiDim) conceptual model, which enables the creation of conceptual schemata for vague SDWs using diagrams; (iii) guidelines for designing relational schemata and integrity constraints for vague SDWs, and for extending the SQL language to enable vague SOLAP; (iv) the Vague Spatial Bitmap Index (VSB-index), which improves the performance to process queries against vague SDWs. The applicability of these contributions is demonstrated in two applications of the agricultural domain, by creating conceptual schemata for vague SDWs, transforming these conceptual schemata into logical schemata for vague SDWs, and efficiently processing queries over vague SDWs.
18

Videla, Santiago. "Reasoning on the response of logical signaling networks with answer set programming." PhD thesis, Universität Potsdam, 2014. http://opus.kobv.de/ubp/volltexte/2014/7189/.

Abstract:
Deciphering the functioning of biological networks is one of the central tasks in systems biology. In particular, signal transduction networks are crucial for the understanding of the cellular response to external and internal perturbations. Importantly, in order to cope with the complexity of these networks, mathematical and computational modeling is required. We propose a computational modeling framework in order to achieve more robust discoveries in the context of logical signaling networks. More precisely, we focus on modeling the response of logical signaling networks by means of automated reasoning using Answer Set Programming (ASP). ASP provides a declarative language for modeling various knowledge representation and reasoning problems. Moreover, available ASP solvers provide several reasoning modes for assessing the multitude of answer sets. Therefore, leveraging its rich modeling language and its highly efficient solving capacities, we use ASP to address three challenging problems in the context of logical signaling networks: learning of (Boolean) logical networks, experimental design, and identification of intervention strategies. Overall, the contribution of this thesis is three-fold. Firstly, we introduce a mathematical framework for characterizing and reasoning on the response of logical signaling networks. Secondly, we contribute to a growing list of successful applications of ASP in systems biology. Thirdly, we present a software providing a complete pipeline for automated reasoning on the response of logical signaling networks.
19

Bodvill, Jonatan. "Enterprise network topology discovery based on end-to-end metrics : Logical site discovery in enterprise networks based on application level measurements in peer-to-peer systems." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-227803.

Abstract:
In data intensive applications deployed in enterprise networks, especially applications utilizing peer-to-peer technology, locality is of high importance. Peers should aim to maximize data exchange with other peers where the connectivity is the best. In order to achieve this, locality information on which peers can base their decisions must be available. This information is not trivial to obtain, as there is no readily available global knowledge of which nodes have good connectivity, and having each peer try other peers randomly until it finds good enough partners is costly and lowers the locality of the application until it converges. In this thesis a solution is presented which creates a logical topology of a peer-to-peer network, grouping peers into clusters based on their connectivity metrics. This can then be used to aid the peer-to-peer partner selection algorithm and allow for intelligent partner selection. A graph model of the system is created, where peers are modelled as vertices and connections between peers are modelled as edges, with a weight related to the quality of the connection. The problem is then modelled as a weighted graph clustering problem, which is a well-researched problem with a lot of published work tied to it. State-of-the-art graph community detection algorithms are researched, selected depending on factors such as performance and scalability, optimized for the current purpose, and implemented. The results of running the algorithms on the streaming data are evaluated against known information. The results show that unsupervised graph community detection algorithms create useful insights into a network's connectivity structure and can be used in peer-to-peer contexts to find the best partners to exchange data with.
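As a minimal stand-in for the clustering step described above, the sketch below runs weighted label propagation over a graph whose edge weights encode measured connection quality; the thesis evaluates state-of-the-art community detection algorithms, so this is only an assumed, simplified substitute.

```python
# Minimal stand-in for the clustering step in the abstract: peers are vertices,
# measured connection quality is the edge weight, and a weighted label
# propagation pass groups peers into logical "sites".

import random
from collections import defaultdict

def label_propagation(edges, iterations=20, seed=0):
    """edges: dict {(u, v): weight}. Returns {node: cluster_label}."""
    rng = random.Random(seed)
    neighbours = defaultdict(dict)
    for (u, v), w in edges.items():
        neighbours[u][v] = w
        neighbours[v][u] = w
    labels = {node: node for node in neighbours}          # start: own cluster
    nodes = list(neighbours)
    for _ in range(iterations):
        rng.shuffle(nodes)
        changed = False
        for node in nodes:
            # Adopt the label with the largest total edge weight around `node`.
            weight_per_label = defaultdict(float)
            for nb, w in neighbours[node].items():
                weight_per_label[labels[nb]] += w
            best = max(weight_per_label, key=weight_per_label.get)
            if best != labels[node]:
                labels[node] = best
                changed = True
        if not changed:
            break
    return labels

edges = {("a", "b"): 9.0, ("b", "c"): 8.5, ("a", "c"): 7.0,   # one "site"
         ("d", "e"): 9.2, ("e", "f"): 8.8,                     # another "site"
         ("c", "d"): 0.3}                                      # weak cross link
print(label_propagation(edges))
```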
20

Tuğsal, Fatma Kübra. "Examination of primary school children's playing habits through digital puzzle games, and the impact of non-educational commercial puzzle games on the development of logical thinking in primary school children : An ethnographic case study with Supaplex." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-16176.

Abstract:
Supaplex is a single-player video game released at the beginning of the 90's, known as a challenging puzzle game developed for MS-DOS and Amiga. Although Supaplex did not attract intense interest and did not become a "POP" icon like PacMan, Super Mario, or Sonic during the 90's, it was quite popular among people who like puzzle games such as Boulder Dash. This paper aims to revive this nostalgic video game and show whether Supaplex helps to improve the development of logical thinking in primary school children. It examines how Supaplex can affect the way primary school children develop problem-solving techniques. Moreover, this case study examines primary school children's playing habits in their own homes. The paper is based on an ethnographic case study in which I collaborated with a Turkish family to let their children play Supaplex at home as a spare-time activity over an approximately five-month period. During this five-month period, I went to their home and played Supaplex with the triplets.
21

Darlay, Julien. "Analyse combinatoire de données : structures et optimisation." PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00683651.

Abstract:
This thesis deals with data mining problems from the point of view of operations research. Data mining consists in learning new knowledge from observations contained in a database. The nature of the problems encountered in this field is close to that of operations research problems: large instances, complex objectives, and algorithmic difficulty. Data mining can also be modeled as an optimization problem with a partially known objective. This thesis is divided into two parts. The first is an introduction to data mining. It presents Combinatorial Data Analysis (Analyse Combinatoire de Données, ACD), a data mining method stemming from discrete optimization. This method is applied to original medical data and an extension to survival analysis problems is proposed. Survival analysis consists in modeling the time before an event (typically a death or a relapse). The proposed heuristics use classical operations research techniques such as integer linear programming, problem decomposition, and greedy algorithms. The second part is more theoretical and focuses on two combinatorial problems encountered in the field of data mining. The first is a problem of partitioning graphs into dense subgraphs for unsupervised learning. We show the algorithmic complexity of this problem and propose a polynomial algorithm based on dynamic programming when the graph is a tree. This algorithm relies on results from matching theory. The second problem is a generalization of test cover problems for feature selection. The rows of a matrix are colored with two colors. The objective is to find a minimum subset of columns such that any pair of rows with different colors remains distinct when the matrix is restricted to this subset of columns. We show complexity results as well as tight bounds on the size of optimal solutions for different matrix structures.
22

Okutanoglu, Aydin. "Patim: Proximity Aware Time Management." PhD thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609997/index.pdf.

Abstract:
Logical time management is used to synchronize the executions of distributed simulation elements. In existing time management systems, such as the High Level Architecture (HLA), the logical times of the simulation elements are synchronized. However, in some cases synchronization can unnecessarily decrease the performance of the system. In the proposed HLA-based time management mechanism, federates are clustered into logically related groups. The relevance of federates is taken to be a function of proximity, which is defined as the distance between them in the virtual space. Thus, each federate cluster is composed of relatively close federates according to the calculated distances. When federate clusters are sufficiently far from each other, there is no need to synchronize them, as they are not related to each other. So, in the PATiM mechanism, inter-cluster logical times are not synchronized when clusters are sufficiently distant. However, if distant federate clusters get close to each other, they need to resynchronize their logical times. This temporal partitioning is aimed at reducing network traffic and time management calculations and also at increasing the concurrency between federates. The results obtained from case applications have verified that clustering improves local performance as soon as federates become unrelated.
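The proximity-based grouping described above can be illustrated with a union-find pass over pairwise distances in the virtual space; the distance threshold and federate coordinates below are made-up values, not PATiM's actual parameters.

```python
# Illustration of the proximity idea in the abstract: federates whose virtual-
# space distance is below a threshold end up in the same cluster and keep their
# logical times synchronized; distant clusters need not synchronize.

from itertools import combinations
from math import dist

def proximity_clusters(positions, threshold):
    """positions: {federate: (x, y)}. Returns a list of clusters (sets)."""
    parent = {f: f for f in positions}

    def find(f):
        while parent[f] != f:
            parent[f] = parent[parent[f]]   # path compression
            f = parent[f]
        return f

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in combinations(positions, 2):
        if dist(positions[a], positions[b]) <= threshold:
            union(a, b)

    clusters = {}
    for f in positions:
        clusters.setdefault(find(f), set()).add(f)
    return list(clusters.values())

positions = {"tank1": (0, 0), "tank2": (3, 1), "uav": (2, 2), "ship": (90, 80)}
for cluster in proximity_clusters(positions, threshold=10):
    print(sorted(cluster))   # tanks and UAV together; the ship stands alone
```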
23

Arnold, Holger. "A linearized DPLL calculus with clause learning (2nd, revised version)." Universität Potsdam, 2009. http://opus.kobv.de/ubp/volltexte/2009/2908/.

Abstract:
Many formal descriptions of DPLL-based SAT algorithms either do not include all essential proof techniques applied by modern SAT solvers or are bound to particular heuristics or data structures. This makes it difficult to analyze proof-theoretic properties or the search complexity of these algorithms. In this paper we try to improve this situation by developing a nondeterministic proof calculus that models the functioning of SAT algorithms based on the DPLL calculus with clause learning. This calculus is independent of implementation details yet precise enough to enable a formal analysis of realistic DPLL-based SAT algorithms.
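For orientation, a bare-bones DPLL procedure with unit propagation is sketched below; the calculus in the report additionally models clause learning and realistic solver behaviour, which this sketch deliberately omits.

```python
# Bare-bones DPLL with unit propagation, for orientation only. Clause learning
# and non-chronological backtracking, as modelled in the report, are omitted.
# A formula is a list of clauses; a clause is a list of non-zero ints
# (positive = variable, negative = negated variable), as in DIMACS.

def unit_propagate(clauses, assignment):
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                      # clause already satisfied
            unassigned = [lit for lit in clause
                          if lit not in assignment and -lit not in assignment]
            if not unassigned:
                return None                   # conflict: clause falsified
            if len(unassigned) == 1:
                assignment.add(unassigned[0])  # forced (unit) literal
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    assignment = unit_propagate(clauses, set(assignment))
    if assignment is None:
        return None                            # UNSAT under this branch
    if all(any(lit in assignment for lit in c) for c in clauses):
        return assignment                      # all clauses satisfied
    # Branch on some unassigned variable.
    var = next(abs(l) for c in clauses for l in c
               if l not in assignment and -l not in assignment)
    for lit in (var, -var):
        result = dpll(clauses, assignment | {lit})
        if result is not None:
            return result
    return None

print(dpll([[1, 2], [-1, 3], [-2, -3]]))   # a satisfying assignment
print(dpll([[1], [-1]]))                   # None (unsatisfiable)
```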
24

Hesselius, Tobias, and Tommy Savela. "A Java Framework for Broadcast Encryption Algorithms." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2504.

Abstract:

Broadcast encryption is a fairly new area in cryptology. It was first addressed in 1992, and research in this area has been extensive ever since. In short, broadcast encryption is used for efficient and secure broadcasting to an authorized group of users. This group can change dynamically, and in some cases only one-way communication between the sender and receivers is available. An example of this is digital TV transmissions via satellite, in which only the paying customers can decrypt and view the broadcast.

The purpose of this thesis is to develop a general Java framework for implementation and performance analysis of broadcast encryption algorithms. In addition to the actual framework a few of the most common broadcast encryption algorithms (Complete Subtree, Subset Difference, and the Logical Key Hierarchy scheme) have been implemented in the system.

This master’s thesis project was defined by and carried out at the Information Theory division at the Department of Electrical Engineering (ISY), Linköping Institute of Technology, during the first half of 2004.
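To illustrate the Logical Key Hierarchy scheme mentioned above, the sketch below arranges keys in a binary tree so that each user holds the keys on the path from its leaf to the root, and revoking a user refreshes only those O(log n) keys; the plain-string keys and the omission of the actual rekey messages are simplifications of the real protocol.

```python
# Tiny illustration of the Logical Key Hierarchy (LKH) idea from the abstract:
# keys form a binary tree, each user holds the keys on the path from its leaf
# to the root, and revoking a user refreshes only the O(log n) keys it knew.
# Keys are plain strings here, which is obviously simplified.

import secrets

class LKHTree:
    def __init__(self, n_users: int):
        self.n = n_users
        # Heap-style complete binary tree: node 1 is the root (group key),
        # leaves are nodes n_users .. 2*n_users - 1.
        self.keys = {node: secrets.token_hex(8) for node in range(1, 2 * n_users)}

    def _path(self, user: int):
        node = self.n + user            # leaf index of this user
        while node >= 1:
            yield node
            node //= 2

    def user_keys(self, user: int):
        """All keys a user stores: its leaf key, the group key, and those between."""
        return {node: self.keys[node] for node in self._path(user)}

    def revoke(self, user: int):
        """Refresh every key the leaving user knew (a real system would also
        encrypt the new keys to the remaining subtrees)."""
        for node in self._path(user):
            self.keys[node] = secrets.token_hex(8)

tree = LKHTree(n_users=8)
old_group_key = tree.keys[1]
print(len(tree.user_keys(user=3)))   # 4 keys for 8 users: log2(8) + 1
tree.revoke(user=3)
print(tree.keys[1] != old_group_key) # True: group key was refreshed
```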

25

Kraut, Daniel. "Generování modelů pro testy ze zdrojových kódů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403157.

Abstract:
The aim of this master's thesis is to design and implement a tool for the automatic generation of paths through source code. First, a study of model-based testing was carried out and a design for the desired automatic generator, based on coverage criteria defined on a CFG model, was produced. The main part of the thesis is the design of the tool and a description of its implementation. The tool supports many coverage criteria, which allows its user to focus on specific artefacts of the system under test. Moreover, the tool accepts additional requirements on the size of the generated test suite, reflecting practical real-world usage. The generator was implemented in C++ and its web interface in Python, which is also used to integrate the tool into the Testos platform.
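One simple instance of path generation over a CFG, here for all-edges coverage, is sketched below; the graph encoding and the single criterion are illustrative assumptions, whereas the thesis's tool supports many more criteria and constraints on test-suite size.

```python
# Minimal sketch of generating test paths from a control-flow graph for one
# simple coverage criterion (all-edges coverage): for every edge, build a path
# entry -> edge -> exit from shortest-path prefixes and suffixes.

from collections import deque

def shortest_path(cfg, start, goal):
    """BFS shortest path in a directed graph given as {node: [successors]}."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in cfg.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def edge_coverage_paths(cfg, entry, exit_node):
    paths = []
    for u, succs in cfg.items():
        for v in succs:
            prefix = shortest_path(cfg, entry, u)
            suffix = shortest_path(cfg, v, exit_node)
            if prefix is None or suffix is None:
                continue                       # unreachable edge
            path = prefix + suffix
            if path not in paths:              # crude de-duplication
                paths.append(path)
    return paths

# A small CFG: 'entry' -> branch 'b' -> either 'then' or 'else' -> 'exit'.
cfg = {"entry": ["b"], "b": ["then", "else"],
       "then": ["exit"], "else": ["exit"], "exit": []}
for p in edge_coverage_paths(cfg, "entry", "exit"):
    print(" -> ".join(p))
```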
26

Alam, Mohammad Saquib. "Automatic generation of critical driving scenarios." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288886.

Abstract:
Despite the tremendous development in the autonomous vehicle industry, the tools for systematic testing are still lacking. Real-world testing is time-consuming and, above all, dangerous. There is also a lack of a framework to automatically generate critical scenarios for testing autonomous vehicles. This thesis develops a general framework for end-to-end testing of an autonomous vehicle in a simulated environment. The framework provides the capability to generate and execute a large number of traffic scenarios in a reliable manner. Two methods are proposed to compute the criticality of a traffic scenario. A so-called critical value is used to learn the probability distribution of critical scenarios iteratively. The obtained probability distribution can be used to sample critical scenarios for testing and for benchmarking a different autonomous vehicle. To describe the static and dynamic participants of the urban traffic scenarios executed by the simulator, the OpenDrive and OpenScenario standards are used.
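One possible reading of "learning the probability distribution of critical scenarios iteratively" is a cross-entropy-style loop that samples scenario parameters, keeps the most critical samples, and refits the sampling distribution; this is an assumption about the flavour of the method, and the toy criticality function below is a stand-in, not the thesis's actual algorithm.

```python
# Cross-entropy-style sketch (an assumption, not the thesis's method): sample
# scenario parameters, keep the most critical samples, refit a Gaussian, repeat.
# The criticality function is a toy stand-in.

import random
import statistics

def criticality(params):
    """Toy stand-in: scenarios are most critical when the cut-in gap is small
    and the relative speed is high (params = (gap_m, rel_speed_mps))."""
    gap, rel_speed = params
    return rel_speed / max(gap, 0.1)

def learn_critical_distribution(iterations=20, samples=200, elite_frac=0.1, seed=1):
    rng = random.Random(seed)
    # Initial broad guess: mean and std-dev per parameter (gap, relative speed).
    means, stds = [30.0, 5.0], [15.0, 5.0]
    for _ in range(iterations):
        batch = [tuple(rng.gauss(m, s) for m, s in zip(means, stds))
                 for _ in range(samples)]
        batch.sort(key=criticality, reverse=True)
        elite = batch[: max(2, int(elite_frac * samples))]
        # Refit the sampling distribution to the most critical scenarios.
        for i in range(len(means)):
            values = [e[i] for e in elite]
            means[i] = statistics.fmean(values)
            stds[i] = max(statistics.pstdev(values), 0.5)   # keep exploring
    return means, stds

means, stds = learn_critical_distribution()
print("critical region centred near gap %.1f m, rel. speed %.1f m/s" % tuple(means))
```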
27

Obeidat, Laith Mohammad. "Enhancing the Indoor-Outdoor Visual Relationship: Framework for Developing and Integrating a 3D-Geospatial-Based Inside-Out Design Approach to the Design Process." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/97726.

Full text
Abstract:
This research study aims to enhance the effectiveness of the architectural design process regarding the exploration and framing of the best visual connections to the outside environment within built environments. Specifically, it aims to develop a framework for developing and integrating an inside-out design approach augmented and informed by digital 3D geospatial data as a way to enhance the explorative ability and decision-making process for designers regarding the visual connection to the outside environment. To do so, the strategy of logical argumentation is used to analyze and study the phenomenon of making visual connections to a surrounding context. The initial recommendation of this stage is to integrate an inside-out design approach that operates through digital immersion in 3D digital representations of the surrounding context. This strategy helps to identify the basic logical steps of the proposed inside-out design process. Then, the method of immersive case study is used to test and further develop the proposed process by designing a specific building, an Art Museum on the campus of Virginia Tech. Finally, the Delphi method is used to evaluate the necessity and importance of the proposed approach to the design process and its ability to achieve this goal. A multi-round survey was distributed to measure the consensus among a number of experts regarding the proposed design approach and its developed design tool. Overall, the findings indicate total agreement among the participating experts regarding the proposed design approach, with some differing concerns regarding the proposed design tool.
Doctor of Philosophy
Achieving a well-designed visual connection to one's surroundings is considered by many philosophers and theorists to be an essential aspect of our spatial experience within built environments. The goal of this research is to help designers to achieve better visual connections to the outside environment and therefore create more meaningful spatial experiences within the built environment. This research aims to enhance the ability of designers to explore the best possible views and make the right design decisions to frame these views of the outdoors from the inside of their buildings. Of course, the physical presence of designers at a building site has been the traditional method of determining the best views; however, this is not always possible during the design process for many reasons. Thus, this research aims to find a more effective alternative to visiting a building site in order to inform each design decision regarding the quality of its visual connection to the outdoors. To do so, this research developed a proposed inside-out design approach to be integrated into the design process. Specifically, it outlines a process that allows designers to be digitally immersed within an accurate 3D representation of the surrounding context, which helps them to explore views from multiple angles inside the space and, in response, make the most suitable design decisions. To further develop the proposed process, it was used in the course of this research to design an Art Museum on the Virginia Tech campus.
APA, Harvard, Vancouver, ISO, and other styles
28

Sbampato, Fernando Vilgino. "Estudo da aplicabilidade de técnicas de sanitização de dados em discos rí­gidos atuais." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-29082018-085110/.

Full text
Abstract:
A sanitização de dados é um dos desafios que está em aberto quando tange a segurança de dados nos discos rígidos. Há duas formas de realizar este procedimento de sanitização de dados nos discos rígidos. A primeira é a utilização de técnicas físicas que visam à destruição do disco rígido por completo. A segunda é a utilização de técnicas lógicas que visam realizar a sanitização dos dados armazenados no disco, permitindo que este seja reutilizado. A proposta principal deste trabalho é a de verificar por meio da técnica de Microscopia de Força Magnética (MFM) a possibilidade de recuperação dos dados originais após o processo de sanitização ter ocorrido por meio de uma técnica lógica. Com este objetivo foram selecionadas oito técnicas lógicas (Gutmann, VSITR, RCMP TSSIP OPS-II, CSEC ITSG-06, DoD 5220.22-M, AR 280-19, GOST R 50739-95 e ISM 6.9.92), após esta seleção foi realizada uma avaliação lógica dessas técnicas com o intuito de selecionar duas técnicas para a avaliação experimental. Para realizar a avaliação experimental foram utilizados dois microscópios (Dimension Icon e o MultiMode 8) para aplicar a técnica de MFM em disco rígido. O objetivo foi comprovar a eficiência das técnicas lógicas na sanitização de dados armazenados nos discos rígidos.
Data sanitization is one of the open challenges in data security on hard disks. There are two ways to perform this data sanitization procedure on hard disks. The first is the use of physical techniques aimed at destroying the hard drive altogether. The second is the use of logical techniques that aim to sanitize the data stored on the disk, allowing it to be used again. The main purpose of this work is to verify, through the Magnetic Force Microscopy (MFM) technique, the possibility of recovering the original data after the sanitization process has been carried out by means of a logical technique. For this purpose, eight logical techniques (Gutmann, VSITR, RCMP TSSIP OPS-II, CSEC ITSG-06, DoD 5220.22-M, AR 280-19, GOST R 50739-95 and ISM 6.9.92) were selected. After this selection, a logical evaluation of these techniques was carried out in order to select two techniques for the experimental evaluation. To perform the experimental evaluation, two microscopes (Dimension Icon and MultiMode 8) were used to apply the MFM technique to the hard disk. The objective was to verify the efficiency of the logical techniques in the sanitization of data stored on hard disks.
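Purely as an illustration of what a multi-pass logical overwrite looks like (zeros, ones, then random data, with a sync after each pass): the sketch below is not a certified implementation of any of the cited standards, and a real sanitization run would target the raw block device and handle remapped sectors rather than a single file; the file name is a placeholder.

```python
import os

def overwrite_passes(path, patterns=(b"\x00", b"\xff", None), chunk=1 << 20):
    """Overwrite a file in place, one full pass per pattern (None = random bytes),
    flushing to the medium after each pass. Illustrative, not a certified wipe."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pattern in patterns:
            f.seek(0)
            written = 0
            while written < size:
                n = min(chunk, size - written)
                block = os.urandom(n) if pattern is None else pattern * n
                f.write(block)
                written += n
            f.flush()
            os.fsync(f.fileno())   # force the pass onto the disk before starting the next

# overwrite_passes("/tmp/scratch.bin")   # only ever on a throwaway file
```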
APA, Harvard, Vancouver, ISO, and other styles
29

Lutovska, Irina. "Implementering av programmering i grundskolan." Thesis, Malmö universitet, Fakulteten för lärande och samhälle (LS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-41978.

Full text
Abstract:
This essay will address the introduction of programming as an element in the compulsory school curriculum and examine the extent to which teachers have been affected by the introduction of programming in curricula and syllabi. The essay is based on two questions: "What difficulties and opportunities have arisen in the teaching after programming was implemented in the syllabus?" and "What forms of pedagogy can teachers use to best support the development of logical thinking, problem solving, creativity and a structured approach through programming skills?". The questions are answered using scientific literature and analysis of previous scientific research. The results of the study show, among other things, that there is a great deal of uncertainty about how programming should be implemented in the students' syllabi, and also about how teachers can feel comfortable teaching such a new subject when proper training for them is lacking. Another conclusion is that the biggest reason for the introduction of programming in schools is to strengthen students' digital skills as society becomes increasingly digital.
APA, Harvard, Vancouver, ISO, and other styles
30

Galicia, Auyón Jorge Armando. "Revisiting Data Partitioning for Scalable RDF Graph Processing Combining Graph Exploration and Fragmentation for RDF Processing Query Optimization for Large Scale Clustered RDF Data RDFPart- Suite: Bridging Physical and Logical RDF Partitioning. Reverse Partitioning for SPARQL Queries: Principles and Performance Analysis. ShouldWe Be Afraid of Querying Billions of Triples in a Graph-Based Centralized System? EXGRAF: Exploration et Fragmentation de Graphes au Service du Traitement Scalable de Requˆetes RDF." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2021. http://www.theses.fr/2021ESMA0001.

Full text
Abstract:
Le Resource Description Framework (RDF) et SPARQL sont des standards très populaires basés sur des graphes initialement conçus pour représenter et interroger des informations sur le Web. La flexibilité offerte par RDF a motivé son utilisation dans d'autres domaines. Aujourd'hui les jeux de données RDF sont d'excellentes sources d'information. Ils rassemblent des milliards de triplets dans des Knowledge Graphs qui doivent être stockés et exploités efficacement. La première génération de systèmes RDF a été construite sur des bases de données relationnelles traditionnelles. Malheureusement, les performances de ces systèmes se dégradent rapidement car le modèle relationnel ne convient pas au traitement des données RDF intrinsèquement représentées sous forme de graphe. Les systèmes RDF natifs et distribués cherchent à surmonter cette limitation. Les premiers utilisent principalement l’indexation comme stratégie d'optimisation pour accélérer les requêtes. Les deuxièmes recourent au partitionnement des données. Dans le modèle relationnel, la représentation logique de la base de données est cruciale pour concevoir le partitionnement. La couche logique définissant le schéma explicite de la base de données offre un certain confort aux concepteurs. Cette couche leur permet de choisir manuellement ou automatiquement, via des assistants automatiques, les tables et les attributs à partitionner. Aussi, elle préserve les concepts fondamentaux sur le partitionnement qui restent constants quel que soit le système de gestion de base de données. Ce schéma de conception n'est plus valide pour les bases de données RDF car le modèle RDF n'applique pas explicitement un schéma aux données. Ainsi, la couche logique est inexistante et le partitionnement des données dépend fortement des implémentations physiques des triplets sur le disque. Cette situation contribue à avoir des logiques de partitionnement différentes selon le système cible, ce qui est assez différent du point de vue du modèle relationnel. Dans cette thèse, nous promouvons l'idée d'effectuer le partitionnement de données au niveau logique dans les bases de données RDF. Ainsi, nous traitons d'abord le graphe de données RDF pour prendre en charge le partitionnement basé sur des entités logiques. Puis, nous proposons un framework pour effectuer les méthodes de partitionnement. Ce framework s'accompagne de procédures d'allocation et de distribution des données. Notre framework a été incorporé dans un système de traitement des données RDF centralisé (RDF_QDAG) et un système distribué (gStoreD). Nous avons mené plusieurs expériences qui ont confirmé la faisabilité de l'intégration de notre framework aux systèmes existants en améliorant leurs performances pour certaines requêtes. Enfin, nous concevons un ensemble d'outils de gestion du partitionnement de données RDF dont un langage de définition de données (DDL) et un assistant automatique de partitionnement
The Resource Description Framework (RDF) and SPARQL are very popular graph-based standards initially designed to represent and query information on the Web. The flexibility offered by RDF motivated its use in other domains, and today RDF datasets are great information sources. They gather billions of triples in Knowledge Graphs that must be stored and efficiently exploited. The first generation of RDF systems was built on top of traditional relational databases. Unfortunately, the performance in these systems degrades rapidly as the relational model is not suitable for handling RDF data inherently represented as a graph. Native and distributed RDF systems seek to overcome this limitation. The former mainly use indexing as an optimization strategy to speed up queries. Distributed and parallel RDF systems resort to data partitioning. The logical representation of the database is crucial to design data partitions in the relational model. The logical layer defining the explicit schema of the database provides a degree of comfort to database designers. It lets them choose manually or automatically (through advisors) the tables and attributes to be partitioned. Besides, it allows the partitioning core concepts to remain constant regardless of the database management system. This design scheme is no longer valid for RDF databases, essentially because the RDF model does not explicitly enforce a schema, since RDF data is mostly implicitly structured. Thus, the logical layer is inexistent and data partitioning depends strongly on the physical implementations of the triples on disk. This situation contributes to having different partitioning logics depending on the target system, which is quite different from the relational model’s perspective. In this thesis, we promote the novel idea of performing data partitioning at the logical level in RDF databases. Thereby, we first process the RDF data graph to support logical entity-based partitioning. After this preparation, we present a partitioning framework built upon these logical structures. This framework is accompanied by data fragmentation, allocation, and distribution procedures. The framework was incorporated into a centralized (RDF_QDAG) and a distributed (gStoreD) triple store. We conducted several experiments that confirmed the feasibility of integrating our framework into existing systems, improving their performance for certain queries. Finally, we design a set of RDF data partitioning management tools including a data definition language (DDL) and an automatic partitioning wizard.
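A toy sketch of the entity-based idea promoted in the abstract: grouping triples by subject keeps each entity's star (all triples sharing a subject) inside one fragment, which can then be assigned to a site; this is illustrative Python with made-up example data, not the RDFPart-Suite implementation.

```python
import zlib
from collections import defaultdict

def partition_by_subject(triples, n_fragments):
    """Hash-partition RDF triples on their subject so that star-shaped
    query patterns around one entity never cross fragment boundaries."""
    fragments = defaultdict(list)
    for s, p, o in triples:
        frag_id = zlib.crc32(s.encode()) % n_fragments   # stable hash on the subject
        fragments[frag_id].append((s, p, o))
    return dict(fragments)

triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", '"Alice"'),
    ("ex:bob",   "foaf:name", '"Bob"'),
    ("ex:bob",   "foaf:knows", "ex:carol"),
]
for frag_id, frag in partition_by_subject(triples, 2).items():
    print(frag_id, frag)
```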
APA, Harvard, Vancouver, ISO, and other styles
31

Raimondi, Daniele. "Crittoanalisi Logica di DES." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amslaurea.unibo.it/1895/.

Full text
Abstract:
Cryptography has always played a primary role in the history of humankind, from its origins to the present day, and the period we live in is certainly no exception. Nowadays, many of the actions performed even just out of habit (banking operations, automatically unlocking the car, logging into Facebook, etc.) conceal within them the constant presence of sophisticated cryptographic systems. Precisely because of this, it is important that the algorithms used be certified in some way as reasonably secure and that research in this field proceed constantly, both from the point of view of possible new exploits for breaking the algorithms in use and by introducing new and ever more complex security systems. This thesis proposes a possible implementation of a particular type of cryptanalytic attack, introduced in 2000 by two researchers of the University "La Sapienza" of Rome and known as "Logical Cryptanalysis". The algorithm on which the work is centred is the Data Encryption Standard (DES), a tough cryptographic standard that fell into disuse in 1999 because of the small size of its key, although it is still algebraically unbroken. The text is structured as follows: the first chapter is devoted to a brief description of DES and its history, introducing the fundamental concepts used throughout the dissertation. The second chapter introduces Logical Cryptanalysis and provides a definition of it, touching on the mathematical concepts needed to understand the following chapters. Chapter 3 presents the first of the two pieces of software developed to make this cryptanalytic attack possible, a library for the representation and manipulation of logical formulas written in Java. The fourth and final chapter describes the program that, using the library described in Chapter 3, automatically produces a set of logical propositions semantically equivalent to DES, whose satisfiability check, carried out with dedicated tools (SAT solvers), is equivalent to performing a known-plaintext attack on that algorithm.
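A toy illustration of the logical-cryptanalysis principle on a 4-bit XOR "cipher" rather than DES: the known plaintext/ciphertext pair becomes unit clauses, the cipher structure becomes XOR clauses, and a (here brute-force) satisfiability check recovers the key; real attacks hand a full DES encoding to dedicated SAT solvers. Variable numbering and helper names are illustrative.

```python
from itertools import product

def xor_clauses(a, b, c):
    """CNF clauses expressing c = a XOR b over propositional variables (ints)."""
    return [[-a, -b, -c], [a, b, -c], [a, -b, c], [-a, b, c]]

def satisfiable(clauses, n_vars):
    """Tiny brute-force SAT check: try every assignment (fine for toy sizes)."""
    for bits in product([False, True], repeat=n_vars):
        value = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
        if all(any(value(l) for l in clause) for clause in clauses):
            return {i + 1: bits[i] for i in range(n_vars)}
    return None

# Toy cipher: ciphertext = plaintext XOR key, bit by bit (4 bits).
# Variables 1-4: plaintext, 5-8: key, 9-12: ciphertext.
plaintext, ciphertext = [1, 0, 1, 1], [0, 0, 1, 0]
clauses = []
for i in range(4):
    p, k, c = 1 + i, 5 + i, 9 + i
    clauses += xor_clauses(p, k, c)                   # cipher structure
    clauses.append([p if plaintext[i] else -p])       # known plaintext bit
    clauses.append([c if ciphertext[i] else -c])      # known ciphertext bit
model = satisfiable(clauses, 12)
print("recovered key bits:", [int(model[5 + i]) for i in range(4)])
```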
APA, Harvard, Vancouver, ISO, and other styles
32

Wang, Yu. "Parallel inductive logic in data mining." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0019/MQ54492.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Kabiri, Chimeh Mozhgan. "Data structures for SIMD logic simulation." Thesis, University of Glasgow, 2016. http://theses.gla.ac.uk/7521/.

Full text
Abstract:
Due to the growth of design size and complexity, design verification is an important aspect of the Logic Circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation based techniques. Using models to replicate the behaviour of an actual system is called simulation. In this thesis, a software/data structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and 2 other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to: • Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (section 5.3.1). • Verify that, on sufficiently large circuits, substantial gains could be made from multicore parallelism (section 5.3.2). • Show that a simulator using this approach out-performs an existing commercial simulator on a standard workstation (section 5.3.3). • Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive super-computers (section 5.3.5). To evaluate ZSIM, two types of test circuits were used: 1. Circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators. 2. Circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits. The synthesizer allowed testing of a range of very large circuits, larger than the ones for which it was possible to obtain open source files. The experimental results show that with SIMD acceleration and multicore, ZSIM gained a peak parallelisation factor of 300 on Intel Xeon Phi and 11 on Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on Intel Xeon Phi and 4 on Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives comparable simulation performance to the IBM Blue Gene supercomputer at very much lower cost. The experimental results have shown that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles and manages the on-chip local store without any explicit mention of the local store being made in the architecture of the simulator itself. However, targeting GPUs, explicit cache management in the program increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability. Note that the same code was tested on both AMD and Xeon Phi machines. The same architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.
To conclude, the two main achievements are restated as follows. The primary achievement of this work was proving that the ZSIM architecture was faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that went beyond the scale range that was previously publicly available, based on prior work that showed the synthesis technique is valid.
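A minimal illustration of the word-level trick that SIMD-oriented logic simulators exploit: one machine word carries 64 independent test patterns, so a single bitwise operation evaluates a gate for all of them at once. This is a generic sketch of bit-parallel simulation, not the ZSIM data structure itself; the netlist format and names are invented.

```python
import random

MASK = (1 << 64) - 1   # 64 test patterns packed into each word

def simulate(netlist, inputs):
    """Evaluate a combinational netlist once for 64 packed input patterns.
    netlist: list of (output, op, in1, in2) tuples in topological order."""
    values = dict(inputs)
    ops = {"AND": lambda a, b: a & b,
           "OR":  lambda a, b: a | b,
           "XOR": lambda a, b: a ^ b,
           "NOT": lambda a, _: ~a & MASK}
    for out, op, a, b in netlist:
        values[out] = ops[op](values[a], values[b] if b else 0)
    return values

# Full adder: sum = a ^ b ^ cin, cout = (a & b) | (cin & (a ^ b))
netlist = [("t1", "XOR", "a", "b"), ("sum", "XOR", "t1", "cin"),
           ("t2", "AND", "a", "b"), ("t3", "AND", "cin", "t1"),
           ("cout", "OR", "t2", "t3")]
inputs = {name: random.getrandbits(64) for name in ("a", "b", "cin")}
out = simulate(netlist, inputs)
# Cross-check pattern 0 (the least significant bit of every word).
a0, b0, c0 = (inputs[n] & 1 for n in ("a", "b", "cin"))
assert (out["sum"] & 1) == a0 ^ b0 ^ c0
```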
APA, Harvard, Vancouver, ISO, and other styles
34

Rivaz, Sebastien de. "Développement d'outils de caractérisation et d'optimisation des performances électriques des réseaux d'interconnexions de circuits intégrés rapides sub-CMOS 65 nm et nouveaux concepts d'interconnexions fonctionnelles." Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENT106/document.

Full text
Abstract:
The objectives of this research work concern the development of tools for evaluating the electrical performance of the interconnects of integrated circuits of the sub-65 nm CMOS generations, and the proposal of solutions for optimizing this performance that make it possible both to maximize circuit speed and to minimize crosstalk levels. This optimization is obtained by acting on the widths and spacings of the interconnects, but also on the number and size of the repeaters placed at their interfaces. Particular attention has also been paid to reducing the complexity of these interconnect networks. To this end, a simulator based on signal propagation models was built. For the passive components, the simulator's input data come from precise electromagnetic frequency-domain modelling or from microwave characterization results and, for the active components, namely the repeaters, from electrical models provided by partners specializing in MOS technologies. The modelling work focused in particular on five points: the modelling of complex coupled networks, the transition to the time domain from limited discrete frequency-domain measurements, the verification of the causality of the time-domain signals obtained, the modelling of the dielectric environment including, in particular, losses and the possible presence of floating conductors, and finally the integration of knowledge of the loads at the interfaces of the interconnects. The measurement problem itself has also been addressed, since a so-called "de-embedding" procedure is proposed, specifically dedicated to the high-frequency characterization of passive devices embedded in the BEOL. Finally, alternative solutions for functionalizing the interconnects are investigated, taking advantage of the very strong couplings existing in the BEOL of sub-65 nm CMOS technologies. The simulation results highlighted a number of potential difficulties, in particular the fact that the performance of CMOS technologies along the "more Moore" path would, from the 45 nm generation onward, require more than ever a global and rational approach to circuit design.
APA, Harvard, Vancouver, ISO, and other styles
35

Reynolds, Robert. "Gene Expression Data Analysis Using Fuzzy Logic." Fogler Library, University of Maine, 2001. http://www.library.umaine.edu/theses/pdf/REynoldsR2001.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Wolstenholme, David Edwin. "External data in logic-based advice systems." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/46612.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Faria, Francisco Henrique Otte Vieira de. "Learning acyclic probabilistic logic programs from data." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-27022018-090821/.

Full text
Abstract:
To learn a probabilistic logic program is to find a set of probabilistic rules that best fits some data, in order to explain how attributes relate to one another and to predict the occurrence of new instantiations of these attributes. In this work, we focus on acyclic programs, because in this case the meaning of the program is quite transparent and easy to grasp. We propose that the learning process for a probabilistic acyclic logic program should be guided by a scoring function imported from the literature on Bayesian network learning. We suggest novel techniques that lead to improvements of orders of magnitude over the current state of the art represented by the ProbLog package. In addition, we present novel techniques for learning the structure of acyclic probabilistic logic programs.
O aprendizado de um programa lógico probabilístico consiste em encontrar um conjunto de regras lógico-probabilísticas que melhor se adequem aos dados, a fim de explicar de que forma estão relacionados os atributos observados e predizer a ocorrência de novas instanciações destes atributos. Neste trabalho focamos em programas acíclicos, cujo significado é bastante claro e fácil de interpretar. Propõe-se que o processo de aprendizado de programas lógicos probabilísticos acíclicos deve ser guiado por funções de avaliação importadas da literatura de aprendizado de redes Bayesianas. Neste trabalho são sugeridas novas técnicas para aprendizado de parâmetros que contribuem para uma melhora significativa na eficiência computacional do estado da arte representado pelo pacote ProbLog. Além disto, apresentamos novas técnicas para aprendizado da estrutura de programas lógicos probabilísticos acíclicos.
APA, Harvard, Vancouver, ISO, and other styles
38

Carapelle, Claudia. "On the Satisfiability of Temporal Logics with Concrete Domains." Doctoral thesis, Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-190987.

Full text
Abstract:
Temporal logics are a very popular family of logical languages, used to specify properties of abstracted systems. In the last few years, many extensions of temporal logics have been proposed, in order to address the need to express more than just abstract properties. In our work we study temporal logics extended by local constraints, which allow us to express quantitative properties on data values from an arbitrary relational structure called the concrete domain. An example of a concrete domain is (Z, <, =), where the integers are considered as a relational structure over the binary order relation and the equality relation. Formulas of temporal logics with constraints are evaluated on data-words or data-trees, in which each node or position is labeled by a vector of data from the concrete domain. We call the constraints local because they can only compare values at a fixed distance inside such models. Several positive results regarding the satisfiability of LTL (linear temporal logic) with constraints over the integers have been established in the past years, while the corresponding results for branching time logics were only partial. In this work we prove that satisfiability of CTL* (computation tree logic) with constraints over the integers is decidable and also lift this result to ECTL*, a proper extension of CTL*. We also consider other classes of concrete domains, particularly ones that are "tree-like". We consider semi-linear orders, ordinal trees and trees of a fixed height, and prove decidability in this framework as well. At the same time we prove that our method cannot be applied in the case of the infinite binary tree or the infinitely branching infinite tree. We also look into extending the expressiveness of our logic by adding non-local constraints, and find that this leads to undecidability of the satisfiability problem, even on very simple domains like (Z, <, =). We then find a way to restrict the power of the non-local constraints to regain decidability.
APA, Harvard, Vancouver, ISO, and other styles
39

Lundqvist, Patrik, and Michael Enhörning. "Speltestning : Med Fuzzy Logic." Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-20260.

Full text
Abstract:
When designing a computer game, the game designer often tries to create levels and enemies that force the player to use different strategies in order to survive. Finding these strategies requires playtesting. Playtesting is time-consuming and therefore also expensive. The simplest way to save time is to use data hooks in the game and then let test subjects play it. Data is then collected during all play sessions and stored in log files. Data hooks were used to collect the data for this report. The game analysed was a top-down shooter. This game was chosen because it is an example from Microsoft whose design follows the XNA standard, and because the game concept is widely known. Is it then possible to find clear strategies in the collected data using data mining and fuzzy logic? It is definitely possible to find clear strategies. The data collected from the play sessions was analysed using data mining, fuzzy logic and the G-REX tool. It turned out that there were clear rules for distinguishing good players from bad players. This shows that it is possible to read out the player's strategy and to compare it with what the game designer had in mind when creating the game. The most interesting finding is that G-REX found rules describing how a good player should play, and that some of these rules did not match the designer's intentions. One such rule was that, in one of the levels, it was good to lose health, precisely because more enemies then appeared and the player had time to score more points during that level. A comparison of the different fuzzification approaches (the framework versus G-REX) showed that the test results were very similar. This means that all the work on the framework's fuzzification had not been necessary. It also means that game designers with little or no knowledge of fuzzy logic could use G-REX to evaluate their games and log files with the help of data mining and fuzzy logic.
APA, Harvard, Vancouver, ISO, and other styles
40

Ng, Kee Siong, and kee siong@rsise anu edu au. "Learning Comprehensible Theories from Structured Data." The Australian National University. Research School of Information Sciences and Engineering, 2005. http://thesis.anu.edu.au./public/adt-ANU20051031.105726.

Full text
Abstract:
This thesis is concerned with the problem of learning comprehensible theories from structured data and covers primarily classification and regression learning. The basic knowledge representation language is set around a polymorphically-typed, higher-order logic. The general setup is closely related to the learning from propositionalized knowledge and learning from interpretations settings in Inductive Logic Programming. Individuals (also called instances) are represented as terms in the logic. A grammar-like construct called a predicate rewrite system is used to define features in the form of predicates that individuals may or may not satisfy. For learning, decision-tree algorithms of various kinds are adopted.
The scope of the thesis spans both theory and practice. On the theoretical side, I study in this thesis 1. the representational power of different function classes and relationships between them; 2. the sample complexity of some commonly-used predicate classes, particularly those involving sets and multisets; 3. the computational complexity of various optimization problems associated with learning and algorithms for solving them; and 4. the (efficient) learnability of different function classes in the PAC and agnostic PAC models.
On the practical side, the usefulness of the learning system developed is demonstrated with applications in two important domains: bioinformatics and intelligent agents. Specifically, the following are covered in this thesis: 1. a solution to a benchmark multiple-instance learning problem and some useful lessons that can be drawn from it; 2. a successful attempt on a knowledge discovery problem in predictive toxicology, one that can serve as another proof-of-concept that real chemical knowledge can be obtained using symbolic learning; 3. a reworking of an exercise in relational reinforcement learning and some new insights and techniques we learned for this interesting problem; and 4. a general approach for personalizing user agents that takes full advantage of symbolic learning.
APA, Harvard, Vancouver, ISO, and other styles
41

Geske, Ulrich, and Armin Wolf. "Preface." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4140/.

Full text
Abstract:
The workshops on (constraint) logic programming (WLP) are the annual meeting of the Society of Logic Programming (GLP e.V.) and bring together researchers interested in logic programming, constraint programming, and related areas like databases, artificial intelligence and operations research. In this decade, previous workshops took place in Dresden (2008), Würzburg (2007), Vienna (2006), Ulm (2005), Potsdam (2004), Dresden (2002), Kiel (2001), and Würzburg (2000). Contributions to workshops deal with all theoretical, experimental, and application aspects of constraint programming (CP) and logic programming (LP), including foundations of constraint/logic programming. Some of the special topics are constraint solving and optimization, extensions of functional logic programming, deductive databases, data mining, nonmonotonic reasoning, interaction of CP/LP with other formalisms like agents, XML, JAVA, program analysis, program transformation, program verification, meta programming, parallelism and concurrency, answer set programming, implementation and software techniques (e.g., types, modularity, design patterns), applications (e.g., in production, environment, education, internet), constraint/logic programming for semantic web systems and applications, reasoning on the semantic web, data modelling for the web, semistructured data, and web query languages.
APA, Harvard, Vancouver, ISO, and other styles
42

Boskovitz, Agnes, and abvi@webone com au. "Data Editing and Logic: The covering set method from the perspective of logic." The Australian National University. Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20080314.163155.

Full text
Abstract:
Errors in collections of data can cause significant problems when those data are used. Therefore the owners of data find themselves spending much time on data cleaning. This thesis is a theoretical work about one part of the broad subject of data cleaning - to be called the covering set method. More specifically, the covering set method deals with data records that have been assessed by the use of edits, which are rules that the data records are supposed to obey. The problem solved by the covering set method is the error localisation problem, which is the problem of determining the erroneous fields within data records that fail the edits. In this thesis I analyse the covering set method from the perspective of propositional logic. I demonstrate that the covering set method has strong parallels with well-known parts of propositional logic. The first aspect of the covering set method that I analyse is the edit generation function, which is the main function used in the covering set method. I demonstrate that the edit generation function can be formalised as a logical deduction function in propositional logic. I also demonstrate that the best-known edit generation function, written here as FH (standing for Fellegi-Holt), is essentially the same as propositional resolution deduction. Since there are many automated implementations of propositional resolution, the equivalence of FH with propositional resolution gives some hope that the covering set method might be implementable with automated logic tools. However, before any implementation, the other main aspect of the covering set method must also be formalised in terms of logic. This other aspect, to be called covering set correctibility, is the property that must be obeyed by the edit generation function if the covering set method is to successfully solve the error localisation problem. In this thesis I demonstrate that covering set correctibility is a strengthening of the well-known logical properties of soundness and refutation completeness. What is more, the proofs of the covering set correctibility of FH and of the soundness / completeness of resolution deduction have strong parallels: while the proof of soundness / completeness depends on the reduction property for counter-examples, the proof of covering set correctibility depends on the related lifting property. In this thesis I also use the lifting property to prove the covering set correctibility of the function defined by the Field Code Forest Algorithm. In so doing, I prove that the Field Code Forest Algorithm, whose correctness has been questioned, is indeed correct. The results about edit generation functions and covering set correctibility apply to both categorical edits (edits about discrete data) and arithmetic edits (edits expressible as linear inequalities). Thus this thesis gives the beginnings of a theoretical logical framework for error localisation, which might give new insights to the problem. In addition, the new insights will help develop new tools using automated logic tools. What is more, the strong parallels between the covering set method and aspects of logic are of aesthetic appeal.
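For readers unfamiliar with the propositional resolution deduction that the thesis relates FH to, a tiny sketch follows, with clauses represented as frozensets of signed literals; the helper name and literal encoding are illustrative.

```python
def resolve(clause_a, clause_b):
    """Return all resolvents of two clauses given as frozensets of literals,
    where a literal is a (variable, polarity) pair."""
    resolvents = []
    for var, polarity in clause_a:
        if (var, not polarity) in clause_b:
            resolvent = (clause_a - {(var, polarity)}) | (clause_b - {(var, not polarity)})
            resolvents.append(frozenset(resolvent))
    return resolvents

# (x or not y) and (y or z) resolve on y to give (x or z).
c1 = frozenset({("x", True), ("y", False)})
c2 = frozenset({("y", True), ("z", True)})
print(resolve(c1, c2))   # one resolvent: {('x', True), ('z', True)}
```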
APA, Harvard, Vancouver, ISO, and other styles
43

Boskovitz, Agnes. "Data editing and logic : the covering set method from the perspective of logic /." View thesis entry in Australian Digital Theses, 2008. http://thesis.anu.edu.au/public/adt-ANU20080314.163155/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Thost, Veronika. "Using Ontology-Based Data Access to Enable Context Recognition in the Presence of Incomplete Information." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-227633.

Full text
Abstract:
Ontology-based data access (OBDA) augments classical query answering in databases by including domain knowledge provided by an ontology. An ontology captures the terminology of an application domain and describes domain knowledge in a machine-processable way. Formal ontology languages additionally provide semantics to these specifications. Systems for OBDA thus may apply logical reasoning to answer queries; they use the ontological knowledge to infer new information, which is only implicitly given in the data. Moreover, they usually employ the open-world assumption, which means that knowledge not stated explicitly in the data or inferred is neither assumed to be true nor false. Classical OBDA, however, regards the knowledge only with respect to a single moment, which means that information about time is not used for reasoning and is hence lost; in particular, the queries generally cannot express temporal aspects. We investigate temporal query languages that allow access to temporal data through classical ontologies. In particular, we study the computational complexity of temporal query answering regarding ontologies written in lightweight description logics, which are known to allow for efficient reasoning in the atemporal setting and are successfully applied in practice. Furthermore, we present a so-called rewritability result for ontology-based temporal query answering, which suggests ways for implementation. Our results may thus guide the choice of a query language for temporal OBDA in data-intensive applications that require fast processing, such as context recognition.
APA, Harvard, Vancouver, ISO, and other styles
45

Doskočil, Radek. "Metodika zjišťování bonity klienta v pojišťovnictví." Doctoral thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2009. http://www.nusl.cz/ntk/nusl-233724.

Full text
Abstract:
This dissertation thesis deals with the problem of assessing client solvency in the insurance business and is intended for the needs of insurance companies. The main goal of this work is the construction of a methodology that will provide managers with a tool to support their decision making in client solvency assessment. The basic theoretical background, an overview of the current state of the analyzed subject and a description of the methods used are presented in the introductory part of this work. The following parts of the work introduce a real database of an insurance company's clients, which serves as a basis for accomplishing the defined goal. The source data were subjected to the necessary analysis to determine the cross-correlations and the variables entering the decision-making model. A large variety of traditional statistical methods, together with relevant software, were used to analyze the data. The decision-making model was formed with the help of artificial intelligence methods, especially fuzzy logic. The technical realization of the model was carried out using MATLAB. The process of creating the client solvency assessment methodology is described in detail and broken down into phases. The fundamental part of the methodology, the decision-making model, can be easily modified and adapted to the end user's specific needs. The text also covers the verification and implementation of the model, an interpretation of the results, the comprehensive client solvency assessment process in the insurance business, and the contribution of this methodology to practice, theory and pedagogy.
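A toy sketch of the kind of fuzzy-rule evaluation such a decision-making model rests on: triangular memberships, min for rule firing, and a weighted average of crisp rule outputs (a zero-order Sugeno-style setup). The variables, thresholds and rules below are invented for illustration and are not the thesis's MATLAB model.

```python
def tri(x, a, b, c):
    """Triangular membership peaking at b on [a, c] (shoulders when a == b or b == c)."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def solvency_score(income, debt_ratio):
    """Each rule fires with min() of its antecedent memberships and proposes a
    crisp score in [0, 1]; the result is the firing-strength-weighted average."""
    income_low, income_high = tri(income, 0, 0, 3000), tri(income, 2000, 6000, 6000)
    debt_low, debt_high = tri(debt_ratio, 0, 0, 0.4), tri(debt_ratio, 0.3, 1.0, 1.0)
    rules = [
        (min(income_high, debt_low), 0.9),   # good client
        (min(income_low, debt_high), 0.1),   # risky client
        (min(income_low, debt_low), 0.5),    # middling cases
        (min(income_high, debt_high), 0.4),
    ]
    total = sum(w for w, _ in rules)
    return sum(w * s for w, s in rules) / total if total else 0.5

print(round(solvency_score(income=4500, debt_ratio=0.2), 2))
```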
APA, Harvard, Vancouver, ISO, and other styles
46

Campbell, Jonathan G. "Fuzzy logic and neural network techniques in data analysis." Thesis, University of Ulster, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342530.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Nderu, Lawrence. "Fuzzy logic pattern in text and image data analysis." Thesis, Paris 8, 2015. http://www.theses.fr/2015PA080084.

Full text
Abstract:
La logique floue est aujourd'hui universellement admise comme discipline ayant fait ses preuves à l'intersection des mathématiques, de l'informatique, des sciences cognitives et de l'Intelligence Artificielle. En termes formels, la logique floue est une extension de la logique classique ayant pour but de mesurer la flexibilité du raisonnement humain, et permettant la modélisation des imperfections des données, en particulier, les deux imperfections les plus fréquentes : l'imprécision et l'incertitude. En outre, la logique floue ignore le principe du tiers exclu et celui de non-contradiction.Nous n'allons pas, dans ce court résumé de la thèse, reprendre et définir tous les concepts de cet outil devenu désormais classique : fonction d'appartenance, degré d'appartenance, variable linguistique, opérateurs flous, fuzzyfication, défuzzication, raisonnement approximatif … L'un des concepts de base de cette logique est la notion de possibilité qui permet de modéliser la fonction d'appartenance d'un concept. La possibilité d'un événement diffère de sa probabilité dans la mesure où elle n'est pas intimement liée à celle de l'événement contraire. Ainsi, par exemple, si la probabilité qu'il pleuve demain est de 0,6, alors la probabilité qu'il ne pleuve pas doit être égale à 0,4 tandis que les possibilités qu'il pleuve demain ou non peuvent toutes les deux être égales à 1 (ou encore deux autres valeurs dont la somme peut dépasser 1).Dans le domaine de l'informatique, l'apprentissage non supervisé (ou « clustering ») est une méthode d'apprentissage automatique quasi-autonome. Il s'agit pour un algorithme de diviser un groupe de données, en sous-groupes de manière que les données considérées comme les plus similaires soient associées au sein d'un groupe homogène. Une fois que l'algorithme propose ces différents regroupements, le rôle de l'expert ou du groupe d'experts est alors de nommer chaque groupe, éventuellement diviser certains ou de regrouper certains, afin de créer des classes. Les classes deviennent réelles une fois que l'algorithme a fonctionné et que l'expert les a nommées.Encore une fois, notre travail, comme tous les travaux du domaine, vise à adapter les modèles traditionnelles d'apprentissage et/ou de raisonnement à l'imprécision du monde réel. L'analyse des sentiments à partir de ressources textuelles et les images dans le cadre de l'agriculture de précision nous ont permis d'illustrer nos hypothèses. L'introduction par le biais de notre travail du concept de motifs flous est sans aucun doute une contribution majeure.Ce travail a donné lieu à trois contributions majeures :
Standard (type-1) fuzzy sets were introduced to mimic human reasoning in its use of approximate information and uncertainty to generate decisions. Since knowledge can be expressed in a natural way by using fuzzy sets, many decision problems can be greatly simplified. However, standard type-1 fuzzy sets have limitations when it comes to modeling human decision making. When Zadeh introduced the idea of higher types of fuzzy sets, called type-n fuzzy sets and type-2 fuzzy sets, the objective was to solve problems associated with modeling uncertainty using the crisp membership functions of type-1 fuzzy sets. The extra dimension presented by type-2 fuzzy sets provides more design freedom and flexibility than type-1 fuzzy sets. The ability of FLS to be hybridized with other methods extended the usage of Fuzzy Logic Systems (FLS) in many application domains. In architecture and software engineering the concept of patterns was introduced as a way of creating general repeatable solutions to commonly occurring problems in the respective fields. In software engineering, for example, the design pattern is not a finished design that can be transformed directly into code. It is a description or template on how to solve a problem that can be used in many different situations. This thesis introduces the novel concept of fuzzy patterns in T2 FLS. Micro-blogs and social media platforms are now considered among the most popular forms of online communication. Through a platform like Twitter™ much information reflecting people's opinions and attitudes is published and shared among users on a daily basis. This has brought great opportunities to companies interested in tracking and monitoring the reputation of their brands and businesses, and to policy makers and politicians to support their assessment of public opinions about their policies or political issues. This research demonstrates the importance of the neutral category in sentiment polarity analysis; it then introduces the concept of fuzzy patterns in sentiment polarity analysis. Interval Type-2 Fuzzy Sets (IT2 FS) were proposed by reference [Men07c] to model words. This is because they are characterized by their Footprint Of Uncertainty (FOU). The FOU provides a potential to capture word uncertainties. The use of IT2 FS in sentiment polarity classification is demonstrated. The importance of the neutral category is demonstrated in both supervised and unsupervised learning methods. In the final section the concept of fuzzy patterns in contrast
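A tiny sketch of what the Footprint Of Uncertainty means computationally: an interval type-2 membership returns a [lower, upper] interval instead of a single degree. The word being modelled and its parameters below are made up for illustration only.

```python
def tri(x, a, b, c):
    """Type-1 triangular membership peaking at b on the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def it2_membership(x, lower_params, upper_params):
    """Interval type-2 membership of x for a word: a (lower, upper) pair.
    The gap between the two type-1 curves is the Footprint Of Uncertainty."""
    lo = tri(x, *lower_params)
    hi = tri(x, *upper_params)
    return min(lo, hi), max(lo, hi)

# The word "somewhat positive", modelled by two nested triangles (made-up numbers).
LOWER = (0.3, 0.6, 0.8)    # tighter triangle: what everyone agrees on
UPPER = (0.2, 0.6, 0.9)    # wider triangle: the most generous interpretation
for score in (0.35, 0.6, 0.85):
    print(score, it2_membership(score, LOWER, UPPER))
```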
APA, Harvard, Vancouver, ISO, and other styles
48

Babari, Parvaneh. "Quantitative Automata and Logic for Pictures and Data Words." Doctoral thesis, Universitätsbibliothek Leipzig, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-221165.

Full text
Abstract:
Mathematical logic and automata theory are two scientific disciplines with a close relationship that is not only fundamental for many theoretical results but also forms the basis of a coherent methodology for the verification and synthesis of computing systems. This connection goes back to the 1960s, through the fundamental work of Büchi-Elgot-Trakhtenbrot, which shows the expressive equivalence of automata and logical systems such as monadic second-order logic on finite and infinite words. This allowed the handling of specifications (where global system properties are stated), and implementations (which involve the definition of the local steps in order to satisfy the global goals laid out in the specifications) in a single framework. This connection has been extended to and well-investigated for many other structures such as trees, finite pictures, timed words and data words. For many computer science applications, however, quantitative phenomena need to be modelled, as well. Examples are vagueness and uncertainty of a statement, length of time periods, spatial information, and resource consumption. Weighted automata, introduced by Schützenberger, are prominent models for quantitative aspects of systems. The framework of weighted monadic second-order logic over words was first introduced by Droste and Gastin. They gave a characterization of quantitative behavior of weighted finite automata, as semantics of monadic second-order sentences within their logic. Meanwhile, the idea of weighted logics was also applied to devices recognizing more general structures such as weighted tree automata, weighted automata on infinite words or traces. The main goal of this thesis is to give logical characterizations for weighted automata models on pictures and data words as well as for Büchi-tiling systems in the spirit of the classical Büchi-Elgot theorem. As the second goal, we deal with the synchronizing problem for data words. Below, we briefly summarize the contents of this thesis. Informally, a two-dimensional string is called a picture and is defined as a rectangular array of symbols taken from a finite alphabet. A two-dimensional language (or picture language) is a set of pictures. Picture languages have been intensively investigated by several research groups. In Chapter 1, we define weighted two-dimensional on-line tessellation automata (W2OTA) taking weights from a new weight structure called picture valuation monoid. This new weighted picture automaton model can be used to model several applications, e.g. the average density of a picture. Such aspects could not be modelled by the semiring weighted picture automaton model. The behavior of this automaton model is a picture series mapping pictures over an alphabet to elements of a picture valuation monoid. As one of our main results, we prove a Nivat theorem for W2OTA. It shows that recognizable picture series can be obtained precisely as projections of particularly simple unambiguously recognizable series restricted to unambiguous recognizable picture languages. In addition, we introduce a weighted monadic second-order logic (WMSO) which can model the average density of pictures. As the other main result, we show that W2OTA and a suitable fragment of our weighted MSO logic are expressively equivalent. In Chapter 2, we generalize the notion of finite pictures to +ω-pictures, i.e., pictures which have a finite number of rows and an infinite number of columns.
We extend conventional tiling systems with a Büchi acceptance condition in order to define the class of Büchi-tiling recognizable +ω-picture languages. The class of recognizable +ω-picture languages is indeed a natural generalization of ω-regular languages. We show that the class of all Büchi-tiling recognizable +ω-picture languages has closure properties similar to those of the class of tiling recognizable languages of finite pictures: it is closed under projection, union, and intersection, but not under complementation. While for languages of finite pictures, tiling recognizability and EMSO-definability coincide, the situation is quite different for languages of +ω-pictures. In this setting, the notion of tiling recognizability does not even cover the language of all +ω-pictures over Σ = {a, b} in which the letter a occurs at least once – a picture language that can easily be defined in first-order logic. As a consequence, EMSO is too strong for being captured by the class of tiling recognizable +ω-picture languages. On the other hand, EMSO is too weak for being captured by the class of all Büchi-tiling recognizable +ω-picture languages. To obtain a logical characterization of this class, we introduce the logic EMSO∞, which extends EMSO with existential quantification of infinite sets. Additionally, using combinatorial arguments, we show that the Büchi characterization theorem for ω-regular languages does not carry over to the Büchi-tiling recognizable +ω-picture languages. In Chapter 3, we consider the connection between weighted register automata and weighted logic on data words. Data words are sequences of pairs where the first element is taken from a finite alphabet (as in classical words) and the second element is taken from an infinite data domain. Register automata, introduced by Francez and Kaminski, provide a widely studied model for reasoning on data words. These automata can be considered as classical nondeterministic finite automata equipped with a finite set of registers which are used to store data in order to compare them with some data in the future. In this chapter, for quantitative reasoning on data words, we introduce weighted register automata over commutative data semirings equipped with a collection of binary data functions in the spirit of the classical theory of weighted automata. Whereas in the models of register automata known from the literature data are usually compared with respect to equality or a linear order, here we allow data comparison by means of an arbitrary collection of binary data relations. This approach makes it easy to incorporate timed automata and weighted timed automata into our framework. Motivated by the seminal Büchi-Elgot-Trakhtenbrot theorem about the expressive equivalence of finite automata and monadic second-order (MSO) logic and by the weighted MSO logic of Droste and Gastin, we introduce weighted MSO logic on data words and give a logical characterization of weighted register automata. In Chapter 4, we study the concept of synchronizing data words in register automata. The synchronizing problem for data words asks whether there exists a data word that sends all states of the register automaton to a single state. The class of register automata that we consider here has a decidable non-emptiness problem, and the subclass of nondeterministic register automata with a single register has a decidable non-universality problem.
We provide complexity bounds for the synchronizing problem in the family of deterministic register automata with k registers (k-DRA) and in the family of nondeterministic register automata with a single register (1-NRA), and show the undecidability of the problem in general in the family of k-NRA. To this end, we prove that, for k-DRA, inputting data words with only 2k + 1 distinct data values, from the infinite data domain, is sufficient to synchronize. Then, we show that the synchronizing problem for k-DRA is in general PSPACE-complete, and it is in NLOGSPACE for 1-DRA. For nondeterministic register automata (NRA), we show that Ackermann(n) distinct data, where n is the number of states of the register automaton, might be necessary to synchronize. Then, by means of a construction proving that the synchronizing problem and the non-universality problem in 1-NRA are interreducible, we show the Ackermann-completeness of the problem for 1-NRA. However, for k-NRA, in general, we prove that this problem is undecidable due to the unbounded length of synchronizing data words.
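As a finite-alphabet analogue of the problem studied here: a greedy pair-merging sketch that builds a synchronizing word for an ordinary complete DFA. Register automata over infinite data are far harder, as the abstract explains; this code is only illustrative and does not reflect the thesis's constructions.

```python
from collections import deque
from itertools import combinations

def apply_word(delta, state, word):
    for a in word:
        state = delta[state][a]
    return state

def synchronizing_word(states, alphabet, delta):
    """Greedy heuristic: repeatedly find a word merging some pair of current
    states (BFS on pairs), until one state remains. Returns a synchronizing
    word, or None if some pair can never be merged."""
    def merge_word(p, q):
        start = frozenset({p, q})
        queue, parent = deque([start]), {start: None}
        while queue:
            pair = queue.popleft()
            if len(pair) == 1:                 # merged: walk back to rebuild the word
                word = []
                while parent[pair] is not None:
                    pair, letter = parent[pair]
                    word.append(letter)
                return "".join(reversed(word))
            for a in alphabet:
                nxt = frozenset(delta[s][a] for s in pair)
                if nxt not in parent:
                    parent[nxt] = (pair, a)
                    queue.append(nxt)
        return None

    current, word = set(states), ""
    while len(current) > 1:
        p, q = next(iter(combinations(current, 2)))
        w = merge_word(p, q)
        if w is None:
            return None
        word += w
        current = {apply_word(delta, s, w) for s in current}
    return word

# Černý's classic 4-state automaton; 'baaabaaab' is a shortest synchronizing word.
delta = {0: {"a": 1, "b": 0}, 1: {"a": 2, "b": 1}, 2: {"a": 3, "b": 2}, 3: {"a": 0, "b": 0}}
w = synchronizing_word([0, 1, 2, 3], "ab", delta)
print(w, {apply_word(delta, s, w) for s in [0, 1, 2, 3]})
```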
APA, Harvard, Vancouver, ISO, and other styles
49

Kusalik, Anthony Joseph. "Logic programming as a formalism for specification and implementation of computer systems." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28848.

Full text
Abstract:
The expressive power of logic-programming languages allows utilization of conventional constructs in the development of computer systems based on logic programming. However, logic-programming languages have many novel features and capabilities. This thesis investigates how advantage can be taken of these features in the development of a logic-based computer system. It demonstrates that innovative approaches to software, hardware, and computer system design and implementation are feasible in a logic-programming context and often preferable to adaptation of conventional ones. The investigation centers on three main ideas: executable specification, declarative I/O, and implementation through transformation and meta-interpretation. A particular class of languages supporting parallel computation, committed-choice logic-programming languages, is emphasized. One member of this class, Concurrent Prolog, serves as the machine, specification, and implementation language. The investigation has several facets. Hardware, software, and overall system models for a logic-based computer are determined and examined. The models are described by logic programs. The computer system is represented as a goal for resolution. The clauses involved in the subsequent reduction steps constitute its specification. The same clauses also describe the manner in which the computer system is initiated. Frameworks are given for developing models of peripheral devices whose actions and interactions can be declaratively expressed. Interactions do not rely on side-effects or destructive assignment, and are term-based. A methodology is presented for realizing (prototypic) implementations from device specifications. The methodology is based on source-to-source transformation and meta-interpretation. A magnetic disk memory is used as a representative example, resulting in an innovative approach to secondary storage in a logic-programming environment. Building on these accomplishments, a file system for a logic-based computer system is developed. The file system follows a simple model and supports term-based, declarative I/O. Throughout the thesis, features of the logic-programming paradigm are demonstrated and exploited. Interesting and innovative concepts established include: device processes and device processors; restartable and perpetual devices and systems; peripheral devices modelled as function computations or independent logical (inference) systems; unique, compact representations of terms; lazy term expansion; file systems as perpetual processes maintaining local states; and term- and unification-based file abstractions. Logic programs are the sole formalism for specifications and implementations.
APA, Harvard, Vancouver, ISO, and other styles
50

Ng, Kee Siong. "Learning comprehensible theories from structured data /." View thesis entry in Australian Digital Theses Program, 2005. http://thesis.anu.edu.au/public/adt-ANU20051031.105726/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
