
Dissertations / Theses on the topic 'Merkle Trees'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 25 dissertations / theses for your research on the topic 'Merkle Trees.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Östersjö, Rasmus. "Sparse Merkle Trees: Definitions and Space-Time Trade-Offs with Applications for Balloon." Thesis, Karlstads universitet, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-42913.

Full text
Abstract:
This dissertation proposes an efficient representation of a sparse Merkle tree (SMT): an authenticated data structure that supports logarithmic insertion, removal, and look-up in a verifiable manner. The proposal is general in the sense that it can be implemented using a variety of underlying non-authenticated data structures, and it allows trading time for space through an abstract model of caching strategies. Both theoretical evaluations and performance results from a proof-of-concept implementation are provided, and the proposed SMT is applied to another authenticated data structure referred to as Balloon. The resulting Balloon preserves its efficiency in the expected case and is improved in worst-case scenarios.
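The core idea can be illustrated with a toy sparse Merkle tree that precomputes the hashes of empty subtrees, so only the populated paths are ever materialised. This is a minimal sketch of the general technique, not the thesis's actual representation or caching model; the depth, hash choice, and API are illustrative assumptions.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

DEPTH = 8  # 256 in practice; kept small here for illustration

# Precompute the hash of an empty subtree at each level.
EMPTY = [h(b"")]
for _ in range(DEPTH):
    EMPTY.append(h(EMPTY[-1] + EMPTY[-1]))

def smt_root(leaves: dict[int, bytes]) -> bytes:
    """Root of a sparse Merkle tree with 2**DEPTH slots, where
    absent slots implicitly hold the default (empty) hash."""
    level = {k: h(v) for k, v in leaves.items()}
    for d in range(DEPTH):
        nxt = {}
        for idx in {k // 2 for k in level}:
            left = level.get(2 * idx, EMPTY[d])
            right = level.get(2 * idx + 1, EMPTY[d])
            nxt[idx] = h(left + right)
        level = nxt
    return level.get(0, EMPTY[DEPTH])
```

Only the populated paths are hashed, so the work per update is proportional to the number of occupied leaves times the depth rather than to the full 2**DEPTH tree.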
APA, Harvard, Vancouver, ISO, and other styles
2

Spik, Charlotta. "Using Hash Trees for Database Schema Inconsistency Detection." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254672.

Full text
Abstract:
For this work, two algorithms have been developed to improve the performance of inconsistency detection by using Merkle trees. The first builds a hash tree from a database schema version, and the second compares two hash trees to find where changes have occurred. Performance testing of the hash tree approach against the current approach used by Cisco, in which all data in the schema is traversed, shows that the hash tree algorithm for inconsistency detection performs significantly better than the complete traversal algorithm in all cases tested, except when every node in the tree has changed. The factor of improvement is directly related to the number of nodes that have to be traversed for the hash tree, which in turn depends on the number of changes made between versions and where in the schema the changed nodes are positioned. The real-life example scenarios used for performance testing show that, on average, the hash tree algorithm only needs to traverse 1.5% of the number of nodes that the complete traversal algorithm used by Cisco does, giving on average a 200-fold improvement in performance. Even in the worst real-life case used for testing, the hash tree algorithm performed five times better than the complete traversal algorithm.
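The two algorithms can be sketched roughly as follows: one builds a hash tree over a nested schema, and one descends two such trees, pruning every subtree whose root hashes already match. This is a generic illustration of the technique, not Cisco's or the thesis's actual code; the schema representation and function names are assumptions.

```python
import hashlib

def build(node):
    """Build a hash tree from a nested dict schema: each node gets a
    hash covering its children, so equal hashes mean equal subtrees."""
    if not isinstance(node, dict):
        blob = repr(node).encode()
        return {"hash": hashlib.sha256(blob).hexdigest(), "children": {}}
    children = {k: build(v) for k, v in sorted(node.items())}
    acc = hashlib.sha256()
    for name, child in children.items():
        acc.update(name.encode() + child["hash"].encode())
    return {"hash": acc.hexdigest(), "children": children}

def diff(a, b, path=""):
    """Return paths where two hash trees differ, skipping any
    subtree whose root hashes already match."""
    if a["hash"] == b["hash"]:
        return []                          # identical subtree: prune
    if not a["children"] or not b["children"]:
        return [path or "/"]
    out = []
    for name in set(a["children"]) | set(b["children"]):
        ca, cb = a["children"].get(name), b["children"].get(name)
        if ca is None or cb is None:
            out.append(path + "/" + name)  # node added or removed
        else:
            out.extend(diff(ca, cb, path + "/" + name))
    return out
```

Because matching subtrees are pruned immediately, the number of visited nodes scales with the number and position of the changes, which mirrors the improvement factor described in the abstract.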
APA, Harvard, Vancouver, ISO, and other styles
3

Ouaarab, Salaheddine. "Protection du contenu des mémoires externes dans les systèmes embarqués, aspect matériel." Thesis, Paris, ENST, 2016. http://www.theses.fr/2016ENST0046/document.

Full text
Abstract:
During the past few years, computer systems (Cloud Computing, embedded systems, etc.) have become ubiquitous. Most of these systems use unreliable or untrusted storage (flash, RAM, etc.) to store code or data. The confidentiality and integrity of these data can be threatened by hardware attacks (spying on the communication bus between the processing component and the storage component) or by software attacks. These attacks can disclose sensitive information to the adversary or disturb the behaviour of the system. In this thesis, in the context of embedded systems, we focus on the attacks that threaten the confidentiality and integrity of data transmitted over the memory bus or stored inside the memory. Several primitives for protecting the confidentiality and integrity of data have been proposed in the literature, including Merkle trees, a data structure that can protect the integrity of data, notably against replay attacks. However, these trees have a large impact on the performance and memory footprint of the system. In this thesis, we propose a solution based on variants of Merkle trees (hollow trees) and a modified cache management mechanism to greatly reduce the cost of integrity verification. The performance of this solution has been evaluated both theoretically and in practice using simulations. In addition, a proof of security equivalence with regular Merkle trees is given. Finally, this solution has been implemented in the SecBus architecture, which aims at protecting the integrity and confidentiality of the content of external memories in an embedded system. A prototype of this architecture has been developed and the results of its evaluation are given.
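The basic mechanism behind such integrity trees, recomputing an on-chip root from a fetched block and its sibling hashes, can be sketched as follows. This is a generic Merkle-path check, not the SecBus or hollow-tree design; the block layout and hash choice are illustrative assumptions.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(blocks):
    """Root over a power-of-two list of memory blocks."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(blocks, idx):
    """Sibling hashes needed to check block idx against the root."""
    level = [h(b) for b in blocks]
    path = []
    while len(level) > 1:
        path.append(level[idx ^ 1])        # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(block, idx, path, root):
    """Recompute the root from one fetched block and its path; a
    mismatch reveals tampering, including replayed stale data."""
    node = h(block)
    for sib in path:
        node = h(node + sib) if idx % 2 == 0 else h(sib + node)
        idx //= 2
    return node == root
```

Only the root has to live in trusted on-chip storage; every fetched block is checked with log-many extra hashes, which is exactly the overhead the thesis's hollow trees and cache management aim to reduce.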
APA, Harvard, Vancouver, ISO, and other styles
5

Lindqvist, Anton. "Privacy Preserving Audit Proofs." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210694.

Full text
Abstract:
The increased dependence on computers for critical tasks demands sufficient and transparent methods to audit their execution. This is commonly solved using logging, where the log must not only be resilient against tampering and rewrites in hindsight but also be able to answer queries concerning (non-)membership of events in the log while preserving privacy. Since the log cannot be assumed to be trusted, the answers must be verifiable using a proof of correctness. This thesis describes a protocol capable of producing verifiable privacy-preserving membership proofs using Merkle trees. For non-membership, a method for authenticating Bloom filters using Merkle trees is proposed and analyzed. Since Bloom filters are probabilistic data structures, a method of handling false positives is also proposed.
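A Bloom filter answering (non-)membership queries can be sketched as follows; a negative answer is definitive, while a positive one may be a false positive and must be settled by a real membership proof. This is a toy illustration, not the thesis's authenticated construction; the parameters and hash derivation are assumptions.

```python
import hashlib

M, K = 256, 3   # bit-array size and hash count (toy parameters)

def bits(item: bytes):
    """K bit positions for an item, derived from one SHA-256 digest."""
    d = hashlib.sha256(item).digest()
    return [int.from_bytes(d[4 * i:4 * i + 4], "big") % M for i in range(K)]

def add(bf: bytearray, item: bytes) -> None:
    for p in bits(item):
        bf[p] = 1

def maybe_contains(bf: bytearray, item: bytes) -> bool:
    """False means definitely absent; True only means 'possibly
    present' and must be confirmed by an actual membership proof."""
    return all(bf[p] for p in bits(item))
```

In the thesis's setting the filter's contents would additionally be authenticated (e.g. by hashing them into a Merkle tree), so that the verifier can trust the bit array it is querying.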
APA, Harvard, Vancouver, ISO, and other styles
6

Kruber, Nico. "Approximate Distributed Set Reconciliation with Defined Accuracy." Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/21294.

Full text
Abstract:
The objective comparison of approximate versioned set reconciliation algorithms is challenging. Each algorithm's behaviour can be tuned for a given use case, e.g. low bandwidth or low computational overhead, using different sets of parameters. Changes to these parameters, however, often also influence the algorithm's accuracy in recognising differences between the participating sets, and thus hinder objective comparisons based on the same level of accuracy. We develop a method to fairly compare approximate set reconciliation algorithms by enforcing a fixed accuracy and deriving the accuracy-influencing parameters accordingly. We show this method's universal applicability by adopting two trivial hash-based algorithms as well as set reconciliation with Bloom filters and with Merkle trees. Compared to previous research on Merkle trees, we propose dynamic hash sizes to align the transfer overhead with the desired accuracy, creating a new Merkle tree reconciliation algorithm with an adjustable accuracy target. An extensive evaluation of each algorithm under this accuracy model verifies the method's feasibility and ranks the four algorithms. Our results make it easy to choose an efficient algorithm for practical set reconciliation tasks based on the required level of accuracy. Our way of finding configuration parameters that make different algorithms equally accurate can also be adopted for other set reconciliation algorithms and allows their respective performance to be rated under various metrics in an objective manner. The resultant new approximate Merkle tree reconciliation broadens the applicability of Merkle trees and sheds new light on their effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
7

Brown, Jordan Lee. "Verifiable and redactable medical documents." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44890.

Full text
Abstract:
The objective of the proposed research is to answer the question of how to provide verification and redactability for medical documents at a manageable computational cost to all parties involved. The approach examines the use of Merkle hash trees to provide the required redaction and verification characteristics: various Continuity of Care Documents have their elements extracted for storage in the signature scheme. An analysis of the approach and of the characteristics that make it a likely candidate for success is provided within, together with a description of a framework implementation and a sample application that demonstrate potential uses of the system. Finally, results from various experiments with the framework are included as concrete evidence of a solution to the research question.
APA, Harvard, Vancouver, ISO, and other styles
8

Fredriksson, Bastian. "A Distributed Public Key Infrastructure for the Web Backed by a Blockchain." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210912.

Full text
Abstract:
The thesis investigates how a blockchain can be used to build a decentralised public key infrastructure for the web, by proposing a custom federation blockchain relying on an honest majority. Our main contribution is the design of a Proof of Stake protocol based on a stake tree, which builds upon an idea called follow-the-satoshi used in previous papers. Digital identities are stored in an authenticated self-balancing tree maintained by blockchain nodes. Our back-of-the-envelope calculations, based on the size of the domain name system, show that the block size must be set to at least 5.2 MB, while each blockchain node with a one-month transaction history would need to store about 243 GB. Thin clients would have to synchronise about 13.6 MB of block headers per year, and download an additional 3.7 KB of proof data for every leaf certificate to be checked.
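The follow-the-satoshi idea referenced above can be sketched as a walk down a tree whose internal nodes store the total stake of their subtrees, so each stakeholder is selected with probability proportional to stake. This is a generic illustration, not the thesis's protocol; the node layout is an assumption.

```python
import random

class Node:
    """Stake tree node: leaves carry an owner and a stake; internal
    nodes carry the total stake of their subtree."""
    def __init__(self, left=None, right=None, owner=None, stake=0):
        self.left, self.right, self.owner = left, right, owner
        self.stake = stake if owner else left.stake + right.stake

def follow_the_satoshi(root: Node, rng: random.Random) -> str:
    """Pick a stakeholder with probability proportional to stake by
    descending the tree toward a uniformly random 'coin'."""
    coin = rng.randrange(root.stake)
    node = root
    while node.owner is None:
        if coin < node.left.stake:
            node = node.left
        else:
            coin -= node.left.stake
            node = node.right
    return node.owner
```

The selection costs one tree descent, which is why storing subtree stake totals in an authenticated tree keeps leader election cheap and verifiable.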
APA, Harvard, Vancouver, ISO, and other styles
9

Saikia, Himangshu. "Comparison and Tracking Methods for Interactive Visualization of Topological Structures in Scalar Fields." Doctoral thesis, KTH, Beräkningsvetenskap och beräkningsteknik (CST), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-216375.

Full text
Abstract:
Scalar fields occur quite commonly in several application areas, in both static and time-dependent forms. Hence a proper visualization of scalar fields needs to be equipped with tools to extract and focus on important features of the data. Similarity detection and pattern search techniques in scalar fields present a useful way of visualizing important features in the data, by isolating these features and visualizing them independently, or by showing all similar patterns that arise from a given search pattern. Topological features are ideal for isolating meaningful patterns in the data set and creating intuitive feature descriptors. The merge tree is one such topological feature whose characteristics are ideally suited for this purpose. Subtrees of merge trees segment the data into hierarchical, topologically defined regions. This kind of feature-based segmentation is more intelligent than purely data-based segmentations involving windows or bounding volumes. In this thesis, we explore several different techniques using subtrees of merge trees as features in scalar field data. First, we discuss static scalar fields and devise techniques to compare features (topologically segmented regions given by the subtrees of the merge tree) against each other. Second, we delve into time-dependent scalar fields and extend the idea of feature comparison to spatio-temporal features. In the process, we also develop a novel approach to tracking features in time-dependent data that considers the entire global network of likely feature associations between consecutive time steps. The highlight of this thesis is the interactivity enabled by these feature-based techniques thanks to the real-time computation speed of our algorithms. Our techniques are implemented in the open-source visualization framework Inviwo and have been published in several peer-reviewed conferences and journals.
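The way subtrees of a merge tree arise can be illustrated on a 1-D scalar field: sweeping values from high to low, each local maximum starts a component, and components join at saddles, which become the interior nodes of the merge (join) tree. This is a toy sketch of the standard union-find construction, not the thesis's algorithms; the event format is an assumption.

```python
def merge_events(values):
    """Join-tree sketch for a 1-D scalar field: sweep samples from
    high to low; maxima create components, and a merge event is
    recorded at each value where two components join (a saddle)."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    events = []
    for i in order:
        parent[i] = i                      # new component starts here
        comps = {find(j) for j in (i - 1, i + 1) if j in parent}
        if len(comps) == 2:                # two components meet: saddle
            events.append(values[i])
        for r in comps:
            parent[r] = i                  # union into current sweep point
    return events
```

Each recorded value is an interior node of the merge tree; the number of maxima is the number of events plus one, and the regions swallowed at each event are exactly the subtree segments used as features above.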


APA, Harvard, Vancouver, ISO, and other styles
10

Kovář, Adam. "Bezpečná implementace technologie blockchain." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-413104.

Full text
Abstract:
This thesis describes the basics of implementing blockchain technology on the SAP Cloud Platform, with an emphasis on the security and safety of the critical data stored in the blockchain. It implements a letter of credit in order to view and control the administration of a business process, and compares the possible technology modifications. The thesis describes all the elementary parts of software that must be implemented to store data and secure its integrity, discusses the ideal configuration of each programmable block in the implementation, and presents alternative configurations with their pros and cons. Another part of the thesis is a working implementation, a proof of concept covering the letter of credit. All parts of the code are designed to be stand-alone, providing a working concept for a possible implementation, and can serve as a basis for production code. A user of this concept can see the whole process and create new statuses for the entire letter-of-credit business process.
APA, Harvard, Vancouver, ISO, and other styles
11

Silvaroli, Antonio. "Design and Analysis of Erasure Correcting Codes in Blockchain Data Availability Problems." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
This work addresses Blockchain and Bitcoin, with an emphasis on availability attacks against transactions when the network includes so-called light nodes, which improve the scalability of the system. It analyses how the Bitcoin blockchain behaves when the Merkle tree data structure is encoded so as to increase the probability that light nodes detect transaction erasures carried out by attacking nodes. Encoding with erasure codes, in particular low-density parity-check (LDPC) codes, increases the probability of detecting an erasure, and iterative decoding makes it possible to recover the erased data. The work addresses the problem of stopping sets, the structures that prevent data recovery through iterative decoding, and designs an algorithm for enumerating such structures. Some theoretical solutions from the literature are then tested empirically. New codes are subsequently designed, following a design method different from those in the literature. These codes improve performance, since their minimum stopping set is larger than that of previously analysed codes, making availability attacks less likely to succeed. As a consequence, the network throughput becomes more stable: with fewer successful attacks, new codes for re-encoding the transactions need to be generated less often. Finally, possible improvements are proposed.
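Iterative (peeling) erasure decoding, and the way a stopping set stalls it, can be sketched with plain XOR parity checks. This is a generic illustration, not the LDPC codes designed in the thesis; the check structure is an assumption.

```python
def peel(symbols, checks):
    """Iterative erasure decoding: a parity check (a set of positions
    whose XOR is 0) recovers a symbol once it is the check's only
    erasure. Decoding stalls when every check touching the remaining
    erasures contains at least two of them, i.e. on a stopping set."""
    symbols = list(symbols)              # None marks an erased symbol
    progress = True
    while progress:
        progress = False
        for chk in checks:
            erased = [i for i in chk if symbols[i] is None]
            if len(erased) == 1:         # exactly one unknown: solvable
                x = 0
                for i in chk:
                    if i != erased[0]:
                        x ^= symbols[i]
                symbols[erased[0]] = x
                progress = True
    return symbols
```

Designing codes whose minimum stopping set is large, as the thesis does, means an attacker must erase more symbols before this decoder can no longer make progress.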
APA, Harvard, Vancouver, ISO, and other styles
12

Bauer, David Allen. "Preserving privacy with user-controlled sharing of verified information." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31676.

Full text
Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Blough, Douglas; Committee Member: Ahamad, Mustaque; Committee Member: Liu, Ling; Committee Member: Riley, George; Committee Member: Yalamanchili, Sudha. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
13

Apelqvist, Joakim. "Sorteringsalgoritmer för strömmad data : Algoritmer för sortering av spatio-temporal data i JSON-objekt." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-18698.

Full text
Abstract:
Data from positioning systems such as GPS is increasingly common, but is difficult to handle in traditional data storage systems. Such data consists of spatial and temporal attributes and is in some cases represented in JSON format. Sorting JSON objects is done via built-in sorting functions, which require the entire JSON object to be deserialised in memory. If the data is streamed, the entire data set must be received before sorting can take place. To avoid this, a developer must implement methods for sorting streamed data while the stream is in progress. This study identifies three suitable sorting algorithms and compares how quickly they sort the streamed data as well as their memory usage. A client application and a server application were also compared to see whether sorting on the server produced better results. The conclusions drawn from the experimental results were that merge sort was the fastest but used the most memory, while heap sort was the slowest but had the lowest memory usage. The client application's sorting times were somewhat faster than the server application's.
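A heap-based way to sort a stream while it is still arriving can be sketched as follows: each record is pushed onto a heap as it is parsed, so no fully deserialised array is ever needed. This is a generic illustration, not the study's implementation; the `timestamp` key and record layout are assumptions.

```python
import heapq
import json

def sorted_stream(lines, key="timestamp"):
    """Consume a stream of JSON objects one line at a time, pushing
    each onto a heap as it arrives, then drain the heap in key order.
    Only the heap, not a fully parsed array, is held in memory."""
    heap = []
    for n, line in enumerate(lines):
        obj = json.loads(line)
        heapq.heappush(heap, (obj[key], n, obj))  # n breaks ties safely
    while heap:
        yield heapq.heappop(heap)[2]
```

The arrival-order counter `n` guarantees the heap never compares two dicts directly, and pushing during the stream spreads the sorting work over the transfer instead of doing it all at the end.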
APA, Harvard, Vancouver, ISO, and other styles
14

Tan, Heng Chuan. "Vers des communications de confiance et sécurisées dans un environnement véhiculaire." Thesis, Paris, ENST, 2017. http://www.theses.fr/2017ENST0063/document.

Full text
Abstract:
Routing and key management are the biggest challenges in vehicular networks. Inappropriate routing behaviour may reduce the effectiveness of communications and affect the delivery of safety-related applications. On the other hand, key management, especially due to the use of PKI certificate management, can lead to high latency, which may not be suitable for many time-critical applications. For this reason, we propose two trust models to assist the routing protocol in selecting a secure end-to-end path for forwarding. The first model focuses on detecting selfish nodes, including reputation-based attacks designed to compromise the "true" reputation of a node. The second model is intended to detect forwarders that modify the contents of a packet before retransmission. In key management, we have developed a Secure and Authentication Key Management Protocol (SA-KMP) scheme that uses symmetric cryptography to protect communication, including eliminating certificates during communication to reduce PKI-related delays.
APA, Harvard, Vancouver, ISO, and other styles
15

Oesterling, Patrick. "Visual Analysis of High-Dimensional Point Clouds using Topological Abstraction." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-203056.

Full text
Abstract:
This thesis is about visualizing a kind of data that is trivial to process by computers but difficult to imagine by humans because nature does not allow for intuition with this type of information: high-dimensional data. Such data often result from representing observations of objects under various aspects or with different properties. In many applications, a typical, laborious task is to find related objects or to group those that are similar to each other. One classic solution for this task is to imagine the data as vectors in a Euclidean space with object variables as dimensions. Utilizing Euclidean distance as a measure of similarity, objects with similar properties and values accumulate to groups, so-called clusters, that are exposed by cluster analysis on the high-dimensional point cloud. Because similar vectors can be thought of as objects that are alike in terms of their attributes, the point cloud's structure and individual cluster properties, like their size or compactness, summarize data categories and their relative importance. The contribution of this thesis is a novel analysis approach for visual exploration of high-dimensional point clouds without suffering from structural occlusion. The work is based on implementing two key concepts: The first idea is to discard those geometric properties that cannot be preserved and, thus, lead to the typical artifacts. Topological concepts are used instead to shift away the focus from a point-centered view on the data to a more structure-centered perspective. The advantage is that topology-driven clustering information can be extracted in the data's original domain and be preserved without loss in low dimensions. The second idea is to split the analysis into a topology-based global overview and a subsequent geometric local refinement.
The occlusion-free overview enables the analyst to identify features and to link them to other visualizations that permit analysis of those properties not captured by the topological abstraction, e.g. cluster shape or value distributions in particular dimensions or subspaces. The advantage of separating structure from data point analysis is that restricting local analysis only to data subsets significantly reduces artifacts and the visual complexity of standard techniques. That is, the additional topological layer enables the analyst to identify structure that was hidden before and to focus on particular features by suppressing irrelevant points during local feature analysis. This thesis addresses the topology-based visual analysis of high-dimensional point clouds for both the time-invariant and the time-varying case. Time-invariant means that the points do not change in their number or positions. That is, the analyst explores the clustering of a fixed and constant set of points. The extension to the time-varying case implies the analysis of a varying clustering, where clusters appear as new, merge or split, or vanish. Especially for high-dimensional data, both tracking---which means to relate features over time---and visualizing changing structure are difficult problems to solve.
APA, Harvard, Vancouver, ISO, and other styles
16

Tan, Heng Chuan. "Vers des communications de confiance et sécurisées dans un environnement véhiculaire." Electronic Thesis or Diss., Paris, ENST, 2017. http://www.theses.fr/2017ENST0063.

Full text
Abstract:
Le routage et la gestion des clés sont les plus grands défis dans les réseaux de véhicules. Un comportement de routage inapproprié peut affecter l’efficacité des communications et affecter la livraison des applications liées à la sécurité. D’autre part, la gestion des clés, en particulier en raison de l’utilisation de la gestion des certificats PKI, peut entraîner une latence élevée, ce qui peut ne pas convenir à de nombreuses applications critiques. Pour cette raison, nous proposons deux modèles de confiance pour aider le protocole de routage à sélectionner un chemin de bout en bout sécurisé pour le transfert. Le premier modèle se concentre sur la détection de noeuds égoïstes, y compris les attaques basées sur la réputation, conçues pour compromettre la «vraie» réputation d’un noeud. Le second modèle est destiné à détecter les redirecteurs qui modifient le contenu d’un paquet avant la retransmission. Dans la gestion des clés, nous avons développé un système de gestion des clés d’authentification et de sécurité (SA-KMP) qui utilise une cryptographie symétrique pour protéger la communication, y compris l’élimination des certificats pendant la communication pour réduire les retards liés à l’infrastructure PKI
Routing and key management are the biggest challenges in vehicular networks. Inappropriate routing behaviour may affect the effectiveness of communications and affect the delivery of safety-related applications. On the other hand, key management, especially due to the use of PKI certificate management, can lead to high latency, which may not be suitable for many time-critical applications. For this reason, we propose two trust models to assist the routing protocol in selecting a secure end-to-end path for forwarding. The first model focusses on detecting selfish nodes, including reputation-based attacks, designed to compromise the “true” reputation of a node. The second model is intended to detect forwarders that modify the contents of a packet before retransmission. In key management, we have developed a Secure and Authentication Key Management Protocol (SA-KMP) scheme that uses symmetric cryptography to protect communication, including eliminating certificates during communication to reduce PKI-related delays
APA, Harvard, Vancouver, ISO, and other styles
17

Koessler, Denise Renee. "A Predictive Model for Secondary RNA Structure Using Graph Theory and a Neural Network." Digital Commons @ East Tennessee State University, 2010. https://dc.etsu.edu/etd/1684.

Full text
Abstract:
In this work we use a graph-theoretic representation of secondary RNA structure found in the database RAG: RNA-As-Graphs. We model the bonding of two RNA secondary structures to form a larger structure with a graph operation called merge. The resulting data from each tree merge operation is summarized and represented by a vector. We use these vectors as input values for a neural network and train the network to recognize a tree as RNA-like or not based on the merge data vector. The network correctly assigned a high probability of RNA-likeness to trees identified as RNA-like in the RAG database, and a low probability of RNA-likeness to those classified as not RNA-like in the RAG database. We then used the neural network to predict the RNA-likeness of all the trees of order 9. The use of a graph operation to theoretically describe the bonding of secondary RNA is novel.
APA, Harvard, Vancouver, ISO, and other styles
18

Mahmoud, Mahmoud Yehia Ahmed. "Secure and efficient post-quantum cryptographic digital signature algorithms." Thesis, 2021. http://hdl.handle.net/1828/13307.

Full text
Abstract:
Cryptographic digital signatures provide authentication to communicating parties over communication networks. They are integral asymmetric primitives in cryptography. The current digital signature infrastructure adopts schemes that rely on the hardness of finding discrete logarithms and factoring in finite groups. Given the recent advances in physics which point towards the eventual construction of large scale quantum computers, these hard problems will be solved in polynomial time using Shor’s algorithm. Hence, there is a clear need to migrate the cryptographic infrastructure to post-quantum secure alternatives. Such an initiative is demonstrated by the PQCRYPTO project and the current Post-Quantum Cryptography (PQC) standardization competition run by the National Institute of Standards and Technology (NIST). This dissertation considers hash-based digital signature schemes. Such algorithms rely on simple security notions such as preimage, and weak and strong collision resistances of hash functions. These notions are well-understood and their security against quantum computers has been well-analyzed. However, existing hash-based signature schemes have large signature sizes and high computational costs. Moreover, the signature size increases with the number of messages to be signed by a key pair. The goal of this work is to develop hash-based digital signature schemes to overcome the aforementioned limitations. First, FORS, the underlying few-time signature scheme of the NIST PQC alternate candidate SPHINCS+ is analyzed against adaptive chosen message attacks, and DFORS, a few-time signature scheme with adaptive chosen message security, is proposed. Second, a new variant of SPHINCS+ is introduced that improves the computational cost and security level. Security analysis for the new variant is presented. In addition, the hash-based group digital signature schemes, Group Merkle (GM) and Dynamic Group Merkle (DGM), are studied and their security is analyzed. 
Group Merkle Multi-Tree (GMMT) is proposed to solve some of the limitations of the GM and DGM hash-based group signature schemes.
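The preimage- and collision-resistance notions this abstract relies on are easiest to see in Lamport's classic one-time signature, the conceptual ancestor of few-time schemes like FORS. The sketch below is a generic illustration of that classic scheme, not the DFORS or GMMT constructions proposed in the thesis: signing reveals exactly one hash preimage per bit of the message digest.

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen(bits: int = 256):
    # Secret key: two random 32-byte preimages per digest bit.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
    # Public key: the hashes of those preimages.
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def digest_bits(msg: bytes, n: int):
    d = H(msg)
    return [(d[i // 8] >> (i % 8)) & 1 for i in range(n)]

def sign(msg: bytes, sk):
    # Reveal exactly one preimage per bit of the message digest.
    return [sk[i][bit] for i, bit in enumerate(digest_bits(msg, len(sk)))]

def verify(msg: bytes, sig, pk) -> bool:
    # Each revealed value must hash to the matching public-key entry.
    return all(H(s) == pk[i][bit]
               for i, (s, bit) in enumerate(zip(sig, digest_bits(msg, len(pk)))))
```

Because signing leaks half of the secret key, a key pair must sign only once; Merkle trees (and the group-signature variants studied here) are precisely what turns such one- or few-time schemes into many-time schemes.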
Graduate
APA, Harvard, Vancouver, ISO, and other styles
19

Wu, Hsin-Ming, and 吳欣明. "Authenticated Data Structure via Arithmetic Merkle Tree." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/63784926759173633571.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Computer Science and Engineering
104
Due to the rapid development of network speeds, various cloud storage systems, such as Google Drive, Dropbox, and SkyDrive, have risen quickly and are widely used. Nonetheless, how to assure that the cloud returns exactly the correct information to the querying user and never fills the user's storage space with dummy data remains a challenging task. In this sense, it is necessary to construct protocols that protect users from being cheated by an untrusted cloud. The Verifiable Data Streaming (VDS) protocol was proposed in 2012. In a VDS scheme, the client outsources a growing array to the cloud server's database. The stored data is verifiable, so that neither the content nor the order of elements can be modified by the server. The client can also retrieve the stored data and update it on request. We present the Arithmetic Merkle Tree (AMT), which has the ability to authenticate elements. Instead of using chameleon hash functions as in previous work, our proposed Arithmetic Merkle Tree scheme has no need to compute collisions through trapdoors; instead, we adopt a sum operation in place of the hash function in the Merkle tree. We instantiate our AMT using Paillier's homomorphic encryption to encrypt data before sending it to the server. Consequently, after outsourcing, we retrieve encrypted data from the server and decrypt it to verify the correctness of the data.
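The sum-based combination described in the abstract can be illustrated with a minimal toy model. This is my own sketch, not the thesis's AMT: it omits the Paillier encryption layer and simply adds leaf hashes as plain integers (assuming a power-of-two number of leaves). It shows how replacing hash(left || right) with left + right still lets a verifier recompute the root from one leaf plus the sibling subtree sums along its path.

```python
import hashlib

def leaf_value(data: bytes) -> int:
    # Interpret each leaf's hash digest as a large integer.
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def amt_root(leaves) -> int:
    # Parent = sum of children, instead of hash(left || right);
    # the root is therefore the sum over all leaf values.
    level = [leaf_value(d) for d in leaves]
    while len(level) > 1:
        level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
    return level[0]

def membership_proof(leaves, index):
    # Collect the sibling value at each level; adding them to the
    # leaf's own value reconstructs the root.
    level = [leaf_value(d) for d in leaves]
    proof = []
    while len(level) > 1:
        proof.append(level[index ^ 1])
        level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_member(data: bytes, proof, root: int) -> bool:
    acc = leaf_value(data)
    for sibling in proof:  # addition is commutative, so order is irrelevant
        acc += sibling
    return acc == root
```

Note that plain addition alone does not bind leaf positions the way hash concatenation does, which is presumably one reason the actual construction pairs this idea with homomorphic encryption.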
APA, Harvard, Vancouver, ISO, and other styles
20

CHEN, YI-CHENG, and 陳奕呈. "An Image Authentication Scheme Using Merkle Tree Mechanism." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/pfkckr.

Full text
Abstract:
Master's thesis
Asia University
Department of Computer Science and Information Engineering
107
Research in the digital image processing field has received significant attention and advanced rapidly in recent years. Image tampering and misattribution have become real concerns in the open environment of the worldwide web, and scholars have proposed various image verification mechanisms to detect and mitigate image tampering. Likewise, blockchain technology has become very popular in recent years. This study proposes a novel image verification mechanism based on the Merkle tree, a fundamental component that underpins the functionality of blockchains. The Merkle tree root in the blockchain mechanism provides a reliable environment for storing image features. Verification of images to detect tampering is performed through the Merkle tree mechanism by obtaining the hash values of the Merkle tree nodes. In addition, the proposed method is combined with the Inter-Planetary File System (IPFS) to improve the availability of images. The primary purpose of this study is to achieve the goal of image integrity verification. The proposed method can not only verify the integrity of an image but can also repair the tampered area if the image has been altered. Because the proposed method employs the blockchain mechanism, no third party is needed for image verification; verification is performed by each node in the blockchain network. The experimental results demonstrate that the proposed method successfully achieves the goals of image authentication and tampered-area restoration. Since the verification mechanism uses hash values for change detection, it can recognize the slightest alterations in an image; tampering by rotation and translation, however, is harder to detect.
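The detect-and-localize idea can be sketched generically: hash fixed-size image blocks into a Merkle tree, compare against the trusted node hashes, and narrow a root mismatch down to the individual altered blocks, which could then be restored from a stored copy (e.g. on IPFS). This is an illustrative sketch of the general technique, not the paper's exact scheme; block size and the blockchain/IPFS storage are omitted.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Merkle tree over image blocks; returns all levels, leaves first.

    Assumes a power-of-two number of blocks for brevity.
    """
    levels = [[h(b) for b in blocks]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels  # levels[-1][0] is the Merkle root

def tampered_blocks(blocks, trusted_levels):
    """Return indices of altered blocks ([] means the image is authentic)."""
    current = build_tree(blocks)
    if current[-1][0] == trusted_levels[-1][0]:
        return []  # roots match: no tampering anywhere
    # Roots differ: compare leaf hashes to localize the altered blocks,
    # which could then be repaired from a trusted copy.
    return [i for i, (a, b) in enumerate(zip(current[0], trusted_levels[0]))
            if a != b]
```

Because the comparison works on hashes, even a one-bit change in a block is detected, matching the abstract's observation that the slightest alterations are recognized.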
APA, Harvard, Vancouver, ISO, and other styles
21

Huang, WeiSian, and 黃偉賢. "Instant Auditing of Cloud Storage Access by Cache Partial Merkle tree." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/22505799904029871228.

Full text
Abstract:
Master's thesis
National Taiwan Normal University
Department of Computer Science and Information Engineering
101
Nowadays cloud services are becoming more and more popular, and one of the most important applications is cloud storage. However, storing important data in cloud storage may carry serious security risks. For example, the service provider can launch a roll-back attack, restoring lost files from a backup of an earlier version of them and their associated digital signatures; the provider can then deny that the user's latest version of the files has been lost. Therefore, we need a scheme that enables the client device to audit whether a file obtained from the service provider is valid. In this paper, we first show that the intuitive solution of instant auditing by applying a Merkle tree is inappropriate. Then, we propose an instant-auditing communication protocol that guarantees mutual nonrepudiation between the service provider and the user, in which each client device only has to keep a partial Merkle tree of its account and its last attestation. All client devices can audit whether an obtained file is valid after every file write operation, without having to broadcast their attestations to all other client devices. The experimental results demonstrate the feasibility of the proposed scheme. A cloud storage service provider can use the proposed scheme to offer an instant-auditing guarantee in its service-level agreement.
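The audit step a client can perform with only a partial Merkle tree and its last attested root can be sketched as a standard audit-path check. This is a generic illustration under my own naming, not the thesis's full protocol: the mutual-nonrepudiation handshake and attestation exchange are not shown.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    # Server-side: full Merkle tree over the account's file hashes.
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def audit_path(levels, index):
    # The partial tree a client needs: one sibling hash per level.
    path = []
    for level in levels[:-1]:
        sibling = index ^ 1
        path.append((level[sibling], sibling < index))
        index //= 2
    return path

def audit(file_hash, path, attested_root) -> bool:
    # Client-side: recompute the root from the file's hash and its audit
    # path, then compare with the root kept from the last attestation.
    acc = file_hash
    for sibling, sibling_is_left in path:
        acc = h(sibling + acc) if sibling_is_left else h(acc + sibling)
    return acc == attested_root
```

A rolled-back file fails this check because its hash no longer recombines to the root the client attested after its most recent write.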
APA, Harvard, Vancouver, ISO, and other styles
22

Wilde, Evan. "Merge-Trees: Visualizing the integration of commits into Linux." Thesis, 2018. https://dspace.library.uvic.ca//handle/1828/10053.

Full text
Abstract:
Version control systems are an asset to software development, enabling developers to keep snapshots of the code as they work. Stored in the version control system is the entire history of the software project, rich in information about who is contributing to the project, when contributions are made, and to what part of the project they are being made. Presented in the right way, this information can be made invaluable in helping software developers continue the development of the project, and in helping maintainers understand how changes to the current version can be applied to older versions of the project. Maintainers are unable to effectively use the information stored within a software repository to assist with the maintenance of older versions of that software in highly collaborative projects. The Linux kernel repository is an example of such a project. This thesis focuses on improving visualizations of the Linux kernel repository, developing new visualizations that help answer questions about how commits are integrated into the project. Older versions of the kernel are used in a variety of systems where it is impractical to update to the current version of the kernel. Some of these applications include the controllers for spacecraft, the core of mobile phones, the operating system driving internet routers, and Internet-of-Things (IoT) device firmware. As vulnerabilities are discovered in the kernel, they are patched in the current version. To ensure that older versions are also protected against the vulnerabilities, the patches applied to the current version of the kernel must be applied back to the older version. To do this, maintainers must be able to understand how the patch that fixed the vulnerability was integrated into the kernel so that they may apply it to the old version as well.
This thesis makes four contributions: (1) a new tree-based model, the Merge-Tree, that abstracts the commits in the repository, (2) three visualizations that use this model, (3) a tool that uses these visualizations, (4) a user study that evaluates whether the tool is effective in helping users answer questions about how commits are integrated into the Linux repository. The first contribution includes the new tree-based model, the algorithm that constructs the trees from the repository, and the evaluation of the algorithm's results. The second contribution demonstrates some of the potential visualizations of the repository that are made possible by the model, and how these visualizations can be used depending on the structure of the tree. The third contribution is an application that applies the visualizations to the Linux kernel repository. The tool was able to help the participants of the study understand how commits were integrated into the Linux kernel repository. Additionally, the participants were able to summarize information about merges, including who made the most contributions and which files were altered the most, more quickly and accurately than with Gitk and the command-line tools.
Graduate
APA, Harvard, Vancouver, ISO, and other styles
23

Bouidani, Maher M. "Design and implementation of a blockchain shipping application." Thesis, 2019. https://dspace.library.uvic.ca//handle/1828/10568.

Full text
Abstract:
The emerging blockchain technology has the potential to shift traditional centralized systems to become more flexible, efficient, and decentralized. An important area in which to apply this capability is the supply chain. Supply chain visibility and transparency have become important aspects of a successful supply chain platform as it becomes more complex than ever before. The complexity comes from the number of participants involved and the intricate roles and relations among them. This puts more pressure on the system and the customers in terms of system availability and tamper-resistant data. This thesis presents a private and permissioned application that uses blockchain and aims to automate the shipping processes among different participants in the supply chain ecosystem. Data in this private ledger is governed by the participants' invocation of their smart contracts. These smart contracts are designed to satisfy the participants' different roles in the supply chain. Moreover, this thesis discusses the performance measurements of this application in terms of transaction throughput, average transaction latency, and resource utilization.
Graduate
APA, Harvard, Vancouver, ISO, and other styles
24

Подкорытов, Д. А., and D. A. Podkorytov. "Разработка «smart-контракта» для партнерской программы на основе блокчейн-технологии : магистерская диссертация." Master's thesis, 2018. http://hdl.handle.net/10995/64271.

Full text
Abstract:
Цель работы: разработка концепции эффективной системы учета и отслеживания продукции, основанной на применении блокчейн-технологии, в связи c изменениями в маркировке табачной продукции. Задачи работы: - изучить принципы блокчейн технологий; - изучить принципы «smart-контракта» технологии; - рассмотреть преимущества и недостатки технологии; - разработать smart-контракт; - оценить экономическую эффективность проекта. Объект исследования – блокчейн-технологии. Предмет исследования - процесс маркировки продукции. В первой главе приведен обзор теоретического материала по технологии блокчейн. Вторая глава посвящена разработке методики использования «smart-контракта» при маркировке табачной продукции. В третьей главе рассмотрено применение смарт - контракта для конкретного бизнес-процесса. Результаты работы: практическим результатом работы является концепция умного контракта, который существенно упростит процесс, а также позволит маркировать продукцию и контролировать сбыт.
The main objective is to develop the concept of an effective system for accounting and product tracking, based on the use of blockchain technology, in response to changes in the labeling of tobacco products. Tasks of this research: to examine the principles of blockchain technology; to examine the principles of "smart contract" technology; to consider the advantages and disadvantages of the technology; to develop a smart contract; to evaluate the economic efficiency of the project. The object of this research is blockchain technology. The subject of the study is the process of labeling products. The first chapter provides an overview of theoretical material on blockchain technology. The second chapter is devoted to developing a methodology for using a "smart contract" in the labeling of tobacco products. The third chapter deals with the application of a smart contract to a particular business process. The practical result of the work is the concept of a smart contract that will greatly simplify the process and also make it possible to label products and control sales.
APA, Harvard, Vancouver, ISO, and other styles
25

Oesterling, Patrick. "Visual Analysis of High-Dimensional Point Clouds using Topological Abstraction." Doctoral thesis, 2015. https://ul.qucosa.de/id/qucosa%3A14718.

Full text
Abstract:
This thesis is about visualizing a kind of data that is trivial to process by computers but difficult to imagine by humans because nature does not allow for intuition with this type of information: high-dimensional data. Such data often result from representing observations of objects under various aspects or with different properties. In many applications, a typical, laborious task is to find related objects or to group those that are similar to each other. One classic solution for this task is to imagine the data as vectors in a Euclidean space with object variables as dimensions. Utilizing Euclidean distance as a measure of similarity, objects with similar properties and values accumulate to groups, so-called clusters, that are exposed by cluster analysis on the high-dimensional point cloud. Because similar vectors can be thought of as objects that are alike in terms of their attributes, the point cloud's structure and individual cluster properties, like their size or compactness, summarize data categories and their relative importance. The contribution of this thesis is a novel analysis approach for visual exploration of high-dimensional point clouds without suffering from structural occlusion. The work is based on implementing two key concepts: The first idea is to discard those geometric properties that cannot be preserved and, thus, lead to the typical artifacts. Topological concepts are used instead to shift away the focus from a point-centered view on the data to a more structure-centered perspective. The advantage is that topology-driven clustering information can be extracted in the data's original domain and be preserved without loss in low dimensions. The second idea is to split the analysis into a topology-based global overview and a subsequent geometric local refinement.
The occlusion-free overview enables the analyst to identify features and to link them to other visualizations that permit analysis of those properties not captured by the topological abstraction, e.g. cluster shape or value distributions in particular dimensions or subspaces. The advantage of separating structure from data point analysis is that restricting local analysis only to data subsets significantly reduces artifacts and the visual complexity of standard techniques. That is, the additional topological layer enables the analyst to identify structure that was hidden before and to focus on particular features by suppressing irrelevant points during local feature analysis. This thesis addresses the topology-based visual analysis of high-dimensional point clouds for both the time-invariant and the time-varying case. Time-invariant means that the points do not change in their number or positions. That is, the analyst explores the clustering of a fixed and constant set of points. The extension to the time-varying case implies the analysis of a varying clustering, where clusters appear as new, merge or split, or vanish. Especially for high-dimensional data, both tracking---which means to relate features over time---and visualizing changing structure are difficult problems to solve.
APA, Harvard, Vancouver, ISO, and other styles