To see the other types of publications on this topic, follow the link: Private information retrieval (PIR).

Dissertations / Theses on the topic 'Private information retrieval (PIR)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 31 dissertations / theses for your research on the topic 'Private information retrieval (PIR).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Miceli, Michael. "Private Information Retrieval in an Anonymous Peer-to-Peer Environment." ScholarWorks@UNO, 2011. http://scholarworks.uno.edu/td/1331.

Full text
Abstract:
Private Information Retrieval (PIR) protocols enable a client to access data from a server without revealing which data was accessed. The study of Computational Private Information Retrieval (CPIR) protocols, the branch of PIR that relies on computational rather than information-theoretic security, has recently been reinvigorated within cryptography. However, CPIR protocols have not yet been used in any practical application. The aim of this thesis is to determine whether the Melchor–Gaborit CPIR protocol can be deployed in a practical manner in an anonymous peer-to-peer environment.
APA, Harvard, Vancouver, ISO, and other styles
2

Duguépéroux, Joris. "Protection des travailleurs dans les plateformes de crowdsourcing : une perspective technique." Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S023.

Full text
Abstract:
This work focuses on protecting workers in a crowdsourcing context. Workers are especially vulnerable in online work, and both surveillance by platforms and the lack of regulation are frequently denounced for endangering them. Our first contribution focuses on protecting workers' privacy on a single platform, while still allowing their anonymized data to be used, e.g., for assigning tasks to workers or for helping requesters design tasks. Our second contribution considers a multi-platform context and proposes a set of tools for law-makers to regulate platforms, allowing them to enforce limits on interactions in various ways (to limit working time, for instance) while also guaranteeing transparency and privacy. Both approaches rely on numerous technical tools, such as cryptography, distributed computation, and anonymization, and are accompanied by security proofs and validated experimentally. A third, smaller contribution draws attention to a limitation and possible security issue in one of these technical tools, PIR, when it is used repeatedly, a problem that has so far been ignored in state-of-the-art contributions.
APA, Harvard, Vancouver, ISO, and other styles
3

Malek, Behzad. "Efficient private information retrieval." Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/26966.

Full text
Abstract:
In this thesis, we study Private Information Retrieval and Oblivious Transfer, two strong cryptographic tools that are widely used in various security-related applications, such as private data-mining schemes and secure function evaluation protocols. The first non-interactive, secure dot-product protocol, widely used in private data-mining schemes, is proposed based on trace functions over finite fields. We further improve the communication overhead of the best previously known Oblivious Transfer protocol from O((log n)^2) to O(log n), where n is the size of the database. Our communication-efficient Oblivious Transfer protocol is a non-interactive, single-database scheme built on homomorphic encryption functions. We also introduce a new protocol that reduces the computational overhead of Private Information Retrieval protocols. This protocol is shown to be computationally secure for users, based on the security of the McEliece public-key cryptosystem. The total online computational overhead is the same as in the case where no privacy is required. The computation-saving protocol can be implemented entirely in software, without any need to install a secure piece of hardware or to replicate the database among servers.
APA, Harvard, Vancouver, ISO, and other styles
4

Meyer, Pierre. "Sublinear-communication secure multiparty computation." Electronic Thesis or Diss., Université Paris Cité, 2023. http://www.theses.fr/2023UNIP7129.

Full text
Abstract:
Secure Multi-Party Computation (MPC) [Yao82, GMW87a] allows a set of mutually distrusting parties to perform a joint computation on their private inputs without revealing anything beyond the output. A major open question is to understand how strongly the communication complexity of MPC and the computational complexity of the function being computed are correlated. An intriguing starting point is the study of the circuit-size barrier. The relevance of this barrier is historical, and potentially absolute: all seminal protocols from the 1980s and 1990s use a "gate-by-gate" approach, requiring interaction between the parties for each (multiplicative) gate of the circuit being computed, and this remains the state of the art if we wish to provide the strongest security guarantees. The circuit-size barrier has been broken in the computational setting under specific, structured computational assumptions, via Fully Homomorphic Encryption (FHE) [Gen09] and later Homomorphic Secret Sharing (HSS) [BGI16a]. Additionally, the circuit-size barrier for online communication has been broken information-theoretically in the correlated randomness model [IKM+13, DNNR17, Cou19], but no such result is known for the total communication complexity in the plain model. Our methodology is to draw inspiration from known approaches in the correlated randomness model, which we view simultaneously as fundamental (because it provides information-theoretic security guarantees) and inherently limited (because the best we can hope for in this model is to understand the online communication complexity of secure computation), in order to devise new ways to break the circuit-size barrier in the computational setting. In the absence of a better way to decide when concrete progress has been made, we take extending the set of assumptions known to imply sublinear-communication secure computation as "proof of conceptual novelty". This approach has allowed us to break the circuit-size barrier under quasipolynomial LPN [CM21] or under QR and LPN [BCM22]. More fundamentally, these works constituted a paradigm shift, away from the "homomorphism-based" approaches of FHE and HSS, which ultimately allowed us to break the two-party barrier for sublinear-communication secure computation and to provide in [BCM23] the first sublinear-communication protocol with more than two parties, without FHE. Orthogonally to this line of work, which focuses purely on computational security, we showed in [CMPR23] that [BGI16a] could be adapted to provide information-theoretic security for one of the two parties and computational security for the other: these are provably the strongest security guarantees one can hope to achieve in the two-party setting (without setup), and ours is the first sublinear-communication protocol in this setting which does not use FHE.
APA, Harvard, Vancouver, ISO, and other styles
5

Yekhanin, Sergey. "Locally Decodable Codes and Private Information Retrieval Schemes." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/42242.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (leaves 90-99). This thesis studies two closely related notions, namely Locally Decodable Codes (LDCs) and Private Information Retrieval schemes (PIRs). Locally decodable codes are error-correcting codes that allow extremely efficient, "sublinear-time" decoding procedures. More formally, a k-query locally decodable code encodes n-bit messages x in such a way that one can probabilistically recover any bit x_i of the message by querying only k bits of the (possibly corrupted) codeword, where k can be as small as 2. LDCs were initially introduced in complexity theory in the context of worst-case to average-case reductions and probabilistically checkable proofs. Later they found applications in numerous other areas, including information theory, cryptography and the theory of fault-tolerant computation. The major goal of LDC-related research is to establish the optimal trade-off between the length N and the query complexity k of such codes, for a given message length n. Private information retrieval schemes are cryptographic protocols developed to protect the privacy of the user's query when accessing a public database. In such schemes a database (modelled by an n-bit string x) is replicated between k non-communicating servers. The user holds an index i and is interested in obtaining the value of the bit x_i. To achieve this goal, the user queries each of the servers and gets replies from which the desired bit x_i can be computed. The query to each server is distributed independently of i, and therefore each server gets no information about what the user is after. The main parameter of interest in a PIR scheme is its communication complexity, namely the number of bits exchanged by the user accessing an n-bit database and the servers. In this thesis we provide a fresh algebraic look at the theory of locally decodable codes and private information retrieval schemes. We obtain new families of LDCs and PIRs that have much better parameters than those of previously known constructions. We also prove limitations of two-server PIRs in a restricted setting that covers all currently known schemes. Below is a more detailed summary of our contributions.
* Our main result is a novel (point removal) approach to constructing locally decodable codes that yields vast improvements upon the earlier work. Specifically, given any Mersenne prime p = 2^t - 1, we design three-query LDCs of length N = exp(n^(1/t)), for every n. Based on the largest known Mersenne prime, this translates to a length of less than exp(n^(10^-7)), compared to exp(n^(1/2)) in the previous constructions. It has often been conjectured that there are infinitely many Mersenne primes. Under this conjecture, our constructions yield three-query locally decodable codes of length N = exp(n^(O(1/log log n))) for infinitely many n.
* We address a natural question regarding the limitations of the point-removal approach. We argue that further progress in the unconditional bounds via this method (under a fairly broad definition of the method) is tied to progress on an old number-theory question regarding the size of the largest prime factors of Mersenne numbers.
* Our improvements in the parameters of locally decodable codes yield analogous improvements for private information retrieval schemes. We give 3-server PIR schemes with communication complexity of O(n^(10^-7)) to access an n-bit database, compared to the previous best scheme with complexity O(n^(1/5.25)). Assuming again that there are infinitely many Mersenne primes, we get 3-server PIR schemes of communication complexity n^(O(1/log log n)) for infinitely many n.
* Our constructions yield tremendous improvements for private information retrieval schemes involving three or more servers, and provide no insights on the two-server case. This raises a natural question regarding whether the two-server case is truly intrinsically different. We argue that this may well be the case. We introduce a novel combinatorial approach to PIR and establish the optimality of the currently best known two-server schemes in a restricted although fairly broad model.
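For readers unfamiliar with the replicated-server model described in this abstract, the following minimal sketch (in the spirit of the classic two-server scheme of Chor et al., not a construction from this thesis) shows how querying two non-communicating servers with queries that are each individually uniformly random lets the user recover x_i while neither server learns anything about i. The function names and the toy database are illustrative only.

```python
# Illustrative two-server information-theoretic PIR, in the spirit of Chor et al.
# The database is an n-bit string x replicated on two non-communicating servers.
import secrets

def client_query(n, i):
    """Return the two query sets; each set on its own is a uniformly random subset of [0, n)."""
    s1 = {j for j in range(n) if secrets.randbits(1)}   # uniformly random subset
    s2 = s1 ^ {i}                                        # symmetric difference with {i}
    return s1, s2

def server_answer(x, query):
    """Each server XORs together the bits of x indexed by the set it receives."""
    ans = 0
    for j in query:
        ans ^= x[j]
    return ans

def client_reconstruct(a1, a2):
    """The XOR of the two answers is exactly x_i (the sets differ only at index i)."""
    return a1 ^ a2

# Toy run.
x = [1, 0, 1, 1, 0, 0, 1, 0]    # 8-bit database
i = 5                           # index the client wants to read privately
q1, q2 = client_query(len(x), i)
bit = client_reconstruct(server_answer(x, q1), server_answer(x, q2))
assert bit == x[i]
```

In this naive form each query is a subset of [0, n), so the communication is linear in n; the point of the constructions studied and improved in this thesis is to drive that cost down, e.g. to n^(O(1/log log n)) for three servers as stated above.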
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Tianren S. M. Massachusetts Institute of Technology. "On basing private information retrieval on NP-hardness." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106093.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. Includes bibliographical references (pages 17-19). The possibility of basing the security of cryptographic objects on the (minimal) assumption that NP ⊄ BPP is at the very heart of complexity-theoretic cryptography. Most known results along these lines are negative, showing that, assuming widely believed complexity-theoretic conjectures, there are no reductions from an NP-hard problem to the task of breaking certain cryptographic schemes. We make progress along this line of inquiry by showing that the security of single-server single-round private information retrieval schemes cannot be based on NP-hardness, unless the polynomial hierarchy collapses. Our main technical contribution is in showing how to break the security of a PIR protocol given an SZK oracle. Our result is tight in terms of both the correctness and the privacy parameters of the PIR scheme.
APA, Harvard, Vancouver, ISO, and other styles
7

Lincoln, Laura Beth. "Symmetric private information retrieval via additive homomorphic probabilistic encryption /." Online version of thesis, 2006. https://ritdml.rit.edu/dspace/handle/1850/2792.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Raymond, Jean-Francois 1974. "Private information retrieval : improved upper bound, extension and applications." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=30830.

Full text
Abstract:
Private Information Retrieval (PIR), which allows users to query one (or many replicated) database(s) for the ith element, while keeping i private, has received a lot of attention in recent years. Indeed, since Chor et al. [31, 32] introduced this problem in 1995, many researchers have improved bounds and proposed extensions. The following pages continue along this path: pushing the techniques of [52] we obtain an improved upper bound and define and provide a solution to a new problem which we call private information retrieval with authentication. In addition, we motivate the study of PIRs by presenting new and useful real world applications.
APA, Harvard, Vancouver, ISO, and other styles
9

Zhou, Yanliang. "Efficient Linear Secure Computation and Symmetric Private Information Retrieval Protocols." Thesis, University of North Texas, 2020. https://digital.library.unt.edu/ark:/67531/metadc1752381/.

Full text
Abstract:
Security and privacy are of paramount importance in the modern information age. Secure multi-party computation and private information retrieval are canonical and representative problems in cryptography that capture the key challenges in understanding the fundamentals of security and privacy. In this dissertation, we use information theoretic tools to tackle these two classical cryptographic primitives. In the first part, we consider the secure multi-party computation problem, where multiple users, each holding an independent message, wish to compute a function on the messages without revealing any additional information. We present an efficient protocol in terms of randomness cost to securely compute a vector linear function. In the second part, we discuss the symmetric private information retrieval problem, where a user wishes to retrieve one message from a number of replicated databases while keeping the desired message index a secret from each individual database. Further, the user learns nothing about the other messages. We present an optimal protocol that achieves the minimum upload cost for symmetric private information retrieval, i.e., the queries sent from the user to the databases have the minimum number of bits.
APA, Harvard, Vancouver, ISO, and other styles
10

Asonov, Dmitri. "Querying databases privately : a new approach to private information retrieval /." Berlin : Springer, 2004. http://springerlink.metapress.com/openurl.asp?genre=issue&issn=0302-9743&volume=3128.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Guarente, Jeffrey. "Study of the computational efficiency of single server private information retrieval." Thesis, Boston University, 2013. https://hdl.handle.net/2144/12769.

Full text
Abstract:
Thesis (M.S.)--Boston University. Private Information Retrieval (PIR) is a protocol for a client to retrieve information from a server without revealing anything about which item was retrieved. It has numerous applications, but its extremely poor performance and the awkward assumptions about its usage have prevented its uptake. This work provides background on PIR performance, frames the problem of finding efficient PIR as the problem of finding a code with a local decoding property, shows that existing families of locally decodable codes are not suitable, and lists some requirements that codes must satisfy to produce secure PIR.
APA, Harvard, Vancouver, ISO, and other styles
12

Harvey, Brett D. "A code of practice for practitioners in private healthcare: a privacy perspective." Thesis, Nelson Mandela Metropolitan University, 2007. http://hdl.handle.net/10948/521.

Full text
Abstract:
Whereas there are various initiatives to standardize the storage, processing and use of electronic patient information in the South African health sector, the sector is fragmented through the adoption of various approaches at national, provincial and district levels. Divergent IT systems are used in the public and private health sectors (“Recommendations of the Committee on …” 2003). Furthermore, general practitioners in some parts of the country still use paper as a primary means of documentation and storage. Nonetheless, the use of computerized systems is increasing, even in the most remote rural areas. This leads to the exposure of patient information to various threats that arise through the use of information technology. Irrespective of the level of technology adoption by practitioners in private healthcare practice, the security and privacy of patient information remains of critical importance. The disclosure of patient information, whether intentional or not, can have dire consequences for a patient. In general, the requirements pertaining to the privacy of patient information are controlled and enforced through the adoption of legislation by the governing body of a country. Compared with developed nations, South Africa has limited legislation to help enforce privacy in the health sector. By contrast, Australia, New Zealand and Canada have some of the most advanced legislative frameworks when it comes to the privacy of patient information. In this dissertation, the Australian, New Zealand, Canadian and South African health sectors and the legislation they have in place to ensure the privacy of health information will be investigated. Additionally, codes of practice and guidelines on the privacy of patient information for GPs in the aforementioned countries will be investigated, to form an idea of what is needed in creating and formulating a new code of practice for the South African GP, as well as a pragmatic tool (checklist) to check adherence to privacy requirements.
APA, Harvard, Vancouver, ISO, and other styles
13

SENIGAGLIESI, LINDA. "Information-theoretic security techniques for data communications and storage." Doctoral thesis, Università Politecnica delle Marche, 2019. http://hdl.handle.net/11566/263165.

Full text
Abstract:
Recent years have seen a growing need for security and privacy in many aspects of communications, in step with technological progress. Most of the security solutions implemented today are based on the notion of computational security and must be kept continuously updated to face new attacks and technology advancements. To meet increasingly strict requirements, solutions based on the information-theoretic paradigm are gaining interest as a complement to purely cryptographic techniques, thanks to their ability to achieve security independently of the attacker's computing resources, also known as unconditional security. In this work we investigate how information-theoretic security can be applied to practical systems in order to ensure data security and privacy. We first define information-theoretic metrics to assess the secrecy performance of realistic wireless communication settings under practical conditions, together with a protocol that combines coding techniques for physical-layer security with cryptographic solutions. This scheme is able to achieve a given level of semantic security in the presence of a passive attacker. Multiple scenarios are considered: we provide a security analysis for parallel relay channels, finding the optimal resource allocation that maximizes the secrecy rate. Subsequently, by exploiting a probabilistic model checker, we define the parameters for heterogeneous distributed storage systems that allow perfect secrecy to be achieved under practical conditions. For privacy purposes, we propose a scheme that guarantees private information retrieval of files in a wireless-edge caching scenario in the presence of multiple spy nodes. Finally, we find the optimal content placement that minimizes backhaul usage, thus reducing the communication cost of the system.
APA, Harvard, Vancouver, ISO, and other styles
14

Hezaveh, Maryam. "Privacy Preservation for Nearby-Friends and Nearby-Places Location-Based Services." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39234.

Full text
Abstract:
This thesis looks at the problem of discovering nearby friends and nearby places of interest in a privacy-preserving way using location-based services on mobile devices (e.g., smartphones). First, we propose a privacy-preserving protocol for the discovery of nearby friends. In this scenario, Alice wants to verify whether any of her friends are close to her. This should be done without disclosing any information about Alice to her friends, and without disclosing any of the other parties' information to Alice. We also demonstrate that our approach can be efficiently applied to other similar problems; in particular, we use it to provide a solution to the socialist millionaires' problem. Second, we propose a privacy-preserving protocol for discovering nearby places of interest. In this scenario, the proposed protocol allows Alice to learn whether any place of the kind she is looking for is near her, without the location-based service (LBS) that helps Alice find nearby places learning Alice's location. Using private information retrieval (PIR), Alice can send a request to the LBS database to retrieve nearby places of interest (POIs) without the database learning what Alice fetched. Our approach reduces the client-side computational overhead by applying the grid-square-system and POI-type ideas to block-based PIR schemes, making it suitable for LBS smartphone applications. We also show that our second approach is flexible and can support all types of block-based PIR schemes. As an item of independent interest, we also propose adding a machine learning algorithm to our nearby-friends Android application to estimate the validity of a user's claimed location, preventing users from sending a fake location to the LBS application.
APA, Harvard, Vancouver, ISO, and other styles
15

Kincaid, David Thomas. "Evaluation of computer hardware and software in the private country club sector of Virginia." Thesis, Virginia Tech, 1994. http://hdl.handle.net/10919/42007.

Full text
Abstract:
The world has seen incredible changes in recent years, and the most notable has been the introduction of computers into our society. One industry that has greatly benefited from the use of computers is the hospitality industry, and the country club sector is one area of the hospitality industry that has been greatly improved through their use. This study evaluated the software and hardware for private country clubs and related that to the usage of these products by the private country clubs in Virginia. The study utilized a survey to investigate the types of, and methods by which, computers have impacted these country clubs. The survey's results were offered to each country club that was surveyed, for their use in whatever manner they find helpful.
APA, Harvard, Vancouver, ISO, and other styles
16

Barrier, Joris. "Chiffrement homomorphe appliqué au retrait d'information privé." Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0041/document.

Full text
Abstract:
Private information retrieval (PIR) designates a family of protocols that belongs to the broader set of privacy-enhancing technologies. Its main feature is to hide from the host the index of the record that a user retrieves from a list. Without neglecting the scientific contributions of their authors, the usability of these protocols has seemed limited, because it is often more efficient for a user simply to download the entire list. To date, PIR schemes rely on mutually distrustful replicated servers, trusted hardware, or cryptographic systems. We focus here on computational private information retrieval, and more specifically on schemes based on lattices, which offer particular properties such as homomorphism. To demonstrate its usability, we propose a private information retrieval scheme based on an efficient and easy-to-use homomorphic cryptosystem.
APA, Harvard, Vancouver, ISO, and other styles
17

Stokes, Klara. "Combinatorial structures for anonymous database search." Doctoral thesis, Universitat Rovira i Virgili, 2011. http://hdl.handle.net/10803/52799.

Full text
Abstract:
This thesis treats a protocol for anonymous database search (or, if one prefers, a protocol for user-private information retrieval) that is based on the use of combinatorial configurations. The protocol is called P2P UPIR. It is proved that the (v,k,1)-balanced incomplete block designs (BIBDs), and in particular the finite projective planes, are optimal configurations for this protocol. The notion of n-anonymity is applied to the configurations for the P2P UPIR protocol, and the transversal designs are proved to be n-anonymous configurations for P2P UPIR with respect to the neighborhood points of the points of the configuration. It is proved that one can associate a numerical semigroup to the configurable tuples. This theorem implies results on the existence of combinatorial configurations. The proofs are constructive and can be used as algorithms for finding combinatorial configurations. It is also proved that one can associate a numerical semigroup to the triangle-free configurable tuples, which implies results on the existence of triangle-free combinatorial configurations.
APA, Harvard, Vancouver, ISO, and other styles
18

Booysen, Mary Kathleen. "An assessment of the computer literacy status of nurse managers in a private hospital group in the Nelson Mandela metropolitan area." Thesis, Nelson Mandela Metropolitan University, 2009. http://hdl.handle.net/10948/924.

Full text
Abstract:
There has been an increase in the use of information technology in the hospital environment over the past decade, and the use of computers by Nurse Managers is rapidly increasing. The latter poses a challenge to Nurse Managers, as their computer literacy status is unknown. This is evident from the fact that prior to 1996 there were only four computers at one of the private hospitals used in this study. Computer skills were never a requirement when applying for the position of Nurse Manager, and there is still currently no formal computer training provided for Nurse Managers or Acting Nurse Managers. Resources are, however, available in the hospitals to assist the managers with various computer problems, but it is not known whether these resources equip managers with the appropriate tools to become efficient in their role. The lack of formal training, and the lack of assessment of whether available resources meet the computer needs of Nurse Managers, result in a lot of wasted time and many frustrations among Nurse Managers. The researcher was therefore motivated by this problem to explore and describe the computer literacy status of Nurse Managers in order to make recommendations to management regarding the research findings. The researcher selected a quantitative, explorative, contextual and descriptive survey design. The research population was made up of all Nurse Managers and Acting Nurse Managers at the time of the study. A 100 percent sample was utilised and comprised thirty-four respondents, who made up the entire group of Nurse Managers and Acting Nurse Managers at the time of the study. A structured, self-administered questionnaire was used in Phase One of the research, and in Phase Two a data observation sheet was used to test the respondents and to collect the necessary data. This data was manually processed and analysed by the researcher. All ethical considerations were honoured throughout the research process. The main findings of the research study reflected that the respondents had a below-average ability to use various software packages such as Microsoft Word, Excel and PowerPoint. Findings further revealed that the respondents' literacy levels were average with regard to the use of peripheral components of the computer, such as the mouse and keyboard. The respondents rated their competency level as average with regard to using a computer. Due to the limitations and small sample size of the study, the researcher recommends that further research with a larger sample, expanding the research into the other private hospitals in the group throughout South Africa, should take place in order to produce more constructive results than this study.
APA, Harvard, Vancouver, ISO, and other styles
19

Lancrenon, Jean. "Authentification d'objets à distance." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00685206.

Full text
Abstract:
This thesis is devoted to the description and security analysis of various protocols for the remote authentication of physical objects based on the comparison of binary vectors. The aim of the proposed protocols is to perform authentication while guaranteeing, on the one hand, that the information sent and received by the reader has not been manipulated by an external adversary and, on the other hand, that the identity of the tested object is not revealed to such an adversary, or even, under certain reasonable assumptions, to the components of the system. A further objective is to use elliptic-curve cryptography in order to benefit from its good properties, notably increased security relative to the key sizes used. We present several protocols that achieve this objective and, for almost all of them, establish a theoretical proof of their security, thanks in particular to a new characterization of a standard security notion.
APA, Harvard, Vancouver, ISO, and other styles
20

Minelli, Michele. "Fully homomorphic encryption for machine learning." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE056/document.

Full text
Abstract:
Fully homomorphic encryption enables computation on encrypted data without leaking any information about the underlying data. In short, one party can encrypt some input data, while another party, which does not have access to the decryption key, can blindly perform some computation on this encrypted input. The final result is also encrypted, and it can be recovered only by the party that possesses the secret key. In this thesis, we present new techniques and constructions for FHE that are motivated by applications to machine learning, with particular attention to the problem of homomorphic inference, i.e., the evaluation of already-trained cognitive models on encrypted data. First, we propose a novel FHE scheme that is tailored to evaluating neural networks on encrypted inputs. Our scheme achieves a complexity that is essentially independent of the number of layers in the network, whereas the efficiency of previously proposed schemes strongly depends on the topology of the network. Second, we present a new technique for achieving circuit privacy for FHE. This allows us to hide the computation that is performed on the encrypted data, as is necessary to protect proprietary machine learning algorithms. Our mechanism incurs very small computational overhead while keeping the same security parameters. Together, these results strengthen the foundations of efficient FHE for machine learning and pave the way towards practical privacy-preserving deep learning. Finally, we present and implement a protocol based on homomorphic encryption for the problem of private information retrieval, i.e., the scenario where a party wants to query a database held by another party without revealing the query itself.
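To make the last sentence concrete, here is a minimal sketch of the standard homomorphic-encryption blueprint for single-server PIR: the client encrypts a selection vector, the server folds the database into it homomorphically, and the client decrypts a single ciphertext. The sketch uses textbook additively homomorphic Paillier encryption with insecure toy parameters purely as a stand-in; the protocol implemented in the thesis is based on fully homomorphic encryption, and none of the names or parameters below come from it.

```python
# Toy single-server PIR from additively homomorphic encryption (textbook Paillier).
# Illustrative stand-in only: insecure toy primes, NOT the FHE-based protocol of the thesis.
import random
from math import gcd, lcm

# --- minimal Paillier (g = n + 1 variant) ---
p, q = 2357, 2551
n = p * q
n2 = n * n
lam = lcm(p - 1, q - 1)

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    u = pow(c, lam, n2)
    return ((u - 1) // n) * pow(lam, -1, n) % n

# --- PIR blueprint ---
database = [17, 42, 7, 99, 23]   # small non-negative records (< n)
i = 3                            # index the client wants, hidden from the server

# Client: encrypts the unit vector e_i; each ciphertext looks like a fresh encryption.
query = [encrypt(1 if j == i else 0) for j in range(len(database))]

# Server: homomorphically computes Enc(sum_j database[j] * e_i[j]) = Enc(database[i]).
answer = 1
for c, record in zip(query, database):
    answer = (answer * pow(c, record, n2)) % n2

# Client: decrypts a single ciphertext to recover the wanted record.
assert decrypt(answer) == database[i]
```

The upload cost here is one ciphertext per record; practical HE-based PIR schemes restructure the database (e.g., as a hypercube) to shrink the query, but the encrypt-select-decrypt blueprint is the same.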
APA, Harvard, Vancouver, ISO, and other styles
21

Nardi, Jade. "Quelques retombées de la géométrie des surfaces toriques sur un corps fini sur l'arithmétique et la théorie de l'information." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30051.

Full text
Abstract:
Part of this thesis, at the interface between computer science and mathematics, is dedicated to the study of the parameters and properties of Goppa codes over Hirzebruch surfaces. From an arithmetical perspective, the question of the number of rational points of a variety defined over a finite field, which seemed settled by the Lefschetz formula, regained interest thanks to error-correcting codes. The minimum distance of an algebraic-geometric code provides an upper bound on the number of rational points of a hypersurface of a given variety with a fixed Picard class. Since reducible curves are most likely to reach this bound, one can focus on irreducible curves to get sharper bounds. A global strategy to bound the number of points on a variety, depending on its ambient space and some of its geometric invariants (notably from intersection theory), is exhibited here. Moreover, we develop a method for curves on toric surfaces by adapting the idea of F.J. Voloch and K.O. Stöhr to toric varieties. Finally, we are interested in Private Information Retrieval protocols, which aim to ensure that a user can access an entry of a database without revealing any information about it to the database owner. A PIR protocol based on codes over weighted projective planes is presented here. It improves on existing protocols by resisting server collusion, at the expense of a loss of storage capacity. This issue is fixed by a lifting process, which leads to asymptotically good families of codes with the same local properties.
APA, Harvard, Vancouver, ISO, and other styles
22

Olumofin, Femi George. "Practical Private Information Retrieval." Thesis, 2011. http://hdl.handle.net/10012/6142.

Full text
Abstract:
In recent years, the subject of online privacy has been attracting much interest, especially as more Internet users than ever are beginning to care about the privacy of their online activities. Privacy concerns are even prompting legislators in some countries to demand from service providers a more privacy-friendly Internet experience for their citizens. These are welcome developments and in stark contrast to the practice of Internet censorship and surveillance that legislators in some nations have been known to promote. The development of Internet systems that are able to protect user privacy requires private information retrieval (PIR) schemes that are practical, because no other efficient techniques exist for preserving the confidentiality of the retrieval requests and responses of a user from an Internet system holding unencrypted data. This thesis studies how PIR schemes can be made more relevant and practical for the development of systems that are protective of users' privacy. Private information retrieval schemes are cryptographic constructions for retrieving data from a database, without the database (or database administrator) being able to learn any information about the content of the query. PIR can be applied to preserve the confidentiality of queries to online data sources in many domains, such as online patents, real-time stock quotes, Internet domain names, location-based services, online behavioural profiling and advertising, search engines, and so on. In this thesis, we study private information retrieval and obtain results that seek to make PIR more relevant in practice than all previous treatments of the subject in the literature, which have been mostly theoretical. We also show that PIR is the most computationally efficient known technique for providing access privacy under realistic computation powers and network bandwidths. Our result covers all currently known varieties of PIR schemes. We provide a more detailed summary of our contributions below. Our first result addresses an existing question regarding the computational practicality of private information retrieval schemes. We show that, contrary to previous arguments, recent lattice-based computational PIR schemes and multi-server information-theoretic PIR schemes are much more computationally efficient than a trivial transfer of the entire PIR database from the server to the client (i.e., trivial download). Our result shows that the end-to-end response times of these schemes are one to three orders of magnitude (10-1000 times) smaller than the trivial download of the database for realistic computation powers and network bandwidths. This result extends and clarifies the well-known result of Sion and Carbunar on the computational practicality of PIR. Our second result is a novel approach for preserving the privacy of sensitive constants in an SQL query, which improves substantially upon earlier work. Specifically, we provide an expressive data access model of SQL atop the existing rudimentary index- and keyword-based data access models of PIR. The expressive SQL-based model developed results in a 7- to 480-fold improvement in query throughput over previous work. We then provide a PIR-based approach for preserving access privacy over large databases. Unlike previously published access privacy approaches, we explore new ideas about privacy-preserving constraint-based query transformations, offline data classification, and privacy-preserving queries to index structures much smaller than the databases. This work addresses an important open problem about how real systems can systematically apply existing PIR schemes for querying large databases. In terms of applications, we apply PIR to solve the user privacy problem in the domains of patent database queries and location-based services, user and database privacy problems in the domain of online sales of digital goods, and a scalability problem for the Tor anonymous communication network. We develop practical tools for most of our techniques, which can be useful for adding PIR support to existing and new Internet system designs.
APA, Harvard, Vancouver, ISO, and other styles
23

Vinayak, R. "On Codes for Private Information Retrieval and Ceph Implementation of a High-Rate Regenerating Code." Thesis, 2017. http://etd.iisc.ac.in/handle/2005/3800.

Full text
Abstract:
Error-control codes, which are extensively used in communication systems, have also proved very useful in data storage during the past decade. This thesis deals with two types of codes for data storage, one pertaining to the issue of privacy and the other to reliability. In many scenarios, a user accessing some critical data from a server would not want the server to learn the identity of the data retrieved. This problem, called Private Information Retrieval (PIR), was first formally introduced by Chor et al., who gave protocols for PIR in the case where multiple copies of the same data are stored on non-communicating servers. The PIR protocols that came up later also followed this replication model. The problem with data replication is the high storage overhead involved, which leads to large storage costs. Later, Fazeli, Vardy and Yaakobi came up with the notion of a PIR code that enables information-theoretic PIR with low storage overhead. In the first part of this thesis, the construction of PIR codes for certain parameter values is presented. These constructions are based on a variant of conventional Reed-Muller (RM) codes called binary Projective Reed-Muller (PRM) codes. A lower bound on the block length of systematic PIR codes is derived, and the PRM-based PIR codes are shown to be optimal with respect to this bound in some special cases. The codes constructed here have smaller block lengths than the short-block-length PIR codes known in the literature. The generalized Hamming weights of binary PRM codes are also studied. Another work described here is the implementation and evaluation of an erasure code called the Coupled Layer (CL) code in the Ceph distributed storage system. Erasure codes are used in distributed storage to ensure reliability. An additional desirable feature for codes used in this setting is the ability to handle node repair efficiently. The Minimum Storage Regenerating (MSR) version of the CL code downloads the optimal amount of data from other nodes during the repair of a failed node, and even the disk reads during this process are optimal for that storage overhead. The CL-Near-MSR code, which is a variant of CL-MSR, can also efficiently handle a restricted set of multiple node failures. Four example CL codes were evaluated using a 26-node Amazon cluster, and performance metrics such as network bandwidth, disk reads and repair time were measured. A repair time reduction on the order of 3x was observed for one of those codes, in comparison with a Reed-Solomon code having the same parameters. To the best of our knowledge, such large gains in repair performance have never been demonstrated before.
APA, Harvard, Vancouver, ISO, and other styles
24

Vinayak, R. "On Codes for Private Information Retrieval and Ceph Implementation of a High-Rate Regenerating Code." Thesis, 2017. http://etd.iisc.ernet.in/2005/3800.

Full text
Abstract:
Error-control codes, which are extensively used in communication systems, have also proved very useful in data storage during the past decade. This thesis deals with two types of codes for data storage, one pertaining to the issue of privacy and the other to reliability. In many scenarios, a user accessing some critical data from a server would not want the server to learn the identity of the data retrieved. This problem, called Private Information Retrieval (PIR), was first formally introduced by Chor et al., who gave protocols for PIR in the case where multiple copies of the same data are stored on non-communicating servers. The PIR protocols that came up later also followed this replication model. The problem with data replication is the high storage overhead involved, which leads to large storage costs. Later, Fazeli, Vardy and Yaakobi came up with the notion of a PIR code that enables information-theoretic PIR with low storage overhead. In the first part of this thesis, the construction of PIR codes for certain parameter values is presented. These constructions are based on a variant of conventional Reed-Muller (RM) codes called binary Projective Reed-Muller (PRM) codes. A lower bound on the block length of systematic PIR codes is derived, and the PRM-based PIR codes are shown to be optimal with respect to this bound in some special cases. The codes constructed here have smaller block lengths than the short-block-length PIR codes known in the literature. The generalized Hamming weights of binary PRM codes are also studied. Another work described here is the implementation and evaluation of an erasure code called the Coupled Layer (CL) code in the Ceph distributed storage system. Erasure codes are used in distributed storage to ensure reliability. An additional desirable feature for codes used in this setting is the ability to handle node repair efficiently. The Minimum Storage Regenerating (MSR) version of the CL code downloads the optimal amount of data from other nodes during the repair of a failed node, and even the disk reads during this process are optimal for that storage overhead. The CL-Near-MSR code, which is a variant of CL-MSR, can also efficiently handle a restricted set of multiple node failures. Four example CL codes were evaluated using a 26-node Amazon cluster, and performance metrics such as network bandwidth, disk reads and repair time were measured. A repair time reduction on the order of 3x was observed for one of those codes, in comparison with a Reed-Solomon code having the same parameters. To the best of our knowledge, such large gains in repair performance have never been demonstrated before.
APA, Harvard, Vancouver, ISO, and other styles
25

Chen, Chun-Hua, and 陳俊華. "Private Information Retrieval Schemes and their Applications." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/08573431891642343437.

Full text
Abstract:
Doctoral dissertation, National Chung Hsing University, Department of Computer Science and Engineering. In the Internet environment, protecting a user's privacy from a server was not considered feasible until the private information retrieval (PIR) problem was stated and solved. A PIR scheme allows a user to retrieve data items from an online database while hiding the identity of the items from the database server. Research on PIR was initiated by Chor et al. in 1995. The communication complexity of retrieving one out of n bits is a measure of the cost of a PIR scheme, where n is the size of the database. It has been proved that the communication complexity of any one-server scheme with information-theoretic security is at least n, which is unacceptable in real applications. By using a k-server scheme, however, the communication complexity was improved to O(n^(1/k)) by Chor et al., and much of the subsequent PIR research focused on further reducing the communication complexity of k-server schemes. In this dissertation, we point out a serious shortcoming of k-server PIR schemes: the large overhead of managing the servers. Remarkably, Kushilevitz et al. proposed a one-server PIR scheme based on the quadratic residuosity assumption under computational security, which is weaker than information-theoretic security; this scheme overcomes the heavy server-management overhead of k-server schemes. However, we identify a drawback of Kushilevitz's PIR scheme: it reveals the server's privacy to the user. In real applications the user pays a fee for every query, so this is unfair to the server side. In this dissertation, we present a one-server PIR scheme with fair privacy for both the user side and the server side to overcome this drawback. Chapters 3 and 4 of this dissertation focus on applications of PIR schemes. In Chapter 3, we consider protecting a customer's privacy when querying valuable information on the Internet, and present a solution in the form of a PIR scheme with an e-payment function. In Chapter 4, we use the concept of a one-server PIR scheme in e-voting and propose a novel, practical e-voting system with low cost and good efficiency. The PIR schemes proposed in Chapters 3 and 4 use a secure coprocessor (SC) to improve efficiency, a concept inspired by Smith and Asonov. In Chapter 5, we point out a security leak in their SC-based PIR schemes and propose our own PIR scheme with an SC to strengthen the security. In summary, this dissertation introduces PIR schemes and presents a computational one-server PIR scheme that achieves fair privacy between the server side and the user side. We also apply PIR schemes to build an e-payment function and to set up a one-server e-voting system. Finally, we strengthen the security of PIR schemes that use an SC.
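For reference, the one-server computational scheme attributed above to Kushilevitz et al. can be sketched as follows. This is a toy Python illustration under the quadratic residuosity assumption, with tiny hard-coded primes and illustrative names, and it omits the recursion and hardening of the real scheme: the database is arranged as a matrix, the user sends one element per column that is a quadratic residue modulo N = pq except in the target column, and only the user, who knows the factorization, can tell which per-row products are residues.

```python
import math, secrets

p, q = 499, 547          # toy primes; a real instantiation needs a large RSA-size modulus
N = p * q

def is_qr(x, prime):
    """Euler's criterion: x is a quadratic residue mod an odd prime iff x^((prime-1)/2) = 1."""
    return pow(x % prime, (prime - 1) // 2, prime) == 1

def is_qr_mod_N(x):
    """Only the user, who knows p and q, can run this test."""
    return is_qr(x, p) and is_qr(x, q)

def random_qr():
    r = secrets.randbelow(N - 2) + 2
    while math.gcd(r, N) != 1:
        r = secrets.randbelow(N - 2) + 2
    return (r * r) % N

def random_pseudo_nqr():
    """A non-residue mod both p and q (Jacobi symbol +1): indistinguishable from a
    residue without the factorization, under the quadratic residuosity assumption."""
    while True:
        r = secrets.randbelow(N - 2) + 2
        if math.gcd(r, N) == 1 and not is_qr(r, p) and not is_qr(r, q):
            return r

def make_query(num_cols, target_col):
    return [random_pseudo_nqr() if j == target_col else random_qr() for j in range(num_cols)]

def server_answer(db_matrix, y):
    """For each row, multiply y_j (bit 1) or y_j^2 (bit 0); a row's product is a
    non-residue exactly when the bit in the target column of that row is 1."""
    answers = []
    for row in db_matrix:
        acc = 1
        for bit, yj in zip(row, y):
            acc = (acc * (yj if bit else (yj * yj) % N)) % N
        answers.append(acc)
    return answers

# Toy run: retrieve db[1][2] without revealing the column of interest to the server.
db = [[1, 0, 1, 0],
      [0, 0, 1, 1],
      [1, 1, 0, 0]]
row_i, col_i = 1, 2
ans = server_answer(db, make_query(4, col_i))
assert (0 if is_qr_mod_N(ans[row_i]) else 1) == db[row_i][col_i]
```

Arranging the n database bits as a roughly sqrt(n)-by-sqrt(n) matrix gives communication of about sqrt(n) group elements in each direction, illustrating how a computational assumption circumvents the linear lower bound that holds for one-server PIR with information-theoretic security.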
APA, Harvard, Vancouver, ISO, and other styles
26

Huang, Yizhou. "Outsourced Private Information Retrieval with Pricing and Access Control." Thesis, 2013. http://hdl.handle.net/10012/7576.

Full text
Abstract:
We propose a scheme for outsourcing Private Information Retrieval (PIR) to untrusted servers while protecting the privacy of the database owner as well as that of the database clients. We observe that by layering PIR on top of an Oblivious RAM (ORAM) data layout, we provide the ability for the database owner to perform private writes, while database clients can perform private reads from the database even while the owner is offline. We can also enforce pricing and access control on a per-record basis for these reads. This extends the usual ORAM model by allowing multiple database readers without requiring trusted hardware; indeed, almost all of the computation in our scheme during reads is performed by untrusted cloud servers. Building on top of a simple ORAM protocol, we implement a real system as a proof of concept. Our system privately updates a 1 MB record in a 16 GB database with an average end-to-end overhead of 1.22 seconds and answers a PIR query within 3.5 seconds over a 2 GB database. We further observe that the database owner can always conduct a private read as an ordinary database client, and that the private write protocol does not have to provide a "read" functionality as a standard ORAM protocol does. Based on this observation, we propose a second, much faster construction with the same privacy guarantee. We also implement a real system for this construction, which privately writes a 1 MB record in a 1 TB database with an amortized end-to-end response time of 313 ms. Our first construction demonstrates that a standard ORAM protocol can be used for outsourcing PIR computations in a privacy-friendly manner, while our second construction shows that an ad-hoc modification of the standard ORAM protocol is possible for our purpose and allows more efficient record updates.
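As a point of contrast with the ORAM layer mentioned above, the deliberately naive Python sketch below (illustrative names; SHA-256 used as a toy keystream, not a vetted cipher) shows the simplest way an owner can hide which record a write touched: re-encrypt every block under a fresh key on each update. A real ORAM, as used in the thesis, achieves the same hiding without this linear cost, and layering PIR on top then lets clients read privately while the owner is offline.

```python
import hashlib, os

RECORD_SIZE = 16

def keystream(key: bytes, index: int, length: int) -> bytes:
    """Toy per-record keystream derived from SHA-256 (illustrative only)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + index.to_bytes(8, "big") + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, index: int, record: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(record, keystream(key, index, len(record))))

decrypt = encrypt  # XOR with the keystream is its own inverse

def private_write(blocks, old_key, target_index, new_record):
    """Owner-side 'touch everything' update: decrypt all blocks, change one record,
    and re-encrypt everything under a fresh key, so the untrusted server storing the
    blocks cannot tell which record actually changed."""
    new_key = os.urandom(32)
    plaintexts = [decrypt(old_key, i, blk) for i, blk in enumerate(blocks)]
    plaintexts[target_index] = new_record
    return new_key, [encrypt(new_key, i, rec) for i, rec in enumerate(plaintexts)]

# Toy run: four fixed-size records outsourced to an untrusted server.
key = os.urandom(32)
records = [f"record {i}".ljust(RECORD_SIZE).encode() for i in range(4)]
blocks = [encrypt(key, i, r) for i, r in enumerate(records)]
key, blocks = private_write(blocks, key, 2, "updated!".ljust(RECORD_SIZE).encode())
assert decrypt(key, 2, blocks[2]) == "updated!".ljust(RECORD_SIZE).encode()
```

The sketch only illustrates the write-privacy goal; it says nothing about the pricing and access-control mechanisms of the actual construction.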
APA, Harvard, Vancouver, ISO, and other styles
27

Vajha, Myna. "Codes for distributed storage, private information retrieval and low-latency streaming." Thesis, 2020. https://etd.iisc.ac.in/handle/2005/4706.

Full text
Abstract:
This thesis presents results on error-correcting codes for three settings: (a) distributed storage, (b) private information retrieval, and (c) low-latency streaming. It also presents two new decoding algorithms for polar codes in the low-memory setting. Codes for Distributed Storage: Current distributed storage systems (DSS) store huge volumes of data, running into several exabytes (a billion GB), where recovery of failed or unavailable units such as a disk, node or rack is a common operation. To ensure reliable and efficient storage, erasure codes are increasingly preferred over replication, owing to the smaller storage overheads they allow for the same reliability guarantees. When an [n,k] erasure code is used to store a file, the file is partitioned into k chunks, (n-k) parity chunks are computed from them, and the n chunks are stored across n nodes. It has been observed in DSSs that the most common failure is that of a single node among the n nodes; however, classical erasure codes are not very efficient at repairing a single node failure. This led to a new class of erasure codes called Regenerating Codes (RGCs), which aim to minimize the repair bandwidth, i.e., the amount of data communicated during repair of a failed node. In an RGC, a data file composed of B symbols from a finite field is encoded into n*alpha symbols and stored across n nodes, with each node storing alpha symbols. An RGC must satisfy two properties: (1) data collection and (2) node repair. The data collection property requires that by contacting any k out of n nodes and downloading alpha symbols from each, one should be able to recover the B symbols of the file; it follows that B <= k*alpha. The node repair property requires that, given any node failure, by downloading beta <= alpha symbols from any d of the remaining n-1 nodes, one should be able to reconstruct alpha symbols such that the data collection and node repair properties continue to hold. The subclass of RGCs in which the recovered node must be exactly the same as the failed node is called exact-repair regenerating codes (ER-RGCs), while the general case is referred to as functional-repair regenerating codes (FR-RGCs). For a given parameter set (n, k, d, B), there exists a tradeoff between alpha, the storage per node, and d*beta, the repair bandwidth; this is referred to as the storage-repair-bandwidth (SRB) tradeoff. While the SRB tradeoff is completely characterized in the FR case, it remains an open problem in the ER case. In this thesis we restrict attention to ER-RGCs. A subclass of RGCs called minimum storage regenerating (MSR) codes is of particular interest: they minimize the storage overhead, i.e., B = k*alpha, and are therefore maximum distance separable (MDS) codes by the data collection property, while minimizing the repair bandwidth d*beta = d*alpha/(d-k+1). Additional attributes desirable in an MSR code are (1) the optimal access property, where the beta symbols communicated during repair of a failed node are the only symbols accessed at the helper nodes, (2) small field size q, and (3) small sub-packetization level alpha. Contribution (a): We present the Coupled Layer (Clay) code construction for any (n, k, d) parameters with alpha = s^{n/s}, where s = d-k+1, and show that it is an optimal-access MSR code when d = n-1.
We also show that in the case d < n-1, Clay codes satisfy the MDS property and can achieve the optimal repair bandwidth of d*alpha/(d-k+1) under a mild restriction on the set of d helper nodes. These codes can be constructed over any finite field of size q > n-1 and have the optimal sub-packetization level. Contribution (b): We implemented the Clay code construction in Ceph, a popular open-source distributed storage system. The contributions to Ceph involved enabling vector-code support and providing the Clay code as an erasure-coding option; these changes are available in Ceph's Nautilus release, making Clay codes the first instance of regenerating codes being made available through an open-source framework. We evaluated the performance of Clay codes against Reed-Solomon (RS) codes on an Amazon EC2 cluster for both small and large workloads, and the evaluations showed that the theoretical guarantees of Clay codes are realized in practice. We observed that the single-node repair time of the (n=20, k=16, d=19) Clay code is 3x smaller than that of the RS code with the same parameters. Contribution (c): Clay code constructions are MSR codes only for d = n-1 and carry a mild helper-node restriction when d < n-1. We therefore present an MSR construction for the small-d regime d = k+1, k+2, k+3. This construction satisfies the optimal access property, has the lowest possible sub-packetization level for an optimal-access MSR code, and a field size of O(n) suffices to construct it. Contribution (d): We present an improved upper bound on the file size of ER-RGCs, which leads to an improved outer bound for the ER SRB tradeoff when d > k in certain regimes of (alpha, beta). Codes for Private Information Retrieval: Private Information Retrieval (PIR) refers to retrieving a symbol from a database of B symbols without revealing any information about the index of the symbol being retrieved. The communication complexity of a PIR protocol is the number of bits sent in queries and answers in order to retrieve a symbol privately. Chor et al. showed that to achieve PIR with a single-server database the communication complexity has to be at least B, and that by using two non-communicating replicated servers it is possible to lower the communication complexity to O(B^{1/3}). Several PIR protocols followed that further reduced the communication complexity for t >= 2 replicated servers; however, all of these schemes require a storage overhead >= 2. To reduce the storage overhead, Fazeli, Vardy and Yaakobi introduced the notion of a PIR code and showed that this class of codes permits using PIR protocols developed for the replicated-server setting in a coded setting, thereby achieving significant savings in storage. Given an (n, k) t-server PIR code, the requirement is that any message symbol be recoverable from t disjoint recovery sets. Contribution (e): We first show that the subcode of a Reed-Muller (RM) code generated by evaluations of monomials of degree r in m variables results in a PIR code with dimension k = {m choose r} and t = 2^{m-r}. This subcode is systematic and is referred to as the Shortened Reed-Muller (SRM) code. We then provide a shortening algorithm for the SRM code that yields constructions of systematic PIR codes for any t = 2^l or 2^l - 1, where l is a positive integer, and any k.
These codes have lower storage overhead than the known short-block-length codes in the literature. We present a lower bound on the block length of a systematic PIR code and show that for t = 3, 4 the codes constructed here are optimal with respect to this bound. The shortening algorithm we provide yields an upper bound on the Generalized Hamming Weight (GHW) hierarchy of the SRM code; its optimality is shown by proving a matching lower bound, which also characterizes the complete GHW hierarchy of the SRM code. The proofs of the lower bound on the GHW hierarchy adapt ideas from Wei's derivation of the GHWs of RM codes. Codes for Low-Latency Communications: This part of the thesis addresses packet-level forward error correction (FEC) schemes aimed at recovering from packet drops under a strict decoding-delay constraint. Specifically, the channel model considered is the delay-constrained sliding-window (DCSW) channel with parameters (a, b, t, w). In an (a, b, t, w)-DCSW channel, within any window of size w there are either at most a random erasures or a burst of consecutive erasures of length at most b, and the packet with index i must be recoverable by accessing packets up to index i+t, excluding the erasures. Badr et al. showed that the rate R achievable over an (a, b, t, w)-DCSW channel is upper bounded as R <= (t+1-a)/(t+1-a+b) = Ropt. Without loss of generality we can set w = t+1. An (a, b, t) streaming code is a packet-level FEC scheme that can recover from all the erasure patterns permitted by the (a, b, t, t+1)-DCSW channel. Streaming codes achieving Ropt were constructed by Krishnan et al.; these constructions employ a diagonal-embedding framework and have field size quadratic in the delay parameter t. The same paper also contains four additional constructions with linear field size, but only for some restricted cases: (i) b = a + 1, (ii) (t + a + 1) >= 2b >= 4a, (iii) a | b | (t + 1 - a), and (iv) b = 2a - 1 and b | (t + 2 - a). Contribution (f): The contributions of the current thesis in this area are the following. 1. It is shown how replacing the earlier diagonal-embedding approach by staggered-diagonal embedding (SDE) reduces the burden of erasure recovery placed on the underlying block code C, leading to simpler streaming-code constructions with smaller block length and reduced field size. Under SDE, the n code symbols of C are dispersed across a span of N packets, where N is referred to as the dispersion-span parameter. 2. The SDE approach yields (a, b, t) streaming codes for all parameters. These codes require a linear field size less than t+1 and have decoding complexity equivalent to that of decoding an MDS code of block length at most t+1. Furthermore, when either b | (t+1-a) or b = a, the resulting codes can be shown to be rate-optimal with respect to Ropt. 3. The limits of the SDE approach when the dispersion-span parameter N is restricted to be no larger than t+1 are explored. Limiting N further reduces the requirements placed on the scalar block code C, thereby simplifying code design. The maximum achievable rate Rmax under the restriction N <= t+1 is determined, and it is shown that this rate is always achievable using an MDS code of redundancy a. Of significant practical interest is the fact that, in some instances, this maximum rate Rmax is also achievable by a binary streaming code.
Decoding Algorithms for Polar Codes: Polar codes were shown by Arikan to be capacity achieving for binary discrete memoryless channels (B-DMCs) under the successive cancellation decoding (SCD) algorithm. Although the SCD algorithm is computationally very efficient, with complexity O(N log N) and memory O(N) for block length N, its frame error rate can be improved further. Decoding algorithms such as Successive Cancellation List Decoding (SCLD) and Successive Cancellation Stack Decoding (SCS) were proposed in the literature to improve the frame error rate at the cost of increased complexity and memory. Contribution (g): We take a fresh look at the SCD algorithm and try to improve its performance while keeping the memory requirement at O(N). We do so by proposing two decoding algorithms: (1) Successive Cancellation with Back-Tracking (SC-BT) and (2) Successive Cancellation with Look-Ahead (SC-LA). The computational complexity of SC-BT varies with the signal-to-noise ratio (SNR), whereas SC-LA has an approximate complexity of O((2^D/D) N log N), where D is a parameter of the algorithm that is independent of N. These algorithms compete with SCLD at smaller block lengths but need large computational complexity to compete with SCLD at larger block lengths. We extend the SC-LA algorithm to list decoding, where it uses memory O(LN) and has an approximate complexity of O((2^D/D) LN log N). The extended algorithm improves the frame error rate (FER) over SCLD with the same list size (memory). (Visvesvaraya PhD Scheme.)
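As a quick sanity check on the repair numbers and the streaming rate bound quoted in this abstract, the short Python sketch below plugs the (n=20, k=16, d=19) parameters into the standard MSR repair-bandwidth formula d*alpha/(d-k+1), compares it with naive MDS (Reed-Solomon-style) repair, and evaluates Ropt for one illustrative (a, b, t) triple; the figures are symbol counts under those formulas, not measured system performance.

```python
from fractions import Fraction

def msr_repair_bandwidth(k, d, alpha):
    """MSR repair bandwidth: each of the d helpers sends alpha/(d-k+1) symbols."""
    return Fraction(d * alpha, d - k + 1)

def naive_mds_repair_bandwidth(k, alpha):
    """Naive repair of an MDS (e.g., Reed-Solomon) code downloads k full nodes."""
    return Fraction(k * alpha)

def streaming_rate_bound(a, b, t):
    """Upper bound Ropt on the rate of an (a, b, t) streaming code over the DCSW channel."""
    return Fraction(t + 1 - a, t + 1 - a + b)

# Clay code parameters from the Ceph evaluation: (n, k, d) = (20, 16, 19),
# with sub-packetization alpha = s^(n/s), where s = d - k + 1 = 4.
n, k, d = 20, 16, 19
s = d - k + 1
alpha = s ** (n // s)

msr = msr_repair_bandwidth(k, d, alpha)
mds = naive_mds_repair_bandwidth(k, alpha)
print(f"alpha = {alpha} symbols per node")
print(f"MSR repair bandwidth : {msr} symbols")
print(f"naive MDS repair     : {mds} symbols ({float(mds / msr):.2f}x more)")

# Illustrative rate bound: up to a = 2 random erasures or a burst of b = 4 within delay t = 12.
print(f"Ropt(a=2, b=4, t=12) = {streaming_rate_bound(2, 4, 12)}")
```

With these parameters the naive repair downloads roughly 3.4x more data than the MSR repair, which is consistent in order of magnitude with the roughly 3x repair-time reduction reported above.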
APA, Harvard, Vancouver, ISO, and other styles
28

Chou, Jen-Hou, and 周仁厚. "On the Possibility of Basing Oblivious Transfer on Weakened Private Information Retrieval." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/25782305461600957444.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Computer Science and Information Engineering. We consider the problem of reducing Oblivious Transfer to Private Information Retrieval. We give a simple reduction from 1-out-of-2 Oblivious Transfer to Private Information Retrieval, where the reduction is secure against malicious players. We also consider the completeness of Private Information Retrieval under weakened assumptions, giving both an impossibility result and a possibility result.
APA, Harvard, Vancouver, ISO, and other styles
29

Swanson, Colleen M. "Unconditionally Secure Cryptography: Signature Schemes, User-Private Information Retrieval, and the Generalized Russian Cards Problem." Thesis, 2013. http://hdl.handle.net/10012/7569.

Full text
Abstract:
We focus on three different types of multi-party cryptographic protocols. The first is in the area of unconditionally secure signature schemes, the goal of which is to provide users the ability to electronically sign documents without the reliance on computational assumptions needed in traditional digital signatures. The second is on cooperative protocols in which users help each other maintain privacy while querying a database, called user-private information retrieval protocols. The third is concerned with the generalized Russian cards problem, in which two card players wish to communicate their hands to each other via public announcements without the third player learning the card deal. The latter two problems have close ties to the field of combinatorial designs, and properly fit within the field of combinatorial cryptography. All of these problems have a common thread, in that they are grounded in the information-theoretically secure or unconditionally secure setting.
APA, Harvard, Vancouver, ISO, and other styles
30

"The use of computer by private practitioners in Hong Kong : an opportunity study." Chinese University of Hong Kong, 1986. http://library.cuhk.edu.hk/record=b5885625.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Paulet, Russell. "Design and analysis of privacy-preserving protocols." Thesis, 2013. https://vuir.vu.edu.au/24832/.

Full text
Abstract:
More and more of our daily activities use the Internet as an easy way to access instant information. The equipment enabling these interactions also stores information such as access times, where you are, and what you plan to do. The ability to store this information is very convenient but is also the source of a major concern: once data is stored, it must be protected. If the data were left unprotected, people would be reluctant to use the service. The aim of this thesis is to remove the need to store such data, while still maintaining overall utility, by designing and analysing privacy-preserving protocols.
APA, Harvard, Vancouver, ISO, and other styles