Dissertations / Theses on the topic 'Code correcteur d'erreurs'
Consult the top 50 dissertations / theses for your research on the topic 'Code correcteur d'erreurs.'
Bieliczky, Peter. "Implantation vlsi d'un algorithme de code correcteur d'erreurs et validation formelle de la realisation." Paris 6, 1993. http://www.theses.fr/1993PA066670.
Kakakhail, Syed Shahkar. "Prédiction et estimation de très faibles taux d'erreurs pour les chaînes de communication codées." Cergy-Pontoise, 2010. http://www.theses.fr/2010CERG0437.
The time taken by standard Monte Carlo (MC) simulation to calculate the Frame Error Rate (FER) increases exponentially with the increase in Signal-to-Noise Ratio (SNR). Importance Sampling (IS) is one of the most successful techniques used to reduce the simulation time. In this thesis, we investigate an advanced version of IS, called the Adaptive Importance Sampling (AIS) algorithm, to efficiently evaluate the performance of Forward Error Correcting (FEC) codes at very low error rates. First we present the inspirations and motivations behind this work by analyzing different approaches currently in use, putting an emphasis on methods inspired by statistical physics. Then, based on this qualitative analysis, we present an optimized method, namely the Fast Flat Histogram (FFH) method, for the performance evaluation of FEC codes, which is generic in nature. The FFH method employs the Wang-Landau algorithm and is based on Markov Chain Monte Carlo (MCMC). It operates in an AIS framework and gives a good simulation gain. Sufficient statistical accuracy is ensured through different parameters. Extension to other types of error-correcting codes is straightforward. We present results for LDPC codes and turbo codes with different code lengths and rates, showing that the FFH method is generic and applicable to different families of FEC codes of any length, rate and structure. Moreover, we show that the FFH method is a powerful tool to tease out the pseudo-codewords in the high-SNR region when Belief Propagation is used as the decoding algorithm for the LDPC codes.
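The exponential cost of plain Monte Carlo at high SNR, and the re-weighting idea behind importance sampling, can be illustrated with a toy sketch (not taken from the thesis: the (7,1) repetition code, the noise levels and the simple variance-scaling biasing density are illustrative choices; the FFH method itself relies on Wang-Landau / MCMC machinery that is not reproduced here):

```python
# Illustrative sketch: estimating a frame error rate (FER) by plain Monte Carlo
# versus a simple importance-sampling (IS) estimator that inflates the noise
# variance and re-weights each sample by the likelihood ratio. A (7,1)
# repetition code with majority decoding over BPSK/AWGN stands in for a real FEC code.
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 7, 0.5            # code length, true noise standard deviation
sigma_q = 1.0                # inflated standard deviation of the IS sampling density

def frame_error(noise):
    """All-zero codeword sent as +1; majority decoding fails if most samples are negative."""
    return np.sum(1.0 + noise < 0.0, axis=1) > n // 2

def mc_fer(num_frames):
    noise = rng.normal(0.0, sigma, size=(num_frames, n))
    return frame_error(noise).mean()

def is_fer(num_frames):
    noise = rng.normal(0.0, sigma_q, size=(num_frames, n))      # biased (easier-to-fail) sampling
    # Per-frame likelihood ratio between the true and the biased noise densities.
    log_w = (np.sum(noise**2, axis=1) * (1/(2*sigma_q**2) - 1/(2*sigma**2))
             + n * np.log(sigma_q / sigma))
    return np.mean(frame_error(noise) * np.exp(log_w))

print("plain MC :", mc_fer(200_000))   # very few error events at this SNR
print("IS       :", is_fer(200_000))   # same quantity, estimated from many weighted events
```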
Barbier, Johann. "Analyse de canaux de communication dans un contexte non coopératif : application aux codes correcteurs d'erreurs et à la stéganalyse." Phd thesis, Palaiseau, Ecole polytechnique, 2007. http://www.theses.fr/2007EPXX0039.
Barbier, Johann. "Analyse de canaux de communication dans un contexte non coopératif." Phd thesis, Ecole Polytechnique X, 2007. http://pastel.archives-ouvertes.fr/pastel-00003711.
Full textLegeay, Matthieu. "Utilisation du groupe de permutations d'un code correcteur pour améliorer l'efficacité du décodage." Rennes 1, 2012. http://www.theses.fr/2012REN1S099.
Error-correcting codes and the associated decoding problem are one of the approaches considered in post-quantum cryptography. In general, a random code often has a trivial permutation group. However, the codes involved in the construction of cryptosystems and cryptographic functions based on error-correcting codes usually have a non-trivial permutation group. Moreover, few cryptanalysis articles use the information contained in these permutation groups. We aim at improving decoding algorithms by using the permutation group of error-correcting codes. There are many ways to apply it. The first one we focus on in this thesis uses the cyclic permutation called "shift" in information set decoding. Thus, we build on a work initiated by MacWilliams and put forward a detailed analysis of the complexity. The other way we investigate is to use a permutation of order two to create algebraically a subcode of the original code. Decoding in this subcode, which has smaller parameters, is easier and allows information to be recovered with a view to decoding in the original code. Finally, we study the latter approach on a well-known family of codes, i.e. Reed-Muller codes, which extends the work initiated by Sidel'nikov and Pershakov.
Roux, Antoine. "Etude d’un code correcteur linéaire pour le canal à effacements de paquets et optimisation par comptage de forêts et calcul modulaire." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS337.
Reliably transmitting information over a transmission channel is a recurrent problem in computer science. Whatever the channel used to transmit information, erasures or outright losses inevitably occur. Different solutions can be used to solve this problem; using forward error correction codes is one of them. In this thesis, we study an error-correcting code developed in 2014 and 2015 for Thales during my second year of master's apprenticeship. It is currently used to ensure the reliability of a transmission based on the UDP protocol and passing through a network diode, Elips-SD. Elips-SD is an optical diode that can be plugged into an optical fiber to physically ensure that the transmission is unidirectional. The main use case of such a diode is to enable supervision of a critical site while ensuring that no information can be transmitted to this site. Conversely, another use case is transmission from one or several unsecured emitters to one secured receiver who wants to ensure that no information can be stolen. The code that we present is a linear code for the packet erasure channel that obtained NATO certification from the DGA (the French defence procurement agency). We named it Fauxtraut, for "Fast algorithm using Xor to repair altered unidirectional transmissions". In order to study this code, presenting how it works, its performance and the modifications we added during this thesis, we first establish a state of the art of forward error correction, focusing on non-MDS linear codes such as LDPC codes. Then we present Fauxtraut's behavior and analyze it theoretically and through simulations. Finally, we present different versions of this code that were developed during this thesis, leading to other use cases, such as reliably transmitting information that can be altered instead of being erased, or transmitting over a bidirectional channel as in the H-ARQ protocol, and to different results on the number of cycles in particular graphs. In the last part, we present results obtained during this thesis that finally led to an article in theoretical computer science. It concerns a non-polynomial problem of graph theory: maximum matching in temporal graphs. In this article, we propose two algorithms with polynomial complexity: a 2-approximation algorithm and a kernelization algorithm for this problem.
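As a point of reference for the packet-erasure setting described above, here is a minimal sketch of the XOR-parity principle that such codes build on (this is the textbook single-parity case, not the Fauxtraut code itself; the packet contents and group size are arbitrary):

```python
# Minimal sketch of packet-level XOR parity: one redundancy packet per group
# lets the receiver rebuild any single erased packet of that group.
from functools import reduce

def xor_packets(packets):
    """Byte-wise XOR of equal-length packets."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def encode(group):
    """Append one XOR parity packet to a group of source packets."""
    return list(group) + [xor_packets(group)]

def repair(received):
    """received[i] is None if packet i was erased; at most one erasure is repairable."""
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) > 1:
        raise ValueError("single-parity XOR cannot repair more than one erasure")
    if missing:
        received[missing[0]] = xor_packets([p for p in received if p is not None])
    return received[:-1]          # drop the parity packet, keep the source packets

group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
sent = encode(group)
sent[2] = None                    # the channel erases one packet
assert repair(sent) == group
```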
Charaf, Akl. "Etudes de récepteurs MIMO-LDPC itératifs." Phd thesis, Télécom ParisTech, 2012. http://pastel.archives-ouvertes.fr/pastel-00913457.
Goy, Guillaume. "Contribution à la sécurisation des implémentations cryptographiques basées sur les codes correcteurs d'erreurs face aux attaques par canaux auxiliaires." Electronic Thesis or Diss., Limoges, 2024. http://www.theses.fr/2024LIMO0036.
Side-channel attacks are a threat to cryptographic security, including post-quantum cryptography (PQC) such as code-based cryptography. In order to thwart these attacks, it is necessary to identify vulnerabilities in implementations. In this thesis, we present three attacks against the Hamming Quasi-Cyclic (HQC) scheme, a candidate in the NIST contest for the standardization of post-quantum schemes, along with countermeasures to protect against these attacks. The publicly known concatenated code structure of HQC gives several targets for side-channel attacks. First, we introduce a chosen-ciphertext attack targeting HQC's outer code, aiming at recovering the secret key. The physical behavior of the fast Hadamard transform (FHT), used during the Reed-Muller decoding step, allows the secret key to be retrieved with fewer than 20,000 physical measurements of the decoding process. Next, we developed a theoretical attack against the inner code to recover the shared key at the end of the protocol. This attack targets the Reed-Solomon code and, more specifically, the physical leaks during the execution of a Galois field multiplication. These leaks, combined with a Correlation Power Analysis (CPA) strategy, allow us to show that the security of the HQC shared key could be reduced from 2^128 to 2^96 operations with the knowledge of a single physical measurement. Finally, we used Belief Propagation tools to improve our attack and make it practically executable. This new approach allows us to practically recover the HQC shared key within minutes for all security levels. These efforts also demonstrated that known state-of-the-art countermeasures are not effective against our attack. In order to thwart these attacks, we present masking and shuffling countermeasures for the implementation of sensitive operations which manipulate secret data.
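For readers unfamiliar with CPA, the following sketch simulates the general principle on a Galois-field multiplication: a Hamming-weight leakage model plus Pearson correlation recovers a secret byte. The field polynomial (the AES one, 0x11B), the trace count and the noise level are illustrative assumptions and do not reproduce the HQC attack itself:

```python
# Generic CPA sketch: simulate Hamming-weight leakage of a GF(2^8) product with
# a secret operand, then correlate hypothetical leakages for every key guess.
import numpy as np

def gf256_mul(a, b):
    """Carry-less multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (AES polynomial)."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
    return r

def hw(v):
    return bin(v).count("1")

rng = np.random.default_rng(1)
secret, n_traces = 0x5A, 2000
inputs = rng.integers(0, 256, n_traces)
# Simulated leakage: Hamming weight of the multiplication output plus Gaussian noise.
traces = np.array([hw(gf256_mul(int(x), secret)) for x in inputs], dtype=float)
traces += rng.normal(0.0, 1.0, n_traces)

def cpa_recover(inputs, traces):
    best_guess, best_corr = None, -1.0
    # Guess 0 maps everything to 0 and gives a constant, uninformative hypothesis; skip it.
    for g in range(1, 256):
        hyp = np.array([hw(gf256_mul(int(x), g)) for x in inputs], dtype=float)
        corr = abs(np.corrcoef(hyp, traces)[0, 1])
        if corr > best_corr:
            best_guess, best_corr = g, corr
    return best_guess

assert cpa_recover(inputs, traces) == secret
```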
Chaulet, Julia. "Etude de cryptosystèmes à clé publique basés sur les codes MDPC quasi-cycliques." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066064/document.
Considering the McEliece cryptosystem with quasi-cyclic MDPC (Moderate Density Parity-Check) codes allows us to build a post-quantum encryption scheme with nice features. Namely, it has reasonable key sizes and both encryption and decryption are performed using binary operations. Thus, this scheme seems to be a good candidate for embedded and lightweight implementations. In this case, any information obtained through side channels can lead to an attack. In the McEliece cryptosystem, the decryption process essentially consists in decoding. As we consider the use of an iterative and probabilistic algorithm, the number of iterations needed to decode depends on the instance considered, and some instances may fail to be decoded. These behaviors are not suitable because they may be used to extract information about the secrets. One countermeasure could be to bound the number of encryptions using the same key. Another solution could be to employ a constant-time decoder with a negligible decoding failure probability, that is to say, one in line with the expected security level of the cryptosystem. The main goal of this thesis is to present new methods to analyse decoder behavior in a cryptographic context. Second, we explain why a McEliece encryption scheme based on polar codes does not ensure the expected level of security. To do so, we apply new techniques to solve the code equivalence problem. This allows us to highlight several common properties shared by Reed-Muller codes and polar codes. We introduce a new family of codes, named decreasing monomial codes, containing both Reed-Muller and polar codes. These results are also of independent interest for coding theory.
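The side-channel concern raised above, namely that the number of decoding iterations and the occurrence of decoding failures depend on the instance, can be illustrated with a toy bit-flipping decoder (a stand-in for a real MDPC decoder; the matrix size, column weight and error weight below are illustrative choices, not parameters from the thesis):

```python
# Toy illustration of iterative, probabilistic bit-flipping decoding: the number
# of iterations, and whether decoding succeeds at all, vary with the error pattern,
# which is exactly what a side-channel attacker can exploit.
import numpy as np

rng = np.random.default_rng(0)
n_checks, n_bits, col_weight = 20, 40, 3

# Random sparse parity-check matrix with column weight 3 (stand-in for an MDPC matrix).
H = np.zeros((n_checks, n_bits), dtype=int)
for j in range(n_bits):
    H[rng.choice(n_checks, size=col_weight, replace=False), j] = 1

def bit_flip_decode(word, max_iter=20):
    word = word.copy()
    for it in range(max_iter):
        syndrome = H @ word % 2
        if not syndrome.any():
            return it, True                       # iterations used, success
        unsat = H.T @ syndrome                    # unsatisfied checks per bit
        word[unsat == unsat.max()] ^= 1           # parallel bit flipping
    return max_iter, False                        # decoding failure

# The all-zero codeword is valid for any H; add random weight-3 error patterns.
for trial in range(8):
    received = np.zeros(n_bits, dtype=int)
    received[rng.choice(n_bits, size=3, replace=False)] = 1
    iters, ok = bit_flip_decode(received)
    print(f"trial {trial}: iterations={iters:2d}  success={ok}")
```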
Dila, Gopal Krishna. "Motifs de codes circulaires dans les gènes codant les protéines et les ARN ribosomaux." Electronic Thesis or Diss., Strasbourg, 2020. http://www.theses.fr/2020STRAD027.
The thesis focuses on motifs of the circular code X, an error-correcting code found in protein-coding genes, which have the ability to synchronize the reading frame. We first investigated the evolutionary conservation of X motifs in genes of different species and identified specific selective pressures to maintain them. We also identified a set of universal X motifs in ribosomal RNAs, which are located in important functional regions of the ribosome and suggest that circular codes represented an important step in the emergence of the standard genetic code (SGC). Then, we investigated the functional role of X motifs in modern translation processes and identified a strong correlation between X motif enrichment in genes and translation levels. Finally, we compared the frameshift optimality of the circular code X with the SGC and other maximal circular codes, and identified a new functionality of the code X in minimizing the effects of translation errors after frameshift events.
Chaulet, Julia. "Etude de cryptosystèmes à clé publique basés sur les codes MDPC quasi-cycliques." Electronic Thesis or Diss., Paris 6, 2017. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2017PA066064.pdf.
Candau, Marion. "Codes correcteurs d'erreurs convolutifs non commutatifs." Thesis, Brest, 2014. http://www.theses.fr/2014BRES0050/document.
An error-correcting code adds redundancy to a message in order to correct it when errors occur during transmission. Convolutional codes are powerful codes and are therefore often used. The principle of a convolutional code is to perform a convolution product between a message and a transfer function, both defined over the group of integers. These codes do not protect the message if it is intercepted by a third party. That is why we propose in this thesis convolutional codes with cryptographic properties, defined over non-commutative groups. We first studied codes over the infinite dihedral group, which, despite good performance, do not have the desired cryptographic properties. Consequently, we studied convolutional block codes over finite groups with a time-varying encoding. Every time a message needs to be encoded, the process uses a different subset of the group. These subsets are chaotically generated from an initial state. This initial state is considered as the symmetric key of the code-induced cryptosystem. We studied many groups and many methods to define these chaotic subsets. We examined the minimum distance of the codes we designed and we showed that it is slightly smaller than the minimum distance of linear block codes. Nevertheless, our codes have, in addition, cryptographic properties that the others do not have. These non-commutative convolutional codes are thus a compromise between error correction and security.
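For contrast with the non-commutative construction studied in the thesis, the classical commutative case it generalizes can be sketched in a few lines: a rate-1/2 binary convolutional encoder is simply two convolution products of the message with generator sequences, reduced modulo 2 (the usual (7,5) octal generator pair below is an illustrative choice, not one from the thesis):

```python
# Classical feed-forward convolutional encoding as a convolution product over GF(2).
import numpy as np

G = [np.array([1, 1, 1]), np.array([1, 0, 1])]       # generator sequences (transfer functions)

def conv_encode(bits):
    bits = np.asarray(bits)
    streams = [np.convolve(bits, g) % 2 for g in G]   # one convolution per output stream
    return np.column_stack(streams).ravel()           # interleave the two output streams

msg = [1, 0, 1, 1, 0, 0, 1]
print(conv_encode(msg))
```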
Côte, Maxime. "Reconnaissance de codes correcteurs d'erreurs." Phd thesis, Ecole Polytechnique X, 2010. http://pastel.archives-ouvertes.fr/pastel-00006125.
Adjudeanu, Irina. "Codes correcteurs d'erreurs LDPC structurés." Thesis, Université Laval, 2010. http://www.theses.ulaval.ca/2010/27423/27423.pdf.
Full textYaoumi, Mohamed. "Energy modeling and optimization of protograph-based LDPC codes." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0224.
There are different types of error-correcting codes, each of which gives different trade-offs in terms of decoding performance and energy consumption. We propose to deal with this problem for Low-Density Parity-Check (LDPC) codes. In this work, we considered LDPC codes constructed from protographs together with a quantized Min-Sum decoder, for their good performance and efficient hardware implementation. We used a method based on Density Evolution to evaluate the finite-length performance of the decoder for a given protograph. Then, we introduced two models to estimate the energy consumption of the quantized Min-Sum decoder. From these models, we developed an optimization method in order to select protographs that minimize the decoder energy consumption while satisfying a given performance criterion. The proposed optimization method was based on a genetic algorithm called differential evolution. In the second part of the thesis, we considered a faulty LDPC decoder, and we assumed that the circuit introduces some faults in the memory units used by the decoder. We then updated the memory energy model so as to take into account the noise in the decoder. Therefore, we proposed an alternating method to optimize the model parameters so as to minimize the decoder energy consumption for a given protograph.
Delpeyroux, Emmanuelle. "Contribution à l'étude des codes correcteurs d'erreurs." Toulouse 3, 1996. http://www.theses.fr/1996TOU30148.
Lacan, Jérôme. "Contribution à l'étude des codes correcteurs d'erreurs." Toulouse 3, 1997. http://www.theses.fr/1997TOU30274.
Full textCharaf, Akl. "Etudes de récepteurs MIMO-LDPC itératifs." Electronic Thesis or Diss., Paris, ENST, 2012. http://www.theses.fr/2012ENST0017.
The aim of this thesis is to address the design of iterative MIMO receivers using LDPC error-correcting codes. MIMO techniques enable capacity increases in wireless networks with no additional frequency resources. The association of MIMO with the multicarrier modulation technique OFDM has made them the cornerstone of emerging high-rate wireless networks. Optimal reception can be achieved using joint detection and decoding, at the expense of a huge complexity that makes it impractical. Disjoint reception is therefore the most widely used. The design of iterative receivers for some applications using LDPC codes, like WiFi (IEEE 802.11n), is constrained by the standard code structure, which is not optimized for this kind of receiver. By observing the effect of the number of iterations on performance and complexity, we underline the interest of scheduling LDPC decoding iterations and turbo-equalization iterations. We propose to define schedules for the iterative receiver in order to reduce its complexity while preserving its performance. Two approaches are used: static and dynamic scheduling. The second part of this work concerns multiuser MIMO using Spatial Division Multiple Access. We explore and evaluate the interest of using iterative reception to cancel residual inter-user interference.
Sendrier, Nicolas. "Codes correcteurs d'erreurs à haut pouvoir de correction." Paris 6, 1991. http://www.theses.fr/1991PA066630.
Full textUrvoy, De Portzamparc Frédéric. "Sécurités algébrique et physique en cryptographie fondée sur les codes correcteurs d'erreurs." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066106/document.
Code-based cryptography, introduced by Robert McEliece in 1978, is a potential candidate to replace the asymmetric primitives which are threatened by quantum computers. More generally, it has been considered secure for more than thirty years and allows a very broad range of encryption primitives. Its major drawback lies in the size of the public keys. For this reason, several variants of the original McEliece scheme with keys that are easier to store were proposed in recent years. In this thesis, we are interested in variants using alternant codes with symmetries and wild Goppa codes. We study their resistance to algebraic attacks and reveal sometimes fatal weaknesses. In each case, we show the existence of hidden algebraic structures allowing the secret key to be described by non-linear systems of multivariate equations containing fewer variables than in the previous modellings. Their resolution with Gröbner bases allows the secret keys to be found for numerous instances out of reach until now and proposed for cryptographic purposes. For the alternant codes with symmetries, we show a more fundamental vulnerability of the key-size reduction process. Prior to an industrial deployment, it is necessary to evaluate the resistance to physical attacks, which target the device executing a primitive. To this purpose, we describe a decryption algorithm for McEliece that is more resistant than the state of the art.
Urvoy, De Portzamparc Frédéric. "Sécurités algébrique et physique en cryptographie fondée sur les codes correcteurs d'erreurs." Electronic Thesis or Diss., Paris 6, 2015. http://www.theses.fr/2015PA066106.
Edoukou, Frédéric Aka-Bilé. "Codes correcteurs d'erreurs construits à partir des variétés algébriques." Aix-Marseille 2, 2007. http://theses.univ-amu.fr.lama.univ-amu.fr/2007AIX22011.pdf.
We study the functional code defined on a projective algebraic variety X over a finite field, as was done by V. Goppa on algebraic curves. The minimum distance of this code is determined by computing the number of rational points of the intersection of X with all the hypersurfaces of a given degree. In the case where X is a non-degenerate Hermitian surface, A. B. Sorensen formulated a conjecture in his Ph.D. thesis (1991), which should give the exact value of the minimum distance of this code. In this thesis, we give a proof of Sorensen's conjecture for quadrics. By using some results from finite geometry, we give the weight distribution associated with this code. We also study the functional code of order 2 defined on quadric surfaces and show that, according to their parameters, the ones built on elliptic quadrics are good codes. We give the best upper bounds for the number of points of quadratic sections of quadric varieties and of the non-degenerate Hermitian variety in projective dimension 4. Finally, we propose a generalization of the studied conjecture to higher-dimensional varieties.
Dubreuil, Laurent. "Amélioration de l'étalement de spectre par l'utilisation de codes correcteurs d'erreurs." Limoges, 2005. https://aurore.unilim.fr/theses/nxfile/default/3964483f-5b1f-41dd-b862-6c3d029c0d41/blobholder:0/2005LIMO0041.pdf.
In this thesis we study a communication technique known as spread spectrum. Its principle consists in spreading the energy of the signal to be transmitted over a frequency band broader than what is strictly necessary for the transmission of the useful signal. Spread spectrum is based on the use of "spreading sequences" having good correlation properties. In this thesis we introduce error-correcting codes to improve the efficiency of the spread signal. The aim of this thesis is to determine the efficiency of this method and the selection criteria for the error-correcting codes to use. The maximum number of users depends on the choice of the error-correcting code used, but also on the spreading sequence used. A synthesis of spread spectrum and CDMA (Code Division Multiple Access) is presented in a first part. Theoretical limits are given and physical limits are set out. Next, two spread-spectrum systems using different spreading sequences are presented and compared. The most powerful system, theoretically as well as practically, is spread spectrum "with multiple dephasing". The last part presents various error-correcting codes and determines which one maximizes the number of users. For a residual bit error rate lower than 10^-3 and a spreading factor of 31, the maximum number of users obtained in practice is 23 with an error-correcting code and 7 without it, while from the theoretical point of view the expected number is 45.
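The spreading/despreading mechanism described above can be sketched as follows (an illustrative direct-sequence example; the chip sequence, its length and the noise level are arbitrary choices, not the "multiple dephasing" system studied in the thesis):

```python
# Direct-sequence spreading sketch: each data bit is multiplied by a +/-1
# spreading sequence, and the receiver despreads by correlating the received
# chips with the same sequence.
import numpy as np

rng = np.random.default_rng(3)
spreading_seq = np.array([1, -1, 1, 1, -1, 1, -1])    # +/-1 chips, length 7

def spread(bits):
    symbols = 2 * np.asarray(bits) - 1                # map 0/1 -> -1/+1
    return (symbols[:, None] * spreading_seq).ravel() # one sequence period per bit

def despread(chips):
    blocks = chips.reshape(-1, len(spreading_seq))
    corr = blocks @ spreading_seq                     # correlate with the sequence
    return (corr > 0).astype(int)

bits = rng.integers(0, 2, 16)
noisy = spread(bits) + rng.normal(0.0, 0.5, 16 * len(spreading_seq))
assert np.array_equal(despread(noisy), bits)
```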
Leverrier, Anthony. "Etude théorique de la distribution quantique de clés à variables continues." Phd thesis, Paris, Télécom ParisTech, 2009. https://theses.hal.science/tel-00451021.
This thesis is concerned with quantum key distribution (QKD), a cryptographic primitive allowing two distant parties, Alice and Bob, to establish a secret key in spite of the presence of a potential eavesdropper, Eve. Here, we focus on continuous-variable protocols, for which the information is coded in phase space. The main advantage of these protocols is that their implementation only requires standard telecom components. The security of QKD lies in the laws of quantum physics: an eavesdropper will necessarily induce some noise on the communication, therefore revealing her presence. A particularly difficult step of continuous-variable QKD protocols is the "reconciliation", where Alice and Bob use their classical measurement results to agree on a common bit string. We first develop an optimal reconciliation algorithm for the initial protocol, then introduce a new protocol for which the reconciliation problem is automatically taken care of thanks to a discrete modulation. Proving the security of continuous-variable QKD protocols is a challenging problem because these protocols are formally described in an infinite-dimensional Hilbert space. A solution is to use all available symmetries of the protocols. In particular, we introduce and study a class of symmetries in phase space, which is particularly relevant for continuous-variable QKD. Finally, we consider finite-size effects for these protocols. We especially analyse the influence of parameter estimation on the performance of continuous-variable QKD protocols.
Leverrier, Anthony. "Etude théorique de la distribution quantique de clés à variables continues." Phd thesis, Télécom ParisTech, 2009. http://tel.archives-ouvertes.fr/tel-00451021.
Abdmouleh, Ahmed. "Codes correcteurs d'erreurs NB-LDPC associés aux modulations d'ordre élevé." Thesis, Lorient, 2017. http://www.theses.fr/2017LORIS452/document.
This thesis is devoted to the analysis of the association of non-binary LDPC (NB-LDPC) codes with high-order modulations. This association aims to improve the spectral efficiency of future wireless communication systems. Our approach tries to take maximum advantage of the direct association between NB-LDPC codes over a Galois field and modulation constellations of the same cardinality. We first investigate the optimization of the signal space diversity technique obtained on the Rayleigh channel (with and without erasures) thanks to the rotation of the constellation. To optimize the rotation angle, a mutual information analysis is performed for both coded modulation (CM) and bit-interleaved coded modulation (BICM) schemes. The study shows the advantages of coded modulations over the state-of-the-art BICM schemes. Using Monte Carlo simulation, we show that the theoretical gains translate into actual gains in practical systems. In the second part of the thesis, we propose to perform a joint optimization of the constellation labeling and the choice of parity-check coefficients, based on the Euclidean distance instead of the Hamming distance. An optimization method is proposed. Using the optimized matrices, a gain of 0.2 dB in performance is obtained with no additional complexity.
Peirani, Béatrice. "Cryptographie à base de codes correcteurs d'erreurs et générateurs aléatoires." Aix-Marseille 1, 1994. http://www.theses.fr/1994AIX11046.
Gennero, Marie-Claude. "Contribution à l'étude théorique et appliquée des codes correcteurs d'erreurs." Toulouse 3, 1990. http://www.theses.fr/1990TOU30243.
Dallot, Léonard. "Sécurité de protocoles cryptographiques fondés sur les codes correcteurs d'erreurs." Caen, 2010. http://www.theses.fr/2010CAEN2047.
Code-based cryptography appeared in 1978, in the early years of public-key cryptography. The purpose of this thesis is the study of the reductionist security of cryptographic constructions that belong to this category. After introducing some notions of cryptography and reductionist security, we present a rigorous analysis of the reductionist security of three code-based encryption schemes: McEliece's cryptosystem, Niederreiter's variant and a hybrid scheme proposed by N. Sendrier and B. Biswas. The legitimacy of this approach is next illustrated by the cryptanalysis of two variants of McEliece's scheme that aim at reducing the size of the keys necessary to ensure communication confidentiality. Then we present a reductionist security proof of a signature scheme proposed in 2001 by N. Courtois, M. Finiasz and N. Sendrier. In order to achieve this, we show that we need to slightly modify the scheme. Finally, we show that the techniques used in the previous scheme can also be used to build a provably secure threshold ring signature scheme.
Rosseel, Joachim. "Décodage de codes correcteurs d'erreurs assisté par apprentissage pour l'IoT." Electronic Thesis or Diss., CY Cergy Paris Université, 2023. http://www.theses.fr/2023CYUN1260.
Wireless communications, already very present in our society, still raise new challenges as part of the deployment of the Internet of Things (IoT), such as the development of new decoding methods at the physical layer ensuring good performance for the transmission of short messages. In particular, Low-Density Parity-Check (LDPC) codes are a family of error-correcting codes well known for their excellent asymptotic error-correction performance under iterative Belief Propagation (BP) decoding. However, the error-correcting capacity of the BP algorithm is severely deteriorated for short LDPC codes. Thus, this thesis focuses on improving the decoding of short LDPC codes, thanks in particular to machine learning tools such as neural networks. After introducing the notions and characteristics of LDPC codes and BP decoding, as well as the modeling of the BP algorithm by a Recurrent Neural Network (BP-Recurrent Neural Network, or BP-RNN), we develop new training methods specializing the BP-RNN on decoding error events sharing similar structural properties. These specialization approaches are subsequently associated with decoding architectures composed of several specialized BP-RNNs, where each BP-RNN is trained to decode a specific kind of error event (decoding diversity). Secondly, we are interested in the post-processing of the BP (or BP-RNN) output with Ordered Statistics Decoding (OSD) in order to close the gap to the maximum-likelihood (ML) decoding performance. To improve the post-processing performance, we optimize its input thanks to a single neuron and we introduce a multiple-OSD post-processing decoding strategy. We then show that this strategy effectively takes advantage of the diversity of its inputs, thus providing an effective way to close the gap with ML decoding.
Berhault, Guillaume. "Exploration architecturale pour le décodage de codes polaires." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0193/document.
Applications in the field of digital communications are becoming increasingly complex and diversified. Hence, correcting the errors introduced during transmission becomes an issue to be dealt with. To address this problem, error-correcting codes are used; in particular Polar Codes, which are the subject of this thesis. They were discovered recently (2008) by Arikan and are considered an important discovery in the field of error-correcting codes. Their practicality goes hand in hand with the ability to propose a hardware implementation of a decoder. This thesis focuses on the architectural exploration of Polar Code decoders implementing particular decoding algorithms. Thus, the subject revolves around two decoding algorithms: a first decoding algorithm returning hard decisions, and another decoding algorithm returning soft decisions. The first decoding algorithm treated in this thesis is based on the hard-decision algorithm called "successive cancellation" (SC), as originally proposed. Analysis of implementations of SC decoders shows that the partial sum computation unit is complex. Moreover, the amount of memory revealed by this analysis limits the implementation of large decoders. Research conducted in order to solve these problems led to an original architecture, based on shift registers, to compute the partial sums. This architecture allows the complexity of this unit to be reduced and its maximum working frequency to be increased. We also proposed a new methodology to redesign an existing decoder architecture, relatively simply, in order to reduce memory requirements. ASIC and FPGA syntheses were performed to characterize these contributions. The second decoding algorithm treated in this thesis is the soft-decision algorithm called SCAN. The study of the state of the art shows that the only other implemented soft-decision algorithm is the BP algorithm. However, it requires about fifty iterations to obtain the decoding performance of the SC algorithm. In addition, its memory requirements make it impractical for large code sizes. The interest of the SCAN algorithm lies in its performance, which is better than that of the BP algorithm with only two iterations. In addition, its lower memory footprint makes it more convenient and allows the implementation of larger decoders. We propose in this thesis a first implementation of this algorithm on FPGA targets. FPGA syntheses were carried out in order to compare the SCAN decoder with state-of-the-art BP decoders. The contributions proposed in this thesis brought a complexity reduction of the partial sum computation unit. Moreover, the amount of memory required by an SC decoder has been decreased. Finally, a SCAN decoder has been proposed; it can be used in the communication field together with other blocks requiring soft inputs, which broadens the application field of Polar Codes.
Belkasmi, Mostafa. "Contribution à l'étude des codes correcteurs multicirculants." Toulouse 3, 1992. http://www.theses.fr/1992TOU30120.
Tale, Kalachi Herve. "Sécurité des protocoles cryptographiques fondés sur la théorie des codes correcteurs d'erreurs." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMR045/document.
Contrary to cryptosystems based on number theory, the security of cryptosystems based on error-correcting codes appears to be resistant to the emergence of quantum computers. Another advantage of these systems is that encryption and decryption are very fast: about five times faster for encryption, and 10 to 100 times faster for decryption, compared to the RSA cryptosystem. Nowadays, the interest of the scientific community in code-based cryptography is highly motivated by the latest announcement of the National Institute of Standards and Technology (NIST). They initiated the Post-Quantum Cryptography project, which aims to define new standards for quantum-resistant cryptography, and fixed the deadline for public-key cryptographic algorithm submissions at November 2017. This announcement motivates the study of the security of existing schemes in order to find out whether they are secure. This thesis thus presents several attacks which dismantle several code-based encryption schemes. We started with a cryptanalysis of a modified version of the Sidelnikov cryptosystem proposed by Gueye and Mboup [GM13], which is based on Reed-Muller codes. This modified scheme consists in inserting random columns in the secret generating matrix or parity-check matrix. The cryptanalysis relies on the computation of the square of the public code. The particular nature of Reed-Muller codes, which are defined by means of multivariate binary polynomials, permits the values of the dimensions of the square codes to be predicted, and then the secret positions of the random columns to be fully recovered in polynomial time. Our work shows that the insertion of random columns in the Sidelnikov scheme does not bring any security improvement. The second result is an improved cryptanalysis of several variants of the GPT cryptosystem, which is a rank-metric scheme based on Gabidulin codes. We prove that any variant of the GPT cryptosystem which uses a right column scrambler over the extension field, as advocated by the works of Gabidulin et al. [Gab08, GRH09, RGH11] with the goal of resisting Overbeck's structural attack [Ove08], is actually still vulnerable to that attack. We show that by applying the Frobenius operator appropriately on the public key, it is possible to build a Gabidulin code having the same dimension as the original secret Gabidulin code, but with a smaller length. In particular, the code obtained in this way corrects fewer errors than the secret one, but its error-correction capability is beyond the number of errors added by a sender, and consequently an attacker is able to decrypt any ciphertext with this degraded Gabidulin code. We also considered the case where an isometric transformation is applied in conjunction with a right column scrambler which has its entries in the extension field. We proved that this protection is useless both in terms of performance and security. Consequently, our results show that all the existing techniques aiming to hide the inherent algebraic structure of Gabidulin codes have failed. To finish, we studied the security of the Faure-Loidreau encryption scheme [FL05], which is also a rank-metric scheme based on Gabidulin codes. Inspired by our preceding work, and although the structure of the scheme differs considerably from the classical setting of the GPT cryptosystem, we show that for a range of parameters this scheme is also vulnerable to a polynomial-time attack that recovers the private key by applying Overbeck's attack on an appropriate public code. As an example, we break in a few seconds parameters with an 80-bit security claim.
Senigon, de Roumefort Ravan de. "Approche statistique du vieillissement des disques optiques CD-Audio, CD-R, CD-RW." Paris 6, 2011. http://www.theses.fr/2011PA066109.
Diab, Menouer. "Contribution à l'étude des architectures systoliques pour les codes correcteurs d'erreurs." Toulouse 3, 1992. http://www.theses.fr/1992TOU30147.
Poli, Christine. "S. E. C. C : système expert pour les codes correcteurs d'erreurs." Toulouse 3, 1995. http://www.theses.fr/1995TOU30223.
Ollivier, Harold. "Eléments de théorie de l'information quantique, décohérence et codes correcteurs d'erreurs." Palaiseau, Ecole polytechnique, 2004. http://www.theses.fr/2004EPXX0027.
Planquette, Guillaume. "Étude de certaines propriétés algébriques et spectrales des codes correcteurs d'erreurs." Rennes 1, 1996. http://www.theses.fr/1996REN10207.
Pham, Sy Lam. "Codes correcteurs d'erreurs au niveau applicatif pour les communications par satellite." Thesis, Cergy-Pontoise, 2012. http://www.theses.fr/2012CERG0574.
The advent of content distribution, IPTV, video-on-demand and other similar services accelerates the demand for reliable data transmission over highly heterogeneous networks and toward terminals that are potentially heterogeneous too. In this context, Forward Error Correction (FEC) codes that operate at the transport or application layer (AL-FEC) are used in conjunction with the FEC codes implemented at the physical layer, in order to improve the overall performance of the communication system. AL-FEC codes are aimed at recovering erased data packets and they are essential in many multicast/broadcast environments, no matter the way the information is transported, for instance using a wired or wireless link and a terrestrial, satellite-based or hybrid infrastructure. This thesis addresses the design of Low-Density Parity-Check (LDPC) codes for AL-FEC applications. On the one hand, we provide an asymptotic analysis of non-binary LDPC codes over erasure channels, as well as waterfall and error-floor optimization techniques for finite-length codes. On the other hand, new concepts and coding techniques are developed in order to fully exploit the potential of non-binary LDPC codes. The first contribution of this thesis consists of the analysis and optimization of two new ensembles of LDPC codes. First, we have derived the density evolution equations for a very general ensemble of non-binary LDPC codes with rank-deficient coefficients. This allows the code performance to be improved, as well as the design of ensembles of LDPC codes that can be punctured in an effective manner. The second approach allows the asymptotic optimization of a particular ensemble of LDPC codes, while ensuring low error floors at finite lengths. The second contribution is the construction of finite-length LDPC codes with good waterfall and error-floor performance. Two approaches were investigated, according to the metric used to evaluate the code. The "Scheduled" Progressive Edge Growth (SPEG) algorithm is proposed in order to optimize the waterfall performance of the code. Another method is proposed, which consists in optimizing a specific structure of the parity-check matrix. This approach gives low error floors. The third contribution investigates a new technique of rate adaptability for non-binary LDPC codes. We propose a new method to generate "on-the-fly" incremental redundancy, which allows the design of codes with flexible coding rates, in order to cope with severe channel conditions or to enable Fountain-like distribution applications. The fourth contribution focuses on a new class of LDPC codes, called non-binary cluster-LDPC codes. We derive exact density evolution equations for iterative decoding and an upper bound for maximum-likelihood decoding. Finally, we propose a practical solution to the problem of reliable communication via satellite to high-speed trains. Here, the challenge is that obstacles present along the track regularly interrupt the communication. Our solution offers optimal performance with a minimum amount of redundancy.
Richmond, Tania. "Implantation sécurisée de protocoles cryptographiques basés sur les codes correcteurs d'erreurs." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSES048/document.
The first cryptographic protocol based on error-correcting codes was proposed in 1978 by Robert McEliece. Cryptography based on codes is called post-quantum because, until now, no algorithm able to attack this kind of protocol in polynomial time, even using a quantum computer, has been proposed. This is in contrast with protocols based on number-theory problems like the factorization of large numbers, for which Shor's efficient algorithm can be used on quantum computers. Nevertheless, the security of the McEliece cryptosystem is not based only on mathematical problems. The implementation (in software or hardware) is also very important for its security. The study of side-channel attacks against the McEliece cryptosystem began in 2008, and improvements can still be made. In this thesis, we propose new attacks against decryption in the McEliece cryptosystem used with classical Goppa codes, together with corresponding countermeasures. The proposed attacks are based on evaluating the execution time of the algorithm or on analyzing its power consumption. The associated countermeasures are based on mathematical and algorithmic properties of the underlying algorithm. We show that it is necessary to secure the decryption algorithm by considering it as a whole and not only step by step.
Cayrel, Pierre-Louis. "Construction et optimisation de cryptosystèmes basés sur les codes correcteurs d'erreurs." Limoges, 2008. https://aurore.unilim.fr/theses/nxfile/default/46aac3f7-1539-4684-bef6-9b1ae632c183/blobholder:0/2008LIMO4026.pdf.
In this thesis, we are interested in the study of encryption systems as well as signature schemes whose security relies on difficult problems related to error-correcting codes. These research activities have been motivated, on the one hand, from a theoretical point of view, by the creation of new signature schemes with special properties and of a way of reducing the key size of the McEliece scheme, and on the other hand, from a practical point of view, by the use of structural properties to obtain efficient implementations of a signature scheme based on error-correcting codes. As its title indicates, this thesis deals with the construction and optimization of cryptosystems based on error-correcting codes, and more particularly with five new protocols. It presents a secure version of the Stern scheme in a low-resource environment, a new construction of the Kabatianski, Krouk and Smeets scheme, an identity-based signature scheme proved secure in the random oracle model, a threshold ring signature scheme, and a reduction of the key size of the McEliece scheme using quasi-cyclic alternant codes. In the annex, this work deals with algebraic attacks against linear feedback shift registers with memory. It also presents a brief study of cyclic codes over matrix rings.
Khedher, Houda. "Effets des codes correcteurs d'erreurs sur les systèmes CDMA à taux multiples." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0004/MQ44700.pdf.
Full textFiniasz, Matthieu. "Nouvelles constructions utilisant des codes correcteurs d'erreurs en cryptographie à clef publique." Palaiseau, Ecole polytechnique, 2004. http://www.theses.fr/2004EPXX0033.
Full textRochet, Raphaël. "Synthèse Automatique de Contrôleurs avec Contraintes de Sûreté de Fonctionnement." Phd thesis, Grenoble INPG, 1996. http://tel.archives-ouvertes.fr/tel-00345417.
Full textBonvard, Aurélien. "Algorithmes de détection et de reconstruction en aveugle de code correcteurs d'erreurs basés sur des informations souples." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0178.
Recent decades have seen the rise of digital communications. This has led to a proliferation of communication standards, requiring greater adaptability of communication systems. One way to make these systems more flexible is to design an intelligent receiver that would be able to retrieve all the parameters of the transmitter from the received signal. In this manuscript, we are interested in the blind identification of error-correcting codes. We propose original methods based on the calculation of Euclidean distances between noisy symbol sequences. First, a classification algorithm allows the detection of a code and then the identification of its codeword length. A second algorithm, based on the number of collisions, allows the length of the information words to be identified. Then, we propose another method using the minimum Euclidean distances to identify the block code length. Finally, a method for reconstructing the dual code of an error-correcting code is presented.
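The distance/collision idea mentioned above can be illustrated with a toy experiment (the (7,4) Hamming code, the BPSK mapping, the noise level and the distance threshold are illustrative choices, not the algorithms of the thesis): when a noisy codeword stream is segmented at the true codeword length, many block pairs are noisy copies of the same codeword and lie at a small Euclidean distance; at a wrong segmentation length, markedly fewer such near-collisions occur.

```python
# Count near-collisions (block pairs at small Euclidean distance) for several
# candidate block lengths; the true codeword length stands out.
import numpy as np

rng = np.random.default_rng(4)
G = np.array([[1, 0, 0, 0, 1, 1, 0],       # systematic generator of the (7,4) Hamming code
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# Build a noisy BPSK stream of 300 random codewords.
msgs = rng.integers(0, 2, (300, 4))
codewords = msgs @ G % 2
stream = (2 * codewords.ravel() - 1) + rng.normal(0.0, 0.3, 300 * 7)

def near_collisions(stream, length, threshold=2.0):
    """Number of block pairs at Euclidean distance below the threshold."""
    blocks = stream[: len(stream) // length * length].reshape(-1, length)
    d = np.linalg.norm(blocks[:, None, :] - blocks[None, :, :], axis=2)
    return int(np.sum(np.triu(d < threshold, k=1)))

for candidate in (5, 6, 7, 8, 9):
    print(f"block length {candidate}: {near_collisions(stream, candidate)} near-collisions")
```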
Calas, Yvan. "Performances des codes correcteurs d'erreur au niveau applicatif dans les réseaux." Montpellier 2, 2003. http://www.theses.fr/2003MON20166.
Full textGillot, Valérie. "La methode des sommes exponentielles a plusieurs variables pour les codes correcteurs d'erreurs." Toulon, 1993. http://www.theses.fr/1993TOUL0003.
Full textJaber, Houssein. "Conception architecturale haut débit et sûre de fonctionnement pour les codes correcteurs d'erreurs." Thesis, Metz, 2009. http://www.theses.fr/2009METZ042S/document.
Nowadays, modern communication systems require higher and higher data throughputs to transmit increasing volumes of data. They must be flexible to handle multi-standard environments, and able to evolve to accommodate future standards. For these systems, quality of service (QoS) must be guaranteed despite the evolution of microelectronics technologies, which increases the sensitivity of integrated circuits to external perturbations (particle impacts, loss of signal integrity, etc.). Fault-tolerance techniques are becoming an increasingly important criterion to improve dependability and quality of service. This thesis' work continues previous research undertaken at the LICM laboratory on the architectural design of high-speed, low-cost, dependable transmission systems. It focuses on two principal areas of research. The first area concerns the speed and flexibility aspects, particularly the study and implementation of parallel-pipelined architectures dedicated to recursive convolutional encoders. The principle is based on the optimization of the blocks that calculate the remainder of the polynomial division, which constitutes the critical operation of the encoding. This approach is generalized to recursive IIR filters. The main architectural characteristics aimed at are high flexibility and scalability, while preserving a good trade-off between the amount of resources used (and hence area consumption) and the obtained performance (operating speed). The second topic concerns the development of a methodology for designing FS (fault-secure) encoders, improving the fault tolerance of digital integrated circuits. The proposed approach consists in adding extra blocks to the encoders, allowing online error detection. The proposed solutions offer a good compromise between complexity and operating frequency. For even higher throughput, parallel-pipelined implementations of FS encoders were considered. Fault-injection campaigns of single, double, and random errors were applied to the encoders in order to evaluate the error detection rates. The study of dependable architectures was extended to parallel-pipelined decoders for cyclic block codes. This approach is based on a slight modification of the parallel-pipelined architectures developed at the LICM laboratory, introducing some redundancy in order to make them dependable.
Jaber, Houssein. "Conception architecturale haut débit et sûre de fonctionnement pour les codes correcteurs d'erreurs." Electronic Thesis or Diss., Metz, 2009. http://www.theses.fr/2009METZ042S.
Couvreur, Alain. "Résidus de 2-formes différentielles sur les surfaces algébriques et applications aux codes correcteurs d'erreurs." Phd thesis, Université Paul Sabatier - Toulouse III, 2008. http://tel.archives-ouvertes.fr/tel-00376546.