Dissertations / Theses on the topic 'Low-density parity-check convolutional codes'


Consult the top 50 dissertations / theses for your research on the topic 'Low-density parity-check convolutional codes.'


1

Hussein, Ahmed Refaey Ahmed. "Universal Decoder for Low Density Parity Check, Turbo and Convolutional Codes." Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/28154/28154.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Meidan, Amir. "Linear-time encodable low-density parity-check codes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0006/MQ40942.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sharifi Tehrani, Saeed. "Stochastic decoding of low-density parity-check codes." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=97010.

Full text
Abstract:
Low-Density Parity-Check (LDPC) codes are one of the most powerful classes of error-control codes known to date. These codes have been considered for many recent digital communication applications. In this dissertation, we propose stochastic decoding of state-of-the-art LDPC codes and demonstrate it as a competitive approach to practical LDPC decoding algorithms. In stochastic decoding, probabilities are represented as streams of random bits using Bernoulli sequences in which the information is contained in the statistics of the bit stream. This representation results in low hardware-complexity processing nodes that perform computationally intensive operations. However, stochastic decoding is prone to the acute problem of latching. This problem is caused by correlated bit streams within cycles in the code's factor graph, and it significantly deteriorates the performance of stochastic LDPC decoders. We propose edge memories, tracking forecast memories, and majority-based tracking forecast memories to address the latching problem. These units efficiently extract the evolving statistics of stochastic bit streams and rerandomize them to disrupt latching. To the best of our knowledge, these methods are the first successful methods for stochastic decoding of state-of-the-art LDPC codes. We present novel decoder architectures and report on several hardware implementations. The most advanced reported implementation is a stochastic decoder that decodes the (2048,1723) LDPC code from the IEEE 802.3an standard. To the best of our knowledge, this decoder is the most silicon area-efficient and, with a maximum core throughput of 61.3 Gb/s, is one of the fastest fully parallel soft-decision LDPC decoders reported in the literature. We demonstrate the performance of this decoder in low bit-error-rate regimes. In addition to stochastic LDPC decoding, we propose the novel application of the stochastic approach for joint decoding of LDPC codes and partial-response channels that are considered in practical magnetic recording applications. Finally, we investigate the application of the stochastic approach for decoding linear block codes with high-density parity-check matrices on factor graphs. We consider Reed-Solomon, Bose-Chaudhuri-Hocquenghem, and block turbo codes.
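As a concrete illustration of the representation this abstract describes: in the stochastic domain a probability becomes a Bernoulli bit stream, and a parity-check node reduces to a bitwise XOR. The sketch below is illustrative only (its names and stream length are not from the thesis) and omits the edge/tracking-forecast memories that are the dissertation's actual contribution.

```python
import random

def bernoulli_stream(p, n):
    """Encode probability p as n random bits with P[bit = 1] = p."""
    return [1 if random.random() < p else 0 for _ in range(n)]

def check_node(stream_a, stream_b):
    """Stochastic check-node update: bitwise XOR implements
    p_out = p_a*(1 - p_b) + p_b*(1 - p_a)."""
    return [a ^ b for a, b in zip(stream_a, stream_b)]

def estimate(stream):
    """Recover a probability from the stream statistics."""
    return sum(stream) / len(stream)

random.seed(1)
a = bernoulli_stream(0.8, 100_000)
b = bernoulli_stream(0.3, 100_000)
# Expected: 0.8*0.7 + 0.3*0.2 = 0.62
print(round(estimate(check_node(a, b)), 3))
```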
APA, Harvard, Vancouver, ISO, and other styles
4

Davey, M. C. "Error-correction using low-density parity-check codes." Thesis, University of Cambridge, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.598305.

Full text
Abstract:
Gallager's low-density parity-check codes are defined by sparse parity-check matrices, usually with a random construction. Such codes have near-Shannon-limit performance when decoded using an iterative probabilistic decoding algorithm. We report two advances that improve the error-correction performance of these codes. First, by defining the codes over non-binary fields we can obtain a 0.6 dB improvement in signal-to-noise ratio for a given bit error rate. Second, using irregular parity-check matrices with non-uniform row and column weights we obtain gains of up to 0.5 dB. The empirical error-correction performance of irregular low-density parity-check codes is unbeaten for the additive white Gaussian noise channel. Low-density parity-check codes are also shown to be useful for communicating over channels which make insertions and deletions as well as additive (substitution) errors. Error-correction for such channels has not been widely studied, but it is of importance whenever synchronisation of sender and receiver is imperfect. We introduce concatenated codes using novel non-linear inner codes which we call 'watermark' codes, and low-density parity-check codes over non-binary fields as outer codes. The inner code allows resynchronisation using a probabilistic decoder, providing soft outputs for the outer low-density parity-check decoder. Error-correction performance using watermark codes is several orders of magnitude better than any comparable results in the literature.
APA, Harvard, Vancouver, ISO, and other styles
5

Hayes, Bob. "LOW DENSITY PARITY CHECK CODES FOR TELEMETRY APPLICATIONS." International Foundation for Telemetering, 2007. http://hdl.handle.net/10150/604497.

Full text
Abstract:
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada
Next generation satellite communication systems require efficient coding schemes that enable high data rates, require low overhead, and have excellent bit error rate performance. A newly rediscovered class of block codes called Low Density Parity Check (LDPC) codes has the potential to revolutionize forward error correction (FEC) because of the very high coding rates. This paper presents a brief overview of LDPC coding and decoding. An LDPC algorithm developed by Goddard Space Flight Center is discussed, and an overview of an accompanying VHDL development by L-3 Communications Cincinnati Electronics is presented.
APA, Harvard, Vancouver, ISO, and other styles
6

Moon, Todd K., and Jacob H. Gunther. "AN INTRODUCTION TO LOW-DENSITY PARITY-CHECK CODES." International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/607470.

Full text
Abstract:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Low-Density Parity-Check (LDPC) codes are powerful codes capable of nearly achieving the Shannon channel capacity. This paper presents a tutorial introduction to LDPC codes, with a detailed description of the decoding algorithm. The algorithm propagates information about bit and check probabilities through a tree obtained from the Tanner graph for the code. This paper may be useful as a supplement in a course on error-control coding or digital communication.
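The two node updates the tutorial refers to can be written compactly in log-likelihood-ratio form; the following is a minimal sketch of the standard sum-product rules (an assumption of the usual formulation, not code from the paper).

```python
import math

def check_to_bit(other_llrs):
    """Check-node update: L_out = 2 * atanh( prod_i tanh(L_i / 2) ),
    taken over the messages on the *other* edges of the check."""
    prod = 1.0
    for llr in other_llrs:
        prod *= math.tanh(llr / 2.0)
    prod = max(min(prod, 0.999999), -0.999999)  # keep atanh in-domain
    return 2.0 * math.atanh(prod)

def bit_to_check(channel_llr, other_check_llrs):
    """Bit-node update: channel LLR plus the extrinsic check messages."""
    return channel_llr + sum(other_check_llrs)

print(round(check_to_bit([1.2, -0.5, 2.0]), 3))
```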
APA, Harvard, Vancouver, ISO, and other styles
7

Anitei, Irina. "Circular Trellis based Low Density Parity Check Codes." Ohio University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1226513009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Price, Aiden K. "Improved constructions of low-density parity-check codes." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/128373/1/Aiden_Price_Thesis.pdf.

Full text
Abstract:
There is an ongoing need to improve the efficiency and error-correcting performance of error correcting codes, which are widely used to enhance accuracy when retrieving or communicating information. This research investigates several potential improvements to a high-performing class of error correcting codes known as low-density parity-check (LDPC) codes. The results presented here further the known literature surrounding a specific class of functions (Alltop functions). Additionally, this work demonstrates ways of manipulating existing LDPC code constructions using relaxed difference sets to provide constructions with far more flexible code parameters. These constructions have competitive performance when compared to relevant modern codes.
APA, Harvard, Vancouver, ISO, and other styles
9

Adhikari, Dikshya. "The Role of Eigenvalues of Parity Check Matrix in Low-Density Parity Check Codes." Thesis, University of North Texas, 2020. https://digital.library.unt.edu/ark:/67531/metadc1707297/.

Full text
Abstract:
The new developments in coding theory research have revolutionized the application of coding to practical systems. Low-Density Parity-Check (LDPC) codes form a class of Shannon-limit-approaching codes adopted in digital communication systems that require high reliability. This thesis investigates the underlying relationship between the spectral properties of the parity-check matrix and LDPC decoding convergence. The bit error rate of an LDPC code is plotted for parity-check matrices whose corresponding Laplacian matrices have different values of the Second Smallest Eigenvalue Modulus (SSEM). It is found that for a given (n,k) LDPC code, a large SSEM gives better error-floor performance than a small SSEM. The value of SSEM decreases as the sparseness of the parity-check matrix is increased. It was also found from the simulations that long LDPC codes have better error-floor performance than short codes. This thesis outlines an approach to analyzing LDPC decoding based on the eigenvalue analysis of the corresponding parity-check matrix.
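For reference, the quantity studied here can be computed directly. The sketch below assumes the Laplacian is formed from the bipartite (Tanner-graph) adjacency implied by H, which is one plausible reading of the thesis; the exact graph construction used there may differ.

```python
import numpy as np

def ssem(H):
    """Second Smallest Eigenvalue Modulus of the Laplacian of the graph
    described by parity-check matrix H (checks and bits as vertices)."""
    m, n = H.shape
    A = np.zeros((m + n, m + n))     # bipartite adjacency: checks | bits
    A[:m, m:] = H
    A[m:, :m] = H.T
    L = np.diag(A.sum(axis=1)) - A   # graph Laplacian D - A
    return np.sort(np.abs(np.linalg.eigvalsh(L)))[1]

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
print(round(ssem(H), 4))
```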
APA, Harvard, Vancouver, ISO, and other styles
10

Blad, Anton. "Efficient Decoding Algorithms for Low-Density Parity-Check Codes." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2794.

Full text
Abstract:
Low-density parity-check codes have recently received much attention because of their excellent performance and the availability of a simple iterative decoder. The decoder, however, requires large amounts of memory, which causes problems in practical implementations.

We investigate a new decoding scheme for low-density parity-check codes to address this problem. The basic idea is to define a reliability measure and a threshold, and to stop updating the messages for a bit whenever its reliability is higher than the threshold. We also consider some modifications to this scheme, including a dynamic threshold more suitable for codes with cycles, and a scheme with soft thresholds which allows a decision that has proved wrong to be removed.

By exploiting the bits' different rates of convergence we are able to achieve an efficiency of up to 50% at a bit error rate of less than 10^-5. The efficiency should roughly correspond to the power consumption of a hardware implementation of the algorithm.
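The idea of freezing reliable bits is easy to state in code. The following is a minimal sketch under stated assumptions (a generic `update_bit` callback and an illustrative threshold; the thesis's reliability measure and dynamic/soft thresholds are not reproduced here).

```python
def decode_with_freezing(posteriors, update_bit, max_iters=50, threshold=8.0):
    """Iterative decoding in which a bit is 'frozen' (its messages are no
    longer updated) once its reliability |LLR| crosses the threshold."""
    frozen = [False] * len(posteriors)
    for _ in range(max_iters):
        if all(frozen):
            break
        for v in range(len(posteriors)):
            if frozen[v]:
                continue                      # no more updates for this bit
            posteriors[v] = update_bit(v, posteriors)
            if abs(posteriors[v]) >= threshold:
                frozen[v] = True
    return posteriors

# Toy update that merely strengthens each LLR toward its current sign.
print(decode_with_freezing([0.4, -1.1, 2.5], lambda v, p: p[v] * 1.5))
```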
APA, Harvard, Vancouver, ISO, and other styles
11

Ha, Jeongseok Ha. "Low-Density Parity-Check Codes with Erasures and Puncturing." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/5296.

Full text
Abstract:
In this thesis, we extend applications of Low-Density Parity-Check (LDPC) codes to a combination of constituent sub-channels, which is a mixture of Gaussian channels with erasures. This model, for example, represents a common channel in magnetic recording where thermal asperities in the system are detected and represented at the decoder as erasures. Although this channel is practically useful, we cannot find any previous work that evaluates the performance of LDPC codes over this channel. We are also interested in practical issues such as designing robust LDPC codes for the mixture channel and predicting performance variations due to erasure patterns (random and burst) and finite block lengths. On time-varying channels, a common error-control strategy is to adapt the coding rate according to available channel state information (CSI). An effective way to realize this coding strategy is to use a single code and puncture it in a rate-compatible fashion, a so-called rate-compatible punctured code (RCPC). We are interested in the existence of good puncturing patterns for rate-changes that minimize performance loss. We show the existence of good puncturing patterns with analysis and verify the results with simulations. Universality of a channel code across a broad range of coding rates is a theoretically interesting topic. We are interested in the possibility of using the puncturing technique proposed in this thesis for designing universal LDPC codes. We also consider how to design high-rate LDPC codes by puncturing low-rate LDPC codes. The new design method can take advantage of the longer effective block lengths, sparser parity-check matrices, and larger minimum distances of low-rate LDPC codes.
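Rate-compatible puncturing itself is simple to sketch: punctured bits are not transmitted, and the decoder treats them as erasures (LLR = 0). The pattern below is illustrative, not one of the optimized patterns the thesis derives.

```python
def puncture(codeword, pattern):
    """Transmit only the positions not marked in the puncturing pattern."""
    return [c for c, punct in zip(codeword, pattern) if not punct]

def depuncture_llrs(received_llrs, pattern):
    """At the decoder, re-insert punctured positions as erasures (LLR = 0)."""
    llrs, it = [], iter(received_llrs)
    for punct in pattern:
        llrs.append(0.0 if punct else next(it))
    return llrs

pattern = [0, 0, 1, 0, 1, 0]                  # puncture positions 2 and 4
print(puncture([1, 0, 1, 1, 0, 0], pattern))  # 4 of 6 bits sent: rate rises
print(depuncture_llrs([-3.1, 2.4, -2.8, 1.9], pattern))
```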
APA, Harvard, Vancouver, ISO, and other styles
12

Ismail, Mohamed Rafiq. "High throughput decoding of low density parity check codes." Thesis, University of Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.556712.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Vijayakumar, Suresh. "FPGA implementation of low density parity check codes decoder." [Denton, Tex.] : University of North Texas, 2009. http://digital.library.unt.edu/permalink/meta-dc-11003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Pirou, Florent. "Low-density Parity-Check decoding Algorithms." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2160.

Full text
Abstract:
Recently, low-density parity-check (LDPC) codes have attracted much attention because of their excellent error-correcting performance and highly parallelizable decoding scheme. However, the effective VLSI implementation of an LDPC decoder remains a big challenge and is a crucial issue in determining how well we can exploit the benefits of LDPC codes in real applications. In this master's thesis report, following a background on error-control coding, we describe low-density parity-check codes and their decoding algorithm, as well as the requirements and architectures of LDPC decoder implementations.
APA, Harvard, Vancouver, ISO, and other styles
15

Planjery, Shiva Kumar. "Low-Complexity Finite Precision Decoders for Low-Density Parity-Check Codes." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605947.

Full text
Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
We present a new class of finite-precision decoders for low-density parity-check (LDPC) codes. These decoders are much lower in complexity compared to conventional floating-point decoders such as the belief propagation (BP) decoder, but they have the potential to outperform BP. The messages utilized by the decoders assume values (or levels) from a finite discrete set. We discuss the implementation aspects as well as describe the underlying philosophy in designing these decoders. We also provide results to show that in some cases, only 3 bits are required in the proposed decoders to outperform floating-point BP.
APA, Harvard, Vancouver, ISO, and other styles
16

Zhang, Kai. "High-Performance Decoder Architectures For Low-Density Parity-Check Codes." Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-dissertations/17.

Full text
Abstract:
Low-Density Parity-Check (LDPC) codes, which were invented by Gallager back in the 1960s, have attracted considerable attention recently. Compared with other error-correction codes, LDPC codes are well suited for wireless, optical, and magnetic recording systems due to their near-Shannon-limit error-correcting capacity, high intrinsic parallelism and high-throughput potential. With these remarkable characteristics, LDPC codes have been adopted in several recent communication standards such as 802.11n (Wi-Fi), 802.16e (WiMax), 802.15.3c (WPAN), DVB-S2 and CMMB. This dissertation is devoted to exploring efficient VLSI architectures for high-performance LDPC decoders and LDPC-like detectors in sparse inter-symbol interference (ISI) channels. The performance of an LDPC decoder is mainly evaluated by area efficiency, error-correcting capability, throughput and rate flexibility. With this work we investigate tradeoffs between the four performance aspects and develop several decoder architectures that improve one or several aspects while maintaining acceptable values for the others. First, we present a high-throughput decoder design for Quasi-Cyclic (QC) LDPC codes. Two new techniques are proposed: a parallel layered decoding architecture (PLDA) and critical path splitting. The parallel layered decoding architecture enables parallel processing of all layers by establishing dedicated message-passing paths among them, so the decoder avoids a large crossbar-based interconnect network. The critical path splitting technique is based on careful adjustment of the starting point of each layer to maximize the time intervals between adjacent layers, such that the critical path delay can be split into pipeline stages. Furthermore, min-sum and loosely coupled algorithms are employed for area efficiency. As a case study, a rate-1/2 2304-bit irregular LDPC decoder is implemented in a 90 nm CMOS ASIC process. The decoder achieves an input throughput of 1.1 Gbps, a 3- to 4-fold improvement over state-of-the-art LDPC decoders, while maintaining a comparable chip size of 2.9 mm^2. Second, we present a high-throughput decoder architecture for rate-compatible (RC) LDPC codes which supports arbitrary code rates between the rate of the mother code and 1. While the original PLDA lacks rate flexibility, the problem is solved gracefully by incorporating a puncturing scheme. Simulation results show that our selected puncturing scheme introduces a BER performance degradation of less than 0.2 dB compared with the dedicated codes for different rates specified in the IEEE 802.16e (WiMax) standard. PLDA is then employed for the high-throughput decoder design. As a case study, an RC-LDPC decoder based on the rate-1/2 WiMax LDPC code is implemented in a 90 nm CMOS process. The decoder achieves an input throughput of 975 Mbps and supports any rate between 1/2 and 1. Third, we develop a low-complexity VLSI architecture and implementation for the LDPC decoder used in China Multimedia Mobile Broadcasting (CMMB) systems. An area-efficient layered decoding architecture based on the min-sum algorithm is incorporated in the design. A novel split-memory architecture is developed to efficiently handle the weight-2 submatrices that are rarely seen in conventional LDPC decoders. In addition, the check-node processing unit is highly optimized to minimize complexity and computing latency while facilitating a reconfigurable decoding core.
Finally, we propose an LDPC-decoder-like channel detector for sparse ISI channels using belief propagation (BP). The complexity of BP-based detection depends only on the number of nonzero interferers, so it is well suited for sparse ISI channels, which are characterized by long delay but a small fraction of nonzero interferers. The layered decoding algorithm, which is popular in LDPC decoding, is also adopted. Simulation results show that layered decoding doubles the convergence speed of the iterative belief propagation process. Exploiting the special structure of the connections between the check nodes and the variable nodes on the factor graph, we propose an effective detector architecture for generic sparse ISI channels to facilitate the practical application of the proposed detection algorithm. The proposed architecture is also reconfigurable, in order to switch flexible connections on the factor graph in time-varying ISI channels.
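The min-sum check-node update used throughout these architectures is hardware-friendly precisely because each outgoing magnitude needs only the two smallest input magnitudes. A minimal sketch of that standard rule (not code from the dissertation):

```python
def min_sum_check_update(in_llrs):
    """Min-sum check-node update via the two-minimum trick: the outgoing
    magnitude on each edge is the smallest |LLR| among the other edges."""
    mags = [abs(x) for x in in_llrs]
    m1 = min(mags)
    i1 = mags.index(m1)
    m2 = min(m for i, m in enumerate(mags) if i != i1)
    total_sign = 1
    for x in in_llrs:
        total_sign *= -1 if x < 0 else 1
    out = []
    for i, x in enumerate(in_llrs):
        sign = total_sign * (-1 if x < 0 else 1)   # exclude own sign
        out.append(sign * (m2 if i == i1 else m1))
    return out

print(min_sum_check_update([1.5, -0.4, 2.0, -3.0]))  # [0.4, -1.5, 0.4, -0.4]
```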
APA, Harvard, Vancouver, ISO, and other styles
17

Kolayli, Mert. "Comparison Of Decoding Algorithms For Low-density Parity-check Codes." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607731/index.pdf.

Full text
Abstract:
Low-density parity-check (LDPC) codes are a subclass of linear block codes. These codes have parity-check matrices in which the ratio of non-zero elements to all elements is low. This property is exploited in defining low-complexity decoding algorithms. Low-density parity-check codes have good distance properties and error-correction capability near Shannon limits. In this thesis, the sum-product and the bit-flip decoding algorithms for low-density parity-check codes are implemented in MATLAB on an Intel Pentium M 1.86 GHz processor. Simulations for the two decoding algorithms are made over the additive white Gaussian noise (AWGN) channel, changing code parameters such as the information rate, the blocklength of the code and the column weight of the parity-check matrix. A performance comparison of the two decoding algorithms is made according to these simulation results. As expected, the sum-product algorithm, which is based on soft-decision decoding, outperforms the bit-flip algorithm, which depends on hard-decision decoding. Our simulations show that the performance of LDPC codes improves with increasing blocklength and number of iterations for both decoding algorithms. Since the sum-product algorithm has lower error-floor characteristics, increasing the number of iterations is more effective for the sum-product decoder than for the bit-flip decoder. By having better BER performance for lower information rates, the bit-flip algorithm performs according to expectations; however, the performance of the sum-product decoder deteriorates for information rates below 0.5 instead of improving. With irregular construction of LDPC codes, a performance improvement is observed, especially for low SNR values.
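For comparison with the sum-product rules sketched earlier in this list, Gallager's hard-decision bit-flipping algorithm fits in a few lines; the sketch below (illustrative H and error pattern, not code from the thesis) flips the bits involved in the most unsatisfied checks each round.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Hard-decision bit-flipping: flip the bit(s) participating in the
    largest number of unsatisfied parity checks, until the syndrome is 0."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            return x                       # all checks satisfied
        fails = H.T.dot(syndrome)          # unsatisfied-check count per bit
        x[fails == fails.max()] ^= 1       # flip the worst offenders
    return x

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
received = np.array([0, 0, 1, 0, 0, 0])    # all-zero codeword, one error
print(bit_flip_decode(H, received))        # -> [0 0 0 0 0 0]
```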
APA, Harvard, Vancouver, ISO, and other styles
18

Kim, Jaehong. "Design of rate-compatible structured low-density parity-check codes." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/19723.

Full text
Abstract:
Thesis (Ph.D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2007. Committee Chair: McLaughlin, Steven; Committee Member: Barry, John; Committee Member: Boldyreva, Alexandra; Committee Member: Clements, Mark; Committee Member: Li, Ye.
APA, Harvard, Vancouver, ISO, and other styles
19

Richter, Gerd. "Puncturing, mapping, and design of low-density parity-check codes." Düsseldorf VDI-Verl, 2008. http://d-nb.info/99372230X/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

De Amicis, Amedeo. "Serially concatenated low-density parity-check codes for error correction." Doctoral thesis, Università Politecnica delle Marche, 2011. http://hdl.handle.net/11566/241943.

Full text
Abstract:
This thesis elaborates on the design of Multiple Serially Concatenated Multiple Parity-Check (M-SC-MPC) codes, a class of structured Low-Density Parity-Check (LDPC) codes characterized by very simple encoding. It also studies how the design of M-SC-MPC codes can be optimized for use in wireless applications. Irregular LDPC codes, in fact, have been proved to be better than regular ones, especially for low code rates. Particular attention is devoted to a simple modification of the inner structure of M-SC-MPC codes that can help to improve their error-correction performance by introducing irregularity in the parity-check matrix and increasing the length of local cycles in the associated Tanner graph. Furthermore, this thesis presents a modified version of the Progressive Edge Growth (PEG) algorithm to improve the design of M-SC-MPC codes in terms of local cycle length. The proposed codes can be seen as M-SC-MPC codes where an interleaver is added between each pair of component codes; they are therefore denoted Permuted Serially Concatenated Multiple Parity-Check (P-SC-MPC) codes. Numerical simulations show that the proposed codes perform comparably to, or even better than, both regular and irregular M-SC-MPC codes and the Quasi-Cyclic (QC) codes included in the IEEE 802.16e standard.
APA, Harvard, Vancouver, ISO, and other styles
21

Selvarathinam, Anand Manivannan. "High throughput low power decoder architectures for low density parity check codes." Texas A&M University, 2005. http://hdl.handle.net/1969.1/2529.

Full text
Abstract:
A high-throughput scalable decoder architecture, a tiling approach to reduce the complexity of the scalable architecture, and two low-power decoding schemes have been proposed in this research. The proposed scalable design is generated from a serial architecture by scaling the combinational logic, partitioning the memory, and constructing a novel H matrix to make parallelization possible. The scalable architecture achieves a high throughput for higher values of the parallelization factor M. The switch logic used to route the bit nodes to the appropriate checks is an important constituent of the scalable architecture, and its complexity grows with M. The proposed tiling approach is applied to the scalable architecture to simplify the switch logic and reduce gate complexity. The tiling approach generates patterns that are used to construct the H matrix by repeating a fixed number of those generated patterns. The advantages of the proposed approach are two-fold. First, the information stored about the H matrix is reduced by one-third. Second, the switch logic of the scalable architecture is simplified. The H matrix information is also embedded in the switch, and no external memory is needed to store the H matrix. The scalable architecture and tiling approach are proposed at the architectural level of the LDPC decoder. We propose two low-power decoding schemes that take advantage of the distribution of errors in the received packets. Both schemes use a hard iteration after a fixed number of soft iterations. The dynamic scheme performs X soft iterations, then a parity check cHT that computes the number of parity checks in error. Based on the cHT value, the decoder decides whether to perform further soft iterations or a hard iteration. The advantage of the hard iteration is so significant that the second low-power scheme performs a fixed number of soft iterations followed by a hard iteration. To maintain the bit error rate performance, the number of soft iterations in this case is higher than the number performed before the cHT check in the first scheme.
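The decision statistic of the dynamic scheme, the number of unsatisfied checks for the current hard decisions, is cheap to compute. A hedged sketch (the threshold and names are illustrative, not the thesis's values):

```python
import numpy as np

def parity_failures(H, llrs):
    """The cHT statistic: count of unsatisfied parity checks for the
    hard decisions implied by the current LLRs."""
    hard = (np.asarray(llrs) < 0).astype(int)
    return int((H.dot(hard) % 2).sum())

def next_step(n_failures, switch_threshold=4):
    """Few failed checks suggest a cheap hard iteration can finish."""
    return "hard_iteration" if n_failures <= switch_threshold else "soft_iteration"

H = np.array([[1, 1, 0, 1], [0, 1, 1, 1]])
print(next_step(parity_failures(H, [-2.0, 1.5, 0.3, -0.7])))
```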
APA, Harvard, Vancouver, ISO, and other styles
22

Chen, Jinghu. "Reduced complexity decoding algorithms for low-density parity check codes and turbo codes." Thesis, University of Hawaii at Manoa, 2003. http://hdl.handle.net/10125/6885.

Full text
Abstract:
Iterative decoding techniques have been receiving more and more attention since the invention of turbo codes and the rediscovery of low-density parity-check (LDPC) codes. An important aspect in the study of iterative decoding is the tradeoff between decoding performance and complexity. For both LDPC codes and turbo codes, optimum decoding algorithms can provide very good performance. However, complicated operations are involved in optimum decoding and prohibit the wide application of LDPC codes and turbo codes in next-generation digital communication and storage systems. Although sub-optimum decoding algorithms exist for both LDPC codes and turbo codes, decoding performance is degraded with the sub-optimum algorithms, and under some circumstances the gap is very large. This research investigates reduced-complexity decoding algorithms for LDPC codes and turbo codes. For decoding LDPC codes, new algorithms, namely the normalized BP-based algorithm and the offset BP-based algorithm, are proposed. For these two reduced-complexity algorithms, density evolution algorithms are derived and used to determine the best decoder parameters associated with each of the algorithms. Numerical results show that the new algorithms can achieve near-optimum decoding performance for infinite code lengths, and simulation results reveal the same conclusion for short to medium code lengths. In addition to the advantage of low computational complexity, the two new algorithms are less subject to quantization errors and correlation effects than the optimum BP algorithm, and consequently are more suitable for hardware implementation. For a special kind of LDPC codes, the geometric LDPC codes, we propose the normalized APP-based algorithm, which is even more simplified yet can still achieve near-optimum performance. For decoding turbo codes, two new sub-optimum decoding algorithms are proposed. The first is the bi-directional soft-output Viterbi algorithm (bi-SOVA), which is based on utilizing a backward SOVA decoding in addition to the conventional forward one, and can achieve better performance than the uni-directional SOVA. The second is the normalized Max-Log-MAP algorithm, which improves the performance of Max-Log-MAP decoding by scaling the soft outputs with some predetermined factors.
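The two proposed corrections act only on the min-sum magnitude, which is what makes them so cheap. A minimal sketch (the factor and offset shown are illustrative; the thesis selects them by density evolution):

```python
def normalized_magnitude(min_mag, alpha=0.8):
    """Normalized BP-based correction: scale the min-sum magnitude down."""
    return alpha * min_mag

def offset_magnitude(min_mag, beta=0.15):
    """Offset BP-based correction: subtract an offset, clipping at zero."""
    return max(min_mag - beta, 0.0)

print(normalized_magnitude(1.2), offset_magnitude(1.2))  # 0.96 1.05
```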
APA, Harvard, Vancouver, ISO, and other styles
23

Ho, Ki-hiu (何其曉). "Study of quantum low density parity check and quantum degenerate codes." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B41897109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Kopparthi, Sunitha. "Flexible encoder and decoder designs for low-density parity-check codes." Diss., Manhattan, Kan. : Kansas State University, 2010. http://hdl.handle.net/2097/4190.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Liu, Yue. "Design of structured nonbinary quasi-cyclic low-density parity-check codes." University of New South Wales, Electrical Engineering & Telecommunications, 2009. http://handle.unsw.edu.au/1959.4/43616.

Full text
Abstract:
Since their rediscovery, LDPC codes have attracted a large amount of research effort. In 1998, nonbinary LDPC codes were first investigated, and the results showed that they perform better than their binary counterparts. Recently, there has been a requirement from industry to design applied nonbinary LDPC codes. In this dissertation, we first propose a novel class of quasi-cyclic (QC) LDPC codes. This class of QC-LDPC codes offers both linear encoding complexity and excellent compatibility with various degree distributions and nonbinary expansions. We show by simulation results that our proposed QC-LDPC codes perform as well as their comparable counterparts. However, the proposed code structure is more flexible to design, a feature that shows its power when the code length and rate are changed adaptively. Furthermore, we present two algorithms to generate codes with fewer short cycles and a better girth distribution. The two algorithms are based on the progressive edge growth (PEG) algorithm and are specifically designed for the quasi-cyclic structure. Simulation results show the improvement they achieve. In this thesis, we also investigate belief-propagation-based iterative algorithms for decoding nonbinary LDPC codes. The algorithms include the sum-product (SP) algorithm, the SP algorithm using the fast Fourier transform, the min-sum (MS) algorithm, and the complexity-reduced extended min-sum (EMS) algorithm. In particular, we present a proposed modified min-sum algorithm with threshold filtering which further reduces the computational complexity.
APA, Harvard, Vancouver, ISO, and other styles
26

Planjery, Shiva Kumar. "Iterative decoding beyond belief propagation for low-density parity-check codes." Thesis, Cergy-Pontoise, 2012. http://www.theses.fr/2012CERG0618.

Full text
Abstract:
At the heart of modern coding theory lies the fact that low-density parity-check (LDPC) codes can be efficiently decoded by message-passing algorithms which are traditionally based on the belief propagation (BP) algorithm. The BP algorithm operates on a graphical model of a code known as the Tanner graph, and computes marginals of functions on the graph. While inference using BP is exact only on loop-free graphs (trees), BP still provides surprisingly close approximations to exact marginals on loopy graphs, and LDPC codes can asymptotically approach Shannon's capacity under BP decoding. However, on finite-length codes whose corresponding graphs are loopy, BP is sub-optimal and therefore gives rise to the error floor phenomenon. The error floor is an abrupt degradation in the slope of the error-rate performance of the code in the high signal-to-noise regime, where certain harmful structures generically termed trapping sets, present in the Tanner graph of the code, cause the decoder to fail. Moreover, the effects of finite precision that are introduced during hardware realizations of BP can further contribute to the error floor problem. In this dissertation, we introduce a new paradigm for finite-precision iterative decoding of LDPC codes over the binary symmetric channel (BSC). These novel decoders, referred to as finite alphabet iterative decoders (FAIDs) to signify that the message values belong to a finite alphabet, are capable of surpassing BP in the error floor region. The messages propagated by FAIDs are not quantized probabilities or log-likelihoods, and the variable node update functions do not mimic the BP decoder, in contrast to traditional quantized BP decoders. Rather, the update functions are simple maps designed to ensure a higher guaranteed error-correction capability by using knowledge of potentially harmful topologies that could be present in a given code. We show that on several column-weight-three codes of practical interest, there exist 3-bit precision FAIDs that can surpass BP (floating-point) in the error floor without any compromise in decoding latency. Hence, they are able to achieve superior performance compared to BP with only a fraction of its complexity. Additionally in this dissertation, we propose decimation-enhanced FAIDs for LDPC codes, where the technique of decimation is incorporated into the variable node update function of FAIDs. Decimation, which involves fixing certain bits of the code to a particular value during the decoding process, can significantly reduce the number of iterations required to correct a fixed number of errors while maintaining the good performance of a FAID, thereby making such decoders more amenable to analysis. We illustrate this for 3-bit precision FAIDs on column-weight-three codes. We also show how decimation can be used adaptively to further enhance the guaranteed error-correction capability of FAIDs that are already good on a given code. The proposed adaptive decimation scheme has marginally added complexity but can significantly improve the slope of the error floor performance of a particular FAID. On certain high-rate column-weight-three codes of practical interest, we show that adaptive decimation-enhanced FAIDs can achieve a guaranteed error-correction capability that is close to the theoretical limit achieved by maximum-likelihood decoding.
APA, Harvard, Vancouver, ISO, and other styles
27

Ho, Ki-hiu. "Study of quantum low density parity check and quantum degenerate codes." Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B41897109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Hou, Jilei. "Capacity-approaching coding schemes based on low-density parity-check codes." Diss., University of California, San Diego, 2003. http://wwwlib.umi.com/cr/ucsd/fullcit?p3076341.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Healy, Cornelius Thomas. "Short-length low-density parity-check codes : construction and decoding algorithms." Thesis, University of York, 2014. http://etheses.whiterose.ac.uk/7875/.

Full text
Abstract:
Error control coding is an essential part of modern communications systems. LDPC codes have been demonstrated to offer performance near the fundamental limits of channels corrupted by random noise. Optimal maximum-likelihood decoding of LDPC codes is too complex to be practically useful even at short block lengths, so a graph-based message-passing decoder known as the belief propagation algorithm is used instead. In fact, on graphs without closed paths, known as cycles, iterative message-passing decoding is optimal and may converge in a single iteration, although identifying the message-update schedule which allows single-iteration convergence is not trivial. At finite block lengths, graphs without cycles have poor minimum-distance properties and perform poorly even under optimal decoding. LDPC codes with large block length have been demonstrated to offer performance close to that predicted for codes of infinite length, as the cycles present in the graph are quite long. In this thesis, LDPC codes of shorter length are considered, as they offer advantages in terms of latency and complexity at the cost of performance degradation from the increased number of short cycles in the graph. For these shorter LDPC codes, the problems considered are as follows. First, improved construction of structured and unstructured LDPC code graphs of short length, with a view to reducing the harmful effects of the cycles on error-rate performance, based on knowledge of the decoding process. Structured code graphs are particularly interesting as they allow benefits in encoding and decoding complexity and speed. Second, the design and construction of LDPC codes for the block-fading channel, a particularly challenging scenario from the point of view of error-control code design. Both established and novel classes of codes for the channel are considered. Finally, the decoding of LDPC codes by the belief propagation algorithm is considered, in particular the scheduling of messages passed in the iterative decoder. A knowledge-aided approach is developed based on message reliabilities and residuals to allow fast convergence and significant improvements in error-rate performance.
APA, Harvard, Vancouver, ISO, and other styles
30

Planjery, Shiva Kumar. "Iterative Decoding Beyond Belief Propagation of Low-Density Parity-Check Codes." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/305883.

Full text
Abstract:
The recent renaissance of one particular class of error-correcting codes called low-density parity-check (LDPC) codes has revolutionized the area of communications leading to the so-called field of modern coding theory. At the heart of this theory lies the fact that LDPC codes can be efficiently decoded by an iterative inference algorithm known as belief propagation (BP) which operates on a graphical model of a code. With BP decoding, LDPC codes are able to achieve an exceptionally good error-rate performance as they can asymptotically approach Shannon's capacity. However, LDPC codes under BP decoding suffer from the error floor phenomenon, an abrupt degradation in the error-rate performance of the code in the high signal-to-noise ratio region, which prevents the decoder from achieving very low error-rates. It arises mainly due to the sub-optimality of BP decoding on finite-length loopy graphs. Moreover, the effects of finite precision that stem from hardware realizations of BP decoding can further worsen the error floor phenomenon. Over the past few years, the error floor problem has emerged as one of the most important problems in coding theory with applications now requiring very low error rates and faster processing speeds. Further, addressing the error floor problem while taking finite precision into account in the decoder design has remained a challenge. In this dissertation, we introduce a new paradigm for finite precision iterative decoding of LDPC codes over the binary symmetric channel (BSC). These novel decoders, referred to as finite alphabet iterative decoders (FAIDs), are capable of surpassing the BP in the error floor region at a much lower complexity and memory usage than BP without any compromise in decoding latency. The messages propagated by FAIDs are not quantized probabilities or log-likelihoods, and the variable node update functions do not mimic the BP decoder. Rather, the update functions are simple maps designed to ensure a higher guaranteed error correction capability which improves the error floor performance. We provide a methodology for the design of FAIDs on column-weight-three codes. Using this methodology, we design 3-bit precision FAIDs that can surpass the BP (floating-point) in the error floor region on several column-weight-three codes of practical interest. While the proposed FAIDs are able to outperform the BP decoder with low precision, the analysis of FAIDs still proves to be a difficult issue. Furthermore, their achievable guaranteed error correction capability is still far from what is achievable by the optimal maximum-likelihood (ML) decoding. In order to address these two issues, we propose another novel class of decoders called decimation-enhanced FAIDs for LDPC codes. For this class of decoders, the technique of decimation is incorporated into the variable node update function of FAIDs. Decimation, which involves fixing certain bits of the code to a particular value during decoding, can significantly reduce the number of iterations required to correct a fixed number of errors while maintaining the good performance of a FAID, thereby making such decoders more amenable to analysis. We illustrate this for 3-bit precision FAIDs on column-weight-three codes and provide insights into the analysis of such decoders. We also show how decimation can be used adaptively to further enhance the guaranteed error correction capability of FAIDs that are already good on a given code. 
The new adaptive decimation scheme proposed has marginally added complexity but can significantly increase the slope of the error floor in the error-rate performance of a particular FAID. On certain high-rate column-weight-three codes of practical interest, we show that adaptive decimation-enhanced FAIDs can achieve a guaranteed error-correction capability that is close to the theoretical limit achieved by ML decoding.
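To fix ideas about the decoder class studied in both Planjery theses above: messages live on a small alphabet such as {-3,...,+3}, and the variable-node map is what gets redesigned. The sketch below shows the structure only; the linear quantized rule in `variable_node_update` is a placeholder, since actual FAIDs use carefully designed lookup tables rather than any linear rule.

```python
ALPHABET = (-3, -2, -1, 0, 1, 2, 3)      # 3-bit message levels

def quantize(x):
    """Clip-and-round onto the finite alphabet."""
    return max(-3, min(3, int(round(x))))

def variable_node_update(channel, extrinsic):
    """Placeholder variable-node map for a column-weight-three code: a
    quantized sum of the channel value (+/-C on the BSC) and the two
    extrinsic check messages. Real FAID maps are designed tables."""
    return quantize(channel + sum(extrinsic))

def check_node_update(others):
    """Check-node rule on the finite alphabet (as in min-sum): sign
    product and minimum magnitude over the other edges' messages."""
    sign = -1 if sum(m < 0 for m in others) % 2 else 1
    return sign * min(abs(m) for m in others)

print(variable_node_update(+1, [3, -2]))  # -> 2
print(check_node_update([2, -3]))         # -> -2
```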
APA, Harvard, Vancouver, ISO, and other styles
31

Mei, Zhen. "Analysis of low-density parity-check codes on impulsive noise channels." Thesis, University of Newcastle upon Tyne, 2017. http://hdl.handle.net/10443/3758.

Full text
Abstract:
Communication channels can severely degrade a signal, not only through fading effects but also through interference in the form of impulsive noise. In conventional communication systems, the additive noise at the receiver is usually assumed to be Gaussian distributed. However, this assumption is not always valid, and examples of non-Gaussian distributed noise include power line channels, underwater acoustic channels and man-made interference. When designing a communication system it is useful to know the theoretical performance in terms of bit-error probability (BEP) on these types of channels. However, the effect of impulses on BEP performance has not been well studied, particularly when error-correcting codes are employed. Today, advanced error-correcting codes with very long block lengths and iterative decoding algorithms, such as Low-Density Parity-Check (LDPC) codes and turbo codes, are popular due to their capacity-approaching performance. However, very long codes are not always desirable, particularly in communication systems where latency is a serious issue, such as in voice and video communication between multiple users. This thesis focuses on the analysis of short LDPC codes. Finite-length analyses of LDPC codes have already been presented for the additive white Gaussian noise channel in the literature, but the analysis of short LDPC codes for channels that exhibit impulsive noise has not been investigated. The novel contributions in this thesis are presented in three sections. First, uncoded and LDPC-coded BEP performance on channels exhibiting impulsive noise modelled by symmetric α-stable (SαS) distributions is examined. Different sub-optimal receivers are compared and a new low-complexity receiver is proposed that achieves near-optimal performance. Density evolution is then used to derive the threshold signal-to-noise ratio (SNR) of LDPC codes that employ these receivers. In order to accurately predict the waterfall performance of short LDPC codes, a finite-length analysis is proposed with the aid of the threshold SNRs of LDPC codes and the derived uncoded BEPs for impulsive noise channels. Second, to investigate the effect of impulsive noise on wireless channels, the analytic BEP on generalized fading channels with SαS noise is derived. However, evaluating the analytic BEP requires a double integral, so to reduce the computational cost, the Cauchy-Gaussian mixture model and the asymptotic property of the SαS process are used to derive upper bounds on the exact BEP. Two closed-form expressions are derived to approximate the exact BEP on a Rayleigh fading channel with SαS noise. Density evolution for different receivers is then derived for these channels to find the asymptotic performance of LDPC codes, and the waterfall performance of LDPC codes is again estimated for generalized fading channels with SαS noise by utilizing the derived uncoded BEP and threshold SNRs. Finally, the addition of spatial diversity at the receiver is investigated. Spatial diversity is an effective method to mitigate the effects of fading and, when used in conjunction with LDPC codes, can achieve excellent error-correcting performance. Hence, the performance of conventional linear diversity combining techniques is derived. The SNRs of these linear combiners are then compared and the relationship of the noise power between different linear combiners is obtained.
Nonlinear detectors have been shown to achieve better performance than linear combiners; hence, optimal and sub-optimal detectors are also presented and compared. A nonlinear detector based on the bi-parameter Cauchy-Gaussian mixture model is used and shows near-optimal performance with a significant reduction in complexity when compared with the optimal detector. Furthermore, we show how to apply density evolution of LDPC codes for different combining techniques on these channels, and an estimate of the waterfall performance of LDPC codes is derived that reduces the gap between simulated and asymptotic performance. In conclusion, the work presented in this thesis provides a framework to evaluate the performance of communication systems in the presence of additive impulsive noise, with and without spatial diversity at the receiver. For the first time, bounds on the BEP performance of LDPC codes on channels with impulsive noise have been derived for optimal and sub-optimal receivers, allowing other researchers to predict the performance of LDPC codes in these types of environments without needing to run lengthy computer simulations.
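One low-complexity receiver idea in this setting can be made concrete: for α = 1 the symmetric α-stable density is the Cauchy density, which has a closed form, and the resulting LLR automatically de-weights large impulses that would saturate a Gaussian-derived LLR. The sketch below uses illustrative parameters, not the thesis's actual receivers.

```python
import math

def llr_gaussian(y, sigma2=0.5):
    """LLR of a BPSK symbol (+1/-1) under Gaussian noise."""
    return 2.0 * y / sigma2

def llr_cauchy(y, gamma=0.5):
    """LLR under Cauchy noise (the alpha = 1 SaS case, closed form):
    log f(y-1) - log f(y+1), with f(x) = gamma / (pi*(gamma^2 + x^2))."""
    return math.log((gamma**2 + (y + 1.0)**2) / (gamma**2 + (y - 1.0)**2))

# A large impulse (y = 50) barely moves the Cauchy LLR.
for y in (0.8, 5.0, 50.0):
    print(y, round(llr_gaussian(y), 2), round(llr_cauchy(y), 2))
```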
APA, Harvard, Vancouver, ISO, and other styles
32

Mantha, Ramesh. "Hybrid automatic repeat request schemes using turbo codes and low-density parity check codes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0019/MQ58728.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Chen, Lei. "Construction of structured low-density parity-check codes: combinatorial and algebraic approaches." Diss., University of California, Davis, 2005. http://uclibs.org/PID/11984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Hur, Woonhaing. "Incremental Redundancy Low-Density Parity-Check Codes for Hybrid FEC/ARQ Schemes." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14491.

Full text
Abstract:
The objective of this dissertation is to investigate incremental redundancy low-density parity-check (IR-LDPC) codes for hybrid forward error correction / automatic repeat request (HybridARQ) schemes. Powerful capacity-approaching IR-LDPC codes are one of the key functional elements in high-throughput HybridARQ schemes and provide a flexible rate-compatible structure, which is necessary for low-complexity HybridARQ schemes. This dissertation first studies the design and performance evaluation of IR-LDPC codes which have good error-rate performance at short block lengths. The subset codes of the IR-LDPC codes are compared to conventional random punctured codes and to multiple dedicated codes. As a system model for this work, an adaptive LDPC-coded system is presented. This adaptive system can confront the nature of time-varying channels and approach the capacity of the system with the aid of LDPC codes. The system shows remarkable throughput improvement over a conventional punctured system and, compared to systems that use multiple dedicated codes, provides comparable performance with low complexity at every target error rate. This dissertation also focuses on IR-LDPC codes with a wider operating code range, because the previous IR-LDPC codes exhibited a performance limitation related to the maximum achievable code rate. For this reason, this research proposes a new way to increase the maximum code rate of IR-LDPC codes, which provides throughput improvement in high-throughput regions over conventional random punctured codes. Also presented is an adaptive code selection algorithm using threshold parameters. This algorithm reduces the number of unnecessary traffic channels in HybridARQ schemes. This dissertation also examines how to improve throughput performance in HybridARQ schemes with low complexity by exploiting irregular repeat-accumulate (IRA) codes. The proposed adaptive transmission method with adaptive puncturing patterns of IRA codes shows higher throughput performance across the entire operating code range than any other single mode in HybridARQ schemes.
APA, Harvard, Vancouver, ISO, and other styles
35

Kazanci, Onur Husnu. "Performance Of Pseudo-random And Quasi-cyclic Low Density Parity Check Codes." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12609036/index.pdf.

Full text
Abstract:
Low Density Parity Check (LDPC) codes are parity-check codes of long block length whose parity-check matrices have relatively few non-zero entries. To improve the performance at relatively short block lengths, LDPC codes are constructed by either pseudo-random or quasi-cyclic methods instead of purely random construction methods. In this thesis, pseudo-random code construction methods and the effects of closed loops and graph connectivity on the performance of pseudo-random LDPC codes are investigated. Moreover, quasi-cyclic LDPC codes, which have encoding and storage advantages over pseudo-random LDPC codes, their construction methods, and their performance are reviewed. Finally, a performance comparison between pseudo-random and quasi-cyclic LDPC codes is given for both regular and irregular cases.
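The storage advantage of quasi-cyclic codes comes from describing the parity-check matrix by a small table of circulant shifts rather than by its full entry pattern. The following Python sketch assembles such a matrix from circulant permutation blocks; the shift table and block size are a toy example, not a construction from the thesis.

    import numpy as np

    def circulant_perm(p, shift):
        # p x p identity matrix cyclically shifted by `shift` columns.
        return np.roll(np.eye(p, dtype=int), shift, axis=1)

    def qc_ldpc_H(shift_table, p):
        # Assemble a quasi-cyclic parity-check matrix from a table of
        # circulant shifts; a negative entry marks an all-zero block.
        rows = []
        for row in shift_table:
            blocks = [np.zeros((p, p), dtype=int) if s < 0
                      else circulant_perm(p, s) for s in row]
            rows.append(np.hstack(blocks))
        return np.vstack(rows)

    H = qc_ldpc_H([[0, 1, 2, 3],
                   [0, 2, 4, 1]], p=5)    # toy (2,4)-regular code
    print(H.shape)                        # (10, 20)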
APA, Harvard, Vancouver, ISO, and other styles
36

Bardak, Erinc Deniz. "Design And Performance Of Capacity Approaching Irregular Low-density Parity-check Codes." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12611084/index.pdf.

Full text
Abstract:
In this thesis, design details of binary irregular Low-Density Parity-Check (LDPC) codes are investigated. We especially focus on the trade-off between the average variable node degree, wa, and the number of length-6 cycles of an irregular code. We observe that the performance of the irregular code improves with increasing wa up to a critical value, but deteriorates for larger wa because of the exponential increase in the number of length-6 cycles. We have designed an irregular code of length 16,000 bits with average variable node degree wa = 3.8, which we call '2/3/13' since it has some variable nodes of degree 2 and 13 in addition to the majority of degree-3 nodes. The observed performance is found to be very close to that of the capacity-approaching commercial codes. Time spent for decoding 50,000 codewords of length 1800 at Eb/No = 1.6 dB for an irregular 2/3/13 code is measured to be 19% less than that of the regular (3,6) code, mainly because of the smaller number of decoding failures.
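The degree fractions behind a '2/3/13' profile follow from two linear constraints: the fractions sum to one and the degrees average to wa. A small sketch, with one free parameter chosen arbitrarily since the abstract does not give the exact fractions used in the thesis:

    def degree_fractions(a, avg=3.8):
        # Fractions (a, b, c) of variable nodes with degrees (2, 3, 13)
        # solving a + b + c = 1 and 2a + 3b + 13c = avg.
        c = (avg - 3.0 + a) / 10.0   # from 2a + 3(1 - a - c) + 13c = avg
        b = 1.0 - a - c
        return a, b, c

    a, b, c = degree_fractions(0.2)
    print(a, b, c)                   # 0.2, 0.7, 0.1
    print(2*a + 3*b + 13*c)          # 3.8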
APA, Harvard, Vancouver, ISO, and other styles
37

Yang, Sizhen Michael. "Construction of low-density parity-check codes for data storage and transmission." Diss., The University of Arizona, 2004. http://hdl.handle.net/10150/280665.

Full text
Abstract:
This dissertation presents a new class of irregular low-density parity-check (LDPC) codes of moderate length and high rate. The codes in this class admit low-complexity encoding and have lower error rate floors than other irregular LDPC code design approaches. It is also shown that this class of LDPC codes is equivalent to a class of systematic serial turbo codes and is an extension of irregular repeat-accumulate codes. A code design algorithm based on the combination of density evolution and differential evolution optimization with a modified cost function is presented. Moderate-length, high-rate codes with no error-rate floors down to a bit error rate of 10⁻⁹ are presented. Although our focus is on moderate-length, high-rate codes, the proposed coding scheme is applicable to irregular LDPC codes with other lengths and rates. Applications of these codes to magnetic data storage and wireless transmission channels are then studied. In the case of data storage, we assume an EPR4 partial response model with noise bursts which models media defects and thermal asperities. We show the utility of sending burst noise channel state information to both the partial response detector and the decoder. Doing so eliminates the error rate curve flattening seen by other researchers. The simulation results presented have demonstrated that LDPC codes are very effective against noise bursts and, in fact, are superior to Reed-Solomon codes in the regime simulated. We also have presented an algorithm for finding the maximum resolvable erasure-burst length, Lmax, for a given LDPC code. The simulation results make the possibility of an error control system based solely on an LDPC code very promising. For the wireless communication channel, we assume two types of Gilbert-Elliott channels and design LDPC codes for such channels. Under certain assumptions, this model leads us to what we call the burst-erasure channel with AWGN (BuEC-G), in which bits are received in Gaussian noise or as part of an erasure burst. To design codes for this channel, we take a "shortcut" and instead design codes for the burst-erasure channel (BuEC) in which a bit is received correctly or it is received as an erasure, with erasures occurring in bursts. We show that optimal BuEC code ensembles are equal to optimal binary erasure channel (BEC) code ensembles and we design optimal codes for these channels. The burst-erasure efficacy can also be measured by the maximum resolvable erasure-burst length Lmax. Finally, we present error-rate results which demonstrate the superiority of the designed codes on the BuEC-G over other codes that appear in the literature.
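The maximum resolvable erasure-burst length can be probed by brute force with a peeling decoder: erase every window of a given length and check whether iterative decoding recovers it. A minimal Python sketch of that idea (the dissertation's own algorithm is presumably more refined):

    import numpy as np

    def peel(H, erased):
        # BEC peeling: repeatedly resolve any erasure that is the only
        # erased bit in some check equation.
        erased = set(erased)
        progress = True
        while progress and erased:
            progress = False
            for row in H:
                hit = [j for j in np.nonzero(row)[0] if j in erased]
                if len(hit) == 1:
                    erased.discard(hit[0])
                    progress = True
        return not erased    # True if all erasures were resolved

    def max_burst_length(H):
        # Largest L such that every length-L erasure burst is
        # recoverable, tested over all starting positions.
        n = H.shape[1]
        L = 0
        while L < n and all(peel(H, range(s, s + L + 1))
                            for s in range(n - L)):
            L += 1
        return L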
APA, Harvard, Vancouver, ISO, and other styles
38

Baldi, Marco. "Quasi-cyclic low density parity-check codes and their application to cryptography." Doctoral thesis, Università Politecnica delle Marche, 2006. http://hdl.handle.net/11566/242651.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Ländner, Stefan [Verfasser]. "Improving the Error-Floor Behavior of Low-Density Parity-Check Codes / Stefan Ländner." Aachen : Shaker, 2011. http://d-nb.info/1070151254/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Papaharalabos, Stylianos. "Efficient iterative decoding algorithms for turbo and low-density parity-check (LDPC) codes." Thesis, University of Surrey, 2005. http://epubs.surrey.ac.uk/804383/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Krishnan, Anantha Raman, and Shashi Kiran Chilappagari. "Low-Density Parity-Check Codes Which Can Correct Three Errors Under Iterative Decoding." International Foundation for Telemetering, 2009. http://hdl.handle.net/10150/606118.

Full text
Abstract:
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada. In this paper, we give necessary and sufficient conditions for low-density parity-check (LDPC) codes with column-weight four to correct three errors when decoded using hard-decision message-passing decoding. We then give a construction technique which results in codes satisfying these conditions. We also provide a numerical assessment of code performance via simulation results.
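For context, hard-decision decoders of the kind analyzed here work only with binary values and unsatisfied-check counts. The sketch below is a plain bit-flipping decoder, a simpler relative of the Gallager-type message-passing algorithm the paper actually analyzes, shown only to make the setting concrete:

    import numpy as np

    def bit_flip_decode(H, y, max_iter=50):
        # Flip, in each iteration, the bits involved in the largest
        # number of unsatisfied parity checks.
        x = y.copy()
        for _ in range(max_iter):
            syndrome = H.dot(x) % 2
            if not syndrome.any():
                return x, True               # valid codeword found
            counts = syndrome.dot(H)         # unsatisfied checks per bit
            x = (x + (counts == counts.max())) % 2
        return x, False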
APA, Harvard, Vancouver, ISO, and other styles
42

Sandberg, Sara. "Low-density parity-check codes : unequal error protection and reduction of clipping effects /." Luleå, 2009. http://pure.ltu.se/ws/fbspretrieve/2546109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Wu, Xiaoxiao. "Efficient design and decoding of the rate-compatible low-density parity-check codes /." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?ECED%202009%20WUXX.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Lai, Bi-Lien, and 賴碧蓮. "Decoding Long Convolutional Codes by Low Density Parity Check Code Decoding Schemes." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/67029037439222306454.

Full text
Abstract:
Master's thesis, Feng Chia University, Institute of Electronic Engineering, 2004. With the advance of wireless communication technology, wireless products such as cellular phones, wireless LANs and Bluetooth have become an indispensable part of daily life. The radio spectrum, however, is a scarce and valuable resource, so every possible method of improving the transmission efficiency of wireless links is a focus of research in communication systems, and error-correcting codes are a chief such method. Among the many families of error-correcting codes, low-density parity-check (LDPC) codes have recently received much attention because their coding gain approaches the Shannon limit while their decoders remain simple and cheap to implement in hardware. Convolutional codes, another family of error-correcting codes, have very simple encoders: a few shift registers and simple circuits already yield good coding gain. We observe that a convolutional code is in fact an irregular low-density parity-check code. This research therefore focuses on combining the two techniques: encoding with a convolutional encoder and decoding with an LDPC decoder, to obtain a simple and effective error-correction scheme for bandwidth-efficient wireless communication. Finally, we decode with two different algorithms, the sum-product algorithm of LDPC codes and the Viterbi algorithm of convolutional codes, and compare their error probability and the memory required for decoding.
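The observation that a rate-1/2 convolutional code has a sparse parity-check description can be made concrete: if the two output streams are c1 = u·g1(D) and c2 = u·g2(D), then g2(D)·c1 + g1(D)·c2 = 0 at every time step, which yields a banded, low-density parity-check matrix. A sketch under that standard identity (the thesis's particular code is not specified in the abstract):

    import numpy as np

    def conv_parity_check(g1, g2, L):
        # Banded parity-check matrix of a terminated rate-1/2
        # convolutional code with tap lists g1, g2; row t encodes
        # sum_i g2[i]*c1[t-i] + g1[i]*c2[t-i] = 0 over GF(2), with
        # code bits interleaved as c1[0], c2[0], c1[1], c2[1], ...
        m = len(g1) - 1
        T = L + m                    # trellis steps incl. termination
        H = np.zeros((T, 2 * T), dtype=int)
        for t in range(T):
            for i in range(m + 1):
                if t - i >= 0:
                    H[t, 2 * (t - i)] = g2[i]       # taps on stream c1
                    H[t, 2 * (t - i) + 1] = g1[i]   # taps on stream c2
        return H

    # the familiar (5,7) octal code: g1 = 1 + D^2, g2 = 1 + D + D^2
    H = conv_parity_check([1, 0, 1], [1, 1, 1], L=8)
    print(H.shape)                   # (10, 20)

Each row has at most 2(m+1) ones regardless of the code length, so the matrix is indeed low-density, and its unequal column weights are consistent with the thesis's remark that the resulting code is irregular.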
APA, Harvard, Vancouver, ISO, and other styles
45

Lu, Zu-Han, and 呂祖漢. "A Study on Modified Graph Decoding for Low‐Density Parity‐Check Convolutional Codes." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/8s4jcb.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Institute of Communications Engineering, 2015. If cycles exist in the Tanner graph of an error-correcting code, the messages computed in the sum-product algorithm (SPA) become statistically dependent, which degrades the decoding performance from the optimum. This degradation is even more severe for low-density parity-check convolutional codes, owing to the repetitive structure of check and variable nodes in their Tanner graphs. In this thesis, we propose two modified SPA methods to mitigate the cycle effect. Both methods are based on a simple idea: harmful cycles are replaced by suitable trellis-decoding modules so that the dependence among messages in the modified Tanner graph is alleviated. Simulation results verify that the proposed methods achieve satisfactory performance improvements over the SPA.
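For reference, the standard SPA message updates whose inputs become correlated on short cycles are, in log-likelihood-ratio form (N(c) and N(v) denote graph neighbourhoods, L_v the channel LLR):

    m_{c \to v} = 2 \tanh^{-1}\!\Big( \prod_{v' \in N(c) \setminus \{v\}} \tanh\big( m_{v' \to c} / 2 \big) \Big)

    m_{v \to c} = L_v + \sum_{c' \in N(v) \setminus \{c\}} m_{c' \to v}

On a cycle-free graph these updates compute exact posterior LLRs; a cycle feeds a message back into its own computation, which is precisely the dependence the trellis-decoding modules are designed to avoid.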
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Shu-Huan, and 李叔桓. "Masking technique and encoding constraint relaxation on Low Density Parity Check Convolutional Codes." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/72999300693796960590.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Institute of Communications Engineering, 2013. Low-density parity-check convolutional codes (LDPC-CCs) are usually defined as the null space of their syndrome former matrices. With a certain constraint on the syndrome former matrices, LDPC-CCs can be fast-encoded by syndrome former encoders. When encoding, zero padding and tail biting are two often-used schemes for trellis termination. This thesis investigates three topics. The first two introduce methods to improve the performance of terminated LDPC-CCs and tail-biting LDPC-CCs. For tail-biting LDPC-CCs, a masking technique is used to improve the bit-error-rate (BER) performance; the improvement is then discussed from the viewpoint of trapping sets, and the corresponding variations on the encoder and decoder of the tail-biting LDPC-CCs are also proposed. For terminated LDPC-CCs, by analyzing the structure of the syndrome former encoder, we suggest avoiding the transmission of certain known bits, thus alleviating the possible rate loss. Simulation results show that codes using these two methods have much better performance. In the last topic, we show that syndrome former matrices violating the constraint that makes fast encoding feasible do not necessarily yield poor BER performance. We therefore relax the constraint on the syndrome former matrices of LDPC-CCs and present an encoding method under the relaxed constraint.
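Masking, in its usual form, thins a dense base parity-check matrix entrywise with a binary mask so that harmful substructures can be removed from the Tanner graph. A minimal sketch of that operation (the toy base matrix and mask below are illustrative, not the thesis's construction):

    import numpy as np

    def mask(H_base, M):
        # Keep an entry of the base matrix only where the mask is 1.
        assert H_base.shape == M.shape
        return H_base * M

    H_base = np.ones((3, 6), dtype=int)      # dense toy base matrix
    M = np.array([[1, 0, 1, 1, 0, 1],
                  [0, 1, 1, 0, 1, 1],
                  [1, 1, 0, 1, 1, 0]])
    print(mask(H_base, M))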
APA, Harvard, Vancouver, ISO, and other styles
47

Wu, Mu-Chen, and 吳牧諶. "Improved Residual-Based Dynamic Scheduling for Decoding of Low-Density Parity-Check Convolutional Codes." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/70121832118271022637.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Institute of Communications Engineering, 2010. Previous studies on low-density parity-check convolutional codes (LDPC-CCs) revealed that LDPC-CCs with rational parity-check matrices (RPCMs) have poor bit-error-rate (BER) performance due to the existence of length-4 cycles in their Tanner graphs. In our recent work, we found that the original Tanner graph of an LDPC-CC with an RPCM can be transformed into a new Tanner graph with larger girth, based on the concept of puncturing, such that the LDPC-CC achieves a BER performance comparable to or even better than that of LDPC-CCs with polynomial parity-check matrices (PPCMs). For the decoding of punctured LDPC codes, sequential schedules are usually used to improve BER performance or speed up the convergence of the decoding. We select the well-performing efficient dynamic scheduling (EDS) among the available sequential schedules to decode LDPC-CCs with RPCMs in order to obtain better BER performance. In this thesis, we first modify the residual function of EDS to obtain a more appropriate updating order. Moreover, since several observations indicate that decoding based on the original EDS or our improved EDS may not converge, or may converge to non-optimal codewords, two refined strategies based on perturbation and bit-flipping are proposed to mitigate these problems. As revealed by the simulation results, not only for RPCMs but also for PPCMs, our proposed algorithm provides better BER performance than several existing schemes.
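Residual-based dynamic schedules share a common skeleton: recompute candidate messages, and always propagate the one whose new value differs most from its stored value. A generic sketch of that skeleton (not the thesis's modified residual function) with the graph structure abstracted into two callables:

    import heapq

    def residual_bp(edges, new_msg, affected, max_updates=1000):
        # new_msg(e, msg): recompute the message on edge e from dict msg
        # affected(e):     edges whose inputs change when e is updated
        msg = {e: 0.0 for e in edges}
        heap = [(-abs(new_msg(e, msg) - msg[e]), e) for e in edges]
        heapq.heapify(heap)
        updates = 0
        while heap and updates < max_updates:
            neg_r, e = heapq.heappop(heap)
            if neg_r == 0:               # all residuals zero: converged
                break
            msg[e] = new_msg(e, msg)
            updates += 1
            for e2 in affected(e):       # stale residuals are re-pushed
                heapq.heappush(heap, (-abs(new_msg(e2, msg) - msg[e2]), e2))
        return msg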
APA, Harvard, Vancouver, ISO, and other styles
48

Zhong, Wei-Shuo, and 鐘偉碩. "Combining Low-Density Parity-Check Codes with Recursive Convolutional Codes for Error-Correction in Binary Erasure Channels." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/73912170852030929043.

Full text
Abstract:
Master's thesis, Feng Chia University, Industrial R&D Master's Program, 2007. With the rapid progress of electronic science and technology, people's lives increasingly depend on electronic products, and wireless communication in particular has attracted ever more attention. Any communication system must ensure that the transmitted information is received reliably; with the maturing of coding theory and digital circuits, many channel codes have been successfully and widely applied in wireless communication systems. Among the many error-correction mechanisms available today, this thesis mainly discusses how to combine low-density parity-check (LDPC) codes with recursive convolutional codes to design a simple and effective error-correction scheme. The scheme combines two linear coding techniques: a recursive convolutional code serves as the encoder, whose generator matrix is low-density and regular and can be realized as a linear-feedback shift register (LFSR) producing regular, formulaic generator sequences; an LDPC code serves as the decoder, handling erasures and transmission errors separately with the bit-flipping decoding algorithm and the sum-product algorithm. This error-correction mechanism improves the efficiency of encoding and decoding, reduces the error probability of the transmission channel, and increases the transmission speed over the channel. Keywords: error correction, low-density parity-check codes, convolutional codes, linear-feedback shift registers, erasure.
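A recursive convolutional encoder of the kind described is just an LFSR with a feedback and a feedforward tap polynomial. A minimal sketch with illustrative taps (not the taps used in the thesis):

    def rsc_encode(bits, fb=(1, 1, 1), ff=(1, 0, 1)):
        # Recursive systematic convolutional encoder as an LFSR.
        # fb, ff are tap tuples (a_0..a_m) of the feedback and
        # feedforward polynomials; returns (systematic, parity).
        m = len(fb) - 1
        s = [0] * m                          # shift-register state
        sys_out, par_out = [], []
        for u in bits:
            a = u ^ (sum(fb[i] & s[i - 1] for i in range(1, m + 1)) & 1)
            p = (ff[0] & a) ^ (sum(ff[i] & s[i - 1] for i in range(1, m + 1)) & 1)
            sys_out.append(u)
            par_out.append(p)
            s = [a] + s[:-1]                 # LFSR shift
        return sys_out, par_out

    print(rsc_encode([1, 0, 1, 1]))          # ([1, 0, 1, 1], [1, 1, 0, 0])

The default taps realize the classic (1, (1+D^2)/(1+D+D^2)) code; any other feedback/feedforward pair drops in the same way.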
APA, Harvard, Vancouver, ISO, and other styles
49

Brandon, Tyler. "Parallel-Node Low-Density Parity-Check Convolutional Code Encoder and Decoder Architectures." Phd thesis, 2010. http://hdl.handle.net/10048/981.

Full text
Abstract:
We present novel architectures for parallel-node low-density parity-check convolutional code (PN-LDPC-CC) encoders and decoders. Based on a recently introduced implementation-aware class of LDPC-CCs, these encoders and decoders take advantage of increased node-parallelization to simultaneously decrease the energy-per-bit and increase the decoded information throughput. A series of progressively improved encoder and decoder designs are presented and characterized using synthesis results with respect to power, area and throughput. The best of the encoder and decoder designs significantly advance the state-of-the-art in terms of both the energy-per-bit and throughput/area metrics. One of the presented decoders, at an Eb/N0 of 2.5 dB, has a bit-error-rate of 10⁻⁶, takes 4.5 mm² in a CMOS 90-nm process, and achieves an energy-per-decoded-information-bit of 65 pJ and a decoded information throughput of 4.8 Gbits/s. We implement an earlier non-parallel-node LDPC-CC encoder, decoder and a channel emulator in silicon. We provide readers, via two sets of tables, the ability to look up our decoder hardware metrics, across four different process technologies, for over 1000 variations of our PN-LDPC-CC decoders. By imposing practical decoder implementation constraints on power or area, which in turn drive trade-offs in code size versus the number of decoder processors, we compare the code BER performance. An extensive comparison to known LDPC-BC/CC decoder implementations is provided.
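As a quick consistency check on those figures, energy-per-bit multiplied by throughput gives the implied core power of the 4.8 Gbits/s decoder:

    energy_per_bit = 65e-12              # J/bit (65 pJ)
    throughput = 4.8e9                   # bits/s
    print(energy_per_bit * throughput)   # ~0.31 W implied core power

The actual reported power may differ, since synthesis conditions and I/O or clocking overheads are not captured by this product, but the order of magnitude is a useful sanity check.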
APA, Harvard, Vancouver, ISO, and other styles
50

Li, Cheng-hsun, and 李政勳. "Recovering Continuous Erasures in Binary Erasure Channels Using Low Density Parity Check Codes and Non-Recursive Convolutional Codes." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/78404887817807501542.

Full text
Abstract:
Master's thesis, Feng Chia University, Industrial R&D Master's Program, 2007. Owing to the rapid development of wireless communication in recent years, people have come to rely ever more on wireless products such as cell phones, PDAs and wireless LANs, which make modern communication far more convenient. Although wireless communication is more convenient than wired communication, it suffers a higher bit error rate during information transmission, so reducing the error rate has become a very important goal in wireless systems. Many error-correcting codes can reduce the error rate of wireless communication efficiently, and non-recursive convolutional codes and low-density parity-check (LDPC) codes have played an important role in recent years: they offer high coding performance together with simple circuits and low hardware cost. Non-recursive convolutional codes use shift registers and simple logic to achieve high encoding gain, while LDPC codes offer excellent decoding performance, with decoding gain approaching the Shannon limit. The goal of this research is to use non-recursive convolutional codes as the encoder and LDPC codes as the decoder in order to optimize error-correcting performance.
APA, Harvard, Vancouver, ISO, and other styles
