
Dissertations / Theses on the topic 'Error control coding'


Consult the top 50 dissertations / theses for your research on the topic 'Error control coding.'


1

Popplewell, Andrew. "Combined line and error control coding." Thesis, Bangor University, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.236394.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Matrakidis, Chris. "Error control coding for constrained channels." Thesis, University College London (University of London), 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324963.

Full text
3

Abdelhamid, Awad Aly Ahmed Sala. "Quantum error control codes." Diss., Texas A&M University, 2008. http://hdl.handle.net/1969.1/85910.

Full text
Abstract:
It is conjectured that quantum computers are able to solve certain problems more quickly than any deterministic or probabilistic computer. For instance, Shor's algorithm is able to factor large integers in polynomial time on a quantum computer. A quantum computer exploits the rules of quantum mechanics to speed up computations. However, it is a formidable task to build a quantum computer, since the quantum mechanical systems storing the information unavoidably interact with their environment. Therefore, one has to mitigate the resulting noise and decoherence effects to avoid computational errors. In this dissertation, I study various aspects of quantum error control codes, the key component of fault-tolerant quantum information processing. I present the fundamental theory and necessary background of quantum codes and construct many families of quantum block and convolutional codes over finite fields, in addition to families of subsystem codes. This dissertation is organized into three parts.
Quantum Block Codes. After introducing the theory of quantum block codes, I establish conditions for when BCH codes are self-orthogonal (or dual-containing) with respect to the Euclidean and Hermitian inner products. In particular, I derive two families of nonbinary quantum BCH codes using the stabilizer formalism. I study duadic codes and establish the existence of families of degenerate quantum codes, as well as families of quantum codes derived from projective geometries.
Subsystem Codes. Subsystem codes form a new class of quantum codes in which the underlying classical codes do not need to be self-orthogonal. I give an introduction to subsystem codes and present several methods for subsystem code constructions. I derive families of subsystem codes from classical BCH and RS codes and establish a family of optimal MDS subsystem codes. I establish propagation rules for subsystem codes and construct tables of upper and lower bounds on subsystem code parameters.
Quantum Convolutional Codes. Quantum convolutional codes are particularly well suited for communication applications. I develop the theory of quantum convolutional codes and give families of quantum convolutional codes based on RS codes. Furthermore, I establish a bound on the code parameters of quantum convolutional codes: the generalized Singleton bound. I develop a general framework for deriving convolutional codes from block codes and use it to derive families of non-catastrophic quantum convolutional codes from BCH codes. The dissertation concludes with a discussion of some open problems.
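The dual-containing (self-orthogonality) condition that recurs in this abstract can be checked mechanically for a small classical code: a binary code contains its dual exactly when its parity-check matrix satisfies H·H^T = 0 over GF(2). A minimal sketch, using the [7,4] Hamming code (the classical ingredient of Steane's 7-qubit code) as an illustrative example:

```python
# Check the CSS dual-containing condition (C^perp contained in C) for a
# binary code: it holds iff every pair of parity-check rows has even
# overlap, i.e. H * H^T == 0 over GF(2).

def is_dual_containing(H):
    """H: list of parity-check rows (lists of 0/1). True iff H*H^T = 0 mod 2."""
    return all(
        sum(a & b for a, b in zip(r1, r2)) % 2 == 0
        for r1 in H for r2 in H
    )

# Parity-check matrix of the [7,4] Hamming code (column j = binary digits of j+1).
H_hamming = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

print(is_dual_containing(H_hamming))  # True: the Hamming code is dual-containing,
                                      # which yields Steane's [[7,1,3]] code
```

Each pair of rows of this H overlaps in an even number of positions, which is all the condition requires; a code failing the check cannot be used directly in the CSS construction.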
4

Zhang, Liang. "Error control coding in ADSL DMT system." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0001/MQ36760.pdf.

Full text
5

Catterall, Noel James. "Public key cryptosystems based on error control coding." Thesis, Lancaster University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.538580.

Full text
6

Ives, Robert W. "Error Control Coding for Multi-Frequency Modulation." Thesis, Monterey, California: Naval Postgraduate School, 1990. http://hdl.handle.net/10945/27762.

Full text
Abstract:
Approved for public release; distribution is unlimited.
Multi-frequency modulation (MFM) has been developed at NPS using both quadrature-phase-shift-keyed (QPSK) and quadrature-amplitude-modulated (QAM) signals with good bit error performance at reasonable signal-to-noise ratios. Improved performance can be achieved by the introduction of error control coding. This report documents a Fortran simulation of the implementation of error control coding into an MFM communication link with additive white Gaussian noise. Four Reed-Solomon codes were incorporated, two for 16-QAM and two for 32-QAM modulation schemes. The error control codes used were modified from the conventional Reed-Solomon codes in that one information symbol was sacrificed to parity in order to use a simplified decoding algorithm which requires no iteration and enhances error detection capability. Bit error rates as a function of SNR and Eb/N0 were analyzed, and bit error performance was weighed against the reduction in information rate to determine the value of the codes.
7

McPhail, Bernard N. B. (Bernard Nicolas Bruce). "Error control coding for land mobile communications." Dissertation, Electrical Engineering, Carleton University, Ottawa, 1986.

Find full text
8

Trichard, Marc Henri. "Study of Trellis Coded Modulation and error control coding." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/9658.

Full text
Abstract:
In this thesis, we examine some general theoretical features of Trellis Coded Modulation (TCM). The codes in use are the recursive Ungerboeck convolutional codes, combined with Quadrature Amplitude Modulation over an Additive White Gaussian Noise channel. The search for optimum mappings is treated first, and the question of deriving smaller punctured constellations from a larger one is considered. Performance evaluation of TCM codes is provided under geometric conditions that the mappings must satisfy in order to reduce the complexity of the calculation. Computation of the transfer function is provided, and this leads to a theoretical analysis of probability expressions for event, symbol and bit errors. A second-order term expression is also elaborated which takes into consideration both error events on the trellis and errors on parallel transitions. Computer simulations have been run with High Definition Television transmission parameters. The results are presented and discussed. The use of error control coding techniques yields a great improvement in performance. For this purpose, Reed-Solomon codes have been concatenated with TCM codes and combined with interleaving to ensure higher error-recovery efficiency.
9

Fragiacomo, Simon. "Development in channel coding for error and spectral control." Thesis, University College London (University of London), 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265072.

Full text
10

Fan, Xiaopeng. "Wyner-Ziv coding and error control for video communication." 2009. http://library.ust.hk/cgi/db/thesis.pl?ECED%202009%20FAN.

Full text
11

Marple, Steven Robert. "Improved error control techniques for data transmission." Thesis, Lancaster University, 2000. http://eprints.lancs.ac.uk/8074/.

Full text
Abstract:
Error control coding is frequently used to minimise the errors which occur naturally in the transmission and storage of digital data. Many methods for decoding such codes already exist. The choice falls mainly into two areas: hard-decision algebraic decoding, a computationally efficient method, and soft-decision combinatorial decoding, which, although more complex, offers better error correction. The work presented in this Thesis is intended to provide practical decoding algorithms which can be implemented in real systems. Soft-decision maximum-likelihood decoding of Reed-Solomon codes can be obtained by using the Viterbi algorithm over a suitable trellis. Two-stage decoding of Reed-Solomon codes is presented: an algorithm by which near-optimum performance may be achieved with a complexity lower than that of the Viterbi algorithm. The soft-output Viterbi algorithm (SOVA) has been investigated as a means of providing soft-decision information for subsequent decoders. Considerations of how to apply SOVA to multi-level codes are given. The use of SOVA in a satellite downlink channel is discussed. The results of a computer simulation, which showed a 1.8 dB improvement in coding gain for only a 20% increase in decoding complexity, are presented. SOVA was also used to improve the decoding performance when applied to an RS product code. Several different decoding methods were evaluated, including cascade decoding and a method where the rows and columns were decoded alternately. A complexity measurement was developed which allows accurate comparisons of decoding complexity for trellis-based and algebraic decoders. With this technique the decoding complexity of all the algorithms implemented is compared. Also included in the comparison are the Euclidean and Berlekamp-Massey algorithms.
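The trellis-based soft-decision maximum-likelihood decoding underpinning the algorithms compared above can be sketched for a toy rate-1/2 convolutional code (generators 7 and 5 octal). This is an illustrative stand-in, not the Reed-Solomon trellises of the thesis; the BPSK mapping and correlation metric are assumptions:

```python
# Soft-decision Viterbi decoding of a toy rate-1/2 convolutional code
# (constraint length 3, generators 7 and 5 octal): the trellis search
# keeps, per state, the single best-metric path. BPSK maps bit b -> 1 - 2b.

G = [(1, 1, 1), (1, 0, 1)]  # generator taps for the two output streams

def encode(bits):
    state = (0, 0)
    out = []
    for b in bits:
        reg = (b,) + state
        out += [sum(r & g for r, g in zip(reg, taps)) % 2 for taps in G]
        state = (b, state[0])
    return out

def viterbi(soft):
    """Maximum-likelihood message estimate from soft values (one per code bit)."""
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    metric = {s: (0.0 if s == (0, 0) else float("-inf")) for s in states}
    paths = {s: [] for s in states}
    for t in range(0, len(soft), 2):
        new_metric, new_paths = {}, {}
        for s in states:
            for b in (0, 1):
                reg = (b,) + s
                sym = [1 - 2 * (sum(r & g for r, g in zip(reg, taps)) % 2)
                       for taps in G]
                m = metric[s] + sym[0] * soft[t] + sym[1] * soft[t + 1]
                ns = (b, s[0])  # shift register advances
                if ns not in new_metric or m > new_metric[ns]:
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[max(states, key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
noisy = [(1 - 2 * c) + 0.3 * (-1) ** i for i, c in enumerate(encode(msg))]
assert viterbi(noisy) == msg  # recovered despite the distortion
```

A soft-output variant (SOVA) would additionally track the metric difference at each merge to produce reliability values for a subsequent decoding stage.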
12

Bate, Stephen Donald. "Adaptive coding algorithms for data transmission." Thesis, Coventry University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.303388.

Full text
13

Rashwan, Haitham. "Public key cryptosystem based on error control coding and its applications to network coding." Thesis, Lancaster University, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.654464.

Full text
Abstract:
attack; the recent introduction of list decoding for binary Goppa codes; and the possibility of choosing code lengths that are not a power of 2. The resulting public-key sizes are considerably smaller than previous parameter choices for the same level of security. The smallest key size against all known attacks is 460 647 bits, which is too large for practical implementations to be efficient. In this thesis, we attempt to reduce McEliece's public key size by using other codes instead of Goppa codes. This thesis focuses on the Gabidulin-Paramonov-Trejtakov (GPT) cryptosystem, which is based on rank distance codes and connected to the difficulty of the general decoding problem. The GPT cryptosystem is a variant of the McEliece cryptosystem. The use of rank codes in cryptographic applications is advantageous since it is practically impossible to utilize combinatorial decoding, which enables public keys of a smaller size. Structural attacks against this system were proposed by Gibson and, recently, by Overbeck. Overbeck's attacks break many variants of the GPT cryptosystem in polynomial time. Gabidulin introduced the Advanced approach to prevent Overbeck's attacks. We evaluate the overall security of the GPT cryptosystem and its variants against both the structural attacks and the decoding (brute force) attacks. Furthermore, we apply the Advanced approach to secure other variants of the GPT cryptosystem which are still vulnerable to Overbeck's attacks. Moreover, we introduce two new approaches to securing the GPT cryptosystem against all known attacks: the first is called the Smart approach and the second the constructed Smart approach. We study how to choose the GPT PKC parameters so as to minimize the public key size and implementation complexity, and to maximize the overall security of the GPT cryptosystem against all known attacks, in order to make an efficient system for low-power handsets.
We present different trade-offs for using a combined system for error protection and cryptography. Our results suggest that the McEliece key size has been reduced to just 4000 bits with security of 2^80, a public key size of 4800 bits with security of 2^76, and a public key size of 17 200 bits with security of 2^116, corresponding respectively to the Advanced approach for the standard variant of GPT, the Advanced approach for the simple variant of GPT, and the Advanced approach for the simple variant of GPT based on reducible rank codes. Similarly, the Smart approach and the constructed Smart approach for the simple variant of the GPT cryptosystem have reduced McEliece's key size to 5000 bits with security of 2^94 for the Smart approach and to 7200 bits with security of 2^95 for the constructed Smart approach. By using the GPT PKC and its variants, we achieve approximately a 99% reduction in the size of the public key compared with the McEliece cryptosystem, with a reasonable security level against all known attacks. Network coding substantially increases network throughput. Random network coding is an effective technique for information dissemination in communication networks. The security of network coding is designed against two types of attacks: wiretapping and Byzantine attacks. The wiretapping attack can tap some original packets outgoing from the source to the destination with the purpose of recovering the message; the Byzantine attack can inject error packets, and has the potential to affect all packets gathered by an information receiver. We introduce a new scheme to provide information security by using the GPT public key cryptosystem together with Silva-Kotter-Kschischang random network codes.
Moreover, we investigate the performance of the system, transmitting the encrypted packets to the destination (sink) through wired communication networks using different random network coding models. Our results show that the introduced scheme is secure against wiretapping and Byzantine attacks under some conditions which depend on the rank code parameters.
14

Daniel, J. S. "Synthesis and decoding of array error control codes." Thesis, University of Manchester, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.374587.

Full text
15

Du, Bing Bing. "ECC video : an active second error control approach for error resilience in video coding." Thesis, Queensland University of Technology, 2003. https://eprints.qut.edu.au/15847/1/Bing_Bing_Du_Thesis.pdf.

Full text
Abstract:
Supporting video communication over mobile environments has been one of the objectives of many engineers of telecommunication networks, and it has become a basic requirement of third-generation mobile communication systems. This dissertation explores the possibility of optimizing the utilization of shared scarce radio channels for live video transmission over a GSM (Global System for Mobile telecommunications) network and realizing error-resilient video communication in unfavorable channel conditions, especially in mobile radio channels. The main contribution describes the adoption of a SEC (Second Error Correction) approach using ECC (Error Correction Coding) based on a Punctured Convolutional Coding scheme, to cope with residual errors at the application layer and enhance the error resilience of a compressed video bitstream. The approach is developed further for improved performance in different circumstances, with some additional enhancements involving Intra Frame Relay and Interleaving, and the combination of the approach with Packetization. Simulation results of applying the various techniques to the test video sequences Akiyo and Salesman are presented and analyzed for performance comparison with the conventional video coding standard. The proposed approach shows consistent improvements under these conditions. For instance, to cope with random residual errors, the simulation results show that when the residual BER (Bit Error Rate) reaches 10^-4, the video output reconstructed from a video bitstream protected using the standard resynchronization approach is of unacceptable quality, while the proposed scheme can deliver a video output which is absolutely error free in a more efficient way. When the residual BER reaches 10^-3, the standard approach fails to deliver a recognizable video output, while the SEC scheme can still correct all the residual errors with a modest bit rate increase.
In bursty residual error conditions, the proposed scheme also outperforms the resynchronization approach. Future work to extend the scope and applicability of the research is suggested in the last chapter of the thesis.
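The punctured convolutional coding underlying the SEC approach can be illustrated in miniature: a periodic pattern deletes coded bits to raise the rate, and the receiver re-inserts neutral soft values (erasures) before decoding. The rate-2/3 pattern below is an illustrative assumption, not the thesis's own scheme:

```python
# Rate matching by puncturing: delete coded bits where the pattern has a 0,
# then re-insert neutral soft values (0.0, i.e. erasures) before decoding.

PATTERN = [1, 1, 1, 0]  # keep 3 of every 4 coded bits: rate 1/2 -> 2/3

def puncture(coded):
    return [b for i, b in enumerate(coded) if PATTERN[i % len(PATTERN)]]

def depuncture(received, n):
    """Re-expand to n positions, marking punctured slots as erasures (0.0)."""
    it = iter(received)
    return [next(it) if PATTERN[i % len(PATTERN)] else 0.0 for i in range(n)]

coded = [1, 0, 1, 1, 0, 0, 1, 0]
tx = puncture(coded)             # 6 channel bits instead of 8
rx = depuncture(tx, len(coded))  # [1, 0, 1, 0.0, 0, 0, 1, 0.0]
assert len(tx) == 6 and rx[3] == 0.0 and rx[7] == 0.0
```

The decoder then treats the 0.0 positions as carrying no channel information, which is what makes the code rate-adjustable without changing the mother code.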
17

Sartipi, Mina. "Modern Error Control Codes and Applications to Distributed Source Coding." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/19795.

Full text
Abstract:
This dissertation first studies two-dimensional wavelet codes (TDWCs). TDWCs are introduced as a solution to the problem of designing a 2-D code that has low decoding complexity and the maximum erasure-correcting property for rectangular burst erasures. The half-rate TDWCs of dimensions N1 x N2 satisfy the Reiger bound with equality for burst erasures of dimensions N1 x N2/2 and N1/2 x N2, where GCD(N1, N2) = 2. Examples of TDWCs are provided that recover any rectangular burst erasure of area N1N2/2. These lattice-cyclic codes can recover burst erasures with simple and efficient ML decoding. This work then studies the problem of distributed source coding for two and three correlated signals using channel codes. We propose to model the distributed source coding problem with a set of parallel channels, which reduces distributed source coding to designing non-uniform channel codes. This design criterion improves the performance of the source coding considerably. LDPC codes are used for lossless and lossy distributed source coding, when the correlation parameter is known or unknown at the time of code design. We show that distributed source coding at the corner point using LDPC codes reduces to non-uniform LDPC codes and semi-random punctured LDPC codes for systems of two and three correlated sources, respectively. We also investigate distributed source coding at any arbitrary rate on the Slepian-Wolf rate region. This problem reduces to designing a rate-compatible LDPC code that has the unequal error protection property. This dissertation finally studies the distributed source coding problem for applications whose wireless channel is an erasure channel with unknown erasure probability. For these applications, rateless codes are better candidates than LDPC codes. Non-uniform rateless codes and an improved decoding algorithm are proposed for this purpose.
We introduce a reliable, rate-optimal, and energy-efficient multicast algorithm that uses distributed source coding and rateless coding. The proposed multicast algorithm performs very close to network coding, while it has lower complexity and higher adaptability.
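The channel-code view of distributed source coding described above can be made concrete with a toy syndrome example: the encoder sends only the 3-bit syndrome of a 7-bit source X, and the decoder recovers X from correlated side information Y that differs from X in at most one bit. The [7,4] Hamming code here is a textbook stand-in for the LDPC codes of the dissertation:

```python
# Slepian-Wolf style compression with a [7,4] Hamming code:
# send syndrome s = H x (3 bits) instead of x (7 bits); the decoder
# finds the unique x with H x = s within Hamming distance 1 of y.

H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(v):
    return tuple(sum(h & b for h, b in zip(row, v)) % 2 for row in H)

def decode(s, y):
    """Recover x from its syndrome s and side information y (at most 1 bit apart)."""
    diff = tuple(a ^ b for a, b in zip(syndrome(y), s))
    if diff == (0, 0, 0):
        return list(y)
    # diff equals the H-column (binary index 1..7) of the single differing bit
    pos = sum(d << i for i, d in enumerate(diff)) - 1
    x = list(y)
    x[pos] ^= 1
    return x

x = [1, 0, 1, 1, 0, 0, 1]
y = [1, 0, 1, 0, 0, 0, 1]           # correlated side information: one bit differs
assert decode(syndrome(x), y) == x  # 3 transmitted bits recover all 7
```

The same mechanism scales up: with LDPC codes the coset search is replaced by iterative decoding of y toward the coset indexed by the received syndrome.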
18

Ong, Leh Kui. "Source reliant error control for low bit rate speech communications." Thesis, University of Surrey, 1994. http://epubs.surrey.ac.uk/843456/.

Full text
Abstract:
Contemporary and future speech telecommunication systems now utilise low bit rate (LBR) speech coding techniques in efforts to eliminate bandwidth expansion as a disadvantage of digital coding and transmission. These speech coders employ model-based approaches in compressing human speech into a number of parameters, using a well-known process called linear predictive coding (LPC). However, a major side-effect observed in these coders is that errors in the model parameters have noticeable and undesirable consequences for the synthesised speech quality, and unless they are protected from such corruptions, the level of service quality deteriorates rapidly. Traditionally, forward error correction (FEC) coding is used to remove these errors, but this requires substantial redundancy. Therefore, a different perspective on the error control problems and solutions is necessary. In this thesis, emphasis is constantly placed on exploiting the constraints and residual redundancies present in the model parameters. It is also shown that with such source criteria in the LBR speech coders, varying degrees of error protection from channel corruptions are feasible. From these observations, error control requirements and methodologies, using both block- and parameter-orientated aspects, are analysed, devised and implemented. It is evident that, under the unusual circumstances in which LBR speech coders have to operate, the importance and significance of source reliant error control will continue to attract research and commercial interest. The work detailed in this thesis is focused on two LPC-based speech coders. One of the ideas developed for these two coders is an advanced zero-redundancy scheme for the LPC parameters which is designed to operate at high channel error rates. Another concept proposed here is the use of source criteria to enhance the decoding capabilities of FEC codes to exceed maximum likelihood decoding performance.
Lastly, for practical operation of LBR speech coders, lost frame recovery strategies are viewed as an indispensable part of error control. This topic is scrutinised in this thesis by investigating the behaviour of a specific speech coder under irrecoverable error conditions. In all of the ideas pursued above, the effectiveness of the algorithms formulated here is quantified using both objective and subjective tests. Consequently, the capabilities of the techniques devised in this thesis can be demonstrated, examples of which are: (1) higher speech quality produced under noisy channels, using an improved zero-redundancy algorithm for the LPC filter coefficients; (2) as much as 50% improvement in the residual BER and decoding failures of FEC schemes, through the utilisation of source criteria in LBR speech coders; and (3) acceptable speech quality produced under high frame loss rates (14%), after formulating effective strategies for recovery of speech coder parameters. It is hoped that the material described here provides concepts which can help achieve the ideals of maximum efficiency and quality in LBR speech telecommunications.
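Lost-frame recovery of the kind investigated here often amounts to repeating the last good parameter set while attenuating the gain. A hypothetical sketch (the frame format and the 0.5 attenuation factor are illustrative assumptions, not the thesis's algorithm):

```python
# Simple lost-frame concealment: repeat the last good parameter set,
# halving the gain for each consecutive lost frame to mute gracefully.

def conceal(frames):
    """frames: list of dicts {'lsf': [...], 'gain': g}, or None for a lost frame."""
    out, last, lost_run = [], None, 0
    for f in frames:
        if f is not None:
            last, lost_run = f, 0
            out.append(f)
        elif last is not None:
            lost_run += 1
            out.append({'lsf': last['lsf'],                 # repeat spectral shape
                        'gain': last['gain'] * 0.5 ** lost_run})  # fade out
        else:
            out.append({'lsf': [0.0], 'gain': 0.0})         # nothing to repeat yet
    return out

good = {'lsf': [0.1, 0.3], 'gain': 1.0}
recovered = conceal([good, None, None])
assert recovered[1]['gain'] == 0.5 and recovered[2]['gain'] == 0.25
```

Repetition exploits the same residual redundancy in the LPC parameters that the zero-redundancy protection schemes rely on: adjacent frames are strongly correlated.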
19

Edwards, Reuben C. "Multifunctional coding investigations for the tactical communications environment." Thesis, Lancaster University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287099.

Full text
20

Liu, Yue (Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW). "Design of structured nonbinary quasi-cyclic low-density parity-check codes." University of New South Wales, 2009. http://handle.unsw.edu.au/1959.4/43616.

Full text
Abstract:
Since their rediscovery, LDPC codes have attracted a large amount of research effort. In 1998, nonbinary LDPC codes were first investigated, and the results showed that they outperform their binary counterparts. Recently, there has been sustained demand from industry for applied nonbinary LDPC code designs. In this dissertation, we first propose a novel class of quasi-cyclic (QC) LDPC codes. This class of QC-LDPC codes offers both linear encoding complexity and excellent compatibility with various degree distributions and nonbinary expansions. We show by simulation results that our proposed QC-LDPC codes perform as well as their comparable counterparts; however, the proposed code structure is more flexible to design, a feature which shows its power when the code length and rate are changed adaptively. Furthermore, we present two algorithms to generate codes with improved girth and girth distribution. The two algorithms are based on the progressive edge growth (PEG) algorithm and are designed specifically for the quasi-cyclic structure. The simulation results show the improvement they achieve. In this thesis, we also investigate belief-propagation-based iterative algorithms for decoding nonbinary LDPC codes. The algorithms include the sum-product (SP) algorithm, the SP algorithm using the fast Fourier transform, the min-sum (MS) algorithm, and the complexity-reduced extended min-sum (EMS) algorithm. In particular, we present a modified min-sum algorithm with threshold filtering which further reduces the computational complexity.
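The min-sum check-node update mentioned above approximates the sum-product rule by taking, for each edge, the product of the signs and the minimum of the magnitudes of the other incoming LLRs. A minimal sketch of that single update in the binary case (the input LLRs are illustrative):

```python
# Min-sum check-node update: the extrinsic message to variable j is
#   sign      = product of signs of the other incoming LLRs
#   magnitude = minimum magnitude among the other incoming LLRs

def min_sum_check_update(llrs):
    msgs = []
    for j in range(len(llrs)):
        others = llrs[:j] + llrs[j + 1:]   # exclude the target edge (extrinsic)
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        msgs.append(sign * min(abs(v) for v in others))
    return msgs

incoming = [2.0, -0.5, 3.0]            # LLRs from three variable nodes
print(min_sum_check_update(incoming))  # [-0.5, 2.0, -0.5]
```

The full decoder alternates this update with variable-node summation; the nonbinary EMS algorithm generalizes the same idea to message vectors over GF(q), truncated to the most reliable entries.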
21

Thayananthan, V. "Design of run-length limited partial unit memory codes for digital magnetic recording and trellis coded quantisation based on PUM codes." Thesis, Lancaster University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287264.

Full text
22

Martin, Ian. "Practical error control techniques for transmission over noisy channels." Thesis, Lancaster University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287254.

Full text
23

Fekri, Faramarz. "Finite-field wavelet transforms and their application to error-control coding." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/15799.

Full text
24

Irman, Martin. "Rapid system prototyping for error-control coding in optical CDMA networks." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98972.

Full text
Abstract:
With the increasing bandwidth requirements of individual users, fibre-to-the-home systems are promising candidates for last-mile communication. In the case of local area networks, optical CDMA with nonorthogonal spreading sequences has emerged as an attractive technology that can manage quickly varying user requirements, while enabling total bandwidth utilization, avoiding network congestion and preventing denial of service. However, powerful error-control codes have to be utilized within optical CDMA systems to prevent errors caused by multiuser interference. Due to the data rates common in optical communication, dedicated hardware is required to run such error-control codes at very high speeds, e.g., 155 Mbps or 652 Mbps.
This work presents a rapid system prototyping platform for error-control codes which are to be incorporated into the above mentioned optical CDMA systems. The platform consists of a design methodology, an extensive library of modules and an environment for testing designed specifically for optical CDMA systems but applicable to other communication systems as well. It is built on System Generator from Xilinx, a Matlab/Simulink based visual design tool, and enables a "push of a button" transition from code specification to real-time implementation on an FPGA chip.
Initially, both the hardware specifics of this platform and the details of the developed modular design methodology are presented. Subsequently, the implementation of a custom design construct (a Generate block) and a library of communication system modules are described, together with blocks and a methodology for testing the developed algorithms. Finally, the design and evaluation of three different communication systems are presented. These designs show that the platform can be used for prototyping and evaluation of error-control algorithms running at the required processing speeds mentioned above.
25

Aitken, D. G. "Error control coding for mixed wireless and internet packet erasure channels." Thesis, University of Surrey, 2008. http://epubs.surrey.ac.uk/804436/.

Full text
Abstract:
Recent years have seen dramatic growth in Internet usage and an increasing convergence between Internet and wireless communications. There has also been renewed interest in iteratively decoded low-density parity-check (LDPC) codes due to their capacity-approaching performance on AWGN channels.
26

Schultz, Steven E. "Simulation Study of a GPRAM System: Error Control Coding and Connectionism." Master's thesis, University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5486.

Full text
Abstract:
A new computing platform, the General Purpose Representation and Association Machine (GPRAM), is studied and simulated. GPRAM machines use vague measurements to make a quick and rough assessment of a task; then use approximate message-passing algorithms to improve the assessment; and finally select ways closer to a solution, eventually solving it. We illustrate concepts and structures using simple examples.
ID: 031001523. Adviser: Lei Wei. Thesis (M.S.E.E.), University of Central Florida, 2012. Includes bibliographical references (p. 45).
M.S.E.E. (Masters), Electrical Engineering and Computing, Engineering and Computer Science, Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
27

Kao, Johnny Wei-Hsun. "Methods of artificial intelligence for error control coding and multi-user detection." Thesis, University of Auckland, 2010. http://hdl.handle.net/2292/5962.

Full text
Abstract:
This thesis investigates error control coding and multi-user detection for single- and multi-user communication systems using methods of artificial intelligence. The main motivation of the research is to overcome the drawbacks and constraints of existing decoding/detection algorithms by proposing alternative methods based on the principles of soft computing. This thesis contains two main parts: the first part investigates the decoding of convolutional codes using a recurrent neural network and a support vector machine on a classical single-user communication system, and the second part studies a multi-user detector for a CDMA system based on a support vector machine. In particular, this thesis investigates a chaos-based CDMA system and compares it with other conventional systems. The theoretical analysis of the proposed methods is presented in detail through mathematical modelling and numerical examples, where all relevant design parameters and issues are considered. A quantitative approach is used to measure and compare the performance of the systems via a series of Monte Carlo simulations. The standard methods for convolutional decoding, such as the Viterbi and turbo algorithms, are reviewed and compared with the proposed methods. It is shown that the recurrent neural network decoder has performance similar to the conventional Viterbi decoder, while its complexity is reduced from an exponential to a polynomial function of the encoder size. The inherent parallel processing capability of this decoder makes it suitable for high data-rate applications. On the other hand, the support vector machine decoder can learn and adapt to the surrounding environment, and hence achieves an extra coding gain of 2 dB over the Viterbi decoder under a Rayleigh fading channel. Furthermore, the semi-blind support vector machine detector has performance comparable to the well-known MMSE detector, and it is suitable for the forward link. Once feature extraction is incorporated, the detector becomes even simpler than a matched-filter receiver. The results of the thesis suggest that the proposed methodologies can effectively make radio links smarter and more flexible for future wireless systems.
APA, Harvard, Vancouver, ISO, and other styles
28

Aissa, Sonia. "Robust image transmission over wireless CDMA channels using combined error-resilient source coding and channel error control." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq44343.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Papadopoulos, Constantinos. "Codes and trellises." Thesis, Queen Mary, University of London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325031.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Ellis, Jason D. "Effects of phase and amplitude errors on QAM systems with error-control coding and soft-decision decoding." Connect to this title online, 2009. http://etd.lib.clemson.edu/documents/1252937616/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Rice, Michael. "Performance of Soft-Decision Block-Decoded Hybrid-ARQ Error Control." International Foundation for Telemetering, 1993. http://hdl.handle.net/10150/608852.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Soft-decision correlation decoding with retransmission requests for block codes is proposed and the resulting performance is analyzed. The correlation decoding rule is modified to allow retransmission requests when the received word is rendered unreliable by the channel noise. The modification is realized by a reduction in the volume in Euclidean space of the decoding region corresponding to each codeword. The performance analysis reveals the typical throughput-reliability trade-off characteristic of error control systems which employ retransmissions. Performance comparisons with hard-decision decoding reveal performance improvements beyond those attainable with hard-decision decoding algorithms. The proposed soft-decision decoding rule permits the use of a simplified codeword searching algorithm which reduces the complexity of the correlation decoder to the point where practical implementation is feasible.
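The decoding rule described in this abstract can be illustrated with a toy sketch: decode by maximum correlation, but declare the word unreliable and request retransmission when the best correlation does not clearly beat the runner-up. This margin-based formulation and the (3,2) single-parity-check codewords are illustrative assumptions, not the thesis's actual rule or code.

```python
import numpy as np

def correlate_decode(received, codewords, margin=0.5):
    """Soft-decision correlation decoding with a retransmission request.

    Decodes to the codeword with the highest correlation, but declares an
    erasure (retransmission request) when the best correlation does not
    exceed the runner-up by `margin` -- one simple way of shrinking each
    codeword's decoding region, as the abstract describes.
    """
    corr = codewords @ received          # one correlation per codeword
    order = np.argsort(corr)[::-1]       # codeword indices, best first
    best, runner_up = corr[order[0]], corr[order[1]]
    if best - runner_up < margin:
        return None                      # unreliable: request retransmission
    return order[0]                      # index of the decoded codeword

# Tiny example: the four codewords of a (3,2) single-parity-check code
# mapped to +/-1 antipodal signalling.
codewords = np.array([[ 1,  1,  1],
                      [ 1, -1, -1],
                      [-1,  1, -1],
                      [-1, -1,  1]], dtype=float)

clean = np.array([1.0, 1.0, 1.0])        # noiseless codeword 0
print(correlate_decode(clean, codewords))   # → 0
noisy = np.array([0.1, 0.1, -0.1])       # heavily corrupted word
print(correlate_decode(noisy, codewords))   # → None (retransmit)
```

The margin parameter directly trades throughput against reliability: a larger margin shrinks the decoding regions and triggers more retransmissions.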
APA, Harvard, Vancouver, ISO, and other styles
32

Clark, Alan. "On Coding for Orthogonal Frequency Division Multiplexing Systems." Thesis, University of Canterbury. Electrical and Computer Engineering, 2006. http://hdl.handle.net/10092/1092.

Full text
Abstract:
The main contribution of this thesis is the statistical analysis of orthogonal frequency division multiplexing (OFDM) systems operating over wireless channels that are both frequency selective and Rayleigh fading. We first describe the instantaneous capacity of such systems using a central limit theorem, as well as the asymptotic capacity of a power-limited OFDM system as the number of subcarriers approaches infinity. We then analyse the performance of uncoded OFDM systems by first developing bounds on the block error rate. Next we show that the distribution of the number of symbol errors within each block may be tightly approximated, and derive the distribution of an upper bound on the total variation distance. Finally, the central result of this thesis proposes the use of lattices for encoding OFDM systems. For this, we detail a particular method of using lattices to encode OFDM systems, and derive the optimal maximum likelihood decoding metric. Generalised Minimum Distance (GMD) decoding is then introduced as a lower-complexity method of decoding lattice-encoded OFDM. We derive the optimal reliability metric for GMD decoding of OFDM systems operating over frequency selective channels, and develop analytical upper bounds on the error rate of lattice-encoded OFDM systems employing GMD decoding.
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, Xiaohan Sasha. "Investigation of Forward Error Correction Coding Schemes for a Broadcast Communication System." Thesis, University of Canterbury. Computer Science and Software Engineering, 2013. http://hdl.handle.net/10092/7902.

Full text
Abstract:
This thesis investigates four FEC (forward error correction) coding schemes for their suitability in a broadcast system with one energy-rich transmitter and many energy-constrained receivers experiencing a variety of channel conditions. The four coding schemes are: repetition codes (the baseline scheme); Reed-Solomon (RS) codes; Luby Transform (LT) codes; and a type of RS and LT concatenated code. The schemes were tested on their ability to achieve both a high average data reception success probability and a short data reception time at the receivers (due to limited energy). The code rate (Rc) is fixed to either 1/2 or 1/3. Two statistical channel models were employed: the memoryless channel and the Gilbert-Elliott channel. The investigation considered only the data-link-layer behaviour of the schemes. During the course of the investigation, an improvement to the original LT encoding process was made, and the name LTAM (LT codes with Added Memory) was given to this improved coding method. LTAM codes reduce the overhead needed for decoding short-length messages. The improvement can be seen for decoding up to 10,000 user packets, with a maximum overhead reduction of as much as 10% over the original LT codes. The LT-type codes were found to achieve both high data reception success performance and a flexible switch-off time for the receivers. They are also adaptable to different channel characteristics, and are therefore a prototype of the ideal coding scheme that this project is looking for. This scheme was then further developed by applying an RS code as an inner code to further improve the success probability of packet reception. The results show that the LT&RS code has a significant improvement in channel error tolerance over the LT codes without an RS code applied. The trade-off is slightly more reception time and more decoding complexity.
This LT&RS code is then determined to be the best scheme fulfilling the aim of this project, which is to find a coding scheme with both a high overall data reception probability and a short overall data reception time. Compared with the baseline repetition code, the improvement is in three aspects. Firstly, the LT&RS code can maintain a full success rate over channels that have approximately two orders of magnitude more errors than the repetition code can tolerate. This holds for the two channel models and two code rates tested. Secondly, the LT&RS code shows exceptionally good performance under burst-error channels. It is able to maintain more than a 70% success rate under the long-burst-error channels where both the repetition code and the RS code have almost zero success probability. Thirdly, while the success rates are improved, the data reception time of the LT&RS codes, measured in terms of the number of packets that need to be received at the receiver, can reach a maximum reduction of 58% for Rc = 1/2 and 158% for Rc = 1/3 compared with both the repetition code and the RS code at the worst channel error rate for which the LT&RS code maintains almost 100% success probability.
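For readers unfamiliar with LT codes, the basic encoding step that LTAM modifies works roughly as follows: each coded packet is the XOR of a randomly chosen set of source packets, with the set size drawn from a degree distribution. The sketch below is my own toy illustration with a truncated degree distribution rather than the robust soliton distribution, and it does not implement the thesis's LTAM memory mechanism.

```python
import random

def lt_encode_packet(source_packets, rng):
    """Generate one LT-coded packet: draw a degree, choose that many
    distinct source packets uniformly at random, and XOR them together.

    The degree here comes from a toy distribution (1 with probability 1/k,
    otherwise 2); a real LT encoder draws from the robust soliton
    distribution, and the receiver needs the chosen indices (or the seed
    that generated them) to run belief-propagation decoding.
    """
    k = len(source_packets)
    degree = 1 if rng.random() < 1.0 / k else 2
    chosen = rng.sample(range(k), degree)   # distinct source packet indices
    encoded = 0
    for i in chosen:
        encoded ^= source_packets[i]        # XOR of the chosen packets
    return chosen, encoded

rng = random.Random(1)
packets = [0b1010, 0b0111, 0b1100, 0b0001]  # four 4-bit "packets"
for _ in range(3):
    print(lt_encode_packet(packets, rng))
```

Because each coded packet is generated independently, the transmitter can produce as many as needed, which is what makes LT-type codes rateless and lets receivers switch off as soon as they have decoded.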
APA, Harvard, Vancouver, ISO, and other styles
34

Raibroycharoen, Pronsak. "Adaptive error control for wireless channels : employing turbo coding, hybrid ARQ and MIMO-OFDM techniques." Thesis, University of Essex, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.446046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Dorfman, Vladimir. "Detection and coding techniques for partial response channels /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2003. http://wwwlib.umi.com/cr/ucsd/fullcit?p3094619.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

McCartney, Mark. "SRAM Reliability Improvement Using ECC and Circuit Techniques." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/531.

Full text
Abstract:
Reliability is of the utmost importance for the safety of electronic systems built for the automotive, industrial, and medical sectors. In these systems, the embedded memory is especially sensitive due to the large number of minimum-sized devices in the cell arrays. Memory failures which occur after the manufacture-time burn-in testing phase are particularly difficult to address, since redundancy allocation is no longer available and the fault detection schemes currently used in industry generally focus on the cell array while leaving the peripheral logic vulnerable to faults. Even in the cell array, conventional error control coding (ECC) has been limited in its ability to detect and correct failures greater than a few bits, due to the high latency or area overhead of correction [43]. Consequently, improvements to conventional memory resilience techniques are of great importance for continued reliable operation and for countering the raw bit error rate of the memory arrays in future technologies at economically feasible design points [11, 36, 37, 53, 56, 70]. In this thesis we examine the landscape of design techniques for reliability, and introduce two novel contributions for improving reliability with low overhead. To address failures occurring in the cell array, we have implemented an erasure-based ECC scheme (EB-ECC) that can extend the conventional ECC already used in memory to correct and detect multiple erroneous bits with low overhead. An important component of this scheme is the method for detecting erasures at runtime; we propose a novel ternary-output sense amplifier design which can reduce the risk of undetected read latency failures in small-swing bitline designs. While most study has focused on the static random access memory (SRAM) cell array, for high-reliability products it is important to examine the effects of failures on the peripheral logic as well. We have designed a wordline assertion comparator (WLAC) for detecting address decoder failures, which has lower area overhead in large cache designs than competing techniques in the literature.
APA, Harvard, Vancouver, ISO, and other styles
37

Wang, Xiaopeng. "Applications and Principle Designs of Ghost Imaging." Thesis, The University of Sydney, 2018. http://hdl.handle.net/2123/18793.

Full text
Abstract:
Ghost Imaging (GI) is an emerging imaging technique which originated in the quantum and optical domains. GI can achieve 2-dimensional (2D) and 3-dimensional (3D) reconstructions of objects without the direct involvement of spatially resolving detectors. Since its first experimental demonstration back in 1995, GI has drawn much attention and has now been adopted in imaging scenarios beyond optics as well. As for its potential applications in microwave imaging, I first introduce GI into the scenario of Through-wall Imaging (TWI) by using chaotic modulated signals and an array of antennas to illuminate the scenario. A high-resolution image of the target objects is then obtained using a reconstruction method modified from optical GI. As a second approach to microwave GI, I further extend the indoor TWI scenario to surveillance of urban areas. By replacing the chaotic modulated signal with long-term-evolution (LTE) and Wi-Fi signals, the proposed system can be integrated into existing communication networks. Then, in order to reduce the difficulty of practical implementations of microwave GI, I propose a novel microwave GI scheme based on non-random EM fields. By applying purposely designed EM fields to illuminate the imaging scenario, both the requirement of randomness and the involvement of field estimation in traditional microwave GI have been removed. Motivated by the fact that both communication and imaging can be considered as information transfer processes, I integrate error-control-coding (ECC) techniques into the physical imaging procedure of GI. The proof-of-concept experiment validates that the object image can be effectively reconstructed while errors induced by noisy reception are significantly reduced under this new scheme. The demonstrated approach points to a new imaging technique, ECC-assisted imaging, which can be scaled into applications such as remote sensing, spectroscopy and biomedical imaging.
APA, Harvard, Vancouver, ISO, and other styles
38

Oza, Maulik D. "Performance Analysis of Turbo Coded Waveforms and Link Budget Analysis (LBA) based Range Estimation over Terrain Blockage." University of Toledo / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1278524750.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Baldiwala, Aliasgar M. "Distance Distribution and Error Performance of Reduced Dimensional Circular Trellis Coded Modulation." Ohio University / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1079387217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Licona-Nunez, Jorge Estuardo. "M-ary Runlength Limited Coding and Signal Processing for Optical Data Storage." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5195.

Full text
Abstract:
Recent attempts to increase the capacity of the compact disc (CD) and digital versatile disc (DVD) have explored the use of multilevel recording instead of binary recording. Systems that achieve an increase in capacity of about three times that of conventional CD have been proposed for production. Marks in these systems are multilevel and fixed-length as opposed to binary and variable length in CD and DVD. The main objective of this work is to evaluate the performance of multilevel (M-ary) runlength-limited (RLL) coded sequences in optical data storage. First, the waterfilling capacity of a multilevel optical recording channel (M-ary ORC) is derived and evaluated. This provides insight into the achievable user bit densities, as well as a theoretical limit against which simulated systems can be compared. Then, we evaluate the performance of RLL codes on the M-ary ORC. A new channel model that includes the runlength constraint in the transmitted signal is used. We compare the performance of specific RLL codes, namely M-ary permutation codes, to that of real systems using multilevel fixed-length marks for recording and the theoretical limits. The Viterbi detector is used to estimate the original recorded symbols from the readout signal. Then, error correction is used to reduce the symbol error probability. We use a combined ECC/RLL code for phrase encoding. We evaluate the use of trellis coded modulation (TCM) for amplitude encoding. The detection of the readout signal is also studied. A post-processing algorithm for the Viterbi detector is introduced, which ensures that the detected word satisfies the code constraints. Specifying the codes and detector for the M-ary ORC gives a complete system whose performance can be compared to that of the recently developed systems found in the literature and the theoretical limits calculated in this research.
APA, Harvard, Vancouver, ISO, and other styles
41

Cai, Jianfei. "Robust error control and optimal bit allocation for image and video transmission over wireless channels /." free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3052158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Pujaico, Rivera Fernando 1982. "Algoritmos de decodificação abrupta para códigos LDGM." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259244.

Full text
Abstract:
Advisor: Jaime Portugheis
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Resumo (translated): Since Gallager introduced the hard-decision Bit-Flipping (BF) decoding algorithm for Low-Density Parity-Check (LDPC) codes, two other variants were proposed by Sipser and Spielman for the codes known as expander codes. Later, a soft-decision version of BF decoding, known as Modified Weighted BF (MWBF) decoding, was investigated. This thesis proposes modified versions of the Sipser and Spielman algorithms. Simulation results for long systematic Low-Density Generator Matrix (LDGM) codes showed better performance for the proposed versions. Additionally, for moderate-length LDGM codes, simulation results showed performance similar to MWBF decoding, with the advantage of not requiring floating-point operations.
Abstract: Since Gallager introduced Bit-Flipping (BF) decoding with hard decisions for Low-Density Parity-Check (LDPC) codes, two other variants were proposed by Sipser and Spielman for expander codes. Later, a soft-decision version of BF decoding, known as Modified Weighted BF (MWBF) decoding, was investigated. This thesis proposes modified versions of the Sipser and Spielman algorithms. Simulation results for long systematic Low-Density Generator Matrix (LDGM) codes show better performance for the proposed versions. Moreover, for moderate-length systematic LDGM codes, simulation results show performance similar to that of MWBF decoding, with the advantage of not requiring floating-point operations.
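The hard-decision bit-flipping idea referenced in this abstract (Gallager's BF algorithm) can be sketched as follows. The (7,4) Hamming parity-check matrix is only an illustration of the flipping rule; the thesis itself targets LDGM codes and the Sipser-Spielman variants.

```python
import numpy as np

def bit_flip_decode(H, received, max_iters=20):
    """Gallager-style hard-decision bit-flipping decoding.

    Each iteration counts, for every bit, how many parity checks involving
    it are unsatisfied, then flips the bits with the maximum count.
    Decoding stops when all checks are satisfied or after `max_iters`
    iterations. Only integer arithmetic is used (no floating point).
    """
    word = received.copy()
    for _ in range(max_iters):
        syndrome = H @ word % 2              # 1 marks an unsatisfied check
        if not syndrome.any():
            return word                      # valid codeword found
        counts = H.T @ syndrome              # unsatisfied checks per bit
        word[counts == counts.max()] ^= 1    # flip the worst offenders
    return word                              # may still be invalid

# Parity-check matrix of the (7,4) Hamming code (illustrative only).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)            # the all-zero codeword
received = codeword.copy()
received[2] ^= 1                             # single bit error
print(bit_flip_decode(H, received))          # → [0 0 0 0 0 0 0]
```

Note the purely integer update rule, which matches the abstract's point that such decoders avoid floating-point operations, unlike the soft-decision MWBF variant.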
Master's
Telecommunications and Telematics
Master in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
43

Welling, Kenneth. "Coded Orthogonal Frequency Division Multiplexing for the Multipath Fading Channel." International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/608525.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada
This paper presents a mathematical model for Coded Orthogonal Frequency Division Multiplexing (COFDM) in the frequency-selective multipath encountered in aeronautical telemetry. The use of the fast Fourier transform (FFT) for modulation and demodulation is reviewed. Error control coding with interleaving in frequency is able to provide reliable data communications during frequency-selective multipath fade events. Simulations demonstrate that QPSK-mapped COFDM performs well in a multipath fading environment with parameters typically encountered in aeronautical telemetry.
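The FFT-based modulation and demodulation that the paper reviews amounts to an IFFT at the transmitter and an FFT at the receiver, one QPSK symbol per subcarrier. A minimal round-trip sketch of that pair (my own toy illustration: no cyclic prefix, no channel, an assumed Gray-mapped QPSK constellation) is:

```python
import numpy as np

n_subcarriers = 64
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * n_subcarriers)   # two bits per subcarrier

# Gray-mapped QPSK: each bit pair independently selects the I and Q signs.
qpsk = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

tx = np.fft.ifft(qpsk)        # transmit: IFFT = OFDM modulation
rx = np.fft.fft(tx)           # receive:  FFT  = OFDM demodulation

# Hard-decision demapping recovers the original bits exactly.
bits_hat = np.empty_like(bits)
bits_hat[0::2] = (rx.real < 0).astype(int)
bits_hat[1::2] = (rx.imag < 0).astype(int)
print(np.array_equal(bits, bits_hat))   # → True
```

In the paper's setting, a frequency-selective channel would attenuate some subcarriers; interleaving the coded bits across subcarriers is what lets the error control code repair the bits lost on faded tones.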
APA, Harvard, Vancouver, ISO, and other styles
44

Panagos, Adam G. "SIMULATED PERFORMANCE OF SERIAL CONCATENATED LDPC CODES." International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/607472.

Full text
Abstract:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
With the discovery of turbo codes in 1993, interest in developing error control coding schemes that approach channel capacity has intensified. Some of this interest has focused on low-density parity-check (LDPC) codes due to their high performance and reasonable decoding complexity. A great deal of literature has examined the performance of regular and irregular LDPC codes of various rates over a variety of channels. This paper presents simulated performance results for a serial concatenated LDPC coding system on an AWGN channel. Performance and complexity comparisons between this serial LDPC system and typical LDPC systems are made.
APA, Harvard, Vancouver, ISO, and other styles
45

Rahnavard, Nazanin. "Coding for wireless ad-hoc and sensor networks unequal error protection and efficient data broadcasting /." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/26673.

Full text
Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Professor Faramarz Fekri; Committee Member: Professor Christopher Heil; Committee Member: Professor Ian F. Akyildiz; Committee Member: Professor James H. McClellan; Committee Member: Professor Steven W. McLaughlin. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
46

Rice, Michael, and Kenneth Welling. "CODED OFDM FOR AERONAUTICAL TELEMETRY." International Foundation for Telemetering, 2000. http://hdl.handle.net/10150/606496.

Full text
Abstract:
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California
Three Quadrature Phase Shift Keying (QPSK) mapped COFDM systems, representing a continuum of complexity levels, are simulated over an evolving three-ray model of the multipath fading channel with parameters interpolated from actual channel-sounding experiments. The first COFDM system uses coherent QPSK and convolutional coding with interleaving in frequency, channel equalization, and soft-decision decoding; the second uses convolutional coding with interleaving in frequency, Differential Phase Shift Keying (DPSK), and soft-decision decoding; the third system uses a quaternary BCH code with DPSK mapping and Error and Erasure Decoding (EED). All three systems are shown to provide reliable data communication during frequency-selective fade events. Simulations demonstrate that QPSK-mapped COFDM of reasonable complexity performs well in a multipath frequency-selective fading environment with parameters typically encountered in aeronautical telemetry.
APA, Harvard, Vancouver, ISO, and other styles
47

Lucero, Aldo. "Compressing scientific data with control and minimization of the L-infinity metric under the JPEG 2000 framework." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2007. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Altamirano, Carrillo Carlos Daniel. "Avaliação de desempenho de esquemas de modulação e codificação na presença de interferência de co-canal." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/258822.

Full text
Abstract:
Advisor: Celso de Almeida
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Resumo (translated): This work evaluates the effects of co-channel interference on the bit error rate (BER) of wireless digital transmission systems. The system environment considers Gaussian noise (AWGN) channels and Rayleigh fading channels in the presence of a dominant co-channel interferer, where the users employ BPSK and M-QAM modulation schemes as well as error-correcting codes. The error-correcting codes used in systems with bandwidth expansion are convolutional and turbo codes, and in systems without bandwidth expansion they are trellis-coded modulation (TCM) and turbo trellis-coded modulation (TTCM). The effects of co-channel interference on the bit error rate are evaluated by deriving theoretical expressions and through Monte Carlo simulation, varying the channel type and the modulation and coding schemes. This work shows that co-channel interference introduces floors in the bit error rate, that systems without bandwidth expansion are more susceptible to interference, and that error-correcting codes are a good tool for mitigating the effects of co-channel interference.
Abstract: This work evaluates the effects of co-channel interference on the bit error rate (BER) of digital transmission systems. The transmission system considers Gaussian noise (AWGN) channels and Rayleigh fading channels in the presence of a dominant co-channel interferer, where all users employ BPSK and M-QAM modulation and error control coding. For systems with bandwidth expansion, the error control codes considered are convolutional and turbo codes; for systems without bandwidth expansion, trellis-coded modulation (TCM) and turbo trellis-coded modulation (TTCM) are considered. The effects of co-channel interference on the bit error rate are evaluated by deriving theoretical expressions and via Monte Carlo simulation, varying the channel type and the modulation and coding schemes. This work shows that co-channel interference introduces floors in the bit error rate, that systems without bandwidth expansion are more susceptible to interference, and that error control codes are a good tool for mitigating co-channel interference effects.
Master's
Telecommunications and Telematics
Master in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
49

Subramanian, Arunkumar. "Coding techniques for information-theoretic strong secrecy on wiretap channels." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42776.

Full text
Abstract:
Traditional solutions to information security in communication systems act in the application layer and are oblivious to effects in the physical layer. Physical-layer security methods, of which information-theoretic security is a special case, try to extract security from the random effects in the physical layer. In information-theoretic security, there are two asymptotic notions of secrecy: weak and strong secrecy. This dissertation investigates the problem of information-theoretic strong secrecy on the binary erasure wiretap channel (BEWC), with a specific focus on designing practical codes. The codes designed in this work are based on analysis and techniques from error-correcting codes. In particular, the dual codes of certain low-density parity-check (LDPC) codes are shown to achieve strong secrecy in a coset coding scheme. First, we analyze the asymptotic block-error rate of short-cycle-free LDPC codes when they are transmitted over a binary erasure channel (BEC) and decoded using the belief propagation (BP) decoder. Under certain conditions, we show that the asymptotic block-error rate falls according to an inverse square law in block length, which is shown to be a sufficient condition for the dual codes to achieve strong secrecy. Next, we construct large-girth LDPC codes using algorithms from graph theory and show that the asymptotic bit-error rate of these codes follows a sub-exponential decay as the block length increases, which is a sufficient condition for strong secrecy. The secrecy rates achieved by the duals of large-girth LDPC codes are shown to be an improvement over those of the duals of short-cycle-free LDPC codes.
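The coset coding scheme mentioned in this abstract hides the secret message in the choice of coset: the transmitter sends a random word whose syndrome equals the message, so an eavesdropper who sees only part of the word cannot pin down the coset. A toy sketch with an assumed 2x4 parity-check matrix (the dissertation uses duals of LDPC codes, not this matrix) is:

```python
import numpy as np

def coset_encode(message, H, rng):
    """Coset (syndrome) coding sketch for the wiretap channel.

    The secret message selects a coset of the code defined by H: we
    transmit a uniformly random binary vector whose syndrome equals the
    message. Rejection sampling is fine at this toy scale; practical
    schemes solve for the coset representative directly.
    """
    n = H.shape[1]
    while True:
        x = rng.integers(0, 2, size=n)
        if np.array_equal(H @ x % 2, message):
            return x

def coset_decode(x, H):
    # Legitimate receiver: recompute the syndrome to recover the message.
    return H @ x % 2

# Toy parity-check matrix: 2 secret bits carried by 4 transmitted bits.
H = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0]])
rng = np.random.default_rng(42)
m = np.array([1, 0])
x = coset_encode(m, H, rng)
print(coset_decode(x, H))                # → [1 0]
```

The randomness in the encoder is essential: it is what keeps the eavesdropper's conditional distribution over messages close to uniform, which is the strong-secrecy requirement the dissertation analyzes.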
APA, Harvard, Vancouver, ISO, and other styles
50

Mirzaee, Alireza. "Turbo Receiver for Spread Spectrum Systems Employing Parity Bit Selected Spreading Sequences." Thesis, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/20635.

Full text
Abstract:
In spread spectrum systems employing parity bit selected spreading sequences, parity bits generated from a linear block encoder are used to select a spreading code from a set of mutually orthogonal spreading sequences. In this thesis, turbo receivers for SS-PB systems are proposed and investigated. In the transmitter, data bits are first convolutionally encoded before being fed into the SS-PB modulator. In fact, the parity bit spreading code selection technique acts as an inner encoder in this system without allocating any transmit energy to the additional redundancy provided by this technique. The receiver implements turbo processing by iteratively exchanging soft information on the coded bits between a SISO detector and a SISO decoder. In this system, detection is performed by incorporating the extrinsic information provided by the decoder in the last iteration into the received signal to calculate the likelihood of each detected bit in terms of an LLR, which is used as the input to a SISO decoder. In addition, SISO detectors are proposed for MC-CDMA and MIMO-CDMA systems that employ parity bit selected and permutation spreading. In the multiuser scenario, a turbo SISO multiuser detector is introduced for SS-PB systems on both synchronous and asynchronous channels. In such systems, MAI is estimated from the extrinsic information provided by the SISO channel decoder in the previous iteration. SISO multiuser detectors are also proposed for the case of multiple users in MC-CDMA and MIMO-CDMA systems when parity bit selected and permutation spreading are used. Simulations performed for all the proposed turbo receivers show a significant reduction in BER in AWGN and fading channels over multiple iterations.
APA, Harvard, Vancouver, ISO, and other styles